COVID-19 will change how the majority of us live and work, at least in the short term. It’s also creating a challenge for tech companies such as Facebook, Twitter and Google that ordinarily rely on lots and lots of human labor to moderate content. Are A.I. and machine learning advanced enough to help these firms handle the disruption?
First, it’s worth noting that, although Facebook (along with Google and a growing number of other firms) has instituted a sweeping work-from-home policy in order to protect its workers, it initially required the contractors who moderate its content to keep coming into the office. That situation changed only after protests, according to The Intercept.
Now, Facebook is paying those contractors while they sit at home, since the nature of their work (scanning people’s posts for content that violates Facebook’s terms of service) is extremely privacy-sensitive. Here’s Facebook’s statement:
“For both our full-time employees and contract workforce there is some work that cannot be done from home due to safety, privacy and legal reasons. We have taken precautions to protect our workers by cutting down the number of people in any given office, implementing recommended work from home globally, physically spreading people out at any given office and doing additional cleaning. Given the rapidly evolving public health concerns, we are taking additional steps to protect our teams and will be working with our partners over the course of this week to send all contract workers who perform content review home, until further notice. We’ll ensure that all workers are paid during this time.”
Facebook, Twitter, Reddit, and other companies are in the same proverbial boat: There’s an increasing need to police their respective platforms, if only to eliminate “fake news” about COVID-19, but the workers who handle such tasks can’t necessarily do so from home, especially on their personal laptops. The potential solution? Artificial intelligence (A.I.) and machine-learning algorithms meant to scan questionable content and decide whether to remove it.
Here’s Google’s statement on the matter, via its YouTube Creator Blog.
“Our Community Guidelines enforcement today is based on a combination of people and technology: Machine learning helps detect potentially harmful content and then sends it to human reviewers for assessment. As a result of the new measures we’re taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.”
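The hybrid workflow Google describes (a model scores content, high-confidence violations are removed automatically, and borderline cases go to human reviewers) can be illustrated with a minimal sketch. Everything here is hypothetical: the keyword-weight “classifier,” the threshold values, and the function names are stand-ins for the real, far more sophisticated ML systems these companies run.

```python
# Hypothetical sketch of a hybrid moderation pipeline: an automated score
# routes each post to auto-removal, human review, or publication.
# The keyword-based scorer is a toy stand-in for a real ML model.

FLAGGED_TERMS = {"miracle cure": 0.9, "hoax": 0.6, "fake": 0.4}  # illustrative weights

def violation_score(post: str) -> float:
    """Toy stand-in for an ML classifier: return a score in [0, 1]."""
    text = post.lower()
    score = 0.0
    for term, weight in FLAGGED_TERMS.items():
        if term in text:
            score = max(score, weight)
    return score

def triage(post: str, auto_remove_at: float = 0.8, review_at: float = 0.5) -> str:
    """Route a post: 'remove', 'human_review', or 'keep'."""
    score = violation_score(post)
    if score >= auto_remove_at:
        return "remove"          # high confidence: automated removal
    if score >= review_at:
        return "human_review"    # borderline: queue for a human reviewer
    return "keep"

posts = [
    "This miracle cure stops COVID-19!",
    "Experts say the outbreak is a hoax.",
    "Stay home and wash your hands.",
]
for post in posts:
    print(f"{triage(post):>12}: {post}")
```

The key design point is the two thresholds: lowering the human-review cutoff during a staffing crunch (as YouTube describes) means more content gets auto-removed without review, which is exactly why Google warns its systems may flag the wrong videos.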
To be fair, the tech industry has been heading in this direction for some time. Relying on armies of human beings to read through every piece of content on the web is expensive, time-consuming, and prone to error. But A.I. and machine learning are still nascent, despite the hype. Google itself, in the aforementioned blog post, pointed out that its automated systems may flag the wrong videos. Facebook, meanwhile, has faced criticism that its automated anti-spam system is whacking the wrong posts, including some that offer vital information on the spread of COVID-19.
If the COVID-19 crisis drags on, though, more companies will no doubt turn to automation as a potential solution to disruptions in their workflow and other processes. That will force a steep learning curve; again and again, the rollout of A.I. platforms has demonstrated that, while the potential of the technology is there, implementation is often a rough and expensive process—just look at Google Duplex.
Nonetheless, an aggressive embrace of A.I. will also create more opportunities for technologists who have mastered A.I. and machine-learning skills; these folks may find themselves tasked with figuring out how to automate core processes in order to keep businesses running.
Before the virus emerged, Burning Glass (which analyzes millions of job postings from across the U.S.) estimated that jobs involving A.I. would grow 40.1 percent over the next decade. That percentage could rise even higher if the crisis fundamentally alters how people across the world live and work. (The median salary for these positions is $105,007; for those with a PhD, it drifts up to $112,300.)
If you’re stuck at home with some time on your hands, it could be worth exploring online resources for learning more about A.I. For instance, there’s a Google “crash course” in machine learning. Hacker Noon also offers an interesting breakdown of machine learning and artificial intelligence. Then there’s Bloomberg’s Foundations of Machine Learning, a free online course that teaches advanced concepts such as optimization and kernel methods.