The tech industry is often blamed for eliminating jobs rather than creating them. Interestingly, though, social media giant Facebook recently created thousands of new jobs in the form of content moderators. There remains real doubt, however, about whether these are jobs that people actually want to be doing.
In an announcement made on Facebook in early May, Mark Zuckerberg responded directly to the widespread criticism and horror over recent Facebook Live videos depicting gruesome scenes, including one of a father killing his young daughter. He wrote: “Over the next year, we’ll be adding 3,000 people to our community operations team around the world — on top of the 4,500 we have today — to review the millions of reports we get every week, and improve the process for doing it quickly. These reviewers will also help us get better at removing things we don’t allow on Facebook like hate speech and child exploitation. And we’ll keep working with local community groups and law enforcement who are in the best position to help someone if they need it — either because they’re about to harm themselves, or because they’re in danger from someone else.”
With this statement, Zuckerberg seems to acknowledge that relying on algorithmic or automated content moderation alone is not an option, and that human judgment is needed to decide what is acceptable and what is not. This is a step forward from previous statements in which he insisted Facebook was a technology company, not a media company, and that moderation and curation were therefore not its job. However, many are wondering whether relying on humans to sift through some of society’s most gruesome content on social media is an ethical job to offer.
While Facebook didn’t go into further detail about these roles, The Guardian writes that whether they are designed for full-time employees or contractors, “such work is grueling and, experts say, can lead to psychological trauma.” Quoting a UCLA professor whose research focuses on content moderation on large social platforms, The Guardian went on to say: “People can be highly affected and desensitised. It’s not clear that Facebook is even aware of the long term outcomes, never mind tracking the mental health of the workers.”
There is precedent for the harm this kind of work can cause. Microsoft employees previously sued the company over similar roles, arguing that repeated exposure to horrific scenes, including sexual assaults and violence, had an adverse effect on their mental health, producing symptoms that mirror post-traumatic stress disorder.
There is another issue: content moderation is not always straightforward. As the censorship controversy over Facebook removing (and later reinstating) an iconic Vietnam War photo last year demonstrated, there is a huge grey area around what constitutes appropriate content. As The Guardian wrote: “Beyond the psychological toll moderators face, there’s an enormous burden of judgement: they have to distinguish between child pornography and iconic Vietnam war photos, between the glorification of violent acts and the exposure of human rights abuses. Decisions must be nuanced and culturally contextualised or Facebook will be accused of infringing freedom of speech.”
So what can be done? If algorithmic moderation isn’t good enough, but human moderation can damage the people hired to do it, what good options does Facebook have? One certainty is that Facebook cannot treat human employees as mere productivity machines. There must be recognition of the mental toll this work can take, along with ample resources, time off, and support services for the people hired to do it. Because Facebook has yet to release details about the moderator roles, it remains to be seen whether the company is treating this hiring as seriously and cautiously as it should.