It's a new normal: Thousands of people now have jobs that require them to view graphic and disturbing videos for hours on end. But in this wild, wild west of work, critics say companies need to better understand and support the needs of those in this growing industry.
The role of content moderators was once again put into focus this week following an explosive report by The Verge into the lives of some of these workers for Facebook. The report is just the latest glimpse into the dark underbelly of the internet.
Content moderators typically help companies weed out disturbing content, ranging from suicide and murder videos to conspiracy theories, to make platforms more palatable. The report about Facebook, which cited interviews with a dozen people who do or have done moderation work for the company, said workers are paid $28,800 annually with little upward mobility and few perks. Some reported coping with trauma by getting high on breaks or having sex at the office.
"It's not really clear what the ideal circumstance would be for a human being to do this work," said Sarah T. Roberts, an assistant professor of information studies at UCLA, who has been sounding the alarm about the work and conditions of content moderators for years.
Not much is known about how many workers are tasked with viewing the worst of social media. There is also little understanding of the long-term effects of this kind of work or how to mitigate on-the-job trauma.
The content moderation industry is growing as the platforms do. Some companies have increasingly touted these human workforces as a solution to criticism over inappropriate content, often relying on outsourced workers in a "call center" environment to handle the disturbing tasks. YouTube announced in late 2017 that it would hire 10,000 people to clean up offensive videos after a backlash that included troubling content slipping through its YouTube Kids platform. Facebook has said it has 15,000 workers doing content moderation, nearly double the number it had just last April.
It is hard to track exact numbers because job titles vary across companies, some employees are under nondisclosure agreements, much of the work is outsourced and there tends to be a high turnover rate. But the work is almost always taxing on the workers.
A person who formerly vetted social media content for the news industry spoke to CNN Business on the condition of anonymity about experiencing PTSD as a result of moderation duties. His job required viewing footage such as chemical attacks and bus bombings.
While he was not reviewing content to evaluate whether it went against a platform's user policies, his job was not dissimilar. He had to watch and rewatch violent and disturbing videos to get his job done.
"Every terrible Islamic State video over the last four years that you can think of, I saw it," he said. "The WWI term for PTSD was shell shock, and that's kind of what you suddenly feel."
Despite being "well compensated," the worker said there weren't any resources outside of a standard employee assistance program to help cope with the job's trauma.
"The horrible images that you have to see, money doesn't really enter into it," said the worker, who ultimately took two months off from the job for intensive therapy and is still recovering.
Instead, he suggested that companies better care for workers by distributing the work or limiting the amount of time people spend viewing extreme content. He also said firms could provide experts who understand the trauma and symptoms that can result from exposure to certain types of content.
Roberts says there's still a "reckoning" that needs to happen when it comes to understanding the facets, implications and costs of the job on workers.
"There's really two exit pathways for people who do this work for the most part: burnout and desensitization," she told CNN Business.
While there's not a clear alternative content moderation model for big companies like Facebook and Google, Roberts said there are other approaches being taken. Reddit and Wikipedia have more community-based models while some smaller companies allow moderators to have more responsibility in crafting content moderation policies.
Multiple companies including Facebook are beginning to use automated systems based on artificial intelligence, but they are not yet close to the point where they can replace human judgment.
Kate Klonick, an assistant professor at St. John's University Law School who has studied content moderation from a policy perspective, told CNN Business, "A cost of not over-censoring, and having the nuance, is having humans do this kind of really gross work, at least for now."
"AI is basically the only thing that can save people from this type of stuff, and it's years and years away," she said.
According to The Verge's report, some moderators even began to embrace the views espoused in the conspiracy videos they were reviewing on the platform. Roberts, who has a book out in June on the genesis of content moderation work over the past decade, said that rings true to her.
"I have spoken extensively with a woman who was a MySpace moderator in 2007," Roberts said. "She talked about the propensity to be irrevocably altered based on consumption of what [moderators] were supposedly moderating against."