Firm regrets taking Facebook moderation work
Chris Vallance - BBC News
Tue, August 15, 2023
A firm which was contracted to moderate Facebook posts in East Africa has said with hindsight it should not have taken on the job.
Former Kenya-based employees of Sama - an outsourcing company - have said they were traumatised by exposure to graphic posts.
Some are now taking legal action against the firm through the Kenyan courts.
Chief executive Wendy Gonzalez said Sama would no longer take work involving moderating harmful content.
Warning - this article contains distressing content
Some former employees have described being traumatised after viewing videos of beheadings, suicide and other graphic material at the moderation hub, which the firm ran from 2019.
Former moderator Daniel Motaung previously told the BBC the first graphic video he saw was "a live video of someone being beheaded".
Mr Motaung is suing Sama and Facebook's owner Meta. Meta says it requires all companies it works with to provide round-the-clock support. Sama says certified wellness counsellors were always on hand.
Ms Gonzalez told the BBC that the work - which never represented more than 4% of the firm's business - was a contract she would not take again. Sama announced in January that it would end the contract.
"You ask the question: 'Do I regret it?' Well, I would probably put it this way. If I knew what I know now, which included all of the opportunity, energy it would take away from the core business I would have not entered [the agreement]."
She said there were "lessons learned" and the firm now had a policy not to take on work that included moderating harmful content. The company would also not do artificial intelligence (AI) work "that supports weapons of mass destruction or police surveillance".
Wendy Gonzalez said "lessons" had been learned
Citing continuing litigation, Ms Gonzalez declined to say whether she believed the claims of employees who said they had been harmed by viewing graphic material. Asked whether she believed moderation work could be harmful in general, she said it was "a new area that absolutely needs study and resources".
Stepping stone
Sama is an unusual outsourcing firm. From the beginning its avowed mission was to lift people out of poverty by providing digital skills and an income doing outsourced computing tasks for technology firms.
In 2018 the BBC visited the firm, watching employees from low-income parts of Nairobi earn $9 (£7) a day doing "data annotation" - labelling objects in videos of driving, such as pedestrians and street lights, which would then be used to train AI systems. Employees interviewed said the income had helped them escape poverty.
The company still works mainly on similar computer vision AI projects that do not expose workers to harmful content, Ms Gonzalez said.
"I'm super proud of the fact that we've moved over 65,000 people out of poverty," Ms Gonzales said.
It's important, she believes, that African people are involved in the digital economy and the development of AI systems.
Throughout the interview Ms Gonzalez reiterated that the decision to take the work was motivated by two considerations: that moderation was important, necessary work that protects social media users from harm, and that African content should be moderated by African teams.
"You cannot expect somebody from Sydney, India, or the Philippines to be able to effectively moderate local languages in Kenya or in South Africa or beyond," she said.
She also revealed that she had done the moderation work herself.
Moderators' pay at Sama began at around 90,000 Kenyan shillings ($630) per month, a good wage by Kenyan standards, comparable to that of nurses, firemen and bank officers, Ms Gonzalez said.
Asked if she would do the work for that amount of money, she said: "I did do the moderation, but that's not my job in the company."
Training AI
Sama also took on work with OpenAI, the company behind ChatGPT.
One employee, Richard Mathenge, whose job was to read through huge volumes of text the chatbot was learning from and flag anything harmful, spoke to the BBC's Panorama programme. He said he was exposed to disturbing content.
Sama said it cancelled the work when staff in Kenya raised concerns about requests relating to image-based material, which was not covered by the contract. Ms Gonzalez said: "We wrapped up this work immediately."
OpenAI said it has its own "ethical and wellness standards" for its data annotators and "recognises this is challenging work for our researchers and annotation workers in Kenya and around the world".
But Ms Gonzalez regards this type of AI work as another form of moderation - work the company will not take on again.
"We focus on non-harmful computer vision applications, like driver safety, and drones, and fruit detection and crop disease detection and things of that nature," she said.
"Africa needs a seat at the table when it comes to the development of AI. We don't want to continue to reinforce biases. We need to have people from all places in the world who are helping build this global technology."