Kenyan Workers Call for Probe into Disturbing Work Conditions in AI Content Moderation for OpenAI

The petitioners were regularly exposed to harmful content without adequate psychological support

Content moderators from Kenya who helped train OpenAI’s ChatGPT have petitioned the country’s National Assembly, calling for an investigation into the operations of companies like Samasource, which is registered in Kenya and to which big tech companies like Google, Meta, and OpenAI outsource their content moderation and AI work. The petition, shared with MediaNama by digital rights advocate Mercy Sumbi, sheds light on the working conditions of young Kenyan workers employed to label a wide range of internet content as toxic or harmful for ChatGPT training. Samasource, a San Francisco-based company, essentially employs workers to label and filter data and content for big tech companies.

What are the issues raised by Kenyan employees?

The petition reveals significant details about the nature of work Kenyan content moderators have performed to train OpenAI’s AI models since 2021, when the company partnered with Samasource Kenya. The petitioners were engaged on temporary contracts with Sama to train ChatGPT, which involved “reading and viewing material that depicted sexual and graphic violence and categorizing it”. This meant the workers were regularly exposed to content including “acts of bestiality, necrophilia, incestuous sexual violence, rape, defilement of minors, self-harm (e.g. suicide), and murder”, among others.

The petitioners highlight that the nature of the job and the work they undertook were not sufficiently described in their contracts. They were regularly exposed to harmful content without adequate psychological support, and many workers developed “severe mental illnesses including PTSD, paranoia, depression, anxiety, insomnia, sexual dysfunction”. Additionally, when the contract between Sama and OpenAI abruptly ended, the workers were sent home without receiving their pending dues or any medical care for the toll the job had taken on their mental health.


An investigation by Time earlier this year revealed how OpenAI employed Kenyan workers to label tens of thousands of snippets of text from the “darkest recesses of the internet,” depicting violence, hate speech, and sexual abuse. These labeled samples were used to train ChatGPT’s models, helping the chatbot learn to identify and filter such content. The investigation also uncovered that the data labelers employed by Sama for OpenAI were paid low wages, ranging from around $1.32 to $2 per hour, depending on seniority and performance.
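To illustrate the pipeline described above, here is a minimal, purely hypothetical sketch of how human-labeled text samples can be used to train a classifier that flags similar content. This is not OpenAI’s actual system, which relies on far larger neural models and datasets; the example data and the `filter_content` helper below are invented for illustration only.

```python
# Illustrative sketch only: a toy classifier trained on hypothetical
# human-labeled snippets (1 = harmful, 0 = benign). It stands in for the
# idea of the labeling-to-filtering loop, not for OpenAI's production models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled samples produced by human moderators.
texts = [
    "graphic description of violence against a person",
    "a friendly recipe for lentil soup",
    "explicit threat directed at a named individual",
    "review of a new budget smartphone",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

def filter_content(candidates):
    """Return only the candidates the trained model predicts as benign."""
    predictions = classifier.predict(candidates)
    return [text for text, label in zip(candidates, predictions) if label == 0]

# Example usage on new, unseen snippets.
print(filter_content(["violent threat against a user", "the weather looks pleasant today"]))
```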


The petitioners emphasize that the outsourcing model employed by big tech companies from the US often undermines Kenyan citizens’ protection against exploitation and fails to provide safe employment conditions. They have also complained that the workers are paid poorly and are mostly “disposed of at will”.

Why it matters:

The petition surfaces issues around the fast-paced deployment of AI that remain overlooked in narratives restricted to the benefits of algorithm-based tools and the harms they pose to end users. The working conditions highlighted by the Kenyan workers, and the way their rights are affected in the process of developing AI, are central to the debate on ensuring accountability from AI developers and the companies that deploy their systems. Whether the involvement is direct or indirect, one must also question who ultimately benefits from such operations and at what cost. As countries move to regulate AI and AI businesses through risk-based and rights-based approaches, the case put forth by the Kenyan workers points to the areas of intervention needed for a comprehensive regulatory approach.

What are the petitioners asking for?

According to the petition reviewed by MediaNama, the petitioners have asked the National Assembly to:

  1. Investigate the nature of work and working conditions of Kenyan employees at companies like Samasource.
  2. Interrogate the role of the Ministry of Labour in the protection of Kenyan youth working for Sama or other companies on behalf of tech companies outside Kenya.
  3. Make recommendations to prevent the exploitation of workers and propose the withdrawal of licenses of companies that enable the exploitation of Kenyan employees.
  4. Bring in a law to regulate outsourcing of “harmful and dangerous” tech work and to protect workers engaged in such work arrangements.
  5. Amend the country’s Employment Act 2007 to offer protection to workers engaged in outsourced work.
  6. Define exposure to harmful content as an “occupational hazard” in relevant country laws.

Observations by Kenyan Courts in a Complaint against Meta:

In June this year, a Kenyan employment court ordered Meta to provide “proper medical, psychiatric and psychological care” to content moderators in Nairobi who screened content for Facebook, according to a report by the Guardian. While the case dealt with Facebook’s move to declare around 260 such screeners in Nairobi “redundant”, it also reflects growing discontent among workers who underwent traumatic experiences while screening toxic content under tight timelines and without adequate psychological support. According to the report, a Kenyan court has also ruled that Meta, not Sama, was the primary or principal employer of the Nairobi moderators, that Sama acted only as an agent, and that the work these moderators performed was provided by and ultimately served Meta.

