According To Report, Kenyan Workers Paid Less Than $2 Per Hour By OpenAI To Work On ChatGPT

 


ChatGPT, an artificial intelligence (AI) chatbot, has become a hot topic in the tech world since its debut.

However, an investigative report has detailed the alleged exploitation of the workers hired to improve it.

According to TIME, OpenAI, the creator of ChatGPT, outsourced work to Kenyan laborers earning less than $2 per hour to make the chatbot less toxic.

The workers were employed by Sama, a San Francisco-based firm that hires people in Kenya, Uganda, and India. According to the report, OpenAI’s goal was to feed an AI “labeled examples of violence, hate speech, and sexual abuse, so that [the] tool could learn to detect those forms of toxicity in the wild.”

To obtain those labels, the company sent tens of thousands of text snippets pulled from the dark web to Sama beginning in November 2021.

According to the report, OpenAI signed three contracts with Sama worth a total of about $200,000, billed at a rate of $12.50 per hour per worker — several times what the workers themselves took home.

“Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” said Partnership on AI, a coalition of AI organizations that includes OpenAI, per the outlet. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”

In addition to allegedly being underpaid, the Kenyan workers say that they were “mentally scarred” from the text they had to read through.

“That was torture,” one anonymous Sama worker said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.”

Sama canceled all of its work for OpenAI in February 2022, citing the allegedly traumatic nature of the work. As a result, the two parties agreed that the full $200,000 would not be paid. According to OpenAI, the contracts were worth “around $150,000 over the course of the partnership.”

OpenAI issued a statement regarding the impact of the labeling on the mental health of Sama workers.

“…we take the mental health of our employees and those of our contractors very seriously,” an OpenAI spokesperson stated. “Our previous understanding was that [at Sama] wellness programs and 1:1 counseling were offered, workers could opt out of any work without penalization, exposure to explicit content would have a limit, and sensitive information would be handled by workers who were specifically trained to do so.”
