AI's Dirty Little Secret: The Exploitation of Human Moderators and the AI Outsourcing Paradox

The Hidden Cost of AI

8/3/2023 · 2 min read


Recent reports reveal an alarming pattern of exploitation within the AI ecosystem, highlighting a serious lack of oversight and an urgent need for AI ethics reform. In an ironic twist, AI's promise of efficiency and automation is reportedly pushing underpaid workers to use AI tools themselves, while others endure psychological trauma from exposure to disturbing content.

An investigation into the Kenyan content moderators who worked on OpenAI's language model, ChatGPT, revealed unsettling working conditions. Moderators often confronted hundreds of brutal text passages daily, many containing graphic sexual violence. The toll on their mental health has been profound, leading some to drastically change their lives and lose relationships. The outsourced content moderators, mainly employed by Sama, a California-based data annotation services company, have alleged exploitation, psychological trauma, and sudden dismissal. In response, they are urging the Kenyan government to examine and regulate such practices.

Simultaneously, another report sheds light on a distinct but related concern: some gig workers tasked with training AI models have been found to be outsourcing their assignments to AI. A study from the Swiss Federal Institute of Technology found that roughly a third of the crowd workers it hired likely used AI tools such as ChatGPT to complete their tasks, signaling an unprecedented shift in the AI training ecosystem.

This confluence of human labor exploitation and AI outsourcing raises significant questions about the ethics of the rapidly expanding AI field. These reports starkly illustrate the human toll of the race to perfect AI and expose the industry's tendency to prioritize technological advancement over the well-being of its workforce.

In particular, the Kenyan moderators' case lays bare the emotional and psychological costs of AI development. The graphic and violent content that moderators are forced to review is detrimental to their mental health and, ultimately, to their lives. The absence of adequate psychological support and relatively low wages compound these issues. OpenAI and Sama, despite being at the heart of these allegations, have remained largely unresponsive, further exacerbating the situation.

Meanwhile, the growing trend of AI work being outsourced back to AI reveals a paradoxical cycle within the system. In an attempt to increase efficiency and earnings, gig workers are turning to AI tools to complete their tasks. However, this approach risks amplifying errors in the AI models: inaccurate, machine-generated training data feeds an error-propagation cycle that could degrade the quality of AI systems over time.
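To make that feedback loop concrete, here is a minimal simulation sketch in Python. The accuracy figures and retention factor are illustrative assumptions, not measurements from the reports; only the roughly one-third delegation rate comes from the study cited above. The point is simply to show how label quality can drift downward once AI-generated labels are fed back into training.

```python
# Toy simulation of the error-propagation cycle described above.
# Assumed (illustrative) numbers: humans label with 95% accuracy,
# and a model trained on labels of a given accuracy recovers 98%
# of that accuracy. AI_SHARE reflects the study's rough estimate
# that about a third of annotations were produced with AI tools.

HUMAN_ACCURACY = 0.95    # fraction of labels humans get right (assumption)
MODEL_RETENTION = 0.98   # model keeps 98% of its training-label accuracy (assumption)
AI_SHARE = 1 / 3         # share of "annotations" actually produced by AI

# First-generation model: trained on purely human-made labels.
model_accuracy = MODEL_RETENTION * HUMAN_ACCURACY

for generation in range(1, 6):
    # Blended accuracy of the next training set: human labels plus
    # labels delegated to the latest model.
    label_accuracy = (1 - AI_SHARE) * HUMAN_ACCURACY + AI_SHARE * model_accuracy
    # The next model inherits slightly less than its training-set accuracy.
    model_accuracy = MODEL_RETENTION * label_accuracy
    print(f"generation {generation}: labels {label_accuracy:.3f}, "
          f"model {model_accuracy:.3f}")
```

Even with these generous assumptions, each generation's labels are a little worse than purely human ones, and the model settles below where it started; with harsher (and arguably more realistic) numbers, the decline compounds rather than stabilizes.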

These interconnected issues underscore the urgency of implementing robust AI ethics frameworks, regulating the outsourcing of AI work, and setting industry-wide standards for psychological support for content moderators. They call for a broad reevaluation of the human role within AI and the development of mechanisms to safeguard the people who fill it.

If AI is to achieve its full potential, it must do so in a manner that respects and protects human dignity. It's high time the tech industry recognized and addressed the ethical, psychological, and socioeconomic implications of AI's growth trajectory. It's not just about making AI smarter and more efficient; it's about ensuring that the path to advancement is ethically sound, psychologically safe, and socially responsible.