Scrutinizing Microsoft, Google, and OpenAI on the Working Conditions Behind Their AI Technologies

As tech executives flock to Capitol Hill to speak with lawmakers about potential AI regulations this week, they are also being probed on the working conditions of the workers who make ChatGPT, Bard, and Bing possible.

US lawmakers are probing nine tech giants—Microsoft, OpenAI, Anthropic, Meta, Alphabet, Amazon, Inflection AI, Scale AI, and IBM—on the working conditions of data labelers. Data labelers are human workers tasked with labeling training data and rating chatbot responses to ensure AI systems are safe and reliable.

“Despite the essential nature of this work, millions of data workers around the world perform these stressful tasks under constant surveillance, with low wages and no benefits,” wrote a group of lawmakers including Senators Edward Markey, Elizabeth Warren, and Bernard Sanders in a letter to the tech executives on Wednesday (Sept. 13). “These conditions not only harm the workers, they also risk the quality of the AI systems—potentially undermining accuracy, introducing bias, and jeopardizing data protection.”

The letter also brings attention to newer AI startups including Inflection AI, Scale AI, and Anthropic, highlighting a who’s who of the companies shaping AI systems today.

Tech companies have a responsibility to ensure these workers have safe working conditions, fair pay, and protection from unjust disciplinary proceedings, and they must be more transparent about the role these workers play in AI companies, the lawmakers wrote.

The data-labeling workforce behind ChatGPT and Bard

Tech companies tend to outsource data labeling to staffing firms that hire workers outside of the US in countries including Kenya, India, and the Philippines.

AI products are used to automate decision-making processes, and the algorithms behind the products must be taught how to “see” things. For instance, a self-driving car algorithm must be able to differentiate between a pedestrian and a stop sign. The algorithm is trained by data labelers who analyze hours of video content and identify the objects in each frame, as The Financial Times reported. It takes eight hours to annotate one hour of video, according to the FT.

These workers often face harsh conditions. To make ChatGPT safe, Kenyan laborers earning less than $2 per hour had to label content involving sexual abuse, hate speech, and violence, according to a Time investigation. These employees were required to read and label 150 to 250 passages of text, ranging from 100 to more than 1,000 words, during a nine-hour shift. Workers have reported mental trauma from the work, and although wellness sessions are offered, they often find them unhelpful. Data labelers also lack the benefits provided to the tech companies' own employees.

With the continuous release of generative AI products, data labeling remains an ongoing process. The global data annotation and labeling market reached $800 million last year and is projected to reach $3.6 billion by 2027, according to Markets and Markets, a market research firm. As the volume of data requiring labeling increases, data-labeling companies note that workers are specializing in different types of data such as driving or medical information.

Denial of responsibility! Vigour Times is an automatic aggregator of Global media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, and all materials to their authors. For any complaint, please reach us at – [email protected]. We will take necessary action within 24 hours.
