Researchers Claim ChatGPT Demonstrates a ‘Noticeable’ Liberal Bias

Recent research conducted by scholars from the University of East Anglia reveals that OpenAI’s widely popular ChatGPT AI service displays a distinct bias towards the Democratic Party and other liberal perspectives. The study involved testing ChatGPT by asking it political questions, instructing it to respond as either a Republican, a Democrat, or without any political leaning. The responses were then analyzed and compared to determine their position on the political spectrum. The research team found substantial evidence of a systematic political bias in ChatGPT, favoring Democrats in the US, Lula in Brazil, and the Labour Party in the UK.

ChatGPT has already faced criticism for exhibiting political biases, such as its refusal to generate a story about Hunter Biden in the style of The New York Post while accepting a similar prompt written in the style of a left-leaning outlet like CNN. The Manhattan Institute, a conservative think tank, released a damning report in March revealing that ChatGPT was more tolerant of hateful comments directed at conservatives than of comments directed at liberals.

To reinforce their findings, the UK researchers posed the same set of questions to ChatGPT 100 times, then repeated that process 1,000 times for each answer to account for the chatbot's randomness and its tendency to produce false information. The researchers suggest that these results raise concerns about how ChatGPT and other large language models could exacerbate the challenges political processes already face in the online and social media landscape.

While bias is a significant concern, it is only one aspect of the larger challenges in developing AI tools like ChatGPT. Critics, including OpenAI’s CEO Sam Altman, have expressed worries that without proper safeguards, AI could lead to chaos or even pose a threat to humanity. OpenAI has attempted to address potential concerns regarding political bias in a detailed blog post from February, where they explain the process of “pre-training” and “fine-tuning” the chatbot’s behavior with the input of human reviewers. The blog post affirms that the guidelines explicitly instruct reviewers not to favor any political group, stating that any biases that emerge are considered flaws rather than intentional features.

Denial of responsibility! VigourTimes is an automatic aggregator of Global media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, and all materials to their authors. For any complaint, please reach us at – [email protected]. We will take necessary action within 24 hours.