Study finds that AI is more human-like than real humans on social media

Artificial intelligence can generate text that appears more human-like than content written by actual humans, according to a new study. Chatbots such as OpenAI’s ChatGPT have gained immense popularity thanks to their ability to hold convincingly human-like conversations based on user prompts. That development has made AI accessible to the general public, who can now converse with AI-powered bots that assist with academic and professional tasks, or even supply dinner recipes.

The researchers, whose study was published by the American Association for the Advancement of Science, focused on OpenAI’s text generator GPT-3. They wanted to know whether people could distinguish disinformation from accurate information presented in the form of tweets, and whether they could tell if a given tweet was written by a human or by AI.

One of the study’s authors, Federico Germani of the Institute of Biomedical Ethics and History of Medicine at the University of Zurich, said the most surprising finding was that participants judged AI-generated tweets to be human-written more often than they did tweets actually written by humans. In other words, AI can convince people it is a real person more effectively than a real person can. Germani called this a fascinating side result of the study.

Given the widespread use of chatbots, experts in the technology industry have raised concerns about the potential for AI to spiral out of control, with some even warning it could lead to the downfall of civilization. Among the chief worries is disinformation: AI could manipulate people by spreading false information online.

To investigate these concerns, the researchers focused on 11 topics prone to disinformation, such as 5G technology and the COVID-19 pandemic, and created false and true tweets using GPT-3 alongside false and true tweets written by humans. They then surveyed 697 participants from countries including the US, UK, Ireland, and Canada, presenting the tweets and asking participants to judge both their accuracy and their origin, whether AI-generated or human-made.

The study highlights how difficult it is to distinguish AI-generated from human-created information, and it underscores the need to critically evaluate the information we receive and to rely on trustworthy sources. The researchers also encourage people to familiarize themselves with emerging technologies in order to understand their potential benefits and drawbacks.

Interestingly, participants were better at identifying disinformation written by humans than disinformation generated by GPT-3. They were also more likely to correctly recognize accurate tweets generated by GPT-3 than accurate tweets written by humans.

The study revealed another notable result: participants’ confidence in their ability to distinguish AI-generated from human-created tweets declined as they progressed through the survey. The researchers suggest several possible explanations, including how convincingly GPT-3 produces human-like text, or participants initially underestimating the AI system’s ability to mimic humans.

The researchers posit that when individuals are confronted with a vast amount of information, they may feel overwhelmed and give up on critically assessing it. Consequently, they might be less motivated to differentiate between synthetic and organic tweets, leading to a decline in confidence regarding their ability to identify synthetic content.

It is also worth noting that GPT-3 sometimes refused to generate disinformation, yet occasionally produced false information even when instructed to create a tweet containing accurate content.

While the study raises concerns about the potential for AI to generate persuasive disinformation, further research is needed to fully understand its real-world implications. Germani suggests conducting extensive studies on social media platforms to observe how people engage with AI-generated information and how these interactions influence individual and public behavior, particularly related to health recommendations.
