ChatGPT's cancer treatment advice carries potential risks

A recent study found that the cancer treatment recommendations produced by ChatGPT, the AI chatbot developed by OpenAI, mixed accurate and false information. Researchers from Brigham and Women’s Hospital, a teaching affiliate of Harvard Medical School, tasked ChatGPT with generating treatment advice aligned with guidelines established by the National Comprehensive Cancer Network (NCCN). While a majority of ChatGPT’s outputs included at least one NCCN-concordant treatment, approximately 34% also contained at least one recommendation that did not conform to the guidelines. Additionally, around 12% of ChatGPT’s responses included outright false information with no basis in accepted cancer treatments.

The study highlighted the danger posed by ChatGPT’s confident, persuasive tone, which can blend accurate and inaccurate information in ways that are difficult to detect. According to Danielle Bitterman, an oncologist in the Artificial Intelligence in Medicine program of the Mass General Brigham health system, this capacity to mislead poses a significant risk. Notably, the researchers found instances where ChatGPT “hallucinated” false cancer treatment recommendations.

This study reinforces concerns voiced by critics, such as Elon Musk, about how rapidly advanced AI tools lacking proper safeguards can spread misinformation. When the researchers prompted ChatGPT for breast, prostate, and lung cancer treatment recommendations, they found that although large language models can encode medical knowledge and reason about diagnoses better than laypeople, the chatbot’s treatment recommendations were often inaccurate. The researchers noted that the hallucinations primarily consisted of recommendations for localized treatment of advanced disease, targeted therapy, and immunotherapy.

OpenAI has acknowledged the limitations of GPT-4, its current model, stating that it is prone to errors and hallucinated responses. The company has urged caution when using language model outputs, particularly in high-stakes contexts, explaining that the exact protocol should match the needs of the specific use case: human review, grounding with additional context, or avoiding high-stakes uses altogether.
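
To make the “human review” safeguard concrete, the sketch below shows one way a human-in-the-loop gate could wrap a chat-completion call so a model’s draft is never used without explicit approval. It is a minimal illustration assuming the OpenAI Python SDK; the model name, prompt, and approval flow are assumptions for the example, not a prescribed or study-endorsed implementation.

```python
# Minimal human-in-the-loop sketch: a model response is treated as an
# untrusted draft and is never acted on until a human reviewer approves it.
# Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; prompt and model name are illustrative.
from openai import OpenAI

client = OpenAI()


def draft_response(question: str) -> str:
    """Ask the model for a draft answer; the output is unverified."""
    completion = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    return completion.choices[0].message.content


def human_review(draft: str) -> bool:
    """Gate: a human must read the draft and explicitly approve it."""
    print("--- MODEL DRAFT (unverified) ---")
    print(draft)
    return input("Approve for use? [y/N] ").strip().lower() == "y"


if __name__ == "__main__":
    draft = draft_response(
        "Summarize NCCN-concordant treatment options for stage II breast cancer."
    )
    if human_review(draft):
        print("Approved by reviewer; safe to forward.")
    else:
        print("Rejected; draft discarded.")
```

The design choice here is simply that the approval step sits between generation and use, so no output reaches a patient-facing workflow without a clinician’s sign-off; in practice, the review step would be far more rigorous than a console prompt.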

The researchers stressed a shared responsibility between developers, who should distribute technologies that do no harm, and patients and clinicians, who must be aware of the limitations of these technologies. ChatGPT has garnered significant attention this year but has faced criticism for exhibiting bias towards liberal political viewpoints. Similar issues have been observed with Google’s chatbot, Bard, which has been known to generate false information in response to user queries. Experts have also voiced concerns about the potential disruptions caused by chatbots and other AI products in future elections, including the upcoming 2024 presidential election.
