ChatGPT, the artificial intelligence chatbot developed by OpenAI, is gaining popularity for offering assistance in areas ranging from meal planning to medical information. However, new research suggests the bot falls short when responding to health crises. A study led by John W. Ayers evaluated ChatGPT's responses to 23 public health questions across categories including addiction, interpersonal violence, mental health, and physical health. While the bot provided evidence-based responses in 91% of cases, it rarely referred questioners to trained professionals for further assistance: only 22% of responses included specific resources to help questioners.
According to Ayers, AI assistants like ChatGPT could reshape how people access health information, but a holistic approach is necessary. ChatGPT is not designed to give medical advice; it is better suited to providing general health information and guidance. Experts argue that AI assistants should refer users to specific resources, bridging the gap between technology and human intervention and promoting better public health outcomes. Ayers suggests that regulators encourage AI companies to promote essential resources and establish public health partnerships, for example by disseminating a database of recommended resources and incorporating it into the fine-tuning of AI responses to public health questions. Castro points to ongoing efforts to develop specialized AI models for medical use and to add guardrails around sensitive health topics.