Dr. Ellie Cannon Goes Head-to-Head with an AI Chatbot Doctor

Robot doctors are no longer confined to the realm of science-fiction films. With the latest advances in healthcare technology, it is likely that some real-life doctors will become obsolete within the next decade. Recent developments have shown that sophisticated computer programs, powered by artificial intelligence (AI) and known as bots or chatbots, can diagnose major diseases such as cancer and heart disease more quickly and accurately than experienced human consultants.

However, what does this mean for general practitioners (GPs)? Demand for family doctors is higher than ever, with some GPs handling more than 90 appointments a day, well beyond the recommended safety limit. Nearly 1.5 million patients in the UK wait at least a month to see a GP, according to the latest NHS figures. To cope with this demand, GPs are increasingly turning to digital tools such as telephone appointments, photo consultations for skin problems, and video calls. Around a third of GP appointments now take place over the phone.

Virtual appointments are effective and safe, but some physical signs of illness, such as weight loss, can only be spotted by a GP in person. Even so, in-person appointments are unlikely to become the default again. This raises the question: could a computer do just as good a job as a human doctor?

To test this, the Mail on Sunday conducted an experiment pitting its GP columnist, Dr. Ellie Cannon, against the popular AI chatbot ChatGPT. The chatbot had previously passed the exam that US doctors take to qualify, and in one earlier study a panel of healthcare professionals judged its answers to real-life patient questions to be of better quality than doctors' in 80% of cases. For the Mail on Sunday's own test, readers' questions were put to both Dr. Ellie and ChatGPT, and a panel of healthcare professionals scored the answers.

The experiment included questions from readers, such as one about hemorrhoids and complications with a prolapse. While Dr. Ellie provided a comprehensive answer addressing the concerns and providing useful advice on managing the conditions, ChatGPT failed to fully address the patient’s concerns and did not mention the possibility of more serious underlying conditions like cancer. The experts also critiqued ChatGPT’s complicated language and lack of empathy.

In another question, about the side effects of osteoporosis medication, Dr. Ellie and ChatGPT provided similar information and helpful advice. However, ChatGPT advised against stopping the medication, advice the experts noted was incorrect.

Overall, while AI chatbots like ChatGPT can provide accurate and useful information, there are still limitations to their abilities. They often lack empathy and struggle with addressing specific concerns or providing nuanced advice. Currently, they should not be used for diagnosing patients. However, with the rapid advancements in AI, the potential for revolutionizing the healthcare system is vast.
