Report: Microsoft and Google chatbots falsely claim cease-fire in Israel-Hamas war

Microsoft and Google’s AI chatbots are producing erroneous information about the Israel-Hamas conflict, including false claims of a cease-fire agreement. Google’s Bard stated that “both sides are committed” to peace, while Bing Chat claimed that “the ceasefire signals an end to the immediate bloodshed.” No such cease-fire has occurred, as Hamas continues to fire rockets into Israel. The errors, first reported by Bloomberg, have raised concerns about the chatbots’ credibility and their potential to confuse the public.

Evacuation in Gaza (Mirrorpix / MEGA)

Despite these errors, Bloomberg noted that the chatbots generally provide balanced and informative responses on this sensitive topic. Bard apologized and retracted its cease-fire claim when questioned, while Bing Chat amended its response. Both Microsoft and Google have acknowledged that their chatbots are experimental and may occasionally provide false information. Critics remain particularly concerned about AI chatbots’ potential to spread misinformation.

Hamas attack last weekend (MOHAMMED SABER/EPA-EFE/Shutterstock)

A Google spokesperson said that Bard and the company’s AI-powered search features are opt-in experiments, with ongoing efforts to improve their quality and reliability. The company said it takes information quality seriously, provides tools to help users assess online information, and continues to make improvements to combat low-quality or outdated responses. Google’s trust and safety teams actively monitor Bard and address issues promptly.

Google Bard is in an experimental phase (Gado via Getty Images)

Microsoft said it investigated the mistakes and will make adjustments to the chatbot. The company has made progress by grounding the chatbot’s responses in text from top search results, and it plans to invest further in improving the system. The Post has reached out to Microsoft for additional comment.

Earlier this year, experts warned that AI-generated “deepfake” content could disrupt the 2024 presidential election if protective measures are not implemented. In a study, British researchers found that Microsoft-backed OpenAI’s ChatGPT generated cancer treatment regimens containing a potentially dangerous mix of accurate and false information.
