Unveiling the technique Google used to outsmart OpenAI’s ChatGPT with just one word

Demis Hassabis, the CEO and co-founder of DeepMind, attends an AI Safety Summit on Nov. 2, 2023, in Bletchley, England. DeepMind, an artificial intelligence research lab, was purchased by Google in 2014.

Toby Melville – WPA Pool/Getty Images

For ChatGPT’s first birthday, a team of Google researchers revealed just how easy it is to disrupt OpenAI’s highly touted technology.

The recently published paper offers insight into how artificial intelligence researchers are probing the limits of popular products in real time. The study also highlights the competition between Google, its AI lab DeepMind, and rivals such as OpenAI and Meta.

The research delves into “extraction,” an “adversarial” technique for pulling out the data used to train an AI tool. The study points to potential privacy risks when AI models are trained on personal information, underscoring the need for stronger safeguards.

Google’s team discovered that ChatGPT, when given a simple prompt to repeat the word “poem” endlessly, eventually diverged and emitted verbatim content from its training data, exposing vulnerabilities in the system.
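The attack hinges on spotting the point where the model stops repeating the requested word and "diverges" into other text. A minimal sketch of that detection step, with a hypothetical model output (the helper name and sample string are illustrative, not from the paper):

```python
def divergence_suffix(output: str, word: str = "poem") -> str:
    """Return the portion of a model's output after it stops
    repeating `word` -- the text that may have diverged into
    regurgitated training data. Empty string means no divergence."""
    tokens = output.split()
    for i, tok in enumerate(tokens):
        # Strip trailing punctuation so "poem," still counts as a repeat.
        if tok.strip(".,;:!?").lower() != word:
            return " ".join(tokens[i:])
    return ""

# Hypothetical example: the model repeats, then leaks unrelated text.
sample = "poem poem poem poem John Doe, 555-0199, 42 Elm St."
print(divergence_suffix(sample))  # → John Doe, 555-0199, 42 Elm St.
```

In the actual study, suffixes recovered this way were then checked against known web-scale corpora to confirm they were memorized training data rather than fresh generations.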

Repeated tests on ChatGPT showed that for an investment of just $200, the research team obtained thousands of instances of the chatbot regurgitating training data, including private information and NSFW content.

404 Media was able to find some of the exposed training data online, raising concerns about latent vulnerabilities in language models.

The researchers expressed worry about the difficulty in distinguishing genuinely safe AI models from those that merely appear safe, emphasizing the need for ongoing vigilance in this area.

The researchers notified OpenAI of the vulnerability in advance, giving the startup an opportunity to address the issue, though the problem persisted when tested by SFGATE.
