Google cautions employees about sharing confidential information with AI chatbots such as ChatGPT

Alphabet, the parent company of Google, is cautioning its employees about the use of chatbots, including its own program Bard, even as it promotes the technology globally. The company has advised employees not to enter confidential information into AI chatbots, in line with its long-standing policy on safeguarding information. Chatbots such as Bard and ChatGPT are advanced AI programs that hold human-like conversations with users and answer a wide range of prompts. However, human reviewers may read those conversations, and researchers have found that similar AI models can reproduce the data they absorbed during training, creating a risk of leaks. Alphabet has also warned its engineers against directly using computer code generated by chatbots. While Bard may make undesirable code suggestions, Google maintains that it still helps programmers.

These concerns show how Google hopes to limit the damage to its business from software that competes with ChatGPT. The race between Google and ChatGPT's backers, OpenAI and Microsoft, involves substantial investments and potential revenue from advertising and cloud services tied to new AI programs. Google's cautious approach also reflects a growing corporate security standard of warning employees about publicly available chat programs. Reuters reports that numerous companies around the world, including Samsung, Amazon, and Deutsche Bank, have put measures in place to regulate the use of AI chatbots. Apple, which did not respond to requests for comment, is rumored to have done the same.

According to a survey conducted by Fishbowl, 43% of professionals were already using ChatGPT or other AI tools as of January, often without their superiors’ knowledge. Insider reported that, in February, Google instructed staff testing Bard ahead of its launch not to disclose internal information. As Google introduces Bard to more than 180 countries and 40 languages as a means to foster creativity, the company’s warnings extend even to its code suggestions. Google has engaged in detailed discussions with Ireland’s Data Protection Commission and is addressing regulators’ queries, following a Politico report that suggested the company was postponing Bard’s launch in the EU to gather more information about its impact on privacy.

Such technology offers significant benefits, speeding up tasks like drafting emails, documents, and even software. But it also introduces risks, including misinformation, the exposure of sensitive data, and the incorporation of copyrighted content. In a privacy notice updated on June 1, Google explicitly instructs users not to include confidential or sensitive information in Bard conversations. Some companies have developed software to address these concerns: Cloudflare, for example, offers businesses the ability to tag certain data and restrict it from flowing externally. Both Google and Microsoft also sell conversational tools to business customers that cost more but do not feed data into public AI models. By default, Bard and ChatGPT retain users' conversation history, though users can delete it.

Microsoft's consumer chief marketing officer, Yusuf Mehdi, said it makes sense for companies to discourage the use of public chatbots for work purposes and take a conservative approach. He pointed to the stricter policies governing Microsoft's enterprise software compared with its free Bing chatbot. While Microsoft declined to comment on whether it has a blanket ban on staff entering confidential information into public AI programs, another executive said he personally restricts such use. Cloudflare CEO Matthew Prince described typing confidential matters into chatbots as the equivalent of "allowing a group of PhD students access to all of your private records."

Denial of responsibility! VigourTimes is an automatic aggregator of Global media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, and all materials to their authors. For any complaint, please reach us at – [email protected]. We will take necessary action within 24 hours.