Corporations Uncertain of ChatGPT Safety, While Staff Crave AI Assistance

Justin saw ChatGPT's usefulness firsthand when he used it at work earlier this year. A research scientist at a Boston biotechnology firm, he asked the chatbot to draft a genetic testing protocol, a task that typically takes him hours to do by hand; ChatGPT finished it in seconds. Justin was pleased with the time the AI tool saved him, but his bosses later banned its use over concerns that company secrets could leak. Their caution stemmed from OpenAI's secrecy about the inner workings of its chatbot, and Justin says he understands the reasoning: a tool whose methods are undisclosed warrants care.

Generative AI tools like OpenAI's ChatGPT have shown promise in boosting productivity and finding creative solutions to complex problems in the workplace. But as these tools are built into human resources platforms and other work software, they pose challenges for corporations. Apple, Spotify, Verizon, and Samsung are among the big companies that have restricted or banned employees' use of generative AI tools to safeguard sensitive company and customer information. The chief concern is that employees could inadvertently disclose proprietary code or confidential discussions while seeking help from the chatbots, and executives fear that hackers or competitors could then extract that information by prompting the chatbot for secrets. Computer science experts note, however, that it remains unclear how valid these concerns are.

The rapidly evolving AI landscape has left corporations caught between a fear of missing out and a fear of making mistakes. According to Danielle Benecke, the global head of the machine learning practice at Baker McKenzie, companies want to avoid falling behind but also to avoid rushing into adoption without thoroughly understanding the implications: a delicate balance of being a fast follower while avoiding missteps.

OpenAI’s CEO, Sam Altman, has said the company intends to develop a “supersmart personal assistant for work”: an advanced ChatGPT with extensive knowledge of employees and their workplace, capable of drafting emails and documents in a person’s individual style and providing up-to-date information about the firm. OpenAI declined to comment on the privacy concerns companies have raised, but it has given ChatGPT users a private mode that keeps their conversations out of the model’s training data.

Banning cutting-edge technology in the workplace is nothing new. Companies once prohibited social media platforms over concerns about employee distraction, but as social media went mainstream, those restrictions gradually faded away. Companies likewise hesitated at first to move corporate data to cloud storage before eventually embracing the practice.

Google has found itself on both sides of the generative AI debate. While the company markets its own AI tool, Bard, as a rival to ChatGPT, it also advises caution against sharing confidential information with chatbots. James Manyika, a senior vice president at Google, acknowledges that although large language models like Bard can be valuable for generating new ideas and saving time, they still have limitations around accuracy and bias.

Verizon was among the companies warning its employees against using ChatGPT at work. The company’s chief legal officer, Vandana Venkatesh, explained in a video that Verizon has a responsibility to protect customer information, internal software code, and other proprietary assets, and that it cannot control what happens to such data once it is fed into AI platforms like ChatGPT. Joseph B. Fuller, a professor at Harvard Business School, suggests that companies are hesitant to adopt chatbots because of uncertainty about their capabilities; he predicts that companies may ban ChatGPT only temporarily, until they better understand how it works and can assess the associated risks.

Fuller also predicts that companies will eventually integrate generative AI into their operations to stay competitive with start-ups that leverage these tools; delaying adoption could mean losing business to emerging competitors. HR leaders, caught in the middle of this balancing act, are gradually drawing up guidelines for the use of ChatGPT and similar AI chatbots. Eser Rizaoglu, a senior analyst at Gartner, notes that HR leaders have come to recognize that AI chatbots in the workplace are here to stay.

Companies have taken different approaches to generative AI. Some, like Northrop Grumman and iHeartMedia, have banned it outright, deeming the risks too significant. In client-facing industries such as financial services, companies including Deutsche Bank and JPMorgan Chase have recently blocked the use of ChatGPT. Others, like the law firm Steptoe & Johnson, have set policies dictating when and how generative AI may be used. Donald Sternfeld, the firm’s chief innovation officer, points to cautionary tales such as the New York lawyers who faced repercussions for submitting a ChatGPT-generated legal brief that cited fictitious cases. The tool, Sternfeld emphasizes, is designed to produce an answer even when it lacks the correct information, so thorough human oversight remains crucial given generative AI’s capacity to generate plausible yet inaccurate responses.

Arlene Arin Hahn, global head of the technology transactions practice at White & Case, advises clients to watch developments in generative AI closely and to stay flexible enough to revise their policies accordingly, preserving the ability to adapt to new technology without stifling innovation. Baker McKenzie, an early adopter of ChatGPT, allows employees to use the AI tool for certain tasks, though Benecke stresses the need for human supervision given generative AI’s tendency to produce convincing yet false responses.

While some of the concerns corporations have voiced about ChatGPT are valid, Yoon Kim, an assistant professor at MIT, believes that fears of corporate secrets being revealed may be exaggerated. Nonetheless, companies must strike a balance between embracing generative AI’s potential and implementing safeguards to protect sensitive information.
