How Your Name Will Be Stolen Before Your Job, by a Bot

In May, the National Eating Disorders Association removed Tessa, its chatbot, after the bot made controversial recommendations. Users were taken aback when Tessa, designed to support people at risk of eating disorders, instead advised them to lose weight. The nonprofit’s CEO clarified that Tessa was never intended to replace the human connection provided by the organization’s helpline. Which raises the question: why give the bot a human name, if not to imply that connection?

The use of human names for chatbots is on the rise, with bots like Ernie, Claude, and Jasper joining the ranks. The most advanced chatbots, such as ChatGPT and Bard, tend to have clunky or anonymous identities, but customer-service bots are increasingly given real names like Maya, Bo, and Dom. As AI continues to progress, expect a flood of new bots with human names, according to Suresh Venkatasubramanian, a computer-science professor.

These names aim to make bots seem more believable and more real, explains Katy Steinmetz, creative and project director at the naming agency Catchword. In some cases the effect is harmful; more often, human names are simply annoying or mundane marketing tactics meant to shape how consumers perceive a product. Whether or not the future of AI involves bots taking our jobs, it will almost certainly include bots with human names.

The original chatbot, ELIZA, developed in the 1960s, had limited capabilities but still sparked emotional attachment from users. This tendency to project human traits onto computers became known as the ELIZA effect. Over the years, chatbots with names like Parry, Jabberwacky, and A.L.I.C.E. emerged, and a humanoid robot named Sophia was even granted citizenship. Now, as AI advances, a human name is just one more layer of faux humanity added to products that are already anthropomorphic.

Microsoft’s chatbot is officially called Bing Chat, but it initially presented itself as Sydney before that persona was suppressed. ChatGPT’s carefully designed responses, delivered beneath a blinking rectangle as the bot appears to type, create a sense of sentience. These design choices play on a human vulnerability: our readiness to believe that bots are like us.

Names are an effective way to make products feel smarter and more personalized, especially for the customer-service bots that have proliferated since the rise of ChatGPT. Banks have bots named Erica, Sandi, and Amy; White Castle introduced Julia, a voice assistant that takes drive-through orders; and Lufthansa calls its AI Elisa, aiming for a human touch. Research shows that giving a chatbot anthropomorphic features, including a human name, can directly affect transaction outcomes.

The proliferation of chatbots with human names, inspired by the popularity of Amazon’s Alexa, raises concerns. Many of these bots are female-coded and designed to obey commands, though some, such as Anthropic’s Claude, break with the convention of female-named assistants. And unlike a physical device such as an Alexa speaker, a chatbot’s interactions closely mimic human conversation, which poses particular risks for vulnerable users such as children and adults with dementia. If voice assistants have already led people toward dangerous actions, bots that are more human-like and bear human names are likely to backfire further. Giving a device a human name demands careful consideration of the product’s goals and its relationship with users.

Bots like White Castle’s Julia are clearly not sentient, but as customer-service chatbots become more ubiquitous, their forced relatability will grow tiresome. Manipulating users into feeling comfortable with, and emotionally connected to, an inanimate AI tool by way of a human name may not be the best approach. In a field already grappling with ethical dilemmas, refraining from giving every bot a human identity would let the bot’s function stand on its own.

For now, bots with human names are becoming increasingly unavoidable. While Silicon Valley has yet to adopt my name for an AI, it’s only a matter of time before I find myself expressing concerns to an AI-powered Jacob.
