PR Creates Illusion of an ‘AI Apocalypse’

On Tuesday morning, leaders in the field of artificial intelligence once again raised alarms about the dangers their own technology might pose. CEOs, researchers, and other prominent figures, including Sam Altman of OpenAI and Bill Gates, signed a concise statement from the Center for AI Safety declaring that mitigating the risk of extinction from AI should be a global priority on par with other societal-scale risks such as pandemics and nuclear war. The statement, just 22 words long, followed a string of appearances in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI. Speaking before Congress, the European Union, and other bodies, these executives advocated collaboration between industry and governments to address the technology’s potential harms, even as their companies pour billions of dollars into it. Some prominent AI experts and critics, however, remain skeptical of this rhetoric, viewing the proposed regulations as ineffectual and self-serving.

For years, Silicon Valley has disregarded substantial research demonstrating the material harms of AI. Only now, after the release of OpenAI’s ChatGPT and a surge in funding, has the industry taken a visible interest in making AI safe. “This seems like really sophisticated PR from a company that is going full speed ahead with building the very technology that their team is flagging as risks to humanity,” said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project.

Underlying the fear of AI-caused “extinction” is the assumption that AI is on a path to becoming extraordinarily capable, which recasts these companies’ work as the stuff of doomsday. It presents the product as a force powerful enough to wipe out humanity. That assumption doubles as a tacit advertisement: it positions CEOs as wielding a technology as transformative as fire, electricity, nuclear fission, or a pandemic-inducing virus, and suggests to investors that not buying in would be foolish. The posture also serves as a shield against criticism, echoing the crisis-communication tactics of tobacco companies, oil magnates, and Facebook: by asking for regulation themselves, these companies can argue that whatever goes wrong can’t be blamed solely on them.

The supposed AI apocalypse, however, remains in the realm of science fiction. “A fantastical, adrenalizing ghost story is being used to hijack attention around what is the problem that regulation needs to solve,” said Meredith Whittaker, a co-founder of the AI Now Institute. Models such as GPT-4 have improved incrementally on their predecessors, but nothing fundamental has changed in how these systems operate. AI may well reshape fields like medicine and automate away many jobs, but there is no reason to believe that the offerings from companies like Microsoft and Google would bring about the downfall of civilization.

Two weeks before signing the AI-extinction warning, Altman compared his company to the Manhattan Project and himself to Robert Oppenheimer. Before a Senate panel, he stated that regulatory intervention would be critical to mitigating the risks of increasingly powerful AI models. Altman and the senators treated ever-greater AI power, and the risks that come with it, as inevitable; many experts disagree, stressing that AI can harm people at its current level of advancement. The division is not over whether AI is harmful but over which harms matter most. The worry is that the extinction narrative advanced by AI’s own architects distracts from the present-day damage that governments, researchers, and the public have been combatting for years. Those harms fall disproportionately on marginalized communities and are thus easier for many to ignore; a civilizational collapse, by contrast, would hurt everyone.

Discrimination is a real problem with many existing AI products: facial recognition that is racist and misgenders people, biased medical diagnoses, sexist recruiting algorithms. AI should be assumed to be biased until proven otherwise, Cahn argues. Advanced AI models also face accusations of copyright infringement and labor violations tied to their data sets and production processes, and synthetic media is flooding the internet with scams and nonconsensual pornography. By promoting the “sci-fi narrative” of AI, its proponents divert attention from these pressing issues, which could be addressed today, according to Deborah Raji, a fellow at Mozilla who studies algorithmic bias. The risk narrative misleads in another way, too: people heed the warnings of influential figures like Sam Altman even though those figures are detached from the real-life consequences of AI.

Often, though, these AI leaders’ words ring hollow. Shortly after his Senate testimony, Altman suggested in an interview with reporters in London that OpenAI might cease operating in Europe if the EU’s AI regulations proved too stringent. The apparent reversal drew backlash, and Altman later tweeted that OpenAI had no plans to leave Europe. “It sounds like some of the actual, sensible regulation is threatening the business model,” said Emily Bender, a computational linguist at the University of Washington. Asked about Altman’s remarks and OpenAI’s position on regulation, a spokesperson for the company emphasized its commitment to mitigating risks and its collaboration with policymakers, researchers, and users.

This regulatory charade is a well-worn Silicon Valley tactic. After scandals over misinformation and privacy, Mark Zuckerberg testified before Congress, pledging to use Facebook’s tools for good and welcoming appropriate regulation; Meta’s platforms have since failed to effectively combat election and pandemic misinformation. Sam Bankman-Fried likewise called for clear and consistent regulatory guidelines for cryptocurrencies, only for his own crypto firm, FTX, to face serious allegations of financial fraud. Cahn sees the current moves as an attempt by AI companies to distance themselves from platforms like Facebook and Twitter, which have drawn increasing scrutiny for the harms they cause.

Some signatories of the AI-extinction warning genuinely believe that superintelligent machines could pose an existential risk to humanity. Yoshua Bengio, considered a “godfather” of AI, believes the technology has grown so capable that it could trigger a world-ending catastrophe. Dan Hendrycks, the director of the Center for AI Safety, shares these concerns and argues that the ongoing AI arms race must end, with safety taking priority. That leaders of major tech companies signed his center’s warning may indicate genuine concern. Yet even under this charitable interpretation, one has to ask why these companies keep building something they consider so dangerous.

The solutions these companies have proposed for AI’s empirical and fantastical harms alike are vague, and they depart from the well-established body of work on regulating AI. In his testimony, Altman emphasized the need for a new government agency dedicated to AI, a notion Microsoft also supports. But these proposals lack specificity and largely rehash earlier ideas.
