Why We Should Pay Heed to the Technological Pessimists


The writer is the founder of Sifted, a European start-up site backed by the FT

Historically, new technologies have often been met with technophobia. Take the early days of radio, which prompted reports of supernatural phenomena such as talking radiators and stoves. English doctors worried that excessive cycling would damage the nervous system, producing "bicycle face". Similarly, when electronic calculators replaced the slide rule, teachers fretted that our grasp of mathematical concepts would decline, and wondered what would happen when the batteries ran out.

These instances of technophobia can be found in the Pessimists Archive, an admirable collection of historical anxieties about novel inventions. It is fascinating how many past worries about the growing dominance of machines and the obsolescence of humans resonate with our modern concerns about artificial intelligence. It is also reassuring to note how many of these former moral panics turned out to be drastically wrong, appearing almost comical in hindsight.

Naturally, the fact that pessimists were frequently mistaken about the evils of previous technologies does not automatically mean they are wrong about AI today. But it is worth asking whether the latest AI really differs from what came before. Some technologists suggest we would worry less about AI if we demystified the field by renaming it "computational statistics". The Pessimists Archive also highlights that futurists tend to overstate the speed at which most technologies are adopted, while underestimating our capacity to adapt. They can tell us what technologies can accomplish in theory, but not how they will be used in practice.

Philosopher Daniel Dennett, a long-time observer of AI, presents an argument for why AI might be different and why this is a legitimate concern. He claims that AI has the potential to create “counterfeit people” who can pass as real individuals in the digital world. These deepfakes, controlled by influential entities like corporations and governments, pose a significant threat as they can be used to distract, confuse, and erode rational debate. Dennett writes, “Creating counterfeit digital people risks destroying our civilization. Democracy relies on the informed (not misinformed) consent of the governed.”

While Dennett may be alarmist about the extent of this threat, he acknowledges that technology can also offer a solution. Just as we have largely solved the problem of counterfeit money, we may be able to defer or eliminate the threat of counterfeit people. Computer scientists are already developing watermarking techniques, similar to those found on banknotes, to identify AI-generated deepfake content. However, one generative AI company founder I spoke with last week believes that watermarking, while technically feasible, may not be the most effective approach. Attention should also fall on how deepfakes are distributed, which means holding social media companies accountable for verifying user accounts.

Other technologists agree with Dennett that the novelty of AI lies in its ability to blur the line between machines and humans, but argue that this can be a positive development. Many of the problems in our computerised society stem from machines' inflexibility: most computers are deterministic and can only solve problems quantitatively, leaving little room for ambiguity, doubt, or nuance. The latest generative AI models, by contrast, are probabilistic machines trained on much of the human knowledge available on the internet, making them more deeply embedded in human culture. That opens up the possibility of machines addressing problems qualitatively, much as humans do. Neil Lawrence, a professor of machine learning at Cambridge university and the author of an upcoming book making this case, argues that such technology can adapt to us, instead of forcing us to adapt to it as previous technologies have.

Machine adaptability could be particularly valuable when it comes to developing healthcare chatbots or self-driving cars. It is important to consider the viewpoints of optimists in addition to those of pessimists.
