UK Tech Czar Sounds Alarm on Cyber Threats in NHS Amidst AI Advances


The UK’s new AI tsar has warned that artificial intelligence could be misused to hack the NHS, posing a threat on the scale of the Covid-19 pandemic. Outlining his priorities for the £100mn task force, Ian Hogarth, chair of the UK government’s “Frontier AI” task force, highlighted the risk of AI being weaponized to disrupt the National Health Service or to carry out a “biological attack.” The task force aims to address these threats and stresses the importance of global collaboration, including with China, on such large-scale risks. Hogarth has appointed AI pioneer Yoshua Bengio and GCHQ director Anne Keast-Butler to the task force’s external advisory board to assist with its safety research on frontier AI models. The £100mn the government has allocated for this work is the largest commitment to AI safety made by any nation-state. Hogarth believes the potential risks posed by AI are comparable to the NHS disruption caused by the Covid pandemic and the WannaCry ransomware attack.

In an interview with the Financial Times, Hogarth expressed concern about the development of AI systems capable of writing code at a superhuman level, because such capability lowers the barrier to cyber attacks and cyber crime. He emphasized that the UK needs to build state capacity to understand and moderate these risks if it is to harness the benefits of AI. His team, modeled on the Covid vaccine task force, is organizing the UK’s first global AI safety summit to bring together state leaders, tech companies, academics, and civil society.

By bringing independent academics into government, Hogarth aims to ensure expert participation in policymaking and regulation of frontier AI. Rather than relying solely on AI companies to evaluate their own work, he believes the state should be an active partner in understanding and mitigating the risks associated with AI.
