UK Drives for Enhanced AI Transparency: Paving the Path to Evaluate Risks


The United Kingdom is seeking unprecedented access to the internal workings of leading artificial intelligence companies in order to examine the technology behind their models. The UK government is in discussions with OpenAI, Google DeepMind and Anthropic to gain insights into their large language models, such as OpenAI’s ChatGPT. However, the companies are concerned about revealing proprietary information and exposing themselves to cybersecurity threats.

Access to the internal workings of these models is crucial for understanding how the technology functions and for identifying potential risks. For instance, sharing “model weights”, the learned parameters at the core of a large language model, would give outside researchers far deeper insight into how these systems behave. Despite calls for regulation and transparency in AI technology, companies are currently not obligated to disclose these details.

If the United Kingdom successfully convinces these AI companies to grant access, it would mark the first time they have shared such information with any government worldwide. The UK government intends to use the access for research and safety purposes, aligning with its plan to host the world’s first AI safety summit at Bletchley Park in November. However, the specifics of the access and the technical details are yet to be determined.

Anthropic has engaged in discussions about granting the government access to model weights, but has highlighted the security implications and is exploring alternatives such as delivering the model via an API. The government, for its part, wants a deeper level of oversight.
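The distinction matters in practice. In rough terms, API access lets an evaluator submit prompts and read the model’s responses, while access to the weights themselves exposes the parameters that make up the model. The sketch below is a minimal illustration of that difference; the endpoint, model name and weights file are hypothetical and do not describe any company’s actual systems or the government’s plans.

```python
# Illustrative only: hypothetical endpoint, model name and file path;
# not drawn from any company's actual systems or the UK government's plans.
import requests
import torch

# Access mode favoured by the companies: query a hosted model through an API.
# The evaluator sees only inputs and outputs; the weights never leave the provider.
def query_via_api(prompt: str) -> str:
    response = requests.post(
        "https://api.example-ai-lab.com/v1/generate",  # hypothetical endpoint
        json={"model": "frontier-model", "prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]

# Deeper access sought by the government: load shared weights directly,
# which exposes the model's internal parameters for inspection.
def inspect_weights(path: str) -> None:
    state_dict = torch.load(path, map_location="cpu")  # hypothetical weights file
    for name, tensor in state_dict.items():
        print(name, tuple(tensor.shape))
```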

DeepMind has acknowledged the importance of providing access to its models for safety research, although which models will be covered and how the access will work remain unresolved.

OpenAI has not commented on the matter at this time.

According to sources close to the discussions, these companies are not intentionally obstructing access but are cautious due to the complexity of the issue and their legitimate concerns.

The UK government aims to reach an agreement with these companies before the global AI summit, which will take place at the historic Buckinghamshire country estate where Alan Turing and other codebreakers operated during World War II. The summit will bring together world leaders, AI companies, academics, and civil society to discuss the risks associated with the rapid advancement of AI technology, particularly in the areas of cybersecurity and the potential misuse of AI in bioweapon design.

Two insiders revealed that the UK government is actively working towards reaching an agreement with the AI companies to announce at the summit.

The Department for Science, Innovation, and Technology has established the Frontier AI Taskforce, dedicated to AI safety, and has engaged experts to ensure the safe utilization of the technology. The Taskforce has been collaborating with leading AI companies to gain access to their models for research purposes, specifically focusing on safety.

