During recent hearings, Silicon Valley CEOs and US lawmakers expressed a consensus that AI regulation is necessary. Attendees included top leaders in the AI industry such as Mark Zuckerberg of Meta, Elon Musk of X, Jensen Huang and chief scientist William Dally of Nvidia, Sam Altman of OpenAI, Alex Karp of Palantir, and Brad Smith of Microsoft.
Here’s a roundup of what some of these leaders, who aren’t necessarily known for their fondness for regulation, had to say about AI regulation:
Microsoft president Brad Smith
Smith said that licensing is indispensable in high-risk scenarios but acknowledged its limitations. In his view, certain requirements must be met before AI models or applications can be made available, much as one may only drive a car after obtaining a license.
Nvidia’s chief scientist William Dally
Dally mentioned that existing laws and regulations already govern many AI applications and high-risk sectors. He suggested that enhanced licensing and certification requirements could be imposed as needed. He also emphasized the importance of international cooperation for the development of safe and trustworthy AI.
Meta CEO Mark Zuckerberg
In his prepared remarks, Zuckerberg expressed the belief that the government should create regulations that support innovation. He highlighted two crucial aspects of AI: ensuring safety through responsible product development and deployment, and ensuring access to state-of-the-art AI.
X CEO Elon Musk
Musk told reporters that there is a need for a “referee” to ensure the safety of AI. He emphasized the importance of a regulator that can ensure companies take actions that are safe and in the public’s interest.
AI regulation in the US is still far away
It remains unclear when and how the US government will regulate AI companies. Given the time it took for the EU to implement its own AI laws, immediate action seems unlikely. Senators Richard Blumenthal and Josh Hawley recently proposed an AI framework focusing on licensing “high-risk” AI models and establishing an independent body for oversight.
One of the challenges in crafting AI laws is that it is not always possible to explain why an algorithm produces a particular outcome. AI experts therefore suggest that regulation should focus on outcomes and hold responsible parties accountable, for example, when AI hiring tools discriminate against job candidates.