President Joe Biden’s administration has brokered a set of AI safeguards agreed to by leading artificial intelligence companies, including Amazon, Google, Meta, Microsoft, and others. The companies have voluntarily committed to ensuring the safety of their AI products before release. The commitments include third-party oversight of commercial AI systems, though the details of who will audit the technology and hold the companies accountable have not been spelled out.
A surge of investment in generative AI tools, which can produce convincingly human-like text and generate new images and other media, has brought both public fascination and concern about their potential to deceive people and spread disinformation. To address major risks to biosecurity and cybersecurity, the four tech giants, along with OpenAI, Anthropic, and Inflection, have committed to security testing carried out in part by independent experts.
The companies have also pledged to establish methods for reporting vulnerabilities in their systems and to use digital watermarking to help distinguish real images from AI-generated ones, known as deepfakes. They have further agreed to publicly disclose flaws and risks in their technology, including effects on fairness and bias.
The voluntary commitments are an immediate measure to address risks while the administration pushes for longer-term legislation to regulate AI. Some advocates, however, argue that more must be done to hold companies accountable for their AI products.
Senate Majority Leader Chuck Schumer has expressed his intention to introduce legislation for AI regulation, highlighting bipartisan interest in the issue. Technology executives have called for regulation and have engaged with government officials to discuss the matter.
Some experts and smaller competitors worry, however, that the proposed regulations could favor larger companies such as OpenAI, Google, and Microsoft, since smaller players may struggle to bear the high cost of bringing their AI systems, known as large language models, into regulatory compliance.
The software trade group BSA, which counts Microsoft among its members, has welcomed the Biden administration’s efforts to establish rules for high-risk AI systems and has expressed willingness to collaborate on legislation that addresses AI risks while promoting its benefits.
Lawmakers around the world, including in the European Union, have been exploring ways to regulate AI. U.N. Secretary-General Antonio Guterres has suggested that the United Nations could be the ideal platform for adopting global AI standards and has appointed a board to explore options for global AI governance. Some countries have also called for a new U.N. body to support global governance of AI, modeled on organizations such as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.
The White House has already consulted with several countries regarding the voluntary commitments made by the companies.