New coalition of leading tech companies aims to steer the direction of AI technology

Four leading companies in the field of artificial intelligence have joined together to form the Frontier Model Forum. Anthropic, Google, Microsoft, and OpenAI have come together to conduct research and establish best practices in the development and control of powerful AI. This initiative is a response to growing public concern and regulatory scrutiny regarding the impact and implications of this technology.

The companies involved have recently introduced generative AI tools capable of producing original content, including images, text, and video, based on patterns learned from existing data. However, these advances have raised concerns about copyright infringement, privacy breaches, and the potential displacement of human workers.

According to Brad Smith, Vice-Chair and President of Microsoft, it is the responsibility of AI technology creators to ensure its safety, security, and human control. He believes that the formation of this forum is a crucial step in advancing AI responsibly and addressing the challenges it presents to humanity.

Membership in the forum is limited to companies that develop large-scale machine-learning models surpassing the capabilities of existing AI models. This indicates that the focus of the forum’s work will primarily be on the risks associated with highly powerful AI, rather than addressing current regulatory concerns related to copyright, data protection, and privacy.

OpenAI is currently under investigation by the US Federal Trade Commission, examining potential privacy and data security violations, as well as the spread of false information. President Joe Biden has expressed his intention to take executive action to promote responsible innovation.

Meanwhile, the leaders of the four participating companies emphasize their commitment to mitigating the risks associated with AI. During a meeting at the White House, they pledged to ensure the safe, secure, and transparent development of AI technology.

Emily Bender, a computational linguist at the University of Washington, is skeptical of the reassurances offered by the companies involved. She argues that their aim is to avoid external regulation and preserve their ability to self-regulate, an arrangement she doubts will work. Instead, she says, the government should enforce regulations that limit what large corporations can do, addressing issues such as data theft, surveillance, and the challenges faced by gig economy workers.

The Frontier Model Forum seeks to promote safety research and facilitate communication between the AI industry and policymakers. Other similar groups have been established in the past, such as the Partnership on AI, which includes Google and Microsoft as founding members. The Partnership on AI is a multi-stakeholder organization that aims to ensure responsible AI usage through collaboration between civil society, academia, and industry.

Denial of responsibility! VigourTimes is an automatic aggregator of Global media. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, and all materials to their authors. For any complaint, please reach us at – [email protected]. We will take necessary action within 24 hours.