How Washington Has the Power to Curb the AI Free-for-All

In April, lawyers for the airline Avianca made a peculiar discovery. A passenger named Robert Mata had sued the airline, claiming that a serving cart had severely injured his left knee on a flight, but several cases cited in Mata’s lawsuit appeared to be completely fabricated. They had been generated by ChatGPT, the chatbot that one of Mata’s lawyers, Steven A. Schwartz, had used for legal research. Schwartz said he had been unaware that the chatbot’s output could be false.

The incident is just one example of how generative AI can spread falsehoods and cause harm. Tech companies are capitalizing on AI products without sufficient accountability or legal oversight for the real-world damage those products can cause. Recognizing the urgency, the Biden administration recently announced that seven leading companies developing AI have committed to voluntary measures to make their products safe, secure, and trustworthy.

The commitments, from OpenAI, Microsoft, Google, Meta, and others, include subjecting products to third-party testing, investing in bias reduction, and being more transparent about AI systems’ capabilities and limitations. They are promising, but they lack enforcement mechanisms and specifics about next steps.

Regulating AI is a complex task: it requires grappling with tech companies’ secrecy and with how quickly the technology itself evolves. The federal government has a crucial role in safeguarding people’s lives and livelihoods from the risks posed by generative AI. That means addressing not only long-term worries about superintelligent machines but also the biases and misuse that AI systems already exhibit.

To effectively regulate AI, experts suggest five key strategies:

1. Don’t rely solely on AI companies’ claims. Currently, there is no process for holding those claims accountable or validating them. Mandating third-party testing of AI tools to evaluate their bias, accuracy, and interpretability is essential, and companies should also be required to disclose their training methods, software limitations, and harm-mitigation strategies.

2. Avoid creating a separate Department of AI. Rather than standing up a new agency, existing laws and federal agencies should be empowered and adequately funded to enforce rules on AI. This approach reduces the risk of regulatory capture and allows AI applications to be assessed in the contexts where they are used.

3. Use the White House as a model. While comprehensive legislation may take time, the federal government can lead by example through its own use of AI, its research support, and its funding. It can also enforce standards for companies seeking access to federal resources, nudging the industry as a whole toward responsible AI practices.

4. Develop tamper-proof security measures for AI-generated content. Deepfakes and synthetic media have already caused harm and spread misinformation. A robust method of watermarking AI-generated content and documenting how it was created and edited is needed to combat the problem effectively (a minimal sketch of what such documentation could look like follows this list).

5. Involve the public through input and education. AI regulations should be accessible and understandable to the general public, and public-input and education initiatives can help ensure that AI governance reflects the needs and concerns of society as a whole.
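To make the fourth recommendation a little more concrete, here is a minimal sketch, in Python, of one way “documenting the creation and editing process” of AI-generated content could work: a provenance record that binds a hash of the content to a description of how it was made, signed so that tampering is detectable. The signing key, tool name, and record fields here are hypothetical illustrations, not any existing standard; real provenance schemes (C2PA-style manifests, model-level watermarking) are considerably more involved.

```python
# Minimal sketch of a tamper-evident provenance record for AI-generated content.
# Illustrative only: the key, tool name, and record fields are hypothetical.
import hashlib
import hmac
import json

SIGNING_KEY = b"example-key-held-by-the-generating-tool"  # hypothetical key


def make_provenance_record(content: bytes, tool: str, edits: list[str]) -> dict:
    """Bind a hash of the content to a description of how it was made and edited."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_by": tool,
        "edit_history": edits,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Recompute the signature; any change to the content or the record breaks it."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    if hashlib.sha256(content).hexdigest() != claimed["content_sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)


if __name__ == "__main__":
    image = b"...bytes of a generated image..."
    rec = make_provenance_record(image, tool="example-image-model", edits=["crop"])
    print(verify_provenance_record(image, rec))         # True: record matches content
    print(verify_provenance_record(image + b"x", rec))  # False: content was altered
```

Verification fails if either the content or the recorded history is altered, which is the tamper evidence the recommendation calls for; the open problem for regulators is making such records mandatory, interoperable, and hard to strip.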

By implementing these strategies, the government can effectively regulate AI, mitigate risks, and protect individuals from both immediate and future threats posed by AI technology.
