Michigan is joining a state-level effort to curb deceptive uses of artificial intelligence (AI) and manipulated media in elections. The move comes as Congress and the Federal Election Commission continue to debate broader regulations ahead of the 2024 elections.
The legislation, expected to be signed by Gov. Gretchen Whitmer, a Democrat, would require political advertisements airing in Michigan to disclose clearly whether they were created using artificial intelligence. It would also prohibit the use of AI-generated deepfakes within 90 days of an election unless accompanied by a separate disclosure identifying the media as manipulated.
Concerns are growing that AI could be used to mislead voters, impersonate candidates, and undermine the 2024 presidential race at an unprecedented scale and speed. Campaigns and outside groups are already experimenting with the technology to create realistic fake images, videos, and audio clips far faster and more cheaply than before.
Some examples of this include the Republican National Committee’s AI-generated ad depicting a potential future for the United States if President Joe Biden is reelected, and Never Back Down’s imitation of former President Donald Trump’s voice via an AI voice cloning tool.
So far, states such as California, Minnesota, Texas, and Washington have passed laws regulating deepfakes in political advertising, while Illinois, Kentucky, New Jersey, and New York have introduced similar measures.
Under Michigan’s legislation, any person or entity distributing political advertisements for a candidate must clearly state whether the ads were created using generative AI. The disclosure must appear in the same font size as the majority of the text in print ads, or “for at least four seconds in letters that are as large as the majority of any text” in television ads. Deepfake media used within 90 days of an election would additionally require a separate disclaimer informing viewers that the content is manipulated; non-compliance could carry a misdemeanor charge and associated penalties.
At the federal level, there have been bipartisan calls to regulate deepfakes in political advertising, including a recent Senate bill that would ban “materially deceptive” deepfakes. Federal support is also being sought to help states tackle the challenges posed by AI, as each state’s efforts are constrained by federal law and funding allocations.
Social media companies, such as Meta (which owns Facebook and Instagram) and Google, have announced guidelines designed to mitigate the spread of harmful deepfakes, including requirements that political ads created using AI be clearly identified as such.
The debate surrounding AI and political advertising continues to evolve on both the state and federal levels, with ongoing discussions, legislation, and attempts to address potential challenges ahead of the 2024 elections.