As the global landscape of artificial intelligence continues to evolve, the United States Federal Communications Commission (FCC) has made a significant move to enhance transparency in political advertising. The FCC has proposed a new regulation that mandates the disclosure of AI-generated content in political ads on television and radio. This initiative aims to prevent misinformation and ensure that voters are fully aware when they are viewing or listening to content created using artificial intelligence.
The proposal, unveiled by FCC Chairwoman Jessica Rosenworcel, stipulates that broadcasters, cable operators, and satellite TV and radio providers must disclose when AI tools, such as deepfake technology or voice cloning, have been used to create political advertisements. This move does not extend to digital and streaming platforms, leaving a considerable portion of online political content unregulated by these specific rules.
“As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” Rosenworcel said. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”
The need for such regulation has become increasingly apparent as AI-generated content becomes more sophisticated and widespread. The ability of AI to create realistic but entirely fabricated audio and video presents a unique challenge in maintaining the integrity of democratic processes. This was highlighted by previous incidents where AI was used in robocalls to impersonate political figures, leading to voter confusion and misinformation.
Under the proposed rules, political advertisers would be required to include clear disclosures on-air and in their written political files, indicating that AI tools were used in the creation of their ads. This regulation would apply to both candidate and issue advertisements. The goal is to add a layer of transparency, allowing voters to critically assess the authenticity of the content they are exposed to during election periods.
Despite the comprehensive nature of this proposal, it has limitations. The exclusion of digital and streaming platforms means that AI-generated content on social media and other online venues will not be subject to these disclosure requirements. This gap highlights the challenge of regulating the rapidly evolving digital advertising space, where much of today’s political advertising occurs.
The proposal marks the second major initiative by the FCC this year to address the use of AI in political communications. Earlier, the commission confirmed that AI voice-cloning tools in robocalls are banned under existing law, following incidents where such technology was used to mislead voters.
The reaction from the tech and political communities has been mixed. Advocates for transparency and voter rights have praised the proposal, seeing it as a necessary step to combat the potential misuse of AI in elections. Critics, however, argue that the exclusion of digital platforms represents a significant loophole that needs to be addressed to ensure comprehensive transparency.
In Australia, the developments in the US are closely watched as similar challenges with AI and political advertising emerge. The Australian Electoral Commission (AEC) has been proactive in monitoring AI technologies’ impact on electoral processes. As AI tools become more prevalent, Australian regulators might consider similar measures to ensure transparency and maintain public trust in the electoral system.