In a dramatic turn of events that underscores the growing risks of artificial intelligence in politics, American political consultant Steve Kramer is facing steep legal and financial consequences for orchestrating a deceptive AI-generated robocall campaign. The robocall, which mimicked President Joe Biden's voice, was sent to voters ahead of New Hampshire's presidential primary, urging them not to participate.
The Federal Communications Commission (FCC) has proposed a staggering $6 million fine against Kramer, marking one of the most significant actions against AI misuse in electoral processes. Additionally, Kramer faces 26 criminal charges in New Hampshire, including 13 counts of voter suppression and 13 counts of impersonating a candidate, reflecting the serious implications of his actions.
Kramer’s robocall scandal is a prime example of how advanced AI technologies can be weaponised to influence political outcomes. The AI-generated voice was crafted to sound convincingly like President Biden, exploiting the trust and familiarity voters have with his voice. This tactic was intended to suppress voter turnout among Democrats by sowing confusion and distrust on the eve of the primary.
New Hampshire Attorney General John Formella has been vocal about the necessity of stringent enforcement actions to prevent such technological abuses. “Our enforcement actions are intended to send a strong deterrent signal to anyone who might consider interfering with elections, whether through AI or other means,” he stated.
Jessica Rosenworcel, the FCC Chair, echoed these concerns, highlighting the broader implications for democratic processes worldwide. “The ability of AI to create realistic but false content poses a new threat to the integrity of elections. When a caller sounds like a trusted figure, it can easily mislead people,” she said.
Kramer has previously claimed that his actions were intended to highlight the dangers of AI technology, but he has not publicly commented on the charges. Neither he nor Lingo Telecom, the company that transmitted the robocalls and now faces a proposed $2 million fine, has responded to inquiries.
This incident has sparked a broader conversation about the need for robust regulatory frameworks to govern the use of AI in political contexts. The rapid advancement of generative AI technology, which can produce highly realistic audio, video, images, and text, presents new challenges for maintaining the integrity of electoral systems.
The implications of this case extend beyond the United States, serving as a cautionary tale for democracies globally, including Australia, where similar technologies could be employed to manipulate voter behaviour. It underscores the urgent need for international cooperation and comprehensive legislation to safeguard against the misuse of AI in elections.