AI-Mimicked Biden Voice Leads to $1M FCC Fine for Deceptive Calls

A company that sent deceptive calls to New Hampshire voters using artificial intelligence to mimic President Joe Biden’s voice has agreed to pay a $1 million fine, federal regulators announced Wednesday.

Lingo Telecom, the voice service provider that transmitted the robocalls, reached a settlement with the Federal Communications Commission (FCC), which had initially proposed a $2 million fine. The case is widely regarded as a troubling early example of AI's potential to mislead voters and undermine democracy.

Steve Kramer, the political consultant who masterminded these calls, still faces a proposed $6 million fine from the FCC along with state criminal charges. On January 21, thousands of New Hampshire voters received calls featuring a voice resembling Biden’s, falsely indicating that participating in the state’s presidential primary would prevent them from voting in the November general election.

Kramer, who commissioned a magician and self-proclaimed “digital nomad” to create the recording, previously told The Associated Press that his goal wasn’t to influence the primary outcome. Instead, he aimed to spotlight the potential dangers of AI and urge lawmakers to take action.

If convicted, Kramer could face up to seven years in prison for voter suppression and up to one year for impersonating a candidate.

As part of the settlement, Lingo Telecom agreed not only to the civil fine but also to adhere to strict caller ID authentication and verification requirements to ensure the accuracy of information provided by its customers and upstream providers.

“Every one of us deserves to know that the voice on the line is exactly who they claim to be,” FCC Chairwoman Jessica Rosenworcel said in a statement. “If AI is being used, that should be made clear to any consumer, citizen, and voter who encounters it. The FCC will act when trust in our communications networks is on the line.”

Lingo Telecom did not immediately respond to requests for comment. The company had previously expressed strong disagreement with the FCC's actions, characterizing them as an attempt to impose new rules retroactively.

Nonprofit consumer advocacy group Public Citizen praised the FCC’s actions. Co-president Robert Weissman supported Rosenworcel’s stance, affirming that consumers have a right to know whether they are receiving genuine content or AI-generated deepfakes. Weissman emphasized that the case illustrates the existential threat such deepfakes pose to democracy.

FCC Enforcement Bureau Chief Loyaan Egal highlighted the serious risk posed by the combination of caller ID spoofing and generative AI voice-cloning technology. This risk looms large “whether at the hands of domestic operatives seeking political advantage or sophisticated foreign adversaries conducting malign influence or election interference activities,” he stated.