• The telecoms company that transmitted robo-calls using AI to mimic President Joe Biden’s voice will pay a $1 million fine, the FCC announced on Wednesday.
  • Lingo Telecom targeted New Hampshire voters with calls ahead of the Granite State primary in January, using an AI-cloned recording of the president’s voice to tell people not to vote.

OUR TAKE
By holding Lingo Telecom accountable for its role in transmitting spoofed robo-calls carrying AI-generated messages, the FCC is sending a strong message that election meddling and spoofing techniques will not be tolerated. In the wake of this case, companies offering similar services should be vigilant in preventing their technology from being misused.
— Iydia Ding, BTW reporter

What happened

Lingo Telecom targeted New Hampshire voters with calls ahead of the Granite State primary in January, using an AI-cloned recording of the president’s voice to tell people not to vote. In response, the Federal Communications Commission (FCC) announced Wednesday that the telecoms company, which transmitted the robo-calls mimicking President Biden’s voice, will pay a $1 million fine. In addition to the fine, the voice service provider agreed to implement a compliance programme requiring “strict adherence” to the FCC’s caller ID certification framework, according to the agency’s press release.

Steve Kramer, a veteran Democratic operative who admitted to directing the robo-calls, faces a separate $6 million fine proposed by the FCC. He was also charged in New Hampshire with 26 felony and misdemeanour counts of voter suppression and impersonating a candidate.

New Hampshire Attorney General John Formella said in a statement, “This settlement is a significant victory for election integrity, especially for New Hampshire and the voters it targeted.”

Also read: Congo Telecom educates police cadets on cybersecurity practices

Also read: Awal Telecom & Technology: one of the Libyan internet providers

Why it’s important

By holding Lingo Telecom accountable for its role in transmitting deceptive robo-calls with AI-generated messages, the FCC is sending a strong message that election meddling and deceptive technologies will not be tolerated.

Amid the rapid development of AI in recent years, there has been a surge in lawsuits against developers of large language models. Many of these cases involve technology from companies whose AI chatbots can mimic real people and hold conversations, capabilities that can be exploited for nefarious purposes. Some tech companies have argued that training AI models complies with the fair use doctrine of US law, which allows limited use of copyrighted material.

This suggests that there is still no mature regulatory framework governing the development of artificial intelligence. There are not yet clear limits on the materials AI programs may train on or the ways their output may be used, which is why incidents like this occur from time to time. In this regard, the authorities need to intervene as soon as possible.