Generative AI tools have a wide range of applications, but their misuse has raised serious concerns, including their role in state-led influence campaigns. A recent report from Recorded Future, a Massachusetts-based threat intelligence firm, highlights one such instance where AI voice generation technology was "very likely" utilized in a Russian-linked disinformation campaign.
The campaign, dubbed "Operation Undercut," aimed to erode European support for Ukraine. It produced misleading videos that criticized Ukrainian leaders and questioned the value of military assistance to the country. One video, for instance, claimed that advanced American tanks like the Abrams were ineffective, portraying military aid as futile.
Recorded Future's analysis suggests that the creators of these videos used AI-generated voiceovers to make the content appear authentic. ElevenLabs, a prominent AI voice generation startup, was identified as a likely source of the technology. Researchers corroborated this by running the videos through ElevenLabs' AI Speech Classifier, a tool that detects whether audio was created using the company's technology. Although ElevenLabs was the only vendor named, the report indicated that other commercial AI tools may also have been involved, without specifying which ones.
One notable advantage of AI-generated voices in this campaign was their ability to speak fluently in multiple European languages, including English, German, French, Polish, and Turkish, without noticeable foreign accents. This stood in stark contrast to some videos featuring human voiceovers with clear Russian accents, which were less convincing. AI allowed the campaign to produce multilingual content quickly and at scale.
The campaign was attributed to the Social Design Agency, a Russian organization previously sanctioned by the U.S. government for running fake news websites and using social media to amplify misleading content on behalf of the Russian government. Despite these efforts, Recorded Future concluded that the operation had little impact on public opinion in Europe.
ElevenLabs has faced scrutiny over similar misuse before. In January 2024, its technology was linked to a robocall impersonating President Joe Biden that urged voters to abstain from a primary election. In response, the company introduced safety measures, such as blocking the voices of political figures and strengthening moderation with both automated and human review.
Since its founding in 2022, ElevenLabs has grown rapidly, increasing its annual recurring revenue from $25 million to $80 million in less than a year. The company is backed by notable investors, including Andreessen Horowitz and former GitHub CEO Nat Friedman, and may soon reach a valuation of $3 billion.
While ElevenLabs and similar AI tools offer impressive capabilities, their misuse highlights the ongoing challenges in regulating generative AI technology to prevent harmful applications.