Meta Reports Minimal Impact of AI on Election-Related Misinformation Across Its Platforms

By Tanu Chahal

03/12/2024

At the beginning of the year, concerns were raised about the potential misuse of generative AI in spreading propaganda and disinformation during elections worldwide. However, Meta has reported that these fears did not significantly materialize on its platforms, including Facebook, Instagram, and Threads.

According to Meta, an analysis of content related to major elections in the U.S., Bangladesh, Indonesia, India, Pakistan, France, the U.K., South Africa, Mexico, and Brazil, as well as the EU Parliament elections, revealed only limited influence from AI-generated misinformation.

In a blog post, the company stated:
“While there were instances of confirmed or suspected AI use in this context, the volume of such content remained low. Our existing policies and processes effectively mitigated the risks associated with generative AI content. During the election periods for the listed major elections, less than 1% of all fact-checked misinformation on our platforms was related to AI-generated content about elections, politics, or social issues.”

Meta highlighted its proactive measures, noting that its Imagine AI image generator rejected 590,000 requests in the lead-up to the U.S. election. To prevent election-related deepfakes, the tool declined requests to generate images of prominent political figures, including President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden.

The company also addressed coordinated disinformation campaigns, noting that networks attempting to spread propaganda or false information saw only minor efficiency gains from generative AI. Meta explained that its approach focuses on the behavior of accounts rather than the content they post, allowing it to effectively counter these influence operations regardless of whether AI was used.

Additionally, Meta disclosed that it had removed around 20 covert influence operations worldwide as part of its efforts to counter foreign interference. It observed that most of the disrupted networks lacked authentic audiences and often relied on fake likes and followers to create an illusion of popularity.

Meta also pointed out issues on other platforms, noting that Russian-linked false videos about the U.S. election were frequently posted on X (formerly Twitter) and Telegram.

The company emphasized its ongoing commitment to evaluating and updating its policies to address evolving challenges, stating:
“As we reflect on this remarkable year, we will continue reviewing our policies and announce any necessary changes in the coming months.”