Microsoft Warns of Misuse of AI Content to Influence Lok Sabha Election 2024

Updated on April 8, 2024

In a recent report, Microsoft has raised concerns about China’s potential use of artificial intelligence (AI) to influence the upcoming Lok Sabha election in India. The report sheds light on China’s experimentation with AI-generated content, which could affect election outcomes.

Past Instances of AI Misuse In Election Campaigns

China has already mounted an AI-generated disinformation campaign during Taiwan’s presidential election in January 2024, which Microsoft flagged as the first time a state-backed entity used AI-generated content in an attempt to influence a foreign election. The spotlight is now likely to shift to India and other countries holding elections this year.

The tactic involves studying voter divisions and programming AI to generate content on divisive issues, spanning both domestic politics and international affairs.

The Storm-1376 group, also known as Spamouflage or Dragonbridge and operating out of Beijing, has been central to spreading misinformation and meddling in elections through AI-generated content, particularly in Taiwan. The group is known for discrediting reliable information, inflating public discontent on particular topics, and disseminating divisive themes.

Operations and Tactics Used

Terry Gou fake video

Taiwan’s Presidential Election: In January 2024, the Storm-1376 group deployed these tactics during Taiwan’s presidential election. The group tried to influence the vote by posting manipulated audio on YouTube in which candidate Terry Gou appeared to endorse a rival candidate.

AI-Generated Memes and News Anchors: The group produced a range of AI-created memes targeting the winning candidate, William Lai, a pro-sovereignty figure opposed by Beijing. The memes levelled entirely false claims against him.

Fake videos combined with AI-generated TV news anchors, a tactic Iran has also used, made unsubstantiated claims, including accusing Lai of embezzling state funds. The news anchors were created with the CapCut software, which belongs to ByteDance, the Chinese multinational technology company that owns TikTok.

Disinformation and Deepfakes: Microsoft observed China-linked influence operations, including Storm-1376, using AI-generated content to push conspiracy theories. Posts featured AI-generated photos to increase their appeal, demonstrating the group’s capacity to spread disinformation.


What Does the Microsoft Report Say?

Here are the key points from Microsoft’s report:

“With major elections taking place around the world this year, particularly in India, South Korea and the United States, we assess that China will, at a minimum, create and amplify AI-generated content to benefit its interests,” Clint Watts, General Manager of the Microsoft Threat Analysis Center, said in a blog post.

  • AI-Generated Content: China is highly likely to create and distribute AI-generated content through social media channels, strategically crafted to serve its interests in prominent elections, such as the upcoming Lok Sabha elections in India.
  • Low Immediate Impact: Although the chance of this content swaying election outcomes remains low for now, China’s ongoing experimentation with AI-powered memes, videos, and audio may yield more effective content in the future.
  • Unique Methods: The report indicates that China and North Korea continue to target familiar audiences while planning more sophisticated influence techniques based on the outcomes of these campaigns.

Three Target Areas: Chinese state-backed hackers have focused on three particular areas:

  • South Pacific Islands: Intensive targeting of local entities.
  • South China Sea Region: Cyberattacks carried out against regional adversaries.
  • US Defense Industrial Base: Compromising key strategic defence systems.

How Can AI-Generated Content Affect the Lok Sabha Election?

AI-generated content has the potential to significantly impact voter turnout and participation in elections in several ways, posing both direct and indirect challenges to the democratic process.

Direct Impact on Voter Turnout

  • Disinformation and Voter Suppression: AI can be used to spread fake news and misleading messages about when and where to vote, or to discourage voters from going to their polling locations. This amounts to voter suppression, where voters are kept from the polls through misinformation or intimidation.
  • Targeting Specific Voter Groups: Disinformation can be aimed at groups that already face barriers to equal participation in the democratic process. Such selective targeting can further marginalise these groups and depress their turnout and participation in elections.

Indirect Influence on Voter Enrolment

  • Deceptive Content and Misinformation: AI makes it possible to create highly convincing fake media, such as “deepfakes”, which can deceive the public about what candidates have said, their positions on the issues, and even whether certain events occurred.
  • Undermining Election Administration: AI can be used to generate bogus photos and false evidence of misconduct, such as ballot tampering or shredding. This not only erodes public trust in election outcomes but can also fuel threats of violence against electoral officials and deter voters from casting their ballots.

The Threat of AI-Generated Content

AI-generated deepfakes are now so advanced that it is difficult to trace their origin or devise workable countermeasures. Because the technology’s ability to produce high-quality, plausible content that can sway elections and public opinion is still developing, its adoption should be monitored and regulated. It poses a serious challenge to democracy when people can no longer trust what they see or hear.

To curb the spread of AI-based disinformation and deepfakes, governments, tech firms, and the public must stay alert and apply media-literacy education and detection strategies. Relying on trustworthy sources of information, cross-checking questionable content against known reliable sites, and using available verification tools all help, but fighting deepfake content remains a daunting task.


Way Forward

Vigilance is the key factor for the upcoming Lok Sabha elections in India. Detecting and countering purpose-built AI disinformation campaigns is crucial to keeping the democratic process intact. Because elections attract enormous attention and are highly sensitive events, they are especially prone to such strategies.

Thus, the relationship between AI and politics presents both opportunities and threats. It is imperative that tech companies, governments, and citizens stay informed and alert to this kind of abuse of AI technology.
