The specter of disinformation has never loomed larger over the world than in 2024, amplified by the unsettling arrival of artificial intelligence (AI).
About half of the world’s population lives in countries holding elections in 2024. Taiwanese voters were the first to cast ballots, on January 13, in a presidential election won by Lai Ching-te of the Democratic Progressive Party.
For months, TikTok users on the island were inundated with misleading videos, many of them attacking candidates who favor maintaining independence from China.
“One of the most important challenges will be to see whether advances, especially in AI, are used at a large enough scale to change the course of the vote. It is a major unknown,” says Julien Nocetti, associate researcher at the Russia/Eurasia Center of the French Institute of International Relations (Ifri).
At stake, he warns, is the “capacity of the democratic model to resist attacks by external actors.”
The context is ripe for disinformation, whose main objective is to stoke dissension over divisive issues (inflation, migration, religion…), especially given deepening political polarization, eroding trust in the media and political leaders, and major conflicts in Ukraine and Gaza.
Generative AI, which makes it easy to create fake images or imitate voices and whose use has become widespread, is a formidable tool.
In recent months, fabricated images of a supposed arrest of Donald Trump have circulated, along with a fake video of President Joe Biden announcing a general mobilization to support the war effort in Ukraine and manipulated audio mimicking leaders such as France’s Emmanuel Macron.
‘Sophisticated’ disinformation?
China and Russia are the two countries that arouse the most suspicion on the international stage.
According to experts, the disinformation campaign in Taiwan ahead of the presidential election was orchestrated by Beijing, which claims the island as an integral part of its territory.
In the United States, which votes in November, the analysis group Insikt anticipates in a December report that “Russia, China, Iran, violent activists and hackers will most likely carry out disinformation campaigns of varying scale and sophistication to shape or disrupt” the elections.
Disinformation undermines the legitimacy of the results through campaigns that discredit candidates, cast doubt on the electoral process and encourage abstention.
The consequences can be dangerous for democracy, as shown by Trump’s recurring accusations of election fraud, which led his supporters to storm the Capitol on January 6, 2021.
According to Julien Nocetti, the European Union could face campaigns in the June elections that “delegitimize cohesion and the European project, as well as support for Ukraine”, as has already happened in recent months.
Meta (Facebook, WhatsApp and Instagram) and the French authorities see the hand of groups close to the Kremlin in the “Doppelgänger” operation, which impersonates media outlets to spread disinformation, particularly against Ukraine.
Paradoxically, some repressive regimes could take advantage of the fight against disinformation to impose measures that violate human rights, warns the World Economic Forum in a recent report.
‘Automation’ of the fight
States are trying to prepare for the fight, but political time moves more slowly than social media and technology.
India’s government has announced a law on the digital sector, but it will not be ready in time for the spring elections.
In the EU, the Digital Services Act imposes obligations on platforms, such as acting “swiftly” to remove content flagged as illegal and suspending users who repeatedly break the rules.
“Useful but limited improvements,” writes researcher Federica Marconi in a study for the Institute of International Affairs and the European Policy Centre published at the end of 2023.
As for the EU’s dedicated law on AI, the first of its kind, it is not expected to come into force before… 2026.
In the United States, Biden signed an executive order in October setting out rules and guidance for technology companies, but there is no binding federal law.
Under pressure to act, the sector’s giants are rolling out new initiatives: Meta now requires disclosure of AI use in advertising, while Microsoft offers a tool that lets candidates authenticate their content with a digital watermark.