
Meta and X Approve AI Ads Referencing Nazi War Crimes Ahead of German Elections, Research Finds

Meta rejected five ads for potentially being political content. But the rejections were based on their classification as social issue, electoral, or political ads, not on violations of its policies against hate speech or incitement to violence. In contrast, X did not review or reject any of the test ads, scheduling all of them for immediate publication without further inspection.

Breaches of the EU’s DSA and German national laws

The failure to remove these extremist ads could put both Meta and X in breach of the EU’s Digital Services Act (DSA), which came into effect in 2022. The DSA holds platforms accountable for spreading illegal content and mandates that platforms assess and mitigate risks to fundamental rights, civic discourse, and public security, among others. Article 35 of the DSA obliges platforms to implement “reasonable, proportionate, and effective mitigation measures tailored to the specific systemic risks.”

Peter Hense, founder and partner at Spirit Legal, told ADWEEK that Meta and X have made no efforts to address these risks and are thus in violation of the DSA. “X published an audit report issued by FTI, which states that the platform has done nothing to comply with the DSA in this respect,” he said.

The ads also likely violate German national laws governing hate speech and Nazi-era propaganda. Germany enforces some of the strictest hate speech laws in Europe, particularly concerning content that glorifies Nazi crimes or advocates violence against minorities.

Advertisers are trying to measure their risk

Bill Fisher, senior analyst at Emarketer, said that advertisers will continue to spend wherever the audiences are. However, brands motivated primarily by profit are also aware of the reputational risks tied to advertising on platforms that allow extremist content to flourish, Fisher noted.

Brands still seek assurances that their ads won’t appear alongside harmful content. As Katy Howell, CEO of social media agency Immediate Future, put it: “If platforms can offer assurances that ads will be placed in safe environments, brands are weighing whether it’s worth the risk to continue advertising there.”

As Meta and X embrace right-leaning policy shifts, such as ending third-party fact-checking and loosening speech restrictions, both platforms have leaned on user-generated community notes to moderate content. Ekō argues that this system is fundamentally flawed when it comes to filtering out harmful content.

“By the time the ads are live, no one knows how long they’ll remain up or how many views they’ll get before other checks come into play,” the Ekō spokesperson said.

What happens next?

Ekō has submitted its research to Meta, X, and the European Commission but is still awaiting responses. In the submission to the EU Commission, reviewed by ADWEEK, Ekō stated, “The approval of such extreme content suggests that Meta and X are failing to meet their obligations and may be in breach of EU law.”
