Press freedom advocates are urging Apple to ditch an “immature” generative AI system after it incorrectly summarized a BBC news notification to claim that suspected UnitedHealthcare CEO shooter Luigi Mangione had killed himself.
Reporters Without Borders (RSF) said this week that Apple’s AI kerfuffle, which produced the false summary “Luigi Mangione shoots himself,” is further evidence that artificial intelligence cannot reliably produce information for the public. Apple Intelligence, which launched in the UK on December 11, needed less than 48 hours to make the very public mistake.
“This accident highlights the inability of AI systems to systematically publish quality information, even when it is based on journalistic sources,” RSF said. “The probabilistic way in which AI systems operate automatically disqualifies them as a reliable technology for news media that can be used in solutions aimed at the general public.”
Because it isn’t reliably accurate, RSF argued, AI shouldn’t be used for such purposes, and the group asked Apple to pull the feature from its operating systems.
“Facts can’t be decided by a roll of the dice,” said Vincent Berthier, head of RSF’s tech and journalism desk. “RSF calls on Apple to act responsibly by removing this feature.
“The automated production of false information attributed to a media outlet is a blow to the outlet’s credibility and a danger to the public’s right to reliable information on current affairs,” Berthier added.
It’s unknown if or how Apple plans to address the issue. The BBC has filed its own complaint, but Apple declined to comment publicly on the matter to the British broadcaster.
According to the BBC, this doesn’t even appear to be the first time Apple’s AI summaries have falsely reported news. The Beeb pointed to an Apple AI summary from November, shared by a ProPublica reporter, that attributed news of Israeli prime minister Benjamin Netanyahu’s arrest (which hasn’t happened) to the New York Times, suggesting Apple Intelligence might be a serial misreader of the daily headlines.
Google’s AI search results have likewise been tricked into surfacing scam links, and have urged users to glue cheese to pizza and eat rocks.
Berthier stated, “The European AI Act – despite being the most advanced legislation in the world in this area – did not classify information-generating AIs as high-risk systems, leaving a critical legal vacuum. This gap must be filled immediately.”
The Register has reached out to Apple to learn about what it might do to address the problem of its AI jumping to conclusions about the news, and RSF to see if it’s heard from Apple, but we haven’t heard back from either. ®