Apple Intelligence, the tech giant’s new AI-powered feature, has landed in hot water just a week after its UK launch. The BBC has filed a formal complaint after discovering that Apple’s AI-generated summaries attributed a fabricated news story to the broadcaster.
The issue came to light when the AI summarisation tool incorrectly claimed that Luigi Mangione, charged with the murder of UnitedHealthcare CEO Brian Thompson, had died by suicide, and cited the BBC as the source of the false report. In reality, Mangione remains in US custody, and the BBC never published any such article. Understandably frustrated, the BBC emphasised that trust and accuracy are at the heart of its global reputation and that errors like this threaten the credibility it has built.
More missteps with major news outlets
The BBC isn’t the only media outlet caught in the AI crossfire. Reports suggest that Apple Intelligence also misrepresented news content from The New York Times. In one instance, the AI summarised an article with the inaccurate claim that Israeli Prime Minister Benjamin Netanyahu had been “arrested.” While an arrest warrant was issued against Netanyahu by the International Criminal Court (ICC), no such arrest has occurred.
These examples have added fuel to existing concerns about the accuracy of generative AI tools. The potential for misleading summaries, especially when attributed to trusted sources, risks damaging both media credibility and user confidence in AI systems.
The broader problem of AI “hallucinations”
Apple’s troubles highlight a wider challenge plaguing generative AI: hallucinations. This term refers to AI systems generating content that sounds plausible but is factually incorrect. Such incidents aren’t unique to Apple — other AI platforms, including ChatGPT, have similarly struggled with misattributing or decontextualising content.
A recent study by Columbia Journalism School underscored this issue. The research found numerous cases where generative AI tools mishandled block quotes, inaccurately citing trusted outlets like The Washington Post and the Financial Times. These missteps raise serious questions about whether AI is ready to manage something as sensitive as news reporting and summarisation.
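At least in the simplest cases, this kind of misattribution is mechanically checkable: if an AI cites an outlet as the source of a quote, that quote should appear somewhere in the outlet’s published text. The sketch below is a minimal, hypothetical illustration in Python of such a check — it is not Apple’s method or the study’s methodology — using the standard library’s difflib to fuzzy-match a quoted string against a source article. The function name, the example strings, and the 0.9 similarity threshold are all assumptions for illustration.

```python
from difflib import SequenceMatcher

def quote_appears_in_source(quote: str, source_text: str,
                            threshold: float = 0.9) -> bool:
    """Return True if `quote` (or a close variant) occurs in `source_text`.

    Hypothetical sketch: normalise whitespace and case, try an exact
    substring match, then slide a window of the quote's length across
    the source and keep the best fuzzy-match score.
    """
    quote = " ".join(quote.lower().split())
    source = " ".join(source_text.lower().split())
    if quote in source:  # exact match after normalisation
        return True
    window = len(quote)
    # Step by a quarter-window so near-aligned matches are not skipped.
    step = max(1, window // 4)
    best = 0.0
    for start in range(0, max(1, len(source) - window + 1), step):
        score = SequenceMatcher(None, quote,
                                source[start:start + window]).ratio()
        best = max(best, score)
    return best >= threshold

# Invented example: a summary attributes a claim to an article
# that never contained it.
article = ("The International Criminal Court issued an arrest warrant "
           "for Benjamin Netanyahu on Thursday.")
ai_quote = "Netanyahu was arrested on Thursday."

if not quote_appears_in_source(ai_quote, article):
    print("FLAG: quoted text not found in the cited article")
```

A production verification pipeline would of course need access to canonical article archives, paraphrase-aware matching and robust retrieval, but even a naive check like this would flag the kind of fabricated, wrongly attributed claims the study describes.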
A problem Apple must fix
With trust in both AI and digital media already fragile, Apple now faces the challenge of proving that its AI tools can be reliable. The BBC’s swift response highlights how serious the issue is, especially for news organisations whose credibility rests on accuracy.
For Apple, this controversy is a clear signal to refine its AI systems and tackle the hallucination problem head-on. If it fails to do so, its efforts to enhance the user experience with smarter, AI-driven summaries could backfire, eroding trust rather than building it.