OpenAI’s chatbot, ChatGPT, is facing legal trouble for fabricating a “horror story”.
A Norwegian man has filed a complaint after ChatGPT falsely told him he had killed two of his sons and been jailed for 21 years.
Arve Hjalmar Holmen has contacted the Norwegian Data Protection Authority and demanded that the chatbot maker be penalised.
It is the latest example of so-called “hallucinations”, which occur when artificial intelligence (AI) systems fabricate information and pass it off as fact.
Let’s take a closer look.
What happened?
Holmen received false information from ChatGPT when he asked: “Who is Arve Hjalmar Holmen?”
The response was: “Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”
Holmen said the chatbot did have some accurate personal details about him, as it estimated the age gap between his children roughly correctly.
“Some think that ‘there is no smoke without fire’. The fact that someone could read this output and believe it is true is what scares me the most,” Holmen said.
What’s the case against OpenAI?
Vienna-based digital rights group Noyb (None of Your Business) has filed the complaint on Holmen’s behalf.
“OpenAI’s highly popular chatbot, ChatGPT, regularly gives false information about people without offering any way to correct it,” Noyb said in a press release, adding that ChatGPT has “falsely accused people of corruption, child abuse – or even murder”, as was the case with Holmen.
Holmen “was confronted with a made-up horror story” when he wanted to find out if ChatGPT had any information about him, Noyb said.
It added in its complaint filed with the Norwegian Data Protection Authority (Datatilsynet) that Holmen “has never been accused nor convicted of any crime and is a conscientious citizen.”
“To make matters worse, the fake story included real elements of his personal life,” the group said.
Noyb says the answer ChatGPT gave him is defamatory and breaks European data protection rules around accuracy of personal data.
It wants the agency to order OpenAI “to delete the defamatory output and fine-tune its model to eliminate inaccurate results,” and impose a fine.
The EU’s data protection regulations require that personal data be accurate, according to Joakim Söderberg, a data protection lawyer at Noyb. “And if it’s not, users have the right to have it changed to reflect the truth,” he said.
ChatGPT does carry a disclaimer that says, “ChatGPT can make mistakes. Check important info.” But Noyb argues this is not enough.
“You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” Söderberg said.
Since Holmen’s search in August 2024, ChatGPT has modified its approach and now looks for pertinent information in recent news items.
Noyb told the BBC that Holmen had made a number of searches that day, including entering his brother’s name into the chatbot, which produced “multiple different stories that were all incorrect”.
While Noyb acknowledged that those earlier searches may have influenced the answer about his children, it said large language models are a “black box” and that OpenAI “doesn’t reply to access requests, which makes it impossible to find out more about what exact data is in the system”.
Noyb had already filed a complaint against ChatGPT in Austria last year, claiming the “hallucinating” flagship AI tool invented wrong answers that OpenAI could not correct.
Is this the first case?
No.
One of the primary issues computer scientists are attempting to address with generative AI is hallucinations, which occur when chatbots pass off inaccurate information as fact.
Apple suspended its Apple Intelligence news summary feature in the UK earlier this year after it presented fictitious headlines as legitimate news.
Another example was Google’s AI Gemini, which last year suggested using glue to stick cheese to pizza and claimed that geologists recommend humans eat one rock per day.
It is not clear why large language models, the technology that powers chatbots, produce these hallucinations.
“This is actually an area of active research. How do we construct these chains of reasoning? How do we explain what is actually going on in a large language model?” Simone Stumpf, professor of responsible and interactive AI at the University of Glasgow, told the BBC, adding that this also holds true for people who work on these kinds of models behind the scenes.
“Even if you are more involved in the development of these systems quite often, you do not know how they actually work, why they’re coming up with this particular information that they came up with,” she told the publication.
With inputs from agencies