Arve Hjalmar Holmen, a Norwegian citizen, reportedly received a distressing response when he asked ChatGPT about himself. The chatbot returned a fabricated story claiming that Holmen had murdered his children and been imprisoned for the crime. Because the response mixed this false narrative with real details of his personal life, Holmen has filed a formal complaint against OpenAI, the developer of ChatGPT.
People often search their own names online to see what information is available about them. Holmen did the same with ChatGPT, and the result prompted his complaint. The AI-generated response falsely claimed that Holmen had been convicted of murdering his two sons, aged 7 and 10, and of attempting to murder a third son.
None of these events ever occurred; ChatGPT produced a wholly fabricated story and presented it as fact, a phenomenon known as AI "hallucination." Holmen, with the assistance of Noyb, a European digital rights organization, consequently lodged a complaint against OpenAI alleging a violation of the accuracy principle mandated by the EU's General Data Protection Regulation (GDPR).
The complaint highlighted Holmen's distress and the harm the fabricated story could do to his personal life if it circulated in his community. The response was especially concerning because it combined real details of Holmen's life, such as his hometown and the number of children he has, with the falsehoods.
JD Harriman, a partner at Foundation Law Group LLP in Burbank, California, suggested that proving defamation could be difficult for Holmen. Harriman questioned whether AI output should be construed as statements of fact at all, noting that AI systems have repeatedly been shown to produce false information.
Defamation also generally requires publication to a third party, and here the AI delivered its output only to Holmen himself. Harriman noted that if Holmen had distributed the erroneous AI message, he would effectively become the publisher and would, in essence, have to sue himself. Proving negligence could be problematic as well, since AI may not qualify as a legal actor capable of negligence in the way individuals or companies are. Holmen would also need to demonstrate tangible harm, such as financial loss or emotional distress.
Avrohom Gefen, a partner at Vishnick McGovern Milizio LLP in New York, remarked that defamation cases involving AI hallucinations are novel in the U.S., though he pointed to a pending case in Georgia in which a radio host's defamation lawsuit against OpenAI was recently allowed to proceed.
The complaint calls for OpenAI to delete the defamatory output about Holmen and to adjust its model so that it returns accurate information about him, and it asks that a fine be imposed for the alleged infractions of GDPR rules, which require that inaccurate personal data be promptly corrected or deleted.
Harriman emphasized that legal proceedings are inherently complex, citing Ambrose Bierce's quip that one enters litigation as a pig and emerges as a sausage. OpenAI did not respond to Fortune's request for comment.
This article initially appeared on Fortune.com.