OpenAI Defamation Lawsuit: Exploring Legal Challenges Surrounding ChatGPT

An AI-generated defamation lawsuit against OpenAI LLC brings legal challenges surrounding the popular program ChatGPT into the spotlight.

ChatGPT’s Alleged Fake Legal Complaint

Georgia radio host Mark Walters filed a lawsuit on June 5, claiming that ChatGPT produced a false legal complaint accusing him of embezzling money from a gun rights group. Walters maintains that he has never been associated with the group and has never faced such accusations. Journalist Fred Riehl, who was using ChatGPT for research, received the fabricated complaint.

Previous Cases of ChatGPT Issues

This is not the first instance of ChatGPT generating misleading information. In April, an Australian mayor announced plans to sue OpenAI over false bribery allegations produced by ChatGPT. Additionally, a New York lawyer faces potential sanctions for filing legal briefs citing fake precedents researched using ChatGPT.

OpenAI’s Liability for ChatGPT’s False Claims

Legal experts believe that Walters’ lawsuit could be the first of many that will explore the legal liability of AI chatbots producing false information. Eugene Volokh, a First Amendment law professor at UCLA, says, “In principle, I think libel lawsuits against OpenAI might be viable,” but he doubts the success of the current lawsuit.

ChatGPT’s Limitations and Disclaimer

OpenAI has acknowledged the limitations of ChatGPT, including its tendency to generate false information, known as "hallucinations." The company has not commented on the lawsuit, but its product carries a disclaimer explaining that ChatGPT's outputs may not always be reliable.

Defamation Laws and Retractions

Defamation attorney Megan Meier notes that defamation laws vary by state, and some require plaintiffs to request a retraction before filing a lawsuit. Walters' lawyer, John Monroe, says he is not aware of any retraction request, nor of a legal requirement to make one.

Section 230 Defense and Generative AI

Section 230 of the Communications Decency Act, a 1996 federal law, shields internet platforms from legal liability for user-generated content. Whether that shield extends to generative AI programs like ChatGPT, however, remains untested in court. Jess Miers, legal counsel at the Chamber of Progress, believes that Section 230 would likely cover generative AI, while others, including Volokh, argue that it would not.

Story at news.bloomberglaw.com – 2023-06-12 09:51:06