Privacy Concerns Arise: OpenAI Faces Complaints About ChatGPT's Hallucinations
A system that cannot deliver accurate and transparent results should not be used to generate data about individuals.
OpenAI's ChatGPT Faces Fresh Legal Concerns Over Privacy
OpenAI's widely used AI bot, ChatGPT, is once again under legal scrutiny due to ongoing privacy issues. A new complaint, spearheaded by the privacy rights group noyb, highlights a significant issue: ChatGPT's tendency to fabricate information and present it as factual.
The complaint centers on an incident in which an unnamed public figure asked ChatGPT for their date of birth and repeatedly received incorrect answers.
For years, ChatGPT and similar generative AI tools have struggled to avoid inaccuracies even in basic conversations. This latest complaint raises the stakes, potentially exposing their makers to legal repercussions for spreading misleading personal data.
What’s ChatGPT Being Called Out For?
Understanding AI Hallucinations and Their Impact
The term "AI hallucination" refers to an incorrect or misleading output generated by an AI model. Large language models are trained on vast datasets, which lets them reproduce or recombine information in seemingly authentic ways. However, they often prioritize plausibility over accuracy.
An infamous example occurred at the launch of Google's chatbot Bard in February 2023, when it falsely claimed that the James Webb Space Telescope had captured the first images of a planet outside our solar system.
These hallucinations range from ChatGPT inventing a public figure's birthday rather than admitting it doesn't know, to far more consequential inaccuracies. That is a problem for a tool marketed as capable of handling basic tasks, and the new complaint seeks legal action to compel the company to address it.
According to Maartje de Graaf, a data protection lawyer from noyb, "It's clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology must adhere to legal requirements, not the other way around."
Why the New Privacy Complaint Matters
The noyb Project: Advocating for Privacy Rights in Europe
The noyb project, cleverly named "none of your business," is a nonprofit dedicated to addressing commercial privacy issues at the European level. As its website states, it focuses on digital privacy violations by companies and corporations.
Since the EU's General Data Protection Regulation (GDPR) took effect in 2018, watchdog groups like noyb have been able to spotlight major privacy violations affecting individuals across Europe.
Non-compliance with the GDPR can lead to substantial fines of up to 4% of a company's global annual turnover. Beyond the penalties, a favorable GDPR ruling could set a precedent for how all generative AI bots are treated, not just leading platforms like ChatGPT.
Addressing the Need for Stricter Regulations on AI and Emerging Technologies
There are certainly stories with more obvious harms than a wrong birthday, such as AI-generated mushroom-foraging guides that could pose serious health risks. But unlike those anonymous ebook creators, OpenAI's ChatGPT is a prominent target for scrutiny, and the new complaint aims to highlight its tendency to spread misinformation and to establish legal standards accordingly.
The debate over regulations in the tech world is ongoing. Silicon Valley has historically embraced an anti-regulation ethos, championed by libertarians, techno-rationalists, and proponents of the "move fast and break things" philosophy.
Your stance on regulations may hinge on your comfort level with the activities of today's data-driven tech giants. Some advocate for a cautious, deliberate approach to technology development.
Among big AI companies, concerns persist about moving too quickly. Following the failed Google Bard launch, reports emerged suggesting that Google had disregarded a risk assessment that flagged the AI tool's dishonesty.