OpenAI Faces FTC Probe over Potential Misinformation Risks Posed by ChatGPT
More Scrutiny For OpenAI
The Federal Trade Commission (FTC) has opened an investigation into OpenAI, the Microsoft-backed creator of the artificial intelligence chatbot ChatGPT. It is the first time US regulators have formally assessed the potential risks posed by AI chatbots.
The FTC is examining whether OpenAI has engaged in "unfair or deceptive" data security practices and whether fabricated information generated by ChatGPT has caused harm to individuals. The investigation follows broader industry concern about the vast amounts of personal data these AI systems consume and the potential harm their outputs can cause, including the spread of misinformation and discriminatory remarks.
In May, the FTC signaled a tighter focus on the AI industry, highlighting its concern about the significant impact that companies' use of new generative AI tools could have on consumers.
As part of the investigation, the FTC has asked OpenAI to hand over a wide range of internal material, including its procedures for retaining and using user information and the measures it takes to mitigate the risk of the model generating false or disparaging statements.
Central to the inquiry are concerns about the massive quantities of data ingested by language models such as ChatGPT. Not long after its launch, OpenAI reported more than 100 million monthly active users, while Microsoft's Bing search engine, also powered by OpenAI's technology, was used by over a million people across 169 countries within two weeks of its launch.
Reports of fabricated information—names, dates, facts, and even bogus references to news sites and academic papers, a problem referred to in the industry as "hallucinations"—have raised alarm bells among users.
The FTC's investigation delves into the technical details of ChatGPT's design, including the measures taken to curb these "hallucinations" and the supervision of human reviewers whose decisions directly affect consumers. The inquiry also covers consumer complaints the company has received and OpenAI's efforts to evaluate how well consumers understand the chatbot's accuracy and reliability.
Earlier this year, Italy's privacy watchdog temporarily banned ChatGPT, citing concerns over the collection of personal data following a cybersecurity breach. The ban was lifted a few weeks later after OpenAI made its privacy policy more accessible and introduced an age verification tool.
OpenAI's CEO, Sam Altman, has acknowledged the limitations of ChatGPT, cautioning users not to rely on the AI for crucial tasks. In a recent tweet, Altman conceded that the product can give a "misleading impression of greatness" despite its inherent limitations, indicating that substantial work remains in the areas of robustness and truthfulness.
As AI technology continues to evolve and permeate everyday life, this landmark FTC investigation underscores the growing urgency for robust regulatory oversight to safeguard consumer interests. With implications that extend far beyond OpenAI and ChatGPT, the outcome of this probe could well shape the future of AI regulation.
https://www.ft.com/content/8ce04d67-069b-4c9d-91bf-11649f5adc74