Artificial Intelligence

ChatGPT maker investigated by US regulators over AI risks



The risks posed by artificially intelligent chatbots are being officially investigated by US regulators for the first time after the Federal Trade Commission launched a wide-ranging probe into ChatGPT maker OpenAI.

In a letter sent to the Microsoft-backed company, the FTC said it would look at whether people have been harmed by the AI chatbot’s creation of false information about them, as well as whether OpenAI has engaged in “unfair or deceptive” privacy and data security practices.

Generative AI products are in the crosshairs of regulators around the world, as AI experts and ethicists sound the alarm over the enormous amount of personal data consumed by the technology, as well as its potentially harmful outputs, ranging from misinformation to sexist and racist comments.

In May, the FTC fired a warning shot to the industry, saying it was “focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers”.

In its letter, the US regulator asked OpenAI to share internal material ranging from how the group retains user information to steps the company has taken to address the risk of its model producing statements that are “false, misleading or disparaging”.

The FTC declined to comment on the letter, which was first reported by The Washington Post. Writing on Twitter later on Thursday, OpenAI chief executive Sam Altman said it was “very disappointing to see the FTC’s request start with a leak”, adding that the leak “does not help build trust”. He went on: “It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.”

Lina Khan, the FTC chair, on Thursday morning testified before the House judiciary committee and faced strong criticism from Republican lawmakers over her tough enforcement stance.

When asked about the investigation during the hearing, Khan declined to comment on the probe but said the regulator’s broader concerns involved ChatGPT and other AI services “being fed a huge trove of data” while there were “no checks on what type of data is being inserted into these companies”.

She added: “We’ve heard about reports where people’s sensitive information is showing up in response to an inquiry from somebody else. We’ve heard about libel, defamatory statements, flatly untrue things that are emerging. That’s the type of fraud and deception that we’re concerned about.”

Khan was also peppered with questions from lawmakers on her mixed record in court, after the FTC suffered a big defeat this week in its attempt to block Microsoft’s $75bn acquisition of Activision Blizzard. The FTC on Thursday appealed against the decision.

Meanwhile, Republican Jim Jordan, chair of the committee, accused Khan of “harassing” Twitter after the company alleged in a court filing that the FTC had engaged in “irregular and improper” behaviour in implementing a consent order it imposed last year.

Khan did not comment on Twitter’s filing but said all the FTC cares “about is that the company is following the law”.

Experts have been concerned by the huge volume of data being hoovered up by the language models behind ChatGPT. ChatGPT reached more than 100mn monthly active users within two months of its launch. Microsoft’s new Bing search engine, also powered by OpenAI technology, was being used by more than 1mn people in 169 countries within two weeks of its release in January.

Users have reported that ChatGPT has fabricated names, dates and facts, as well as fake links to news websites and references to academic papers, an issue known in the industry as “hallucinations”.

The FTC’s probe digs into the technical details of how ChatGPT was designed, including the company’s work on fixing hallucinations and its oversight of human reviewers, both of which affect consumers directly. The regulator has also asked for information on consumer complaints and on the company’s efforts to assess how well consumers understand the chatbot’s accuracy and reliability.

In March, Italy’s privacy watchdog temporarily banned ChatGPT while it examined the US company’s collection of personal information following a cyber security breach, among other issues. The service was reinstated a few weeks later, after OpenAI made its privacy policy more accessible and introduced a tool to verify users’ ages.

Echoing earlier admissions about the fallibility of ChatGPT, Altman tweeted: “We’re transparent about the limitations of our technology, especially when we fall short. And our capped-profits structure means we aren’t incentivised to make unlimited returns.” However, he said the chatbot was built on “years of safety research”, adding: “We protect user privacy and design our systems to learn about the world, not private individuals.”




