AI luminaries grilled by senators at a hearing on Tuesday agreed that laws are needed to regulate the tech before wider real-world deployments.
OpenAI’s CEO Sam Altman told the Senate Committee on the Judiciary that the US government should consider implementing rules that would require companies to obtain a license to build AI models that have advanced capabilities beyond a specific threshold.
That’s quite a statement from the man who runs OpenAI, which offers various powerful machine-learning models and services, such as ChatGPT. On the one hand, one can see the potential merit in permits; on the other, licensing could lock rivals out of the market. The requirements could prove too onerous for smaller biz to bear, while well-funded outfits like OpenAI continue to thrive.
When Altman was pressed on that licensing threshold, he replied: “I think a model that can persuade, manipulate, or influence a person’s behavior, or a person’s beliefs – that would be a good threshold. I think a model that could help create novel biological agents would be a great threshold – things like that.”
Gary Marcus, professor emeritus of psychology and neural science at New York University, and founder of the startups Robust AI and Geometric Intelligence (the latter since acquired by Uber), highlighted the ways chatbots can trick humans into believing false information or engaging in harmful and risky behavior.
Marcus pointed to an incident in which ChatGPT falsely accused a law professor of sexual harassment, and another in which Snapchat’s AI apparently advised a 13-year-old girl to lie to her parents about a meeting with a 31-year-old man.
Such responses are possible because, while AI can generate coherent text free of grammatical errors, it does not understand language, he opined – it does not know what is real and what is not. Generative AI therefore often fabricates facts and produces inappropriate responses.
People are nevertheless increasingly turning to AI chatbots for information – which has obvious downsides if it results in the spread of misinformation.
“The more that that happens, the more that anybody can deny anything. As one prominent lawyer told me on Friday, defendants are starting to claim that plaintiffs are making up legitimate evidence. These sorts of allegations undermine the abilities of juries to decide what or who to believe, and contribute to the undermining of democracy,” Marcus said.
Christina Montgomery, IBM’s chief privacy and trust officer, urged Congress to establish rules governing the deployment of AI in specific use cases, rather than wide-ranging policies that could restrict innovation. She suggested the US government apply “different rules for different risks.”
“The strongest regulation should be applied to use cases with the greatest risks to people and society … There must be clear guidance on AI uses or categories of AI supported activity that are inherently high risk. This common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts,” she argued in her opening statement.
Who should regulate AI?
Senators leading the hearing of the Subcommittee on Privacy, Technology, and the Law, as well as the three invited speakers, largely supported the idea of creating an institution focused on regulating and auditing AI. They disagreed, however, on who should act as regulators.
Marcus said a neutral international organization made up of researchers might be needed. Some senators, and Altman, talked about setting up a specialized federal agency. Altman said such an agency could be in charge of granting licenses permitting companies to build advanced models with safety guardrails in place, and would have the power to revoke those licenses if model-makers break the rules.
John Kennedy (R-LA) even went so far as to ask Altman whether he was qualified to lead such an agency himself. “I love my current job,” Altman replied.
Josh Hawley (R-MO), however, seemed more in favor of letting litigation, rather than government agencies, drive regulation. “Having seen how agencies work in this government, they usually get captured by the interests that they’re supposed to regulate. They usually get controlled by the people who they’re supposed to be watching,” he lamented.
“I have a different idea. Why don’t we just let people sue you? Why don’t we just make you liable … we can create a federal right of action that will allow private individuals who are harmed by this technology to get into court and to bring evidence into court. And it can be anybody,” he continued.
Hawley said he wanted to make it easier for consumers harmed by AI – through generated medical or election misinformation, say – to launch class-action lawsuits against the companies that built the technology.
Marcus, however, disagreed, arguing the legal system moves too slowly to effectively regulate technology. He added that it’s unclear whether current laws can deal with AI’s impact on issues like copyright and misinformation, or whether companies can be held accountable for what their chatbots utter – or whether they’re protected under US laws that shield platforms from liability for content they host that was created by third parties.
“We get a lot of situations where we don’t really know which laws apply. There will be loopholes – the system is really not well thought through,” he said.
Montgomery was more optimistic about the US government’s ability to keep things in check, and expressed her belief that existing regulators can address current concerns. “I think we don’t want to slow down regulation to address real risks right now. We have existing regulatory authorities in place, who have been clear that they have the ability to regulate in their respective domains.”
Over the last few months, agencies like the Federal Trade Commission and the Department of Justice have issued strongly worded statements warning that they will crack down on AI companies that violate civil rights and consumer protection laws.
Competition and data
Developing state-of-the-art AI systems is difficult, and requires substantial, expensive, and scarce hardware resources.
Many experts are concerned the technology will end up controlled by giant corporations with monopoly power. Altman acknowledged that open source developers build great technology, but said they are generally not capable of building competitive cutting-edge models. If there are only a few major players, he said, AI will be easier to regulate – but there are downsides to such a concentration of power.
“There’s a real risk of a kind of technical technocracy combined with oligarchy where a small number of companies influence people’s beliefs through the nature of these systems … [They] can subtly shape our beliefs and have enormous influence on how we live our lives and having a small number of players do that with data that we don’t even know about. That scares me,” Marcus said.
The data used to train an AI model massively impacts its capabilities, performance, and behavior. Policymakers have proposed auditing training data to better understand models, and to ensure the datasets used are of high quality and do not infringe on copyrighted content.
Privacy is also a top concern. Officials in Italy, France, Spain, and Canada have launched investigations into why ChatGPT’s data trove includes personal information such as phone numbers and addresses.
It’s also unclear how information provided by users – which might itself be sensitive – is used or stored. Samsung, for example, temporarily banned employees from using the chatbot after one worker leaked proprietary source code to it.
Altman said he thought users should be able to opt out of having their data used to train models, and that companies like OpenAI should make it easier for people to delete their data. Last month, the San Francisco-based startup rolled out a feature allowing users to turn off their chat history in ChatGPT, preventing those conversations from being used for training.
Printing press or atomic bomb?
Ultimately, Congress seemed confused on how to think about AI.
“Is it going to be like the printing press that diffused knowledge and power and learning widely across the landscape, that empowered ordinary everyday individuals? That led to greater flourishing, that led above all to greater liberty? Or is it going to be more like the atom bomb – a huge technological breakthrough, but the consequences severe, terrible, and continue to haunt us to this day?” Senator Hawley asked.
“I don’t think any of us in the room know the answer to that question. So I think that answer has not yet been written. And to a certain extent, it’s up to us here and to us as the American people to write the answer,” he added.
Experts in industry and academia, however, are savvier. They agree that AI is advancing rapidly and poses novel societal risks and challenges, and they want regulatory policies that tackle safety issues without hampering research and development.
“I think it’s important that any new approach, any new law, does not stop the innovation from happening with smaller companies, open source models, researchers that are doing work at a smaller scale. That’s a wonderful part of this ecosystem and of America. We don’t want to slow that down,” Altman concluded. ®