
Fear not, White House chatted to OpenAI and pals, and they promised to make AI safe


Seven top AI development houses have promised to test their models, share research, and develop methods to watermark machine-generated content in a bid to make the technology safer, the White House announced on Friday.

Leaders from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI vowed to work toward tackling safety, security, and trust issues in artificial intelligence, we’re told.

“Artificial intelligence offers enormous promise and great risk,” the Biden-Harris administration said [PDF]. “To make the most of that promise, America must safeguard our society, our economy, and our national security against potential risks.

“The companies developing these pioneering technologies have a profound obligation to behave responsibly and ensure their products are safe.”

Those orgs have voluntarily agreed to ensure their products are safe in high-risk areas, such as cybersecurity and biosecurity, before making them generally available, by conducting internal and external security audits. Some of that testing will reportedly be performed by independent experts. They also promised to share best practices for safety and collaborate with other developers on technical solutions to current issues.

AI models are often proprietary, and the companies pledged that if they keep low-level details of their models, such as their neural network weights, secret for safety and/or commercial reasons, they will strive to safeguard that data from being stolen by intruders or sold off by rogue insiders. Essentially, if a model is not to be openly released for whatever reason, it should be kept under lock and key so that it doesn't fall into the wrong hands.

All seven companies will also support a way for users to report vulnerabilities so they can be fixed, and will clearly state their models' capabilities and limitations as well as what they shouldn't be used for.

To try to tackle issues like disinformation and deepfakes, the group promised to develop techniques such as digital watermarking to label AI-generated content. Finally, they all promised to prioritize safety research addressing bias, discrimination, and privacy, and to apply their technology for good – think digging into cancer research and climate change.
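None of the seven has said how such a watermark would actually work. For a flavor of one published approach, below is a minimal toy sketch of the "green list" scheme proposed by Kirchenbauer et al in 2023: the vocabulary is pseudorandomly split at each step, generation quietly favors the "green" half, and a detector flags text with an improbably high green-token rate. The vocabulary, bias strength, and toy uniform sampler here are illustrative assumptions, not any company's production system.

```python
# Toy sketch of a "green list" text watermark (after Kirchenbauer et al, 2023).
# Everything here -- vocabulary, bias strength, the uniform stand-in "model" --
# is an illustrative assumption, not any vendor's actual implementation.
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary
GREEN_FRACTION = 0.5                      # half the vocab is "green" per step
BIAS = 4.0                                # how strongly sampling favors green

def green_list(prev_token: str) -> set[str]:
    """Pseudorandomly partition the vocab, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])

def generate(length: int, watermark: bool = True) -> list[str]:
    """Sample tokens uniformly, optionally up-weighting the green list."""
    rng = random.Random()
    out = ["<start>"]
    for _ in range(length):
        greens = green_list(out[-1])
        weights = [BIAS if tok in greens else 1.0 for tok in VOCAB]
        if not watermark:
            weights = [1.0] * len(VOCAB)
        out.append(rng.choices(VOCAB, weights=weights, k=1)[0])
    return out[1:]

def green_rate(tokens: list[str]) -> float:
    """Detection: watermarked text has far more green tokens than chance."""
    hits = sum(tok in green_list(prev)
               for prev, tok in zip(["<start>"] + tokens, tokens))
    return hits / len(tokens)

print("watermarked:", green_rate(generate(500, watermark=True)))   # ~0.8
print("plain:      ", green_rate(generate(500, watermark=False)))  # ~0.5
```

Unwatermarked text lands near the 50 percent green rate you'd expect by chance, while the biased sampler pushes it toward 80 percent – a statistical signal a detector can spot without ever querying the model itself.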

Talk the talk, walk the walk?

The latest announcement from the White House is pretty weak in terms of real regulation, however.

All of the above is what you would hope these companies are doing on their own anyway. And if they go back on their word, there won't really be any repercussions, given the commitments were made on a voluntary basis.

That said, the White House isn't totally naive. It did note that hard regulations to curb and steer ML systems may be on the horizon.

The US and UK haven't been as heavy-handed as lawmakers in Europe. Neither country has passed any legislation that specifically targets how AI is developed and deployed, unlike the EU's AI Act. America's Justice Department and the Federal Trade Commission have, however, issued warnings that the technology must abide by laws protecting civil rights, fair competition, consumer protection, and more.

Last week, the FTC sent a letter to OpenAI asking it to explain how its models are trained and what data it collects as the agency investigates whether the company might be breaking consumer protection laws. ®


