Four of the world’s leading technology companies agreed to a new partnership aimed at promoting responsible use of artificial intelligence.
On Wednesday, Anthropic, Google, Microsoft and OpenAI launched the Frontier Model Forum, an industry group focused on developing safe and responsible frontier AI models, which they define as “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks.”
According to the announcement, the forum will “draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem,” including through technical evaluations, benchmarks, and industry best practices and standards.
Microsoft Vice Chair and President Brad Smith said, “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
White House secures voluntary commitments from seven AI companies
The announcement comes less than a week after U.S. President Joe Biden secured voluntary commitments from seven leading AI companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — centered on safety, security and trust.
Prior to publicly releasing new products, the companies agreed to conduct internal and external security testing of their AI systems, and pledged to share information across industry, governments, civil society and academia on managing AI risks.
Other commitments included investing in cybersecurity, mitigating insider threats, and facilitating third-party discovery and reporting of vulnerabilities in their systems. The seven companies also pledged to develop mechanisms to ensure users know when content is AI-generated and to prioritize research on the societal risks posed by AI, including bias, discrimination and privacy.
The White House has been engaging on AI policy for some time, including through the release of the Blueprint for an AI Bill of Rights in October 2022.
Calls for an executive order
However, not everyone thinks the voluntary commitments are enough. Suresh Venkatasubramanian, a former assistant director for science and justice at the White House Office of Science and Technology Policy and co-author of the AI blueprint, said there are “a few sound practices within these commitments,” including testing for potential harms prior to deployment, independent evaluations, and safety by design.
“The problem is that these commitments are vague and voluntary,” he wrote in a column for Wired.
Venkatasubramanian believes that, in addition to legislation ensuring companies live up to their commitments, the federal government “can make a real difference by changing how it acts, even in the absence of legislation.”
To help, Venkatasubramanian called on the White House “to issue the executive order promised at last week’s meeting, alongside specific guidance that the Office of Management and Budget … will give to agencies.”
He said, between the blueprint and the U.S. National Institute of Standards and Technology’s AI Risk Management Framework, “we already have a roadmap for how the government should oversee the deployment of AI systems in order to maximize their ability to help people and minimize the likelihood that they cause harm.” The executive order could “enshrine these best practices,” he said.
In a podcast interview soon to be released by the IAPP, Venkatasubramanian also said that, while broader calls for international codes of conduct are worthwhile, he is more focused on localized solutions.
Will there be international cooperation?
In an op-ed published in the Financial Times Tuesday, U.S. Secretary of State Antony Blinken and Secretary of Commerce Gina Raimondo said, “to shape the future of AI, we must act quickly.”
“As home to many of the leading companies, technologies and minds driving the AI revolution,” they wrote, the U.S. “has the ability and responsibility to lead on its governance. We are committed to doing so in partnership with others around the world to ensure the future reflects our shared values and vision for this technology.”
Blinken and Raimondo touted the G7’s Japan-led Hiroshima AI Process as a vehicle for realizing international commitments. “We want AI governance to be guided by democratic values and those who embrace them,” they wrote, “and G7-led action could inform an international code of conduct for private actors and governments, as well as common regulatory principles for states.”
Regulators already on the move
Though there are growing calls for international cooperation, the U.S. Federal Trade Commission has been consistent and clear in its belief that it already has many of the tools it needs to enforce responsible AI practices.
At the IAPP Global Privacy Summit 2023 in April, FTC Commissioner Alvaro Bedoya said, “The reality is AI is regulated (in the U.S.). Unfair and deceptive trade practices laws apply to AI. … At the FTC, our core Section 5 authority extends to companies making, selling or using AI. If a company makes a deceptive claim using or about AI, that company can be held accountable.”
Earlier this month, the FTC opened an investigation into OpenAI over an alleged data leak and the accuracy of its popular ChatGPT. The agency sent OpenAI a 20-page “demand for records about how it addresses risks related to its AI models.”
In response, OpenAI said it would work with the agency.