Open source AI may dodge the worst of looming US regulation — and that could make it a popular alternative to the closed systems in development.
According to reports from the Associated Press, published by ABC News, the US government is set to argue that open source AI systems pose less of a risk than their closed counterparts, suggesting there’s less need to restrict companies making open systems.
The report stems from President Joe Biden last year asking the US Commerce Department to consider the merits of open models, which are not fully open source but make details such as model weights publicly available. The issue has divided the industry: Meta’s Mark Zuckerberg has argued for AI to be more open, while OpenAI prefers to keep the workings of its systems locked down.
“A year ago, there was a strong narrative about risk and long-term concerns about AI systems being too powerful,” Alan Davidson, an assistant secretary of the U.S. Commerce Department, told AP.
“We continue to have concerns about AI safety, but this report reflects a more balanced view that shows that there are real benefits in the openness of these technologies.”
Open source AI vs closed source AI
The US report highlights the debate between open and closed systems. Notably, proponents of the former believe transparency can not only help mitigate risks such as bias but also spur innovation, whereas closed-box systems don’t allow ideas to be shared or improved upon, though they do keep key technologies from being used in dangerous ways by those outside the company.
Angus Allan, Senior Product Manager at CreateFuture, told ITPro that while the early days of the generative AI ‘boom’ have been dominated by closed source, proprietary models, open source development is beginning to gain traction.
“Until recently, ‘state of the art’ AI models have almost exclusively been the product of closed source models like ChatGPT, Gemini, and Claude,” he said.
“Open source models have lagged behind their peers, meaning there has been no motivation to strictly regulate them.”
That changed with Meta, which has taken a more open approach, he noted.
“Meta has been a trailblazer in advocating for open sourced AI model development, and their latest model, Llama 3.1, is on-par with its closed source counterparts, even beating the original GPT-4 in many benchmarks.”
But Chris Black, AI evangelist at broadcast company Vizrt, told ITPro that there are risks to open models — and that means some regulation may be warranted.
“The impending EU AI Act shows that reasonable safety guardrails can be put in place without hindering innovation, making exceptions for some open-sourced models that are developed with full transparency,” Black said.
“With this in mind, the US placing little to no regulation on open source models poses a degree of risk… The free rein of open source AI models could set a dangerous precedent of easy access to powerful technologies by bad actors.”
Indeed, while transparency offered by open systems can mitigate some problems, it doesn’t solve every danger raised by AI, said Ilia Kolochenko, CEO at ImmuniWeb.
“Technically speaking, while open-source AI – however we define it – may provide a higher degree of transparency compared to proprietary AI technologies, other risks including hallucinations and inaccuracies, capacity to cause harm, infringement of intellectual property, and privacy risks are not necessarily mitigated in any manner by the fact that AI is an open source AI,” he said.
Should businesses favor open source AI too?
Dodging potential restrictive regulatory action could be one reason to look at using open source models in your business — but it’s not the only factor to consider, according to Allan.
This latest seal of approval from US officials is likely “signaling to industry” that their preference is to spur open innovation. Crucially, Allan said, this could “tip the scale in terms of industry adoption of open source models”.
“For businesses, though, the focus shouldn’t be on a difference in regulatory regime, the difference is on performance, safety, and cost,” he added. “And open source models are now competing strongly on these metrics.”
Paul Henninger, head of connected technology for KPMG UK, argued that open systems can be more cost-effective to begin with, but rise in cost when adapted to specific tasks. Henninger advised companies to consider how well any AI can be tailored to meet the business’s objectives.
“Open systems are the best way to maintain access to the latest research on AI problem solving because they benefit from the community’s collective knowledge and improvements,” he added.
“The democratic nature of the technology means that long-term transformation depends on broad access to the tools. On the flip side, participating in the open system means sharing innovation and advancements with competitors.”
What even is open source AI?
Problematically, various government reports haven’t defined what they mean by open source AI. While open source software has specific definitions and licenses, that’s not the case for open AI, Kolochenko warned.
“First and foremost, one needs to give a crystal-clear definition of open-source AI. Does it mean that, say, the LLM will be publicly accessible? Does it mean that the model’s architecture and underlying algorithms will be publicly documented or otherwise disclosed?
“Does it mean that the model’s training data will be publicly available or at least described in a comprehensive manner? Finally, does it mean that the model is lawfully trained on data in public access? These are all separate and complex but interrelated questions, which are frequently conflated by people.”
Amanda Brock, CEO of OpenUK, agreed that the terminology matters — if open source AI will be restricted less than closed systems, we need to define what each means and stop calling it “open source”.
“The real question for every government is of course what are AI ‘open systems’ — which the press and Meta insist on calling open source AI,” she said. “At the moment, despite the EU AI Act giving exclusions to ‘free and open-source AI’, neither term has a meaning.”
Brock noted there are efforts to define open source AI, but warned any definition must cover all aspects of AI, with a full understanding of what it means for each to be partly or totally open, along with the benefits and risks of each. She pointed to the recent Radboud University report, which breaks systems down into 14 key components, as an example.
“Understanding the principles of open source software at the heart of the Open Source Software Definition is key — the free flow where anyone can use it for any purpose removes the friction of other licensing or payment,” Brock added.
“Meta’s Llama, whilst a great step in the right direction of openness, is limited in what is opened and introduces friction with commercial restrictions in the license.”