A group of government, academic and military leaders from around the world spent the past few days talking about the need to address the use of artificial intelligence in warfare.
Their conclusions? We need to act now to avoid regulating AI only after it causes a humanitarian disaster or war crime.
The first global Summit on Responsible Artificial Intelligence in the Military Domain, or REAIM, brought representatives from more than 60 countries – including the US and China – to The Hague in the Netherlands to discuss, and ultimately sign, a call to action on the responsible use of AI in the military.
Russia did not participate.
Signatories agreed that AI's accelerating use makes it critical to act now to establish international military AI norms, and to address issues including AI's unreliability, the extent of human responsibility in AI-assisted decision-making, the unintended consequences of AI use, and potential escalation risks.
One of the ways the summit hopes to enact its goals is through the establishment of a Global Commission on AI that will raise awareness of how AI can and should be used in the military domain, and how such tech can be developed and deployed responsibly.
AI’s two paths: Destruction or mercy
One discussion at the summit concerned the extent to which humans will be held responsible for the actions of autonomous systems, with the conclusion appearing to lean toward people remaining the final decision-makers when it comes to firing an autonomous weapon or acting on a recommendation made by an AI.
“Imagine a missile hitting an apartment building,” said Dutch deputy prime minister Wopke Hoekstra. “In a split second, AI can detect its impact and indicate where survivors might be located. Even more impressively, AI could have intercepted the missile in the first place. Yet AI also has the potential to destroy within seconds.”
Hoekstra went on to explain in his speech opening the summit how the current status of AI reminded him of previous international rules of war established to prevent human rights abuses.
“The ban on expanding dumdum bullets that left victims with horrific wounds, the prohibition of biological and chemical weapons and the treaty to prevent the spread of nuclear weapons” could all be seen as parallels, Hoekstra said.
But there's one big difference between those rules, enacted to curb inhumane weapons after the fact, and AI: this time, action is being considered before the worst has happened. "We have the opportunity to expand and fortify the international legal order, and prevent it from breaking down," Hoekstra said.
As for the applications responsible use of AI could be put to in war, Dutch defence minister Kajsa Ollongren suggested there was no reason it couldn't be used to save lives.
“With the right frameworks and legislation in place, using AI will make our operational and logistical processes simpler and more efficient. In this way we not only protect our own troops, but we can also limit harm and casualties to the greatest extent possible,” Ollongren said. ®