Should Algorithms Control Nuclear Weapons Launch Codes? The US Says No

Last Thursday, the US State Department outlined a new vision for the development, testing, and verification of military systems – including weapons – that use AI.
The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy represents an attempt by the US to direct the development of military AI at a crucial time for the technology. The document does not legally bind the US military, but the hope is that allied nations will agree to its principles and create some sort of global standard for responsibly building AI systems.
Among other things, the declaration says that military AI must be developed in accordance with international law, that nations should be transparent about the principles underlying their technology, and that high standards should be implemented for verifying the performance of AI systems. It also states that humans alone should make decisions about the use of nuclear weapons.
When it comes to autonomous weapons systems, US military leaders have often given assurances that a human will stay “in the loop” on decisions about the use of lethal force. But official guidance, first issued by the DOD in 2012 and updated this year, doesn’t require it.
Attempts to forge an international ban on autonomous weapons have so far failed. The International Red Cross and campaign groups like Stop Killer Robots have pressed the UN for a deal, but some major powers – the US, Russia, Israel, South Korea and Australia – have shown themselves unwilling to commit.
One reason is that many in the Pentagon see increased use of AI throughout the military, including outside of weapons systems, as vital – and inevitable. They argue that a ban would slow US progress and leave its technology at a disadvantage against adversaries like China and Russia. The war in Ukraine has shown how quickly autonomy can confer an advantage in a conflict, in the form of cheap, disposable drones that are becoming increasingly capable thanks to machine learning algorithms that help them perceive and act.
Earlier this month, I wrote about former Google CEO Eric Schmidt’s personal mission to beef up the Pentagon’s AI to ensure the US doesn’t fall behind China. It was just one story to emerge from months of reporting on efforts to introduce AI into critical military systems, and on how AI is becoming a central part of US military strategy – even though many of the technologies involved remain nascent and untested in a crisis.
Lauren Kahn, a research associate at the Council on Foreign Relations, hailed the new US declaration as a potential building block for more responsible use of military AI around the world.
Some nations already have weapons that function in limited circumstances without direct human control, such as missile defense systems that must respond at superhuman speeds to be effective. Greater use of AI could mean more scenarios where systems act autonomously, such as when drones operate out of communication range or in swarms too complex for a human to manage.
Some proclamations about the need for AI in weapons, especially from the companies developing the technology, still seem a little far-fetched. There have been reports of fully autonomous weapons being used in recent conflicts and of AI assisting in targeted military strikes, but these have not been verified, and in truth many soldiers may be wary of systems that rely on algorithms that are anything but infallible.
But if autonomous weapons cannot be banned, their development will continue. That makes it vital to ensure the AI involved behaves as expected – even if the engineering required to fully realize intentions like those in the new US declaration has yet to be perfected.