Last Thursday, the US State Department outlined a new vision for developing, testing, and validating military systems, including weapons, that use AI.
The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy represents the US’s attempt to guide the development of military AI at a critical time for the technology. The document does not legally bind the US military, but the hope is that allied nations will agree to its guidelines, creating a kind of global standard for building AI systems responsibly.
The statement says, among other things, that military AI should be developed in accordance with international law, that nations should be transparent about the principles underlying their technology, and that high standards should be implemented for verifying the performance of AI systems. It also says that humans alone should make decisions about the use of nuclear weapons.
When it comes to autonomous weapons systems, US military leaders have repeatedly asserted that a human will remain “in the loop” for decisions to use lethal force. But official policy, first issued by the DOD in 2012 and updated this year, does not require this to be the case.
Attempts to establish a global ban on autonomous weapons have so far been fruitless. The International Committee of the Red Cross and campaign groups such as Stop Killer Robots have pushed for an agreement at the United Nations, but some major powers, including the US, Russia, Israel, South Korea, and Australia, have proven unwilling to commit.
One reason is that many inside the Pentagon see increased use of AI across the military, including outside of weapons systems, as necessary and inevitable. They argue that a ban would slow US progress and handicap its technology relative to adversaries such as China and Russia. The war in Ukraine has shown how autonomy, in the form of cheap, disposable drones that are becoming more capable thanks to machine learning algorithms that help them perceive and act, can provide an edge in a conflict.
Earlier this month, I wrote about former Google CEO Eric Schmidt’s personal mission to boost the Pentagon’s AI to keep the United States from falling behind China. It was just one story to emerge from months of reporting on efforts to adopt AI in critical military systems, and how that is becoming central to US military strategy, even though many of the technologies involved are still nascent and untested in any crisis.
Lauren Kahn, a researcher at the Council on Foreign Relations, welcomed the new US declaration as a potential building block for more responsible use of military AI around the world.
A few nations already possess weapons that operate without direct human control in limited circumstances, such as missile defenses that must respond at superhuman speed to be effective. Greater use of AI could mean more scenarios where systems act autonomously, for example when drones are operating beyond communications range or in swarms too complex for any human to manage.
Some proclamations about the importance of AI in weapons, especially from the companies developing the technology, still seem somewhat far-fetched. There have been reports of fully autonomous weapons being used in recent conflicts and of AI assisting in targeted military strikes, but these have not been verified, and in truth many soldiers may be wary of systems that rely on algorithms that are far from infallible.
But if autonomous weapons cannot be banned, their development will continue. That makes it vital to ensure that the AI involved behaves as expected, even if the engineering required to fully enact intentions like those in the new US declaration has yet to be perfected.