Pentagon (AP) —
The Pentagon now aims to field thousands of relatively cheap, AI-enabled autonomous systems by 2026 to keep pace with China. While funding remains unclear, the ambitious initiative will accelerate difficult decisions about which AI technologies are mature and trustworthy enough to deploy.
US Air Force Colonel Matt Strohmeyer said the ability to make better and faster decisions than competitors and adversaries is at the heart of many of the new technologies being discussed by US military officials.
“This is not technology for the sake of technological advancement, this is technology that allows soldiers on the battlefield to make better, faster decisions,” Strohmeyer said.
There is little disagreement among AI scientists and Pentagon officials that the US military’s arsenal will eventually include lethal weapons that are fully autonomous, able to make their own decisions. One example is a robotic swarm of unmanned aircraft, or drones, entering combat: some could decide to scout for information while others carry out attacks.
Secretary of Defense Lloyd Austin said, “Artificial intelligence is at the heart of our innovation agenda, helping us to compute faster, share better, and leverage other platforms. These are fundamental things for competing in the future.”
This new initiative, dubbed “Replicator,” is an initial step toward further integrating AI into the US military.
Two competitors vying to develop fully autonomous weapons systems for Replicator are Anduril and Shield AI. Both are backed by hundreds of millions of dollars in venture capital, and both take a software-first approach, having acquired makers of larger drones.
Craig Martell, Chief Digital and AI Officer at the US Department of Defense, is less concerned about autonomous weapons making their own decisions than about fielded systems that do not work as advertised.
“Regardless of the autonomy of the system, there will always be an agent who is responsible and understands the limitations of the system, who has been trained with the system, has appropriate confidence about when and where the system can be used, and will always be responsible for deploying it and serving as a guardrail,” said Martell.
However, one pressing challenge is recruiting and retaining the human talent needed to test AI technology, as the Pentagon cannot compete on salaries.