Editor’s Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (part a. and b.) and third question (part c.) which ask how AI will affect the character and/or the nature of war, and what acquisition and application processes need to change to allow the government to address the AI national security and defense needs of the United States.
Recently, on a military base in the Euphrates River Valley, a soldier described to me how he was supposed to defend the base from a small hostile drone, like the ones folks back home use to take dramatic shots for real estate listings or weddings. On his plywood desk sat an array of laptops and devices, but none of them were connected to, or worked with, the others. Occasionally one of the devices chimed an alert. These alerts were usually false positives, but he still had to check several of the other systems manually, one after another. It wasn’t clear where he should direct his attention or how he should interpret all the information he was receiving.
As the soldier spoke, I recalled having similar feelings when working with military technology to target the Islamic State as a marine in 2016: the innumerable hours watching video footage from aerial platforms for momentary glimpses of nefarious activity, the sifting of endless intelligence