UCAV (Unmanned Combat Air Vehicle) military drone

After Thursday’s Dogfight, It’s Clear: DARPA Gets AI Right

In the dogfight Thursday between AI and a pilot, AI won. But what does that mean?

AI prevailed against a human in DARPA’s recent AlphaDogfight trials. Given that DeepMind’s AI has already achieved grandmaster level in the StarCraft II video game, an AI beating a human in a simulated, closed-world contest is not impressive. What is impressive is AlphaDogfight’s role in DARPA’s overall plan for the development of AI in the military.

DARPA, the United States’ Defense Advanced Research Projects Agency, has been called the US military’s “Department of Mad Scientists.” Its mission is to prevent strategic military surprise by supplying fertile ground where new and revolutionary ideas can sprout and grow. DARPA laid the groundwork for the internet and gave us GPS, the Global Positioning System that guides our Google Maps directions.

Less well known is DARPA’s role in initiating research into self-driving cars. More than a decade and a half ago, DARPA’s Grand Challenge offered a million-dollar prize to autonomous vehicles that could traverse a designated course. Many of the participants went on to pursue the commercial development of self-driving cars.

As I pointed out in my book The Case for Killer Robots, we need to examine history in order to soberly assess how we should adopt new AI weapons. Apart from technology, military adoption of AI requires the application of psychology. Col. Dan “Animal” Javorsek, the DARPA project manager for AlphaDogfight, gets it. He recounts a lesson in psychology from military history that can be applied to today’s AI.

In the early years of World War II (1939–1945), Hitler’s blitzkrieg (“lightning war”) strategy defeated Poland, Norway, Belgium, Holland, and France before the US became involved in 1941. U.S. General George C. Marshall, later the namesake of the Marshall Plan, asked Chief of Cavalry Major General John Herr how the US could counteract the Nazis’ lightning war. Herr, a strong proponent of mounted cavalry (soldiers on horseback), said horses should be transported to the front lines by vehicle so they would be fresh when they initiated their attack on the German Panzer tanks. The foolish advice was ignored, and the US cavalry wisely began abandoning horses in favor of mechanized vehicles.

The lesson here is that inertia can impede thoughtful strategy. Like the cavalry, US fighter pilots have a long and proud tradition of doing things a certain way. Think Tom Cruise and Val Kilmer in Top Gun (1986).

According to Javorsek, fighter pilots need to be convinced of the capabilities and utility of AI in combat beyond current practice. DARPA’s AlphaDogfight is a step in this direction.

By posing the relevant questions, DARPA’s overall AI strategy realistically embraces both the capabilities and the limitations of AI. How can AI enhance a pilot’s performance by lessening cognitive load? How can unmanned companion aircraft flying alongside a fighter jet be used effectively? What are the limitations and dangers of autonomous drones?

Remotely controlled unmanned drones are not always feasible: their link to a remote pilot can be severed by enemy jamming. Fully autonomous drones cannot yet be trusted to make complex life-and-death decisions. So oversight by a human pilot is the sober solution for the near future. DARPA is addressing these and other important questions.

The mission of US military technical research is to equip the American war fighter with the tools for victory. My research as a professor has been funded by the US Army, Navy, and Air Force. I am invariably impressed by the serious dedication of those I have worked with in the military. This is less true of other government agencies that have funded my research. DARPA’s AlphaDogfight and the subsequent adoption of AI in the military confirm my experience.

DARPA gets AI right.


Backgrounder: DARPA has scheduled AI vs. AI aerial dogfights for next week (Robert J. Marks): A round robin tournament will select the AI that faces off against a human pilot on Thursday. A successful AI dogfight tournament is exciting, but it is only a first step toward enabling such fighters to be used in combat.

See also: Book at a Glance: Robert J. Marks, The Case for Killer Robots

Further reading: Why we can’t just ban killer robots. Should we develop them for military use? The answer isn’t pretty. It is yes. (Robert J. Marks, February 15, 2019)



Robert J. Marks II

Director, Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Besides serving as Director, Robert J. Marks Ph.D. hosts the Mind Matters podcast for the Bradley Center. He is Distinguished Professor of Electrical and Computer Engineering at Baylor University. Marks is a Fellow of both the Institute of Electrical and Electronics Engineers (IEEE) and the Optical Society of America. He was Charter President of the IEEE Neural Networks Council and served as Editor-in-Chief of the IEEE Transactions on Neural Networks. He is coauthor of the books Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks (MIT Press) and Introduction to Evolutionary Informatics (World Scientific). For more information, see Dr. Marks’s expanded bio.
