Algorithms of Armageddon: The Impact of Artificial Intelligence on Future Wars

Release Date: March 12, 2024
Publisher/Imprint: Naval Institute Press
Pages: 248
Reviewed by: 

“an excellent primer for anyone wanting a solid understanding of this future U.S. national security challenge.”

Is artificial intelligence (AI) the next superweapon of warfare? What is AI and how will it truly affect warfare in this century? Can anything be done to stop the spread of AI and its impacts on future conflict? These are questions the authors try to answer in a fairly straightforward manner in this informative volume that addresses one of the most controversial topics in the defense arena today.

AI has acquired a great mystique about what it can and cannot do, and there is a lot of misinformation about exactly what constitutes “artificial intelligence.” The authors lay out a well-defined framework that does not get bogged down in technical jargon about what makes up an AI algorithm. As they correctly point out, AI is the integration of an algorithm with large amounts of what is termed “big data”—typically terabytes or even petabytes—and the ability of that algorithm to “learn” as it runs repeatedly against continually updated data sets.

There are several critical factors the authors explore that will determine the ultimate impact of AI on the battlefield and in military weapons systems. First, the development of AI will occur in both the civilian and military realms, and there will be a great deal of overlap and technology transfer between them. As one example, the technology that enables a smart, self-driving automobile could easily be adapted to drive a tank or other armored vehicle on a battlefield, so the military is unlikely to have a monopoly on the development of this technology.

Second, controlling the proliferation of AI through arms-control-style treaties is likely to be only as effective as historical arms control efforts: some countries may abide by them, while others will not, either because they do not trust other nations or because they hope to gain an advantage in a future conflict by developing AI programs covertly.

Finally, while AI may seem to be the next great leap in military technology, it still depends on algorithms designed and developed by human programmers. As Google’s early 2024 debacle with its AI image generator showed, the biases of programmers can produce flawed results, or in Google’s case, embarrassingly incorrect historical images. Future military AI systems will face the same challenges in development and testing before being fielded.

Once these factors are resolved, however, there is no doubt that AI will work its way onto future battlefields, driven primarily by two recent developments. First, the widespread use of unmanned systems, particularly unmanned aerial systems, has reached a tipping point, much as biplanes did in World War I: they are here to stay, and every military is now trying to figure out exactly how to integrate them into planning and battlefield operations. The war in Ukraine has vividly demonstrated the utility of drones for reconnaissance, targeting, and even attack. Combined with an AI network that can control large numbers of small attack drones, the potential is startling.

The second development is the sheer volume of information that modern sensor systems generate and the growing inability of commanders and their staffs to integrate and synthesize it quickly enough to make decisions on a modern battlefield. AI, with its ability to ingest and process massive amounts of data, will become an invaluable tool for managing the modern battlespace. The authors make a compelling case that the integration of humans and AI will fundamentally shift how battles are planned and conducted.

The most controversial aspect of military AI, and one the authors spend a fair amount of time discussing from technical, legal, and policy angles, is the use of AI systems to actually control weapons release, whether on unmanned drones or other weapon systems. The U.S. Navy has successfully used a defensive semi-autonomous weapon system, the Close-In Weapon System (CIWS), to shoot down Houthi missiles and drones fired at warships in the Red Sea. But the potential for offensive use, particularly against targets where civilian casualties could occur, remains highly controversial. The authors examine this potential from all sides, particularly the U.S. military’s ongoing development of doctrine, potential rules of engagement, and the broader ethical and legal issues of removing the “man in the loop” from weapons release.

In conclusion, the authors point out the uncomfortable truth that AI is here to stay, that it will be incorporated into future military systems, and that the U.S. is likely falling behind China and Russia in developing military AI systems and capabilities. They paint an alarming picture of incoherent military and national policy, arguing that the U.S. must develop both offensive and defensive AI capabilities, along with the doctrine, strategy, and ultimately training needed to ensure its forces on a future battlefield are not overwhelmed by an opponent’s use of this significant new force multiplier. This book is an excellent primer for anyone wanting a solid understanding of this future U.S. national security challenge.