Duelling Algorithms: Using Artificial Intelligence in Warfighting

Examining the dichotomy of AI employment in warfare (“Do things better?” or “Do better things?”) and other implementation challenges.

Estimated Time to Read: 7 minutes

By Peter Layton

Technological change is relentless. Fifth generation warfare is only just emerging, but already commercial technology developments are pushing us in new directions. Artificial intelligence (AI) has arrived as a new disruptive force, although its impact and its usefulness to warfighting are unclear. To help address that question, I recently wrote a paper for the Royal Australian Air Force.

Importantly, this paper tries to stay practically focused on the application of emerging intelligent machine technologies to warfighting. Fictional books and movies remain fascinated by notions of robot soldiers, often fighting some final battle against the human race. Such technologies, though, remain distant, and the form in which they may emerge – if at all – is very uncertain. Warfighting being a practical profession, the paper accordingly sticks to more mundane, non-fiction matters in thinking about the here and now.

However, while Terminator cyborgs and Cyberdyne’s Skynet are imaginary, today’s smartphones and internet search engines already use intelligent machine technologies. The Chinese Government’s Skynet intelligent surveillance system is operational, and Project Maven will deliver its AI-powered analysis system to the USAF later this year. Intelligent machines have arrived. Algorithmic warfare is very real.

The paper’s ‘Algorithmic Warfare’ title derives from Project Maven, but it is much broader than that project or indeed AI itself. When we use the term AI, we are generally thinking of an amalgam of various technologies, including ‘big data’ and ‘the cloud’, that when integrated give us machines with particular capabilities. These technologies are emerging from the commercial world and proliferating widely, as the AI in your smartphone will tell you if asked. Given this, the key differentiator in combat performance between future intelligent machines used for warfighting tasks appears likely to be the algorithms each incorporates and how each has been trained.

Algorithms are the sequences of instructions and rules that machines use to solve problems. They transform inputs into outputs and as such are the crucial conceptual and technical foundation stone of modern information technology and the new intelligent machines. Algorithms may also become the conceptual and technical foundation stone of future warfighting – hence the title.
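
To make the input-to-output idea concrete, here is a minimal sketch in Python of a hand-written, rule-based algorithm: it takes hypothetical sensor reports as inputs and produces a ranked contact list as output. The field names, thresholds and scores are invented purely for illustration, not drawn from any fielded system.

```python
# Minimal sketch of an algorithm as a fixed sequence of rules that
# transforms inputs (sensor reports) into outputs (a ranked contact list).
# All field names, thresholds and scores are hypothetical.

def prioritise_contacts(contacts):
    """Rank contacts by a simple, hand-written scoring rule."""
    scored = []
    for c in contacts:
        score = 0
        if c["speed_kts"] > 400:   # fast movers score higher
            score += 2
        if c["range_km"] < 100:    # closer contacts score higher
            score += 3
        if c["emitting"]:          # active emitters score higher
            score += 1
        scored.append((score, c["id"]))
    # Output: contact IDs ordered from highest to lowest score
    return [cid for _, cid in sorted(scored, reverse=True)]

reports = [
    {"id": "A1", "speed_kts": 450, "range_km": 80, "emitting": True},
    {"id": "B2", "speed_kts": 120, "range_km": 300, "emitting": False},
]
print(prioritise_contacts(reports))   # ['A1', 'B2']
```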

Algorithmic warfare, then, is not a discrete new technology such as directed energy or hypersonics. Instead, the concept’s technologies will have a broad, all-pervasive effect, progressively becoming omnipresent across warfighting. Such smart machines, though, do have distinct limitations that need to be understood and that can be exploited.

The characteristics of intelligent machines differentiate them from the traditional programmable machines we are all used to. Unlike these earlier machines, intelligent machines are trained, although it is not always apparent what they have learnt. This aspect is magnified in neural network machines as they continue to learn and evolve ‘on the job.’ Intelligent machines, then, do not necessarily give the same output each time in the same situation. They are capable of emergent behaviour and may well surprise: for better or worse.
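
The contrast with the rule-based sketch above can be shown with a toy trained machine: a tiny perceptron, written here in plain Python for illustration. Nobody writes its decision rule; the rule emerges as numeric weights fitted to labelled examples, and a different seed or different training data produces different weights and, at the margins, different outputs. The features and labels are invented for the example.

```python
# Toy sketch of a 'trained' machine: its decision rule is not written by a
# programmer but emerges as numeric weights fitted to labelled examples.
# Training data and features here are purely illustrative.
import random

def train_perceptron(samples, labels, epochs=20, lr=0.1, seed=0):
    """Fit a simple perceptron; the 'knowledge' it gains is just numbers."""
    random.seed(seed)
    w = [random.uniform(-0.5, 0.5) for _ in samples[0]]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 if correct, +/-1 if wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Two illustrative features per contact (e.g. speed, proximity), scaled 0-1
samples = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = [1, 1, 0, 0]                           # 1 = treat as a priority
weights, bias = train_perceptron(samples, labels)
print("learned weights:", weights, "bias:", bias)
# What the machine has 'learnt' is only these numbers; retraining with a
# different seed or new data yields different numbers and, potentially,
# different behaviour on edge cases.
```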

Intelligent machines are superior to humans in analysing big ‘V’ data: high volume, high velocity, and diverse variety. Regarding data volume, much more data is now collected than can ever be sensibly analysed by humans; there is no viable alternative to machine analysis. Regarding velocity, intelligent machines work at machine speed, almost beyond the comprehension of humans. Regarding variety, humans have limited attention frames, favouring some data sources over others. Machines analyse data more comprehensively.

However, intelligent machines have some shortcomings compared to humans. They are quite brittle, being generally unable to handle even minor context changes. Moreover, such machines have poor domain adaptability, in that they can struggle to apply knowledge learned in one context to another. Humans are also better at inductive thought: being able to generalise from limited information. Humans generally make better judgments in environments of high uncertainty.

This means that the major issue today in introducing intelligent machines to the battlefield is finding the optimal mix of machine and human competencies for any given problem, leveraging the unique cognitive advantages of each. Here, the combination of human, machine, and better process may offer much. Task-optimised human-machine interfaces could be the key to optimal human-machine teaming and thereby to victory in future wars.
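
As one illustration of what such a mix might look like in practice, the sketch below (with an invented classifier, confidence values and threshold) has the machine screen every contact at machine speed and act alone on clear-cut cases, while anything it is uncertain about is referred to a human analyst, whose judgment is the stronger tool in ambiguity.

```python
# Sketch of one human-machine teaming pattern: the machine screens every
# contact at machine speed and acts alone on clear-cut cases, while
# uncertain cases go to a human analyst. The classifier, confidence
# values and threshold are all hypothetical.

AUTO_THRESHOLD = 0.9   # machine acts alone above this confidence

def machine_classify(contact):
    """Stand-in for a trained classifier returning (label, confidence)."""
    return contact.get("guess", "unknown"), contact.get("confidence", 0.5)

def handle_automatically(contact, label):
    print(f"machine: {contact['id']} tagged '{label}' without referral")

def refer_to_analyst(contact, label, confidence):
    print(f"analyst: review {contact['id']} "
          f"(machine says '{label}', confidence {confidence:.2f})")

def triage(contacts):
    for contact in contacts:
        label, confidence = machine_classify(contact)
        if confidence >= AUTO_THRESHOLD:
            handle_automatically(contact, label)          # machine speed
        else:
            refer_to_analyst(contact, label, confidence)  # human judgment

triage([
    {"id": "A1", "guess": "hostile", "confidence": 0.97},
    {"id": "B2", "guess": "hostile", "confidence": 0.62},
])
```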

In being applied to warfighting, intelligent machines may change the character of war and overthrow some established precepts. The current emphasis on quality may be displaced, mass may return to the battlefield, and the pace of battle may quicken. Such notions could disrupt current force structure models. The size of an armed force may become disconnected from the population size of the state fielding it; small, wealthy states might field much larger forces than large, poorer ones. Intelligent machines may also allow all to sharply improve their training, reducing the advantages in skill and experience that some countries’ armed forces currently possess.

In considering strategy, there are two distinct schools of thought: will intelligent machines allow us to do things better, or instead to do better things? The ‘do things better’ school emphasises inserting intelligent machines deep into battle networks to enhance performance. Such networks currently have trouble processing and assessing information; using intelligent machines within the network may solve this. The ‘do better things’ school emphasises distributing intelligent machines in a manner that shifts the primary function of battle networks from information sharing towards machine-waged warfare. Our developing fifth generation battle networks might then morph into active fighting networks where edge devices – not the network – dominate. Under this construct, machine-speed hyperwar emerges and the tactical mainstay becomes swarming intelligent machines.

Intelligent machine technology advances have drawn Chinese and Russian interest. China has become a ‘fast follower’ and is implementing an ambitious new national strategy to become the world leader in intelligent machine technology. In the military domain, the People’s Liberation Army (PLA) considers that intelligent machine technology will lead to ‘intelligentized’ warfare replacing today’s network-centric warfare. An early embrace of such a transformation may allow the PLA to overtake America’s military. In contrast, Russia’s flagging economy hinders its progress in intelligent machine technology but creates a demand to innovate, using technology created both in Russia and elsewhere.

China and Russia lead in two specific national security areas. China has long sought to enforce domestic stability. These efforts, however, are becoming much more individualised and intense through the progressive application of intelligent machines to undertake population surveillance and control across the country. China is moving towards a ‘rule by algorithm’ future. On the other hand, Russia has embraced algorithmic warfare influence operations to disturb other nations’ domestic stability. Russia cleverly uses others’ algorithms against them, perhaps creating a whole new dimension to such warfare and suggesting a way smaller nations might manoeuvre in the new ‘intelligentized’ warfare era.

In considering ethical and law of war issues, human responsibility and accountability loom large, and the application of intelligent machine technologies to warfare will not fundamentally alter this. This may seem a paradoxical situation when machines do some tasks much better than humans. Intelligent machines, though, have a range of shortcomings, particularly in being inherently unpredictable. Only humans can ‘do’ responsibility and accountability, and this makes three issues important.

Firstly, users must understand their intelligent machines are fallible and will at times fail in unexpected ways. The machines’ tactical employment needs to reflect this.

Secondly, for humans to understand how their intelligent machines operate – their fortes and foibles – they will need optimised training regimens. Intelligent machines will bring new training demands, not make training unnecessary. Intelligent machines and their humans will need to train together.

Thirdly, the human-machine interface design is critical to allowing humans to understand what their intelligent machines are doing. In this, however, humans should bear in mind that gaining such understanding is inherently problematic. The critical matter for humans is to try to ensure that the occasional inexplicable intelligent machine output does not become an unrecoverable action.
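
Read as a design rule, that advice might look like the sketch below: machine recommendations whose effects cannot be undone are gated behind explicit human confirmation, while reversible ones may proceed and be rolled back if the output turns out to be spurious. The action names and their categories are invented for illustration.

```python
# Sketch of a recoverability gate: machine recommendations that cannot be
# undone require explicit human confirmation before execution. The action
# names and their classification as reversible or not are invented.

IRREVERSIBLE = {"engage_target"}              # effects cannot be undone
REVERSIBLE = {"retask_sensor", "flag_track"}  # effects can be rolled back

def execute(recommendation, human_confirms):
    action = recommendation["action"]
    if action in IRREVERSIBLE:
        # An inexplicable output must never translate directly into an
        # unrecoverable action: a human stays in the loop here.
        if human_confirms(recommendation):
            print(f"executing {action} with human authorisation")
        else:
            print(f"{action} withheld pending human review")
    elif action in REVERSIBLE:
        print(f"executing {action}; can be rolled back if spurious")
    else:
        print(f"unknown action {action}; defaulting to human review")

# Example: the analyst declines this particular recommendation
execute({"action": "engage_target", "track": "A1"},
        human_confirms=lambda rec: False)
```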

Intelligent machines seem set to remake our ways of war. Our machines have previously been extensions of ourselves; they do tasks our bodies can do, only physically better. But our new machines are different. They are intelligent, can learn, display emergent behaviours, and make apparently incomprehensible decisions. It is tempting to anthropomorphise them as humans have done for centuries with our gods and animals, but this would be unwise. Our new intelligent machines do not think like us; they literally reason differently, have dissimilar logic flows, and possess unusual rationalities. In the business of making war, they are truly new actors that bring disruptive capabilities in their wake. The future of war may well not be like its past. Buckle up for a possible reboot.

Dr. Peter Layton is a RAAF Reserve Group Captain and a Visiting Fellow at the Griffith Asia Institute, Griffith University. He has extensive aviation and defence experience and, for his work at the Pentagon on force structure matters, was awarded the US Secretary of Defense’s Exceptional Public Service Medal. He has a doctorate from the University of New South Wales on grand strategy and has taught on the topic at the Eisenhower College, US National Defence University. For his academic work, he was awarded a Fellowship to the European University Institute, Fiesole, Italy. He is the author of the book Grand Strategy.

The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
