Artificial Intelligence in the Operational Information Environment: The Need for Proactive Doctrine

By: Thomas A. Drohan
Approximate Reading Time: 12 minutes

Joint Operations doctrine about the Operational Environment (OE) omits the agency of artificial intelligence (AI). How is this a problem? After all, US Joint Publication 3-0 (IV-1 to IV-2) defines the Information Environment (IE) as expansively as it ever has, to include “cognitive” attributes. Consider this excerpt:

The information environment comprises and aggregates numerous social, cultural, cognitive, technical, and physical attributes that act upon and impact knowledge, understanding, beliefs, world views, and, ultimately, actions of an individual, group, system, community, or organization. The information environment also includes technical systems and their use of data. The information environment directly affects all OEs.  Information is pervasive throughout the OE. To operate effectively requires understanding the interrelationship of the informational, physical, and human aspects that are shared by the OE and the information environment. Informational aspects reflect the way individuals, information systems, and groups communicate and exchange information. Physical aspects are the material characteristics of the environment that create constraints on and freedoms for the people and information systems that operate in it. Finally, human aspects frame why relevant actors perceive a situation in a particular way. Understanding the interplay between the informational, physical, and human aspects provides a unified view of the OE.

The problem arises when we see AI only as a human-made technical system, rather than as an autonomous influencer itself. If that view becomes conventional wisdom, we create a significant vulnerability in the operational IE. Why?

AI is becoming a source of influence in critical operations.

AI Influences

Consider a few examples. Each capability is called for in the National Security Strategy (NSS) and the National Defense Strategy (NDS), but not specified in terms of effects.

Briefly, the NSS has four goals: 1. protect the homeland; 2. promote prosperity; 3. achieve peace through integrated power; and 4. advance influence. Under goal #3, there are three “renew and make competitive” objectives: comparative advantages; military capabilities; and diplomacy and statecraft.

So, what types of military capabilities can integrate various instruments of power (Goal 3)? Capabilities to “connect any sensor to any shooter in any domain,” and to sustain operations in contested environments are synergistic force integrators.

In support of the NSS, the NDS has eleven objectives. In compressed form, these are: 1. defend the homeland; 2. sustain joint force advantages; 3. deter adversaries; 4. enable inter-agency efforts; 5. maintain regional balances of power; 6. defend allies; 7. prevent adversarial weapons of mass destruction; 8. counter terrorism; 9. preserve common domains; 10. acquire speedy affordable performance; and 11. secure the national security innovation base.

Now we apply our question to defense strategy. What types of military capabilities can achieve these objectives? Again, a joint force with survivable capabilities for dynamic force employment is a basic requirement for realizing such a broad scope of objectives. Systems such as advanced autonomous networks, resilient C4ISR, and agile logistics are foundational to effectiveness.

Such AI systems can also exert influence, as in the following three examples.

Multi-Domain Operations (MDO) rely on technology with humans in the loop, but through interfaces that essentially represent machines’ conclusions. Increasingly, human decisions are based on information from machine-processed data, not the data itself. How many pilots mentally compute a fix-to-fix, rather than accepting route guidance from integrated avionics? There’s often no time for the former as 5th-generation operators orchestrate multi-source situational awareness. The more we rely on technology, the more we accept the conclusions presented to us by machine processing. In a densely interconnected IE, decisions made on the basis of those conclusions often have unpredictable nth-order effects.
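To see how much cognition gets delegated, here is a minimal sketch of the fix-to-fix arithmetic that integrated avionics performs on the pilot’s behalf (the function and its flat-plane approximation are illustrative assumptions, not actual avionics code):

```python
import math

def fix_to_fix(cur_radial, cur_dme, tgt_radial, tgt_dme):
    """Solve a fix-to-fix: heading and distance from one station
    radial/DME fix to another, using a flat-plane approximation."""
    # Convert each (radial, DME) fix into x/y offsets from the station
    cx = cur_dme * math.sin(math.radians(cur_radial))
    cy = cur_dme * math.cos(math.radians(cur_radial))
    tx = tgt_dme * math.sin(math.radians(tgt_radial))
    ty = tgt_dme * math.cos(math.radians(tgt_radial))
    dx, dy = tx - cx, ty - cy
    heading = math.degrees(math.atan2(dx, dy)) % 360  # heading to fly
    return heading, math.hypot(dx, dy)                # heading, distance (NM)

# From the 090 radial at 20 DME to the 180 radial at 15 DME
heading, distance = fix_to_fix(90, 20, 180, 15)
print(f"Fly heading {heading:03.0f} for {distance:.1f} NM")
```

Crews once approximated this mentally; now the answer simply appears on a display, and the decision rests on the machine’s conclusion.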

Other critical examples are Agile Combat Support (ACS) and Agile Combat Employment (ACE).

ACS is the foundation and lifeblood of rapid, sustained deployments. An agile combat support enterprise is a huge part of USAF Chief of Staff General David Goldfein’s call “to design the Air Force of the future in alignment with the National Defense Strategy.” More automation of operational testing and software development is underway. As Undersecretary of Defense for Acquisition and Sustainment Ellen Lord put it, software is defining combat systems and hardware is enabling them. AI that writes code leads to AI that writes software.

ACE is critical to conducting distributed operations. Effective operations require resilient communications as threats target our platforms and linkages. In rapidly changing, highly contested environments, decisions to switch routing among nodes are shaped by machine processing and confirmed by humans. Who checks the assumptions of those algorithms? Certainly, hackers looking for zero-day exploits do. Many other desired objectives, such as reducing deployment footprints, also rely on technologies such as small smart munitions and distribution systems (11, 15, 16).
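The routing decision itself is ordinary graph search. A toy sketch of machine-proposed, human-confirmed rerouting, assuming a simple link-cost network (node names and costs are hypothetical):

```python
import heapq

def best_route(links, src, dst):
    """Dijkstra shortest path over link costs: a toy stand-in for the
    machine processing that proposes a comms route for human confirmation."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist[node]:
            continue
        for nxt, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    # Reconstruct the proposed route for the operator to confirm
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Hypothetical link costs; a degraded link raises the cost through node "B"
links = {
    "HQ": {"A": 1, "B": 1},
    "A":  {"FWD": 2},
    "B":  {"FWD": 9},  # jammed or contested: the algorithm routes around it
}
print(best_route(links, "HQ", "FWD"))  # ['HQ', 'A', 'FWD']
```

The assumptions worth checking live in the cost weights: whoever sets, or corrupts, those numbers steers every “optimal” route the algorithm proposes.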

Micro munitions and the distributed ability to employ them provide kinetic options. This technology influences decision makers’ thinking about whether and how to intervene. The availability of other AI-enabled capabilities (bots, DDoS attacks) generates options for a multitude of other actors in the IE. As in our preceding MDO and ACS examples, these actors can create unpredictable effects.

AI, therefore, is one cause of those effects: an autonomous influencer.

Doctrine Reacts

So, why doesn’t our doctrine recognize that AI can be a cognitive actor, more than a set of predictable, pre-programmed algorithms? One answer is that doctrine encourages drawing lessons learned from past experience, rather than anticipating alternative futures:

Joint doctrine presents fundamental principles that guide the employment of US military forces in coordinated and integrated action toward a common objective. It promotes a common perspective from which to plan, train, and conduct military operations. It represents what is taught, believed, and advocated as what is right (i.e., what works best). It provides distilled insights and wisdom gained from employing the military instrument of national power in operations to achieve national objectives.

Such retrospection becomes institutionalized in how we see our professional identities and related warfighting platforms, even as we watch the future emerge right in front of us. Take social media. Algorithms that create filter bubbles of our own preferences effectively monetize us once we start making predictable decisions. Did our doctrinal approach of mining fundamental principles out of past practices help us anticipate this, or non-monetizing behavioral applications of AI technology?
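The mechanism is simple enough to caricature in a few lines. A minimal sketch, assuming a feed that almost always serves a user’s most-clicked topic (the topics and explore rate are invented for illustration):

```python
import random
from collections import Counter

def serve(history, catalog, explore=0.05):
    """Toy engagement-maximizing feed: almost always serve the user's
    most-clicked topic, rarely explore anything new."""
    if history and random.random() > explore:
        return Counter(history).most_common(1)[0][0]
    return random.choice(catalog)

catalog = ["politics", "sports", "science", "cooking"]
history = []
for _ in range(50):
    item = serve(history, catalog)
    history.append(item)  # assume the user clicks whatever appears
print(Counter(history))   # the feed collapses toward a single topic
```

After a few dozen simulated clicks the feed collapses toward a single topic: predictability is the product.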

Clearly not. The way we develop and vet doctrine is thoroughly reactive. Fixating on past experience becomes a bigger problem in environments and conditions where humans aren’t the most effectively intelligent actors.

Humans Control…Don’t We?

Super AI is not only coming; it is here. Machines are more effectively intelligent than humans in areas such as experience-based gaming (chess, go), operations research and data science solutions (optimization problems), and maximizing utility functions as a rational actor (the decision-theoretic agent). Back to our initial question: how is ignoring AI agency a problem?
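That last item, the decision-theoretic agent, reduces to a few lines of code. A minimal sketch, with hypothetical actions, probabilities, and payoffs invented for illustration:

```python
def choose(actions, outcomes, utility):
    """Pick the action with the highest expected utility: the textbook
    decision-theoretic (rational-actor) model."""
    def eu(action):
        return sum(p * utility[result] for result, p in outcomes[action].items())
    return max(actions, key=eu)

# Hypothetical probabilities and payoffs, for illustration only
outcomes = {
    "jam":    {"success": 0.9, "failure": 0.1},
    "strike": {"success": 0.6, "failure": 0.4},
}
utility = {"success": 10, "failure": -5}
print(choose(list(outcomes), outcomes, utility))  # 'jam' (EU 8.5 vs 4.0)
```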

AI threatens human control, even in symbiotic machine-human systems wherein machines learn human preference structures and obey them. Stuart Russell asks an important question: is there any biological example of a symbiotic relationship in which the less intelligent being controls the more intelligent one? We can answer yes, if we distinguish between narrow intelligence and general intelligence.

According to Max Tegmark, narrow AI refers to an ability to accomplish a narrow set of goals, such as driving a car. General intelligence is an ability to accomplish any goal — learning. We can find examples of predators (orcas) and parasites (barnacles) that are more narrowly intelligent in terms of particular skills — hunting and infesting — than their generally more intelligent prey (blue whales) and hosts (crabs).

Therefore it’s not just super AI—what Tegmark refers to as general intelligence beyond human levels—that threatens human control. We have seen that relatively obscure advantages in learning can be remarkably resistant to control by “great” powers. Particularly when used together, Diplomatic-Informational-Military-Economic-Social (DIMES) advantages can create synergistic combined effects.

For instance, a superior ability to understand tribal morality (a Social power advantage), exploit disinformation (an Informational power advantage), and create encrypted wealth (an Economic power advantage) can render high-profile diplomacy and military preeminence ineffective. But under what conditions? Humans, with our neurally networked brains, prefer to think that we are singularly capable of anticipating those relevant conditions. However, we don’t understand how the most advanced AI we have created so far (deep neural networks) actually works. It follows that AI can cause unpredictable, unanticipated effects.

This struggle for theoretical understanding has two practical implications. First, we should prepare for the uncertain controllability of AI. This is not unlike accounting for human agency. Second, we should characterize the complex features of the operational IE, to include the impact of AI. Treating AI systems as autonomous actors among interconnected, networked systems can reveal emerging threats and opportunities, if we anticipate such causes and effects.

Recommendations

Given the substantial research being conducted on how to control AI, military doctrine needs to be organizationally broadened and future-oriented by:

  1. Placing military doctrine in a broader scope at lower organizational levels than the National Security Council, the level where authorities presumably can combine DIMES-wide effects. How? Aligning desired effects across integrated National Military, Defense, and Security Strategies is a start. Those documents largely omit the concept of effects that might otherwise connect priority actions (or, activities) and pillar sub-headings (or, objectives).
  2. Emphasizing future-casting and sense-making in military doctrine. Characterizing, anticipating and influencing dynamic changes in the operational IE is necessary for relevant operations. How? With doctrine that shapes more than military conditions. This effort will require close collaboration of civil-military expertise under proper authorities and permissions.

A place where military doctrine begins to do this is JP 3-0 and related doctrine on operations design and joint planning.

Designing and planning are about interpreting strategic priorities into desired end-states and missions that achieve supporting objectives and derived effects with tasks and activities. Interpreting is more than translating, and it is necessary when priorities are not specified, as they typically are not at the political level. Unfortunately, current joint military doctrine defines end-states narrowly as “military” end-states, even though military effects certainly are not limited to military contexts. Effects don’t respect lanes.

Nevertheless, strategists, planners, operators and analysts can think about and recommend how to create alternative futures by specifying ends (“Guidance” in the figure below) and shaping conditions via ways and means (“Operational Approach” in the figure below).

Figure 1. Assessment Interaction, JP 3-0 (II-11)

If we are not doing this, we are way behind competitors. Now, add not only “non-military” human actors, but also advanced machine-learning as agents. What happens to our strategy-making process? We have to make and test new assumptions about emerging possibilities. We can do this via courses of action that are red-teamed from multiple actors’ perspectives (more than “red”). This needs to be done using any available idea and resource, then assessed with respect to risks. Otherwise wargaming tends to become rehearsing pre-planned sequences.
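One way to picture red-teaming beyond a single “red”: score each course of action from every relevant actor’s perspective and assess the worst case. A minimal sketch; the actors, courses of action, and payoffs are invented purely for illustration:

```python
# Score each course of action (COA) from several actors' perspectives,
# not just "blue versus red". All names and payoffs are hypothetical.
perspectives = {
    "blue":    {"deter": 0.6, "deceive": 0.4, "strike": 0.2},
    "red":     {"deter": -0.2, "deceive": -0.5, "strike": -0.7},
    "ally":    {"deter": 0.5, "deceive": -0.1, "strike": -0.4},
    "neutral": {"deter": 0.1, "deceive": -0.3, "strike": -0.6},
}

for coa in ("deter", "deceive", "strike"):
    scores = {actor: payoffs[coa] for actor, payoffs in perspectives.items()}
    worst = min(scores.values())  # risk: the most damaging reaction
    print(coa, scores, f"worst-case {worst:+.1f}")
```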

Proactive doctrine that recognizes the reality of complex warfare can exploit the so-far human comparative advantage in strategic innovation. It’s notable that machines currently excel in experience-based learning. This is no small matter, as it includes generating new syntheses of discovered relationships. Humans, however, can intuit, deceive, somewhat control, and manufacture and destroy machines. In time, AI will be able to perform those cognitive, informational and physical functions as well. Out-thought is out-fought. We need proactive doctrine now.

Brig Gen (ret) Thomas Drohan is Director of the International Center for Security and Leadership, JMark Services Inc (securityandleadership.com). He formerly headed the Department of Military & Strategic Studies at the United States Air Force (USAF) Academy. He holds a PhD from Princeton University, an MA from the University of Hawaii, and a BS from the USAF Academy. Brig Gen Drohan’s publications include A New Strategy for Complex Warfare: Combined Effects in East Asia, and articles in journals such as Joint Force Quarterly and Defense Studies. His career includes combat rescue, airlift and anti-terrorism in East Asia, the Middle East, and Afghanistan. He is a Council on Foreign Relations Japan fellow and Reischauer Center for East Asian Studies scholar.

The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the United States Government.

Feature Image Source: Whale/ pixabay.com, Barnacles/ pixabay.com, AI/ istockphoto.com
