Autonomy: The Future of Aerial Combat

Editor's Note: The following is the first installment in a series addressing the future of autonomous aerial systems training and acquisition. It was originally written for the Air Force Blue Horizons CSAT Program.

By Nicholas J. Helms

In other words then, if a machine is expected to be infallible, it cannot also be intelligent.

Alan Turing, Automatic Computing Engine, 1945

Autonomous machines are characterized by the ability to make decisions without direct human input. However, the threshold for direct human input, and the demands humans place on machines, both shift over time. Autonomous air vehicles in a combat environment will be more useful if they can adapt effectively to changing human demands. These two expectations — autonomy and adaptation — apply to machines in the same way that they apply to human trainees. Therefore, the Department of Defense should test autonomous air vehicles in a manner similar to the way we train wingmen.

Autonomous air vehicles will be expected to iterate decisions over time and continue to develop after they are deployed to operations.  Thus, autonomy is a dynamic concept with long-term implications. Consequently, we require a method for testing autonomous air vehicles that incorporates a long-term mindset in the same way that we conceptualize the long-term development of a Weapons School Instructor or Red Flag Mission Commander. Effectively, testing autonomous systems matters because the future of air battle depends largely on a decision that determines how much machine autonomy is afforded to air vehicles.

The most popular methods of test and evaluation do not effectively serve systems that are expected to change rapidly. Until the proliferation of machine learning or quantum computing capabilities changes the status quo, we can expect autonomy to take a long time to develop. Systems engineering is the current method of weapons system development. As a test and evaluation tool, systems engineering guides the verification of deterministic requirements to field systems that demonstrate incremental progress inside a short-term budget cycle. Systems engineering's strength is breaking down a complicated design to verify physical parameters like speed, endurance, or electromagnetic signals. However, systems engineering will fail to verify complex evolutionary parameters like human behavior, which is the crux of autonomy.

Systems training represents a better way to test autonomous machines. Systems training would merge actions in developmental test, operational test, and operational warfighting. Human pilots learn through part-task training events and operational exercises, adapting as necessary in combat environments. Similarly, systems training would emphasize the capacity of a machine to learn and adapt. Systems training complements systems engineering with a long-term, non-deterministic approach to developing autonomous air vehicles that is more congruent with complex adaptive systems. Systems training facilitates a change in the way of thinking about future combat. Moreover, it requires an expectation that, like human wingmen, machine autonomy will sometimes fail. Systems training would trade off machine infallibility to gain machine adaptability. Once established as a complementary method, systems training affords strategic synergy between the defense industry and the Air Force.

 “We have from this moment until then to get ready [for future conflict].  And every week counts.” —General David Goldfein, Air Force Chief of Staff

While machine autonomy is not synonymous with unmanned vehicles, unmanned vehicles are dependent on machine autonomy to operate in a dynamic air combat environment. In other words, unmanned vehicle effectiveness depends on the machine’s capacity to emulate the intent of friendly air forces in a contested environment. Increased machine autonomy offers manned aircraft a number of operational advantages. However, the subsequent focus on unmanned autonomous air vehicles is a consequence of the unmanned systems’ dependence on autonomous capabilities and the unmanned systems’ performance advantages over manned aircraft. The Air Force warfighting community can adapt operations and tactics to better leverage unmanned autonomy.

Autonomous air vehicles that responsively accept human input will have a greater capacity for action than those air vehicles that do not. That is why autonomous capability is so important to unmanned air vehicles. By not choosing autonomous air vehicles, the Air Force gives up a performance advantage in the air domain. If the autonomous air vehicle is unmanned, it is generally capable of better aerodynamic performance. Robert Work, Deputy Secretary of Defense, said “the potential of [autonomous air vehicles] derive primarily from the performance advantages gained from removing a person from a platform, such as increased speed, maneuverability, and endurance, and from the ability to take increased risk with unmanned platforms.” As any Airman who has shot live air-to-air missiles can attest, subscale drones (missiles) offer energy-sustaining turn performance at high altitudes, which translates to survivability. The most important opportunity cost, however, involves operator thinking rather than physical performance capabilities. Choosing to invest in autonomous air vehicles offers Airmen an opportunity to think about operational art in a manner more compatible with warfare that is likely to include autonomous systems.

Airmen will be required to adapt to a rapidly evolving version of warfare that is envisioned to rely heavily on autonomous capabilities. The Air Force Chief of Staff reflected on the urgency of this fact during a speech at the 2017 Air Force Symposium. Robert Work’s paper “20YY: Preparing for War in the Robotic Age” likewise highlighted the importance of human thinking in a radical new environment: “the ‘winners’ will likely be those who best leverage the unique advantages of both machine and human intelligences.” Thus, Airmen’s complex adaptive brains must accommodate new schema faster than the enemy’s in order to succeed in future conflict. This may be more important in the field of acquisitions than anywhere else.

Complexity has increased development costs and delayed development schedules. Therefore, a systems training approach focused on iterative responsiveness will be of pivotal importance in acquiring an affordable, attainable autonomous air vehicle. Colonel Benjamin Drew also weighed in on complexity and responsiveness in acquisitions. He said, “[acquisitions] needs to tighten its own decision cycle to keep itself from being shocked into paralysis due to its increasingly glacial responsiveness to increasingly dynamic warfighter needs.” Furthermore, he said, “the answer to this volatility is not necessarily to stabilize the process inputs and perturbation (it’s beyond control), but to field solutions faster than the environment can change.”   Matt Clark, an autonomous air vehicle expert at the Air Force Research Laboratory, reiterated Colonel Drew’s point, “we need to test as fast as we change.” With a systems training approach to autonomous air vehicle test, the warfighter would be directly involved in teaching autonomy exactly where and when it serves the warfighter best.

A systems training approach to autonomous air vehicle test would demand habit change in acquisitions, writ large. The current performance measures for acquisitions reflect a process that is not agile enough to fulfill the intent of a systems training approach to autonomy. Tellingly, the Government Accountability Office (GAO) listed weapon system acquisition as a high-risk issue. In a 2014 report, GAO compared 80 programs against initial estimates and determined that budgets had inflated “nearly $448 billion with an average delay of 28 months in operating capability.” In a twist of autonomous irony, the Air Force is looking towards artificial intelligence as “the only way to navigate a stifling bureaucracy” and better streamline the 1,897-page Federal Acquisition Regulation. Senator John McCain, Chairman of the Senate Armed Services Committee, has called for more responsive military acquisition reform. High levels of military leadership seem receptive to adopting alternative approaches to technology development, and autonomous air vehicles are the perfect place to start.

A systems training approach that connects lab researchers, testers, program managers, and operators provides the Air Force with an opportunity to manage its own future towards autonomous air vehicle utilization. This is a different approach compared to the way we test today. The current approach, systems engineering, fulfills requirements towards relatively short-term comprehensive designs. As proposed, systems training would serve the long-term via an approach more congruent with complexity. Systems training would therefore not fit neatly into budget cycles. It demands culture change. Where systems engineering was born out of a requirement to handle industrial complexity, systems training is born out of a requirement to handle behavioral complexity.

Systems engineering, its methods and its techniques, served the success of two offset strategies.  In 1954, RAND’s Alexander Boldyreff advocated that systems engineering methods “should be applicable to the operation of any large organization or system which performs an essentially repetitive process, expressible in quantitative terms.” Similarly, systems training could scale to serve a repetitive process, but instead expressible in qualitative terms.

Quantitative and qualitative measures could be managed exclusively by the Air Force Research Laboratory. In 2012, Matt Bissonnette, author of No Easy Day under the pseudonym Mark Owen, told an audience of autonomous air vehicle enthusiasts that “an organization should not contract out that which it is best at.” He was talking in the context of Navy SEAL infiltration and small arms tactics, but his point translates to air combat knowledge. Airmen know air combat best. With a systems training approach to autonomous air vehicle test, Airmen can generate, store, and refine the data that describes the evolution of Air Force autonomy without having to initiate a requirements-driven process for a new capability. The Air Force has accomplished this kind of organic technological innovation before, first with the SUU-16 gun pod on early F-4C aircraft, and again with a 30mm gun on the AC-130W aircraft for counter-terrorism. If successful, the Air Force and defense industry could mutually benefit from focusing where each is most confident. Industry could focus on aircraft performance, flying qualities, and signature innovations while the Air Force adapts air vehicle behavior to human-machine operations and tactics. The opportunity awaits, and success depends on initiative.

Major Nicholas J. Helms is a graduate of the USAF Academy with a Bachelor of Science in Human Factors Engineering.  He is a distinguished graduate of the USAF Test Pilot School with over 2,000 hours piloting multiple aircraft, including the F-16, MQ-9, T-38C, and C-12J. He has flown missions in support of Operations Noble Eagle, Iraqi Freedom, and Enduring Freedom.

Disclaimer: The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. Government.
