The Role of the Human in Systems of Systems: Example of the French Future Combat Air System

Approximate Reading Time: 20 minutes
By David Pappalardo @DavPappa

Author note: This article was written in preparation for the International Council on Systems Engineering (INCOSE) Human Systems Integration Conference in Biarritz (FR) on September 12, 2019 (HSI2019). The INCOSE mission is to address complex societal and technical challenges by enabling, promoting and advancing Systems Engineering and systems approaches. The HSI conference is a scientific and industrial event meant to explore how Human-Centered Design combined with Systems Engineering contributes to improving Human System Integration.

France is committed to designing a Future Combat Air System (FCAS) that relies on an architecture of networks, meshing manned and unmanned platforms within a System of Systems and fitting fully into the manned-unmanned teaming paradigm. Artificial Intelligence (AI), Big Data analytics, cloud computing and cybersecurity are the four digital technologies at the heart of the digital transformation of our Air Force. All of this raises the question of the role of the human in such a complex System of Systems.

Ultimately, the French Air Force transformation aims to create a truly cognitive air combat management system by combining the calculating and storage capacity of computers with the adaptive ability of human intelligence. The FCAS should select whichever human-machine relationship is best suited to the circumstances and to combat performance. Connectivity, AI, and automation should foster stronger collaborative combat, increase situational awareness, and assist the aircrew in contextualized decision-making. These technologies will allow airmen to concentrate on their main tactical and operational combat tasks, which gives rise to a counter-intuitive paradox: as AI advances, it relieves man of the simplest analytic tasks, thereby augmenting his capabilities and enabling him to reach his full potential. Because airmen are able to understand both context and higher-level issues, they will always bring good sense, intuition, and the ability to adapt when faced with the unknown. In selecting the optimal relationship, the developers of the FCAS should take into account the risks underpinned by AI and autonomous functions. To do so, trust, transparency and explicability will be paramount.

A Detour into Science Fiction: Black Mirror

Before digging into the issue, a detour into science fiction can be both refreshing and full of useful insights. After all, as Arthur C. Clarke’s second law states, “the only way of discovering the limits of the possible is to venture a little way past them into the impossible.” Science fiction sheds useful light on possible futures: on what we could do, and on where we should never venture.

Let’s consider, for example, the outstanding TV show Black Mirror, which examines the unanticipated consequences of new technologies through dystopian, satirical and dark lenses. For our purposes, two episodes of the third season particularly struck me (HUGE SPOILER ALERT):

  • First, Hated in the Nation is a murder mystery in which detectives try to solve the inexplicable deaths of people targeted via the hashtag #DeathTo on social media. The targeted individuals are killed by an Autonomous Drone Insect (ADI) using a facial recognition system. Originally, these artificial substitute bees were designed to counteract a sudden colony collapse disorder and pollinate the United Kingdom’s (UK) crops in a crisis. Instead, the ADIs were co-opted to terrorize its citizens. Not only does the episode offer insights into the weaponization of social media, it also illustrates the need for cybersecurity in automated systems and the critical risk of man’s complacency towards the machine.
Figure 2: Black Mirror, Hated in the Nation (Netflix)
  • Second, Men Against Fire tells the story of soldiers exterminating mutants known as “roaches.” Each soldier is enhanced by a government-issued neural implant that fuses instant data with their own senses via augmented reality. It turns out that “roaches” are actually humans and that the government uses the implants to alter soldiers’ perception by dehumanizing the targets, thereby eliminating the risk of remorse. In other words, technology is used to kill the “Naked Soldier” theorized by Michael Walzer in Just and Unjust Wars. As explained in a line from a psychologist in the show: “It’s a lot easier to pull the trigger when you are aiming at the bogeyman, hmm?”
Figure 3: Black Mirror, Men Against Fire (Netflix)

These two dystopian episodes highlight three major pitfalls: cyber vulnerability, complacency towards machines, and the risk of soulless killers. In brief, they best epitomize the strongest ethical objection to autonomous weapons, brilliantly explained by the American scholar Paul Scharre in his book Army of None: “As long as war exists, as long as there is human suffering, someone should suffer the moral pain of those decisions. This is not about autonomous targeting per se, but rather how it changes humans’ relationship with violence and how they feel about killing as a result. For it is humans who kill in war, whether from a distance or up close and personal. War is a human failing. Autonomous targeting would change [humans’] relationship with killing in ways that may be good and may be bad. But it may be too much to ask technology to save us from ourselves.”

Assess the Appropriate Level of Autonomy

To overcome these pitfalls, the French Air Force considers that the integration of the human within the FCAS will be critical and that the endeavor should avoid seeking autonomy for the sake of autonomy. For Paul Scharre, “it is meaningless to refer to a system as ‘autonomous’ without referring to the specific task that is being automated. For any given task, there are degrees of autonomy.” Autonomy can be further broken down along three dimensions: the type of task the machine is performing; the relationship of the human to the machine when performing that task; and the sophistication of the machine’s decision-making when performing the task.

Critically, the crux of the issue lies in the second dimension, that is, Human System Integration: “How intelligent a system is and which tasks it performs autonomously are different dimensions. It is freedom, not intelligence, that defines an autonomous weapon.” In fact, the human-machine relationship is built around three levels of autonomy: semi-autonomous (human-in-the-loop); human-supervised autonomy (human-on-the-loop); and total autonomy (human-out-of-the-loop). It must be possible to consider complete autonomy for certain functions performed in full cooperation (cooperative autonomy for navigation and trajectory management) while preserving human judgment in the decision-making process (man-on-the-loop); the two are not mutually exclusive.
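
To make the task-specific nature of autonomy concrete, here is a minimal Python sketch (hypothetical task names and assignments, not an actual FCAS design) that models the human-machine relationship as a property of each task rather than of the system as a whole:

```python
from dataclasses import dataclass
from enum import Enum


class AutonomyLevel(Enum):
    """Human-machine relationship for a given task."""
    HUMAN_IN_THE_LOOP = "semi-autonomous"       # machine proposes, human approves
    HUMAN_ON_THE_LOOP = "human-supervised"      # machine acts, human can veto
    HUMAN_OUT_OF_THE_LOOP = "fully autonomous"  # machine acts without oversight


@dataclass
class Task:
    name: str
    autonomy: AutonomyLevel


# Autonomy is assigned per task, not per system: the same platform can be
# fully autonomous for trajectory management yet keep the human in the loop
# for any decision involving lethal force.
mission_tasks = [
    Task("navigation", AutonomyLevel.HUMAN_OUT_OF_THE_LOOP),
    Task("sensor_fusion", AutonomyLevel.HUMAN_ON_THE_LOOP),
    Task("weapon_release", AutonomyLevel.HUMAN_IN_THE_LOOP),
]

for task in mission_tasks:
    print(f"{task.name}: {task.autonomy.value}")
```

On this reading, “how autonomous is the FCAS?” is an ill-posed question: autonomy is configured task by task, and a weapon-release task can remain semi-autonomous while navigation is fully delegated.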

Ultimately, the main criteria for assessing the appropriate level of autonomy will be the performance of collaborative combat and respect for strict ethical rules.

*     *     *

Enhance Collaborative Combat Through Connectivity and Automation

Future Air Power will be composed of connected, manned, and unmanned air platforms, enhanced by different sensors and effectors. They will be part of an open, scalable system architecture that enables the inclusion of future platforms and new technologies. In such a complex and heterogeneous organization, effective and balanced Human System Integration will be key to success. Critically, all of these elements will foster enhanced collaborative combat through connectivity and automation.

Figure 4: Created by the Author. Bottom-right corner: concept of new generation cockpit © Dassault (Man Machine Teaming Study)

The Need for Warfare Analytics

First, the system of systems design will rely on a level of data exchange never achieved before. Data will therefore be the basis of our digital advantage and the Air Combat Cloud will be a genuine enabler of this approach. It will require networking of all players involved as well as judiciously dispersed data handling capabilities.

From intelligence to the planning and conduct of operations, the insertion of AI must respond to the problem of the information deluge. In particular, the amount of information available, the multitude of players involved and the general improvement in performance all pose challenges to the Command, Control, Communications, Computers (C4) architecture, integrated with Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR) capabilities. Consequently, the French Air Force is moving towards the digitization of its C4ISTAR structures, supported by the technologies associated with AI. Management of the combat space will need ever more efficient real-time coordination and sharing of information: the notion of the Common Relevant Operational Picture (CROP). To succeed in operations in distant theaters and over long periods, forces will need digital assets compatible with real-time transmission of information so that the entire command chain can operate at the required tempo.

In short, facing the problem of infobesity (information overload), the real question concerns the data rather than the algorithms. Digital technologies have become essential for analyzing vast quantities of data, consolidating the information extracted, and then distributing the knowledge acquired in order to decide and act with clarity and efficiency.
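
As a toy illustration of the CROP idea, the sketch below (hypothetical data model, illustrative fusion logic only) merges distributed sensor reports into a single shared picture:

```python
from collections import defaultdict
from statistics import mean


def fuse_reports(reports):
    """Toy fusion into a Common Relevant Operational Picture (CROP):
    average the reported positions per track and keep the most recent
    classification (illustrative logic, not a real fusion algorithm)."""
    tracks = defaultdict(list)
    for report in reports:
        tracks[report["track_id"]].append(report)

    crop = {}
    for track_id, observations in tracks.items():
        latest = max(observations, key=lambda r: r["time"])
        crop[track_id] = {
            "lat": mean(r["lat"] for r in observations),
            "lon": mean(r["lon"] for r in observations),
            "classification": latest["classification"],
            "contributors": sorted({r["sensor"] for r in observations}),
        }
    return crop


reports = [
    {"track_id": "T1", "sensor": "rafale_radar", "time": 1,
     "lat": 48.10, "lon": 2.30, "classification": "unknown"},
    {"track_id": "T1", "sensor": "awacs", "time": 2,
     "lat": 48.12, "lon": 2.31, "classification": "hostile"},
]
print(fuse_reports(reports))
```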

Better Connectivity Between Platforms, Sensors, and Command and Control Structures

Second, connectivity is a main line of effort for the French Air Force to concentrate and distribute effects, enhance the collaborative fight and increase combat performance. The networking of different airborne weapons systems centered on the Rafale and the New Generation Fighter will make new modes of collaborative combat possible, which will in turn increase the intrinsic fighting strength of the platforms. We will switch from platforms to “nodes within a network,” which will compensate for the weaknesses of platforms considered in isolation.

There again, connected, collaborative air combat will also necessitate piloting ever more complex systems in a cyber-secure environment. AI must allow the creation of a genuine virtual cognitive assistant for the aircrew, whose twofold aim is to facilitate decision-making and the piloting of complex systems.

Virtual Cognitive Assistant and Pilot Directed Interface

In a national study called Man Machine Teaming, Dassault and Thales are striving to reshape and expand Human System Integration into a cognitive air system, in which man and machine communicate more naturally for greater efficiency. In the future, aircraft will have AI on board to assist the aircrew in their understanding of the situation, the conduct of the mission and contextualized decision-making. AI could be used as a “virtual cognitive assistant” interacting intuitively with the aircrew, without the excessive chatter that can divert attentional resources from task performance. Referring again to pop culture, it could be similar to Jarvis in Iron Man, assisting Tony Stark in his superhero job.

The virtual cognitive assistant has three main objectives (a short sketch of the first follows the list):

  • Reduce the quantity of projected information on tactical visualization screens.
  • Offer sequential actions based on context and prior mission history. The virtual cognitive assistant will be proactive, by suggesting changes to the operational states of objects, and reactive, by continually choosing the best function or the best resource to obtain the desired change of state.
  • Recommend system reconfiguration in line with context and mission evolution. This means capabilities to adapt displays and alerts to the tactical situation and to the cognitive workload of pilots, assist the reconfiguration of systems following breakdowns and faults, improve prediction of the chances of success, adapt the trajectory as the tactical scenario develops, and more.
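
As a rough sketch of the first objective, the following fragment (all names, scores and thresholds are hypothetical) shows how an assistant might shrink the displayed tactical picture as the pilot’s estimated workload rises:

```python
from dataclasses import dataclass


@dataclass
class TacticalObject:
    label: str
    threat_score: float  # 0.0 (benign) to 1.0 (critical)


def select_displayed_objects(objects, pilot_workload, max_items=10):
    """Filter the tactical picture: the higher the pilot's estimated
    workload (0.0 to 1.0), the fewer and more critical the objects shown."""
    # Shrink the display budget as workload grows (illustrative heuristic).
    budget = max(1, int(max_items * (1.0 - pilot_workload)))
    ranked = sorted(objects, key=lambda o: o.threat_score, reverse=True)
    return ranked[:budget]


picture = [
    TacticalObject("SAM site", 0.9),
    TacticalObject("friendly tanker", 0.1),
    TacticalObject("unknown contact", 0.6),
    TacticalObject("commercial traffic", 0.05),
]

# Under high workload (0.8), only the most threatening objects survive.
for obj in select_displayed_objects(picture, pilot_workload=0.8):
    print(obj.label)
```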

To do so, we are studying the use of pilot-monitoring technologies in order to evaluate the level of collaboration between human and system. In the near future, pilots could be equipped with various sensors (electroencephalogram, eye-trackers, electrocardiogram) whose physiological signals would be analyzed in real time in order to determine the efficiency of Human System Integration, optimize the interaction between operators and their control systems, and develop intelligent and adaptive cockpits.

For example, the system would be able to detect the pilot’s cognitive state (attentional tunneling, cognitive overload) and thus launch cognitive countermeasures or offload certain tasks in the most critical situations. In this project, understanding and continuous learning about the AI at the heart of this cognitive assistant are essential in order to retain control of the assistant’s decisions and to keep those decisions automatically updated.
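
A minimal sketch of this monitoring loop is shown below; the thresholds and signal names are placeholders that no real system would ship without validation:

```python
def estimate_cognitive_state(heart_rate_bpm, gaze_dwell_s, eeg_engagement):
    """Crude illustrative classifier of the pilot's cognitive state from
    physiological signals (all thresholds are placeholders)."""
    if gaze_dwell_s > 4.0 and eeg_engagement > 0.8:
        return "attentional_tunneling"  # prolonged fixation on one display area
    if heart_rate_bpm > 110 and eeg_engagement > 0.9:
        return "cognitive_overload"
    return "nominal"


def apply_countermeasure(state):
    """Map a detected state to an illustrative cognitive countermeasure."""
    actions = {
        "attentional_tunneling": "flash the neglected display area to break fixation",
        "cognitive_overload": "offload radio management to the assistant",
        "nominal": "no action",
    }
    return actions[state]


state = estimate_cognitive_state(heart_rate_bpm=118, gaze_dwell_s=2.1,
                                 eeg_engagement=0.95)
print(state, "->", apply_countermeasure(state))
```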

Management of Complex Systems and Autonomy

Last, connected, collaborative air combat will go hand in hand with strengthened partnerships between human operators (whether on board or not) and the autonomous functions within a system of systems. This partnership must improve the effectiveness of the mission beyond what a traditional manned aircraft could achieve by itself. To that end, the virtual cognitive assistant must be in a position to respond to the demands placed upon it, to anticipate needs and to act autonomously, though in coordination with the overall system.

The arrival of cognitive virtual assistants, the collaboration within a system of systems that goes with it, and manned-unmanned teaming will, in short, give air strategy a form of ubiquity by recreating the mass essential to opening the spatial and temporal doors to air superiority in the face of enemy defenses. To quote a NATO STO study on human-autonomy teaming: “A paradigm shift is required to move from using autonomous agents as tools to treating them as teammates.”

Trust, Transparency and Explicability

Considering robots and virtual assistants as teammates (and no longer as tools) is neither natural nor a given; trust, transparency and explicability will be essential for their adoption and integration. All three will be paramount to overcoming the Human System Integration risks underpinned by AI and autonomous functions. Let’s dig into some of these risks.

Risks

The first risk is confidence bias, which can trigger a degradation of expertise. Complacency towards the machine and/or a lack of intelligibility of the solution given by AI can eventually push the human operator out of the decision-making process, particularly under time pressure. Yet we cannot afford to lose human expertise in the future. As Paul Scharre writes, “delegating a task to a machine means giving it power. It entails putting more trust in the machine, trust that may not be warranted if cybersecurity cannot be guaranteed.” For that reason, we need to keep man somewhere in or on the loop and avoid at all costs letting Human System Integration gains come at the expense of overall performance in a degraded situation. Remember Black Mirror and Hated in the Nation, when the people in charge were hacked and lost control of the Autonomous Drone Insects!

The second risk concerns the opacity of complex systems, which creates a lack of ex post transparency. Systems are becoming more complex and therefore more opaque, including for their designers. This is especially the case for deep learning systems, which do not operate by following a script in a linear fashion but work at a high level of data abstraction thanks to layered network architectures.

Last, the opacity of these systems goes together with a lack of predictability and a lack of ex ante transparency, which induces a natural human distrust. Goal misalignment is therefore not impossible. If a human operator cannot foresee the machine’s outputs, he will eventually refuse to use it!

Needs

In the face of these risks, effective Human System Integration will demand building a level of trust and must therefore seek transparency, intelligibility and explicability.

Manned-unmanned teaming is only possible if both teammates know what the other is doing and why; automation transparency is the prerequisite. To be transparent, “the automation must bridge this gap by presenting information in ways that fit the operator’s mental model.” In a manned-unmanned teaming situation, the machine may need the capability of explaining its decisions to a human.

Yet I would like to point out a fundamental quandary here: presenting transparent information in real time could be at odds with the workload savings that are frequently the goal of including autonomy in the first place. We should therefore focus on transparency before and after the action: while traceability and intelligibility should ensure the ex post transparency of autonomous systems, explicability should, in turn, maintain both ex post and ex ante transparency, the guarantee of trustworthy Human System Integration.
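
As an illustration of such ex post traceability, here is a minimal sketch (a hypothetical structure, not any actual FCAS design) of an append-only decision log that an after-action review could replay:

```python
import json
import time


class DecisionLog:
    """Append-only log for ex post transparency: every autonomous decision
    is recorded with its inputs and a human-readable rationale."""

    def __init__(self):
        self.entries = []

    def record(self, task, inputs, decision, rationale):
        self.entries.append({
            "timestamp": time.time(),
            "task": task,
            "inputs": inputs,
            "decision": decision,
            "rationale": rationale,
        })

    def replay(self):
        # After-action review: reconstruct why each decision was made.
        for entry in self.entries:
            print(json.dumps(entry, indent=2))


log = DecisionLog()
log.record(
    task="trajectory_management",
    inputs={"threat_bearing_deg": 45, "fuel_kg": 1200},
    decision="reroute_south",
    rationale="avoid SAM engagement zone while preserving fuel margin",
)
log.replay()
```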

Finally, trust will not appear out of nowhere. General Deptula recently wrote a paper in which he emphasized the need for training: “In many ways, building autonomous unmanned mission partners is like training young Airmen. It is crucial to ensure they execute their mission functions in ways that correspond with established methods and are able to plug and play into the broader enterprise in a positive fashion.”

Conclusion

In conclusion, considering robots or virtual teammates as mirror images of what we should be is an error. The manner in which we program them and the information we ‘teach’ them will reflect our prejudices, our cognitive biases and all that ought to change in our societies. Robots will not create a perfect world for us, and AI and robots will not create a perfect world for the French Air Force either. They bring us many promising things, but they do not herald the end of the airman within the FCAS.

Given the stakes involved, the rationale of the air combat system must therefore be shaped by three essential conditions:

  • That the association of man with machine benefit from the accuracy and speed of automation to multiply the agility and creativity of human intelligence.
  • That AI not abolish human responsibility nor remove man from the decision process when committing lethal force.
  • That engineers’ design choices not replace the judgment of airmen in the decision-making process; man must never be subjected to the machine but must use it to improve his own performance.

To this end, the French Air Force must develop a voluntarist strategy, with neither false modesty nor excessive hope, but always with responsibility.

David Pappalardo is a French Air Force officer in charge of combat aviation in the capability branch of the French Air Staff. A multirole Rafale pilot, he is the former commander of the 2/30 Fighter Squadron “Normandie-Niémen” and has been involved in several operations over Africa, Afghanistan, and the Levant since 2007. He is a graduate of the French Air Force Academy and a distinguished graduate of the US Air Command and Staff College.

The views expressed are those of the author and do not necessarily reflect the official policy or position of the French Air Force, the Ministère Des Armées, or the French Government.

Feature Image: Top-left corner: artist view of AI; Top-right corner: mockup of a Remote carrier (Airbus); Bottom-left corner:  Next Generation Weapon System (Airbus); Bottom-right: New Generation Cockpit (Dassault); Center: mockup of New Generation Fighter (Dassault).
