User-Centered Design is Critical to the Success of Multi-Domain Operations

By: Dr. Kristen K. Liggett and Dr. Gina F. Thomas
Estimated Read Time: 11 minutes

Multi-Domain Operations (MDO) is the integration of capabilities across multiple domains (such as air, space, and cyberspace) to achieve desired operational effects.  One of the USAF’s primary goals is to conduct multi-domain operations in a way that increases warfighting capabilities.  To be successful, single-domain effects that are currently stove-piped must be planned and executed in an integrated fashion; doing so can increase USAF capability exponentially.  Future USAF multi-domain command and control (MDC2) operators must understand the needs of their original domains of expertise as well as how effects can be achieved across unfamiliar domains in order to make effective use of the variety of instruments of power that MDO offers.

As the USAF strives to achieve efficient and effective MDO, it will be imperative to accurately assess the requirements for enabling current and emerging capabilities.  The Air Force’s 2018 MDC2 Implementation Plan defines three lines of effort (Command and Control Operating Concepts, Support Structures for Command and Control, and Advanced Technology), the last of which specifically calls for applying advanced technology as a primary means to support implementation of MDC2.  However, successful MDO cannot be based solely on hardware and algorithms; the human’s role is critical.  The Advanced Technology line of effort specifically states both the need for technologies that allow MDC2 operators to establish and maintain situation awareness and the need for tools that effectively support decision-making.

In order to take advantage of the significant advances in autonomous systems, artificial intelligence, machine learning, big data analytics, and visualizations that are useful for providing MDO capabilities, we must increasingly rely on human factors to enable effective use of technology through optimized human-machine teaming.  A key requirement for effective MDO is ensuring that MDC2 operators have the information they need when they need it.  Determining those information requirements, and how to present them most effectively, requires a user-centered design process.

User-centered design (UCD) is not a new concept.  The Human Factors Engineering community has long advocated for user-centered design to ensure that processes, workspaces, and tools are designed in ways that allow humans to effectively and efficiently achieve their goals.  The Handbook of Human Factors was originally published in 1987, and “The Psychology of Everyday Things” (more recently republished as “The Design of Everyday Things”) was written to introduce the concept to a wider audience and was one of the first books to advocate user-centered design.  More recently, the International Organization for Standardization established a standard on human-centered design for interactive systems (ISO 9241-210) that focuses on ways in which both hardware and software components of interactive systems can enhance human-system interaction.  Although not new, the UCD concept is occasionally “rediscovered” and, at various times, has been rebranded as user-driven development, UX (user experience), human-centered design, human factors design, cognitive systems engineering, and, most recently, design thinking.

UCD is the cornerstone of the human factors engineering discipline and has been used effectively in many domains, including designs of work support systems for aircraft cockpits, unmanned aerial vehicle control stations, automotive dashboards, nuclear power plant control rooms, space vehicle controls, hospital medical record systems, and many more safety-critical applications. MDO technologies can also greatly benefit from the UCD process.

At the most basic level, UCD is a design approach that definitively places the user at the center of all design activities.  According to William Hudson of the Interaction Design Foundation, UCD is “an iterative design process in which designers focus on the users and their needs in each phase of the design process.  UCD calls for involving users throughout the design process via a variety of research and design techniques so as to create highly usable and accessible products for them” (see Figure 1).  The process starts with a deep understanding of the future tool’s users, their needs for performing their work, and their constraints and limitations from both a work perspective and a human capabilities perspective.  This analysis feeds an iterative design and evaluation process that again requires working with users to obtain feedback.  Keeping users involved throughout the design process helps ensure that the final product will be both useful and usable.

Figure 1. User-Centered Design Process

Myths about Design

Often tools are built based on factors other than user needs, such as when developers try to find new ways to use existing technology or to exploit a new and innovative technology.  Other times, user needs drive design, but the designs are not based on a thorough evaluation of the users’ processes and goals.  While these types of development are not useless, they generally do not lead to the most effective tools and often lead to tools that get shelved, either because they don’t fit into the users’ work processes or because they provide a service that the users don’t really require.  Three common “myths” about how to provide capability are often employed by developers as excuses for circumventing the user-centered design process:

  • Design Myth #1: Just ask the users what they want and give it to them.
  • Design Myth #2: Give users access to everything they could possibly need and they will find the information they need when they need it.
  • Design Myth #3: Allow users to design their own interfaces by making everything customizable and letting them choose how they want their interfaces to look and behave.

There is solid empirical evidence that indicates that these “myths” are not only inaccurate, but can even be dangerous if employed in the design of critical systems.  We address them one by one below, along with some comments on how they relate to MDO.

Design Myth #1 – Give the Users What They Say They Want

While it may be true that users know what they want, that knowledge is often limited by experience (the constraints they currently work within) and by a lack of knowledge of what is possible.  It has been said that if Henry Ford had asked people what they wanted, they would have said ‘faster horses.’  While there is some debate as to whether this is an actual quote from Ford, it illustrates the fact that getting user input involves more than just asking people what they want.  The more important information for designers to obtain when users are excited about telling them what they want (and they will be!) is an understanding of why they are asking for specific things.  It is up to designers to probe users for goals, constraints, and motivations.  Yes, faster horses may be the users’ interpretation of a requirement, but the underlying reason for this request is so they can get from point A to point B faster.

As shown in Figure 1, the user-centered design process, or human factors design process (HFDP), begins with analysis.  Effective analysis is crucial to the success of the design process because it provides a foundation to support all subsequent design activities.  The Analysis Phase establishes the relationship between the design team and the end-users of the product being developed.  The design team employs human factors methods such as unstructured and structured interviews, observations, and specific techniques such as goal-directed task analyses to elicit information about the work and work context.  Analysis techniques such as work-flow diagramming are used to validate the information and identify gaps in understanding for iterative knowledge elicitation.  These activities supply the design team with a valuable understanding of the work domain that includes the overall goal of the work; the tasks currently done to accomplish it; the objectives, order, and dependencies of those tasks; information requirements; etc.  The analysis enhances the team’s knowledge of stakeholders; the tasks, information, and decisions to be supported; sources of information; gaps in current processes and information; constraints of time and environment; and the objectives of the work.  Using this knowledge, designers can provide tools that allow users to perform work in the best possible way with current technology (sometimes even generating requirements that expand current technology), rather than tools that merely offer marginal improvements to work processes that have been defined and limited by the constraints of past tools and technologies (i.e., faster horses).
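
To make the analysis outputs concrete, the hypothetical Python sketch below shows one way a fragment of a goal-directed task analysis might be captured as data: a hierarchy of goals, the decisions under each goal, and the information each decision requires.  All class names, goals, and information items are invented for illustration; they are not drawn from an actual MDC2 analysis.

```python
from dataclasses import dataclass, field

# Hypothetical representation of goal-directed task analysis (GDTA) output:
# goals contain decisions, and each decision lists the information needed
# to make it well.  Names and content below are illustrative only.
@dataclass
class Decision:
    question: str                 # decision the operator must make
    information_needs: list[str]  # data required to make it effectively

@dataclass
class Goal:
    name: str
    decisions: list[Decision] = field(default_factory=list)
    subgoals: list["Goal"] = field(default_factory=list)

# Invented fragment for an MDC2 planning goal.
plan_effects = Goal(
    name="Integrate effects across domains",
    decisions=[
        Decision(
            question="Which domain can achieve the desired effect fastest?",
            information_needs=["asset availability", "timing constraints",
                               "rules of engagement"],
        )
    ],
)

def all_information_needs(goal: Goal) -> set[str]:
    """Roll up every information requirement under a goal."""
    needs = {n for d in goal.decisions for n in d.information_needs}
    for sub in goal.subgoals:
        needs |= all_information_needs(sub)
    return needs

print(sorted(all_information_needs(plan_effects)))
```

A structure like this is useful precisely because the rolled-up information requirements, not the users’ stated wish list, are what drive the subsequent design activities.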

Since Multi-Domain Operations Centers (MDOCs) have not yet been established, design teams cannot go directly to such centers to interview users about their needs.  However, Multi-Domain Operations are being planned and conducted in a limited fashion in various Air Operations Centers (AOCs).  Access to these users provides a good starting point for the analysis phase.  Eliciting knowledge from the 624th Cyber Operations Center and the National Space Defense Center is also important for learning about cyber and space operations and for leveraging effective work processes and lessons learned from these two domains.  When there are unknowns, assumptions must be made.  For instance, it is unknown whether there will be cyber operators, cyber liaison officers, cyber commanders, etc. in the MDOC of the future, but assumptions can be made about how an MDOC would best be organized (a best-case scenario), and these assumptions might drive policy or be proven wrong and require reanalysis.  Assumptions must be revisited often, both to ensure that they remain sound and to update them as new information comes to light.

Design Myth #2 – Give Users Access to Everything and They Will Find What They Need

This myth stems from the frequent requirement for users to go to a variety of sources to obtain needed information.  Designing a system that collects and stores all of this information is an incremental improvement, but that capability alone certainly does not provide an optimal solution.  Many dashboards are designed according to this myth, and they are frequently ineffective for particular users because of the large amount of time and cognitive effort required to sort through and filter the information.  Often these complex designs are created under the misguided notion that training or documentation will allow users to bridge the gap between the excessive amount of information presented and their ability to process it.

The most challenging step in the HFDP is the conversion of domain knowledge obtained during the Analysis Phase into initial design concepts (represented by the arrow from the Analysis Phase to the Iterative Design and Testing Phases in Figure 1).  This step requires integrating the information gathered in the Analysis Phase with foundational empirical knowledge of human perception (vision theory, color theory, etc.) and cognition (encoding theory, memory theory, information processing theory, multiple resource theory, attention theory, etc.), guided by human factors design principles that represent decades of empirical research into the most effective ways to display information for different uses (see Figure 1).  This foundational knowledge, and experience in applying it to design, prevents the creation of complex designs that overtax users’ perception and cognition.  Design teams must also determine the sources of data required to support the interfaces and visualizations and consider methods of effectively accessing those data.  Through these activities, a design team can provide an effective tool that guides users to the information they need when they need it, with minimal need for training and documentation.

Many people talk about the need for a “common operational picture” to be displayed in an MDOC to provide “situation awareness” so everyone can understand everything going on in the battle.  This idea falls under Design Myth #2 because it reflects a “one size fits all” mentality.  How can one picture show everyone everything that is going on in the way each person needs to see it (or not see it) in order to understand the situation and make decisions that support the mission effectively?  We propose a realistic alternative, “contextualized operator perspectives,” which provides tailored information for various users, all drawing from a common data source (Figure 2; a minimal sketch of the idea follows the figure).

Figure 2. Contextualized Operator Perspective Concept
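
To make the concept concrete, here is a minimal, hypothetical Python sketch of contextualized operator perspectives: several role-specific views derived from one shared data source rather than a single one-size-fits-all picture.  All names, fields, and filtering rules are invented for illustration and do not represent an actual MDOC design.

```python
# Hypothetical sketch: role-specific views over a common data source.
from dataclasses import dataclass

@dataclass
class Event:
    domain: str    # e.g., "air", "space", "cyber" (invented taxonomy)
    asset: str
    status: str
    priority: int  # 1 = highest

# One shared data source that all perspectives draw from.
COMMON_DATA = [
    Event("air", "tanker-12", "on-station", 2),
    Event("cyber", "router-7", "degraded", 1),
    Event("space", "sat-3", "nominal", 3),
]

def air_planner_view(events: list[Event]) -> list[str]:
    """Air planner: high-priority items, air domain first, with
    cross-domain context retained."""
    ordered = sorted(events, key=lambda e: (e.domain != "air", e.priority))
    return [f"{e.asset}: {e.status}" for e in ordered if e.priority <= 2]

def cyber_operator_view(events: list[Event]) -> list[str]:
    """Cyber operator: anything degraded, regardless of domain."""
    return [f"[{e.domain}] {e.asset}" for e in events if e.status == "degraded"]

# Each role sees a tailored perspective of the same underlying data.
print(air_planner_view(COMMON_DATA))
print(cyber_operator_view(COMMON_DATA))
```

The point of the sketch is architectural: the data is common, but what each operator sees is shaped by that operator’s goals and information requirements, which is exactly what the analysis phase is meant to uncover.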

Design Myth #3 – Make Everything Customizable and Let the User Design

One problem with this notion is that users typically don’t have the time or ability to customize their interfaces.  More importantly, multiple studies have illustrated that there is often a disconnect between the configurations people say they like best and those with which they perform most effectively.  While some features of interfaces can, and even should, remain flexible, many interface features are optimal only when their designs and configurations are driven primarily by workflow needs and goals and are fitted to human perceptual and cognitive capabilities.  As indicated above, the conversion of ‘work needs’ to ‘work aids’ requires both a deep understanding of the work and a foundational knowledge of human perceptual and cognitive processes, along with knowledge of human factors design principles.  Users cannot be expected to have this foundational knowledge and should not be made responsible for design.

On the other hand, users do have a deeper understanding of the work than a designer can obtain, which is why interaction with end-users is critical for refining initial designs during the Iterative Design and Testing Phase shown in Figure 1.  During this phase, products are tested, refined, and tested again as needed to ensure maximum utility and usability.  The thoroughness of the analysis and the skill, experience, and knowledge of the designer will affect the number of iterations required.  Later evaluations should include user-in-the-loop testing with operationally representative scenarios to verify that the design is effective.  User involvement has the added advantage of ensuring user buy-in.  Users often end up preferring what is best for them when they feel they have had an adequate amount of input into the design process.

The first training class for the USAF’s MDC2 career field (Air Force Specialty Code 13O) started this year.  The Airmen who have been selected come from different operational backgrounds and are between the 9- and 12-year points in their careers.  While the 13O career field builds a cadre of experts, C2 operators in AOCs will still be required to assist in on-the-job training and to aid in the transition to a more complete integration of air, space, and cyber operations.  Although newly trained 13O professionals will have knowledge of all of the warfighting domains, their tactical depth of knowledge will still be strongest in their domain of original expertise, so customizing their tools and interfaces effectively would be particularly challenging, as it would require them to make use of unfamiliar information and capabilities.  It is one thing to customize the look and feel of an interface; it is something else entirely to understand what information is available from sources that have not previously been accessible.

Conclusions

Historically, traditional design approaches have often resulted in systems that either fail to provide expected performance improvements or are not used at all because they add to rather than support work, as evidenced by the multitude of tools currently being developed to provide a particular capability that have been found to be “onerous to install and operate.”[i]  In part, this failure to design using the HFDP has been due to the time-consuming nature of “getting it right.”  However, when the HFDP is employed skillfully by developers with design experience who understand human capabilities and limitations as well as basic human factors design principles, the resulting designs are effective and intuitive and enjoy a high degree of user acceptance.  These designs support work flow without adding unwanted workload.  These long-term benefits certainly outweigh the up-front costs.  Research shows that early consideration of work processes reduces situations in which users encounter unanticipated requirements at or after development, requiring high-cost design adaptations or, worse yet, retrofitting or shelving of developed systems.

Looking toward Multi-Domain Operations, the challenges lie primarily in operators’ abilities to understand information from all relevant domains in ways that support effective operations.  This new way of operating will require a human factors-aided paradigm shift away from stove-piped thinking and away from incremental changes that slowly add “pieces” of other domains to a single domain, activities that represent merely evolutionary steps toward MDO.  Instead, we need revolutionary changes in the way we perform C2 activities in order to harness the power of MDO.  UCD can support a top-down analysis of the work that MDC2 operators must accomplish for successful MDO and can determine the information operators require to reach the goals and objectives of MDO.  The success of MDO lies in the hands of the variety of users who must work together efficiently and effectively to plan and execute their missions, a challenge that is uniquely human-centered.  Therefore, user-centered design will be critical to the success of Multi-Domain Operations.

Dr. Kristen K. Liggett is a Principal Human Factors Engineer in the Air Force Research Laboratory’s Airman Systems Directorate at Wright-Patterson AFB, OH.  She has been conducting research in the lab for 32 years, determining how to apply the science of human factors design to a variety of Air Force applications.  Her current focus is on the design and evaluation of information visualizations, user interfaces, and cognitive work aids for a range of cyber operators.  This has led to research in information interface design for the various users involved with multi-domain operations.  E-Mail:  kristen.liggett@us.af.mil

Dr. Gina F. Thomas is a Research Engineer in the Air Force Research Laboratory’s Airman Systems Directorate at Wright-Patterson AFB, OH.  She has been conducting research and designing work support systems in the lab for 16 years; previously she worked for the Ohio Department of Transportation.  Her primary interests are visual attention and information presentation, user interface design, symbology, and system evaluation.  Her current focus is on cyber and multi-domain planning and operations.  E-Mail:  gina.thomas.2@us.af.mil

Disclaimer: The views expressed are those of the authors and do not necessarily reflect the official policy or position of the Department of the Air Force or the United States Government.

[i] Trent, S., Hoffman, R., and Beltz, B. (2016). An empirical assessment of cyberspace network mapping capabilities. In Proceedings of the 21st International Command and Control Research and Technology Symposium, p. 15.
