Abstract to Action: Targeted Learning System Theory Applied to Adaptive Flight Training Part 2

Editor’s Note: This is Part 2 of a two-part series exploring the application of Targeted Learning System Theory to pilot training. Part 1 laid the foundation by explaining the traditional approach as well as the concept of Targeted Learning System Theory. Part 2 details how this theory can be applied to maximize both the efficiency and effectiveness of pilot training.

Reading Time: 13 minutes

By Travis Sheets and Matt Elmore

The Adaptive Flight Trainer

The AFT was developed to demonstrate the applicability of the TLST to the current USAF pilot manning crisis, exacerbated by limits on expanding the production pipeline. The use of emerging technologies to increase production is being explored by Pilot Training Next, and this research provided the baseline for their initial virtual reality trainers. The focus when designing the AFT was to incorporate commercial off-the-shelf (COTS) technology to help reduce costs, find initial integration and capability problems to inform Pilot Training Next, and demonstrate the feasibility of scaling the system.


From a hardware perspective, all the technology could be integrated by an average video game enthusiast. In fact, three prototypes, including the assembly of three gaming chairs, were built by the research team in approximately three hours at an estimated cost of $6,000 per prototype. The primary software utilized for flight simulation was Prepar3D v4 (pronounced “prepared”), a commercially available Lockheed Martin product. It is a fully virtual reality (VR) compatible environment that gives the user control over almost any imaginable variable, from weather to aircraft malfunctions.

In addition to modifying the T-6A model, Virtual Reality Learning Environment (VRLE) learning theories were incorporated into the Prepar3D virtual environment. Using a developer license, industry partners programmed three learning scenarios within Prepar3D to incorporate multimodal learning cues and data-capture scripts. The targeted data included aircraft airspeed, altitude, and position, as well as subject eye movement and cognitive load parameters throughout each scenario. The scenarios incorporated aspects of the construct validity developed for a VRLE and various elements of multimodal learning theories. In particular, they provided cues that reduce the cognitive workload typically required for a subject to orient themselves in the virtual environment. The cues were multimodal in that they instructed the subject both visually and aurally. The scenarios incorporated elements of gamification and exploratory learning to engage the subject and were sufficiently short to allow for multiple iterations. Since subjects would receive no instruction from the observers, the goal of the environment was to create an opportunity for the student to learn and practice the task on their own, while providing cues to enable focused, faster, and deeper learning than a traditional “free play” environment.

The primary goal of the AFT test was to measure the cognitive and kinesthetic transference, or learning, from a VR-trained task to the physical environment. A secondary goal was to identify affective feelings toward the VR environment when utilized in a flight training construct. The tertiary goal was to identify, post-test, what determinant variables and neurological activity, if any, might indicate a higher propensity for an individual’s success in flight training. Finally, the test looked to inform the Pilot Training Next team by testing the AFT technologies and virtual environment and by providing the team with initial trend data regarding the effectiveness of the AFT in teaching flying skills. This data was collected utilizing multiple assessments as well as performance, cognitive load, and eye-tracking data from the virtual and physical environments.

Measuring People and Performance

To measure the person, also called biosensing, the TLST uses noninvasive integrated technologies. Biosensing could involve any biological factor that could play a determining role in the performance outcome. In the purest form of the TLST, we would measure every biological factor available about a human, such as sleep, intestinal bacteria, hydration levels, vitamin and hormone levels, cognitive load, eye-tracking, heart rate, respiration, etc. The bio-data influences the student’s curriculum in real time by providing feedback on how their current biological and neurological state is affecting their performance at a given task. For example, eye-tracking measurements inform where in the environment a subject is looking. If the task is driving a car at a given speed and eye-tracking indicates the student is not meeting performance standards because they are not looking at the speedometer frequently enough, this would be useful feedback. How often is frequently enough? This is where “what right looks like” comes in. When data on a task has been captured across multiple students, the system would be able to provide initial guidance to the student, like “the best drivers look at their speedometer every 2 seconds.” This would be a starting point until enough data on a student is collected to determine how often they, as an individual, should look at the speedometer to maximize their performance. For an organization, all these datasets could provide clues as to what biomarkers might indicate the skillsets best suited to a given task, so predictive analytics could be used to help find the right people for the right task.
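The speedometer example can be sketched in code. The sketch below is purely our own illustration, not part of any AFT software: it assumes a hypothetical gaze log of (time, region) samples and a cohort-derived baseline interval.

```python
# Illustrative sketch (assumed data format, not from the study): checking how
# often a student's gaze returns to an instrument, against a cohort baseline.

def fixation_intervals(fixations, region):
    """Seconds between successive fixations that land in `region`."""
    times = [t for t, r in fixations if r == region]
    return [b - a for a, b in zip(times, times[1:])]

def gaze_feedback(fixations, region, baseline_interval):
    """Compare the student's mean revisit interval to the cohort baseline."""
    intervals = fixation_intervals(fixations, region)
    if not intervals:
        return f"Not enough fixations on {region} to assess."
    mean = sum(intervals) / len(intervals)
    if mean > baseline_interval:
        return (f"Check {region} more often: you average {mean:.1f}s between "
                f"looks; top performers average {baseline_interval:.1f}s.")
    return f"{region} cross-check on pace ({mean:.1f}s average)."

# Hypothetical (time_seconds, gaze_region) samples from an eye tracker
log = [(0.0, "speedometer"), (3.5, "road"), (5.0, "speedometer"),
       (9.5, "speedometer"), (14.0, "road"), (15.0, "speedometer")]
print(gaze_feedback(log, "speedometer", baseline_interval=2.0))
```

Per-student tailoring, as described above, would amount to replacing the cohort `baseline_interval` with a value fit to that individual's own performance history.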

Performance is measured by creating datasets linked to desired outcomes for the training and education that is occurring. In the above example, the performance was the ability to maintain a certain speed while driving within the parameters of the speed limit. These parameters would be set by the educator based on the level of performance desired for the task or content. Higher performance standards usually allow a smaller tolerance for error. While we advocate for exploratory learning, some measure is required to validate that learning has occurred. The performance measurement, along with bio-indicators, confirms that the student is ready for more complex tasks, indicates they should regress, or shows they simply need more time at the current level.
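That advance/regress/repeat decision could be expressed as a simple rule combining performance against the standard with a bio-indicator. The thresholds and the normalized scores below are our own assumptions for illustration, not values from the study:

```python
# Hypothetical decision rule fusing performance and a bio-indicator.
# `performance` and `standard` are normalized scores in [0, 1];
# `cognitive_load` is a normalized load estimate in [0, 1].

def next_step(performance, standard, cognitive_load, load_ceiling=0.7):
    """Return 'advance', 'regress', or 'repeat' for the current lesson."""
    if performance >= standard and cognitive_load <= load_ceiling:
        return "advance"   # standard met with cognitive capacity to spare
    if performance < standard * 0.5:
        return "regress"   # far below standard: step back a level
    return "repeat"        # close to standard, or met it while overloaded

print(next_step(0.95, 0.9, 0.4))  # advance
print(next_step(0.30, 0.9, 0.5))  # regress
print(next_step(0.95, 0.9, 0.9))  # repeat: met the standard but overloaded
```

Note the third case: performance alone would say "advance," but the bio-indicator suggests the task is not yet comfortable, which is exactly the fusion the TLST argues for.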

The environment is any contextual situation that formulates the learning experience in a VRLE, mixed reality, or physical environment. In theory, capturing this dataset involves measuring every variable available in the environment. Since not every variable can be measured, someone must prioritize the variables to be measured based on their impact on the learning experience. In digital environments, these are easily captured, but in simulations and physical environments, it becomes exponentially more challenging. This dataset is valuable because the conditions within an environment affect both performance and the person. Returning to the car example: when the student is asked to perform the same task in a thunderstorm, it becomes more difficult. They would likely focus more outside the car to ensure they stay in their lane, and the difference in road conditions may raise their anxiety level, making them want to drive slower and fail to meet the performance requirement. On the other hand, when they have mastered the performance requirements on a clear day and are starting to become complacent with the task, as indicated by their biosensing measurements, the system could change the environment and continue to challenge the student with more complex tasks and content. Any one of these datasets alone could provide a wealth of information for the student and educator, but when fused together they create a holistic picture of where the student is in their learning and ways the educator can help the student move forward faster.

The adaptive curriculum described above is driven by feeding these data streams into an Artificial Intelligence (AI) neural network. The AI sorts and prioritizes all these variables to create feedback for the student and modify the environment to progress the student through the curriculum. Greater deviation from a given performance standard would elicit greater guidance and cues from the AI, with assistance reduced as student performance improved. The student’s progress is recorded along with a performance report that is visualized for personal reflection and for a follow-on face-to-face debrief with an instructor or teacher. The progress and performance reports are distilled from both formative and summative assessments that occur during interaction with the system. By digitizing a person’s strengths, weaknesses, characteristics, and performance, the TLST also allows datasets from multiple students to be compared for personnel management, placement, and selection.
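The "greater deviation, greater guidance" behavior can be sketched as a mapping from deviation to a discrete cue level. The step size and level cap below are illustrative assumptions, not parameters from the AFT:

```python
# Hypothetical sketch of deviation-proportional cueing: cue intensity grows
# with distance from the performance standard and fades to zero within it.

def cue_intensity(value, target, tolerance, max_level=3):
    """Map deviation from a performance standard to a cue level 0..max_level."""
    deviation = abs(value - target)
    if deviation <= tolerance:
        return 0  # within standards: no assistance
    # each additional tolerance-width of error adds one level of cueing
    return min(int(deviation // tolerance), max_level)

def altitude_cues(altitudes, target=1000, tolerance=100):
    """Cue level for each altitude sample (feet) against an assumed target."""
    return [cue_intensity(a, target, tolerance) for a in altitudes]

print(altitude_cues([1000, 1080, 1250, 1450, 1020]))  # [0, 0, 2, 3, 0]
```

A level of 0 would mean no cue at all, while higher levels might add progressively stronger visual and aural prompts, matching the multimodal cueing described earlier.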

The student-centered portion of the TLST provides students with multiple learning pathways within an experiential, performance-based structure and the option to iterate as necessary. Part of these multiple pathways is capitalizing on the multiple modalities described in the VARK model – visual, auditory, read/write, and kinesthetic. The other part implements the experiential learning cycle described by Kolb – a loop of concrete experiences, observations, and reflections, forming abstract concepts and generalizations, and testing these concepts in new situations. These technologies give the student a student-centered learning experience, empowered by choice and driven by high-fidelity performance data. Additionally, by allowing the AI to guide student learning based on the student’s performance, one educator can supervise multiple students. It also allows an organization to implement a performance-based model instead of the traditional time-based industrial model. The performance standard and the time spent achieving that standard vary depending on the complexity of the task and the risk aversion of the organization. Why should a student who can learn to drive a car in five hours be forced to sit through 20 hours of driver’s education?


All subjects for this test were permanent party volunteers from Columbus Air Force Base (AFB), Mississippi. The test group was a convenience sample comprised of USAF active duty, reserve members, and DOD civilians. This test group was divided into four subgroups based on flight experience (high experience, some experience, no experience, control); no member of these groups except the control subgroup had any experience flying the T-6A. The subjects were evaluated on their ability to maintain airspeed and altitude control during aircraft take-off and landing as well as the required cognitive tasks associated with that profile. Subjects completed a baseline assessment in a T-6 simulator, trained for 1.5 hours in the AFT VRLE, then completed the same task in a final evaluation to measure any changes in performance. Subjects also completed questionnaires throughout the study to measure affective indicators.

The Columbus AFB AFT study provided multiple indicators, trends, and verified flight-performance improvements resulting from kinesthetic and cognitive transference from the VRLE to a real-world, T-6A-certified simulator. All test groups improved airspeed control, altitude control, and execution of procedural tasks. After 1.5 hours of student-centered VR training in the AFT, groups improved an average of 14% as measured by the Columbus AFB T-6A Instructor Pilot during observations of the baseline and final evaluations. The T-6A instructor graded the subjects using the same standards applied to actual students during normal T-6A training. The questionnaires provided indicators of affective improvement. Subjects reported a belief that the VRLE improved their knowledge of the T-6A and how to fly the required task while reducing anxiety and increasing confidence prior to the final evaluation.

The study also indicates there are linkages between cognitive load, arousal factors, and eye measurements. The high-experience sub-group had the least relative increase in performance but also had the smallest room for improvement. The no-experience sub-group had the steepest learning curve, improving faster than any of the other sub-groups. On average, it took the high-experience sub-group three to four training sorties in the VRLE to master the required task. The some-experience sub-group never reached the same level of mastery but began to perform acceptably (within +/- 100 feet of altitude and +/- 10 knots of airspeed) after six to eight rides in the VRLE, showing a slightly slower learning curve. The no-experience sub-group did not reach an acceptable level of performance in approximately ten rides, but they did reduce their crash rate by 50% in their final evaluation and showed the greatest improvement overall. All sub-groups migrated toward the control group, indicating overall improvement. See the complete study for further details and analysis.
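The +/- 100 feet and +/- 10 knots bands lend themselves to a simple automated grading check. The sketch below is an assumed grading logic for illustration, not the study's actual scoring code; the traces, targets, and 90% threshold are hypothetical:

```python
# Rough sketch (assumed logic): scoring a flight trace against altitude and
# airspeed tolerance bands like the +/-100 ft / +/-10 kt bands cited above.

def within_band(samples, target, tolerance):
    """Fraction of samples inside target +/- tolerance."""
    hits = sum(1 for s in samples if abs(s - target) <= tolerance)
    return hits / len(samples)

def acceptable(altitudes, airspeeds, alt_target, spd_target,
               alt_tol=100, spd_tol=10, required=0.90):
    """True if both parameters stay in band for the required share of samples."""
    return (within_band(altitudes, alt_target, alt_tol) >= required and
            within_band(airspeeds, spd_target, spd_tol) >= required)

alts = [1000, 1050, 980, 1090, 1095]   # hypothetical trace, target 1000 ft
spds = [200, 205, 198, 209, 202]       # hypothetical trace, target 200 kt
print(acceptable(alts, spds, alt_target=1000, spd_target=200))  # True
```

Because the AFT already logs airspeed, altitude, and position continuously, a check of this kind could run after every ride, which is how trends such as "acceptable after six to eight rides" become measurable without an instructor scoring each pass by hand.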

Strategic Opportunities
The TLST presents both short-term applications and long-term opportunities for the USAF. Short-term applications include using empowering structures paired with technologically advanced learning devices to experimentally mature educational and training capability. Beyond its role as a training device, the novelty of the AFT system makes it attractive for flight introduction as well as recruiting.

In the immediate future, additional studies need to be conducted to further validate the effectiveness of the TLST and AFT while also looking at applications of TLST outside pilot training. Prototypes need to be put in the hands of operators and practitioners, so they can start experimenting with how to best maximize training potential through curriculum development.

Long-term opportunities give the USAF the chance to iteratively improve this type of fused technology to assist in a more strategic, full-spectrum approach. When enough data is collected, the tools created using the TLST could identify traits and patterns that make a pilot more successful, aid in pilot candidate selection, and improve predictive analytics for student success. Switching to a performance-based model that allows for self-directed learning through iterative growth could reduce the number of physical flight sorties required to master a skill, driving down training costs. With further development, the TLST could be used to revolutionize personal growth opportunities and workforce development across a range of education, training, and learning, especially as technology continues to advance at a rapid pace. Furthermore, the TLST has the potential to change how talent management is done by capturing a digital representation of a person’s professional qualities and skills through interaction with TLST-managed environments and displaying those to potential employers with a high level of certainty. A digital record of this type would provide a much more accurate description of an individual’s talents than a traditional college degree, allowing the potential for a much better match of skills, capabilities, and contextual requirements. This capability aligns with the 2018 AETC Strategic Plan.

Finally, the TLST provides a future operational benefit through human-machine symbiosis and increased situational awareness. Pairing human exploration with machine intelligence could elevate human performance. A system that knows the person, the environment, and the desired outcomes can filter the most pertinent information to the human, allowing the human to act faster and make more accurate decisions. A system designed using the TLST could be used operationally across all domains of combat operations, but could be specifically useful in urban combat, medical surgery, first response, civilian flight training, or any task accomplished in complex environments that require data prioritization and ergonomic presentation to obtain a competitive advantage. This technology has the potential to help humans gain greater situational awareness and make better decisions faster by making better use of time and information. The fusion of mixed reality, data, and human decision making that is core to the TLST is the future of learning and combat operations.

There is nothing new about the TLST; it is simply a combination of technologies, structures, and learning theories fused together in a novel way. The TLST produces learning value by providing high fidelity feedback to the students. It fuses technologies to provide immersive experiences across multiple learning styles while maintaining the flexibility to adjust to the new requirements of future operating environments. The student-centered, exploratory, collaborative structure builds on components of situated and experiential learning theories informed by a constructivist approach in a connected world. Targeted learning systems that provide customized learning experiences leveraging empowering structures and emerging technologies will only continue to grow as technologies continue to mature.

We cannot predict the future, but we can look backward and use impactful trends to inform potential outcomes. How people learn at the biological and neurological level has not changed. However, the environments and the demands of human performance are significantly in flux. The first step toward elevating human performance starts with understanding ourselves better and leveraging the right combination of structures, theories, and technologies to help humans find their untapped potential. Human capital is the USAF’s most important resource, and developing new ways to maximize its potential is a smart investment. The USAF has an opportunity to be the innovative organization that drives the direction of education and training in the future. As an organization, it must be willing to let go of the outdated industrial models holding it back in order to truly revolutionize training and education for decades to come. Failing to change will not only leave it undermanned to meet future conflict demands but will leave Airmen underprepared to think and act in increasingly uncertain and complex situations. The TLST advocates for a smarter and more efficient use of educational structures, learning theories, and disruptive technologies to find a new approach to learning that produces equal or better results at a lower cost.

Major Travis Sheets is an Air Force Officer assigned to the Joint Chiefs of Staff, Pentagon. He was previously an Air University Fellow. Major Sheets grew up in West Virginia and graduated from Virginia Tech. He has flown multiple small fixed-wing aircraft for AFSOC. He is a published author with The Journal of Values-Based Leadership, Vol 11, 2018.

Matt Elmore is an Air Force Officer and prior C-17A Evaluator Pilot with 13 years of aviation and instructor experience. Both his graduate and undergraduate degrees specialized in coaching and education. He has served in multiple C-17 squadrons, in the Contingency Response Wing, and most recently as an Air University Fellow.

The views expressed are those of the authors and do not necessarily reflect the official policy or position of the Department of Defense or the U.S. Government.
