Why the Department of Defense Should Create an AI Red Team

Estimated Read Time: 5 Minutes 
By Rena DeHenre 

What is adversarial machine learning (AML)? AML is the purposeful manipulation of data or code to cause a machine learning (ML) algorithm to malfunction or produce false predictions. A popular example of AML comes from a team at Google that carried out an experiment on GoogLeNet, a convolutional neural network architecture that won the ImageNet Large Scale Visual Recognition Challenge in 2014. Adding a carefully crafted layer of noise to an image of a panda, imperceptibly altering its pixel values, caused the network to predict with high confidence that the image was a gibbon. This type of manipulation is relatively easy to execute with just a few lines of code.

Figure: adversarial panda-to-gibbon example. Source: Explaining and Harnessing Adversarial Examples, Goodfellow et al., ICLR 2015.
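To make the idea concrete, here is a minimal sketch of the fast gradient sign method behind the panda-to-gibbon result. It is written in Python with PyTorch; the framework choice and the `fgsm_attack` helper name are illustrative assumptions, not the researchers' original code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    """Fast gradient sign method (Goodfellow et al., ICLR 2015) -- sketch.

    Nudges every pixel of `image` a small amount (epsilon) in the direction
    that increases the model's loss, producing an adversarial example that
    looks unchanged to a human but can flip the model's prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon according to the sign of the input gradient.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return torch.clamp(adversarial, 0.0, 1.0).detach()
```

Because the perturbation is bounded by epsilon, the altered image remains indistinguishable from the original to the human eye even as the classifier's output changes.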

In the paper “Adversarial Machine Learning at Scale,” Alexey Kurakin, Ian Goodfellow, and Samy Bengio highlight that machine learning models are often vulnerable to adversarial manipulation and, most significantly, that many machine learning algorithms are highly vulnerable to attacks based on small modifications to their inputs. At its core, AML takes advantage of the brittle nature of ML models to change their predictions. The list of AML techniques is growing, and most of these attacks have the potential to go undetected.

The discovery of adversarial manipulation has led to an increase in AML research with different goals. Some researchers focus on more effective ways to conduct AML attacks on ML programs or on fine-tuning their techniques. Others focus on increasing the robustness of their ML algorithms; an algorithm’s performance metrics typically degrade when it must perform against “real world” data, so researchers deliberately apply AML techniques during training so the model can handle the natural perturbations it will encounter in the real world (a minimal sketch of this approach follows below). Still other research is dedicated to identifying and defending against AML attacks. DARPA, for example, runs a program called Guaranteeing AI Robustness Against Deception (GARD) that aims to give organizations a framework for identifying system vulnerabilities, characterizing properties that enhance system robustness, and encouraging the creation of effective defenses. As with ML, the AML field is not new, but thanks to technological advancements the AML community is making significant discoveries every day. This leaves the Department of Defense (DoD) in a strong position to take advantage of the research already conducted by the AML community and create an AI Red Team, but there are still challenges that must be addressed first.
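As a simplified illustration of that robustness work, the sketch below shows an adversarial-training loop: the model is trained on both clean batches and batches perturbed by the attack itself. It assumes the hypothetical `fgsm_attack` helper from the earlier example plus a standard PyTorch model, optimizer, and data loader; it is not drawn from any specific DoD or DARPA program.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """One training epoch that mixes clean and adversarially perturbed batches.

    Training against the attack's own perturbations is a common way to harden
    a model against deliberate manipulation and against the natural noise it
    will meet in real-world data.
    """
    model.train()
    for images, labels in loader:
        # Generate adversarial versions of this batch using the current model.
        adv_images = fgsm_attack(model, images, labels, epsilon)

        optimizer.zero_grad()
        # Combine the loss on clean and adversarial examples.
        loss = (F.cross_entropy(model(images), labels)
                + F.cross_entropy(model(adv_images), labels))
        loss.backward()
        optimizer.step()
```

The trade-off is extra computation per batch, since every training step requires generating a fresh set of adversarial examples against the current model.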

The DoD is actively working to close the technological gap between defense and industry in artificial intelligence (AI) and ML, as commercial industry is currently leading the way. The National Security Commission on AI made three important points in its recently published report. First, a major shift in DoD technical acquisition processes and cultural thinking is required for the military to maintain a competitive advantage in AI and ML. Second, the DoD must be “AI ready” by 2025 but lacks the digital workforce and technical talent required to get there. Third, AML is a real threat that is being under-prioritized by decision makers. These three findings are key to why the DoD should consider creating an AI Red Team.

Addressing the vulnerabilities in DoD AI and ML algorithms would be the main task of an AI Red Team, but as the team matures it would likely serve other functions as well. First and foremost, the team would be responsible for assessing, demonstrating, and recommending actions that would increase the robustness of DoD algorithms. With a dedicated AI Red Team, the DoD would have a central organization for assessing and addressing AI and ML vulnerabilities. In the short term, this team would likely rely heavily on the research community and industry partners, but it is imperative that the DoD start somewhere. Establishing the AI Red Team would also provide a single DoD focal point for AML partnerships with research labs across the country, foreign partners, other federal agencies, and academic institutions. As researchers and academia continue to make remarkable leaps within the fields of artificial intelligence, machine learning, and adversarial machine learning, the DoD would have a seat at the table to share ideas and new TTPs and to partner on projects. In addition, the DoD would be able to close the training and expertise gap within its ranks. While the team might struggle initially due to a lack of technical expertise, its operational expertise would still be useful in understanding the operational impact an AML attack could cause.

In the short term the DoD might have to contract out ML and AI talent, but a joint team of operational experts and experienced technical professionals would quickly develop the necessary AI and ML skillsets after an assignment dedicated to working AML problem sets. It is not hard to see a future in which selection for the DoD AI Red Team becomes similar to applying to the Air Force Weapons School or the Junior Officer Cryptologic Career Program. Lastly, it is important that this organization be DoD-led, DoD-tasked, and DoD-staffed, and that it create and execute DoD authorities, because there are parts of warfare that should not be outsourced or contracted. Establishing the AI Red Team gives the DoD an arm that focuses on AI, ML, and AML through the lens of warfare. The tools and techniques created, the talent grown, the problems solved, and the outcomes achieved would be motivated by national defense. When it comes to AI and ML, the DoD has allowed industry to lead the way. When it comes to AML, the DoD must own it as a national security mandate.

Rena DeHenre is an intelligence officer in the United States Air Force. She recently completed a research fellowship at MIT Lincoln Laboratory, where she focused on AML and its impact within the cyber domain. She is now a student at the School of Advanced Air and Space Studies.

The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. Government.

Featured Image Source: https://www.scnsoft.com/blog/red-team-penetration-testing-to-level-up-corporate-security

