Interview with Lieutenant General Jack Shanahan: Part 2

Excerpt: In an interview, Lt Gen Jack Shanahan shares his vision for the role of artificial intelligence and machine learning in future warfare.

Approximate Reading Time: 17 minutes

Editor’s Note: OTH had the distinct pleasure of interviewing Lt Gen Jack Shanahan, USD(I)’s Director for Defense Intelligence (Warfighter Support), in November of 2017. Part I of this interview, published on Monday (02 April 2018), focused on current initiatives to integrate artificial intelligence (AI) and machine learning (ML) into the Department of Defense. In this second part, Lt Gen Shanahan provides his vision for the role of AI and ML in future decision-making and warfare. He advocates for a cultural shift in the way we develop, acquire, and employ AI at every level of war.

Over the Horizon (OTH): The advancements and innovation you describe are very exciting. Just a few years ago, all of this would have been considered unthinkable. But, here we are pursuing projects that will fundamentally change intelligence, surveillance, and reconnaissance (ISR), and change the way we conduct operations. And to think, we have only scratched the surface of algorithmic warfare.

I would like to get your thoughts, however, on our path to normalizing AI/ML utilization. There will unquestionably be a time period we must traverse where we will continue to learn, develop, and implement these new capabilities. Meanwhile, we have a sort of antiquated architecture: a legacy command and control structure built around the Theater Air Control System (TACS), air operations centers (AOCs), and a variety of authorities that are perhaps out of sync with or suboptimal for this new paradigm. How do we bridge that gap?

Lt Gen Jack Shanahan (JS): That is one of the most difficult challenges we have to work through. You raise a very important point, though: we are entering a transition period. It will not be easy getting there, and it is important that we have some level of expectation management. General Raymond Thomas, Commander of US Special Operations Command (USSOCOM), has been pushing hard on AI, saying, “I need better tools; give me AI and machine learning.” We are moving as fast as I have ever seen the department move because we have a Marine, Colonel Drew Cukor, who is just getting it done.

When the first algorithms are fielded in Sprint #1, some may view them as rudimentary. While I hope we do not disappoint General Thomas, he and others must understand that this is the beginning of the transition. From there, we will quickly transition to Sprint #2 and Sprint #3. Then we’ll really start to have people generate new ideas for how best to use AI.

One of our biggest challenges right now is integrating these algorithms onto programs of record. We knew from the beginning this would be a challenge and weren’t surprised by it; we were surprised by other things, but not this one. The reason we are starting with the ScanEagle is because it is not a program of record. We are just working with USSOCOM, Joint Special Operations Command (JSOC), and others to figure out how to put these algorithms into the systems. [Integrating the algorithms onto ScanEagle is] not easy, but it is a lot easier than getting them into the Distributed Common Ground System (DCGS) programs of record, which is our next big step.

The Air Force really has to have Open Architecture (OA) DCGS for this to work. They’re already trying to figure out how to make this happen on the timelines for OA-DCGS, which is not nearly fast enough for the liking of Lt Gen VeraLinn Jamieson (HAF A2). She’s pushing her staff hard because she is relying on this. Project Maven is essential for the future of the Air Force processing, exploitation, and dissemination (PED) enterprise. The architecture today is not built for dropping an algorithm into a program of record. So, we are going to go through this with the Air Force. We are also working through this with the Army right now on DCGS-Army. The Navy doesn’t have anything quite equivalent. And the Marine Corps is using these tactical unmanned aircraft systems (UASs).

We’re in this strange period of dropping algorithms into programs that were never built for AI. Until we get to a different environment where systems are built to accommodate this from the start, there will be some setbacks. I can guarantee that there will be setbacks, because some programs of record will be reluctant to put these things in. There may be hiccups along the way; something breaks, and you’re talking about a billion-dollar DCGS program of record. They do not want to be put at risk by some small start-up vendor dropping an algorithm in. So, we are working through this during the test and evaluation process.

There are lots of other things we have to do to start moving toward a world where this becomes part of the fabric of the Department of Defense. I’d be hard-pressed to overstate the challenges we face, which include everything from big vendors who might feel threatened by a different way of doing business to a bureaucracy that’s not used to the pace we’re moving at. Six or seven months into it, we’re putting capabilities into the field. That is very different from the normal way of doing business in the department. Some will not like it; some will find ways to put obstacles in our way.

We have all seen it in our careers: it takes several generations of people to really embrace something like this before it gets accepted. It’s like a two-year command tour where you’re trying to change the culture of an organization. You think you’ve done it in two years, and when you walk out the door, the culture probably reverts unless the next person embraces the same approach you had. That’s what we’re looking at here. Everything from acquisition, technology, and logistics (AT&L) and research and development (R&D) doing business a little bit differently up here in the Pentagon, to service programs, to operators in the field saying, “Hey, I think there is a new way to use this, so let’s come up with it.”

We are in a really difficult period for probably the next two to five years. It will take at least that long before this technology really takes root across the department. There are also some people who want us to fail, because of the pace we’re operating at and because we’re threatening other organizations simply by doing this; if we deliver, it will show that it can be done. Not everybody is as enthusiastic about this as we are. All of these factors present challenges. And I will not say with any degree of certainty that two years from now we will have succeeded. I know we will deliver capabilities to the field, but a lot more has to happen in the department. And we are talking about it. The department is interested in understanding, from the Secretary level on down, how the Department of Defense begins to change its culture to embrace this idea of artificial intelligence.

OTH: That links back to something you have already talked about with respect to how and where industry fits into this. Of course, what we are discussing is very different from the traditional industry-DoD partnerships. What do you think that means for future DoD partnerships with industry and how we operate? Is this going to be a new model?

JS: We’ve already learned a lot. Our going-in position, which we have had from the start, is that anybody is a player. From a ten-person startup to the biggest Internet company in the world, everybody is a potential player. We had an industry day, and our push was: give us your white papers, here are our problems, tell us what you can do to solve them, and we will give anybody a look. We may or may not use you again, but we paid you some money. It’s kind of a Defense Advanced Research Projects Agency (DARPA) approach to doing business; as you pass more and more gates, you get more money and work your way down.

So, this idea that anybody is a potential player is an important philosophy for me. What a lot of people don’t understand is that we’ve actually done everything up to this point within the confines of the Defense Federal Acquisition Regulation Supplement (DFARS). We do have some special authorities, called Other Transaction Authorities and Rapid Acquisition Authority, but I actually have not had to use those yet. We’re staying within the DFARS, and the only reason we can do this is a coincidence of the right people at the right time. Colonel Cukor, the Marine leading this, is an intel professional and an operator. But he’s also a level-3 certified acquisition professional. That combination of skills is rare. If it had been the other way around, an acquisition professional first with some understanding of operations, we wouldn’t be where we are. He knows the DFARS, and he calls bluffs all the time with people who think they can BS their way through what can or cannot be done.

We’ve also run up against some brick walls. We are up against one right now in terms of cloud accreditation for one of the big companies, because that’s how they do business. There is a DFARS clause that we have to break through or we can’t do what we need to do. This whole idea of prototype warfare, rapid delivery… I won’t quite say “fail fast” because people will interpret that differently; but know that prototype means you’re not going to have something 100% successful. However, you are going to get it faster than you’ve ever had anything before.

The days of five-year block upgrades to DCGS that cost hundreds of millions of dollars must end. Those upgrades run years behind, so by the time they are fielded the capability may be obsolete. What I am describing is: get it out there, put it in people’s hands, and let them wring it out. If it is not right, we’ll get it fixed quickly and get updates pushed to the field. It’s not necessarily turning the acquisition world upside down, since that is what AT&L and R&D are supposed to be doing. They are pretty good at it in some ways, but we need to move faster than we have in the standard acquisition processes because AI and ML are different. Algorithmic warfare is different. It is not a lot of lines of code, and it does not take that long to develop [an algorithm]. What you really need is the data on the front end; that’s the long pole in the tent. We are working a whole bunch of things related to how the department processes data in a different way.

OTH: As we have engaged with disparate audiences on this topic, one of the biggest concerns people have with algorithmic warfare and AI is linking the technology to the actual decision making. How do you see that working? How do AI and ML interface with decision makers in a timely but dynamic way for the future operating environment?

JS: It goes back to my earlier point. I expect AI to get me through the Observe and Orient cycles faster, but I’m still expecting a human to be physically present on the loop somewhere. It may not even be “in the loop,” but it’s “on the loop.” You can interpret that in different ways, but ultimately, especially when it comes to weapons employment, a human will make the decision. I tell people, because of our project, it’s pretty easy for me to say that no intelligence analyst out there is saying “shoot.” They are identifying something and giving advice to somebody who has a finger on the pickle button, who then shoots, drops a weapon, or fires a gun; it’s not the intelligence analyst saying do it. It’s a recommendation based on the criteria in the rules of engagement. That’s all still going to be the case; however, it could happen so much faster than we’re used to in a future contested fight that, if our adversary is using autonomy, we’re going to have to figure out how we adapt to that world.

So decisions are made by humans. But you know how it goes: the United States plays by all sorts of rules until we start losing. Once we start losing, then we have to decide if we want to play by different rules or accept the fact that we must have autonomy. So it’s difficult for me to say that decisions won’t ever be made by machines, because if you look far enough down the road to where reasoning and contextual analysis are built into AI, you may have autonomy. In fact, Deputy Secretary of Defense Robert Work used to say this all the time: we have autonomous systems today. There are parts of the Patriot system that are fully autonomous. The weapons on a ship, when they are in autonomous mode, are another example. Even these systems make mistakes, but because they are machines they generally do not make the same mistakes humans make in the heat of combat. They’re taught to do certain things, they learn to do certain things, and they’re going to perform better than somebody who has had one hour of sleep in seven days and is faced with chaos. A hard part of combat stems from the fact that real people are shooting back at you. So machines can be a strength in that case, but I acknowledge that this idea of full autonomy has people concerned because humans are not in the decision-making process.

However, I don’t see that for us any time soon. I see humans being the sole decision-makers until we’re far enough along that we have a very high level of trust. This is all about trust. We have to build confidence and trust in these algorithms so people start relying on them. I do not expect operators and analysts to trust the algorithms right off the bat, because there will be mistakes. The algorithm will identify a car as a person by mistake. Whatever it is, it will make mistakes, and that’s why we need the feedback loop. This idea of getting through the OODA cycle, the decision cycle, so much quicker is going to be the way we fight. Again, not always; in certain environments it will be and in certain environments it won’t. At the tactical level, [the decision-making process] could certainly be very fast paced. At the grand strategic level, the process will be a little bit slower. But you’re still going to be relying on data and information being presented to a commander somewhere, a decision maker, which means I really want AI and ML to make sense of the enormous amount of data we will have available to us.

An avalanche or tsunami of data is out there for the taking, but in a world without AI and ML, we all fall back on the simplest process. In that scenario, the analyst goes after the piece of data that is trusted: one source, from one place, from one time, that may or may not be right, but you rely on it because you always have. We want analysts to be able to access so much more than that. If they choose not to use it, okay, but at least it’s available to them.

Being involved in counterterrorism and counterinsurgency combat for the last 16 years has presented some real disadvantages and taught a lot of bad lessons in terms of the contested environments of the future. But over that time we have built an incredible analytic workforce that is highly experienced, battle trained, and has been doing this for a very long time. We have a lot of experience that the Chinese, Russians, and other adversaries do not have.

I would say that one of our greatest asymmetric strengths remains our ability to make sense of data quickly, and that’s why AI and ML are so important. We do have a lot of experience fighting in combat where others do not. They’re planning on it, and they’re going to integrate AI, but they don’t have the same combat experience that our workforce has gained over the last 16 to 17 years.

OTH: When we assess what it means to incorporate AI and ML into our current practices, there appears to be a major reliance on connectivity. How does that reality align with a future contested environment operating construct?

JS: We must take this all into account, which is why we are pushing hard beyond this current phase. The intent is to push these capabilities out to the edge. They have to be at the edge. They have to be on platforms and sensors. I could easily see a world in which you’ve got AI on an MQ-9 or any other future system, as well as on the exploitation or processing workstations. It is reasonable to expect there will be multiple algorithms layered throughout the enterprise for redundancy, back-up, and many other functions.

If you start talking about a world of small unmanned aerial vehicles (UAVs) and swarming drones, these systems will not have a lot of bandwidth to begin with. We will not be sending full motion video (FMV), light detection and ranging (LIDAR), or measurement and signature intelligence (MASINT) off of these assets. What you need is an algorithm on the platform, on the sensor, that will send only the defined parameters the algorithm has been trained to detect and process. In that world, the systems require much smaller bandwidth and will not need a massive communication pipeline to transmit data around the globe. Signals will be transmitted in the middle of conflict as always, but I must be prepared to communicate and operate in a limited environment. That’s the reason AI must be pushed to the tactical edge. It’s not just airborne; it is maritime and subsurface. AI must be present with soldiers and marines fighting on the ground. Whether forces are enabled via tactical clouds they have forward or via algorithms we build into the systems, they will all require AI at the edge. It’s so important. Those environments will exist, and we have to be ready for them.

OTH: Our final question, one that we typically end our OTH interviews with, continues with this theme of the future operating environment. What is something that’s on the horizon that concerns you or that you are thinking about that we haven’t discussed?

JS: We have to be ready for a fight where our adversary has the ability to keep us from entering, staying in, or dominating a piece of territory. While we are ready for tough fights with Al Qaeda, the Taliban, ISIS, and others, we have a lot of work to do just to prepare for a potential fight with China or Russia. We have to get ready for that.

To do that, we have to change the culture of the department to accept this idea of machine learning and artificial intelligence. Without it, I think we’re at greater risk than people understand. The Chinese and the Russians are really accelerating their development of AI-enabled technologies and capabilities. They are going to challenge us in ways we are not used to. If we do not focus on and invest heavily in AI and ML, I can actually see a scenario in which we don’t win. We have not said that in an awfully long time, but I worry about that.

Related to that, I worry that we’re at a period that begins to feel uncomfortably like 1914 and 1939. It would not take much to spark a regional conflict that quickly expands into something much bigger. It is easy to see a scenario where a theater conflict goes global. How do we deal with that given the size of the force we have? Or with the readiness of the force we have? Those are all things that should keep anybody up at night. I’m not an alarmist. And, I’m not saying it will happen. I’m just saying if you look around the globe with what’s happening in various countries, the environment is conducive to the kind of conflict that we haven’t seen in a long time.

Lt Gen John N.T. “Jack” Shanahan currently serves as the Director for Defense Intelligence (Warfighter Support) (DDI(WS)), Office of the Under Secretary of Defense for Intelligence. Prior to this assignment, he commanded the 25th Air Force at Joint Base San Antonio-Lackland, Texas, and served as the Commander of the Service Cryptologic Component. General Shanahan earned his commission in 1984 as a distinguished graduate of the University of Michigan ROTC program. He is a master navigator with more than 2,800 flying hours in the F-4D/E/G, F-15E, and RC-135. He has served in a variety of flying, staff, and command assignments.

This interview was conducted on 16 November 2017 by OTH Editor in Chief Maj Sean Atkins and Senior Editor Maj Jerry “Marvin” Gay.
