***OTH Anniversary*** Clarity from Complexity: An Interview with Dr. Jon Kimminau on Big Data and Activity Based Intelligence

Continuing our first anniversary celebration, Sean Atkins, OTH Editor-in-Chief, brings you his 2017 Editor’s Choice article: our interview with Dr. Jon Kimminau on how Big Data and Machine Learning are shaping the future of intelligence and decision making in warfare. Sean chose this piece because it provides a candid and well-considered look at the beginnings and evolution of Big Data and Artificial Intelligence in Defense. Further, it kicked off one of the most useful ongoing discussions OTH has hosted this year, examining how technological developments are influencing future decision making, and several follow-on interviews and articles continued that discussion.

The Kimminau interview ended up being a particularly rich conversation that was split into three parts but is being published in its entirety here.

As noted in the National Intelligence Council’s Global Trends Report, the future security landscape is increasingly complex and uncertain. One of the more promising ways to manage this complexity and cut through its opacity lies in new paradigms of data collection and analysis. Concepts such as Big Data Analysis and Activity Based Intelligence may be critical to gaining timely awareness of risks and opportunities and to achieving true insight from a complicated context.

To better understand these issues and their impact, OTH sat down with Dr. Jon Kimminau, a Senior Executive at the Pentagon and forward thinker on matters of intelligence collection and analysis.

Over the Horizon (OTH): Can you describe for us some of the key concepts and capabilities behind Big Data and how they fit into the future of intelligence analysis?

Dr. Jon Kimminau (JK): Okay, this is a huge question that relates to a vision we set forth for the future of Air Force Intelligence, Surveillance, and Reconnaissance (ISR) back in 2013. After a few months of working hard on it, what we came up with was that first we had to recognize our resource limitations. We aren’t going to get more people and we aren’t necessarily going to get more resources. But at the same time, we saw that ISR requirements were continuing to climb as they have every year. The demand for more and better ISR was continuing to rise, and so the question was: how are we going to meet that with no additional resources and no additional people? The only answer we could come up with that we all believed in was that we had to change the way we do analysis. Analysis is the engine of what we produce, so changing it would involve the whole infrastructure: going from what we collect, to how we process it, to how we analyze it, and how we connect to the operators, platforms, and staffs that need that information.

So in devising this transformation we said, “Okay, first let’s start with the collection,” and the collection and processing in particular, because we all recognize we have a stovepiped system. Every single type of intelligence is stovepiped, and even within each there are separate stovepipes for particular types of collection. In just one of those pipes, you go from the collection, to the data being transmitted, to usually a large number of people helping in the processing, to produce some sort of database and reports. Think of all these pipes kind of spilling into repositories into which analysts then reach to do their analysis.

I describe that because if that’s the way things are now, we need to look at our investment in people. The rule of thumb we’ve been using is that about 80% of our people, including those we call “analysts,” are invested in those pipes, on the input part, and only about 20% of our people are on the analysis side. If we can break down the stovepipes, use more automation, and go towards digitization in a cloud – some people like to say “a lake” – for all of the data, we believe we can flip that paradigm so that about 20% are invested on the processing side and 80% are able to actually work on the data, do analysis, and produce more and better stuff, with better tools of course.

In the center of this is where the Big Data construct comes in. If you think of breaking down the stovepipes so that we’re data focused, and of course the cloud where it all gets dumped and structured so that analysts can then operate on it, then that brings us to their tools to do this and that’s where the data analytics part comes in.

So if that’s the vision of the future that we started in 2013, what did people start coming to talk to us about? Well, I call it two waves. The first was kind of the big companies – the Microsofts and Amazons and Googles and IBM’s would talk to us about how to make that data lake. They had different structures and approaches to make a cloud or lake and could bring you that infrastructure to build it. Well, that’s all good and necessary. We know we’re going to need that kind of thing but that doesn’t quite reach the data analytics part yet.

So with the second wave of people we got coming in, and this is where I started to get frustrated, and we’re talking maybe the 2014-2015 timeframe and still today somewhat, we’d get vendors or labs or you name it, people would come in and start talking to us about “look at this slick tool I have that we can give your operator and look at what they can do out there with it.”

Well, the tool is really neat, but it raises some questions. Among them: at the staff level, for instance, we have no idea what tools are out there already being used, because they’re being proliferated in different places. Second, we don’t know, in the whole spectrum of what we do for analysis, where in particular we need tools, because nobody’s been trying to take that inventory.

Third comes the questions I wish people would ask if they knew more about data analytics. This led me to search for a framework for talking about Big Data analytics – talking about these questions and understanding the full scope of things rather than looking at tools you can give an individual analyst or looking at just the infrastructure you need for the data.

And I found a framework through two big projects that have been going on across the Intelligence Community in the area of data analytics. One of them is called Activity Based Intelligence and one is Object Based Production. But focusing on Activity Based Intelligence, the Director of National Intelligence (DNI), Director Clapper, charters three – what they call – major issues studies each year, and in 2015 they delivered a major issue study on Activity Based Intelligence. It started because basically DNI was asking: “Okay, I have an agency and a couple others who’ve been pursuing this idea they call Activity Based Intelligence. Is this something for the future of the whole Intelligence Community?”

What the study came out with was a yes to that, but I won’t go into all those answers, because the thing I’m coming to is part of how they structured it. The study produced a framework and said this is how you have to think about the whole thing. That framework, with a slight expansion, colors and organizes my thoughts about where we are in data analytics and where I see everybody working; I call it the Data Analytics Framework.

You have to think conceptually that there are four sets of activities that must take place in data analysis writ large. The first is called Big Data triage and in Big Data triage you have to think it goes all the way from what I collect, how I access it, how I ingest it, how I organize it and how I structure it because for tools to work, they have to work on some sort of structured data. So all Big Data triage activity is how I do all that, how I bring it all in and structure it in a way that it can then be used by analytic tools.
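As a rough sketch of what that triage and structuring step means in practice, the toy pipeline below normalizes two invented, differently formatted feeds into one common record layout. The feed names, field names, and schema are illustrative assumptions, not any real system.

```python
import csv
import io
import json

# Two hypothetical stovepiped feeds, each with its own format.
def from_json_feed(blob):
    """Normalize a record from an invented JSON feed."""
    rec = json.loads(blob)
    return {"source": "json-feed", "time": rec["t"],
            "lat": rec["lat"], "lon": rec["lon"], "kind": rec["type"]}

def from_csv_feed(line):
    """Normalize a record from an invented CSV feed."""
    t, lat, lon, kind = next(csv.reader(io.StringIO(line)))
    return {"source": "csv-feed", "time": int(t),
            "lat": float(lat), "lon": float(lon), "kind": kind}

# The "lake": once everything shares one structure, a single tool can
# query across formerly separate pipes.
lake = [
    from_json_feed('{"t": 100, "lat": 34.1, "lon": 44.2, "type": "convoy"}'),
    from_csv_feed("105,34.2,44.3,convoy"),
]
convoys = [r for r in lake if r["kind"] == "convoy"]
```

The point of the sketch is only that downstream analytic tools see one schema regardless of which pipe a record came from.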

The second set of activities in data analytics just has the label forensic analysis or forensic network analysis. This is where you think of an analyst sitting and applying a tool to the data to, let’s say, look at a geographic box, and look for particular activities, trying to identify patterns from which we’ll get some sort of relevant information about what’s going on. Forensic is a great word for it because this whole set of activities is all about looking at the data you have, kind of looking backwards and pulling from it. And in the Big Data approach this can be years and years of data and you just sort and filter based on the types of questions you’re asking.

The third set of activities is called “activity forecasting” or it could be called “predictive analytics” but the idea here is that we want our, let’s say our analysts writ large, to be able to do more than just look back. We want them to be able to anticipate things that are about to happen. We want them to be able to alert folks that, hey this type of activity appears to be happening in this area. Well, the only way you can get there is with sophisticated tools, the kind of tools that just don’t pop out of nowhere – you have to have people who can sit down and model and say, as an example: what does a mobile missile activity look like, or a mobile missile event? So what are the parts, what are the observables, what’s the sequence, what kinds of things come together before we call it this kind of event? And with that model, then you can build the tools that would enable analysts to actually look at streaming data coming in and identify that this kind of activity might be happening.
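To make the modeling idea concrete, here is a minimal, invented sketch: an “event” is modeled as a set of observables that must all appear within a time window, and a detector watches streaming data for that pattern. The event name, observables, and window are hypothetical placeholders, not any real model.

```python
from collections import deque

# Hypothetical event model: all observables must be seen within the window.
EVENT_MODEL = {
    "name": "mobile-missile-activity",
    "observables": {"convoy_movement", "site_preparation", "comms_spike"},
    "window_hours": 48,
}

def detect(stream, model):
    """Yield an alert whenever every observable in the model has been
    seen within the model's time window."""
    recent = deque()  # (timestamp_hours, observable_type)
    for ts, obs in stream:
        if obs in model["observables"]:
            recent.append((ts, obs))
        # Drop observations that have aged out of the window.
        while recent and ts - recent[0][0] > model["window_hours"]:
            recent.popleft()
        if {o for _, o in recent} == model["observables"]:
            yield (ts, model["name"])
            recent.clear()

stream = [(0, "convoy_movement"), (10, "weather_report"),
          (20, "site_preparation"), (30, "comms_spike")]
alerts = list(detect(stream, EVENT_MODEL))
```

Real activity models would be probabilistic and far richer; the sketch only shows why the model has to exist before any alerting tool can.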

It’s important to realize that point when you consider all of the tools that the vendors and labs bring in. If the tool is anything beyond a simple descriptive – like what I call a screwdriver type of statistical analysis that tells you that you’ve got X many of these data points – if you want something more than that, something that starts to tell you an event is going on or that this looks like this type of activity, well then you have to have that modeling behind it. And it takes a different set of analysts and a different set of tools to build those models.

And that leads finally to the fourth activity, which is collaborative analytics. This is the whole idea of the user environment itself. It’s the idea that we want to be able to have our platforms linked to analytics real-time, and we want our command and control, our operators in operations centers, to be able to interact with that analytics too – kind of like the internet of things idea. I stress in this fourth column of data analytics that they aren’t just users of information; you know, they aren’t just taking information out of some black box, they are participants in the data analytics. You have to think of this like when you shop on Amazon. You are actually contributing to the analytics that goes on behind that screen for Amazon to do it more efficiently. That’s why you get things like “oh, you asked for this, but do you realize that people who were looking for this were also looking for x, y, or z,” or “did you realize that people asking for this rated it two stars out of five, and here’s their feedback.” Those kinds of things come from your participation. Our operators and platforms are collaborators in data analytics, they’re not just consumers.
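The “people who asked for this also asked for that” behavior can be sketched with simple co-occurrence counting over user sessions. The session data and item names below are invented for illustration; real recommenders are far more elaborate.

```python
from collections import Counter
from itertools import combinations

# Invented sessions: the set of items each user touched in one sitting.
sessions = [
    {"imagery:regionA", "sigint:regionA", "report:convoys"},
    {"imagery:regionA", "report:convoys"},
    {"imagery:regionA", "sigint:regionB"},
]

# Count how often each ordered pair of items appears in the same session.
co_counts = Counter()
for s in sessions:
    for a, b in combinations(sorted(s), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_asked_for(item, k=2):
    """Top-k items most often requested alongside `item`."""
    scores = Counter({b: n for (a, b), n in co_counts.items() if a == item})
    return [b for b, _ in scores.most_common(k)]

# "report:convoys" co-occurs with "imagery:regionA" in two sessions,
# more than any other item, so it tops the suggestions.
suggestions = also_asked_for("imagery:regionA")
```

Every query a user makes adds rows to those counts, which is the sense in which users are participants in the analytics, not just consumers.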

Dr. Jon Kimminau (JK): So if you look across these four data analysis activities there are a number of conclusions that can be drawn, and I’ll start with some of the high points. [Note: the four activities include Big Data triage, forensic network analysis, activity forecasting, and collaborative analytics, as described above.] The first one is that if you go and talk to some of the folks that are doing this today, look across the four activities, and ask those people where they spend their time, you find that about 60-70% of analysts’ time is spent in Big Data triage. That’s an important thing to think about, because that’s prior to anything everybody expects out of this. This is just: “how do we get the data and structure it in a way that I can then go and apply my tools to it?” So that portends what we need to invest more in, if our analysts have to spend up to 70% of their time just on data triage.

The second conclusion is that people are way underestimating that third activity, forecasting. We want the tools to go beyond just what I’d call the descriptive analytic and start getting into predictive analytics and cognitive analytics, which are higher orders – and you can’t get there if you don’t do that third column, where you have folks modeling and working with the data to build the kinds of tools that will help you do that.

Over the Horizon (OTH): There are a lot of ongoing conversations on the role of Artificial Intelligence (AI) in this kind of predictive analytic work. Is that something that figures heavily into this third activity?

JK: I shy away from the AI label because it gives people the wrong idea. I like “machine learning” which also gets grouped quite often with AI and yes machine learning is a big part of both the modeling and in some respects also a big part of Big Data triage. We can use machine learning to help us capture that streaming data and put it into a form which we need. It’s also quite useful for what we call unstructured data. So if you have large data sets, the simplest form would be “I have these huge documents. How in the world do I get to the data that’s inside?” Machine learning is one of the approaches to breaking that down. So yes, it plays heavily into this.

Another conclusion is about the fourth column, the collaborative analytics. If you look at where the investments are being made, there is way too little attention being paid to the last column. And if you don’t like the term collaborative analytics you could also say “the user environment” – what we want our operators to see and how they participate in the analytics and just the ability to do the things I named like Amazon.

Wouldn’t we like to have analysts that can sit down and, when they request data or information about a particular area, also see “people asking for this also asked for that,” and “Oh, did you know Mary, John, and Sam just asked this question, and they’re working in your region or your function?” All of those kinds of things would help, and we are investing so little in that area right now. I’ll even underline that a little more. If you look at the data itself – the data involved in data analytics, data as an asset – and look across these four activities, the commercial world is telling us they get by far the best value in that fourth activity, where the user interfaces with the data analytics. It’s not in the first, where they gather the data… where they’re getting the value is the user end of this. That should be a lesson for us: maybe our best value, if we can get involved in data analytics, is going to come from what the users can do, interact with, and get out of that interaction.

So you asked how Big Data analytics fits into the future vision and well it absolutely fits! We’ve got to somehow realize this.

Before moving on, I’d like to put one more point on this: none of this substitutes for what I would call classic analytics. Classic analytics is where we have a question, we try and go find out what data we have on it, we piece it together like a puzzle, and we provide the answers. We still need classic analytics, but Big Data analytics is kind of on the other side: let’s gather the data, let’s explore it, and then we might find stuff relevant to things we’re working on. Another way to describe the two sides is that the classic approach is deductive and Big Data analytics is inductive. We start with the data first; we don’t start with questions.

OTH: How is the US defense community progressing toward that vision?

JK: So I guess I’ll talk first about the Intelligence Community because the Air Force is behind on this. From an Intelligence Community standpoint the first part is to actually have the foundation to do all of this. The infrastructure is a program the Director of National Intelligence put together which is called ICITE. It stands for Intelligence Community (IC) and the ITE is Information Technology Environment. ICITE is basically the foundation to be able to do analytics. So think of it as providing the clouds, the services, the platform that you can put tools on, the interaction, the access, … all of that comes with that environment. So there is progress in that sense because ICITE has come well along and Director Clapper isn’t going to leave until it’s at a point where we can’t go back.

But when I talked about this framework and the report the folks did for Activity Based Intelligence in the major issues study, I kind of pulled what they did in that study up a level. They proceeded to take their framework and show where current projects are today in terms of the framework. I’ve kind of raised it and asked: “So where are data analytics projects across the IC taking place across that framework?” And what you find is two things.

First, if you think of them as bubbles – each bubble or program really concentrating on one aspect or working on one idea within data analytics – most of those bubbles, those programs in the IC right now, are concentrated on the left hand side, which is Big Data triage and forensic analysis. So investment-wise, these bubbles are under-emphasizing the modeling that’s necessary for the activity forecasting set of activities, and then, like I mentioned before, there’s almost nothing going on in collaborative analytics in terms of those projects. So that’s an investment look at where we are today.

A second part goes beyond investment. Along with ICITE and data analytics, the DNI began, about a year and a half ago, what he called mission campaigns. Mission campaigns are big ideas where they say, “Let’s see how ICITE and data analytics can help us tackle this big idea.” They had seven mission campaigns run from 2015 to 2016, and the conclusion they drew from the first round – and this relates to the bubbles – was that we aren’t sharing the infrastructure necessary to do these things. So every campaign, every project let’s say, has to kind of reinvent the entire framework for data analytics just for their project. Another way to say that is that every one of those projects has its own approach to the data triage and structuring rather than sharing one common one. So the idea is that, while we are doing data analytics, we have not reached the point where we can share it all. Obviously, it would be so much more efficient, and you might even be able to do more projects or initiatives, if you could somehow get to that common infrastructure, let’s call it for now. That’s the second conclusion.

A third conclusion I will mention real quick in terms of where we are today: if we think of data analytics as kind of new wine, we are trying right now to put new wine in old wineskins and the way they articulate this in the Intelligence Community is they say “our mission workflows need to change and need to change radically to accommodate data analytics.” We’re trying to do new business in old ways and it doesn’t work, it doesn’t fit.

This is where the Intelligence Community is. The Air Force is definitely way behind. We are grappling with the most basic questions, and if you look beyond the Air Force to the Department of Defense (DoD), it is also behind in this – the part of DoD that is the Military Services, that is. The part of DoD that is big agencies is a full participant in ICITE and where it’s going, but the Services are behind. Example: the DoD equivalent of ICITE is supposed to be DI2E, which stands for Defense Intelligence Information Enterprise. DI2E itself is going to be considered kind of a sub-bubble or bridging bubble to JIE, the Joint Information Environment. Each of those is trying to establish the same thing that ICITE is: that infrastructure for where we go. Well, I’ll just tell you, DI2E and JIE are probably at least two or three years behind where ICITE is, and ICITE, in crawl-walk-run stages, is just transitioning from crawling to walking. So we are behind.

OTH: So if it is supposed to be a common framework amongst everybody, why is the US building two different infrastructures, especially if one of these is years ahead of the other?

JK: That’s a great question. It’s because, for right now, the infrastructures are isolated by their information classification level. ICITE is at the Top Secret level, and the DoD is at the Secret and Unclassified levels. So JIE is going to be Secret and Unclassified, and that’s why it has to be on its own, and DI2E is kind of a bubble between them because it’s struggling with how to not become separated from the Intelligence Community and yet serve customers that are fighting on lower-classification networks. That is a big question DI2E is struggling with, or challenged with let’s say. And that’s why the separate information environments – it’s really classification levels right now.

OTH: What are your thoughts on what was described in a recent Foreign Affairs article as the “Age of Transparency” and its implications for the future of defense and national security? This includes things like the Snowden leaks and Twitter analytics or commercial imagery that reveal events and movements as they happen. Given that Big Data works best with volume and variety of data and that Open Source data will be an increasingly large contributor, how do you see future intelligence collection being shaped by this?

JK: Here’s how I think about it: it almost gets a little philosophical. I think of the issues that were raised in the Age of Transparency article as an almost separate paradigm shift from what we’re talking about in data analytics and Big Data, and here’s why. If you go back and look at intelligence for the last 50 years, the way we have approached it is that we draw on intel sources, we collect in an intel sense, and we make our analytic products and services from those intel sources, maybe adding what we now call open source where necessary – kind of the condiments that we would add to an intel product. What is coming true, what I believe is true today and we just haven’t shifted our own paradigm enough, is that the sources of information you can get from Open Source should be the foundation and the intel collection and sources should be the condiments, and that’s a massive reversal. I don’t remember if the author of the article used this, but I remember sharing it with him when we were discussing it.

There’s this anecdote going around that’s attributed to Robert Cardillo, who is currently the director of the National Geospatial-Intelligence Agency. He had a bunch of seniors around a table, and he said: let’s consider this table the whole sea of information that’s available to us to understand what’s going on around us. Then he took a piece of paper, tore the corner off of it, threw that corner on the table, and said: this is where the intelligence community spends all of its time, this little piece of paper of what we collect. How do we change our approach so that we’re taking advantage of everything? I think that’s absolutely what we have to do, and for us in the intelligence business, that’s the key issue going on with the Age of Transparency.

We have to shift our paradigm and recognize that the foundation of the knowledge that we need to do our products and services can come from Open Source and that we use our intelligence collection to fill the gaps, to be the condiment, to be the extra thing we need that isn’t already publicly available in some fashion or form. So to me, that’s where Open Sources come in and that’s why I consider it a separate paradigm shift from that of Big Data. They definitely overlap at some point, particularly when we talk about what we do but there are two different things going on. But they both have the roots, don’t they, the roots in technology of today and how we are digitizing everything.

OTH: Are the volume and validity aspects of Open Source data relevant here? The idea that the amount of publicly available data is massive but often of imprecise or questionable validity seems to fit the Big Data model that is focused more on truths that can be derived through data volume and less through data precision.

JK: There actually have been some studies on that. One study I know of asked: if we take one data set that’s very large but we clean it – take out all the redundant or dirty data so we have just a clean data set – and another that has everything, all the dirty data, in one big set, and we apply our analytics to both data sets to compare them, what do we find? What some of these studies have been finding is that you get better information out of the dirty data set. It has something to do with the volume you get, but also that some dirty data can point you to things you didn’t know were actually happening. So that’s why I downplay the idea – and there are other studies like that; I’m not relying on one study – that’s why I personally downplay the idea that open sources are full of deliberately misleading things and dirty data, because it appears the more data you have, the better equipped you are to actually find out things that are going on.
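A toy illustration of why redundancy can be signal rather than noise, with invented data: deduplicating the reports destroys the frequency information that made the dominant activity visible.

```python
from collections import Counter

# "Dirty" feed: redundant sightings, exactly as reported.
dirty = ["convoy@A", "convoy@A", "flight@B", "convoy@A",
         "convoy@A", "flight@B", "convoy@A"]

# Aggressively "cleaned" feed: every report deduplicated to one entry.
clean = sorted(set(dirty))

# The redundant copies are what reveal the dominant activity.
dirty_top = Counter(dirty).most_common(1)[0]   # ("convoy@A", 5)

# After dedup every item counts once, so the frequency signal is gone.
clean_counts = Counter(clean)
```

This is only one facet of the studies described (volume as signal); it says nothing about deliberately misleading data, which needs different treatment.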

And you brought something else up there too. I use analogies to talk about open sources versus intel data. I call one of them the video compression model. If you’re using one of those peer-to-peer video tools on your computer and you open a little window and you’re looking at each other, a lot of people don’t realize that behind that are algorithms that don’t transmit the entire actual picture to you all the time. What some of those compression techniques do is transmit one big picture, and after that only transmit the things that change – the moving person, or the mouth that’s moving – while the rest is still the static picture. Because if the rest isn’t changing, I don’t need to give you the repeat data. That’s one way to think about intelligence collection in the future: how do we set this up so that we sample areas to lay a foundation and then only look for what changes? It’s a different approach to how we do things. Open source might give us the static picture, and we might use intel collections for the things that are moving; that’s one way to think about it.
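The keyframe-plus-deltas idea behind that analogy can be sketched in a few lines. Frames here are just lists of values, a drastic simplification of real video codecs, but it captures "send the full picture once, then only what changes."

```python
def encode(frames):
    """Yield one full 'key' frame, then only the changed cells."""
    prev = None
    for frame in frames:
        if prev is None:
            yield ("key", frame)                 # the static picture, once
        else:
            delta = [(i, v) for i, v in enumerate(frame) if v != prev[i]]
            yield ("delta", delta)               # only the moving parts
        prev = frame

def decode(packets):
    """Rebuild each frame by applying deltas to the last full picture."""
    frame = None
    for kind, payload in packets:
        if kind == "key":
            frame = list(payload)
        else:
            for i, v in payload:
                frame[i] = v
        yield list(frame)

frames = [[1, 1, 1, 1], [1, 2, 1, 1], [1, 2, 3, 1]]
packets = list(encode(frames))
# Round-trips back to the original frames.
assert list(decode(packets)) == frames
```

In the collection analogy, open source supplies the key frame and intel collection supplies the deltas.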

Another way to think about it is something called monovision, and I’m familiar with it because I’m a near-sighted guy, but as I get older I also need some assistance with reading. I like contacts, but I have that difference in vision requirements, so what they’re doing for many people now is one contact that is more focused on far vision and the other adjusted to handle near vision. So it’s not bifocals; it’s one eye focusing on one challenge and the other eye on the other. I like to apply that to what we’re talking about here. Maybe open sources can provide, let’s say, the far vision, and maybe I only need intel to do the near vision, but put together I have full vision. That’s another way to think about how we manage the balance between these two things.

Over the Horizon (OTH): Big Data analytic constructs figure significantly into operating in a more fluid and integrated multi-domain future. What are your thoughts on how that future system would help identify cross-domain threats as well as opportunities for cross-domain pivots and action?

Dr. Jon Kimminau (JK): I hadn’t tried to bridge these thought streams before and it’s kind of a challenging question. I think there’s no doubt that if you bring all the data together you ought to have no domain boundaries, right, and that certainly seems to be—at least in one sense—part of what I was talking about in breaking down all those stovepipes on the data entry side of things anyway. So if we have all that data, we should be able to do cross-domain thinking, shouldn’t we?

Well, maybe not. There are three challenges in data analytics, even in our approaches, that we have to pay attention to if we don’t want to become domain focused in what we’re doing. The first is stovepiping the data. One of the challenges out there at this early stage – you know, I mentioned all these different projects and campaigns going on as bubbles – is that although it’s Big Data, each one of them is only going after particular types of big data, particular sources. They aren’t actually going after all data yet. So as soon as you say I’m only going after some data, you have immediately put up some boundaries. You’re limiting your domain somehow.

So hypothetically, we could be applying an activity based intelligence approach, but the only data we’re looking at is data for, let’s say, the Middle East. Well, we can certainly get a lot of value out of that, but we are probably not going to see things crossing, let’s say hypothetically, to Russian operations in the Ukraine or even up in the Baltics. We aren’t going to be able to see any connections there if we are only looking at that area in terms of data. So that would be the number one challenge: if we stovepipe the data, we aren’t going to be able to do cross-domains.

The second challenge would be in how we might stovepipe our questions. If, let’s say in our program of analysis or in our national intelligence priorities framework, we’re asking the questions and saying pay attention to country X, we may have created some boundaries to seeing cross-state or cross-functional or cross-domain activities, just because the questions we ask people to pay attention to immediately set some boundaries.

And the third challenge, which may sound related to that, is the organizing people issue. Let’s say I have a shop that is Africa focused and we may do great at seeing things across African nations, but we aren’t necessarily going to see things, or we may have an impediment to seeing things, crossing from other nations into Africa or other continents.

So that was my first reaction to your question. I certainly think the foundation is there – I think it’s obvious the foundation is there – for us to do cross-domain analytics, but the dangers are, let’s say, domain boundaries on the data, on the questions we ask, or on how we organize our people to do the business.

OTH: Thinking about these three challenges you mention to using Big Data analytics to enable fluid multi-domain operations, does this have implications for military Services that are organized primarily by domain (Air and Space, Land, and Sea)? An Air Force leader is going to ask questions that are primarily air related and organize their intelligence force around answering air-related questions.

JK: Definitely. To me, the things that people aren’t openly talking about or paying attention to are issues like this. Related to this is the history of the Intelligence Community and the organizations we have. They are fundamentally about intelligence sources and so we have people in each of these big agencies who work on that particular source, have specialized in that source. If we truly reach a point where we are sharing all the data, have we not just removed one of the fundamental purposes of the separate agencies? Do we need to rethink how we’re organized, perhaps not organized by type of intelligence, but organized by something else? I don’t think anyone is looking at that, but I think it’s a brick wall we’re going to hit in the near future if we realize what we’re trying to do.

Related to that is a question I got asked by a Senate staffer when I was briefing our vision of where we want to go with data analytics. He asked me: "Well, okay, I get it, and yeah, I can see how the analysts would be better equipped to answer more and do it better, but what drives what they look at to do this? If they can access all data, then why wouldn't Sue and Jim and Bob, even if they're at three different places, all be chasing that really exciting activity over in Country X? Or why wouldn't all of them be looking at what's happening in Ukraine? What drives them to look at different places?"

I call this the children's soccer game. Once we provide everybody all the information to go look at, we don't necessarily have anything in place to stop it from becoming a children's soccer game, where everybody gathers around the current most exciting question and tries to answer it in 10 or 12 different ways. So that's another over-the-horizon thing I'm not sure we're grappling with. If we break down the boundaries in the data, and you said it before, how do we now organize our people? And what missions do we give them?

OTH: Related to this, there is a sense that advanced analytics will compress the decision-making cycle. How do you see future analytics impacting decision-making and the time component?

JK: This is another interesting one, and I hadn't thought about that in depth either, but it leads me to philosophy again. Does data analytics compress decision cycles? Well, my first question would be: what causes us to wait on decisions? Do they wait on getting more information, or do decisions get paced by events? Philosophically, I lean toward the latter, that decisions are paced more by events than they are by the amount of information we have. Is data analytics going to change the pace of events, or is it going to change the amount or quality of information we have? I think data analytics changes the latter, which leads me to my first-blush answer to the question "Does it compress decision cycles?" I don't think it does. That may be a radical answer, but I don't think it compresses decision cycles, because I think decision cycles are driven by things outside what we put in our Observe-Orient-Decide-Act cycle. The things that influence us to complete a cycle are external to those four steps. I think they're events that drive us to say, "I have to have a decision now."

It also relates to an old adage I used with people as a commander and supervisor. Sometimes people do sit around saying, "I have to have more information, I have to have a study before I decide that." And my old adage is that if you have full information, you no longer have a decision. That's not what decision making is about. Decision making is always about making a choice when you don't have full information. So I guess it relates to this in the sense that, again, I don't think better information-generating tools affect our decision cycles.

OTH: A final question we ask all Over the Horizon guests: what is something just over the horizon that the international security community should be paying close attention to or trying to figure out?

JK: Well, this has nothing to do with analytics, but I personally am still surprised that we haven’t seen a terrorist set off some kind of nuclear or radiological weapon, because there is no concern for life there, there’s no concern for many of these guys with what they do. I’m just personally surprised we still haven’t seen that happen and I just keep expecting to see that around the corner or over the horizon. It doesn’t relate to data analytics, but that’s one that I still think of.

Jon “Doc” Kimminau is the Air Force Analysis Mission Technical Advisor for the Deputy Chief of Staff, Intelligence, Surveillance and Reconnaissance. He is a Defense Intelligence Senior Leader (DISL) serving as the principal advisor on analytic tradecraft, substantive intelligence capabilities, acquisition of analysis technology, human capital and standards. Previously, he served nearly 30 years on active duty as an Air Force intelligence officer. Dr. Kimminau holds a Master’s in Public Policy from the Kennedy School of Government, Harvard University, a Master’s in Airpower Art and Science from the School of Advanced Airpower Studies (SAAS), and a PhD in Political Science from the Ohio State University.

This interview was conducted by Sean Atkins, Editor-in-Chief of Over the Horizon, on 14 December 2016.

The views expressed are those of the participants and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
