By: Ryan Hilger
Reading Time: 9 Minutes
In May 2018, President Donald Trump, following several years of fretting over Russian and Chinese initiatives in artificial intelligence (AI), established a Select Committee on Artificial Intelligence to ensure the United States retains technological superiority. The Department of Defense is nearing completion of its departmental AI strategy to scale the success of initiatives like Project Maven, whose partner, Google, has come under fire for its relationship with the Department. Meanwhile, quietly in the background, all of the services are starting initiatives to leverage AI to advance their capabilities.
It takes a wide variety of skill sets and personalities to bring a technology like AI into the military: developers, managers of many stripes, programmers or software engineers, data scientists, and strategists, to name a few. What I have been doing, at least according to a recent Harvard Business Review article, is translating; specifically, translating the technical aspects, problems, and capabilities of AI algorithms into conceivable defense capabilities, primarily for the Submarine Force. Over the past six months, that function has put me into myriad efforts around the Department of Defense to bring AI into the fold. These are my observations on our efforts thus far.
Parallels with History
Analogies to history abound when discussing the capabilities that AI can bring to the military. Many argue that it will produce a revolution in military affairs on par with Sputnik or nuclear weapons. As a student of history, I believe that holding AI to the same level as nuclear weapons rests on flawed assumptions about what AI will do for us. The development of AI is likely more on par with the development of naval nuclear propulsion or the Global Positioning System, technologies that fundamentally changed how we operate, but not the nature of war. The technology is itself niche and will take tremendous resources to bring to fruition, but the resulting enhancement of existing capabilities will allow us to fight better, and perhaps slightly differently.
The truly disruptive innovations — the revolutions in military affairs — come from the architectural linkage of technology and doctrine in a new way. USS Langley (CV 1) first entered service as an aircraft carrier in 1922. Subsequent aircraft carriers were used primarily as fleet scouts for the battleships. Admiral Joseph Reeves slowly developed the tactical doctrine needed to employ carrier air power offensively, culminating in wildly new and disturbing results in the Fleet Problems of the late 1920s and early 1930s. The development of maritime commerce raiding by the submarine force, again trained during the interwar period as fleet scouts, provides yet another parallel.
Portrait of Hyman Rickover from “The Rickover Effect” by Theodore Rockwell
The development of nuclear propulsion provides an even more compelling case study for the development of AI. Then-Captain Hyman Rickover, shuttled to a career-ending and out-of-the-way posting at Oak Ridge National Laboratory, saw the potential for nuclear fission to power warships. He sought not to develop it for its own sake, but because it would immediately remove some of the most difficult operational constraints of the current diesel submarines: speed, endurance, and range. His subsequent decision to press forward with designing the first nuclear power plant for a submarine and not a land-based reactor as many advocated allowed the industry to tackle the hardest problem first and scale to larger, less complex solutions — a lesson the military would do well to remember.
Artificial intelligence will only be as good as the data we use to train it, which will be derived from current capabilities. By default, this limits us in the near term to evolutionary, not revolutionary, advancements in capabilities. Other historical anecdotes show that, on average, it will take 10–20 years from the arrival of a new technology to architecturally link it with doctrinal advancements and create a new form of warfare. AI will not change how we fight tomorrow.
Industry, Academia, and the Pursuit of Knowledge
Unlike most technologies, which are protected by intellectual property laws, AI has been phenomenally open source. Any effort to put new techniques, algorithms, or knowledge behind a non-disclosure agreement, or even a paywall, has been met with fierce resistance. This seems like a tremendous advantage for us, but in reality the playing field remains level, since the academic papers, industry conferences, and the like report only the mathematics, theory, and basic algorithms. The true capability lies in the training of those algorithms: the data sets are the secret sauce. The underlying beauty of these algorithms is that the exact same algorithm can be trained with different data sets, resulting in wildly different capabilities and results. Data is a new strategic asset.
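The point about identical algorithms diverging on different data can be illustrated with a deliberately tiny sketch. Everything here is hypothetical: a toy one-dimensional nearest-centroid classifier and invented labels, standing in for any real learning algorithm.

```python
# Toy illustration: one learning algorithm, two data sets, two capabilities.
# The "algorithm" is a 1-D nearest-centroid classifier over (value, label) pairs.

def train(examples):
    """Learn one centroid (mean value) per label from (value, label) pairs."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, value):
    """Classify a value by its nearest learned centroid."""
    return min(model, key=lambda label: abs(model[label] - value))

# The identical training code, fed two different (hypothetical) data sets:
maritime_model = train([(10, "fishing vessel"), (90, "warship")])
aerial_model = train([(10, "drone"), (90, "airliner")])

print(predict(maritime_model, 20))  # fishing vessel
print(predict(aerial_model, 20))    # drone
```

The same `train` function, given maritime data, produces a maritime classifier; given aerial data, an aerial one. The code never changed, only the data, which is why the data sets, not the published algorithms, are the strategic asset.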
Recent headlines surrounding the protests and resignations of many Google employees after learning of the company's involvement with Project Maven are symptomatic of the larger Silicon Valley attitude toward working with DoD, especially in the area of AI. Most believe that DoD seeks to develop SkyNet or the Terminator and ethically do not want to be a part of that effort, or have their work used to that end; the short video SlaughterBots accurately reflects the general attitude in Silicon Valley. Those types of algorithms, artificial general intelligence, are a long, long way off, and will likely remain a highly sought-after goal for many decades to come. Even though current efforts in DoD are not aimed directly at lethal autonomous weapons systems, most in Silicon Valley are still not comfortable cooperating with DoD. The double standard when it comes to accepting Chinese money for similar research suggests either a blind eye or a profound naiveté in industry.
China is investing billions of dollars in artificial intelligence research in the Silicon Valley area and in academic powerhouses across the country. The high degree of civil-military fusion in China virtually ensures that commercial investments in AI companies will feed military development. At present, DoD cannot compete with that level of investment. A recent data-stripping analysis of the federal contracts database revealed that DoD's investment in anything even remotely AI-related was between $3 and $7 billion in FY18, with over 80 percent of it going to the traditional "big 5" defense contractors: Boeing, Lockheed Martin, Northrop Grumman, Raytheon, and General Dynamics. These are not the companies we think of when we talk about leading-edge competitors in the AI space. So in reality, DoD's investment in groundbreaking AI research is incredibly small.
Most companies that seek to fund additional basic research at US universities or to start their own AI research divisions face a stark choice: be willing to commit tens to hundreds of millions of dollars to the effort, or work with industry and academia through retainer-like relationships and other sponsorship programs to guide research objectives toward their goals. The cost of acquiring AI talent is eye-watering. A graduate student coming into industry with a master's degree in computer science and AI from a leading university in this field can expect a starting salary in the hundreds of thousands of dollars. Recruiting and retaining senior AI engineers with a decade of experience will cost millions in annual salary per engineer. DoD simply does not have the mechanisms to compete with industry in this arena, and should look at developing its own internal talent pool or creating new acquisition frameworks to guide developments and bring knowledge in-house. Given the open-source nature of AI, our existing workforce is more than capable of learning to develop AI algorithms, and that avenue must be pursued.
Talent acquisition and commentary about the defense acquisition system aside, DoD faces many hurdles to fully harnessing and exploiting the possibilities that AI can create. The largest barrier appears to be cultural and knowledge-based. There are very few experts in AI within DoD, and many leaders at all levels simply do not understand, even at the most basic level, what AI is and what is needed to develop it. This stops development in its tracks, not usually for malicious reasons, but simply because we as leaders cannot or have not taken the time to educate ourselves or our subordinates to make informed decisions. The importance of this education and training cannot be overstated.
DoD has a data problem. Many leaders state that we have a plethora of data. We do, but the quality and usability of that data likely make the vast bulk of it unusable for AI training purposes. The curation, conditioning, storage, transfer, and use of data require a specialized cadre of engineers or data scientists to manage. AI algorithms require voluminous amounts of data to train, and feeding an algorithm garbage data will produce a garbage algorithm. Leaders must take the time to learn, at least at a basic level, what effort is required to make their data work for them. It is not small.
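A minimal sketch of what "curation and conditioning" means in practice may be useful. The record layout, field names, and valid range below are all invented for illustration; real pipelines involve far more, but the shape of the work is the same: screen out the garbage before anything is trained on it.

```python
# Hypothetical data-curation step: filter raw records for the defects that
# make data unusable for training (missing fields, out-of-range values,
# duplicates). Field names and thresholds are illustrative only.

def curate(records, valid_range=(0.0, 100.0)):
    """Return records fit for training, plus a count of what was discarded."""
    lo, hi = valid_range
    seen, clean, discarded = set(), [], 0
    for rec in records:
        value = rec.get("sensor_reading")
        key = (rec.get("track_id"), value)
        if value is None or not (lo <= value <= hi) or key in seen:
            discarded += 1          # garbage in would mean garbage out
            continue
        seen.add(key)
        clean.append(rec)
    return clean, discarded

raw = [
    {"track_id": 1, "sensor_reading": 42.0},
    {"track_id": 2, "sensor_reading": None},    # missing value
    {"track_id": 3, "sensor_reading": 9999.0},  # out of range
    {"track_id": 1, "sensor_reading": 42.0},    # duplicate
]
clean, discarded = curate(raw)
print(len(clean), discarded)  # 1 3
```

Even this toy filter discards three of four records, which mirrors the essay's point: a "plethora of data" can shrink dramatically once quality is enforced, and someone has to build and run this machinery before any training begins.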
Disruptive innovations require a champion to succeed. This is not the virtuous insurgent trying to drive the change, but the high-ranking officer or leader with the power to protect those efforts and foster development. Then-Captain Rickover, despite his tense relationship with the Bureau of Ships, had a single admiral above him who believed in what he was doing and protected him from efforts to terminate his project. Similar stories appear across revolutionary military developments. Some product champions have emerged for selected programs, but at this point, DoD as a whole is only vocally supportive of AI development efforts. It is likely too soon to tell.
Overall, the impressions that I have gleaned from my perch at the crossroads of AI efforts within DoD are very positive, despite the challenges presented here. There are significant hurdles, but there are also several early successes, particularly Project Maven, that are helping to shape attitudes toward AI within DoD. Widespread development of AI across the enterprise will present cultural, organizational, financial, and other hurdles, but they are surmountable with a good product champion, robust joint cooperation among the services, and a willingness to accept failure.
Lieutenant Commander Ryan Hilger is a Navy Engineering Duty Officer and a former Submarine Officer. He has served as a requirements officer in the Undersea Warfare Division (N97), Office of the Chief of Naval Operations, where he developed Artificial Intelligence requirements for applications in the Submarine Force.
The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Navy or any organization of the US government.
This article originally appeared in the Defense Entrepreneurs Forum and is cross-posted with permission.