Artificial Intelligence: A Non-State Actor’s New Best Friend

By: Paige Young
Approximate Reading Time: 8 Minutes

Excerpt: The Director of National Intelligence's 2019 Worldwide Threat Assessment identifies Artificial Intelligence (AI) as an emerging disruptive technology. While much of the focus is on AI's economic and military impact, one application being overlooked in the defense community is the potential for non-state actors to use AI to expand their capabilities at low cost.

The Director of National Intelligence's 2019 Worldwide Threat Assessment identifies AI as an emerging disruptive technology that will present military, economic, ethical, and privacy challenges to United States (US) national security. AI and its military applications, however, are not as new as many in the defense sector assume. According to Harvard University's Belfer Center for Science and International Affairs, partially autonomous and intelligent systems have been used in military technology since at least World War II. More recent advances in AI, particularly over the past five years, represent a turning point in the use of automation in support of warfare. While much of the scrutiny of AI focuses on how it will change conventional warfare, it should not be overlooked that AI advancements will also create new opportunities and make existing capabilities more affordable to a broader range of actors, including non-state actors. Specifically, AI developed for military programs and the repurposing of commercially available AI technology have the potential to improve non-state actors' funding, recruitment, deception activities, and, ultimately, their lethality.

In February 2019, the current administration launched the "American Artificial Intelligence Initiative," a five-pillared Executive Order focused on enhancing AI development and standards in alignment with America's national values. Although not explicitly stated, the Executive Order effectively signaled US acknowledgment of an AI arms race with China. China's growth as an AI leader and near-peer competitor raises national security concerns on many fronts. With regard to non-state actors, China's AI capabilities are particularly concerning because regulations on the use of AI in the defense sector and on arms control, although under discussion, do not yet exist. China has a reputation for exporting lethal technologies to US adversaries and is already proliferating AI technology to many regimes. The current lack of regulations means there is no legal framework monitoring or prohibiting China from selling AI technology directly to irresponsible states or to non-state actors with malicious intent, such as hacktivists, terror organizations, or cyber criminals.

China already exports AI-powered surveillance and facial recognition technology to over a dozen authoritarian regimes around the world, including Venezuela, Egypt, Saudi Arabia, and Zimbabwe. This type of surveillance technology facilitates the establishment of security states and undermines the development of free and open societies. Although this trend is alarming, the more significant concern here is not the proliferation of surveillance technology but that, once lethal autonomous technology is more refined, China may also export it to oppressive regimes that will use it irresponsibly, lose control of it, or perpetuate its sale on the black market to non-state actors. Lethal autonomous technology combined with commercially available AI could increase non-state actors' lethality while minimizing their exposure and reducing the human capital, intelligence, and expertise needed to accomplish specific tasks.

For instance, non-state actors could take advantage of AI automation of high-skill tasks, such as self-aiming, long-range sniper rifles. AI will also enable the automation and coordination of tasks using drones. Long-range drone package delivery could give non-state actors access to a cheap, long-range precision strike capability with an extremely small signature that also flies low enough to avoid radar detection. Greater autonomy increases the capability of an individual or small group to conduct large-scale attacks and cause significant damage using drone swarms or strikes. The software required to carry out attacks using facial-recognition, navigation, and multi-agent swarming technology can increasingly be acquired commercially. Swarming technology is particularly dependent on AI because humans cannot realistically coordinate and continuously update the flight paths of each individual drone, as the sketch below illustrates. Drone swarms could be deployed via distributed networks to conduct surveillance, execute coordinated attacks, or use facial recognition technology to kill specific members of a crowd, in place of less surgical forms of violence. Autonomous weapons will make challenging tasks such as assassinations, subduing populations, or selectively killing a particular ethnic group easier for non-state actors.
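To make the coordination point concrete, consider a minimal "boids"-style flocking sketch in Python. This is a textbook teaching example, not any real drone control system; the agent count, sensing radius, and weighting constants are all arbitrary assumptions. The point is that every agent recalculates its velocity from its neighbors' states on every tick, a continuous per-agent update that no human operator could perform for a swarm.

```python
import numpy as np

# Toy flocking model: each agent steers by three classic rules
# (cohesion, separation, alignment) computed from nearby agents.
N = 20                                  # number of agents (arbitrary)
rng = np.random.default_rng(0)
pos = rng.random((N, 2)) * 100.0        # starting positions in a 100x100 area
vel = rng.standard_normal((N, 2))       # random initial velocities

def step(pos, vel, dt=0.1):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists > 0) & (dists < 25)      # local sensing radius
        if neighbors.any():
            # Cohesion: drift toward the local group's center
            cohesion = offsets[neighbors].mean(axis=0) * 0.01
            # Separation: push away from agents that are too close
            separation = -offsets[(dists > 0) & (dists < 5)].sum(axis=0) * 0.05
            # Alignment: match the neighbors' average heading
            alignment = (vel[neighbors].mean(axis=0) - vel[i]) * 0.05
            new_vel[i] += cohesion + separation + alignment
    return pos + new_vel * dt, new_vel

# Run 100 ticks: 20 agents, each updated every tick — the workload
# that makes software, not human piloting, a hard requirement.
for _ in range(100):
    pos, vel = step(pos, vel)
```

Even at this toy scale, one simulated second involves hundreds of per-agent decisions; scaling to dozens of physical drones in real time is only feasible with this kind of automated control loop.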

[Image: Killer Robot Drones]

Another area in which non-state actors will benefit from AI technology is funding. While cyber theft is not new, AI will, through increased automation, enhance the ability and number of foreign cyber criminals who conduct for-profit cyber-enabled theft and extortion against US networks and persons worldwide. Advancements over the past five years in narrow AI are enabling target prioritization for cyber-attacks and cyber operations that raise funds through hacking, spear-phishing, or dialogue with ransomware victims. Narrow AI, typically implemented through machine learning, refers to robots or automation programs that make decisions and recommendations for a single, specific task based on collected data. Large datasets are used to identify victims more efficiently by estimating personal wealth and willingness to pay based on online behavior. Spear-phishing attacks require a significant amount of time and skilled labor, often through the exploitation of social media and professional networks, to generate messages with convincing content. If these research and synthesis tasks can be automated, then more actors may be able to engage in spear-phishing or other types of cyber operations geared towards fraudulent and profit-driven activities.
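For readers unfamiliar with the term, the sketch below shows what "narrow AI" amounts to at its most basic: a model that learns one decision rule from labeled data and then scores new cases automatically. It uses entirely synthetic data and the open-source scikit-learn library; it is a deliberately generic teaching example, not a depiction of any criminal tooling.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic dataset: 1,000 "cases" described by 10 numeric features,
# each labeled 0 or 1. In a real application the features and labels
# come from collected data; here they are randomly generated.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Narrow" AI: the model learns one decision rule for one task, nothing more.
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# The trained model now classifies previously unseen cases on its own —
# the automation step the article describes.
print("held-out accuracy:", model.score(X_test, y_test))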

AI's ability to automate data collection and the analysis of mass-collected data will expand threats associated with persuasion, deception, and social manipulation. Non-state actors will be able to use AI technology for precisely targeted recruiting, exploiting an individual's social media or internet browsing habits to evaluate that person and steer them towards recruitment. Social media profiles have already been shown to be reasonably predictive of certain psychological conditions, such as depression. A non-state actor with sophisticated AI technology would be able to analyze an individual's behaviors, moods, beliefs, and vulnerabilities from available cyber data and social media activity, then deliver the right message at the right time to maximize the potential to persuade and recruit that individual.
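The underlying technique is ordinary text classification. A toy sketch, using a hypothetical six-post corpus and scikit-learn, shows how mood can be scored from text at the smallest possible scale; the research linking social media language to conditions such as depression rests on the same principle, applied to vastly larger datasets and richer features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented corpus: posts hand-labeled by mood (1 = negative).
posts = [
    "had a great day with friends",
    "feeling hopeful about the future",
    "everything is going wrong lately",
    "so tired of feeling alone",
    "excited for the weekend trip",
    "no one understands what i am going through",
]
labels = [0, 0, 1, 1, 0, 1]

# Pipeline: turn text into word-frequency features, then fit a classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post: the model's probability that the text reads as negative.
print(model.predict_proba(["i feel completely hopeless"])[0][1])
```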

Aside from the abuse of lethal automation, one of the most disturbing capabilities AI will bring to bear for non-state actors is deception. AI technology enables what have come to be known as "deep fakes": the digital impersonation of people not only in the online environment but also through synthetic voice and image replication. Through the generation of synthetic images, text, or audio, AI could be used to impersonate individuals or personal contacts online to spread polarizing content and disinformation, or to mimic real contacts' writing styles to elicit information or conduct social engineering. Voice and even image impersonation technology has also made significant strides towards authenticity and is being commercialized. As this technology develops further, it may enable a chatbot to impersonate a real person in a video chat, or fake news reports with realistic fabricated video and audio. This technology could be abused to produce audio or video clips of world leaders making inflammatory comments, false promises, declarations of war, or economically consequential statements they never actually made. Once the technology is refined, it will be extremely difficult to control the content produced and to establish the true narrative.
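Many deep-fake generators rest on adversarial training: one network produces synthetic samples while a second tries to tell them from real ones, and the two improve each other. The sketch below, a toy generative adversarial network in PyTorch that merely learns to mimic a one-dimensional number distribution, shows that loop at minimal scale; real image and voice deepfakes use the same structure with far larger networks and datasets.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator. The same adversarial structure
# underlies image/audio synthesis, at vastly larger scale.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

# "Real" data to imitate: samples from a normal distribution centered at 2.0
def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0

for step in range(2000):
    # Discriminator step: learn to score real samples 1 and fakes 0
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: produce fakes the discriminator scores as real
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# The generator's output mean should drift toward 2.0 — synthetic
# samples that are statistically hard to tell from the real ones.
print(G(torch.randn(1000, 8)).mean().item())
```

The difficulty of "proving the true narrative" follows directly from this design: the generator is explicitly optimized until its output defeats a detector.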

How the AI race between China and the US plays out will undoubtedly affect the world's global economic and security power dynamic. Unlike nuclear weapons, which are expensive and require hard-to-obtain components, many AI applications will be cheap to mass-produce or commercially available. The wider accessibility and affordability of AI make it only a matter of time until AI technologies appear on the black market for nefarious use. AI will inevitably enable non-state actors to conduct more attacks with less manpower, less funding, and less time, while remaining effective, surgically targeted, and difficult to attribute. This narrative is not meant to downplay the positive effects AI technology will bring to the civilian and defense sectors. It is, however, meant to highlight that AI will have effects, some of them negative, beyond what was originally anticipated. As a dual-use technology, many of the malicious uses of AI outlined here also have related legitimate uses. In some cases, the difference between legitimate and illegitimate uses of AI will come down to building appropriate safeguards. For example, online surveillance tools can be used to catch terrorist planning activities.

While AI's development and the ethical debates surrounding it are still too immature for a one-size-fits-all solution, exploring arms control agreements and continuing discussions on regulating lethal autonomous capabilities is a start. In fact, the United Nations Convention on Certain Conventional Weapons has already received an open letter, signed by hundreds of scientists and defense industry members, expressing concern over the development of lethal autonomous technology. The wide cast of players who already possess AI technology, its varying degrees of sophistication, and its eventually lower cost all make controlling AI challenging. If the issue is not addressed soon, though, it will become impossible to regulate later.

Paige Young is an Intelligence Officer and student in the Multi-Domain Operational Strategist concentration at the United States Air Force’s Air Command and Staff College. She previously served as the Director of Operations for the 11th Special Operations Intelligence Squadron.

Disclaimer: The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the United States Government.

