Ethics in Autonomy

By Jon Farley


The idea of killer robots annihilating the human population is as old as machines themselves. From Terminator's Skynet to War of the Worlds, Hollywood's vision of autonomy is synonymous with global warfare and the worst of human nature. Most rational people can understand the fear of humanity-eradicating killer machines. Unfortunately, the ubiquity of autonomous robots, the lack of public discourse, and the absence of legal restraint put us in a position where we have not established boundaries or norms that might prevent this apocalyptic vision from becoming reality.

What is autonomy?
The United Nations' Convention on Certain Conventional Weapons (CCW) has debated lethal autonomous weapons systems (LAWS) over the past several years, but despite all of this work, even a basic definition of the term has proved a challenge.

The Future of Life Institute, an organization dedicated to shaping the future use of technology, defines autonomous weapons as weapons that "select and engage targets without human intervention." We will use this simple definition as a baseline for understanding the spectrum of autonomy. Other entities, such as the automobile industry, have established standards of automation, which also have relevant military applications. For simplicity, we will define non-autonomous machines as those that require human interaction for targeting and firing, such as a basic rifle. Semi-autonomous, or human-in-the-loop, weapons still require some sort of human interaction, such as targeting on most modern missile systems. Fully-autonomous weapons target and fire without human interaction. They can include a human override, or human-on-the-loop, but absent human intervention they rely solely on algorithmic logic for target selection and weapons employment. With that said, many of our current semi-autonomous weapons have fully-autonomous capabilities but are software-restrained due to concerns about weapons-release authority.
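To make this spectrum concrete, here is a minimal sketch that models the three levels as a simple data structure. The names `AutonomyLevel` and `release_requires_human`, and the descriptions attached to each level, are illustrative assumptions for this article, not a reference to any fielded system or formal standard.

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Illustrative taxonomy based on the definitions above; not a fielded standard."""
    NON_AUTONOMOUS = "human performs targeting and firing (e.g., a basic rifle)"
    SEMI_AUTONOMOUS = "human-in-the-loop: machine assists, a human authorizes weapons release"
    FULLY_AUTONOMOUS = "human-on-the-loop or none: machine selects and engages targets"

def release_requires_human(level: AutonomyLevel) -> bool:
    """Hypothetical software restraint: anything short of full autonomy
    keeps weapons-release authority with a human."""
    return level is not AutonomyLevel.FULLY_AUTONOMOUS

if __name__ == "__main__":
    for level in AutonomyLevel:
        print(f"{level.name}: human release required = {release_requires_human(level)}")
```

The point of the sketch is the last line of the restraint function: under this framing, the difference between a semi-autonomous and a fully-autonomous weapon can be as small as a single software gate.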

The difference with autonomy
To begin with, semi-autonomous weapons are inexpensive. During the Battle for Mosul, ISIS flew an estimated 300 drone missions with a fleet of weapons that cost about $650 each. Furthermore, the recent drone attack on Venezuelan President Nicolas Maduro as he gave a speech in Caracas, though unsuccessful, highlights how these commercial-off-the-shelf (COTS) technologies are used outside of a warzone. While these weapons currently operate in non-autonomous or semi-autonomous modes, with a decent inertial navigation system (INS) it is not hard to conceive of drones that strike a preset time and place over a closed system without real-time human intervention.

The relatively low cost of automation stands in stark contrast to traditional military forces. The cost to put a Marine through basic training is estimated at $45,000. The cost of a combat-deployed soldier, including follow-on training and equipment, is estimated between $850k and $1.2M per year. Conversely, the DOD's TALON robot with a full weapons suite currently costs $230k, and the price is expected to fall with full-rate production.

When comparing aircraft, the MQ-1B Predator is estimated at a cost of $4.1M ($20M including the reusable ground station and satellite link), with the upgraded MQ-9 Reaper estimated at $16.9M. This pales in comparison to the other aircraft acquisition programs on the books. While these systems are not high-end performance aircraft like the $165M F-35, they still provide substantial savings over previous intelligence, surveillance, and reconnaissance (ISR) platforms and save wear and tear on the top-end fighters that would otherwise perform the same missions. These systems are not fully-autonomous, but it is not a stretch to see how the incorporation of technology such as the AI-focused Project Maven could turn these weapons systems from simple observers into full-scale analysts and, eventually, autonomous employers of weapons.
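For a rough sense of scale, the snippet below simply tabulates the cost figures cited above and compares them. The numbers are this article's estimates, not authoritative acquisition data, and the comparison against the F-35 is purely illustrative.

```python
# Rough cost comparison using only the estimates cited in this article
# (not authoritative acquisition data).
costs = {
    "ISIS COTS drone": 650,
    "TALON robot (full weapons suite)": 230_000,
    "Combat-deployed soldier, per year (low estimate)": 850_000,
    "MQ-1B Predator (airframe only)": 4_100_000,
    "MQ-9 Reaper": 16_900_000,
    "F-35": 165_000_000,
}

f35 = costs["F-35"]
for name, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,} (roughly 1/{f35 // cost} the cost of an F-35)")
```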

The second main advantage of autonomy is speed. The ability to return enemy fire with little-to-no response time could mean the difference between life and death for friendly forces. While the US is currently committed to keeping a human-in-the-loop, rumors of Russian autonomous "dead hand" systems abound, particularly regarding their nuclear arsenal. The US learned some hard lessons in the early push towards autonomy, most famously with Rules of Engagement (ROE) aggregation and the shootdown of multiple coalition aircraft, including a US Navy F/A-18, by Army Patriot batteries during Operation Iraqi Freedom. Even now, semi-autonomous weapons, such as the Navy's Mk-15 Close-In Weapon System (CIWS), blur the lines between semi-autonomous activation and autonomous targeting and firing.

Outside of weapons systems, autonomous aggregation of data has other implications for the military. Aggregation has been used in financial automation for years, with a military application in the aforementioned Project Maven. On the surface, AI is being used simply to process data; in actuality, these data directly inform decisions, many of which result in kinetic employment. So, in large part, we are already automating within the kill chain, but at an "acceptable risk level" for commanders.

Finally, as David Francis describes in The Fiscal Times, autonomous soldiers are resilient. They have no thoughts of self-preservation and no PTSD to identify and treat afterwards. They would not act out of anger or frustration and could even monitor the ethical behavior of humans.

Ethics in Robotics
The true question of autonomy comes down to an ethical red line. How much human interaction is required? Are we comfortable with humans simply tweaking algorithms for machine autonomy? Or will we require a human in the loop for any weapons-release authority?

[Figure: the spectrum of autonomy, ranging from human-controlled (green) through semi-autonomous (yellow) to fully autonomous (red)]

Taking this argument to its next logical conclusion, if the US is currently in the green to yellow portion of the above spectrum, where are our adversaries? If the Chinese, Russians, Syrians, or violent extremist organizations see a technological disadvantage, they may be more likely to move straight to automation in order to close the gap. This is the story of warfare, where superiority in one area leads to a response in another. But as previously stated, will self-imposed limitations prohibit our progression in automation? If so, are we willing to accept the risk of a delay in employment, even if only a few hours?

As mentioned earlier, the United Nations' Convention on Certain Conventional Weapons (CCW) is currently debating autonomy. Just this month, the US, Russia, South Korea, Israel, and Australia blocked a CCW effort to ban autonomous weapons over concerns about the scope of legislation and impacts on security. Additionally, many technological leaders, including the late Stephen Hawking, Steve Wozniak, and Elon Musk, signed a letter calling for a ban on autonomous weapons. This letter highlighted the positive aspects of artificial intelligence, but also noted how easily AI could be weaponized for nefarious purposes: minor tweaks to the technology could turn any garage into a mothership of autonomous drones. Amid these concerns, Google publicly withdrew support for Project Maven following ethical protests from employees and the establishment of its AI guiding principles.

Unfortunately, the window for debate is rapidly closing, as many of the future autonomous weapons are already here. The Army and Marines have already deployed autonomy-capable guns, though only in semi-autonomous capacities. In the air, the Navy and Air Force have shown the capability to employ drone swarms from aircraft, again dropped in semi-autonomous modes but easily switched to full autonomy if the situation dictated. In an attempt to provide direction to autonomy, the US Department of Defense released DOD Directive 3000.09, which "Establishes guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements." While this is the current stance of the DOD, if opponents can strike US forces faster than a human-in-the-loop can respond, this directive could easily be rescinded and full autonomy enabled. Since many of the systems are simply software-restrained, this switch could take a matter of minutes to days, depending on how contested the electromagnetic spectrum is in a conflict.

Much of this comes down to our current risk in warfare. Michael Walzer's seminal work, Just and Unjust Wars, addresses the idea of a sliding scale of morality. Walzer would predict that, faced with an existential threat, our acceptance of autonomy would quickly slide to the red portion of the spectrum for all forces involved. The reason for this shift is that autonomy reduces the risk to human life by doing jobs that are either too hazardous or too costly for humans. But in doing so, autonomy lowers the threshold for military employment in situations where previously we would not have risked human lives. This reduced need to put lives at stake also erodes some of the political control that citizens should have over the employment of forces. Without the risk to life, the US government is not subjected to the same public outcry over the loss of a drone in Yemen as over the loss of soldiers in Niger.

Because of this lower threshold, the US has employed armed drones in Afghanistan, Pakistan, Yemen, Libya, Somalia, and the Philippines, often in places that would otherwise be unreachable. Interestingly, the lack of a human in the machine appears to lessen the outcry over employment in places like Pakistan, where these strikes, while officially protested by the government, would be untenable if performed by manned aircraft. This is a unique case: removing the human operator seems to move strikes into a more deniable category of activity, because no individual operator is directly accountable on scene, as opposed to a foreign aircrew flying over sovereign territory.

In the current permissive environment, with a technological advantage over our adversaries, it is easy to live with self-imposed limitations on automation in warfare. However, if the risk to our country and culture increases, I would expect the machines to do most of the fighting. As more weapons become available, the window for debate and regulation is closing. After World War II, Harry Truman often postulated that machines had outpaced man's ability to think through their use, and history teaches us that technological advancements, such as chemical weapons, firebombing, and nuclear annihilation, are often only truly debated after an ethical line has been crossed. The implications of autonomy are numerous, but the conversation is vital. Without this public discourse, as Michael Walzer states, there might not be anything left worth fighting over.

All images from army.mil or navy.mil

LCDR Jon "Tike" Farley is an instructor at the Air Command and Staff College who teaches in the Multi-Domain Operational Strategist program. He is an F/A-18 pilot with multiple deployments to the 5th Fleet AOR, supporting operations in Iraq and Afghanistan. Email: jonathan.farley.1@us.af.mil

 The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. Government.

