At a time in human history when networked systems and autonomous and semi-autonomous robotic technologies are common facts of life, from third world hot spots in the Middle East and Africa, to first world neighborhoods in New York, Maryland, and Texas, it is easy, even cliché, to toss out SkyNet references. Why not? Images of exotic aerial military drones (like the MQ-9 Reaper pictured above) cruising the battlefields of Pakistan and Afghanistan are stark metaphors of a looming human/robot war. At least for the cypherpunks and techno-apocalypse doomsayers among us.
The vast majority of military drone technology in existence today, though, is only semi-autonomous. Somewhere, a human being sits, surrounded by multiple screens and keyboards, switches, a mouse, maybe a joystick, and makes decisions about what actions to take with whatever robot is in use. These actions range from piloting aerial drones over AfPak territory, to monitoring weaponized robotic sentries placed along the demilitarized zone between North and South Korea, to managing the flow of surveillance data.
The United States military, however, is not satisfied with leaving target selection and attack to mere human beings. A variety of programs and research efforts underway throughout the country are working to introduce full autonomy into the Pentagon’s growing fleet of drones and robots. In the fall of 2010, two small helicopter drones successfully recognized a target and communicated that information to a ground station without any human input.
This test, and the ongoing efforts to produce fully autonomous robotics, presages a future in which robots will have the capacity to select human targets using facial recognition software and decide, completely independent of human input, whether or not to use lethal force. The Air Force, in its Unmanned Aircraft Systems Flight Plan 2009-2047, acknowledges the trend in military robotics, but leaves most of the responsibility for formulating legal and ethical guidelines to future generations of leaders, saying that, “Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions…. Ethical discussions and policy decisions must take place in the near term in order to guide the development of future UAS capabilities, rather than allowing the development to take its own path apart from this critical guidance.”
Human operators already rely on algorithmic decision-making software to assist with evaluating the massive amount of information delivered to battlefield planners and coordinators. Machines are, in some ways, already responsible for selecting targets during military operations by deciding what information is important.
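To make that point concrete, here is a purely illustrative sketch of the kind of algorithmic triage described above. Nothing here reflects any actual military system; the class name, fields, scoring formula, and threshold are all hypothetical, invented only to show how software that merely "filters" information ends up deciding what a human operator ever sees.

```python
# Illustrative only: a toy priority filter, loosely analogous to the kind of
# algorithmic triage described in the text. Every name and weight here is a
# made-up assumption, not a description of any real system.
from dataclasses import dataclass

@dataclass
class SensorReport:
    source: str        # e.g. "uav-video", "sigint" (hypothetical labels)
    confidence: float  # 0.0-1.0, how reliable the detection is judged to be
    urgency: float     # 0.0-1.0, how time-sensitive the report is

def triage(reports, threshold=0.5):
    """Score each report and return only those above the threshold,
    highest-scoring first. The operator never sees the rest -- the
    software has, in effect, already decided what matters."""
    scored = [(r.confidence * r.urgency, r) for r in reports]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [r for score, r in scored if score >= threshold]
```

Even in this toy version, the ethically loaded choice is buried in an innocuous-looking line: the scoring formula and the cutoff value, not a human, determine which reports survive the filter.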
Fully independent weaponized robots are where we’re headed, and if that conjures up images of human beings hanging on at the ragged edge against hordes of murderous robots, you aren’t crazy, or at least not entirely. There are some people, people with academic credentials and fancy titles in technical fields, who tend to agree with the underlying sentiment of our SkyNet-inspired fears.
The International Committee for Robot Arms Control (ICRAC), formed in 2010 in Berlin by a group of technology experts, robotics engineers, and human rights advocates, is dedicated to bringing international attention to armed, fully autonomous military systems. From their mission statement:
Given the rapid pace of development of military robotics and the pressing dangers that these pose to peace and international security and to civilians in war, we call upon the international community to urgently commence a discussion about an arms control regime to reduce the threat posed by these systems.
We propose that this discussion should consider the following:
Their potential to lower the threshold of armed conflict;
The prohibition of the development, deployment and use of armed autonomous unmanned systems; machines should not be allowed to make the decision to kill people;
Limitations on the range and weapons carried by “man in the loop” unmanned systems and on their deployment in postures threatening to other states;
A ban on arming unmanned systems with nuclear weapons;
The prohibition of the development, deployment and use of robot space weapons.
ICRAC is hoping to use the International Treaty to Ban Landmines as a template for a new multinational agreement that would ban fully autonomous weapons systems capable of targeting human beings. Given the escalating global drone arms race, the reluctance of the United States to trim even modestly its titanic defense budget, and the increasing popularity of drones as an antiseptic alternative to costly boots-on-the-ground military operations, things do not bode well for ratification of an international treaty that would curtail the new and favored toys of armed forces around the world.
So, what does this all mean? Well, not that much actually, at least not much in the near term. Robotics, artificial intelligence, software, hardware – all this stuff is decades of technological advances away from delivering a completely independent weaponized robot capable of going off script and gunning down human beings. What all this really means, right now at least, is that the horizon for that possibility is a little nearer, and that governments around the world, mostly the US and China (of course), are in the process of normalizing a style of war in which soldiers sit far away from the actual battlefield, piloting, driving, or otherwise operating remotely controlled robot vehicles. This, in turn, means that the effective threshold for waging war has been lowered. With fewer soldiers dying or returning from war physically and emotionally crippled, belligerents will be under far less political pressure to avoid conflicts or end them once they’ve begun. And, as always, it means civilians and other non-combatants, those who are bystanders or trying to help within a combat zone, are going to pay a heavier price.