The AI arms race: how can we control the use of ‘killer robots’?
Popular culture likes to explore the theme of Terminator-inspired "killer robots" that develop the ability to think for themselves and, at some point, revolt against humans. Fears of machines becoming predators are reinforced by growing media coverage of autonomous weapon systems, which, according to the International Committee of the Red Cross (ICRC), "select and apply force to targets without human intervention". Earlier in 2021, a worldwide media reaction followed the publication of a UN report claiming that Kargu-2 loitering munitions had been programmed "to attack targets without requiring data connectivity between the operator and the munition" in Libya.
The fast pace of technological development raises urgent questions about whether it is legally and morally acceptable to delegate crucial responsibilities from humans to machines. Can algorithms ever be made reliable enough to distinguish between civilians and combatants? If an autonomous weapon system commits a war crime, as a result of algorithmic bias for instance, is there a process for determining who is responsible? What happens if such a system is hacked, or deviates from the objectives it has been given?
At first glance, many of these questions seem premature: "killer robots" might still be far from becoming reality. However, this issue is no longer only a concern for the future. Militaries around the world have been developing, testing and using weaponised artificial intelligence (AI), and integrating automated and autonomous features into air defence systems, guided munitions, loitering weapons and even underwater vehicles.
Existing applications are already changing the quality of human control involved in the use of force. The way air defence systems are operated, for example, shows that it is becoming increasingly normal to delegate tasks such as targeting decisions away from human oversight to autonomous and automated features. This diminishing human control over weapon systems is not only a moral issue but also a security concern, given the unpredictability of AI and the fallibility of the data it processes.
There is also a global security dimension. Leading AI-developing states closely watch each other's technological capabilities and seek to match them, fuelling a strategic competition in military AI (though not an inevitable one). In its Integrated Review of Security and Defence, the UK Government explicitly frames emerging technologies such as AI through the lens of global competition. The states the Government labels "systemic competitors" – China and Russia – are open about their intention to make increasing use of AI and automation in their armed forces.
In the absence of international rules to manage and mitigate these risks, the issue has gained prominence at the United Nations. Since 2014, experts have gathered in Geneva to discuss the challenges posed by lethal autonomous weapons systems (LAWS) within the framework of the Convention on Certain Conventional Weapons (CCW). Since 2017, these discussions have taken place within a formal Group of Governmental Experts (GGE) on LAWS.
In 2019, the CCW High Contracting Parties adopted eleven guiding principles on LAWS, but these remain broad and have no legal force. For example, all parties agreed that "human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines", without elaborating on how that principle should be enforced. Since then, Covid-19 has delayed the UN debate – but not governments' interest in, and development of, weaponised AI. The Royal Marines, for instance, have been testing drones both in the air and underwater, while Russia has announced it has begun "the serial production of combat robots".
This year's first GGE meeting took place in August, and expectations were high ahead of the session. Time is short: the upcoming CCW Review Conference in December 2021 is due to decide whether to renew the GGE's mandate. Yet, as often happens in UN negotiations on such sensitive issues, nine working days were not enough to reach the consensus among all 125 states party to the CCW that is required to adopt a final report. Governments each have their own conception of LAWS, of the level of control humans should retain over them, and of how to deal with them.
One diverse group of states – including Austria, Brazil and the Philippines – is pushing for a legally binding instrument banning the use and development of LAWS, while others (such as the UK, the US, Israel and Russia) argue that existing international humanitarian law is sufficient. Some view human control as a goal in itself; others see it as only one of the tools for ensuring that autonomous weapons comply with international law. The divergences in position seem too great for the deadlock to be broken.
It is also questionable whether the work of experts and civil society is being taken seriously. The ICRC, the UN Institute for Disarmament Research and several other organisations have made their positions known for years. While some state delegations referred to their studies, others continued to argue that not enough research had been done on risks such as algorithmic bias, or that we should not "demonise" new technologies. The arguments for developing weaponised AI are, it seems, too important for these states to heed experts' warnings, which only deepens the impasse.
We will now have to wait until the next session, at the end of September, to see whether substantial progress is possible. In the meantime, militaries will continue to pursue weaponised AI and autonomy in weapons systems, and gradually to normalise the erosion of direct human control over their use.