Should We Ban Killer Robots?
Deane Baker
The 2017 Geneva meeting was the first formal session of the Group of Governmental Experts (GGE) on LAWS, though it followed on the heels of three years of informal meetings of experts tied to this process. At the time of writing, this international process continues. In addition to the state delegates to these meetings, a range of civil society groups are also represented, most notably the coalition of non-governmental organizations (NGOs) known as the Campaign to Stop Killer Robots. Originally launched in April 2013 on the steps of Britain’s Parliament as the Campaign to Ban Killer Robots, it was ‘the Campaign’ (as it is commonly known) that hosted the viewing of Slaughterbots at the 2017 GGE meeting in Geneva.
Slaughterbots certainly provided a significant boost to the Campaign’s efforts to secure a ban on lethal autonomous weapons (or, failing a ban, to otherwise ‘stop’ these weapons). Unfortunately, the emotive reaction generated by the film is in large part the result of factors that are entirely irrelevant to the issue at hand: the question of autonomous weapons.
Remember what Russell identified as the key issue? ‘Allowing machines to choose to kill humans’. If you have time, watch the film again, and ask yourself this question throughout: what difference would it make to the scary scenarios in the film if, instead of the drones selecting and engaging their targets autonomously, a human being seated in front of a computer somewhere was watching through the drone’s cameras and making the final call on who should or should not be killed? I don’t mean just pressing the ‘kill’ button every time a red indicator flashes up on his or her screen – let’s assume he or she takes the time to (say) check a photo and make sure that the person being killed is definitely on the kill list. To use a key term at the centre of the debate (which I will examine in depth in chapter 2), in this mental ‘edit’ of the film, a person is maintaining ‘meaningful human control’.
In this alternative, imagined version, AI would still be vitally important in that it would allow the tiny quadcopters to fly, enable them to navigate through the corridors of Congress or Edinburgh University, and so on. But there are no serious suggestions that we should try to ban the use of AI in military autopilot and navigational systems, or even that we should ban military platforms that employ AI in order to carry out no-human-in-the-loop evasive measures to protect themselves. So that’s not relevant to the key question at hand.
What about the nefarious uses to which these tiny drones are put in the film? It is, without question, deeply morally problematic, abhorrent even, that students should be killed because they shared or ‘liked’ a video online; but the fact that the targeting data were sourced from social media is an issue entirely independent of whether the final decision to kill this student or that was made by an algorithm or by a human being. Also irrelevant is the fact that autonomous weapons could in principle be used to carry out unattributed attacks: the same is true of a slew of both sophisticated and crude military capabilities, from cyberweapons to improvised explosive devices (IEDs), and even to antiquated bolt-action rifles. In short, a ban on autonomous weapons – even if adhered to – would make essentially no material difference to the frightening scenarios depicted in Slaughterbots.
There are real and important questions that need to be asked and answered about LAWS. But in order to make genuine progress we will need to disentangle those questions from the red herrings thrown up by Slaughterbots and, indeed, by many contributors to the debate. This book seeks to take steps in that direction by trying to give a clear answer to the question raised by the Campaign at its formation: should we ban these ‘killer robots’? As campaigners rightly point out, this is a choice we have made before, in the case of other kinds of weapons systems: the international community has successfully negotiated treaties and agreements that have resulted in bans on military capabilities, including bans on chemical and biological weapons, antipersonnel landmines, and even blinding lasers. There’s much that could be said about the process of securing such a ban, and what avenues might be available for doing so and to what effect, but that is not the question in focus here. Rather, this book is about whether or not we should ban LAWS.
To give you the bottom line up front, my answer to this question is in the negative. I hope to show here that the central considerations that have been raised in support of the view that we should ban (or in some other, undefined sense, ‘stop’) these systems are not, when put under scrutiny, ultimately convincing. This does not mean I think there should be no controls or constraints on the development and employment of LAWS; there certainly should be. Indeed, I have had the privilege of working alongside a group of international experts to try to outline a first attempt at a set of guiding principles for the international community, now titled ‘Guiding Principles for the Development and Use of LAWS’. But that is not the focus of this book. Instead, my argument here is focused on showing that we do not in fact have compelling reasons to ban ‘killer robots’.
A Definition
Before proceeding, I do, of course, have to clarify what this phenomenon is that is the focus of our investigation. While ‘killer robots’ is much racier than ‘lethal autonomous weapons’, we are on firmer ground with the latter terminology; so, going forward, that is what I will generally use. So then, what exactly is a lethal autonomous weapon? There is, as yet, no universally accepted definition, and some parties to the debate have been accused (perhaps with some justification) of playing definitional games. There is, however, growing acceptance of the definition put forward by the International Committee of the Red Cross (ICRC), according to which an autonomous weapon is
[a]ny weapon system with autonomy in its critical functions. That is, a weapon system which can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention. (ICRC 2016, pp. 11–12, n. 2)
Despite being widely accepted, this definition is not without shortcomings. In particular, some of the terminology is, arguably, loaded. ‘Select’ carries an implication of deliberate cognitive activity, which may not be an appropriate description of how many autonomous weapons do or will function; ‘discern’ or ‘identify’ would be a more neutral alternative. Likewise, ‘attack’ is a loaded term in this context, given the importance of the question of the point at which human agency is relevant; ‘engage’ would, again, be a more neutral alternative. Nonetheless, for the purposes of this volume, I will take it that the ICRC definition is a sufficiently accurate description of the phenomenon under consideration to enable us to weigh up whether or not a ban is necessary.
1. The formal name of the group is the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE LAWS) of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW).