Should We Ban Killer Robots?
Deane Baker
Copyright © Deane Baker 2022
The right of Deane Baker to be identified as Author of this Work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.
First published in 2022 by Polity Press
Polity Press
65 Bridge Street
Cambridge CB2 1UR, UK
Polity Press
101 Station Landing
Suite 300
Medford, MA 02155, USA
All rights reserved. Except for the quotation of short passages for the purpose of criticism and review, no part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.
ISBN-13: 978-1-5095-4852-1
A catalogue record for this book is available from the British Library.
Library of Congress Control Number: 2021942477
The publisher has used its best endeavours to ensure that the URLs for external websites referred to in this book are correct and active at the time of going to press. However, the publisher has no responsibility for the websites and can make no guarantee that a site will remain live or that the content is or will remain appropriate.
Every effort has been made to trace all copyright holders, but if any have been overlooked the publisher will be pleased to include any necessary credits in any subsequent reprint or edition.
For further information on Polity, visit our website: politybooks.com
Acknowledgements
I am grateful to have had the opportunity to trial earlier versions of several of the arguments in this book in other forums. Parts of what follows first appeared in ‘Autonomous Weapons and the Epistemology of Targeting’, in the Defence-in-Depth blog (10 September 2018), ‘The Awkwardness of the Dignity Objection to Autonomous Weapons’, in the Strategy Bridge journal (6 December 2018) and ‘The Robot Dogs of War’, in Jai Galliott, Duncan MacIntosh and David Ohlin (eds), Lethal Autonomous Weapons: Re-examining the Law and Ethics of Robotic Warfare (Oxford University Press 2021). I have also drawn on arguments that appeared in my books Just Warriors, Inc.: The Ethics of Privatised Force (Continuum 2010) and Citizen Killings: Liberalism, State Policy and Moral Risk (Bloomsbury Academic 2016). Along the way I have received insightful inputs from friends, students and colleagues, including Liran Antebi, Ned Dobos, Erin Hahn, Mark Hilborne, David Kilcullen, Peter Lee, Rain Liivoja, Ian MacLeod, Rob McLaughlin, Valerie Morkevicius, David Pfotenhauer, Shashank Reddy, Julian Tattersall and Mathew Wann (among others). I am particularly grateful for comments on the final draft by the two anonymous readers and for the guidance of George Owers and the team at Polity.
This book is dedicated to my daughters, the fabulous Baker girls: Jemimah, Kezi and Amelia.
Introduction
If you haven’t yet watched the short film Slaughterbots on YouTube, you really should do so now. I mean it – stop reading immediately and watch the video before going any further. You won’t regret it: Slaughterbots is short and impressively well executed. Besides, what I say below contains spoilers.
Slaughterbots was created by the Future of Life Institute in conjunction with Stuart Russell from the University of California, Berkeley. The film garnered over 350,000 views on YouTube in the first four days after its release, and was reported on by a wide range of news outlets, from CNN to the Telegraph. The fictional near-future scenario depicted in this film in vivid Hollywood thriller style is both entertaining and scary, but is scripted with serious intent. As Russell explains at the end of the video, Slaughterbots is intended to help us see that, while AI’s ‘potential to benefit humanity is enormous, even in defence’, we must nonetheless draw a line. Ominously, he warns us that ‘the window to act is closing fast’. The key issue is that ‘[a]llowing machines to choose to kill humans will be devastating to our security and our freedom’ (Sugg 2017).
The film opens with a Steve Jobs-like figure speaking on stage at the release of a new product. Only, instead of the next generation of iPhone, the product is a weapon – a tiny autonomous quadcopter loaded with three grams of shaped explosives, and which combines artificial intelligence (AI) and facial recognition technology to lethal effect. After proudly explaining that ‘its processor can react 100 times faster than a human’, the Steve Jobs of Death demonstrates his creation. We watch as he throws it into the air, and it then buzzes autonomously, like an angry hornet, over to its designated target – in this case a humanoid dummy. After latching parasitically onto the forehead of this simulated enemy soldier, the drone fires its charge, neatly and precisely destroying the simulated brain within, to the applause of the adoring crowd. If that were not demonstration enough, a video then plays on the giant screen, showing a group of men in black fatigues in an underground car park. The mosquito-like buzzing of the quadcopter causes the men to scatter in fear, only to be killed one by one as the tiny drones identify, track and engage them, detonating their charges with firecracker-like pops. ‘Now that is an airstrike of surgical precision’, says Mr Death-Jobs. As if sensing the concern that is building as we watch, he is quick to reassure his audience: ‘Now trust me, these were all bad guys.’ (Of course, we don’t trust him one tiny bit.) Our concern only increases as he tells us that ‘they can evade … pretty much any countermeasure. They cannot be stopped.’ Another video rolls on the big screen, this one depicting a huge cargo aircraft that excretes thousands of these tiny drones, while we are informed that ‘[a] 25 million dollar budget now buys this – enough to kill half a city. The bad half.’ (Just the bad half – yeah, riiiight.) ‘Nuclear is obsolete’, we are told. This new weapon offers the potential to ‘take out your entire enemy, virtually risk-free’. What could possibly go wrong?
At that point the film cuts across to a fictional news feed that’s designed to help us see the dirty reality behind the advocacy and smooth assurances presented by the Steve Jobs of Death. The weapon has fallen into the wrong hands. An attack on the US Capitol Building has killed eleven senators – all from ‘just one side of the aisle’. TV news reports that ‘the intelligence community has no idea who perpetrated the attack, nor whether it was a state, group, or even a single individual’. We witness the horror of a mother’s Voice over Internet Protocol (VoIP) call to her student-activist son that ends with his clinical killing by one of the micro drones, as swarms of them hunt down and murder thousands of university students at twelve universities across the world. The TV talking heads inform us that investigators are suggesting that the students may have been targeted because they shared a video on social media ostensibly ‘exposing corruption at the highest level’. Then, suddenly, we’re back on stage with Mr Death-Jobs, who tells us: ‘Dumb weapons drop where you point. Smart weapons consume data. When you can find your enemy using data, even by a hashtag, you can target an evil ideology right where it starts.’ He points to his temple as he speaks, so that we are left in no doubt as to just where that starting point is.
It’s all very chilling, and it taps into some of our deepest fears and emotions. Weapons like tiny bugs that attach to your face just before exploding – creepy. Shadowy killers (states? terrorists? hyper-empowered individuals?) striking at will against helpless civilians for reasons we don’t fully understand – frightening. People targeted on the basis of data gathered from social media – terrifying.
Slaughterbots was released to coincide with, and influence, the first