the trust relationships that other entities have with them are at risk. This can be thought of as an example of programmatically encoded bias: only certain contexts were considered in the design of the system, which means inflexibility is inherent in the system when other contexts are introduced or come into play.
In our example of the automated defence system, at least the base commander or empowered subordinate has the opportunity to realise that a change in context is possible and to reprogram or switch off the system: the entity who has the relationship to the system can revise the trust relationship. A much bigger problem arises when both entities are actually computing systems and the context in which they are operating changes or, just as likely, they are used in contexts for which they were not designed—or, put another way, in contexts their designers neglected to imagine. How to define such contexts, and the importance of identifying when contexts change, will feature prominently in later chapters.
Trust and Security
Another important topic in our discussion of trust is security. Our core interest, of course, is security in the realm of computing systems, sometimes referred to as cyber-security or IT security. But although security within the electronic and online worlds has its own peculiarities and specialities, it is generally derived from equivalent or similar concepts in “real life”: the non-electronic, human-managed world that still makes up most of our existence and our interactions, even when the interactions we have are “digitally mediated” via computer screens and mobile phones. When we think about humans and security, there is a set of things that we tend to identify as security-related, of which the most obvious and common are probably stopping humans going into places they are not supposed to visit, looking at things they are not supposed to see, changing things they are not supposed to alter, moving things that they are not supposed to shift, and stopping processes that they are not supposed to interrupt. These concepts are mirrored fairly closely in the world of computer systems:
Authorisation: Stopping entities from going into places
Confidentiality: Stopping entities from looking at things
Integrity: Stopping entities from moving and altering things
Availability: Stopping entities from interrupting processes
Exactly what constitutes a core set of security concepts is debatable, but this is a reasonably representative list. Related topics, such as identification and authentication, allow us to decide whether a particular person should be stopped or allowed to perform certain tasks; and categorisation allows us to decide which things particular humans are allowed to alter, or which places they may enter. All of these will be useful as we begin to pick apart in more detail how we define trust.
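To make the relationship between these concepts slightly more concrete, here is a minimal Python sketch of an access decision that combines identification, categorisation, and authorisation; the users, roles, resource categories, and policy are invented for illustration, and a real system would also authenticate the presented credentials rather than simply looking up a name.

```python
# Minimal sketch: identification, categorisation, and authorisation combined
# into a single access decision. All names and rules here are illustrative
# assumptions, not taken from any real system.

# Categorisation: which class of resource is each object in?
RESOURCE_CATEGORIES = {
    "payroll.xlsx": "finance",
    "blog_post.md": "public",
}

# Authorisation policy: which roles may alter which categories?
ROLE_PERMISSIONS = {
    "accountant": {"finance", "public"},
    "intern": {"public"},
}

# Identification: map a user to a role. Authentication (verifying a password
# hash, certificate, or similar) is elided in this sketch.
KNOWN_USERS = {"alice": "accountant", "bob": "intern"}


def may_alter(user: str, resource: str) -> bool:
    """Return True if the identified user is authorised to alter the resource."""
    role = KNOWN_USERS.get(user)
    if role is None:
        return False
    category = RESOURCE_CATEGORIES.get(resource)
    return category in ROLE_PERMISSIONS.get(role, set())


print(may_alter("alice", "payroll.xlsx"))  # True
print(may_alter("bob", "payroll.xlsx"))    # False
```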
Let us look at one of these topics in a little more detail, then, to allow us to consider its relationship to trust. Specifically, we will examine it within the context of computing systems.
Confidentiality is a property that is often required for certain components of a computer system. One oft-used example is when I want to pay for some goods over the Web. When I visit a merchant, the data I send over the Internet should be encrypted; the sign that it is encrypted is typically the little green shield or padlock that I see on the browser bar by the address of the merchant. We will look in great detail at this example later on in the book, but the key point here is that the data—typically my order, my address, and my credit card information—is encrypted before it leaves my browser and decrypted only when it reaches the merchant. The merchant, of course, needs the information to complete the order, so I am happy for the encryption to last until it reaches their server.
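As a rough sketch of what the browser is doing on my behalf, the following Python snippet sends an order to a hypothetical merchant over HTTPS; the hostname, endpoint, and payload are assumptions for illustration, and a real browser carries out this exchange with many more checks and conveniences.

```python
import json
import ssl
import urllib.request

# Hypothetical merchant endpoint: data sent here is encrypted in transit.
url = "https://merchant.example.com/orders"
order = {"item": "book", "quantity": 1}  # illustrative payload only

# The default SSL context verifies the merchant's certificate against the
# system's trusted certificate authorities before any data is sent.
context = ssl.create_default_context()

request = urllib.request.Request(
    url,
    data=json.dumps(order).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# The order is encrypted before it leaves this process and decrypted only
# where the merchant terminates the TLS connection.
with urllib.request.urlopen(request, context=context) as response:
    print(response.status)
```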
What exactly is happening, though? Well, a number of steps are involved to get the data encrypted and then decrypted. This is not the place for a detailed description,8 but what happens at a basic level is that my browser and the merchant's server use a well-understood protocol—most likely HTTP + SSL/TLS—to establish enough mutual trust for an encrypted exchange of information to take place. This protocol uses algorithms, which in turn employ cryptography to do the actual work of encryption. What is important to our discussion, however, is that each cryptographic protocol used across the Internet, in data centres, and by governments, banks, hospitals, and the rest, though different, uses the same cryptographic “pieces” as its building blocks. These building blocks are referred to as cryptographic primitives and range from asymmetric and symmetric algorithms through one-way hash functions and beyond. They facilitate the construction of some of the higher-level concepts—in this case, confidentiality—which means that correct usage of these primitives allows for systems to be designed that make assurances about certain properties.
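As a small illustration of two such primitives, the sketch below applies a one-way hash function and a symmetric cipher to a piece of data; the message is invented, and the third-party cryptography package is assumed to be installed.

```python
import hashlib

# Third-party package; a common choice, assumed available for this sketch.
from cryptography.fernet import Fernet

message = b"card number: 1234 5678 9012 3456"  # illustrative data only

# One-way hash primitive: easy to compute, infeasible to reverse.
digest = hashlib.sha256(message).hexdigest()
print("SHA-256 digest:", digest)

# Symmetric primitive: the same key both encrypts and decrypts.
key = Fernet.generate_key()
cipher = Fernet(key)
token = cipher.encrypt(message)          # ciphertext provides confidentiality
assert cipher.decrypt(token) == message  # only a key holder can recover the data
```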
One lesson we can learn from the world of cryptography is that while using it should be easy, designing cryptographic algorithms is often very hard. While it may seem simple to create an algorithm or protocol that obfuscates data—think of a simple shift cipher that moves all characters in a given string “up” one letter in the alphabet—it is extremely difficult to do it well enough that it meets the requirements of real-world systems. An oft-quoted dictum of cryptographers is, “Any fool can create a cryptographic protocol that they can't defeat”; and part of learning to understand and use cryptography well is, in fact, the experience of designing such protocols and seeing how other people more expert than oneself go about taking them apart and compromising them.
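As a simple illustration, here is the shift cipher just described, written as a short Python sketch; it is easy to implement, which is precisely why it is tempting, and trivially easy to break, which is why it falls far short of real-world requirements.

```python
def shift_cipher(text: str, shift: int = 1) -> str:
    """'Encrypt' text by moving each letter up the alphabet by `shift` places."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr(base + (ord(ch) - base + shift) % 26))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return "".join(result)


ciphertext = shift_cipher("attack at dawn")
print(ciphertext)                    # 'buubdl bu ebxo'
print(shift_cipher(ciphertext, -1))  # the reverse shift recovers the plaintext

# Breaking it needs no key at all: there are only 25 possible shifts to try,
# so an attacker can simply enumerate them.
```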
Let us return to the topics we noted earlier: authorisation, integrity, etc. None of them defines trust, but we will think of them as acting as building blocks when we start considering trust relationships in more detail. Like the primitives used in encryption, these concepts can be combined in different ways to allow us to talk about trust of various kinds and to build systems to model the various trust relationships we need to manage. Also like cryptographic primitives, these concepts are easy to use in ways that do not achieve what we intend, causing confusion and error for those who rely on them.
Why is all of this important? Because trust is important to security. We typically use security to try to enforce trust relationships because humans are not, sadly, fundamentally trustworthy. This book argues that computing systems are not fundamentally trustworthy either, but for somewhat different reasons. It would be easy to think that computing systems are neutral with regard to trust, that they just sit there and do what they do; but as we saw when we looked briefly at agency, computers act for somebody or something, even when the actions they take are unintended9 or not as intended. Equally, they may be maliciously or incompetently directed (programmed or operated). But worst, and most common of all, they are often—usually—unconsciously and implicitly placed into trust relationships with other systems, and ultimately humans and organisations, often outside the contexts for which they were designed. The main goal of this book is to encourage people designing, creating, and operating computer systems to be conscious and explicit in their actions around trust.
Trust as a Way for Humans to Manage Risk
Risk is a key concept to consider when we are talking about security. There is a common definition of risk within the computing community, which is also shared within the business community:

risk = probability × loss
In other words, the risk associated with an event is the likelihood that it will occur multiplied by the impact to be considered if it were to occur. Probability is expressed as a number between 0 and 1 (0 being no possibility of occurrence, 1 being certainty), and the loss can be explicitly stated either as an amount of money or as another type of impact. The point of the formula is to allow risks to be compared; and as long as the different calculations use the same measure of loss, it is generally unimportant what measure is employed. To give an example, let us say that I am interested in the risk of my new desktop computer failing in the first three years of