      4 any or all of the above plus IT staff;

      5 any or all of the above plus internal users and management;

      6 any or all of the above plus customers and other external users.

      Confusion between the above definitions is a fertile source of errors and vulnerabilities. Broadly speaking, the vendor and evaluator communities focus on the first and (occasionally) the second of them, while a business will focus on the sixth (and occasionally the fifth). We will come across many examples of systems that were advertised or even certified as secure because the hardware was, but that broke badly when a particular application was run, or when the equipment was used in a way the designers didn't anticipate. Ignoring the human components, and thus neglecting usability issues, is one of the largest causes of security failure. So we will generally use definition 6; when we take a more restrictive view, it should be clear from the context.

      The next set of problems comes from lack of clarity about who the players are and what they're trying to prove. In the literature on security and cryptology, it's a convention that principals in security protocols are identified by names chosen with (usually) successive initial letters – much like hurricanes, except that we use alternating genders. So we see lots of statements such as “Alice authenticates herself to Bob”. This makes things much more readable, but can come at the expense of precision. Do we mean that Alice proves to Bob that her name actually is Alice, or that she proves she's got a particular credential? Do we mean that the authentication is done by Alice the human being, or by a smartcard or software tool acting as Alice's agent? In that case, are we sure it's Alice, and not perhaps Carol to whom Alice lent her card, or David who stole her phone, or Eve who hacked her laptop?

      By a subject I will mean a physical person in any role including that of an operator, principal or victim. By a person, I will mean either a physical person or a legal person such as a company or government¹.

      A principal is an entity that participates in a security system. This entity can be a subject, a person, a role, or a piece of equipment such as a laptop, phone, smartcard, or card reader. A principal can also be a communications channel (which might be a port number, or a crypto key, depending on the circumstance). A principal can also be a compound of other principals; examples are a group (Alice or Bob), a conjunction (Alice and Bob acting together), a compound role (Alice acting as Bob's manager) and a delegation (Bob acting for Alice in her absence).

      Beware that groups and roles are not the same. By a group I will mean a set of principals, while a role is a set of functions assumed by different persons in succession (such as ‘the officer of the watch on the USS Nimitz’ or ‘the president for the time being of the Icelandic Medical Association’). A principal may be considered at more than one level of abstraction: e.g. ‘Bob acting for Alice in her absence’ might mean ‘Bob's smartcard representing Bob who is acting for Alice in her absence’ or even ‘Bob operating Alice's smartcard in her absence’. When we have to consider more detail, I'll be more specific.
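
      To make this taxonomy concrete, here is a minimal Python sketch (illustrative only: the class names and structure are assumptions of mine, not from any deployed system) that models principals as a small algebraic structure, with groups, conjunctions, roles and delegations built up as compounds of simpler principals.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    """Anything that participates in a security system."""

@dataclass(frozen=True)
class Subject(Principal):
    name: str                       # a physical person, e.g. Alice

@dataclass(frozen=True)
class Equipment(Principal):
    description: str                # e.g. "Alice's smartcard"

@dataclass(frozen=True)
class Group(Principal):
    members: frozenset              # a set of principals: Alice or Bob

@dataclass(frozen=True)
class Conjunction(Principal):
    parties: frozenset              # Alice and Bob acting together

@dataclass(frozen=True)
class Role(Principal):
    function: str                   # e.g. "officer of the watch on the USS Nimitz"
    holder: Principal               # whoever fills the role at the moment

@dataclass(frozen=True)
class Delegation(Principal):
    delegate: Principal             # Bob ...
    grantor: Principal              # ... acting for Alice in her absence

alice, bob = Subject("Alice"), Subject("Bob")
either = Group(frozenset({alice, bob}))           # a group: Alice or Bob
together = Conjunction(frozenset({alice, bob}))   # Alice and Bob acting together
manager = Role("Bob's manager", alice)            # Alice acting as Bob's manager
standin = Delegation(bob, alice)                  # Bob acting for Alice in her absence
```

      The nesting captures the levels of abstraction just discussed: Delegation(Equipment("Bob's smartcard"), alice) is a different principal from Delegation(bob, alice).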

      The meaning of the word identity is controversial. When we have to be careful, I will use it to mean a correspondence between the names of two principals signifying that they refer to the same person or equipment. For example, it may be important to know that the Bob in ‘Alice acting as Bob's manager’ is the same as the Bob in ‘Bob acting as Charlie's manager’ and in ‘Bob as branch manager signing a bank draft jointly with David’. Often, identity is abused to mean simply ‘name’, an abuse entrenched by such phrases as ‘user identity’ and ‘citizen identity card’.
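
      As a toy illustration of identity as a correspondence between names (the bindings below are hypothetical), two names identify the same person exactly when they resolve to the same underlying principal:

```python
# Hypothetical name-to-principal bindings; in a real system these would
# come from a directory service or a certificate authority.
bindings = {
    "Bob acting as Alice's manager": "bob",
    "Bob acting as Charlie's manager": "bob",
    "branch manager co-signing with David": "bob",
}

def same_identity(name_a: str, name_b: str) -> bool:
    # Two names correspond iff they resolve to the same principal.
    return bindings[name_a] == bindings[name_b]

print(same_identity("Bob acting as Alice's manager",
                    "Bob acting as Charlie's manager"))   # True
```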

      The definitions of trust and trustworthy are often confused. The following example illustrates the difference: if an NSA employee is observed in a toilet stall at Baltimore Washington International airport selling key material to a Chinese diplomat, then (assuming his operation was not authorized) we can describe him as ‘trusted but not trustworthy’. I use the NSA definition that a trusted system or component is one whose failure can break the security policy, while a trustworthy system or component is one that won't fail.

      There are many alternative definitions of trust. In the corporate world, a trusted system might be ‘a system which won't get me fired if it gets hacked on my watch’ or even ‘a system which we can insure’. But when I mean an approved system, an insurable system or an insured system, I'll say so.

       Secrecy is an engineering term that refers to the effect of the mechanisms used to limit the number of principals who can access information, such as cryptography or computer access controls.

       Confidentiality involves an obligation to protect some other person's or organisation's secrets if you know them.

       Privacy is the ability and/or right to protect your personal information and extends to the ability and/or right to prevent invasions of your personal space (the exact definition of which varies from one country to another). Privacy can extend to families but not to legal persons such as corporations.

      For example, hospital patients have a right to privacy, and in order to uphold this right the doctors, nurses and other staff have a duty of confidence towards their patients. The hospital has no right of privacy in respect of its business dealings but those employees who are privy to them may have a duty of confidence (unless they invoke a whistleblowing right to expose wrongdoing). Typically, privacy is secrecy for the benefit of the individual while confidentiality is secrecy for the benefit of the organisation.
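
      To connect the secrecy definition to a mechanism: the toy Python access control list below (a minimal sketch, not a real library; the names are invented) limits which principals may read a patient's record, which is roughly how a duty of confidence gets enforced in software.

```python
class SecretStore:
    """Toy access control: each item lists the principals who may read it."""

    def __init__(self):
        self._items = {}   # name -> (value, set of authorised principals)

    def put(self, name, value, readers):
        self._items[name] = (value, set(readers))

    def read(self, name, principal):
        value, readers = self._items[name]
        if principal not in readers:
            raise PermissionError(f"{principal} may not read {name}")
        return value

store = SecretStore()
store.put("diagnosis", "test result", readers={"Alice", "Dr Bob"})
print(store.read("diagnosis", "Dr Bob"))   # permitted: Dr Bob is authorised
# store.read("diagnosis", "Eve")           # raises PermissionError
```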

      There is a further complexity in that it's often not sufficient to protect data, such as the contents of messages; we also have to protect metadata, such as logs of who spoke to whom. For example, many countries have laws making the treatment of sexually transmitted diseases secret, and yet if a private eye could observe you exchanging encrypted messages with a sexually transmitted disease clinic, he might infer that you were being treated there. In fact, a key privacy case in the UK turned on such a fact: a model won a privacy lawsuit against a tabloid newspaper which printed a photograph of her leaving a meeting of Narcotics Anonymous. So anonymity can be just as important a factor in privacy (or confidentiality) as secrecy. But anonymity is hard. It's difficult to be anonymous on your own; you usually need a crowd to hide in. Also, our legal codes are not designed to support anonymity: it's much easier for the police to get itemized billing information from the phone company, which tells them who called whom, than it is to get an actual wiretap. (And it's often more useful.)

      The meanings of authenticity and integrity can also vary subtly. In the academic literature on security protocols, authenticity means integrity plus freshness: you have established that you are speaking to a genuine principal, not a replay of previous messages. We have a similar idea in banking protocols. If local banking laws state that checks are no longer valid after six months, a seven-month-old uncashed check has integrity (assuming it's not been altered) but is no longer valid. However, there are some strange edge cases. For example, a police crime scene officer will preserve the integrity of a forged check – by placing it in an evidence bag. (The meaning of integrity has changed in the new context to include not just the signature but any fingerprints.)
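
      The distinction can be made concrete with a short sketch (Python; the key, message format and six-month window are assumptions for illustration): an HMAC checks integrity, while a timestamp window checks freshness, so a stale message passes the first test but fails the second, just like the seven-month-old check.

```python
import hashlib
import hmac
import time

KEY = b"shared secret"            # assumed to be pre-shared between the parties
MAX_AGE = 6 * 30 * 24 * 3600      # validity window: roughly six months, in seconds

def seal(message: bytes, now: float):
    stamped = f"{now}|".encode() + message
    tag = hmac.new(KEY, stamped, hashlib.sha256).digest()
    return stamped, tag

def verify(stamped: bytes, tag: bytes, now: float) -> bytes:
    # Integrity: the tag must match, i.e. the message was not altered.
    expected = hmac.new(KEY, stamped, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("integrity failure: message altered")
    # Freshness: an old (or replayed) message keeps its integrity
    # but is no longer valid.
    timestamp, _, message = stamped.partition(b"|")
    if now - float(timestamp) > MAX_AGE:
        raise ValueError("stale: integrity intact, but no longer fresh")
    return message

stamped, tag = seal(b"pay Bob $100", time.time())
print(verify(stamped, tag, time.time()))   # fresh and unaltered: accepted
```

      A real protocol would use nonces or sequence numbers rather than clocks alone, since clocks can be skewed and a window this wide still admits replays within it.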

      The things we don't want are often described as hacking. I'll follow Bruce Schneier and define a hack as something a system's rules permit, but which was unanticipated and unwanted by its designers [1682]. For example, tax attorneys study the tax code to find loopholes which they develop into tax avoidance strategies; in exactly the same way, black hats study software code to find loopholes which they develop into exploits. Hacks can target not just the tax system and computer systems, but the market economy, our systems for electing leaders and even our cognitive systems. They can happen at multiple layers: lawyers can hack the tax code, or move up the

