…serious repercussion than end of life. The integrity of their health systems is crucial to ensuring they both remain in good health.
Figure 1-3: Integrity means accuracy.
CIA is the very core of our entire industry. Without understanding this from the beginning, and how it affects your teammates, your software, and most significantly, your users, you cannot build secure software.
Availability
If Alice’s insulin measuring device malfunctioned, was tampered with, or had dead batteries, it would not be “available.” Alice usually checks her insulin levels several times a day, but she can test manually (by pricking her finger and using a medical kit designed for this purpose) if she needs to, so it is only somewhat important to her that this service is available. A lack of availability would be quite inconvenient for her, but not life-threatening.
Bob, on the other hand, has irregular heartbeats from time to time and never knows when his arrhythmia will strike. If Bob’s pacemaker was not available when his heart was behaving erratically, this could be a life-or-death situation if enough time elapsed. It is vital that his pacemaker is available and that it reacts in real time (immediately) when an emergency happens.
Bob works for the federal government as a clerk managing secret and top-secret documents, and has for many years. He is a proud grandfather and has been trying hard to live a healthy life since his pacemaker was installed.
NOTE Medical devices are generally “real-time” software systems. Real-time means the system must respond to changes as quickly as possible, generally within milliseconds. It cannot have delays; its responses must be as close to instantaneous as possible. When Bob’s arrhythmia starts, his pacemaker must act immediately; there cannot be a delay. Most applications are not real-time. If there is a 10-millisecond delay in the purchase of new running shoes, or in predicting traffic changes, it is not truly critical.
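To make the idea of a hard time budget concrete, here is a minimal Python sketch of a deadline check. It is an illustration only (real pacemaker firmware is not written this way), and the 10-millisecond budget and function names are assumptions.

# Minimal sketch of a real-time deadline check; the 10 ms budget and the
# handler are hypothetical, purely to illustrate the concept.
import time

DEADLINE_SECONDS = 0.010  # assumed 10-millisecond budget


def handle_heart_event(respond_to_arrhythmia) -> None:
    """Run the response and confirm it met its deadline."""
    start = time.perf_counter()
    respond_to_arrhythmia()  # e.g., signal the pacing circuit
    elapsed = time.perf_counter() - start
    if elapsed > DEADLINE_SECONDS:
        # In a true real-time system, a missed deadline is a failure,
        # not just a performance issue.
        raise RuntimeError(f"Deadline missed: {elapsed * 1000:.1f} ms")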
Figure 1-4: Resilience improves availability.
NOTE Many customers move to “the cloud” for the sole reason that it is extremely reliable (almost always available) when compared to more traditional in-house data center service levels. As you can see in Figure 1-4, resilience improves availability, making public cloud an attractive option from a security perspective.
The following are security concepts that are well known within the information security industry. It is essential to have a good grasp of these foundational ideas in order to understand how the rest of the topics in this book apply to them. If you are already a security practitioner, you may not need to read this chapter.
Assume Breach
“There are two types of companies: those that have been breached and those that don’t know they’ve been breached yet.”2 It’s such a famous saying in the information security industry that we don’t even know who to attribute it to anymore. It may sound pessimistic, but for those of us who work in incident response, forensics, or other areas of investigation, we know this is all too true.
The concept of assume breach means designing and preparing your systems to ensure that if someone were to gain unapproved access to your network, application, data, or other systems, it would prove difficult, time-consuming, expensive, and risky, and you would be able to detect and respond to the situation quickly. It also means monitoring and logging your systems so that, even if you don’t notice a breach until after it occurs, you can at least find out what happened. Many systems also monitor for behavioral changes or anomalies to detect potential breaches. It means preparing for the worst, in advance, to minimize damage, time to detect, and remediation efforts.
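As a rough illustration of the monitoring and logging side of assume breach, the following Python sketch writes structured, timestamped security events and flags one simple behavioral anomaly (a login from a country never before seen for that user). The event names, baseline data, and alerting step are hypothetical; a real system would feed these events into central log storage and an incident response process.

# A minimal sketch of assume-breach monitoring: log every security-relevant
# event and flag simple behavioral anomalies. All names and thresholds here
# are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
security_log = logging.getLogger("security")

# Assumed per-user baseline of countries they normally sign in from.
USUAL_COUNTRIES = {"alice@example.com": {"CA"}}


def log_security_event(event_type: str, user: str, details: dict) -> None:
    """Write a structured, timestamped record so a breach can be reconstructed later."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "user": user,
        "details": details,
    }
    security_log.info(json.dumps(record))


def check_login_anomaly(user: str, country: str) -> None:
    """Flag logins from a country never seen before for this user."""
    if country not in USUAL_COUNTRIES.get(user, set()):
        log_security_event("anomalous_login", user, {"country": country})
        # A real system would alert the on-call responder or open an incident here.


check_login_anomaly("alice@example.com", "BR")  # would be logged as anomalous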
Let’s look at two examples of how we can apply this principle: a consumer example and a professional example.
As a consumer, Alice opens an online document-sharing account. If she were to “assume breach,” she wouldn’t upload anything sensitive or valuable there (for instance, unregistered intellectual property, photos of a personal nature that could damage her professional or personal life, business secrets, government secrets, etc.). She would also set up monitoring of the account as well as have a plan if the documents were stolen, changed, removed, shared publicly, or otherwise accessed in an unapproved manner. Lastly, she would monitor the entire internet in case they were leaked somewhere. This would be an unrealistic amount of responsibility to expect from a regular consumer; this book does not advise average consumers to “assume breach” in their lives, although doing occasional online searches on yourself is a good idea and not uploading sensitive documents online is definitely advisable.
As a professional, Bob manages secret and top-secret documents. The department Bob works at would never consider the idea of using an online file-sharing service to share their documents; they control every aspect of this valuable information. When they were creating the network and the software systems that manage these documents, they designed them, and their processes, assuming breach. They hunt for threats on their network, designed their network using zero trust, monitor the internet for signs of data leakage, authenticate to APIs before connecting, verify data from the database and from internal APIs, perform red team exercises (security testing in production), and monitor their network and applications closely for anomalies or other signs of breach. They’ve written automated responses to common attack patterns, have processes in place and ready for uncommon attacks, and they analyze behavioral patterns for signs of breach. They operate on the idea that data may have been breached already or could be at any time.
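The habit of authenticating to an internal API and then still verifying what it returns (rather than trusting it simply because it lives “inside” the network) can be sketched in Python roughly as follows. The URL, the token environment variable, and the expected field are hypothetical, and the third-party requests library is assumed to be available.

# A minimal zero-trust-style sketch: authenticate to an internal API and
# validate its response before using it. URL, token source, and expected
# fields are hypothetical.
import os

import requests

INTERNAL_API = "https://documents.internal.example/api/v1/records/42"


def fetch_record() -> dict:
    token = os.environ["INTERNAL_API_TOKEN"]  # assumed to be provisioned securely
    response = requests.get(
        INTERNAL_API,
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    response.raise_for_status()  # never silently accept an error

    data = response.json()
    # Verify the data even though it comes from "our own" API.
    if not isinstance(data.get("classification"), str):
        raise ValueError("Unexpected response shape from internal API")
    return data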
Another example of this would be initiating your incident response process when a serious bug has been disclosed via your responsible disclosure or bug bounty program, assuming that someone else has potentially already found and exploited this bug in your systems.
According to Wikipedia, coordinated disclosure is a vulnerability disclosure model in which a vulnerability or an issue is disclosed only after a period of time that allows for the vulnerability or issue to be patched or mended.
Bug bounty programs are run by many organizations. They provide recognition and compensation for security researchers who report bugs, especially those pertaining to vulnerabilities.
Insider Threats
An insider threat means that someone who has approved access to your systems, network, and data (usually an employee or consultant) negatively affects one or more of the CIA aspects of your systems, data, and/or network. This can be malicious (on purpose) or accidental.
Here are some examples of malicious threats and the parts of the CIA Triad they affect:
An employee downloading intellectual property onto a portable drive, leaving the building, and then selling the information to your competitors (confidentiality)
An employee deleting a database and its backup on their last day of work because they are angry that they were dismissed (availability)
An employee programming a back door into a system so they can steal from your company (integrity and confidentiality)
An employee downloading sensitive files from another employee’s computer and using them for blackmail (confidentiality)
An employee accidentally deleting files, then changing the logs to cover their mistake (integrity and availability)