The Target, Sony, OPM, and Equifax hacks all happened over a period of time. Each of them involved some form of user action or inaction as the initial attack vector. However, none of them had to result in massive damage from a single user's failure. Yes, an Equifax employee was slow in patching a new vulnerability, but the massive data breach would not have occurred without the systemic technical failings within the Equifax infrastructure, especially given that the thefts took months to complete.
These examples begin to suggest some potential solutions for UIL. However, before we begin exploring solutions, we want to establish a foundation for understanding the types of losses that may be initiated through user actions. With this foundation, we can then discuss how to avoid putting users in a position where they might initiate loss, instruct them how to take better actions, and then prevent their actions from resulting in loss. We will also explore how to take the opportunity away from malicious actors, as well as how to detect and mitigate malicious acts.
Because there are an infinite number of user actions and inactions that can result in loss, it is helpful to categorize those actions. This allows you to identify which categories of user error and malice to consider in your environment and what specific scenarios to plan to mitigate. This chapter will examine some common categories where UIL occurs. We'll begin by considering processes, culture, physical losses, crime, user error, and inadequate training. Then we'll move on to technology implementation. Future chapters will explore ways of mitigating UIL.
Processes
Although work processes might seem to have no direct relationship to the users, how your organization specifies those processes is one of the biggest causes of UIL. Every decision you make about your work processes determines the extent to which you are giving the user the opportunity to initiate loss.
Clearly, the user has to perform a business function. However, if you can remove people from a process, you remove all of the UIL associated with that process. For example, in fast-food restaurants, cashiers have the ability to initiate loss in multiple categories. A cashier can record an order incorrectly, which causes food waste and poor customer satisfaction, reducing profit and impeding future sales. A cashier can also make mistakes in handling cash: they might miscount change, steal money, or be tricked by con artists. These are just a few of the problems. Restaurant chains understand this and implement controls within the process to reduce these losses. McDonald's, however, is going even further to control the process by implementing kiosks where customers place their orders directly into a computer system. This removes all potential loss associated directly with the cashiers.
Obviously, there are a variety of potential losses that are created by removing a human cashier from the process (such as loss of business from customers who find interacting with a kiosk too complicated), but those are ideally accounted for within the revised process. The point is that the process itself can put the user in the position to create UIL, or it can remove the opportunity for the user to initiate loss.
A process can be overly complicated and put well-intentioned users in a position where it is inevitable that they will make mistakes. For example, when you have users perform repetitive tasks rapidly, errors generally happen. Such is the case with social media content reviewers. Facebook, for example, through outside contractors, pays content moderators low wages and has them review up to 1,000 reported posts a day. (See “Underpaid and Overburdened: The Life of a Facebook Monitor,” The Guardian, www.theguardian.com/news/2017/may/25/facebook-moderator-underpaid-overburdened-extreme-content.) This can mean that legitimate content is deleted while harmful content remains. The situation is ripe for UIL and also for causing significant harm to the content moderators, who face stress both from the working conditions and from reviewing some of the most troubling content on the Internet.
A process may also be poorly defined and give users access to more functionality and information than they require to perform their jobs. For example, companies used to attach credit card numbers to the entire sales record, and the numbers were available to anyone in the fulfillment process, including people in warehouses. The Payment Card Industry Data Security Standard (PCI DSS) now requires that only people who need access to the credit card numbers can actually access that information. Removing access from all but those with a specific requirement to access it reduces the potential for those people to initiate a loss, whether maliciously or accidentally.
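As a loose illustration of this least-privilege principle, consider the following sketch. It is not PCI DSS-specific code; the role names, field names, and masking rule are hypothetical, and the point is simply that the full card number is exposed only to roles with a defined business need.

```python
# Hypothetical sketch of least-privilege access to card data; role names,
# field names, and the masking rule are illustrative, not taken from PCI DSS.

AUTHORIZED_ROLES = {"payment_processor"}  # only roles with a defined business need


def mask_pan(pan: str) -> str:
    """Return the card number with all but the last four digits masked."""
    return "*" * (len(pan) - 4) + pan[-4:]


def card_number_for(role: str, sales_record: dict) -> str:
    """Expose the full card number only to authorized roles; mask it for everyone else."""
    pan = sales_record["card_number"]
    return pan if role in AUTHORIZED_ROLES else mask_pan(pan)


record = {"order_id": 1001, "card_number": "4111111111111111"}
print(card_number_for("warehouse_staff", record))    # ************1111
print(card_number_for("payment_processor", record))  # 4111111111111111
```

Everyone in the fulfillment process can still complete their work with the masked value; only the role that genuinely needs the number ever sees it.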
Processes can also lack checks and balances that ensure that when a loss is initiated, it is mitigated. For example, well-designed financial processes regularly include audits to ensure transactions are validated. A financial process that does not have sufficient audits is ripe for abuse by insiders and crime from outsiders. In one case, we worked with a nonprofit organization and found that it had paid thousands of dollars to criminals who sent the organization invoices that looked legitimate. However, when we asked what the invoices were specifically for, it turned out that nobody knew. The organization modified its process to ensure that future invoices required internal approval by a stakeholder familiar with the charges. Clearly, establishing proper checks and balances is equally important for anyone who has access to data and information services.
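To make that kind of control concrete, here is a minimal sketch of the approval gate the nonprofit added, assuming a simple in-memory workflow; the class and function names are hypothetical, and the only point being demonstrated is that payment is blocked until a knowledgeable stakeholder signs off.

```python
# Hypothetical sketch of an invoice control requiring stakeholder approval
# before payment. Names and structure are illustrative, not a real system.
from dataclasses import dataclass, field


@dataclass
class Invoice:
    vendor: str
    amount: float
    approvals: set = field(default_factory=set)


def approve(invoice: Invoice, stakeholder: str) -> None:
    """Record that a stakeholder familiar with the charges has reviewed the invoice."""
    invoice.approvals.add(stakeholder)


def pay(invoice: Invoice) -> str:
    """Refuse payment unless at least one internal approval is on record."""
    if not invoice.approvals:
        return f"HELD: {invoice.vendor} invoice for ${invoice.amount:,.2f} lacks internal approval"
    return f"PAID: {invoice.vendor} ${invoice.amount:,.2f} (approved by {', '.join(sorted(invoice.approvals))})"


suspicious = Invoice(vendor="Acme Office Services", amount=4800.00)
print(pay(suspicious))                     # HELD: nobody has vouched for the charges
approve(suspicious, "facilities_manager")
print(pay(suspicious))                     # PAID: a knowledgeable stakeholder signed off
```

The check is trivial, but it is exactly the kind of gate that would have stopped the fraudulent invoices from ever being paid.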
All processes need to be examined to ensure that users are given the minimum ability to create loss. Additionally, all organizations should have a process in place to prevent, detect, and mitigate the loss should a user initiate it.
Culture
Establishing a great process is awesome. However, as stated in the famous Peter Drucker quote, “Culture eats strategy for breakfast.”
Consider all of the security rules that exist in an organization. Then consider how many are usually followed. There are generally security rules that are universally followed and those that are universally ignored.
As consultants, we are frequently issued badges when we arrive at a client's facility. We diligently don the badges, at least until we walk around and discover that we are the only people actually wearing one. While we intend to adhere to security policies, we also need to fit in with and relate to the people inside the organization. A lack of badge wearing is a symptom of a culture in which security policies are routinely ignored.
Conversely, if many people in an office lock their file cabinets at the end of the day or whenever they leave their desks, most of their colleagues will generally do the same. Culture is essentially peer pressure about how to behave at work. No matter what the official parameters of behavior are within the organization, people learn their actual behavior by mirroring the behavior of their peers.
Culture is very powerful; a poor culture enables vast amounts of UIL and facilitates losses in other categories. If your culture doesn't adequately support and promote your processes, training, and technology implementation, then crime, physical losses, and user errors all increase as a consequence. Let's consider some examples where culture has a direct relationship to UIL.
When the Challenger space shuttle exploded, the explanation given to the public was that O-rings, cold weather, and a variety of other factors were the combined cause. However, internal investigations also revealed a culture that was driven to take potentially excessive risk to stay on schedule. (See “Missed Warnings: The Fatal Flaws Which Doomed Challenger,” Space Safety Magazine, www.spacesafetymagazine.com/space-disasters/challenger-disaster/missed-warnings-fatal-flaws-doomed-challenger/.) Despite many warnings about safety concerns relevant to the Challenger launch, NASA executives chose to downplay the warnings and continued with the launch. Even though the Challenger explosion was directly caused by a mechanical failure, it was clearly a UIL because someone made the conscious decision to ignore warnings and proceed despite the risks.
While it shouldn't take the crippling of the entire space program to prompt culture fixes, NASA subsequently issued engineers challenge cards that they could place on the table in the middle of a discussion to demand that their concerns be heard.
In perhaps one of the most iconic cases of culture-based UIL, in 2017, the U.S. Navy destroyer USS Fitzgerald crashed into a large