Tribe of Hackers Red Team. Marcus J. Carey

Why can’t we agree on what a red team is?

      As with many things in cybersecurity, there is always an implied “it depends” when discussing what constitutes red teaming. Some believe that red teaming is just hacking; others believe it is far more robust and systematic than that. I believe it ultimately depends on the perspective of the audience. For those in a purely corporate setting, red teaming gives a more elegant name to penetration testing with a nonmalicious purpose. It implies a sense of structure and methodology that leverages offensive security capabilities to uncover exploitable vulnerabilities. Among the hacker community, however, a much looser definition may be in use.

       What is one thing the rest of information security doesn’t understand about being on a red team? What is the most toxic falsehood you have heard related to red, blue, or purple teams?

      Perhaps the most toxic falsehood I have heard to date is that cybersecurity professionals fit entirely within one of three buckets: red team, blue team, and purple team. This gives the perception that cybersecurity professionals are single-threaded, which simply isn’t true. While each professional may have more of an affinity for one bucket than the others, depending on how they have matured within cybersecurity, it is functionally impossible not to consider the other buckets. Red teamers must understand how their penetration attempts could be thwarted or detected and come up with countermeasures to lessen the likelihood of that happening. Blue teamers must understand, at some level, the TTPs that adversaries are launching in order to develop better countermeasures to repel them. Most cybersecurity professionals are a shade of purple, more red or blue depending on affinity and maturity in the field.

       When should you introduce a formal red team into an organization’s security program?

      A formal red team can be introduced into a security program at any point. The value and benefit of doing so largely depend on what is to be gained from the red team exercises. If the intent is to understand the threat surface and to what degree a program (or a part of the program) is vulnerable, then it is reasonable to engage red team services early in the program’s development phase as a tool to better frame overall risks. Similarly, formal red team engagement can be part of the overall security strategy and lifecycle to reassess the robustness of controls and the organization’s ability to detect and respond.

       How do you explain the value of red teaming to a reluctant or nontechnical client or organization?

      Lobbying for red teaming within one’s organization can be challenging, particularly if the organization’s security program has not matured beyond vulnerability assessment and/or vulnerability management. Additionally, if the organization has not sufficiently invested in or implemented controls or resources, red teaming may uncover vulnerabilities that have not been budgeted for and that there are insufficient resources to address, which exacerbates the problem. My approach has always been to frame the notion of red teaming as a function of risk management/mitigation. Red teaming allows an organization to find potentially damaging or risky holes in its security posture before bad actors exploit them, minimizing the potential impact to company reputation, customers, and shareholders. Taking this approach makes the question of whether to use red teaming a business decision, as opposed to a technical one.

       What is the least bang-for-your-buck security control that you see implemented?

      Of the myriad security products, services, and capabilities on the market, all should support two principal edicts: detect and respond. However, many security organizations are not staffed appropriately to consume and act on all the data these tools make available. Standalone threat intelligence tools, in my opinion, offer the least bang for the buck because they still require contextual correlation to the environment, which implicitly requires human cycles. Even with automation and orchestration between firewalls, SIEM, and IDS/IPS, correctly consuming threat intelligence requires resources and burns cycles that may be better utilized elsewhere. Many of the more effective controls (firewalls, IDS/IPS, EPP) will generally give you the threat context necessary to detect and respond, without the overhead of another tool.

       Have you ever recommended not doing a red team engagement?

      Typically, a customer or an organization can always benefit from some form of “red team” activity, even if it is just a light penetration test. In my consulting life, we generally would recommend against a full-blown red team exercise if there was significant immaturity evident within the organization’s security program or if the rules of engagement could not be settled upon to safely conduct the red team exercise. What has been recommended in the past is a more phased approach, going after a limited scope of targets and then gradually expanding as the organization’s security maturity increases.

       What’s the most important or easiest-to-implement control that can prevent you from compromising a system or network?

      Security awareness training can be one of the easiest and most important controls that bolsters the overall security posture of an organization. User behavior can be the difference between a managed threat landscape and an unruly one, and in many instances, the end user will see incidents before security. Educate and empower users to practice good cyber hygiene. Beyond that, certain security controls that are cloud-based can be leveraged to offset the capital costs of infrastructure, if that is a barrier. This is particularly true in small to medium-sized businesses with limited staff and/or budgets.

       Why do you feel it is critical to stay within the rules of engagement?

      Rules of engagement are established as the outer markers for any red team/pentesting exercise. They basically provide the top cover for activities that may cause harm or an outage, even if unintentional. Additionally, the rules of engagement can be your “get-out-of-jail-free” card should something truly go sideways, as they generally include a hold harmless clause. Deviating from the stated rules of engagement without the express written consent of the client could open you up to legal liability and be devastating to your career.

       If you were ever busted on a penetration test or other engagement, how did you handle it?

      I had an instance where a physical penetration test was being conducted for a client, and the sponsor had neglected to notify site security about my presence. After gaining access to the facility through a propped-open door in the back (repair personnel didn’t want to keep badging in), I was walking through the facility with a hard hat that I had “borrowed” from a table, and I was apprehended by site security and the local police. To make matters worse, my contact was unavailable when they called to confirm that I was authorized to conduct the penetration test. After two intense hours of calling everyone that I could to get this cleared up and the threat of charges being filed, the contact finally called back and I was released without being arrested.

       What is the biggest ethical quandary you experienced while on an assigned objective?

      Without question, the biggest ethical quandary I’ve experienced is stumbling upon an account cache, financial records, or PII in a place where they shouldn’t be and being told by the sponsor not to disclose the details to the impacted individuals until the penetration testing exercise was complete, which could take several days. For me, there are certain discoveries that take priority and need to be acted upon immediately, particularly when PII or financial information is involved. In this case, the sponsor was attempting to prove a point to another member of management and had virtually no regard for what had been discovered.

      

