Anyone with access to the source code would therefore have access to those hard-coded values. We always want to keep our secrets safe, and hard coding them into our source code is far from safe.
Hard coding is generally considered a symptom of poor software development (there are some exceptions to this). If you encounter it, you should search the entire application for hard coding, as it is unlikely the one instance you have found is unique.
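As a rough illustration of the alternative, here is a minimal sketch (the variable and environment-variable names are assumptions for this example, not from the book) of loading a secret at runtime instead of hard coding it into the source:

```python
import os

# What we want to avoid: a hard-coded secret that anyone with access to
# the source code (or the repository history) can read.
# DB_PASSWORD = "SuperSecret123!"

# A safer pattern: load the secret at runtime from the environment (or a
# secrets manager) and refuse to start if it is missing.
DB_PASSWORD = os.environ.get("DB_PASSWORD")
if DB_PASSWORD is None:
    raise RuntimeError("DB_PASSWORD is not set; refusing to start.")
```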
Never Trust, Always Verify
If you take away only one lesson from this book, it should be this: never trust anything outside of your own application. If your application talks to an API, verify it is the correct API, and that it has authority to do whatever it’s trying to do. If your application accepts data, from any source, perform validation on the data (ensure it is what you are expecting and that it is appropriate; if it is not, reject it). Even data from your own database could have malicious input or other contamination. If a user is attempting to access an area of your application that requires special permissions, reverify that they have that permission for every single page or feature they use. If a user has authenticated to your application (proven they are who they say they are), ensure you continue to verify that it is the same user you are dealing with as they move from page to page (this is called session management). Never assume that because you checked one time, everything is fine from then on; you must always verify and reverify.
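To make the “reverify on every page” idea concrete, here is a minimal sketch assuming a Flask-style web application (the route, role names, and the require_role helper are illustrative, not from the book). It checks both the session and the user’s permissions on every request rather than only at login:

```python
import os
from functools import wraps
from flask import Flask, session, abort

app = Flask(__name__)
app.secret_key = os.environ["SESSION_SIGNING_KEY"]  # never hard-coded

def require_role(role):
    def decorator(view):
        @wraps(view)
        def wrapper(*args, **kwargs):
            # Re-check on every single request: is there an authenticated
            # user in this session, and do they hold the required role?
            if session.get("user_id") is None:
                abort(401)  # not authenticated
            if role not in session.get("roles", []):
                abort(403)  # authenticated, but not authorized
            return view(*args, **kwargs)
        return wrapper
    return decorator

@app.route("/admin/reports")
@require_role("admin")
def admin_reports():
    return "Sensitive report data"
```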
NOTE We verify data from our own database because it may contain stored cross-site scripting (XSS), or other values that may damage our program. Stored XSS happens when a program does not perform proper input validation and saves an XSS attack into its database by accident. When a user performs an action in your application that retrieves that data, the attack is returned along with it and launches against the user in their browser. It is an attack that a user is unable to protect themselves against, and it is generally considered a critical risk if found during security testing.
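As a small sketch of the “never trust your own database” point (the function and field names are illustrative, not from the book), the code below encodes a stored comment before returning it to the browser, so a stored XSS payload is displayed as harmless text instead of being executed:

```python
import html

def render_comment(comment_from_db: str) -> str:
    # Even though this value came from our own database, we do not trust
    # it: encode it so any embedded <script> tags are shown as text
    # instead of executed by the browser.
    return "<p>" + html.escape(comment_from_db) + "</p>"

print(render_comment('<script>alert("stored XSS")</script>'))
# <p>&lt;script&gt;alert(&quot;stored XSS&quot;)&lt;/script&gt;</p>
```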
Quite often developers forget this lesson and assume trust due to context. For instance, suppose you have a public-facing internet application with extremely tight security on that web app. That web app constantly calls an API (#1) within your network (behind the firewall), which then calls another API (#2) that changes data in a related database. Often developers don’t bother authenticating (proving identity) to the first API, or having API #1 verify that the app is authorized to call whatever part of the API it’s calling. Even when they do, they often apply these security measures only to API #1 and skip API #2. This results in anyone inside your network being able to call API #2, including malicious actors who shouldn’t be there, insider threats, or even accidental users (Figure 1-7).
Figure 1-7: Example of an application calling APIs and when to authenticate
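A minimal sketch of what API #2 might do instead, assuming a shared signing key distributed out of band (the key name, header, and HMAC approach are assumptions for this example, not from the book): verify every request, even requests that arrive from inside the network.

```python
import hashlib
import hmac
import os

# Shared signing key, distributed to API #1 and API #2 out of band.
API_SIGNING_KEY = os.environ["INTERNAL_API_KEY"].encode()

def verify_internal_caller(request_body: bytes, signature_header: str) -> bool:
    # API #1 signs each request body; API #2 recomputes the signature and
    # rejects anything that does not match, instead of trusting the call
    # simply because it came from behind the firewall.
    expected = hmac.new(API_SIGNING_KEY, request_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

In practice this could just as easily be mutual TLS or short-lived tokens; the point is that API #2 authenticates and authorizes its callers itself rather than assuming trust from network location.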
Here are some examples:
A website is vulnerable to stored cross-site scripting, and an attacker uses this to store an attack in the database. If the web application validates the data from the database, the stored attack would be unsuccessful when triggered.
A website charges for access to certain data, which it gets from an API. If a user knows the API is exposed to the internet, and the API does not verify that whoever is calling it is allowed to use it (authentication and authorization), the user can call the API directly and get the data without paying. That is malicious use of the website; it’s theft.
A regular user of your application is frustrated and pounds on the keyboard repeatedly, accidentally entering much more data than they should have into your application. If your application validates the input properly, it will reject the data because there is too much of it. However, if the application does not validate the data, it might overflow your variables or be submitted to your database and cause it to crash. When we don’t verify that the data we are getting is what we are expecting (a number in a number field, a date in a date field, an appropriate amount of text, etc.), our application can fall into an unknown state, which is where we find many security bugs. We never want an application to fall into an unknown state (see the sketch that follows).
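Here is a minimal input-validation sketch along those lines (the field names and limits are assumptions for this example, not from the book): check type, range, and length, and reject anything unexpected rather than letting it reach your variables or your database.

```python
from datetime import date

MAX_COMMENT_LENGTH = 500  # an assumed limit for this example

def validate_order(quantity_raw: str, ship_date_raw: str, comment: str) -> list[str]:
    errors = []

    # A number in a number field.
    try:
        quantity = int(quantity_raw)
        if not (1 <= quantity <= 1000):
            errors.append("quantity out of range")
    except ValueError:
        errors.append("quantity is not a number")

    # A date in a date field (ISO format assumed here).
    try:
        date.fromisoformat(ship_date_raw)
    except ValueError:
        errors.append("ship date is not a valid date")

    # An appropriate amount of text.
    if len(comment) > MAX_COMMENT_LENGTH:
        errors.append("comment is too long")

    # Anything unexpected is rejected instead of letting the application
    # fall into an unknown state.
    return errors
```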
Usable Security
If security features make your application difficult to use, users will find a way around them or go to your competitor. There are countless examples online of users creatively circumventing inconvenient security features; humans are very good at solving problems, and we don’t want security to be the problem.
The answer to this is creating usable security features. Turning the internet off would certainly make all our applications safer, but it is obviously an unproductive way to protect anyone from threats on the internet. We need to be creative ourselves and find ways to make the easiest way to do something also the most secure way to do it.
Examples of usable security include:
Allowing a fingerprint, facial recognition, or pattern to unlock your personal device instead of a long and complicated password.
Teaching users to create passphrases (a sentence or phrase that is easy to remember and type) rather than having complexity rules (ensuring a special character, number, and lower- and uppercase letters are used, etc.). This increases entropy (a rough comparison is sketched after this list), making it more difficult for malicious actors to break the password, while also making it easier for users.
Teaching users to use password managers, rather than expecting them to create and remember 100+ unique passwords for all of their accounts.
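As a rough, back-of-the-envelope comparison of the entropy point above (assuming entropy of roughly length × log2 of the character-pool size, and the pool sizes noted in the comments):

```python
import math

def entropy_bits(length: int, pool_size: int) -> float:
    # Approximate entropy of a randomly chosen string of this length
    # drawn from a pool of this many characters.
    return length * math.log2(pool_size)

# An 8-character password drawn from upper, lower, digits, and symbols (~94 characters).
print(round(entropy_bits(8, 94)))    # roughly 52 bits

# A 4-word lowercase passphrase, about 25 characters including spaces (pool of ~27).
print(round(entropy_bits(25, 27)))   # roughly 119 bits
```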
Examples of users getting around security measures include:
Users tailgating at secure building entrances (following closely while someone enters a building so that they do not need to swipe to get in).
Users turning off their phones, entering through a scanner meant to detect transmitting devices, then turning them back on once in the secure area where cell phones are banned.
Using a proxy service to visit websites that are blocked by your workplace network.
Taking a photo of your screen to bring a copyrighted image or sensitive data home.
Using the same password over and over but incrementing the last number so it is easy to remember. If your company forces users to reset their password every 90 days, there’s a good chance there are quite a few passwords in your org that follow the format currentSeason_currentYear.
Factors of Authentication
Authentication is proving to a computer that you are indeed the real, authentic you. A “factor” of authentication is a method of proving who you are to a computer. Currently there are only three different factors: something you have, something you are, and something you know:
Something you have could be a phone, computer, token, or your badge for work. Something that should only ever be in your possession.
Something you are could be your fingerprint, an iris scan, your gait (the way you walk), or your DNA. Something that is physically unique to you.
Something you know could be a password, a passphrase, a pattern, or a combination of several pieces of information (often referred to as security questions) such as your mother’s maiden name, your date of birth, and your social insurance number. The idea is that it is something that only you would know.
When we log in to accounts online with only a username and password, we are only using one “factor” of authentication: something you know.
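To illustrate how two of these factors can combine, here is a minimal sketch (all names, the password-hashing parameters, and the from-scratch time-based code are assumptions for this example, not from the book): the password is something you know, and the rotating code from an authenticator app on your phone is something you have.

```python
import base64
import hashlib
import hmac
import secrets
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # RFC 6238 time-based one-time password, derived from the shared
    # secret held by the authenticator app on the user's phone.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def login(supplied_password: str, supplied_code: str,
          stored_hash: str, stored_salt: str, totp_secret: str) -> bool:
    # Factor 1: something you know (the password, checked against a salted hash).
    candidate = hashlib.pbkdf2_hmac(
        "sha256", supplied_password.encode(), bytes.fromhex(stored_salt), 100_000
    ).hex()
    knows = secrets.compare_digest(candidate, stored_hash)

    # Factor 2: something you have (the device generating this code).
    has = secrets.compare_digest(supplied_code, totp(totp_secret))

    return knows and has
```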