Defence in Depth: Web browsers (part 6/7)
In the previous article we covered some important security aspects regarding server-side infrastructure. This article covers some of the challenges we face on the client-side, in particular when working with web browsers.
The browser is a very attractive target environment for distributing applications and systems to the user. It’s easy to access and requires no additional installation, since most of today’s users have access to a modern browser. For the user it is, compared to installing and running a native application, a well-isolated and convenient environment for running applications. This allows users to be less careful about which sites they visit; many are happy to keep an online banking tab open while another tab visits a highly untrusted site.
The isolated environment in which web applications run inside the browser adds protection against, for example, ransomware and other attacks aiming to access the device’s resources. While the environment can be opened up to allow access to sensitive resources, such as the microphone and camera, most browsers require user interaction before allowing it.
The developer support in terms of languages, frameworks and tools for building web applications is becoming better and better. Today’s tooling makes it possible to deliver systems largely independent of the type of client the user has. But from the application’s perspective, the browser presents several security obstacles for us as developers.
Among these we’d like to highlight the following:
- Browser plugins and add-ons
- A shared environment for many web applications of different origins
- Weaknesses and variations between browsers
As an application developer it’s important to understand these concepts and properties and keep them in mind. While the isolation between web applications running in the same browser is (in general) better than the isolation between native applications, remember that the threshold for a user to run a malicious web application is lower than it is to run a malicious native application, especially in a professional setting with managed clients.
While it’s important to think about client-side security, a secure web API is crucial. No amount of client-side protection can prevent an attacker from attacking your API directly.
Pontus Hanssen, Omegapoint
Can you prevent your users from running a vulnerable browser or installing malicious plugins?
We should use the security features provided by modern browsers to our advantage. As a starting point, we should only support browsers and versions that are actively supported by their vendor. There might be business, legal or compliance reasons for supporting specific versions of a certain web browser; in such cases we must weigh the risks against each other and understand what we compromise by supporting users with vulnerable browsers.
Kasper Karlsson, Omegapoint
We’ve seen many examples of vulnerable and malicious plugins throughout the years. Adobe’s plugin for displaying PDFs in Mozilla Firefox contained a Universal XSS (UXSS) vulnerability which allowed all web sites that displayed PDFs to be exploited. This was discovered back in 2007; vulnerable plugins are nothing new!
The browser is a shared environment where applications from different sites of different origins run side by side. One of the most basic defenses separating data belonging to a banking web site from malicious web sites (or any two sites of different origin) is the Same Origin Policy (SOP).
SOP has been implemented in all major browsers for a long time and is such a fundamental protection that it’s often taken for granted. Perhaps it’s something that application developers have never had to understand in depth since it’s always been there. In the end, one must understand that websites are separated by protocol, host and port. For an in-depth description of SOP and its challenges, see https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy
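The origin comparison the browser performs can be sketched as a small function. This is a simplified illustration of the rule (protocol, host and port must all match), not the full specification:

```python
from urllib.parse import urlsplit

def same_origin(url_a: str, url_b: str) -> bool:
    """Two URLs share an origin iff scheme, host and port all match."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    # An absent port falls back to the scheme's default port
    default = {"http": 80, "https": 443}
    port_a = a.port or default.get(a.scheme)
    port_b = b.port or default.get(b.scheme)
    return (a.scheme, a.hostname, port_a) == (b.scheme, b.hostname, port_b)

print(same_origin("https://bank.example/app", "https://bank.example:443/api"))  # True
print(same_origin("https://bank.example", "http://bank.example"))               # False
print(same_origin("https://bank.example", "https://evil.example"))              # False
```

Note how an explicit `:443` still matches the implicit HTTPS default port, while a change of protocol or host is enough to make two URLs cross-origin.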
A common mistake we often see is developers opening up the SOP by implementing a CORS policy which allows all HTTP verbs regardless of origin. This increases the risk of the application becoming vulnerable to CSRF attacks.
It’s important to note that CORS is a client-side, browser-only protection and not a “firewall” on the server-side. By using a “Backend For Frontend” architecture as described in previous articles you avoid these kinds of issues, since all traffic comes from the same origin.
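A safer alternative to a wildcard policy is to reflect only explicitly trusted origins. The sketch below is a hypothetical server-side helper (the origin list and verb set are assumptions for illustration):

```python
# Hypothetical allow-list; in practice this comes from configuration.
TRUSTED_ORIGINS = {"https://app.example.com"}

def cors_headers(request_origin: str) -> dict:
    """Return CORS response headers only for explicitly trusted origins."""
    if request_origin in TRUSTED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,   # never "*"
            "Access-Control-Allow-Methods": "GET, POST",     # only verbs the API needs
            "Vary": "Origin",  # prevent caches from reusing the header across origins
        }
    return {}  # unknown origin: no CORS headers, the browser blocks the read

print(cors_headers("https://evil.example"))  # {}
```

Remember that this only governs what the *browser* lets scripts read; the API itself must still enforce authentication and authorization on every request.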
Over time all major browsers have had weaknesses and vulnerabilities, sometimes even in their implementation of SOP. Vulnerabilities and security issues aside, inconsistencies between browsers are something that most developers have struggled with. Such inconsistencies are in general harmless, for example when they concern how a specific element is rendered. But sometimes the differing behavior gives rise to serious issues which are hard to identify, since there are usually not enough resources for in-depth testing in all supported web browsers.
One example of this is how browsers have changed their interpretation of the SameSite cookie attribute values Strict, Lax and None during the last few years.
The SameSite cookie attribute offers a defence against CSRF attacks in most modern browsers. But the fact that different browsers handle it differently motivates keeping CSRF protections that are browser independent, such as “double-submit cookies” and user reauthentication, as mentioned in the section Secure sessions in Clients and Sessions.
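The double-submit cookie pattern can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea: the server issues a random token both as a cookie and as a value the page sends back, and the two must match:

```python
import hmac
import secrets

def issue_csrf_token() -> str:
    # Set this value both as a cookie and in the rendered page
    # (e.g. a hidden form field or a value the SPA puts in a header).
    return secrets.token_urlsafe(32)

def csrf_ok(cookie_token: str, submitted_token: str) -> bool:
    # A cross-site attacker can force the cookie to be *sent*,
    # but cannot read it, so they cannot supply a matching copy.
    # compare_digest avoids timing side channels.
    return hmac.compare_digest(cookie_token, submitted_token)

token = issue_csrf_token()
print(csrf_ok(token, token))     # True
print(csrf_ok(token, "forged"))  # False
```

Because the check relies only on the attacker's inability to read the cookie, it works regardless of how a particular browser interprets SameSite.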
Also note that in 2020 both Google and Apple made it clear that they want to make it harder for third-parties to track users between different sites, which will have an impact on solutions relying on third-party cookies. Read more at https://webkit.org/blog/10218/full-third-party-cookie-blocking-and-more/ or https://blog.chromium.org/2019/10/developers-get-ready-for-new.html
Even if you have:
- built a secure API, with secure infrastructure and environment, according to the principles of the previous articles, and
- users who only run modern and up-to-date browsers, as previously discussed in this article,
you may still be vulnerable to Cross-Site Scripting (XSS) attacks, which, for example, can make it possible for an attacker to bypass access control and access data they should not be allowed to.
Implementing strict input validation in our API is a good first line of defense against XSS, but it’s in no way enough; it only limits the attack vectors. The solution to the root cause of XSS is proper, context-aware output encoding.
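“Context-aware” means the same value must be encoded differently depending on where it ends up. A minimal sketch using Python’s standard library (the surrounding markup is invented for illustration):

```python
import html
from urllib.parse import quote

user_input = '<script>alert(1)</script>'

# HTML element context: escape <, >, & so the payload renders as text
html_body = f"<p>{html.escape(user_input)}</p>"

# HTML attribute context: also escape quotes so the value cannot break out
attr = f'<div title="{html.escape(user_input, quote=True)}"></div>'

# URL query context: percent-encode so the value stays a single parameter
url_part = f"/search?q={quote(user_input)}"

print(html_body)  # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Using HTML encoding in a URL context (or vice versa) is itself a common source of XSS, which is why templating frameworks that encode per context are preferable to manual escaping.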
Note the similarity to an injection attack against the server, e.g. SQL injection (SQLi), and how this relates to Domain-Driven Design (DDD) and security (DDSec). This perspective helps you identify trust boundaries, and where you need to do output encoding and input validation.
See more on this in Secure APIs by design and the book Secure by Design.
While the solution to XSS is spelled “output encoding” it’s important to adhere to the principles of defense in depth and use multiple layers of protection. To do this we use a Content Security Policy (CSP), which is supported by all modern browsers. A CSP, just like CORS, is a client-side protection mechanism that doesn’t add any server-side security.
Implementing a strict CSP might seem like a daunting task. Tools such as Google’s CSP Evaluator can help you find weaknesses in your policy: https://csp-evaluator.withgoogle.com/
Adrian Bjugård, Omegapoint
I often see developers adding unsafe-eval to their CSP to make certain dependencies work. In such cases one should perhaps reconsider the choice of dependency, and make sure that it’s used in a secure manner.
Another common mistake is disabling the CSP to ease the development process. Unfortunately such changes have a habit of finding their way into production. Keep track of what your CSP looks like in production; perhaps there could be a system test verifying that it doesn’t include unsafe-eval.
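Such a check is simple to automate. A hypothetical sketch of a guard that a system test could run against the production policy (the exact strictness criteria are assumptions; adapt them to your own policy):

```python
def csp_is_strict(csp: str) -> bool:
    """Reject policies weakened for development convenience."""
    # Weakened source expressions that defeat the purpose of the CSP
    if any(banned in csp for banned in ("unsafe-eval", "unsafe-inline")):
        return False
    # Expect a deny-by-default baseline
    return "default-src 'none'" in csp

print(csp_is_strict("default-src 'none'; script-src 'self'"))        # True
print(csp_is_strict("default-src 'self'; script-src 'unsafe-eval'")) # False
```

In practice the test would fetch the live `Content-Security-Policy` response header and fail the build or alert if the check returns False.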
CSP is a security standard which lets applications define a list of trusted sources from which the browser is allowed to fetch content. The CSP is made up of different “directives” for different content types, which makes it possible to specify separate policies for, for example, scripts and fonts. One should strive towards implementing a strict CSP according to least privilege: set the default directive “default-src” to “none” to disable all external content, and then specifically allow the required sources for each content type. By designing your system to only fetch resources from sources you control, you can simplify the process of defining a strict CSP. For more information on CSP see https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
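As a concrete illustration of the deny-by-default approach, the header value can be assembled from per-directive allow-lists. The helper and the font CDN host below are hypothetical:

```python
def build_csp(directives: dict) -> str:
    """Join per-directive source lists into a CSP header value."""
    return "; ".join(
        f"{name} {' '.join(sources)}" for name, sources in directives.items()
    )

csp = build_csp({
    "default-src": ["'none'"],   # deny everything by default (least privilege)
    "script-src":  ["'self'"],   # only scripts from our own origin
    "style-src":   ["'self'"],
    "font-src":    ["https://fonts.example.com"],  # hypothetical font host
})
print(csp)
# default-src 'none'; script-src 'self'; style-src 'self'; font-src https://fonts.example.com
```

Each content type then has to be explicitly opened up, which makes every exception to the policy visible and reviewable.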
It’s better to start with a strict policy and update your CSP often than to start with a permissive policy that allows content to be loaded from any source.
Note that in a shared hosting environment you are not in full control of, the source value ‘self’ allows any script served from the same origin as your application, including scripts you did not publish, to run as part of it.
Don’t be fooled into thinking that a strict CSP is enough to protect your application against XSS. A CSP is great for reducing the attack surface when some part of the application does not handle output encoding correctly, but it does not offer a complete defence against XSS.
It is possible to run a CSP in report-only mode: violations of the policy are reported to a developer-specified URI, but nothing is blocked. This feature is especially useful when first implementing a CSP, since you can test it without impacting your users. Handling CSP reports in production is another challenge. There may be a lot of reports with many false positives, caused by anything from actual XSS attacks to browser plugins, which makes it hard in a high-volume production environment to identify whether a client is under attack or not.
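The two modes differ only in which header carries the policy. A minimal sketch of the rollout (the `/csp-reports` endpoint is a hypothetical name):

```python
policy = "default-src 'none'; script-src 'self'; report-uri /csp-reports"

# Stage 1: observe violations without blocking anything
report_only = {"Content-Security-Policy-Report-Only": policy}

# Stage 2: once the reports look clean, switch to enforcement
enforced = {"Content-Security-Policy": policy}

print(report_only)
```

Keeping `report-uri` in the enforced policy as well gives you ongoing visibility into where the policy blocks content in production.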
Apart from XSS protection through CSP directives there are a number of other security-focused HTTP headers, such as Referrer-Policy and Permissions-Policy (formerly Feature-Policy), that can be used to further increase the client-side security of your web application.
A great tool for testing the protection enabled in your web application through HTTP headers is https://securityheaders.com/
All in all, modern browsers offer many mechanisms for protecting the user, and they form part of the architecture when we build security in multiple layers (defense in depth).
Sometimes there might be a need to compromise between availability, confidentiality and integrity. One example is finding a balance when meeting requirements on the ability to analyze user behavior. There are many tools, such as “tag managers”, which are very powerful and allow detailed analytics of individual user sessions. This might be perfect from an analytics point of view, but it adds several security challenges:
- The scripts that make this possible are included from a third-party. Can this party be trusted? Who guarantees that only your own scripts can be included?
- The scripts need access to manipulate the DOM. This usually happens through some kind of eval construct. What risks are introduced by opening up the CSP with “unsafe-eval” or “unsafe-inline”?
- The scripts, even when tool-generated with a low risk of mistakes, are often created outside of the regular test and quality process and deployed straight into production. Perhaps there’s also a lack of strong authentication and version control for those creating such scripts. How is the quality of these scripts guaranteed?
- Over time it’s hard to keep track of which data is stored at third parties; it’s often up to the user of the tool to “hide” sensitive data instead of only allowing non-sensitive data. Who ensures GDPR compliance? The data may be extremely sensitive, since tag managers might also record data that the user never intended to send to your backend.
It’s important to be aware of these properties and answer the questions above before starting to use services like this.
The browser is an attractive environment for delivering applications, and for most systems it is, for good reasons, the first choice. It is still important to understand the browser’s security model and restrictions. Consider your choice of client: for some applications with high security requirements, a web application might not be the best choice.
The next article concludes this series and tries to sum it all up.
See Defence in Depth for additional reading materials and code examples.
More in this series:
- Defence in Depth: Identity modelling (part 1/7)
- Defence in Depth: Claims-based access control (part 2/7)
- Defence in Depth: Clients and sessions (part 3/7)
- Defence in Depth: Secure APIs (part 4/7)
- Defence in Depth: Infrastructure and data storage (part 5/7)
- Defence in Depth: Web browsers (part 6/7)
- Defence in Depth: Summary (part 7/7)