Defence in Depth: Infrastructure and data storage (part 5/7)
19 September 2023

The first three articles covered modelling identity and the steps necessary to retrieve an access token. The fourth article showed how to validate an incoming request and build fine-grained access control for our API.
In this article we will discuss the infrastructure necessary to deploy and operate the system we’ve described in the previous articles. We will also cover some important aspects of secure data management.
Infrastructure is a very broad topic, and based on our experience we have chosen to focus on the following sub-topics:
- Minimize public exposure of services and functions
- Employ end-to-end encryption using TLS for all traffic
- Secure usage of third-party components
- Secure data management
Minimize public exposure
One important aspect of defense in depth and the principle of least privilege is to not expose more than what is necessary. If your API is not intended to be used by public clients there’s no reason it should be accessible from the internet. Minimizing exposure is all about reducing the potential attack surface from public networks. One way of implementing such restrictions is through perimeter defenses such as firewalls and gateways. While we should never trust perimeter defenses alone, they add a layer of defense and can lower the severity of security issues further down the line.
Can I trust the framework I use for API development?
It’s important to remember that the developers of the frameworks you build your API on may not have security as a priority. In many cases the framework vendor recommends using an external product with a focus on security as a first line of protection against incoming traffic, instead of relying on the bundled reverse proxy.
Different terms and products in this area have overlapping functionality. Common to all of them is the ability to improve our defense against simpler, often scripted or automated, attacks.
A classic firewall gives us good baseline protection against unintentional exposure of services in our infrastructure, such as FTP servers, SMB shares, etc. An attacker scanning for open ports and services with known vulnerabilities will have a much harder time gaining a foothold in our system if a correctly configured firewall with a strict policy is the first defense against incoming traffic.
A “Web Application Firewall”, often abbreviated as “WAF”, inspects incoming HTTP requests and through a more or less intelligent rule set can detect common injection attacks, insecure HTTP headers and some cases of XSS.
An “Application Gateway” usually includes additional functionality that is useful from a security perspective, such as rate limiting for individual clients. It may also allow egress control, i.e. limiting outgoing traffic to a set of known valid addresses, which makes it more difficult for an attacker with a foothold inside the system to exfiltrate data.
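To make the rate-limiting idea concrete, here is a minimal sketch of a per-client token bucket, the mechanism many gateways use. Everything here is illustrative: the capacity, refill rate and client identifier are assumptions, and a real gateway enforces limits consistently across many nodes.

```typescript
// Minimal per-client token bucket (illustrative sketch only).
const buckets = new Map<string, { tokens: number; last: number }>();
const CAPACITY = 10;       // maximum burst size (assumed value)
const REFILL_PER_SEC = 5;  // sustained requests per second (assumed value)

function allowRequest(clientId: string): boolean {
  const now = Date.now() / 1000;
  const bucket = buckets.get(clientId) ?? { tokens: CAPACITY, last: now };
  // Refill tokens in proportion to the time elapsed since the last request.
  bucket.tokens = Math.min(CAPACITY, bucket.tokens + (now - bucket.last) * REFILL_PER_SEC);
  bucket.last = now;
  buckets.set(clientId, bucket);
  if (bucket.tokens < 1) return false; // over the limit: reject (e.g. HTTP 429)
  bucket.tokens -= 1;
  return true;
}
```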
As the products listed above act as the first line of defense for your system it’s important to consider their ability to protect against DDoS attacks.
Following the principle of least privilege you should disable all services and functions not in use in your system. This is especially important when using virtual machines or bare metal servers compared to cloud native services. Servers must also be patched whenever new security updates are made available by the vendor.
Depending on your organization, cloud native services may be preferable to managing your own VMs or bare metal servers. A cloud service provider with a strong security profile, and with experience and competence regarding their own products, may be better equipped to do this security work than your own organization.
Pontus Hanssen, Omegapoint
Pay extra attention to limiting the exposure of data sources in the cloud. In many cases data sources such as database servers are exposed to the public internet even though the cloud provider offers several types of network protection. We often find this during penetration tests and security reviews.
Read more about penetration tests and security reviews at Offensive application security.
Implement network segmentation to make it difficult for an attacker with access to one part of your system to move laterally to other parts of the system. Also make sure that all user accounts with access to your infrastructure use strong passwords and multi-factor authentication (MFA).
An important security aspect that is often overlooked is user account management. Former employees, or employees that have moved to a different assignment within the organization should no longer have access to the infrastructure.
All accounts should be personal to enable traceability of who performed a state-changing operation in the system. Traceability and audit logging are important factors when conducting forensics and root-cause analysis (RCA) after a security incident or breach.
Rotating secrets and credentials to services is important for the same reasons. A person no longer working with the system may have kept copies of, or memorized, old passwords to databases and other services. See the section on secure data management below for more information.
Use TLS end-to-end
All traffic in your network should be protected with TLS. It’s important to understand what encrypting traffic with TLS gives us, and what it does not give us.
With TLS we get:
- Confidentiality
- Integrity
- Authentication of the recipient of a request (the server) on the client side
- Possibility for the recipient to authenticate the client through Mutual TLS (mTLS), which requires client certificates (see the sketch below this list)
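As a sketch of the mTLS case above, the following Node.js server demands a certificate from every client and rejects connections where the certificate is not signed by a CA we trust. The file paths are placeholders and error handling is omitted.

```typescript
import https from "https";
import fs from "fs";
import { TLSSocket } from "tls";

const server = https.createServer(
  {
    key: fs.readFileSync("server.key"),   // server private key (placeholder path)
    cert: fs.readFileSync("server.crt"),  // server certificate (placeholder path)
    ca: fs.readFileSync("client-ca.crt"), // CA that issues client certificates
    requestCert: true,                    // ask the client for a certificate
    rejectUnauthorized: true,             // refuse clients without a valid one
  },
  (req, res) => {
    // At this point the TLS handshake has already authenticated the client.
    const peerCert = (req.socket as TLSSocket).getPeerCertificate();
    res.end(`Hello ${peerCert.subject?.CN ?? "client"}\n`);
  }
);

server.listen(8443);
```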
TLS does not give us:
- Anonymity
- Traceability
- Non-repudiation
Even if the application data is encrypted it’s possible to eavesdrop on metadata, such as SNI in the Client Hello and address information such as IP addresses from underlying protocols in the network stack. In other words, TLS does not enable full anonymity.
Non-repudiation means that it should be impossible for the one who performed an operation to deny having performed it. In other words, the system must be able to cryptographically tie a user to a given read or write operation.
TLS does not enable traceability or non-repudiation either, since it only provides encrypted communication. This needs to be handled in the application layer.
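A minimal sketch of what application-layer non-repudiation can look like: each operation record is signed with a key pair, here generated inline for illustration. For the signature to actually prove who acted, the private key must be controlled solely by the user, for example on a smart card; the user name and operation below are hypothetical.

```typescript
import { generateKeyPairSync, sign, verify } from "crypto";

// Illustration only: in a real system the private key never leaves the user.
const { privateKey, publicKey } = generateKeyPairSync("ed25519");

const entry = Buffer.from(JSON.stringify({
  user: "alice",                   // hypothetical user
  operation: "DELETE /orders/42",  // hypothetical operation
  timestamp: new Date().toISOString(),
}));

// The user signs the operation record; the signature is stored in the audit log.
const signature = sign(null, entry, privateKey);

// Later, anyone holding the public key can prove who performed the operation.
console.log(verify(null, entry, publicKey, signature)); // true
```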
Martin Altenstedt, Omegapoint
Ensure that TLS is used in production, test and development environments. With all the tooling that’s available today it’s completely reasonable to expect developers to use HTTPS during development. History has shown us that settings and feature flags that make exceptions in test and development environments have a habit of finding their way into production. That is why you should avoid feature flags that disable security during development.
In general, we consider traffic over HTTP with securely configured TLS (HTTPS) to protect against an attacker reading or modifying our requests. It protects all the application-level data included in the request, even against an attacker with full access to a node in the network. It does not protect against an attacker with access to a termination point of the TLS connection, such as the API or a client.
TLS encrypts all data in an HTTP request: method, URI, headers, body and query string. It’s important to note that the recipient address (host) may still leak through SNI in the Client Hello, DNS queries or the IP network layer.
Also make sure to redirect any traffic over clear-text HTTP to HTTPS and use HTTP Strict Transport Security (HSTS). The recommended settings are to include subdomains and set a max-age of at least 180 days, to inform web browsers that the service should only be accessed over HTTPS.
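A minimal Node.js sketch of both measures, redirecting plain HTTP and setting the HSTS header; the certificate paths are placeholders:

```typescript
import http from "http";
import https from "https";
import fs from "fs";

// Plain-HTTP listener whose only job is to redirect to HTTPS.
http.createServer((req, res) => {
  res.writeHead(301, { Location: `https://${req.headers.host}${req.url}` });
  res.end();
}).listen(80);

// HTTPS listener that adds HSTS on every response.
https.createServer(
  { key: fs.readFileSync("server.key"), cert: fs.readFileSync("server.crt") },
  (req, res) => {
    // 180 days (15,552,000 seconds), covering all subdomains, as recommended above.
    res.setHeader("Strict-Transport-Security", "max-age=15552000; includeSubDomains");
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello over TLS\n");
  }
).listen(443);
```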
The choice of algorithms in the X.509 certificates used is important. Weak algorithms and cipher suites give insufficient protection against a powerful attacker. Services like https://www.ssllabs.com/ let you test your own certificates and TLS configuration against current best practices.
Where TLS is terminated may vary depending on the system and how it is deployed. OAuth2 and OpenID Connect require strong transport layer protection using TLS 1.2 or later. We always strive for end-to-end encryption, but picking the place to terminate TLS in a large system is a hard balancing act between security, simplicity and operational complexity.
In a Kubernetes deployment it’s common for TLS to be terminated at the ingress and for all traffic inside the cluster to be in plaintext. Naturally this means that an attacker with a foothold inside the cluster will have access to all traffic, including sensitive data such as access tokens. The attack surface is greatly reduced if the cluster is limited to one system with only a few, properly controlled administrators. In such a case TLS may be terminated at the ingress, which is a far simpler technical solution than supporting uninterrupted TLS communication all the way to the pod.
In large, unsegmented networks shared by many systems and administered by a large number of people, this creates a large attack surface for an attacker. An initial foothold in a single pod compromises the confidentiality and integrity of all the systems in the unsegmented network.
Large internal networks do not offer a good enough level of integrity and confidentiality to be considered trusted. It is perfectly reasonable to treat internal networks as untrusted and “public” from the point of view of a system.
A common scenario is that the node that terminates TLS at the ingress of a network re-encrypts the traffic for internal communication. This maintains transport layer protection throughout, but note that the node responsible for the decryption is able to read all traffic and must be secured accordingly.
Note that terminating TLS before the request reaches our API makes it more difficult to handle certificate-bound access tokens since they rely on mTLS. However, it is still possible to use token binding, even if TLS is terminated before our API, if the terminating party forwards the client certificate information to the API, for example through a custom HTTP header.
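A sketch of that check, assuming the terminating proxy forwards the client certificate DER-encoded and base64-wrapped in a header we have chosen to call X-Client-Cert (both the header name and the encoding are deployment-specific assumptions). The thumbprint must match the cnf["x5t#S256"] claim defined by RFC 8705:

```typescript
import { createHash } from "crypto";

// Compare the forwarded client certificate to the access token's
// confirmation claim (cnf["x5t#S256"], RFC 8705).
function certificateMatchesToken(certDerBase64: string, cnfThumbprint: string): boolean {
  const der = Buffer.from(certDerBase64, "base64");
  // x5t#S256 is the base64url-encoded SHA-256 hash of the DER certificate.
  const thumbprint = createHash("sha256").update(der).digest("base64url");
  return thumbprint === cnfThumbprint;
}
```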
Secure usage of third-party components
All modern systems are built using components and services that we don’t develop and maintain ourselves. All these dependencies need to be continuously updated to lower the risk of introducing vulnerabilities into our system. As an example, a bug in one of the dependencies used in a web application may introduce an XSS attack vector. Long-term maintenance and the cost of keeping dependencies updated are important aspects to consider when picking new dependencies and services.
Note that this stays true throughout the whole lifetime of a system, not only during the development phase. Operations and maintenance must include updating all components that may affect the security of the system. Careful consideration of a dependency’s impact on the system’s overall security and future maintainability is an important part of our security work.
Modern web applications generally depend on a large number of third-party dependencies, some of which are fetched directly from external sources. Google Analytics and Google Tag Manager are two examples that are often fetched directly from Google’s CDN and download both data and code into your JavaScript runtime. If an attacker manages to gain control over the dependency’s source code repository or the CDN, they can inject malicious code and gain complete control over your application. The same applies to our API. How prevalent this issue is varies with the framework used to develop the application.
To reduce the risk of malicious third-party code reaching our application we can opt for fetching dependencies at build time rather than dynamically at runtime. This allows us to identify issues before they reach our systems.
Fetching at build time also makes an attack harder, since the attacker would need to control the package source over a longer period of time. If we nevertheless choose to fetch packages directly from an external source at runtime, we should verify that the package does not contain any unexpected content or known vulnerabilities.
There are many tools that can help us with static and dynamic code analysis during development, for example scanners that integrate with your build pipeline and search for known vulnerabilities in your code and its dependencies.
For packages loaded dynamically from an external source in a web application we can add protections such as Subresource Integrity (SRI). It allows us to cryptographically verify the integrity of the fetched package. The potential downside is increased administration when updating dependencies.
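For example, the digest for the integrity attribute can be computed like this (the file name is a placeholder for a known-good local copy of the script you intend to load):

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Compute the SRI digest for a known-good copy of the script.
const body = readFileSync("vendor/analytics.js"); // placeholder file name
const digest = createHash("sha384").update(body).digest("base64");

// The value goes into the script tag, e.g.:
// <script src="https://cdn.example.com/analytics.js"
//         integrity="sha384-..." crossorigin="anonymous"></script>
console.log(`sha384-${digest}`);
```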
Secure data management
All accounts used by our API to access data should follow the principle of least privilege and have the minimum required set of permissions on the specific data source. Connection strings should be rotated periodically. Passwords in connection strings should always be machine-generated with high entropy; a human should never get to pick a password to a database.
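Generating such a credential is a one-liner; a minimal sketch using Node.js:

```typescript
import { randomBytes } from "crypto";

// 32 random bytes ≈ 256 bits of entropy, far beyond any human-chosen password.
const password = randomBytes(32).toString("base64url");
console.log(password);
```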
Any user accounts used by administrators to connect to data sources should be personal and restricted as much as possible, partly to mitigate the security risk, but also to limit the potential impact of human error.
Persisted data should be encrypted, preferably with support from the database product or operating system.
Do not forget to handle database backups in a secure manner
Several of the largest attacks against IT systems have been possible due to inadequate routines for handling database backups, for example storing unencrypted backups in a public S3 bucket.
Some data is so sensitive that the encryption scheme offered by the database server is not enough, and the content must be encrypted separately by the application. A good example of this is password storage, which according to current best practices should be handled using a key-derivation function such as PBKDF2.
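A sketch of PBKDF2-based password storage using Node.js; the iteration count follows what, at the time of writing, OWASP suggests for PBKDF2-HMAC-SHA256, and should be tuned to your own hardware budget:

```typescript
import { pbkdf2Sync, randomBytes, timingSafeEqual } from "crypto";

// Iteration count per OWASP guidance at the time of writing (assumption).
const ITERATIONS = 600_000;

// Derive a storable hash: unique random salt per user, high iteration count.
function hashPassword(password: string): string {
  const salt = randomBytes(16);
  const hash = pbkdf2Sync(password, salt, ITERATIONS, 32, "sha256");
  return `${ITERATIONS}:${salt.toString("hex")}:${hash.toString("hex")}`;
}

// Verify by re-deriving with the stored parameters and comparing in constant time.
function verifyPassword(password: string, stored: string): boolean {
  const [iterations, saltHex, hashHex] = stored.split(":");
  const hash = pbkdf2Sync(password, Buffer.from(saltHex, "hex"), Number(iterations), 32, "sha256");
  return timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
}
```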
Summary
This article has covered the infrastructure used to deploy the system and the considerations required for secure data management.
Just like we need centralized logging for our application, we need monitoring and alerting for our infrastructure to be able to detect and block attacks and abuse. There are many products for monitoring and alerting on the market. Pick a solution that has good coverage and gives you relevant, automatic alerts.
Updating and maintaining a system over time is crucial from a security point of view, and includes everything from operating systems and services to the software dependencies in your application.
In the next article we’ll take a closer look at the web browser and what kind of security challenges it brings us when it comes to web applications.
See Defence in Depth for additional reading materials and code examples.
More in this series:
- Defence in Depth: Identity modelling (part 1/7)
- Defence in Depth: Claims-based access control (part 2/7)
- Defence in Depth: Clients and sessions (part 3/7)
- Defence in Depth: Secure APIs (part 4/7)
- Defence in Depth: Infrastructure and data storage (part 5/7)
- Defence in Depth: Web browsers (part 6/7)
- Defence in Depth: Summary (part 7/7)