By Shaun Ruffell
In September of 2020, a team at the Systems and Network Security Group at VU Amsterdam announced a new technique for developing exploits they called BlindSide.
BlindSide allows an attacker to mount Blind ROP-style attacks against targets that are not crash-resistant, such as the Linux kernel. What does this mean for you as a systems engineer? It means that an attacker armed with knowledge of a single kernel overflow vulnerability can use BlindSide to convert an unprivileged shell into a root shell, despite recent kernel features designed to prevent the information leaks that make these kinds of privilege escalations possible.
It is techniques like BlindSide, and other techniques still unknown, that drive Star Lab to design its products with the assumption that an attacker will gain root access. It is also why we advocate that everyone at least consider the 10 Properties of Secure Embedded Systems—data-at-rest protection, authentication and/or secure boot, hardware resource partitioning, software containerization and isolation, attack surface reduction, least privilege and mandatory access control, implicit distrust and secure communications, data input validation, secure software development and OS configuration, and finally, integrity monitoring and auditing—when designing systems. These properties can make it more difficult for an attacker to satisfy the preconditions of a BlindSide-based attack, and they can contain the damage if an attack succeeds.
First, make it hard for an attacker to satisfy the attack preconditions. To use BlindSide, an attacker needs both knowledge of a kernel overflow vulnerability and access to an unprivileged shell.
To reduce the chance of exposing a kernel vulnerability, consider both the “Secure Software Development, Build Options & OS Configuration” and “Attack Surface Reduction” properties. The most fundamental aspect of these properties is configuring the kernel to remove any features and drivers unnecessary for your application. The less code loaded into the kernel, the smaller the chance of a vulnerability. For kernel code you develop yourself, leverage the tools the kernel build system provides to reduce the probability of introducing your own vulnerability. Examples of such tools are the kernel stack checker and compiler and linker support for placing pointer-containing structures in a read-only data segment.
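As a sketch of what this looks like in practice, the kernel configuration fragment below disables a few example subsystems (the right list depends entirely on your application) and enables real Kconfig hardening options that correspond to the tools mentioned above:

```
# Attack surface reduction: drop subsystems the application never uses
# (illustrative examples only — audit your own feature set)
# CONFIG_BLUETOOTH is not set
# CONFIG_USB_STORAGE is not set
# CONFIG_DEBUG_FS is not set

# Hardening options that raise the cost of an overflow bug
CONFIG_STACKPROTECTOR_STRONG=y   # stack canaries in kernel functions
CONFIG_FORTIFY_SOURCE=y          # bounds checks on str*/mem* operations
CONFIG_STRICT_KERNEL_RWX=y       # enforce W^X on kernel text and rodata
```

For your own kernel code, the related `__ro_after_init` attribute lets you mark pointer-containing structures that are written once at boot and then mapped read-only for the life of the system.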
Next, consider ways to reduce the probability of exposing an unprivileged shell to an attacker. This could be as simple as applying OS configuration to remove the shell from production systems. If that is not possible, consider “Least Privilege & Mandatory Access Control” policies to ensure that unprivileged software running on the system does not have the permissions needed to run a shell. You might also apply “Software Containerization & Isolation” and “Hardware Resource Partitioning” to constrain non-critical code so that even if an attacker exploits it, they cannot get a shell on the virtual machine containing the more critical functions. For attackers with physical access to the system, “Data-at-Rest Protection” and “Authentication and/or Secure Boot” can make it difficult to change the software configuration offline, sidestepping the run-time protections and partitioning you’ve set up.
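One concrete way to express least privilege for a userspace service is through its service manager. The hypothetical systemd unit excerpt below (the service name is invented; the directives are real systemd options) denies the service the building blocks it would need to hand an attacker a useful shell:

```
# Excerpt from a hypothetical sensor.service unit file
[Service]
User=sensor                  # dedicated unprivileged account
NoNewPrivileges=yes          # setuid/setgid binaries gain nothing
ProtectSystem=strict         # OS file systems are read-only to the service
PrivateDevices=yes           # no access to raw device nodes
SystemCallFilter=@system-service   # seccomp allow-list of benign syscalls
SystemCallFilter=~@privileged @mount  # explicitly deny privileged groups
```

Even if the `sensor` process is compromised, the seccomp filter and read-only view of the OS leave an attacker little room to spawn and use an interactive shell.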
Even if you've followed all the best practices to secure your system, it is still prudent to assume an attacker can exploit it.
For many attacks, root access is the ultimate goal, as it can give the attacker complete control over your system. This does not have to be the case.
Here you can apply “Least Privilege & Mandatory Access Control” so that even a user with root shell access cannot read or modify files outside the protection domains defined by the mandatory access control policies.
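A small SELinux type-enforcement sketch illustrates the idea. The type and domain names below are invented for this example; the point is that under a default-deny policy, a rule must exist before any domain—including a root shell’s domain—can touch the protected files:

```
# Hypothetical policy: key material only the key manager may touch
type keystore_t;    # label applied to protected key files
type keymgr_t;      # domain of the key-manager process

allow keymgr_t keystore_t:file { read write open getattr };

# No rule grants shell domains access to keystore_t, so a root
# shell's reads and writes are denied by the default-deny policy.
```

Running the shell as UID 0 changes nothing here: the kernel consults the policy, not the UID, before granting access.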
“Software Containerization & Isolation” and “Hardware Resource Partitioning” can be applied again since you can place critical functions in other virtual machines, pinned to specific cores with reserved CPU cache.
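As a sketch of the pinning and cache reservation, the fragment below combines a kernel command line with Intel RDT’s `resctrl` interface; `$CRITICAL_VM_PID` is a placeholder for the critical VM’s host process, and the cache-way mask will differ per CPU:

```shell
# Kernel command line: isolate cores 2-3 for the critical VM
#   isolcpus=2,3 nohz_full=2,3 rcu_nocbs=2,3

# Pin the critical VM's threads to the isolated cores
taskset -cp 2,3 "$CRITICAL_VM_PID"

# Reserve half of the L3 cache ways for the critical VM via resctrl
mount -t resctrl resctrl /sys/fs/resctrl
mkdir /sys/fs/resctrl/critical
echo "L3:0=f0" > /sys/fs/resctrl/critical/schemata   # ways 4-7 only
echo "$CRITICAL_VM_PID" > /sys/fs/resctrl/critical/tasks
```

Reserving cache ways matters for attacks in the BlindSide family in particular, since they lean on cache side channels to observe the results of speculative probes.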
A root user who has exploited one of the less-critical processes still cannot affect the properly isolated critical processes, which are in turn easier to inspect to ensure they apply “Implicit Distrust & Secure Communications” in their interoperation with less-critical processes and devices.
Finally, consider ways you can apply “Integrity Monitoring & Auditing” so you or the system can respond in real time to an attacker who is either probing the security perimeter or attempting to move laterally from one component of the system to another.
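One lightweight way to start is with Linux audit rules. The fragment below uses real `auditctl` rule syntax; the file name and key labels are illustrative, and the watched paths should match the lateral-movement indicators that matter on your system:

```
# Hypothetical /etc/audit/rules.d/probing.rules
# Alert on tampering with kernel modules and boot artifacts
-w /lib/modules -p wa -k kernel_tamper
-w /boot -p wa -k kernel_tamper

# Record privileged credential changes (a lateral-movement indicator)
-w /etc/sudoers -p wa -k priv_change

# Log every use of the module-loading system calls
-a always,exit -F arch=b64 -S init_module,finit_module -k kernel_tamper
```

Events tagged with these keys can then feed a monitoring agent that raises an alert, or a response policy that quarantines the offending partition, while the probing is still in progress.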
Systems security is a seemingly never-ending game of cat and mouse. Attackers find novel exploits and defenders devise countermeasures to these exploits. Attackers are then incentivized to research and develop new exploits. BlindSide is more evidence that the game continues to this day. However, with careful application of the 10 Properties of Secure Embedded Systems, you can give yourself a chance of not being BlindSided by this or even the next exploit.
*Post originally published at Starlab.io.*