By Bill Graham
The CWE/SANS Top 25 is fairly well known among security experts but might be overlooked by embedded developers, since the list covers all types of systems and programming languages. Developers are fully aware of the quality impact of many of these errors; however, they may be less knowledgeable about the security implications. The classic example is the buffer overflow: all programmers know it is a bad thing that can cause a program to crash or become unresponsive. However, many developers fail to realize that an attacker can trigger these errors to execute code, reveal data, or mount a denial-of-service attack.
The Top 25 list is a “who’s who” of dangerous errors that are the most commonly reported security vulnerabilities. This year’s RSA conference theme was ‘Security in Knowledge,’ and Wind River couldn’t agree more: mitigating the risk of these errors is the first step to improving your device’s security. In this post I will point out the 10 errors from the Top 25 that embedded developers most need to be aware of. In a subsequent post I’ll discuss mitigation strategies and the role of automated tools in detecting and removing these errors.
The Big 10 Coding Errors and Impact for Embedded Developers
Some of the Top 25 are more applicable to embedded developers than others and arguably have a different priority. Also, some of the entries are coding errors while others are configuration errors (or a combination of both). Here we’ve identified Wind River’s choice of the vulnerabilities from this list that are most likely to impact embedded developers. The “Big 10” errors that embedded developers are likely to encounter are as follows.
1. Buffer Overflow
The buffer overflow is the most common coding error embedded developers are likely to run into, and it can have serious security consequences if exploited. A buffer overflow is a runtime error in which an index or pointer accesses memory outside the bounds of a buffer, array, or string. Embedded developers need to be particularly aware of buffer overflow errors since much of their coding is done in C or assembler, where such errors are commonplace because there is no strict boundary checking. Also, embedded developers may not be in the habit of checking input data (even from within the system, from disk or flash storage, for example) to ensure it isn’t out of bounds.
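As a minimal sketch of the defensive pattern, a copy routine can check the input length against the destination size before writing anything; the function name and return convention here are illustrative, not from the post:

```c
#include <stddef.h>
#include <string.h>

/* Sketch of a bounds-checked string copy (illustrative names).
 * Returns 0 on success, -1 if the input would overflow the buffer. */
int copy_message(char *dst, size_t dst_len, const char *src)
{
    size_t n = strnlen(src, dst_len);   /* never scan past dst_len */
    if (n >= dst_len)                   /* no room for the terminator */
        return -1;                      /* reject instead of truncating */
    memcpy(dst, src, n + 1);            /* copy including '\0' */
    return 0;
}
```

Rejecting over-long input outright, rather than silently truncating, also makes the error visible to the caller instead of hiding it.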
2. Execution with Unnecessary Privileges
This is a typical Linux/Unix problem: tasks, threads, or processes running at a high privilege level (e.g. root in Linux) have access to any file and can execute any command or system API. If such a process is compromised, for example to execute injected code, that code runs at the highest system privilege level. Only essential code should run at the high privilege level (or kernel level in an RTOS), since vulnerabilities at this level do the most damage.
3. Missing Encryption of Sensitive Data
Critical data in your system should never be stored or transmitted in clear text. Any data stored in files or on flash memory, or transmitted over the network, should be encrypted. Private user information, system control data, and other sensitive data all need encryption. Embedded systems have, in the past, relied on their isolation from public networks and their relative obscurity to keep data safe. This is no longer acceptable; any device that can potentially store sensitive information should use encryption. For example, home medical devices should store all patient data in encrypted form, and any transmission of this data over a mobile or home network should be encrypted.
4. Reliance on Untrusted Inputs in a Security Decision
Any input from outside the system must be considered untrusted. Feeding this data into critical functions can have serious security ramifications, and if it is used in security decisions within the code, it can be exploited. Although embedded systems may not take human input directly, any data from files, networks, or peripheral devices is untrusted and needs to be validated before use.
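The validation step can be sketched as a small check applied before an externally supplied value reaches any decision logic; the field name and limits below are assumptions made for the example:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch: treat a value decoded from a network packet as untrusted
 * and validate it against known physical limits before use.
 * The limits are illustrative, not from the original post. */
#define TEMP_MIN_C  (-40)
#define TEMP_MAX_C  125

/* Returns true only if the raw field is an in-range reading. */
bool validate_temperature(int32_t raw, int32_t *out)
{
    if (raw < TEMP_MIN_C || raw > TEMP_MAX_C)
        return false;       /* reject rather than silently clamp */
    *out = raw;
    return true;
}
```

The same pattern applies to lengths, offsets, enum discriminants, and any other field read from a file, packet, or peripheral.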
5. Missing Authentication for Critical Function
Authentication should be required for critical functions in the system. Creating or updating sensitive information should be done on behalf of an authenticated user, for example. Likewise, any function that has critical system impact or could consume a large amount of system resources should require authentication. Embedded systems may not have authentication in place per se, but if critical functions are performed on behalf of user input (e.g. from an HMI), then that user should be authenticated.
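The gating itself can be as simple as refusing a critical command unless the requesting session has authenticated; the struct and function names below are invented for illustration:

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal sketch of gating a critical operation behind an
 * authenticated session (names are hypothetical). */
struct session {
    bool authenticated;
};

/* Performs the critical action only for authenticated callers.
 * Returns true if the action was carried out. */
bool request_factory_reset(struct session *s, bool *device_reset)
{
    if (s == NULL || !s->authenticated)
        return false;            /* refuse unauthenticated callers */
    *device_reset = true;        /* stand-in for the real reset logic */
    return true;
}
```

The important property is that the check sits inside the function that performs the action, not only in the UI layer that calls it.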
6. Download of Code Without Integrity Check
Embedded systems are likely to need patching and upgrades out in the field. Patches should be applied only with validated code from the manufacturer. Without proper integrity checks of upgrades or downloaded applications, it’s possible to insert malware into the system. To combat this, downloaded applications and code should be signed (and, where appropriate, encrypted) by the manufacturer.
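One small but easy-to-get-wrong piece of such an integrity check can be sketched in C: comparing a computed image digest against the expected value in constant time, so response timing doesn't leak how many leading bytes matched. The digest itself would come from a real cryptographic library (e.g. a SHA-256 implementation), which is assumed here:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch: constant-time comparison of two digests of length len.
 * Accumulates all byte differences with no early exit, so the
 * running time does not depend on where the first mismatch is. */
int digest_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];     /* OR in any difference */
    return diff == 0;
}
```

A plain `memcmp` typically returns at the first mismatching byte, which is what makes it unsuitable for comparing secrets or signatures.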
7. Use of Hard-Coded Credentials
This is, unfortunately, a commonplace problem in embedded systems: relying on hard-coded credentials to access a system’s user interface or network-based shell. These are often manufacturer back-door access methods for a device. Unless a manufacturer or system integrator changes the defaults (or forces initialization), these hard-coded credentials end up in the field. Manufacturers must configure embedded systems to avoid hard-coded credentials, especially when devices are deployed.
8. Use of a Broken or Risky Cryptographic Algorithm
Several factors inhibit modern encryption in embedded systems: the processing power needed to encrypt and decrypt data, especially in real time; OS support for up-to-date cryptographic algorithms; and long product life spans during which outdated techniques remain in use. Cryptographic libraries in embedded systems need to be kept up to date as technology changes, and devices need to be configured to use the strongest encryption that is computationally feasible. Also, embedded system processing power has increased significantly, even in low-power devices, making the overhead of cryptography less onerous.
9. Improper Restriction of Excessive Authentication Attempts
Although not unique to embedded systems, failing to limit the number of authentication attempts (e.g. user login attempts for a controller user interface) can be particularly dangerous here. A combination of hard-coded credentials and weak encryption means that brute-force authentication attacks are feasible on embedded devices. Limiting the number of attempts is a simple way to thwart, or at least slow down, brute-force attacks.
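A minimal lockout policy can be sketched as a failure counter consulted on every attempt; the limit and the reset mechanism (e.g. a timeout handled elsewhere) are assumptions for the example:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch: after MAX_ATTEMPTS consecutive failures, refuse further
 * tries until the counter is reset by some other mechanism (such as
 * a cooldown timer). Names and the limit are illustrative. */
#define MAX_ATTEMPTS 5

struct auth_state {
    uint8_t failures;   /* consecutive failed attempts */
};

/* ok: whether the supplied credentials verified correctly.
 * Returns true only for a successful, non-locked-out attempt. */
bool record_attempt(struct auth_state *st, bool ok)
{
    if (st->failures >= MAX_ATTEMPTS)
        return false;            /* locked out: ignore even valid creds */
    if (ok) {
        st->failures = 0;        /* success resets the counter */
        return true;
    }
    st->failures++;
    return false;
}
```

Adding an increasing delay between attempts (exponential backoff) slows attackers further without permanently locking out legitimate users.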
10. Path Traversal
If user, file, or network input data is used to construct a path name, it is possible to traverse the device’s directory structure unless proper input checks are performed. This vulnerability can allow the leaking of private and sensitive information, the replacement of libraries and executables with malware, or the corruption of system settings. As with any data external to the system, it must be validated before use.
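A conservative check of this kind can be sketched as rejecting any externally supplied file name that is absolute or contains a traversal sequence, before it is joined to a fixed base directory; a real implementation would also canonicalize the result (e.g. with `realpath()`) and confirm it still starts with the base path:

```c
#include <stdbool.h>
#include <string.h>

/* Sketch: accept only plain relative names. Deliberately conservative:
 * any ".." substring is rejected, even in names like "a..b", which is
 * a safe over-approximation for this illustration. */
bool is_safe_filename(const char *name)
{
    if (name == NULL || name[0] == '\0')
        return false;
    if (name[0] == '/' || name[0] == '\\')
        return false;                    /* absolute paths */
    if (strstr(name, "..") != NULL)
        return false;                    /* traversal sequences */
    return true;
}
```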
Mitigating the Risk
Although the CWE/SANS Top 25 and our Big 10 list here can be daunting, there is plenty that can be done to mitigate the risk of these dangerous software errors in your embedded device. Performing a threat assessment, using specific software tools, following coding best practices, and choosing the right runtime technologies can go a long way toward improving system security. It’s essential that security be viewed as a whole-device-lifecycle issue, built in from the beginning; “bolting on” security after a product is nearly ready for production is a recipe for disaster. In subsequent posts, I’ll discuss mitigation strategies and software tool automation to make the Big 10 much less of a risk.
For additional information from Wind River, visit us on Facebook.