Five Steps to Improving Embedded System Security

By Bill Graham

Embedded device security needs to be integrated into the development lifecycle of the product rather than being an afterthought. The following are high-level guidelines that embedded systems designers should consider when addressing security. This is not a prescriptive methodology; rather, it is intended to highlight an approach that treats embedded security as a development lifecycle issue, from requirements management through architecture, design, and maintenance.


The five steps to improve embedded security are as follows:

  1. End-to-end threat assessment – evaluate the security threats to the device in the various contexts of its lifecycle: development, operation, and maintenance.
  2. Security-optimized design – make security a top-priority requirement and design consideration. Leverage modern separation and partitioning techniques, secure communications, and intrusion protection.
  3. Secure runtime selection – build your device from known secure components such as commercial-off-the-shelf (COTS) operating systems, middleware and tools.
  4. Application protection – leverage whitelisting technology to prevent malware from being installed on the device.
  5. Development lifecycle and tools – consider security to be part of the entire lifecycle of the device and plan for updates and security fixes well into the product's lifespan.

In this first post, I am going to cover the end-to-end threat assessment. The other steps will be covered in subsequent posts. 

Step 1: End-to-End Threat Assessment

Security of an embedded device can't improve until the potential threats are known. Moreover, these threats must be evaluated beyond the context of the device manufacturer to include the operator (if the device is provisioned in such a way) and the end user. Threats are described in terms of an attack vector (where the attack is perpetrated on the device) and the vulnerability it exploits (the weakness or fault in the hardware or software that allows the attack to enter the system). An example of an attack vector is network access, such as a wired Ethernet connection on the device used for communication; over that connection the device likely supplies services such as web (HTTP), FTP, SSH, or a debug agent. Examples of vulnerabilities are weak or default passwords, or coding errors such as stack or buffer overflows. An attack needs both a way to access the device (the attack vector) and a vulnerability in order to succeed. The difficult part of security design is attempting to predict and prevent these in advance.
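The vector-plus-vulnerability pairing can be sketched as a simple enumeration: an attack is viable only when an exposed entry path offers a service that has a known weakness. The vectors, services, and vulnerabilities below are made-up examples, not data from any real device.

```python
# Illustrative sketch: an attack succeeds only when an exposed attack
# vector reaches a service with an exploitable vulnerability.
# All entries here are hypothetical examples.

exposed_vectors = {"ethernet", "usb"}

# Services reachable through each physical entry path.
services = {
    "ethernet": ["http", "ssh", "debug_agent"],
    "usb": ["boot"],
}

# Known weaknesses per service (assumed for illustration).
known_vulns = {
    "http": "default admin password",
    "boot": "boots from unsigned media",
}

# Enumerate viable attacks: a reachable service paired with a known weakness.
viable = [
    (vector, svc, known_vulns[svc])
    for vector in sorted(exposed_vectors)
    for svc in services.get(vector, [])
    if svc in known_vulns
]

for vector, svc, weakness in viable:
    print(f"{vector} -> {svc}: {weakness}")
```

Note that SSH and the debug agent drop out of the list only because no weakness is recorded for them here; in a real assessment, every exposed service deserves scrutiny before being ruled out.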

A device needs to be evaluated within a large scope if a security threat assessment is to be successful. Indeed, many current security threats stem from the assumption that a device won't be used in a certain way. Stuxnet exploited the fact that PLCs were on the same network as infected desktop and laptop computers. Even though a private control system network may not be connected to the Internet, it is likely to have diagnostic or development computers connected to it at some time. In evaluating security threats it's important to consider the larger picture – the operator (e.g. a wireless network provider) and the end user (e.g. an electricity grid control center) are part of the equation.

End-to-end threat assessment of embedded devices needs to consider the following (and more):

  1. Complete product lifecycle analysis – include the developer, manufacturer, operator, distributor, retailer, and end consumer. Each of these may have an impact on device security – it’s more than just an end user problem. At this point it is important to sort out the relative priority of information assurance and cyber security: more and more embedded devices handle confidential personal, business, or government data, and protecting this data is an important security consideration.
  2. Attack vector analysis – define and describe the possible entry paths for attacks into the system. The first step is to define the physical entry paths, which may include more than network access: is it possible to compromise the device via physical access such as USB or serial ports? Once the physical entry points are defined, the attack possibilities need to be evaluated. For example, if the device provides web access via TCP/IP port 80, what vulnerabilities are possible? Does the device use a firewall? If not, which TCP/UDP ports are open? Similarly for physical port access – does the device boot from a USB drive if one is attached? An analysis is required of the possible permutations of entry points and vulnerabilities.
  3. Build a risk matrix – given the vast number of permutations possible from step 2, a risk assessment needs to be performed. What is the probability of an attack via this channel? What is the impact of exploitation via this channel? Note that it’s important that risk to data and risk to device operation be handled separately. As discussed in a previous post, information assurance and cyber security may have different priorities depending on your device and how it’s used.
  4. Create a mitigation strategy – based on the priorities in the risk matrix, identify the highest-impact and highest-probability threats and create a strategy for each. In some cases there are architectural or design choices that can mitigate many threats at once; for example, partitioning a system into secure and "less-secure" segments might be a sound strategy.
  5. Generate a security design specification – a specification should be generated based on the above assessments. It is part of the overall system design, but should be treated with high priority.
  6. Create a product lifecycle plan – an overall plan for designing, implementing, testing, and maintaining security features and mitigations needs to be part of an existing or new development plan.
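To make step 3 concrete, a minimal risk matrix can be sketched as probability-times-impact scoring, with threats ranked so that the mitigation strategy in step 4 starts from the top. The scoring scales and the threat entries below are illustrative assumptions, not a recommended scheme.

```python
# Minimal risk-matrix sketch: score each threat by probability x impact,
# then rank to prioritize mitigation. Scales and entries are hypothetical.

from dataclasses import dataclass


@dataclass
class Threat:
    vector: str          # attack vector (how the attack reaches the device)
    vulnerability: str   # weakness it exploits
    probability: int     # 1 (rare) .. 5 (frequent)
    impact: int          # 1 (negligible) .. 5 (catastrophic)

    @property
    def risk(self) -> int:
        # Simple product score; real schemes often weight data risk and
        # device-operation risk separately, as noted in step 3.
        return self.probability * self.impact


threats = [
    Threat("TCP port 80 (web UI)", "default admin password", 4, 5),
    Threat("USB port", "device boots from attached media", 2, 5),
    Threat("TCP port 21 (FTP)", "cleartext credentials", 3, 3),
]

# Rank highest-risk threats first; mitigate from the top of the list.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:2d}  {t.vector}: {t.vulnerability}")
```

A spreadsheet does the same job; the point is that the ranking, not the tooling, drives which mitigations get designed in first.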

The key message here is that security needs to be top of mind in the product development of any connected embedded system. In the next post, I'll discuss step 2 – security-optimized design.
