Linux Containers: Where Enterprise Meets Embedded Operating Environments

As more Internet of Things (IoT) services are pushed to the edge of the network or onto smaller devices, the need for flexible, connected platforms is on the rise. Open source technologies are emerging to fill the gap between embedded and enterprise platforms for IoT, and containerization is an area of growing interest for manufacturers and service providers alike.

Containers leverage a stable ecosystem and mature technologies to meet the performance and resource constraints of a modern embedded platform. Although container technology has been around for a while and is available in virtually any modern Linux kernel (e.g. 3.14+), the frameworks and management tooling around it are undergoing major transformations. Managed containers enable a rich runtime that looks, feels, and behaves like an enterprise-class virtualized environment, yet respects the unique constraints of the system. Using facilities provided by the host kernel, a container offers a lightweight "virtual" environment that groups a set of processes and isolates their resources (memory, CPU, disk, and so on) from the host and from other containers.
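As a rough illustration of those kernel facilities (a generic sketch, not specific to any one distribution or runtime), the following C program uses clone() to start a child in new PID, mount, and UTS namespaces, so its process IDs, mounts, and hostname are isolated from the host. The hostname "container-demo" and the shell it hands over to are arbitrary choices for the example; error handling is trimmed and it must run with sufficient privileges.

```c
/* Minimal sketch of the namespace primitives behind Linux containers. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

static int child_main(void *arg)
{
    (void)arg;
    /* Only visible inside the new UTS namespace, not on the host. */
    sethostname("container-demo", strlen("container-demo"));
    /* In the new PID namespace this prints pid 1. */
    printf("inside namespaces: pid=%d\n", getpid());
    execlp("sh", "sh", (char *)NULL);   /* hand control to a shell */
    perror("execlp");
    return EXIT_FAILURE;
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);
    if (!stack) { perror("malloc"); return EXIT_FAILURE; }

    /* CLONE_NEW* flags create the isolated views; SIGCHLD lets us wait(). */
    pid_t pid = clone(child_main, stack + STACK_SIZE,
                      CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWUTS | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); return EXIT_FAILURE; }

    waitpid(pid, NULL, 0);
    return EXIT_SUCCESS;
}
```

Container runtimes build on exactly these calls, adding image management, cgroup limits, and networking on top.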

With the right tuning, the platform itself can embrace and extend cloud, container, and microservice technologies for both applications and system services. Given their current infrastructure, however, service providers face a number of questions: keep the same platforms with the existing userspace unchanged, modify an existing userspace, or build a new userspace from scratch. Beyond deciding what the container's userspace should contain, how it is constructed (from source or from binaries, via scripts or a build system, on target or off target) is an equally important criterion.

Enabling building blocks

We propose a different approach, in which the base platform is designed specifically for container support and fine-tuned for COTS hardware platforms. This is the basic concept behind Wind River Pulsar Linux, a small, high-performance, secure, and manageable Linux distribution designed to simplify and accelerate embedded and IoT development projects.

The ability to run containers of any format is a key aspect of Pulsar Linux, which means that Docker is supported from within end-user containers. The co-existence of VMs and containers is likewise fundamental to the system: containers can launch VMs, containers can run within VMs, and so on.

Allowing the most appropriate solution to be used, rather than enforcing a particular technical choice or methodology, is a pillar of the platform.

Pulsar Linux uses containers to deliver platform services and overall system functionality such as:

  • Independent updates of individual containers
  • Partitioning of resources among different units of functionality for optimal device usage, and application isolation to improve overall system security (see the sketch after this list)
  • Scalability of the system functionality, both upward and downward, so that users can install a development environment during development but easily remove it for deployment
  • Flexibility so that the same system can be used for development and testing as well as for deployed systems, without excessive release costs
  • Interoperability of application containers among multiple hardware platforms
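As a hedged illustration of the resource-partitioning point above, the following C sketch uses the generic cgroup v2 interface to cap the memory and CPU available to a unit of functionality. It is not Pulsar-specific: it assumes cgroup v2 is mounted at /sys/fs/cgroup, that the memory and cpu controllers are enabled in the parent's cgroup.subtree_control, and that it runs with sufficient privileges; the group name "demo" and the limit values are arbitrary examples.

```c
/* Sketch: partition resources by placing a process under cgroup v2 limits. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static void write_str(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); exit(EXIT_FAILURE); }
    fputs(value, f);
    fclose(f);
}

int main(void)
{
    char pid[32];

    mkdir("/sys/fs/cgroup/demo", 0755);                        /* new control group */
    write_str("/sys/fs/cgroup/demo/memory.max", "67108864");   /* 64 MB memory cap  */
    write_str("/sys/fs/cgroup/demo/cpu.max", "50000 100000");  /* 50% of one CPU    */

    /* Move the current process into the group; children inherit the limits. */
    snprintf(pid, sizeof(pid), "%d", getpid());
    write_str("/sys/fs/cgroup/demo/cgroup.procs", pid);

    /* Everything exec'd from here runs under the configured constraints. */
    execlp("sh", "sh", "-c", "cat /proc/self/cgroup", (char *)NULL);
    perror("execlp");
    return EXIT_FAILURE;
}
```

Container managers automate this bookkeeping, but the underlying mechanism for partitioning memory, CPU, and I/O among containers is the same kernel interface.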

Combining containers with the embedded DNA of Pulsar Linux delivers a solution that bridges the gap between embedded and enterprise computing. It can be extensively tuned and optimized while still reaping the benefits of a consistent base platform. We recently showcased the technology at the Cisco Live! event, and you can learn more by listening to our archived webinar replay.