Will embedded Linux development ever defragment?

By Paul Anderson


We have seen the power of open source and open standards transform much of the enterprise space, particularly in the server market. Before the success of open source in the server space, there were really only two choices: run an operating system and applications developed and maintained by a hardware manufacturer, or choose one of only a few commercial operating systems and applications that spanned different manufacturers. Hardware manufacturers optimized each operating system to best suit the hardware they sold, creating a lack of compatibility between manufacturers. Commercial enterprise operating systems, on the other hand, typically were not optimized for each hardware platform and, as a result, did not perform well. Given these scenarios, a system architect, integrator, or developer often had to make hard choices and compromises.

With the advent of open source operating systems and applications, the software landscape changed dramatically. After a slow start, there was a bloom of available operating systems and applications, but little standardization or compatibility between the different operating system, distribution, and application variants. It was nice to have a choice, but all of these options also created fragmentation. With time, open standards developed within the enterprise space, from carrier grade standards (CGL), to application environment standards (LSB), to standardized application stacks (LAMP). For a variety of reasons, there has never been a single distribution or packaging standard, but things are now similar enough that system architects, integrators, and developers have a fairly standardized environment and a rich ecosystem of hardware and software from which to choose.

Now consider the embedded community – the world of deeply embedded development has evolved in a much different manner. The basis for its software was not “the mainframe,” but rather “the microcomputer.” In the world of the microcomputer, software design is driven by the configuration of the hardware, and the application and operating system are heavily modified to meet the system requirements of a purpose-built device. The operating system was considered a necessary evil, and was scaled to fit accordingly. In the world of open source, this created a large fragmenting force, compounded by the wide variety of hardware architectures used in deeply embedded applications.

For many years, the open source community struggled to address even the basic needs of deeply embedded development. There were different Linux kernel variants, different toolchains, libraries, and packages, and many different distributions. The good news is that many of the necessary changes to the kernel, toolchain, and applications have created more portability and less fragmentation. The bad news is that there is still a long way to go. The majority of deeply embedded Linux developers still “roll their own” software. They heavily modify the distribution, packages, and applications, and very few of these changes get driven upstream into the mainline open source community. The result is very limited software reuse, limited standardization, and a fragmented commercial ecosystem. Why is this bad? Without reuse and collaboration, different people do the same work again and again, with no real differentiating value. Perhaps it means infinite job security for the average embedded developer, but it amounts to doing work that doesn’t really add a tremendous amount of value over time.

The real drivers for change in deeply embedded Linux are the emergence of SoCs, the increase in software complexity, and the massive increase in the number of embedded device applications in the market. Although many will continue to operate in a fragmented ecosystem, the need to create devices that are faster, smarter, and better, coupled with the need to stay cost-effective, will drive parts of the market toward a more standardized approach over time. That approach will allow more collaboration, more software reuse, and a more connected open and commercial ecosystem. In short, defragmentation in the deeply embedded space is necessary and inevitable. There are just too many drivers for it not to happen at some point.

I’m just heading off to the Linux Foundation Collaboration Summit where the topic of deeply embedded Linux and many other important topics will be discussed. I’m certain the dialog will be interesting and spirited, and I’ll fill you in on the details!