The fact that almost everybody in North America is either packing for the holidays or has already left gives me a chance to finally write about an exchange I had with a customer a while ago.
We were discussing how the hardware side of embedded software development had changed over the past few years. Where in 'the olden days' things would start by bolting a processor on top of a breadboard of some kind, today's development typically starts with an out-of-the-box hardware solution. That is, the processor comes integrated with memory controllers; certain peripherals such as graphics, networking, and SRIO are already on the board; and others can be added through standards such as PCI.
The second place where embedded systems have changed is the level of integration of peripherals inside the actual chip. This has extended the traditional processor into a System-on-Chip design, where some of the devices that would typically sit on a board, such as graphics, SRIO, network acceleration, and encryption, are available on the same die as the processor core(s).
Combine that with the current trend towards multi-core and you quickly realize that the landscape has changed drastically. Where in 'the olden days' integration of peripherals was the skill to master, today the skill to master is how to maximize the use of the technology available in the chip and/or on the board.
All this technology is typically pre-integrated by the silicon vendors (Intel, Freescale, Cavium, RMI) as well as the board vendors (Kontron, Curtiss-Wright, …).
Final production hardware may still be developed in-house, but it is often closely based on the out-of-the-box solution on which development started: largely the same chips, just in a different form factor.
Our discussion then turned to how the software layers in these embedded systems have been advancing in the same direction. Where large development teams used to have their own operating system, most teams now use commercial operating systems or some flavour of open source. However, the operating systems have had to adapt to these new hardware environments. Where in 'the olden days' this meant simply adding a driver to the BSP, the new hardware brings more complexity.
Take, for example, network acceleration in devices like the Freescale P4080 or Cavium's Octeon family: these require more than just an addition to the BSP, because their advanced queue and buffer managers need to be integrated into the operating systems.
The plural 'operating systems' here is no typo: many uses of these advanced devices require multiple operating systems for control and data path, either of the same type (say, VxWorks) or of different types (VxWorks and Linux, to name but two).
Running multiple operating systems on top of a single piece of silicon often requires the services of a virtualization layer, also called a hypervisor or virtual machine monitor. The virtualization layer needs to be optimized for the silicon as well: in the telecom example above, configuring the queue and buffer managers would typically be a task assigned to the virtualization layer.
I used a networking solution as an example, but many of the same properties also hold for consumer electronics, industrial control, automotive, transportation and so forth, though the System-On-Chip devices will differ of course.
Now, the customer I was talking to was explaining to me why his team was developing on top of pre-integrated and validated hardware from a COTS vendor. His main reasons were time-to-market, reduced risk, and the fact that he could make do with a much smaller hardware team and focus on what he considered his competitive strength: developing software.
The discussion then quickly shifted to the software world, and he explained that what he was looking for was pretty much the same set of properties in a pre-integrated software stack. The particular capabilities he wanted in this stack were a real-time OS, a general-purpose OS, a virtualization layer, and strong debugging and IDE integration. He did not want his team to lose time integrating different components from different vendors; he wanted a system that works out of the box and a single point of contact for questions and assistance. His team would then be tasked to build content on top of this software and hardware stack and make rapid progress, thereby outrunning his competition.
I played devil's advocate for a bit and asked him whether he was worried about 'vendor lock-in', something people often bring up in relation to fully integrated solutions. This particular gentleman had a very outspoken opinion on this: vendor lock-in in this day and age is not an issue anymore if you properly architect your software and build on top of standards. Yes, one operating system API may differ from another, but that only affects the bottom layer of your software architecture, and even there, the main principles of threads, synchronization, and communication are the same.
This answer (and the whole discussion, really) was refreshing to a software engineer with a visual modeling background like myself. I always cringe when I discover that a particular piece of software uses OS primitives all throughout the code base, much like the meatballs in a plate of spaghetti. I am more a believer that software should not look like spaghetti; it should look like lasagna: neat layers, nicely separated, each with its own well-defined function, a strongly defined interface, and downward-pointing dependencies. A nice component-based design would be even better, but I don't know how to wrap that into a food metaphor.
It was an interesting discussion, especially since Wind River, together with its partners, provides this complete, integrated solution: operating systems, a virtualization layer, and development, analysis, and debug tools on top of partner hardware.
The holidays are here and things will slow down for a few weeks, but we will be back with exciting news along the lines of the discussion I just described come January.