By Mark Hermeling
So, let's assume that you are currently on a single-core board and you would like to explore the possibilities of delivering a next-generation device using multi-core technologies. There are a lot of different ways to approach this. The most elaborate involve doing an architecture study into ways in which you could evolve your current device architecture. This study needs to be approached from both a technical and a business perspective.
The funny thing with software is that 'virtually' anything is possible, but the trick is to find approaches that are not only technically possible, but that also provide business benefits, usually in a short time frame. Part of the equation is the amount of work required to build this next generation of device.
In the 'olden days' people would spend a lot of time at the beginning of a project deciding which hardware to use. Once the hardware was decided, the software came next.
Well, embedded virtualization is one of the leading technologies to support the migration to multi-core, and embedded virtualization is a bit like hardware and a bit like software. There are three variables to decide on when planning to use embedded virtualization: how to partition or virtualize cores, memory and devices. And that is immediately where the complexity lies: there are many degrees of freedom. Doing an architecture study at the beginning of the project helps you define which parameters you are interested in, so that you can then explore the proper degrees of freedom with, for example, a proof of concept.
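To make those three degrees of freedom concrete, here is a sketch of what a hypervisor partitioning configuration might look like. The syntax and names below are invented for illustration; they are not the Wind River Hypervisor's actual configuration format:

```
# Hypothetical hypervisor configuration -- illustrative syntax only
virtual_board rtos_board {
    cores   = [0]                        # pin the existing RTOS to core 0
    memory  = 0x00000000 .. 0x0FFFFFFF   # 256 MB of private RAM
    devices = [uart0, eth0, dma0]        # pass these through directly
}
```

Each of the three lines corresponds to one of the variables above: which cores the virtual board runs on, which physical memory it owns, and which devices are assigned to it.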
So let's assume that the study has been done and we know what direction we want to go in. Now it is time to explore and do a proof of concept. Let's also assume that you are targeting an architecture that has hardware assist for virtualization; examples are Intel processors with VT-x, Freescale processors with the e500mc core, or the upcoming ARM Eagle. In these cases life is relatively easy. One can take unmodified guests, meaning unmodified operating system kernels, and run them directly on the new multi-core hardware on top of a hypervisor in a virtual board (also known as a virtual machine). In this case one would assign all devices directly to this one virtual board, and the software will run as in a native environment, but now on a virtual machine.
There may still be some changes required to the BSP; the devices on the new platform may be different, for example. However, this is by far the easiest and most straightforward way to migrate to a new multi-core-based platform. The next step is to use the additional processing cores, memory and devices to create new content.
It is easy to add another virtual board with a new operating system, maybe adding Linux next to an already existing real-time operating system. At this point, one or more of the devices may need to be shared or virtualized, and the new operating system may need to be connected to the initial virtual board using some form of IPC.
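IPC between virtual boards is commonly built on a memory region that the hypervisor maps into both guests. As a rough illustration of that idea, here is a minimal single-producer/single-consumer ring buffer in C. Everything here is a simplified sketch: the static array stands in for memory the hypervisor would actually export at a known physical address, and a real cross-core implementation would also need memory barriers and a way to signal the peer (typically a virtual interrupt):

```c
#include <stdint.h>
#include <string.h>

#define RING_SLOTS 16
#define SLOT_SIZE  64

/* Layout of the shared region. On real hardware this struct would sit
 * at a physical address visible to both virtual boards; here a static
 * instance stands in for it. */
typedef struct {
    volatile uint32_t head;              /* written only by the producer */
    volatile uint32_t tail;              /* written only by the consumer */
    uint8_t slots[RING_SLOTS][SLOT_SIZE];
} ipc_ring_t;

static ipc_ring_t shared_ring;           /* stand-in for hypervisor-mapped memory */

/* Producer side: copy a message into the next free slot.
 * Returns 0 on success, -1 if the message is too big or the ring is full. */
int ipc_send(ipc_ring_t *r, const void *msg, uint32_t len)
{
    if (len > SLOT_SIZE)
        return -1;
    uint32_t head = r->head;
    if (((head + 1) % RING_SLOTS) == r->tail)
        return -1;                       /* ring is full */
    memcpy(r->slots[head], msg, len);
    r->head = (head + 1) % RING_SLOTS;   /* publish the slot last */
    return 0;
}

/* Consumer side: copy the oldest message out of the ring.
 * Returns 0 on success, -1 if the ring is empty. */
int ipc_recv(ipc_ring_t *r, void *buf, uint32_t len)
{
    uint32_t tail = r->tail;
    if (tail == r->head)
        return -1;                       /* ring is empty */
    if (len > SLOT_SIZE)
        len = SLOT_SIZE;
    memcpy(buf, r->slots[tail], len);
    r->tail = (tail + 1) % RING_SLOTS;   /* release the slot */
    return 0;
}
```

The key property is that each side writes only its own index, which is what makes the single-producer/single-consumer case simple; sharing a ring between more parties, or adding interrupt-driven notification, is where the real hypervisor plumbing comes in.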
Once this is done it is time to start building new content and to experiment with the new, multi-core environment based on the parameters laid down in the original study.
Now, let's take a step back and assume that we are not targeting a processor with hardware assist for virtualization. In this case, we can still do the exact same things explained above, with the difference that we cannot use unmodified operating systems inside the virtual boards; instead we need to paravirtualize the operating systems we intend to use. Any operating system can be paravirtualized (though the source needs to be available to make the changes), and the Wind River operating systems (VxWorks and Wind River Linux) are already paravirtualized to run on top of the Wind River Hypervisor.
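Conceptually, paravirtualizing a kernel means finding the places where it executes privileged instructions and replacing them with explicit calls into the hypervisor (hypercalls). The toy C sketch below shows the shape of that pattern; the hypercall numbers and function names are invented, and the hypercall itself just records the request instead of trapping into a hypervisor, since a real port would use a dedicated trap instruction in the kernel's low-level interrupt, MMU and timer code:

```c
#include <stdint.h>

/* Hypothetical hypercall numbers -- illustrative only. */
enum hcall {
    HCALL_IRQ_DISABLE = 1,
    HCALL_IRQ_ENABLE  = 2,
};

/* In a real paravirtualized kernel this would trap into the hypervisor
 * (for example via a dedicated instruction). Here it only records the
 * last request so the pattern is visible and testable. */
static uint32_t last_hcall;

static void hypercall(uint32_t nr)
{
    last_hcall = nr;
}

/* A native kernel would execute the privileged instruction directly;
 * the paravirtualized kernel routes the same operation through the
 * hypervisor instead, which keeps full control of the real hardware. */
void pv_irq_disable(void)
{
    hypercall(HCALL_IRQ_DISABLE);
}

void pv_irq_enable(void)
{
    hypercall(HCALL_IRQ_ENABLE);
}

/* Accessor used here only to observe the sketch's behavior. */
uint32_t pv_last_hypercall(void)
{
    return last_hcall;
}
```

The porting effort is proportional to how many such privileged touch points the kernel has, which is why having the operating system vendor deliver an already-paravirtualized kernel saves so much work.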
The rest of the workflow remains the same, although we have to be a bit more careful with regard to device drivers. The operating system, the device drivers and the hypervisor have to collaborate to deliver the functionality of the system. There is a bit more work and coordination involved, but getting a basic software architecture up and running is still fairly straightforward.
All in all, there are well-described paths to migrate a single-core system to a multi-core one, or to combine multiple single-core systems onto one multi-core processor. The true challenge comes when resources (such as devices, cores and memory) need to be shared, but I will leave that discussion to a later blog post.
Following these paths provides a quick way to evaluate how existing designs would run on top of an embedded virtualization layer in a proof of concept. Going through the process of making a proof of concept not only validates the findings from the architecture study, but it is also a learning experience for your software development teams in preparing them for the new world of multi-core.