An interesting article by Dan Woods on Multi-core slowdown. The article tries to temper people's expectations with
regards to multicore. The basic argument goes: A multicore processor has more
raw processing power, but it requires the software load that runs on top of
that processor to be able to use those cores; if not, the software could run at
the same speed as on a single core, or even slower.
One of the ways to use all the cores of course is multi-threaded
programming in combination with an SMP operating system that can schedule over
all the cores (SMP being Symmetric Multi Processing). Typically multi-threaded
programs use multiple threads of execution and use synchronization primitives
to make sure execution happens in the right order. The last thing you want is
two threads 'simultaneously' modifying the same data structure. The thing is, on
a single-core processor there is no true parallelism; the operating system is
only pretending to do multiple things at once. When you introduce multicore
processors you get true parallelism, and this may lead to problems.
One problem occurs when the application uses a lot of
synchronization and the result is that the cores are continuously waiting on
each other. You are now executing a largely serial application on a multicore
part, and Amdahl's law dictates that you will see very little speed-up from the
use of multiple cores.
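As a minimal sketch of this point (my own illustration in Python, not from the article), Amdahl's law bounds the speedup by the serial fraction of the program, no matter how many cores you add:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup per Amdahl's law:
    1 / ((1 - p) + p / n), where p is the parallelizable fraction
    and n is the number of cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A heavily synchronized application may be, say, only 50% parallel:
# even 8 cores then give less than a 1.8x speedup.
print(round(amdahl_speedup(0.5, 8), 2))   # 1.78
print(round(amdahl_speedup(0.95, 8), 2))  # 5.93
```

The numbers make the waiting-on-each-other problem concrete: pushing the serial (synchronized) fraction down matters far more than adding cores.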
Another possible problem is that your application does not use
sufficient synchronization and the result is that the application will do
things truly in parallel that were never meant to be executed in parallel. This
often leads to data corruption and hard-to-find faults. Not a desirable situation.
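To illustrate the kind of synchronization that prevents this (a hedged sketch in Python; the names are mine, not the article's): two threads doing a read-modify-write on shared state must serialize that update, otherwise under true parallelism both can read the same old value and one update is silently lost:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    """Read-modify-write on shared state; the lock makes each
    update atomic. Without it, two cores can interleave the
    read and the write and lose updates -- the data corruption
    described above."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 -- deterministic only because of the lock
```

The tension the article points at is exactly this trade-off: too little locking corrupts data, too much locking serializes the program and brings Amdahl's law back into play.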
The statements in the article have more of an IT focus, however,
they hold for embedded as well. Multicore could lead to a slow-down if the
application is unable to use all this new processing power. However, in most
cases there is a way to use the multiple cores and to make a difference, either
in performance, or in the set of capabilities of a device.
One of the important points to consider when looking at
multicore in embedded is the multicore software configuration. The simplest of these configurations uses
SMP which allows a single application to use multiple cores, if
that application is written for it, as mentioned before. This seems to be
the main focus of the original article.
However, other configurations allow customers to run multiple
different operating systems on top of these multicore processors. These
operating systems could be of the same type (say, VxWorks), or of different types (say,
VxWorks and Linux). Each operating system runs in isolation and will use some
of the raw processing power of that processor. Every operating system can of
course run one or more applications.
Now, you may ask, why do I need to run multiple operating
systems, can I not run multiple applications in a single SMP operating system?
Yes you can, however, there are many situations where a single SMP OS is not
the best solution because of safety, security, separation, robustness,
performance or scalability reasons. Running multiple operating systems gives
you, the user, better control over how to use the power of your multicore processor.
One straightforward example of the use of multiple operating
systems is that it allows a customer to use some of the cores in a multicore
processor to add more functionality to an existing product using a different
operating system. For example, graphics capabilities using Linux added to the
real-time control for an industrial application, or networking to a consumer
device. This capability can provide benefits in many different industries, for
example consumer applications, network switches, industrial applications,
aerospace and defense; the examples are many.
Running multiple operating systems provides more flexibility, so
that you can offer more features or more performance to your customers and
through that deliver a more competitive, higher-value product.
Breadth and flexibility here are important. There is no silver
bullet. SMP, uAMP, sAMP, virtualization: all are configurations in the toolbox
of the embedded developer and can be used in many different combinations.
There are a couple of challenges that become immediately apparent:
- The first challenge is to use the right configuration at the
right time (a screwdriver does not work well if you need a hammer)
- The second challenge is to make sure that everything is
integrated; the operating systems have to be optimized for the virtualization
layer as well as the silicon
- The last, but certainly not the least, is that with these
additional configurations we still need tools for development, analysis, debugging
and testing; that need has not gone away
To summarize: a slowdown when using multicore is certainly
possible, but there are ways to avoid it:
- Making your application aware of the multiple cores
- Using the multiple cores to do new things, possibly with new operating system configurations