Swimming in a Sea of Simulators

It is fun to be back on the Wind River blog, even if it is “just” as a guest blogger. I have written a lot about Wind River Simics and simulation here over the years, and I am happy to keep doing that, even though I have moved from Wind River to the System Simulation team in the Intel Software and Services Group. Inside Intel, I have found a ton of cool simulation technology (and plenty of fantastic products and smart people). Simulation is a key part of both software and hardware design, and Intel is making very good use of it.

I have been working with Simics* for almost 15 years now, and even before I joined the team I wrote some simulators for my PhD. Simics is one particular type of simulator (a fast functional, instruction-set-based virtual platform), but there are many other abstraction levels, levels of detail, and approaches to modeling and simulation out there. In this post, I will go through some of the simulation technologies that I have come across so far, both for hardware development and for software development.

It is well known that hardware architects use microarchitecture-level simulators to evaluate and explore design variants. They also use higher-level models that only deal with latencies and bandwidths across interconnects to sketch out system architectures. There is no single tool or framework that can cover all use cases, but rather a set of tools used for different purposes. Behind each product that Intel ships, a lot of modeling and simulation was done to design and validate it, and to enable software. This is how hardware design is done, and has been for a long time, going back at least to the 1960s!
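
To make the latency/bandwidth style of modeling concrete, here is a minimal sketch (in Python, with entirely made-up link names and numbers, not taken from any Intel tool or product) of how an architect might compare two candidate interconnect configurations for a single DMA transfer:

```python
# Back-of-the-envelope latency/bandwidth model of an interconnect link,
# of the kind used to sketch system architectures long before RTL exists.
# All names and numbers below are invented placeholders.

def transfer_time_us(payload_bytes, latency_us, bandwidth_gbps):
    """Time to move one payload over a link: fixed latency plus serialization time."""
    serialization_us = payload_bytes * 8 / (bandwidth_gbps * 1e3)  # Gbit/s -> bits per microsecond
    return latency_us + serialization_us

payload = 64 * 1024  # a 64 KiB DMA transfer
for name, latency_us, bw_gbps in [("narrow-fast", 0.5, 32), ("wide-slow", 2.0, 128)]:
    t = transfer_time_us(payload, latency_us, bw_gbps)
    print(f"{name:>11}: {t:6.2f} us, effective {payload / t / 1e3:.1f} GB/s")
```

Even a model this crude shows when serialization time dominates the fixed link latency, which is exactly the kind of question these high-level models are meant to answer quickly.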

To validate system-level changes that have big effects on software, what you need is a fast functional model that runs real code and models the new functionality, so that you can evaluate how software will make use of new hardware features. This role can be very nicely filled by Simics, as discussed in an earlier blog post. Similarly, simulators can be used to evaluate and prototype new instruction set extensions, to make sure that they work and are useful in real software.
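
As an illustration of the idea (and nothing more: this is a toy sketch in Python, not Simics and not any real instruction set), here is how a hypothetical new saturating-add instruction could be prototyped in a tiny functional instruction-set simulator and exercised by a small “program”:

```python
# Toy functional instruction-set simulator, showing how a hypothetical new
# instruction can be prototyped and run by software before hardware exists.
# The machine, the opcodes, and the "addsat" instruction are all invented.

class ToyCPU:
    def __init__(self):
        self.regs = [0] * 8          # eight general-purpose registers
        self.pc = 0                  # program counter (index into the program list)

    def step(self, program):
        op, *args = program[self.pc]
        self.pc += 1
        if op == "li":               # load immediate: li rd, imm
            rd, imm = args
            self.regs[rd] = imm
        elif op == "add":            # add rd, rs1, rs2
            rd, rs1, rs2 = args
            self.regs[rd] = self.regs[rs1] + self.regs[rs2]
        elif op == "addsat":         # proposed new instruction: saturating add
            rd, rs1, rs2, limit = args
            self.regs[rd] = min(self.regs[rs1] + self.regs[rs2], limit)
        else:
            raise ValueError(f"unknown opcode {op}")

# A tiny "software workload" that uses the proposed instruction.
program = [
    ("li", 1, 200),
    ("li", 2, 100),
    ("add", 3, 1, 2),                # plain add: 300
    ("addsat", 4, 1, 2, 255),        # saturating add: clamps to 255
]

cpu = ToyCPU()
while cpu.pc < len(program):
    cpu.step(program)
print(cpu.regs[3], cpu.regs[4])      # prints: 300 255
```

The point is simply that new functionality can be tried out against real code long before any gates are designed; production tools like Simics do this at full platform scale and speed.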

When signal processing algorithms go into a system, simulation is used both to validate that algorithms do what they should, and to decide which pieces to put into hardware and which into software.

Software developers also need models of next-generation platforms to develop firmware (such as UEFI system firmware) and drivers, and to bring up operating systems (which I blogged about a few years ago). This shortens the time to market and increases the quality of hardware and system solutions. In the end, such simulation models are used to enable the whole ecosystem around Intel hardware and software, to speed up system development and customer deployment of new solutions. The better this works, the faster solutions get into the hands of consumers and IT professionals.

Moving back towards silicon design: in order to validate hardware designs, you have to run simulations of the actual RTL, on both software simulators and dedicated hardware devices. That is a huge field in and of itself, and one where I am definitely not an expert. Suffice it to say that without RTL simulation, silicon would most likely never work the first time.

Power and power management are very important aspects of modern computer design, and I am fairly close to the Docea team that Intel acquired last year. The Docea tool suite is all about developing power models and running simulation scenarios to validate and optimize power designs. Very cool (or should I say hot?) stuff, where you can even get a virtual floorplan of a chip to light up, showing how it gets warm as it runs.

Another tool in my organizational vicinity is the Intel® CoFluent™ Studio modeling toolset, which lets you use high-level models to estimate system performance and to dimension and architect systems. Intel CoFluent Studio models use data flows to model the behavior of algorithms, software, hardware, and communications channels. It is a very general form of modeling that can be used to design the microarchitecture of a hardware accelerator, to determine the optimal topography for your thousand-node IoT sensor deployment, or to abstractly explore the software architecture of a data processing node.
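
As a caricature of the data-flow style of performance estimation (a generic sketch with invented stage names and numbers, not the Intel CoFluent Studio API), consider a three-stage processing pipeline where sustained throughput is limited by the slowest stage:

```python
# A deliberately simplified data-flow performance estimate: a stream of
# frames passes through three stages, and we derive pipeline throughput
# and end-to-end delay from per-stage service times. Everything here is
# made up for illustration.

stages = {            # average time to process one frame, in milliseconds
    "capture":   2.0,
    "transform": 5.5,
    "transmit":  3.0,
}

bottleneck, service_ms = max(stages.items(), key=lambda kv: kv[1])
throughput_fps = 1000.0 / service_ms     # steady-state frames per second
latency_ms = sum(stages.values())        # time for one frame to traverse all stages

print(f"bottleneck stage : {bottleneck} ({service_ms} ms/frame)")
print(f"pipeline rate    : {throughput_fps:.0f} frames/s")
print(f"end-to-end delay : {latency_ms:.1f} ms per frame")
```

Real tools obviously account for queueing, contention, and how work is mapped onto resources, but the basic questions are the same: where is the bottleneck, and what rate can the system sustain?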

It really is like swimming around in a big sea of simulators and learning things all the time!

What gets really interesting is when you start to mix and integrate different types of simulators. When you combine simulators that model different phenomena and operate at different levels of abstraction, you can build some truly awesome combined systems. Some of these integrations have been made public, such as combining a Simics model of an Intel server platform with detailed models of hardware accelerators in order to test hardware designs with realistic inputs from a complete, real driver stack. There are several cases where we have used Simics and its platform models as a way to combine disparate simulations of subsystems or particular phenomena into a single integrated model.
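
A very stripped-down sketch of what such an integration layer does (all class and function names here are invented for illustration; real integrations, for example with Simics, are far richer) is to pull transactions out of one simulator and feed them into another, letting each keep its own notion of time and detail:

```python
# Minimal sketch of coupling two simulators: a fast functional "platform"
# that produces driver-level requests, and a more detailed "accelerator"
# model that consumes them and tracks its own cycle count.

class FunctionalPlatform:
    """Stands in for a fast platform model running a driver stack."""
    def __init__(self, requests):
        self.requests = list(requests)

    def next_request(self):
        return self.requests.pop(0) if self.requests else None

class DetailedAccelerator:
    """Stands in for a detailed accelerator model with its own notion of time."""
    def __init__(self):
        self.cycles = 0

    def process(self, request):
        cost = 10 + len(request)      # pretend cost depends on the request payload
        self.cycles += cost
        return f"done({request}) after {cost} cycles"

# The "integration layer": pull requests from one simulator, push into the other.
platform = FunctionalPlatform(["init", "dma:4096", "compute", "readback"])
accel = DetailedAccelerator()

while (req := platform.next_request()) is not None:
    print(accel.process(req))
print(f"accelerator consumed {accel.cycles} model cycles in total")
```

The value of the combination is that the detailed model gets realistic stimulus generated by real software, instead of hand-written test vectors.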

I hope I can tell more simulation stories going forward; there is a lot left to tell!

*Other names and brands may be claimed as the property of others.
