By Jakob Engblom
Simics is a great product for simulating computer systems that contain processors and execute code. However, as we all know, just a computer is rarely the entirety of a real system. Most systems also involve a physical system that the computer controls, such as a plane, satellite, rocket, IoT sensor node, or control box, as well as an outside world that the system interacts with. A true system simulator needs to simulate all these parts – the control computer, the system it resides in, and the world outside of it. The Trinity of simulation. To achieve this, we need to add more simulators. Simics on its own is great, but Simics talking to one or more other simulators is better.
The picture above illustrates the decomposition into computer board, system, and environment. The Simics-powered simulation of the computer running the control software is interfaced to and integrated with the software simulators for the system and its environment. Sometimes the system and the environment are contained in the same model (for example, the plant model in PIL testing), but more often than not they are separate simulators provided by separate teams or even companies. It is also common for each logical part to consist of multiple separate simulators for separate subsystems.
The key value that Simics brings to the simulation setup is that it runs unmodified binaries from the real target – many existing simulation setups rely on various forms of shim layers or API simulations to make the software run. With Simics, the software is compiled, linked, integrated, and run just like it is on the real system. As illustrated above, input and output values pass from the simulated system over to simulated input and output devices in Simics. The values then reach the target software via device drivers, just like they will in the real system once it is built. In this way, the entire integrated software stack can be tested in the virtually real environment, making it possible to perform automatic testing and continuous integration even for systems that are deeply embedded and connected to their environment.
To see that the environment matters, just remember the Ariane 5 failure in 1996 – the software was already tested and working on the older Ariane 4 rocket, but the Ariane 5 exhibited a different launch trajectory that caused the software to crash and then bring the rocket down with it. The lesson is that no matter how good a piece of software is, it has to be tested in the context that it will be used in. To do such tests cheaply and quickly, simulation is the best tool.
In terms of simulation software architecture, the other simulators can be separate from Simics, or sometimes run inside of Simics. Or, we can have Simics run inside of another simulator. The simulators might be spread across multiple machines, or run on the same machine. Some solutions are peer-to-peer, even though master-slave setups are more common as they tend to be easier to retrofit to existing simulators. Simics has the hooks and features needed to help build all types of integration solutions, and we have seen it integrated with many different simulators over the years.
I would say that the most common solution is to have Simics and the other simulators run side-by-side, communicating over network sockets or shared memory, and with one simulator being the master that asks the other simulators to run for a specified amount of time. In practice, existing simulators tend to be written to run as stand-alone programs, sometimes even requiring their own particular dedicated hosts to run. Modifying such code to run inside of Simics is more pain than it is worth, and therefore the most common solution is to simply build a simulator integration module that connects Simics to the other simulators and run them all side by side. For a few live videos showing this concept in action, I refer to a previous blog post of mine that also describes the NASA Go-Sim setup.
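The time-slicing scheme described above can be sketched in a few lines of Python. This is a hypothetical in-process model, not Simics's actual API: real integrations would carry the "advance" requests over sockets or shared memory, with Simics or a dedicated integration module acting as the master.

```python
# A minimal sketch of master-driven, fixed-quantum co-simulation, using
# hypothetical in-process stubs (the class and function names here are
# invented for illustration). The master asks each simulator to run for
# one time quantum, then exchanges the resulting I/O values.

class SimulatorStub:
    """Stands in for one simulator (e.g. Simics or a plant model)."""

    def __init__(self, name):
        self.name = name
        self.time = 0          # virtual time, in integer microseconds
        self.inputs = {}       # latest values received from the other simulators

    def advance(self, quantum):
        # Run this simulator forward by one quantum and publish its outputs.
        self.time += quantum
        return {self.name + ".time": self.time}

    def set_inputs(self, values):
        self.inputs.update(values)


def run_master(simulators, quantum, end_time):
    """Advance all simulators in lock step, exchanging I/O after each quantum."""
    t = 0
    while t < end_time:
        outputs = {}
        for sim in simulators:
            outputs.update(sim.advance(quantum))
        for sim in simulators:
            sim.set_inputs(outputs)
        t += quantum
    return t
```

The choice of quantum is the usual trade-off: a smaller quantum means tighter coupling between the simulators but more synchronization overhead.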
To make things easy for the end users of the simulation setup, it is common to build some form of simulation front end that takes care of launching the simulators. As shown in the picture above, the front end would start the integrated simulators and set up their connections to each other. Once the simulators are up and running, the interface to Simics can be exposed to the user, or kept hidden. The front end can be as simple as a batch file that starts all the simulators for a single configuration, or it can be a full-blown custom graphical application that manages simulation runs, collects results, and lets the user configure the system.
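At the simple end of that spectrum, a launcher front end can be little more than a script that starts each simulator process and waits for them. The sketch below uses placeholder commands (Python one-liners) in place of the real simulator binaries; a real front end would launch Simics and the system/environment simulators with their actual configurations, and would also hand each process the connection details for its peers.

```python
import subprocess
import sys

# A minimal sketch of a launcher-style front end, assuming each simulator
# is a stand-alone program with its own command line. The commands below
# are placeholders; a real setup would run e.g. the Simics binary with a
# target script, plus the plant-model executable.

def launch_simulators(commands):
    """Start every simulator process, then wait for all of them to exit."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return [p.wait() for p in procs]

if __name__ == "__main__":
    commands = [
        [sys.executable, "-c", "print('simics stub up')"],
        [sys.executable, "-c", "print('plant model stub up')"],
    ]
    launch_simulators(commands)
```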
When integrating multiple simulators, it makes a lot of sense to use Simics recording checkpoints to capture the execution of the software on Simics, and then debug using only Simics, without the other simulators running. This provides the full debug power of Simics, without complicating the simulator integration and other simulators with support for reverse execution, checkpointing, or the ability to stop time precisely. As illustrated below, the key is that when replaying the execution in Simics, we just replay the inputs from the outside world.
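The replay principle can be sketched as follows. This is not Simics's actual recording-checkpoint implementation (which captures asynchronous inputs at the device level); it is an invented illustration of the same idea: timestamped inputs logged during one co-simulated run can be re-delivered later without the other simulators present.

```python
# Hypothetical sketch: record timestamped input events during a run with
# the live environment, then replay them with no environment attached.

def record(run_with_environment):
    """Run once with the live environment, logging every input it delivers."""
    log = []
    def deliver(time, port, value):
        log.append((time, port, value))
    run_with_environment(deliver)
    return log

def replay(log, deliver):
    """Re-deliver the recorded inputs in order; no environment needed."""
    for time, port, value in sorted(log):
        deliver(time, port, value)

# Usage: a stand-in environment feeds two sensor samples during recording.
def live_run(deliver):
    deliver(10, "adc0", 3.3)
    deliver(20, "adc0", 3.1)

log = record(live_run)
seen = []
replay(log, lambda t, p, v: seen.append((t, p, v)))
```

Since the control software only ever sees its inputs, a faithful replay of those inputs reproduces the same execution, which is what makes full-power debugging in Simics alone possible.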
For more information, there is a chapter dedicated to the topic of integrating simulators in the book Software and System Development Using Virtual Platforms that I published together with Daniel Aarno last year.