The big news in Simics 4.4 is the new Simics Analyzer product. Analyzer contains a few different technologies, but the most immediately visible is the new execution timeline view. The timeline view shows the execution history of the software on the target system, making it easy to see what has run where and when. In this blog post, I will try to peek a little behind the scenes to show what makes Analyzer and the timeline view work.
We start with a typical example of using the timeline view:
The screenshot shows how a multithreaded program executes on an eight-core machine (a single Linux SMP running across all eight cores). We can see that we do not quite manage to load all cores, mostly leaving core 0 idle. We also see the OS scheduler migrating some threads between cores, even though it would seem reasonable to run each thread on a single core for its entire duration. It would have been hard to notice this just by running the program live, or just by looking at the processor loads during the run.
How does it work?
Fundamentally, a virtual platform system like Simics can see anything the target system does (with unlimited bandwidth and no probe effect). The raw data offered is pretty primitive, since all Simics sees is streams of instructions, memory accesses, hardware device activity, interrupts, CPU mode changes, and other hardware-level events. Obviously, such data is very hard to make sense of in a modern multicore, gigahertz class machine. To get to something useful, we need to recreate the software abstractions used in the system, such as operating system kernels, user processes, and threads. This is known as OS awareness, and Simics 4.4 contains a new and much improved OS awareness system.
Simics OS awareness can now track multiple operating systems and layered software setups including hypervisors and guests, and it can detect and act on events such as thread creation, destruction, scheduling, and interruption. OS awareness is available to any Simics module, including user scripts and custom features. Scripts can use OS awareness to perform actions based on processes launching (and terminating), thread switching, and other events. It can also aid in debugging, instantly determining which thread is running when a breakpoint hits. When you combine OS awareness with symbolic debugging, you have a solution that can tie hardware actions in a system to a particular line of code in a particular program, thread, or kernel.
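The details of the scripting interface are beyond the scope of this post, but the general callback pattern can be illustrated in plain Python. Note that the `OSAwareness` class and event names below are hypothetical stand-ins for illustration, not the actual Simics API:

```python
# Hypothetical sketch of event-driven OS awareness scripting.
# The class and event names are invented, not the real Simics API.
class OSAwareness:
    def __init__(self):
        self._callbacks = {}  # event name -> list of handler functions

    def on(self, event, handler):
        """Register a handler to run when the named event is detected."""
        self._callbacks.setdefault(event, []).append(handler)

    def emit(self, event, **info):
        """Called by the tracker core when it detects OS activity."""
        for handler in self._callbacks.get(event, []):
            handler(**info)

tracker = OSAwareness()
log = []

# A user script reacting to processes launching on the target.
tracker.on("process_created", lambda name, pid: log.append((name, pid)))

# In a real run, the tracker would emit these as it observes the
# OS scheduler; here we trigger them by hand.
tracker.emit("process_created", name="init", pid=1)
tracker.emit("process_created", name="my_app", pid=42)

print(log)  # [('init', 1), ('my_app', 42)]
```

The same registration pattern would apply to thread-switch or process-termination events; the value of the simulator is that these notifications arrive without any instrumentation in the target software.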
To implement OS awareness, we add modules to Simics that read target registers and memory, finding the starting points of OS data structures and traversing lists of processes and other state. The awareness modules also hook into events such as user-mode-to-kernel-mode transitions to find invocations of the OS scheduler. This is nothing unique to Simics; tools such as Wind River On-Chip Debug offer OS awareness implemented in similar ways and achieving the same effect. However, OS awareness in Simics does not disturb the target system in any way or require any special instrumentation in the target.
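To give a feel for what "traversing lists of processes" means, here is a minimal self-contained sketch in Python: a toy memory image holds a circular linked list of task structures, and we walk it by reading raw bytes, much as an awareness module reads target memory. The structure layout and offsets are invented for illustration and do not correspond to any real kernel:

```python
import struct

# Sketch: walk a circular linked list of task structures in a raw memory
# image, the way an OS-awareness module traverses kernel data structures.
# Offsets and layout are invented, not real kernel offsets.
NEXT_OFF = 0   # offset of the next-task pointer within a task struct
NAME_OFF = 8   # offset of the 16-byte, NUL-terminated process name

def read_u64(mem, addr):
    """Read a little-endian 64-bit value from the memory image."""
    return struct.unpack_from("<Q", mem, addr)[0]

def read_name(mem, addr):
    """Read a NUL-terminated name of at most 16 bytes."""
    raw = bytes(mem[addr:addr + 16])
    return raw.split(b"\0", 1)[0].decode()

def list_processes(mem, head):
    """Collect process names by following next pointers until we loop."""
    names, addr = [], head
    while True:
        names.append(read_name(mem, addr + NAME_OFF))
        addr = read_u64(mem, addr + NEXT_OFF)
        if addr == head:       # circular list: stop when back at the head
            break
    return names

# Build a toy memory image with three tasks at 0x00, 0x40, and 0x80.
mem = bytearray(0x100)
for addr, nxt, name in [(0x00, 0x40, b"swapper"),
                        (0x40, 0x80, b"init"),
                        (0x80, 0x00, b"my_app")]:
    struct.pack_into("<Q", mem, addr + NEXT_OFF, nxt)
    mem[addr + NAME_OFF:addr + NAME_OFF + len(name)] = name

print(list_processes(mem, 0x00))   # ['swapper', 'init', 'my_app']
```

The real work, of course, lies in knowing where the list head lives and what the field offsets are for a given kernel build, which is what the next section is about.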
OS awareness needs to know the layouts of various OS data structures, which usually means initially providing a symbol file for the OS kernel to Simics. For Linux, Simics also implements a heuristic approach that scans memory and guesses valid offsets, allowing OS awareness to be used on a binary-only kernel. Once the set of parameters for an OS has been determined, it can be saved to a file for later use. This makes it easy for foundational software teams to provide OS awareness information to their users, along with binary software images.
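The heuristic idea can be sketched in a few lines: scan candidate field offsets within sampled task structures and guess which offset holds the process name, by checking for a short, printable, NUL-terminated string at that offset in every sample. This is purely illustrative; the actual Simics heuristics are more elaborate and cover many more fields than the name:

```python
import string

# Illustrative sketch of offset guessing, not the real Simics heuristic.
PRINTABLE = set(string.ascii_letters + string.digits + "_-/.")

def looks_like_name(raw):
    """True if the bytes start with a short printable NUL-terminated string."""
    s = raw.split(b"\0", 1)[0]
    return 2 <= len(s) <= 15 and all(chr(b) in PRINTABLE for b in s)

def guess_name_offset(task_images, max_offset=64):
    """Find an offset that looks like a process name in *every* sample."""
    for off in range(0, max_offset, 8):
        if all(looks_like_name(img[off:off + 16]) for img in task_images):
            return off
    return None

# Two fake task-struct images with the name field placed at offset 24.
def make_task(name):
    img = bytearray(64)
    img[24:24 + len(name)] = name
    return img

tasks = [make_task(b"kthreadd"), make_task(b"my_app")]
print(guess_name_offset(tasks))   # 24
```

Requiring the guess to hold across many sampled tasks is what makes this kind of heuristic robust: a single structure might contain an accidental printable string, but the same offset matching in every process is strong evidence.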
Some more examples
Analyzer really provides two different types of views: an instantaneous view of the current state of a system, and the history-based timeline. These serve different purposes. The instantaneous view is good for navigating a target system, understanding the various hardware and software units. The timeline view provides a high-level overview of the system execution.
The first example is a look into the boot of a Wind River Linux system running underneath a hypervisor on a dual-core target. Here we can see the various processes that run as part of the system startup. Note that they do not consume much in the way of CPU resources; most of the time is spent executing in the kernel. We also see the hypervisor keeping the kernel on a single core, leaving the other core for other work.
Next, we have an example of the instantaneous view of a system provided by the system info view. This is the same execution shown in the timeline at the beginning of this post. Here, we see that some threads are active and others are not. Active threads show the name of the processor core on which they are running. Note that there are two machines shown in this screenshot, as we are looking at a network of machines, not just a single machine. This ability to rise above a single machine is an important aspect of both Simics and Simics Analyzer.
A final example goes back to looking at Linux boots, this time comparing different machines in the same virtual network. All the machines run the same kernel image and the same file system, but have different numbers of processor cores. Note how the four-core machine boots more slowly than the two-core machine (by about 0.2 seconds), since the kernel takes more time to get started with more cores in the system. The display does not show the time spent in the kernel prior to spawning init, as there are no user-level threads before that point. It is also clear that kernel init is a sequential process even on a multicore machine.
Work in Progress
This is really just the beginning of visualizing the data available in a Simics run.
The current timeline view has been developed over the past few quarters, and we are still improving it almost daily. The GUI details are still changing as we use the timeline and gather feedback from early users. For example, an important design decision, made after some use of early versions, was to use colors only for "favorited" items. Applying colors to all processes and threads in the system resulted in a virtual candy pile of colors that looked pretty but was void of any usefulness, since colors had to be reused when target systems contained hundreds of threads.