The Power of Simulation – Modeling and Analyzing Intel Edge Analytics with Intel® CoFluent™ Studio, an Interview with Sangeeta Ghangam

By guest author Jakob Engblom, Product Management Engineer, Intel

Getting your system and software architecture right is very important to the success of a product, and particularly so when the systems you are building have a long expected lifetime. Internet-of-Things (IoT) edge analytics is such a system: once you deploy your smart analytics gateway, you have to live with the constraints of the hardware and software for a decade or so. Getting it right is pretty important, and the earlier you can evaluate and analyze the performance of your design, the easier it is to change it. If you wait until all the code is done and the hardware design is set, there is precious little flexibility left in the system. There is a better way: architect the software before the code exists, using simulation. In the past, such architecture and analysis work was typically done with numbers on a napkin or in an Excel sheet, but with the increasing complexity of modern systems, you need to do this in a smarter way. Intel® CoFluent™ Studio offers a way to work before code, but more concretely than numbers in a spreadsheet. By building a model and running simulations on it with varying input parameters, a much larger design space can be explored.

In this interview, Sangeeta Ghangam tells us how she and her team used Intel CoFluent Studio to model and simulate an Intel edge analytics system in order to analyze and improve the system architecture.

Jakob Engblom (JE): Please introduce yourself!


Sangeeta Ghangam (SG): I am Sangeeta Ghangam, working in the Internet of Things Group (IOTG) at Intel as a Product Solution Lead, in the past for IoT Analytics and now for the next-generation Edge Application Platform (EAP). I have been at Intel for over five years; before joining IOTG in 2014, I worked on storage device drivers in Intel's Platform Engineering Group.

JE: What do you do at Intel?

SG: I am part of the Product Development Team, focusing on EAP development and driving synergies between the Moon Island platform and the Edge Compute software. My background is in software engineering, and I am currently the Product Solution Lead for EAP, which is a software- and systems-focused role.

JE: Moon Island, I recognize that name. That's Intel hardware and Wind River* software, in particular the Wind River Intelligent Devices Platform (IDP). When I was at Wind River, I actually helped get IDP running on a Wind River Simics* model of one type of Moon Island hardware. Funny to see how things fit together. It is a small world sometimes.

JE: To get more concrete, I think we have to start by introducing the IoT system that you worked on when you built the model using Intel CoFluent Studio.

SG: The system was a gateway running real-time edge analytics and decision-making code. The gateway would use a handful of sensor nodes to gather information and issue control commands to a fairly large industrial machine. My team was working with the software running on the gateway, which was the primary driver of customer value in this project. In the end, we had to provide the customer with a recommendation for which hardware to buy and deploy in order to run the Intel edge analytics software.

The system looked something like this (the number of sensor nodes and gateways would vary):

[Diagram: sensor nodes connected to a gateway, which issues control commands to an industrial machine]

JE: What were the issues you encountered in designing and architecting this system?

SG: We needed to understand how to size the system, given a certain set of edge analytics modules. There are many variables here: the number of sensors to attach to a single gateway, the nature of the connection between the sensors and the gateway, the compute power in the gateway, and the actual set of software modules running on it.

We needed to find a solution that would let us run the workloads we needed today – but also allow room for some growth. Once the gateway and the sensors were deployed, a hardware upgrade would be five to ten years out, but the software would be upgraded many times over the lifetime of the system. Thus, we needed to make sure we had some headroom, but without wasting power and cost.
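That sizing question can be framed as a simple capacity check: does the per-sensor compute cost of the analytics modules, multiplied by the sensor count, fit within a candidate gateway's capacity while still reserving headroom for software growth? The sketch below illustrates the reasoning only; the module costs, gateway names, and capacities are entirely hypothetical, and the real Intel CoFluent Studio model was far more detailed.

```python
# Hypothetical per-sensor CPU cost (MIPS consumed on the gateway) for
# each edge-analytics module, and two made-up candidate gateways.
MODULE_COST_MIPS = {"ingest": 20, "filter": 35, "analytics": 120}
GATEWAY_CAPACITY_MIPS = {"gateway_small": 2000, "gateway_large": 6000}

def utilization(num_sensors: int, gateway: str) -> float:
    """Fraction of gateway compute consumed by num_sensors sensors."""
    per_sensor = sum(MODULE_COST_MIPS.values())
    return num_sensors * per_sensor / GATEWAY_CAPACITY_MIPS[gateway]

def fits_with_headroom(num_sensors: int, gateway: str,
                       headroom: float = 0.3) -> bool:
    """True if the workload fits while reserving `headroom` (e.g. 30%)
    of capacity for software upgrades over the device's lifetime."""
    return utilization(num_sensors, gateway) <= 1.0 - headroom

for gw in GATEWAY_CAPACITY_MIPS:
    max_sensors = max(n for n in range(1, 100) if fits_with_headroom(n, gw))
    print(gw, "supports", max_sensors, "sensors with 30% headroom")
```

With these made-up numbers, the small gateway tops out at 8 sensors and the large one at 24; in the real project, such sweeps were run inside the simulation model rather than by hand.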

JE: That is indeed quite a few variables to deal with… and nothing you can just do off the cuff I guess.

SG: No, off the cuff does not cut it. We also needed to do this before the software was actually coded for the target, and in a systematic way. For this reason, we decided to use Intel CoFluent Studio to build a model of the system and its software.

Intel CoFluent Studio works at a higher level of abstraction than code, so we could start experimenting before the hardware and software were settled. By adding parameters to the model, we could simulate the effect of different types of hardware on overall performance. Since there was no real code involved, it was also much easier to change the architecture: we did not have to rewrite code to communicate in a different way or compute using a different algorithm.

JE: So what would such a model look like?

SG: Here is a small section of the model that shows the sensor connection and the data processing pipeline:

[Screenshot: Intel CoFluent Studio model showing the sensor connection and the data processing pipeline]

It is a graphical modeling system in which we have included the main internal and external components of an edge system: as external components, the sensor, the gateway platform, and a programmable logic controller (PLC), and then the various processing modules internal to each of these elements.

JE: How do you model software in this kind of setup – do you actually compute results, or just keep it to abstract tasks that consume time and generate tokens to put into queues?

SG: The software shown above was modeled using very exact system measurements from a representative edge analytics workload. Colleagues of ours in the Intel Software and Services Group (SSG) characterized the system by measuring the cycles per instruction (CPI), instruction count, and CPU and memory usage; these numbers were used by the Intel CoFluent technology modeling team to abstract the transactions. Since we have baseline operational statistics, we can now easily extrapolate the effect of adding or removing internal operational modules to expand on this model.

JE: So, to be clear, software is modeled as consuming a certain amount of resources on the processor, along with enough details to predict how long it will take to run a particular computational task?

SG: Yes. I worked with the Client Systems Optimization team in SSG to characterize the workload using internally developed tools, as well as Intel VTune. We collected metrics for CPI, instruction count, and detailed CPU/memory usage to model the workload accurately.
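The back-of-the-envelope relationship behind that kind of characterization is standard: a task's execution time is roughly its instruction count times its CPI, divided by the clock frequency. A minimal sketch of the arithmetic, with illustrative numbers that are not from the actual project:

```python
def task_time_seconds(instructions: float, cpi: float, clock_hz: float) -> float:
    """Estimate execution time of a workload from its measured
    instruction count and cycles-per-instruction (CPI)."""
    cycles = instructions * cpi
    return cycles / clock_hz

# Hypothetical: an analytics module measured at 50M instructions with a
# CPI of 1.2, running on a 1.5 GHz gateway core.
t = task_time_seconds(50e6, 1.2, 1.5e9)
print(f"{t * 1000:.1f} ms per invocation")  # 40.0 ms
```

A transaction-level model like the one described here uses such measured costs to advance simulated time, rather than executing the real instructions.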

JE: Did you introduce the actual analytics algorithms into the model?

SG: Yes, we did model the edge analytics as a starting point. We had a bit of luck here: the algorithm designers were working in Matlab, and Intel CoFluent Studio has a Matlab integration. Thus, we could run the Matlab algorithms directly inside the model generated by Intel CoFluent Studio, with no need to convert them to actual code. Going forward, we can use the Matlab/R plugins in Intel CoFluent Studio to integrate analytics much faster than the current turnaround of a year or more. In this way we could work with software functionality a year ahead of having actual running code on an actual platform, which is obviously handy.

JE: What were the results of your simulation, and how many different configurations did you actually run?

SG: For a given set of input variables and sensor data throughput, the model estimated the platform resource usage, giving us a clear idea of which gateway would best suit the workload under consideration. It also provided architectural input in areas where we had to devise a better way to manage the data processing in case the gateway was the fixed variable.

JE: How did you calibrate the model to gain faith in the results?

SG: To perform the initial calibration, we measured real sensor data rates and processing timelines on actual hardware, and added them to the model as variables. This meant that we could compare the results (from running a certain set of tasks) from the model with results from hardware, increasing our faith in the model.
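That calibration step can be expressed as a simple acceptance check: run the same task set in the model and on the reference hardware, and require the relative error of every prediction to stay within a tolerance. A sketch with entirely made-up task names, latencies, and tolerance:

```python
def relative_error(predicted: float, measured: float) -> float:
    """Relative error of a model prediction against a hardware measurement."""
    return abs(predicted - measured) / measured

# Hypothetical task latencies in milliseconds:
# (model prediction, measurement on the reference gateway).
calibration_runs = {
    "sensor_ingest":  (4.1, 4.0),
    "preprocess":     (11.8, 12.5),
    "analytics_pass": (52.0, 49.5),
}

TOLERANCE = 0.10  # accept the model if every task is within 10%
ok = all(relative_error(p, m) <= TOLERANCE
         for p, m in calibration_runs.values())
print("model calibrated:", ok)  # model calibrated: True
```

Once the model passes such a check on known hardware, its predictions for unbuilt configurations carry a lot more weight.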

JE:  Is there any chance you could have done this using hardware?

SG: Not really. In our lab, we have a few gateways and a few sensors. Evaluating the architecture on the lab bench would have limited us to just the configurations we could build using that set of hardware. We would also have had to wait for the final software to do any kind of performance analysis and estimation. We could have tried different sets of analytics modules, but by then it would have been too late to change the capacity of the gateway hardware, so it would have been a matter of packing whatever we had into a given box. Not very architectural, and certainly not shift-left.

In contrast, with Intel CoFluent Studio, we could study the problem before we had hardware, and without being limited by the hardware configurations available to us in the lab. Eventually, we did run code on the real machine, but by that point we had a good idea of what would work and what would not.

JE: Are you still using the model today?

SG: Yes and no. The project has ended, so we no longer use the model we talked about here. On the other hand, we took what we learnt from this project and are applying it to our next one. In this new project, we are way ahead of hardware availability: the Intel CoFluent Studio model and simulation let us do architecture work before we have anything to run code on, and way before we have the actual code in hand.

We have extended and improved the model to make it easier to vary the set of edge analytics modules used, allowing for faster experiments that explore a larger architectural space.

JE: That’s really good to hear! Nothing proves the value of a tool like users who keep using it once the first test project is over. I have sold and marketed development tools for my entire career, and there is nothing better than a user who decides that your tool is part of the standard tool chest from now on.

I guess that was the final question. Thanks a lot for your time and insights, Sangeeta! This was a nice example of how simulation can be used to build better systems faster, and why simulation should really be considered a mandatory tool for system, hardware, and software developers everywhere!

SG: Thank you for the opportunity to talk more about the modeling project. I think the lessons learnt here can be applied to several other areas, so that we actually have performance data supporting a particular product vision!

 

*Other names and brands may be claimed as the property of others.

No computer system can be absolutely secure. Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer.
