Simulation Plays in the Gartner Top Ten Tech Trends for 2018

The analyst firm Gartner has published a list of their “Top 10 Strategic Technology Trends” for 2018. The list contains a wide range of trends, from specific technologies like Blockchain to broad domains like Conversational Platforms. In this blog post, I will look at how simulation plays an important role in a couple of the trends that Gartner identified.

Figure: Gartner's Top 10 Strategic Technology Trends for 2018

Digital Twins

According to the Gartner blog post, Prepare for the Impact of Digital Twins, “Digital twins refer to the digital representation of physical objects.” That “digital representation” is an analytical model or a simulation model of a real-world system. The model is used by developers and analysts in their labs to understand the behavior of the real-world system in its operational environment.

The model is provided with inputs or stimuli that have been measured, logged, or observed in real-world systems. That makes it possible to understand the behavior of the system better, debug problems, and predict future problems before they happen. The practice is getting a lot of industrial and professional interest with respect to the Internet of Things (IoT), as connected sensors and control systems make accessible a wealth of information from the real world that was not available in the past.
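
To make the replay idea concrete, here is a minimal sketch in Python: logged power and temperature readings (in a hypothetical CSV format) are fed into a simple first-order thermal model, and the script flags points where the measurements drift away from what the model predicts. The model constants, file layout, and alarm threshold are all illustrative assumptions, not taken from any particular product.

```python
# Minimal sketch: replay logged real-world inputs through a simple model of the
# system and compare its predictions against what was actually measured.
# The CSV columns, the first-order thermal model, and the alarm threshold
# are illustrative assumptions.
import csv

AMBIENT_C = 22.0          # assumed ambient temperature
HEAT_PER_WATT = 0.8       # assumed steady-state temperature rise per watt of load
TIME_CONSTANT_S = 120.0   # assumed thermal time constant
ALARM_THRESHOLD_C = 2.0   # flag when model and reality diverge by this much

def simulate_step(temp_c, power_w, dt_s):
    """Advance a first-order thermal model of the device by dt_s seconds."""
    target = AMBIENT_C + HEAT_PER_WATT * power_w
    return temp_c + (target - temp_c) * (dt_s / TIME_CONSTANT_S)

def replay(log_path):
    """Replay a log of (timestamp_s, power_w, measured_temp_c) rows."""
    model_temp = AMBIENT_C
    prev_t = None
    with open(log_path) as f:
        for row in csv.DictReader(f):
            t = float(row["timestamp_s"])
            dt = (t - prev_t) if prev_t is not None else 0.0
            prev_t = t
            model_temp = simulate_step(model_temp, float(row["power_w"]), dt)
            drift = float(row["measured_temp_c"]) - model_temp
            if abs(drift) > ALARM_THRESHOLD_C:
                print(f"t={t:.0f}s: measurement drifts {drift:+.1f} C from the model")

if __name__ == "__main__":
    replay("device_log.csv")  # hypothetical log file captured in the field
```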

The digital twin is not a new concept per se; computer simulations were used by NASA in the Apollo program in the 1960s to understand the behavior of space systems and make critical decisions on how to run missions. It is very common in the space industry to use both physical and digital models to understand and resolve issues for missions in flight. The movie Apollo 13 offers a nice illustration of this for the mostly-analog era, but today, the same principles apply, in particular to software.

With the increasing emphasis on smart, connected systems and shorter innovation cycles, developers need to iterate testing, feedback, and deployment more quickly. Digital Twins for deployed systems often grow out of simulations used in the system design phase. They are built as a composition of many different models from different domains, including mechanical, physical, analog, digital, and software.

A key point for me here is how you include the software aspect in the digital twin. In some cases, you want a highly abstract model that captures only the algorithms used, or just their performance characteristics (such as models built using Intel® CoFluent™ Technology). In other cases, you want to see the actual software run on your system controllers (or backend database, or other hardware or software system), where virtual platform tools like Wind River® Simics® are an excellent choice. Connecting Simics virtual platforms to simulation models of physical systems is common practice. It helps many companies build twins that span from large-scale physical modeling of the system all the way to the actual control software, including running multiple networked IoT, embedded, and control systems on the control side. The digital twin concept applies to both embedded systems and general-purpose computing systems. In the general-purpose case, the twin looks at the software stack (from BIOS to applications) and how it behaves under particular workloads or inputs, such as replicating issues from the real world for debugging.
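
As a generic illustration of what coupling control software to a physical model looks like, here is a toy co-simulation in Python: a water-tank "plant" and a simple controller exchange sensor and actuator values in fixed time quanta. This is not the Simics API, just a sketch of the lockstep pattern; the tank parameters and setpoint are made-up numbers.

```python
# Generic co-simulation sketch: a physical plant model and a controller model
# advance in lockstep, exchanging sensor readings and actuator commands each
# time quantum. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class TankPlant:
    """Very simple physical model: a water tank with a controllable drain valve."""
    level_m: float = 0.5
    inflow_m_per_s: float = 0.01

    def step(self, dt_s: float, valve_open: bool):
        outflow = 0.02 if valve_open else 0.0
        self.level_m = max(0.0, self.level_m + (self.inflow_m_per_s - outflow) * dt_s)

@dataclass
class Controller:
    """Stand-in for the control software; in a real digital twin this could be
    the actual binary running on a virtual platform."""
    setpoint_m: float = 0.6

    def step(self, measured_level_m: float) -> bool:
        # Open the valve whenever the level is above the setpoint.
        return measured_level_m > self.setpoint_m

def cosimulate(seconds: float, quantum_s: float = 0.1):
    plant, ctrl = TankPlant(), Controller()
    t, valve = 0.0, False
    while t < seconds:
        valve = ctrl.step(plant.level_m)   # controller reads the sensor...
        plant.step(quantum_s, valve)       # ...and the plant reacts to the actuator
        t += quantum_s
    print(f"After {seconds:.0f}s: level={plant.level_m:.3f} m, valve_open={valve}")

if __name__ == "__main__":
    cosimulate(600)
```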

Cloud to Edge

Gartner notes: "While it's common to assume that cloud and edge computing are competing approaches, it's a fundamental misunderstanding of the concepts. Edge computing speaks to a computing topology that places content, computing and processing closer to the user/things or 'edge' of the networking. Cloud is a system where technology services are delivered using Internet technologies, but it does not dictate centralized or decentralized service delivery."

Sometimes you want your computation to be centralized in a server that sits in a data center far from the edge, and sometimes it makes more sense to do the operations at the edge. It comes down to a question of how much computing can be afforded (based on cost, power consumption, cooling, latency requirements, communications bandwidth, etc.) in each part of the system hierarchy, and how that compares to the cost of communicating data to somewhere else. Moving things close to the edge shortens reaction time and latencies, and reduces the amount of data you need to send back up (often by many orders of magnitude, if raw sensor data is compared to processed data – like a video stream from a surveillance camera compared to a notification that a person entered a room) – but it also increases the cost of the edge system.
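
A rough back-of-envelope calculation makes the surveillance-camera example tangible. The bit rates, event counts, and payload sizes below are illustrative assumptions, but they show why the upstream data volume can shrink by several orders of magnitude when processing moves to the edge.

```python
# Back-of-envelope comparison: ship raw video to the cloud versus sending only
# event notifications from an edge device. All figures are rough assumptions.
VIDEO_BITRATE_MBPS = 4.0   # assumed compressed 1080p stream per camera
EVENTS_PER_HOUR = 20       # assumed "person entered room" notifications per camera
EVENT_SIZE_KB = 2          # assumed JSON payload per notification
CAMERAS = 100
HOURS_PER_DAY = 24

# Raw video: Mbit/s -> MB/s -> MB/day -> GB/day, summed over all cameras
raw_gb_per_day = CAMERAS * VIDEO_BITRATE_MBPS / 8 * 3600 * HOURS_PER_DAY / 1024

# Edge-processed: notifications only
edge_gb_per_day = CAMERAS * EVENTS_PER_HOUR * HOURS_PER_DAY * EVENT_SIZE_KB / (1024 * 1024)

print(f"Raw video to the cloud: {raw_gb_per_day:10.1f} GB/day")
print(f"Edge-processed events:  {edge_gb_per_day:10.4f} GB/day")
print(f"Reduction factor:       {raw_gb_per_day / edge_gb_per_day:10.0f}x")
```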

Thus, it is very important to understand, model, and explore the trade-offs involved in moving computation around and how that affects the communication, depending on the types of networks involved. This process is actually recursive: once we get inside a data center, the same problem comes up again in terms of balancing different types of resources – compute, storage, communications. Solving it by intuition, guesswork, and spreadsheets is not a very reliable or scalable process.

Model the Entire System, Edge-to-Cloud

Figure: Sketch of an edge-to-cloud system model

It makes more sense to build a model of the entire system and use simulation with varying parameters to explore the system design space. This can quickly churn through lots of alternatives to find promising trade-offs – all without actually building the system. A sketch of such a model is shown in the picture above, with one part modeling a set of sensors, one modeling the aggregation in a central gateway, and a final part modeling the datacenter.
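
As a sketch of what such an exploration loop can look like, the Python snippet below sweeps the number of processing stages placed in the gateway of a deliberately simplified cost-and-latency model. All of the cost, latency, and data-reduction figures are made-up placeholders; the point is the sweep over the design space, not the specific numbers.

```python
# Parameter sweep over a simplified edge-to-cloud model: how many processing
# stages to place in the gateway versus shipping data to the datacenter.
# Every constant here is an illustrative placeholder.
SENSOR_DATA_MBPS = 50.0          # assumed raw data rate from all sensors
REDUCTION_PER_EDGE_STAGE = 10.0  # each edge stage is assumed to shrink data 10x
EDGE_STAGE_COST = 400            # assumed cost per processing stage in the gateway
CLOUD_COST_PER_MBPS = 30         # assumed cost of uplink + datacenter capacity
EDGE_REACTION_MS = 5             # assumed reaction time for local decisions
CLOUD_REACTION_MS = 80           # assumed round trip when decisions wait on the cloud

def evaluate(edge_stages: int):
    """Return (cost, reaction latency, uplink bandwidth) for a given split."""
    uplink_mbps = SENSOR_DATA_MBPS / (REDUCTION_PER_EDGE_STAGE ** edge_stages)
    cost = edge_stages * EDGE_STAGE_COST + uplink_mbps * CLOUD_COST_PER_MBPS
    reaction_ms = EDGE_REACTION_MS if edge_stages > 0 else CLOUD_REACTION_MS
    return cost, reaction_ms, uplink_mbps

for stages in range(0, 4):
    cost, reaction, uplink = evaluate(stages)
    print(f"edge stages={stages}: cost={cost:7.0f}, "
          f"reaction={reaction:3.0f} ms, uplink={uplink:7.3f} Mbit/s")
```

Even with toy numbers, the sweep shows the shape of the trade-off: the first edge stage pays for itself by slashing uplink cost and reaction time, while additional stages add gateway cost for diminishing returns.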

In the end, the simulations can also help optimize the system setup and guide the tuning of the deployed system. In effect, the cloud-to-edge system model becomes a digital twin.

Summary

Overall, simulation should be a natural part of the development tool kit for anyone building any kind of complex system. Relying solely on hardware is not optimal, and you want to use simulation to add flexibility and efficiency to the design and development process. Simulation lets you expand the testing landscape for your system and hopefully find bugs in the lab rather than after a system is deployed. In the end, the trendy technologies that Gartner lists will be realized in various products and systems – systems that contain rich and complex interactions between many different pieces. To conquer that complexity, better tools and processes are needed.