By Jeff Gowan
Recently I heard someone from Google talk about the next-generation Google Assistant, which promises to evolve search in new and incredible ways. The speaker described standing in front of a public statue and simply asking your phone, “Who made this?” Your phone captures the contextual data of your location, and it will not only tell you the name of the statue but also offer information about the artist and other interesting facts. Extend that use case and at some point you could actually interact with a digitally enhanced version of that statue. These are some of the promises of the near future.
But how does the promise actually become reality, and what happens when these practices become commonplace? When you lift the hood on what drives this sort of modern wonder within the current infrastructure, you see an enormous amount of inefficiency caused by current network architectures and traffic flows. If your command has to travel all the way back to the core of the network to be analyzed and answered, the response time may be one that many would find unacceptable. Now think of the resource requirements for these applications when you multiply one user into many thousands or even millions. You end up with a massive strain on the network. This load will slow responses even more and may drive us to do something drastic, like open an actual paper-based guidebook or, heaven forbid, ask someone standing nearby (gasp!).
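The latency argument above can be made concrete with a back-of-the-envelope sketch. All of the figures below (propagation delays, hop counts, processing time) are illustrative assumptions for a round trip to a distant core data center versus an edge node near the base station, not measurements from any real network:

```python
# Toy latency-budget comparison: core cloud vs. mobile edge.
# Every number here is an illustrative assumption, not a measurement.

def round_trip_ms(propagation_ms, hops, per_hop_ms, compute_ms):
    """Total response time: network transit both ways plus processing."""
    return 2 * (propagation_ms + hops * per_hop_ms) + compute_ms

# Assumed: a core data center far away, many router hops from the user.
core_ms = round_trip_ms(propagation_ms=40, hops=12, per_hop_ms=2, compute_ms=50)

# Assumed: an edge node co-located near the base station, one hop away.
edge_ms = round_trip_ms(propagation_ms=2, hops=1, per_hop_ms=2, compute_ms=50)

print(f"core cloud: ~{core_ms} ms")  # ~178 ms under these assumptions
print(f"edge node:  ~{edge_ms} ms")  # ~58 ms under these assumptions
```

Even with identical compute time, the transit portion dominates the core-cloud case under these assumptions, and it only gets worse as congestion from many simultaneous users adds queuing delay on the long path.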
Enter Mobile Edge Computing
The way to make these contextual, resource-heavy applications viable is to migrate compute power to the edge of the network. This approach is known as Mobile Edge Computing (MEC). By placing cloud computing capabilities at the edge, you effectively create a “fog” that enables mobile operators, service and content providers, OTT players, and ISVs to capture value from the proximity, context, agility, and speed that an edge compute solution can provide. But all of those components must deliver reliably. Most people won’t wait more than a few seconds for a web page to load. We just won’t tolerate waiting for a machine to get back to us.
Artesyn, China Mobile, Intel, and Wind River have created a demonstration to show how the customer experience can be enhanced using MEC. The demonstration shows how a visitor to an art museum could enrich their experience of the displayed artwork through augmented reality (AR). A visitor could be identified with facial recognition as they enter the museum, and that identification could then be used to personalize the experience. As the visitor tours the museum, the exhibits come to life, demonstrating how this kind of versatility and innovation can deepen the end user's interest in what they are viewing. Imagine having a real-time interaction with the artwork you are viewing!
What makes this possible is an integrated FlexRAN (scalable-density virtualized baseband pooling) and MEC solution. The demo incorporates an Artesyn MaxCore system built on Intel Xeon processors and Wind River Titanium Server, a high-performance, highly available network virtualization infrastructure. The combination provides the flexibility and agility needed to respond to varying levels of resource demand while guaranteeing the availability and uptime required for a smart coverage solution.
This demonstration was first shown at Mobile World Congress in Barcelona this year, and it continues to be enhanced to show the power of a MEC solution. The current version was shown last week at Mobile World Congress in Shanghai, China. If you missed us in Shanghai, the demo can also be viewed by appointment; contact your Intel representative.