The Case for Easier Installation, Operations & Maintenance at the Edge

By Jeff Gowan

The network edge may look like the land of opportunity for telcos seeking fertile ground to grow new revenue-generating services. For their network operations managers, however, ensuring the delivery of reliable services across disparate edge architectures and geographically dispersed sites of varying sizes could well look like a nightmare scenario. Indeed, operational complexity is one of the biggest challenges of edge cloud deployments.

Managing edge clouds doesn’t have to be so difficult. The underlying edge cloud platform plays an important role in reducing the time and resources required and minimizing operational costs. Looking just at the processes involved in installation, operations and maintenance, having the right platform in place is pivotal to making a positive business case for edge clouds.

Let’s start at the beginning by looking at the installation and commissioning of the platform itself. Procedures can vary widely among different platforms. On some platforms, the deployment process can take weeks or months. For the fast-changing edge environment, where innovative services need to be spun up quickly, that’s just not acceptable.

Installation and commissioning of the platform should be as automated and repeatable as possible. A platform should be up and running in a matter of minutes, not months. Ideally, the platform would come as a pre-integrated image that can be deployed from a USB or downloaded, with no need for a separate installation node. Automated provisioning should eliminate manual intervention for deployments of any size, especially at the edge, where nodes can number in the thousands. And when the system needs to be expanded, a set-up wizard should store all the configuration parameters so operations teams don't have to start from scratch. The platform nodes should also perform a system self-check and put themselves into service.
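To make the zero-touch idea concrete, here is a minimal sketch of that provisioning flow in Python. This is illustrative only: the config file name, the required parameters, and the self-check logic are assumptions for the example, not part of any real platform.

```python
import json

# Hypothetical zero-touch provisioning sketch: reuse the parameters
# captured by a set-up wizard, run a self-check, and only then let
# the node put itself into service.

CONFIG_FILE = "wizard_config.json"  # illustrative name for the stored wizard output

def load_saved_config(path):
    """Reload wizard-captured parameters so operators never re-enter them."""
    with open(path) as f:
        return json.load(f)

def self_check(config):
    """Minimal health gate: every required parameter must be present."""
    required = ("mgmt_network", "ntp_servers", "node_role")
    return all(key in config for key in required)

def provision_node(path=CONFIG_FILE):
    """Provision a node with no manual intervention."""
    config = load_saved_config(path)
    if not self_check(config):
        raise RuntimeError("self-check failed; node stays out of service")
    # A real platform would configure networking, storage and services
    # here; this sketch only reports the outcome.
    return f"node {config['node_role']} in service"
```

Because the wizard output is reused verbatim, expanding the deployment is just a matter of re-running the same provisioning step on each new node.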

Another aspect of operational efficiency is ensuring the reliability and security of services across the network edge. Whether it's supporting a telco or industrial control application, critical infrastructure providers must avoid costly downtime in their operations. Service providers need complete visibility of the edge network via remote monitoring. If faults or performance issues arise, there should be instant notifications and automated responses.
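The notify-then-respond pattern described above can be sketched in a few lines. Again, this is a hypothetical illustration: the alarm fields, severity levels, and response actions are assumptions chosen for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of fault handling at a remote edge site:
# raise an instant notification, then apply an automated response
# matched to the alarm's severity.

@dataclass
class Alarm:
    severity: str   # "critical", "major", or "minor" (illustrative scale)
    entity: str     # e.g. "edge-site-17/compute-0" (hypothetical naming)
    reason: str

def notify(alarm):
    """Instant notification to the central operations dashboard."""
    return f"[{alarm.severity.upper()}] {alarm.entity}: {alarm.reason}"

# Automated responses keyed by severity: critical faults trigger
# recovery, minor ones are simply logged for later review.
RESPONSES = {
    "critical": lambda a: f"restarting services on {a.entity}",
    "major":    lambda a: f"migrating workloads off {a.entity}",
    "minor":    lambda a: f"logged fault on {a.entity}",
}

def handle(alarm):
    """Notify immediately, then run the matching automated response."""
    return notify(alarm), RESPONSES[alarm.severity](alarm)
```

The point of the pattern is that no human sits between detection and first response; operators see the notification, but recovery has already begun.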

When it comes to platform maintenance for edge clouds, it’s not as mundane as it might sound. If routine patches and upgrades are complicated and take too long, that will drive up operations costs. And even tiny mistakes made during routine maintenance can cause a system-wide outage. To avoid unplanned downtime, platform maintenance needs to be simple and automated.

Techniques like in-service patching and hitless upgrades allow an edge cloud platform to stay up to date without disruption to services. From minor patches to full system upgrades, the entire process should be automated. For system upgrades, live migrations are essential. The edge cloud system and all its features cannot be compromised during upgrades.
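A hitless rolling upgrade built on live migration can be sketched as follows. This is a simplified model, not a real orchestrator: node and workload names are illustrative, and the upgrade step is passed in as a callable.

```python
# Hypothetical sketch of a rolling, hitless upgrade: drain each node
# by live-migrating its workloads to a peer, upgrade the drained node,
# then return it to service. Services never lose capacity entirely.

def rolling_upgrade(nodes, workloads, upgrade):
    """Upgrade nodes one at a time.

    nodes:     list of node names
    workloads: dict mapping node name -> list of workload names
    upgrade:   callable applied to each node once it is drained
    """
    log = []
    for i, node in enumerate(nodes):
        peer = nodes[(i + 1) % len(nodes)]      # live-migration target
        for workload in workloads.get(node, []):
            log.append(f"live-migrate {workload}: {node} -> {peer}")
            workloads.setdefault(peer, []).append(workload)
        workloads[node] = []                    # node is now drained
        upgrade(node)                           # patch while empty
        log.append(f"{node} upgraded, back in service")
    return log
```

Because each node is emptied before it is touched, workloads keep running throughout; the only observable effect is a brief live migration, which is exactly why live migration support is called out as essential above.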

Hopefully, the above doesn’t sound like wishful thinking, like an ideal world for operations that service providers can only aspire to. These capabilities for simplifying installation, operations and maintenance across the network edge are all possible thanks to the work that is being done in the OpenStack StarlingX Project.

StarlingX provides a comprehensive cloud infrastructure software stack that addresses many of the manageability challenges of edge deployments. The project is working to ensure that when telcos deploy edge clouds, they are reliable, secure, easily orchestrated and massively scalable.

The second release of StarlingX is coming in Q3. If you're interested in making edge cloud deployments easier and more efficient, you can join this exciting project. You can also take advantage of easier, more efficient edge cloud deployments with our product, Wind River Titanium Cloud, which is based on StarlingX.

Wind River can supply scalable distributed edge clouds ranging from small single-server solutions to large multi-server solutions, replicated hundreds of thousands of times and spread out over a wide area. To overcome manageability issues, Wind River's distributed edge cloud solutions provide centralized management, massive scalability, edge cloud autonomy, and zero-touch provisioning. These capabilities are essential for cost-efficient management. Together, they will shorten edge cloud deployment times, streamline operations, ensure availability, minimize human errors, and, ultimately, lower overall operating costs to support the business case for distributed edge cloud deployments.
