Companies today know that the cloud should be part of their IT strategy, but are not always clear on exactly how. Part of the challenge is that different companies are best served by different strategies. The best approach depends on a variety of factors: the overall digital business strategy (as I wrote about in the first post of this three-part series), financial resources, IT competencies, workload characteristics, the balance of legacy versus cloud-native applications, internal IT cost structure relative to the cloud, achievable economies of scale, reliability and availability targets, and opportunities for custom performance engineering.

I cover all of these trade-offs in my book Cloudonomics: The Business Value of Cloud Computing, including formulas to determine an economically optimal approach. These types of considerations will determine the best approach for a given company at a given time, and as that company evolves, its cloud strategies may evolve as well.

For example, Dropbox migrated out of the public cloud into its own environment centered on a custom-engineered storage server called Diskotech, optimizing cost, performance, security and availability. Conversely, GE is migrating many of its applications into a public cloud environment, primarily for reasons of agility and innovation.

Finding the Right Cloud

Most companies are best served by a hybrid and/or multi-cloud approach. For example, everyone knows that Netflix is a heavy user of public cloud services. Less well known is that while the cloud supports media transcoding and recommendation-engine processing, much of Netflix's core delivery infrastructure is owned. As Netflix puts it, everything before you hit “play” happens in a public cloud; everything after doesn’t.

Netflix “Open Connect Appliances” contain the videos we watch, and are deployed in roughly 1,000 interconnection and ISP facilities close to the edge of the network for final delivery to endpoints such as home TVs and smartphones. In addition, a second cloud provider is used to back up data such as customer viewing history and transcoded media files, protecting against data loss or service degradation in the event of a primary cloud provider outage.

In short, Netflix uses a hybrid multi-cloud approach. Such multi-clouds may exist at the infrastructure as a service (IaaS) layer, for example for data protection, or at the platform as a service (PaaS) or application layer, for example, where multiple software as a service (SaaS) clouds are interconnected to support an end-to-end workflow.

There are a variety of other hybrid architectures that companies can pursue based on their unique needs. For example, origin data may be kept in an enterprise data center but distributed to global regions via a content delivery network (itself a cloud), or linked to a front end of scalable cloud-based web and application server tiers. Conversely, the cloud may be the back end, used only, say, for data backups.

Or, a company may normally run in its own facilities but enlist cloud resources during demand spikes driven by seasonal or one-time promotions or global events. This reduces total cost while maintaining the elasticity to meet variable demand with response times that preserve the user experience. Yet another approach is to run test/dev in the cloud but own the production environment, or do the exact reverse.
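The economics of this "cloud bursting" pattern can be reduced to simple arithmetic: own enough capacity for normal demand, and rent cloud instances only for the overflow. The sketch below illustrates the idea in Python; the function and capacity figures are invented for illustration and do not reflect any particular provider's API or pricing.

```python
def burst_plan(demand_rps, on_prem_capacity_rps, instance_capacity_rps):
    """Return the number of cloud instances needed to absorb demand
    (in requests per second) that exceeds owned on-premises capacity.

    Illustrative only: names and figures are assumptions, not a real API.
    """
    overflow = max(0, demand_rps - on_prem_capacity_rps)
    # Round up: a partially used instance must still be provisioned.
    return -(-overflow // instance_capacity_rps)

# Normal day: owned capacity suffices, so no cloud spend.
print(burst_plan(8_000, 10_000, 2_000))   # 0
# Promotion spike: only the overflow is burst into the cloud.
print(burst_plan(25_000, 10_000, 2_000))  # 8
```

The key point is that the enterprise pays for peak capacity only while the peak lasts, rather than owning it year-round.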

Embracing Digital Transformation

A variety of technologies have matured over the past few years to enable such flexibility. For example, containers and platform services help ensure that an application that runs in one environment will run the same way on a separate physical infrastructure. Orchestration engines help spin up cloud resources, deploy application components and microservices to them, and turn them down when they are no longer needed. Monitoring and management tools ensure that components are functioning properly, SLAs are being met, and that costs are being managed.
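The orchestration lifecycle described above (provision resources, deploy components onto them, tear them down when idle) can be sketched as a toy in-memory model. All class and method names below are invented for illustration; real engines such as Kubernetes expose declarative APIs rather than imperative calls like these.

```python
class Orchestrator:
    """Toy orchestration engine: provisions resources, deploys
    components onto them, and releases them when no longer needed.
    Purely illustrative; not a real orchestration API."""

    def __init__(self):
        self.resources = {}  # resource_id -> list of deployed components

    def provision(self, resource_id):
        self.resources[resource_id] = []

    def deploy(self, resource_id, component):
        self.resources[resource_id].append(component)

    def teardown(self, resource_id):
        # Releasing unused resources is what keeps cloud costs elastic.
        del self.resources[resource_id]

orch = Orchestrator()
orch.provision("cloud-vm-1")
orch.deploy("cloud-vm-1", "checkout-microservice")
orch.teardown("cloud-vm-1")  # spike over: stop paying for the resource
```

Monitoring and management tooling then closes the loop, deciding when `provision` and `teardown` should fire based on load, SLAs and cost.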

However, private, public and virtual private networks are among the most important foundational elements of hybrid infrastructure. They tie together all of these disparate elements: enterprise data centers, branch offices, one or more public clouds, colocation facilities, and increasingly, multiple layers of dispersed fog computing resources including the edge, and fixed and mobile devices and “things” such as sensors, autonomous vehicles, surveillance devices, robots and digital signs.

For the overall solution to work effectively, the networks connecting these elements need to be secure; have sufficient bandwidth to carry potentially massive amounts of data to or from the cloud in support of big data analytics and machine learning; meet application-dependent latency requirements; and have the flexibility, elasticity and control to match the inherent flexibility of the cloud.

As an example, consider a hybrid approach where the cloud is part of a business continuity strategy. An enterprise data center mirrors or replicates data to a colocation facility over, say, a fixed network. If that data center suffers a major outage, the cloud can be used as a primary “site” for executing mission-critical applications, perhaps including customer-facing ones, but only if the data can be migrated rapidly and cost-effectively from the colocation facility to the cloud.
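Whether such a failover is viable often comes down to simple arithmetic: how long it takes to move the data set from the colocation facility to the cloud over the available link. Here is a rough back-of-the-envelope estimate; the efficiency figure is an assumption, since real-world sustained throughput varies widely.

```python
def transfer_hours(data_tb, link_gbps, efficiency=0.8):
    """Estimate hours to move data_tb terabytes over a link_gbps link.

    `efficiency` is an assumed fraction of nominal bandwidth actually
    achieved after protocol overhead; tune it for your environment.
    """
    bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

# 50 TB over a 10 Gbps link at 80% efficiency: roughly 14 hours.
print(round(transfer_hours(50, 10), 1))
```

If that recovery window exceeds the business's recovery time objective, the architecture needs a bigger pipe, pre-staged data in the cloud, or both — which is exactly why network capacity belongs in the business continuity design, not as an afterthought.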

The linkages are clear: today’s businesses are increasingly digital, and their digital architectures are likely to be hybrid in some way. Networking is the linchpin that can either enable or hinder the performance, reliability and cost-effectiveness of the overall solution, and thus the success of the overall business.

The next post in this three-part series will review some typical network considerations for hybrid IT.

Editor’s Note: To learn more about CenturyLink’s security capabilities, visit this page or contact your CenturyLink account representative.