Achieving Cloud Neutrality with a Multicloud Approach

Discover how deploying multicloud successfully can enable better digital transformation through cloud neutrality.

When considering a move to the cloud, it might be tempting to pick a single provider. But digital transformation services aren’t one-size-fits-all. In fact, there are many reasons why digital transformation requires organizations to take a cloud neutrality approach: using cloud services from multiple providers and avoiding lock-in to any single one. Increased cost savings and price flexibility, risk mitigation, enhanced security and service availability, greater scalability and agility, and access to each provider’s best solutions are all too compelling to ignore.

See also: Solving the Challenges of Multicloud Cost Management

The days of single public cloud deployments are gone. According to Gartner, by 2026, more than 90% of enterprises will extend their capabilities to multi-cloud environments, up from 76% in 2020. The use of multiple clouds is by far the most common pattern among enterprises, with 89% (73% hybrid cloud, 14% multiple public cloud and 2% multiple private cloud) adopting this strategy in Flexera’s 2024 State of the Cloud Report.

The truth is that managing and supporting multi-cloud is not an easy task. In an ideal world, application workloads, whatever their heritage, should be able to move seamlessly between (or be shared among) cloud service providers and be deployed wherever the optimal combination of performance, functionality, cost, security, compliance, availability, and resilience is found, all while avoiding the dreaded vendor lock-in.

These are some key principles for making multi-cloud adoption a success:

1. Avoid Multi-Cloud through Hyperscalers

Although the big cloud providers spent years ignoring multi-cloud and hybrid cloud, they are now taking their first steps toward embracing them. Hyperscalers have started to offer platforms (e.g. AWS ECS Anywhere, Google Anthos) that run on other providers’ infrastructure, as well as pre-configured hybrid cloud appliances (e.g. AWS Outposts, Microsoft’s Azure Stack) that promise to bring the power of the public cloud to the private data center. While these offer the simplicity of using the same interfaces on both the public cloud and a private data center, such proprietary solutions do not avoid the pitfalls of single-vendor reliance and can be very expensive in the long run.

2. Avoid Proprietary-Source Solutions

The evolution of the modern cloud is leading to the creation of highly complex systems, often based on proprietary orchestration solutions by major vendors (e.g. Nutanix, VMware), that expand private clouds with resources from cloud providers. These proprietary-source solutions have predatory pricing and licensing models, are complex and expensive to deploy and maintain, and usually require the user to manually migrate or rebuild workloads. When the solution combines hardware and software, the problem is exacerbated and vendor lock-in is inevitable. 

A recent example that highlights this trend is VMware’s acquisition by Broadcom, which has resulted in widespread concern due to substantial changes in pricing and licensing structures, further intensifying the challenges of vendor lock-in.

3. Adopt True Multi-Cloud  

Multi-cloud is not only about achieving interoperability, defined as the ability to manage your workloads across every cloud from a single pane of glass. A true multi-cloud solution should also bring:

- Portability: the execution of workloads with the same images and templates on any infrastructure, and their mobility across clouds and on-premises infrastructure.
- Enhanced security: the use of dedicated, isolated resources with improved security, privacy, and control.
- Expanded service availability: the execution of applications wherever their quality-of-service requirements can be met.
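As a rough illustration of the portability idea, here is a minimal sketch (all names, images, and provider labels are hypothetical) of a provider-neutral workload template: the workload definition, image included, never changes across targets; only a thin per-provider translation layer differs.

```python
from dataclasses import dataclass


@dataclass
class WorkloadTemplate:
    """A provider-neutral workload description: the same image and
    settings are reused unchanged on every target infrastructure."""
    name: str
    image: str       # e.g. an OCI container image reference
    cpu: int         # vCPUs
    memory_gb: int


def render_for(template: WorkloadTemplate, provider: str) -> dict:
    """Translate the neutral template into a provider-specific request.
    Only this thin 'last mile' varies; the workload definition does not."""
    return {
        "provider": provider,
        "image": template.image,
        "resources": {"cpu": template.cpu, "memory_gb": template.memory_gb},
    }


web = WorkloadTemplate(name="web", image="registry.example.com/web:1.4",
                       cpu=2, memory_gb=4)
for target in ("public-cloud-a", "public-cloud-b", "on-prem"):
    request = render_for(web, target)
    print(request["provider"], "->", request["image"])
```

The design point is that portability lives in the template, not the translation: moving a workload means re-rendering the same template for a new target, never rebuilding the workload itself.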

4. Not All Workloads Are Heading to the Cloud

Although multi-cloud will become the norm, there’s still a place for the on-premises data center, at least in the near term, either as part of a hybrid cloud strategy or to host legacy applications that, for whatever reason, are not suitable for migration to the cloud. Some of the main reasons to keep hosting workloads on-premises include cost, control, security, and performance. Moreover, modern distributed cloud environments can include on-premises or on-campus edge micro data centers in cases requiring extreme privacy and/or low latency.

Findings from the 2023 Uptime Institute Global Data Center Survey, the longest-running survey of its kind, reveal that for the first time, IT workloads hosted in on-premises data centers now account for slightly less than half of the total enterprise footprint. At the same time, a report by the Enterprise Strategy Group, published in February 2023, shows that 26% of enterprises still follow an on-premises-first policy, meaning they deploy new applications using on-premises technology unless there’s a more compelling case to use public cloud services.

5. Be Ready for Cloud Repatriation

Cloud repatriation is the process of reverse-migrating application workloads and data from the public cloud to a private cloud located within an on-premises data center or at a colocation provider. Companies need to think about cloud repatriation upfront, optimize early, and apply a vendor-neutral approach from the very beginning.

A 2021 study by VC firm Andreessen Horowitz found that the cloud accounted for 50% of the cost of goods sold (COGS) at the top 50 public Software-as-a-Service companies, and, with the number of public software companies growing, estimated the problem at $100 billion in lost market value. Cloud expenses are not really OpEx, because many large companies end up accepting long-term spending commitments with their providers. For example, Snap disclosed in 2017 that it had committed to spending $2 billion over five years with Google and $1 billion over five years with AWS. Other studies estimated that overprovisioning and always-on resources would lead to $26.6 billion in public cloud waste in 2021, not to mention the associated energy waste.
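To see how these figures translate into a repatriation decision, a back-of-the-envelope break-even calculation is often the starting point. All figures below are purely hypothetical, and a real analysis would also account for migration cost, staffing, colocation fees, and hardware refresh cycles:

```python
# Hypothetical steady-state figures for a repatriation candidate (USD).
monthly_cloud_bill = 250_000     # current public cloud spend per month
onprem_capex = 4_000_000         # one-time: servers, storage, networking
onprem_monthly_opex = 80_000     # recurring: power, space, support staff

# Each month on-premises saves the difference between the cloud bill
# and the on-premises running cost; the CapEx is recovered over time.
monthly_saving = monthly_cloud_bill - onprem_monthly_opex
breakeven_months = onprem_capex / monthly_saving

print(f"Monthly saving:   ${monthly_saving:,}")
print(f"Break-even after: {breakeven_months:.1f} months")
```

With these invented numbers the hardware investment pays for itself in roughly two years, which is why stable, predictable workloads are the usual repatriation candidates, while spiky or short-lived ones tend to stay in the public cloud.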

Given these rising costs, it’s no surprise that enterprises are reconsidering their cloud strategies. According to a Barclays survey, 83% of enterprise CIOs plan to repatriate at least some workloads in 2024—an increase from just 43% in the second half of 2020. This shift reflects a growing concern about optimizing cloud spending and avoiding unnecessary waste, as enterprises seek more cost-effective, flexible solutions for their IT infrastructure.

See also: What Strategic Decisions to Make for Cloud Repatriation

6. Automate Deployment and Operations 

The original vision of cloud computing was on-demand, automated services that scale dynamically to meet demand. While this vision is now a reality for a single cloud, multi-cloud automation is complex and requires specialized tools to piece together solutions from technology stacks and services offered by hyperscalers. Multi-cloud platforms should be based on the automated deployment of nodes at cloud and edge locations with dynamic configurations to fit the needs of heterogeneous environments, the nature of the workloads, and development workflows. Deciding where to place an application is a complex decision based on infrastructure costs, data fees, performance, uptime, and latency. 
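As a simplified sketch of such a placement decision, the snippet below scores hypothetical candidate locations on the criteria named above (compute cost, data fees, latency, uptime). Every provider name, metric, and weight is invented for illustration; real schedulers also factor in compliance, capacity, and affinity constraints.

```python
# Hypothetical candidate locations with per-hour compute cost, per-GB
# egress fee, measured latency to users, and historical uptime.
candidates = {
    "public-cloud-a": {"compute_cost": 0.40, "egress_cost": 0.09,
                       "latency_ms": 45, "uptime": 0.9995},
    "public-cloud-b": {"compute_cost": 0.35, "egress_cost": 0.12,
                       "latency_ms": 60, "uptime": 0.9990},
    "edge-dc":        {"compute_cost": 0.55, "egress_cost": 0.00,
                       "latency_ms": 5,  "uptime": 0.9950},
}


def placement_score(metrics: dict, weights: dict) -> float:
    """Lower is better: penalize cost and latency, reward uptime."""
    return (weights["cost"] * (metrics["compute_cost"] + metrics["egress_cost"])
            + weights["latency"] * metrics["latency_ms"]
            - weights["uptime"] * metrics["uptime"] * 100)


# Weights for a latency-sensitive workload: latency dominates cost here.
weights = {"cost": 10.0, "latency": 1.0, "uptime": 1.0}
best = min(candidates, key=lambda name: placement_score(candidates[name], weights))
print("Place workload on:", best)  # the edge site wins for this workload
```

Changing the weights changes the answer: a batch job that tolerates latency but moves a lot of data would weight cost and egress far more heavily, and could land on an entirely different provider, which is exactly why automated, per-workload placement matters in multi-cloud.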

Streamline Your Operations with an Open Source Multi-Cloud Platform

It’s essential to find ways to simplify cloud operations, as each additional provider in a multi-cloud environment significantly increases management and operational complexity. Many organizations have implemented private cloud infrastructure and now manage workloads across multi-cloud environments using open source solutions. By using a vendor-neutral platform to orchestrate the datacenter-cloud-edge continuum, they can achieve unified management of IT infrastructure and applications and realize the dream of cloud neutrality.
