The High-Performance Infrastructure You Need for Cloud Repatriation

A blog by Phil Kaye, Co-founder and Director at Vespertec

Release date: 19 March 2024

Understanding Cloud Repatriation

People who’ve made it their careers to oversee infrastructure know first-hand the challenges of managing cloud environments. I was struck by a recent Citrix survey, which found that 25% of UK companies have moved over half their cloud workloads back on-premises. In fact, 93% of respondents have done some form of cloud repatriation in the past three years.

What’s driving this mass exodus from the cloud? When I speak to people in the industry, the top factors I hear are unexpected or hidden costs, security risks, sky-high expectations around performance and uptime, compatibility issues, and service disruptions. I’m reminded of a recent presentation by Mark Boost at Civo, who showed that 57% of hyperscaler users reported attempts to manage or reduce their cloud service costs due to escalating and unpredictable expenses, highlighting the growing disillusionment we’re all seeing with the major cloud providers.

As infrastructure leaders look for ways out of highly restrictive service-level agreements (SLAs), here are some best practices for building the high-performance, flexible infrastructure you need to smoothly repatriate cloud workloads.

The High-Performance Infrastructure You Need

Virtualisation is key for server consolidation, letting you drastically cut hardware needs. Virtualising bare metal servers immediately boosts compute density, letting you run far more post-repatriation workloads in the same footprint. However, with Broadcom acquiring VMware for $69bn and cutting off perpetual software licenses, many organisations are concerned that costs will spiral. Fortunately, there are several good alternatives to VMware for virtualisation and Kubernetes, allowing you to weigh your options and stay cost-effective.
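To make the consolidation point concrete, here is a minimal back-of-the-envelope sketch in Python. The host size and overcommit ratio are illustrative assumptions, not vendor figures; in practice you would tune the ratio to your own workload profiling.

```python
import math

def hosts_needed(total_vcpus: int, cores_per_host: int, overcommit: float) -> int:
    """Estimate how many physical hosts a pool of vCPUs requires.

    overcommit is the vCPU:pCPU ratio you are comfortable running
    (e.g. 4.0 for general-purpose workloads -- an assumption here).
    """
    effective_capacity = cores_per_host * overcommit
    return math.ceil(total_vcpus / effective_capacity)

# Example: 600 vCPUs of demand, 64-core hosts, 4:1 overcommit
print(hosts_needed(600, 64, 4.0))  # 3
```

Even a rough calculation like this helps size the post-repatriation footprint before you commit to hardware.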

To simplify management and streamline operations, you can bring in hyperconverged infrastructure, tightly integrating compute, storage, networking, and virtualisation resources. This allows you to start small and scale incrementally as needed while keeping costs aligned with business requirements.

With software-defined networking (SDN), you can also enhance agility by detaching network controls from hardware. This involves centralising the control plane so you can programmatically manage network behaviour and traffic flow. Find out more about disaggregated and software-defined architecture.

If you want to keep a handle on usage spikes post-repatriation, you also might want to investigate a cloud-bursting approach. By integrating on-premises infrastructure with public cloud platforms, you can seamlessly shift overflow workloads to the public cloud when needed. This is a solid approach, but bear in mind that it requires careful monitoring and management of workloads across environments, which may stretch your attention thin.
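A cloud-bursting policy ultimately boils down to a placement decision driven by your monitoring data. The sketch below is a simplified illustration of that logic; the thresholds and metric names are assumptions, and real deployments would wire this into their monitoring and orchestration tooling rather than pass values by hand.

```python
def burst_decision(on_prem_util: float, queue_depth: int,
                   util_threshold: float = 0.85,
                   queue_threshold: int = 50) -> str:
    """Decide where to place an incoming workload.

    Thresholds are illustrative assumptions; in practice they come
    from your own capacity monitoring and SLA targets.
    """
    if on_prem_util < util_threshold and queue_depth < queue_threshold:
        return "on-prem"
    return "public-cloud"

print(burst_decision(0.60, 10))  # on-prem
print(burst_decision(0.92, 10))  # public-cloud
```

The point of keeping the decision explicit like this is that the same policy can be reviewed, tested, and tightened as you learn how your repatriated workloads actually behave.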

You’ll also need energy-efficient cooling systems, like modular in-row cooling units, which can play a role in reducing operational costs by precisely targeting hot spots and adapting to changing conditions.

Implementing a Successful Cloud Repatriation Strategy

Bringing workloads back from the cloud can be complex, but following best practices can set you up for an efficient transition. The first step is conducting a thorough audit of your existing cloud services. Document all applications and data currently residing in the public cloud, taking note of usage patterns, performance requirements, security needs, and dependencies.

Next, analyse which workloads are suitable candidates for repatriation. Applications that require predictable performance, have sharp peaks in demand, or handle sensitive data may be prime for bringing back on-premises.
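One way to keep that analysis systematic is to turn your audit into structured records and rank workloads by the criteria above. This is a minimal sketch with made-up weights and example workloads, not a formal methodology; the fields mirror the attributes mentioned in the text.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    predictable_performance: bool  # needs steady latency/throughput
    demand_peaks: bool             # sharp spikes in demand
    sensitive_data: bool           # regulated or confidential data

def repatriation_score(w: Workload) -> int:
    """Crude additive score; higher means a stronger on-prem candidate.
    Weights are illustrative assumptions."""
    return (2 * w.predictable_performance
            + 1 * w.demand_peaks
            + 2 * w.sensitive_data)

candidates = [
    Workload("analytics-db", True, False, True),
    Workload("marketing-site", False, True, False),
]
ranked = sorted(candidates, key=repatriation_score, reverse=True)
print([w.name for w in ranked])  # ['analytics-db', 'marketing-site']
```

Even a crude ranking like this makes it easier to defend why one workload moves in the first wave and another stays in the public cloud.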

When ready to start migrating applications, opt for a phased approach if possible. Beginning with non-critical workloads allows you to test and refine your process before tackling mission-critical systems. As you repatriate apps, closely monitor for performance impacts or connectivity issues. Having robust failover capabilities is essential should you need to rapidly revert to the cloud.

Also, be prepared to tweak your on-prem infrastructure as needed post-migration. Unexpected capacity or network demands could arise, and you won’t regret building in scaling capabilities to seamlessly handle spikes in traffic volumes or processing requirements. I cannot stress enough the importance of flexibility and headroom when setting up your hardware footprint.

Consider your server, storage, and networking needs – not just for current demand but also for projected growth over the next 3-5 years. High-density server platforms with flexible CPU and memory configurations allow you to scale compute resources as granularly as you like. Similarly, storage arrays and switches that support additional shelves or line cards will give you the incremental capacity and bandwidth upgrades you need.
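Projecting that 3-5 year demand can be as simple as applying a compound growth rate to today's usage. The figures below are assumptions for illustration only; substitute your own measured baseline and growth rate.

```python
def projected_capacity(current: float, annual_growth: float, years: int) -> float:
    """Compound-growth projection of resource demand."""
    return current * (1 + annual_growth) ** years

# Example: 200 TB of storage today, assumed 25% annual growth, 5-year horizon
print(round(projected_capacity(200, 0.25, 5), 1))  # 610.4
```

Running the same projection for compute cores and network bandwidth gives you concrete targets for how much expansion headroom your chassis, shelves, and line cards need to support.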

Building in excess capacity upfront and using equipment that can dial resources up or down on demand gives you the agility to handle whatever demands may come as workloads settle into their new on-premises home. However, it's worth discussing how you can build out your own private cloud without turning monitoring into a full-time job.

Building Your Own Private Cloud Infrastructure

Building an advanced private cloud lets you get many of the benefits of public clouds – like elastic scaling, self-service provisioning, and automated management – but within your own data centre. The key is having integrated software that pools compute, storage, and networking resources into a unified, software-defined infrastructure. This abstraction layer decouples the workloads from the underlying hardware, so you can seamlessly allocate and scale resources up or down as demands change.

Automation is crucial to keep this scaling process manageable. By deploying orchestration software, you can enable a truly scalable private cloud with monitoring, chargeback/showback, and a self-service interface that gives users a friendly, Amazon-like experience.
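The chargeback/showback piece in particular is straightforward to reason about: attribute each team's metered usage against an internal blended rate. This sketch is a simplified illustration; the rate and usage figures are assumptions, and a real orchestration platform would pull this data from its own metering service.

```python
def showback(usage_hours: dict, rate_per_hour: float) -> dict:
    """Attribute private-cloud compute cost to each team (showback).

    rate_per_hour is an internal blended rate derived from your own
    amortised hardware, power, and staffing costs (assumed here).
    """
    return {team: round(hours * rate_per_hour, 2)
            for team, hours in usage_hours.items()}

print(showback({"data-eng": 1200, "web": 300}, 0.12))
# {'data-eng': 144.0, 'web': 36.0}
```

Surfacing these numbers to teams is what keeps a self-service private cloud honest: users see the cost of what they provision, even when no money actually changes hands.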

How Open Infrastructure Supports Cloud Repatriation

As you plan your cloud exit strategy, pay special attention to building infrastructure based on open standards rather than proprietary technologies. Solutions based on open-source software and hardware give you the flexibility to fluidly reallocate resources without vendor restrictions or lock-in. Open infrastructures give you greater choice in sourcing components from multiple suppliers to find the optimal balance of performance and value.

The Open Compute Project (OCP) was set up by Facebook in 2011 with the goal of reimagining data centre hardware and overcoming the limitations of managing infrastructure at massive scale. The result was a new class of infrastructure that radically reduced total cost of ownership (TCO), with designs made publicly available, much like open-source software.

As active contributors to the OCP community, we provide performant OCP servers, storage, networking, and software-defined solutions. For example, see how we partnered with Timebeat to offer a new OCP-TAP Time Card, which provides highly accurate time data to meet regulatory requirements in financial trading.

If you’re evaluating open infrastructure solutions and want to see how our high-density, bare metal, hyperconverged platforms leverage OCP designs to deliver the freedom, control, and performance needed to make your cloud exit successful, get in touch and we can schedule a chat.
