
How to Apply Direct Liquid Cooling at Every Scale | Q&A with Lenovo’s Jim Roche

Release date: 18 December 2025

“How much denser can server racks get before we need to make fundamental changes in cooling?”

That was the question that inspired the latest conversation in our Vespertec Partner Insights series. This time, we sat down with Jim Roche, Senior HPC Technical Architect at Lenovo, to talk all things Direct Liquid Cooling (DLC). With the power draw of today’s racks climbing past 100kW, an increasing number of businesses are trialling liquid cooling as a way to rein in energy and power costs.

Jim walked us through everything DLC: how colos are shifting to power-based pricing, how waste heat is being reused in places like swimming pools and research labs, and how the University of Birmingham scaled from a testbed to a full DLC-only deployment.


To kick us off, could you tell us a bit about yourself: your background, how you arrived at Lenovo, and what you’ve been working on?

 

Jim Roche: I’ve been in high performance computing for a long time. As you can probably tell from the accent [check out the audio], I’m originally from the US, but I moved to the UK in 1990.

I started out at Silicon Graphics, doing graphics work, which gradually moved into system design and HPC – mostly with the university sector. I spent some time at Intel, learning how chip developers work, and then joined IBM in 2006.

As some people will know, Lenovo acquired IBM’s x86 business in 2014, and I’ve been at Lenovo ever since, focused on HPC and academic environments.

 

Let’s start with the basics. What do people get wrong about direct liquid cooling (DLC)?

 

Jim Roche: The most common one is that people think it’s complex. And yes, there are elements that are different from what you’d find in a traditional air-cooled data centre, but it’s not difficult. If you’ve got air conditioning, you already understand 80% of it. DLC is just a follow-on from that.

Of course, there’s always the worry about water and electricity not mixing well. And that’s true: you don’t want leaks. But plumbing isn’t new technology. Most people know how to do it safely and effectively. Once you remove the fear and demystify it, it’s quite straightforward.

Really, it comes down to exposure. Most people just haven’t worked with DLC before. But once they do – even briefly – they tend to get comfortable with it very quickly.

 

Lenovo’s direct liquid cooling (DLC) approach uses warm water, which some people might find surprising. Why warm water, and how does that work?

 

Jim Roche: If you think about traditional data centre cooling – or even the air conditioning in your house – it usually involves chilled water. You run air over cold water to produce cool air, which is what we’re used to for comfort.

But electronics don’t care about what feels cool to us. In fact, what they call “cold” might be 40°C. “Warm” could be 80°C. Many components don’t even start complaining until they hit 90°C.

So, if you’re trying to cool something that thinks 80°C is fine, you don’t need chilled water at all. Warm water (typically 40 to 45°C) is more than enough. And in electronic terms, that’s still cold.

When we say “warm,” we’re not talking about boiling or scalding temperatures. A cup of tea is made with 60–65°C water. That’s hot. We’re well below that. Warm water just makes sense once you understand how the hardware behaves.
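To put rough numbers on that, here’s a minimal sketch (not Lenovo’s sizing method) of why 40–45°C supply water still leaves plenty of headroom. It models steady-state chip temperature as coolant inlet temperature plus power times a thermal resistance; the power and resistance figures are illustrative assumptions, not vendor data.

def chip_temp_c(inlet_c, power_w, r_th_c_per_w):
    # Steady-state component temperature: coolant inlet temperature plus the
    # rise across the cold plate, modelled as power * thermal resistance.
    return inlet_c + power_w * r_th_c_per_w

CHIP_LIMIT_C = 90.0            # the "start complaining" point mentioned above
for inlet_c in (18.0, 45.0):   # chilled water vs. warm water supply
    t = chip_temp_c(inlet_c, power_w=700.0, r_th_c_per_w=0.05)
    status = "fine" if t < CHIP_LIMIT_C else "too hot"
    print(f"{inlet_c:4.1f} C inlet -> chip at ~{t:.0f} C ({status})")

With these assumed figures, a 45°C inlet still lands the component around 80°C – exactly the “that’s fine” territory Jim describes.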

 

Some people see direct liquid cooling (DLC) as something only suitable for giant exascale deployments. Others think it’s too advanced for their small testbed or cluster. How does Lenovo approach customers at both ends of the scale?

 

Jim Roche: At Lenovo, we’ve always believed in a building-block approach, whether you’re starting small or going big from day one. Some customers want to grow gradually. Others need to go large immediately. Our goal is to support both.

We design systems that scale naturally, from one server to a full rack, and from one rack to hundreds. We call it “From exascale to every scale.” It’s about picking components that can grow without requiring you to rip and replace the whole setup.

That’s helped us support a wide range of environments – from individual research clusters to national-scale supercomputers. We’ve made sure our systems can coexist with other infrastructure too, which matters because there are only a handful of customers globally building full exascale systems. But there are tens of thousands of organisations building incrementally.

 

Do you see customers treating direct liquid cooling (DLC) as a future-proofing step: testing it before they fully commit?

 

Jim Roche: Absolutely. A lot of organisations know they’ll need DLC eventually: they’re just not sure when. Components are getting hotter, GPUs are drawing more power, and customers still want to keep things dense. That’s forcing the issue.

Even today, about 20% of components on the market can only be liquid cooled. There’s no air-cooled option at all. So, people are starting small – trying a rack-scale deployment or using a warm-water testbed to see how it behaves in their environment.

Some people use in-rack units that cool with water but convert the heat back to air, so they can run without needing new facility infrastructure. It gives them a way to get hands-on with DLC before making bigger changes.

Looking ahead, I’d expect liquid cooling to be required for a lot of technologies by the late 2020s – maybe around 2028 or 2029. Whether we as manufacturers can keep hiding that requirement from customers is another story.

And then there’s the fan issue. Right now, fans are responsible for 20–25% of a server’s power draw. As servers hit 1kW, 2kW, or more, that adds up quickly. It ends up being like running a room full of toasters just to keep things cool. Power is as constrained as money now – so both need to be managed carefully.
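As a rough illustration of that fan overhead – with assumed figures for rack size, tariff and fan fraction, not measurements – here’s what 20–25% of server power adds up to over a year for a single rack of 1kW servers:

servers_per_rack = 36            # assumed rack configuration
server_power_w   = 1000          # 1 kW servers, as discussed above
fan_fraction     = 0.22          # fans at roughly 20-25% of server draw
hours_per_year   = 24 * 365
price_per_kwh    = 0.25          # assumed tariff, GBP

fan_kw_per_rack  = servers_per_rack * server_power_w * fan_fraction / 1000
fan_kwh_per_year = fan_kw_per_rack * hours_per_year
print(f"Fan draw per rack: ~{fan_kw_per_rack:.1f} kW")
print(f"Fan energy per rack per year: ~{fan_kwh_per_year:,.0f} kWh "
      f"(roughly GBP {fan_kwh_per_year * price_per_kwh:,.0f})")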

 

Do you have an example of a customer that scaled up from a smaller deployment to a larger one and any lessons in how they made it work?

 

Jim Roche: Our flagship site in the UK is the University of Birmingham. Back in 2016/17, they wanted to explore liquid cooling, so they bought a small testbed – just two servers in a single chassis – to see if it could be integrated into their data centre and whether there’d be any real benefits in power efficiency and density.

What happened was, demand grew quickly. As researchers came on board and said, “We need a server,” they added one at a time – tray by tray. And they found it was just as easy to add a water-cooled tray as an air-cooled one. They ended up filling two full racks before even formally scaling up.

At that point, they realised this was working well and made the decision to transition most of their research computing to water cooling. They built a dedicated DLC-only data centre with no air handling at all, which made huge savings in running costs.

Back then, it was a big decision: you had to plan for liquid cooling at the facility level. But now, with coolant distribution units (CDUs) coming in all shapes and sizes, there’s a clearer separation between what the facility provides and what the IT equipment needs. That makes it much easier to experiment or scale.

It was a bold move for the university, but I think they’d say it was absolutely the right one. Now they’re looking at going even bigger, and asking: do we extend what we’ve got? Build another? Move to a partner? When you’re pushing megawatts into a data centre, not having to chill your water – and not wasting half a megawatt on fans – is a big deal. That’s power that could be going into research.

 

When organisations are adopting direct liquid cooling (DLC), what are some of the practical challenges they should be planning for: things like facilities integration, maintenance, or internal skills?

 

Jim Roche: There are definitely a few. One of the big ones right now is colocation. A lot of organisations don’t want to run their own data centre, so they’re looking for colo providers that understand liquid cooling. That’s not always straightforward.

Colo pricing models are changing too. Many of them have stopped charging for space – they just charge for power. So, power density becomes the main planning constraint. It’s a simpler billing model, but it forces you to think carefully about architecture early on.

If you’re keeping things on-prem, then it becomes about educating your estates or facilities teams. These are people used to building infrastructure with 20–30 year lifespans. They need to understand how fast IT requirements are moving, and how to design environments that can evolve or upgrade every few years.

Back when Birmingham started with DLC, 60kW racks felt like a far-off target – maybe something for 2025 or 2026. But here we are, and we’re already talking about 220kW racks. That’s a huge jump in design assumptions. You need to plan for flexibility.
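To make that planning shift concrete, here’s a small sketch of what power-based billing means in practice: with a fixed power contract (the 1MW budget below is an assumption for illustration), rack density determines how much IT you can actually host.

facility_budget_kw = 1000            # assumed contracted power: 1 MW
for rack_kw in (30, 60, 120, 220):   # densities in the range discussed above
    racks = facility_budget_kw // rack_kw
    print(f"{rack_kw:3d} kW racks -> {racks:2d} racks, "
          f"{racks * rack_kw} kW of IT load in the same power contract")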

Another thing people worry about is the water itself. With warm water, there’s always the question of bacterial growth or water quality. At Lenovo, we offer a full water testing and integration regime as part of our service. That includes testing, CDUs, and any other elements. We don’t expect customers to manage that themselves unless they want to.

The truth is, everyone’s water-cooled setup is a little different. And while that may not be ideal for standardisation across the industry, it does allow for innovation. We deliberately avoid hazardous chemicals in our systems and keep the approach simple. That might mean there’s a bit more hands-on monitoring, but it’s manageable, and we allow customers to make that choice.

We don’t need our users to have a deep knowledge of liquid cooling, but we want to give them enough information to make the right decisions.

 

You touched earlier on adaptability and changing regulations. For anyone building out a testbed or scaling up infrastructure, there’s always the question: how future-proof is this? How does direct liquid cooling (DLC) help address that concern? 

 

Jim Roche: As you said, keeping pace with regulatory change is one of the reasons so many organisations are looking at hosted solutions. It’s a way to step back from some of that complexity and leave it to providers who deal with water cooling and facilities every day.

When you move to warm water cooling, your operating costs tend to drop significantly. You’re using less power to do the same amount of work. In many cases, those savings more than cover the cost of outsourcing infrastructure management – while still giving you a strong return on investment.
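A minimal sketch of that operating-cost argument, assuming a chilled-air facility at a PUE of around 1.5 and a warm-water DLC facility nearer 1.1 – both PUE values and the tariff are illustrative assumptions, not Lenovo data:

it_load_kw     = 500     # assumed IT load
hours_per_year = 24 * 365
price_per_kwh  = 0.25    # assumed tariff, GBP

for label, pue in (("chilled air", 1.5), ("warm-water DLC", 1.1)):
    # Total facility energy = IT energy * PUE (power usage effectiveness).
    total_kwh = it_load_kw * pue * hours_per_year
    print(f"{label:15s} PUE {pue}: {total_kwh:,.0f} kWh/year, "
          f"~GBP {total_kwh * price_per_kwh:,.0f}")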

 

We’ve talked about some of the technical and economic benefits. But ESG and carbon reporting are also growing priorities. How does direct liquid cooling (DLC) fit into that?

 

Jim Roche: It’s becoming critical for nearly every industry we talk to – especially in the public sector. ESG metrics and carbon usage reporting have become a baseline expectation. Many organisations are already implementing carbon offset strategies, or mandating the use of renewable energy sources. There’s real pressure to prove sustainability.

A lot of data centres are installing solar panels and pushing for green energy. But green energy isn’t unlimited, so energy efficiency still matters, and that’s where liquid cooling helps. But energy efficiency is only one part of the puzzle; another one is how you reuse heat. Warm water opens up several reuse options: heating buildings, warming swimming pools, or pre-heating for steam systems. There’s even ongoing research into converting heat energy from water back into electricity.
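For a sense of scale on heat reuse (under assumed figures, not measured ones): essentially all IT power ends up as heat, and a warm-water loop can capture most of it at a temperature useful for heating.

it_load_kw       = 1000   # assumed 1 MW DLC installation
capture_fraction = 0.9    # assumed share of heat recovered in the water loop
hours_per_year   = 24 * 365

reusable_kwh = it_load_kw * capture_fraction * hours_per_year
print(f"Recoverable heat: ~{reusable_kwh / 1e6:.1f} GWh of thermal energy per year, "
      f"a continuous ~{it_load_kw * capture_fraction:.0f} kW available for reuse")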

Government and public sector are leading the charge here. But we’re seeing growing interest across the board from organisations that want to show they’re taking carbon accountability seriously.

 

With the explosion of AI and the sudden acceleration in data centre construction, are we entering a new phase for direct liquid cooling (DLC) adoption?

 

Jim Roche: Absolutely. Just look at what’s happening in the US: the growth has been dramatic. Places like Silicon Valley and the Southwest are seeing huge demand, but they weren’t built with liquid cooling in mind. Now they’re having to think about what else they’re taking away from the ecosystem, not just how much power they can produce.

As data centres proliferate, we need greener, more responsible ways to build. AI isn’t going away. The demand for compute is only going to grow. So, we have to get this right – and DLC is part of that.

 

Looking ahead – say, five years from now, what role do you expect direct liquid cooling (DLC) to play?

 

Jim Roche: By 2030, I think the vast majority of research and AI installations will include direct liquid cooling in some form. That’ll be driven by three things: the technology will require it, ESG commitments will demand it, and – maybe most importantly – it’ll just be cheaper.

It’s already more economical to build a server with copper and DLC than to use sophisticated fans. Fans are expensive to buy and run. Once CapEx and OpEx both favour DLC, adoption becomes inevitable. We’re heading in that direction now.

Every new data centre build in the next few years will consider how to integrate DLC from day one. And in a way, we’re going back to where we started. In the 1960s, a lot of systems were water cooled too. So soon, I think we’ll look back and say – why did we ever stop?

 

What would you say to someone who’s currently air cooling their systems but considering direct liquid cooling (DLC) for the future?

 

Jim Roche: Start now. Learn the basics. Get comfortable with the concept. You don’t have to jump in with both feet: just test it, explore it, and look at who’s already done it. That lets you make decisions based on facts. There’s an entire community moving in this direction, so it’s a good time to get educated.

 

Listen to the full podcast episode.
