Data Centre World 2024 Review

Richard Burcher standing in front of the Airedale logo

Following DCW London 2024 in early March, Richard Burcher, Global Product Manager – Liquid Cooling, gives a quick round-up of the hot topics featured at the show.

Generative Artificial Intelligence (AI)

Unsurprisingly, the technology creating the most buzz at Data Centre World was generative AI, with many delegates looking at ways to capture the potential of AI, and generative AI in particular, across their organisations. Generative AI systems run on central processing units (CPUs) and graphics processing units (GPUs). The CPU is the ‘brain’ of any internet-connected device, executing the instructions a system needs to operate and setting the speed at which it runs. The GPU works in parallel with the CPU; it was initially developed to accelerate image processing, essential in gaming, but has since become more general-purpose and now handles a much broader range of parallel workloads, including AI. GPUs also work closely with neural processing units (NPUs) to deliver high-performance AI prediction tasks.

The market for AI technologies is vast: Statista projects it will be worth around $306 billion in 2024 and expects it to grow well beyond that, to over $1.8 trillion by 2030. The emergence of generative AI presents both a challenge and a significant opportunity for business leaders looking to steer their organisations and operations into the future. It is increasingly plausible that anything not connected to AI within the next three years will be considered obsolete or ineffective. Many delegates at the exhibition were looking to seize the opportunity and implement generative AI now, others were exploring pilot projects, and some remained non-committal.

Graphics Processing Units (GPUs) and Central Processing Units (CPUs)

Given the hype around AI at the exhibition, it is no surprise that GPUs and CPUs are seeing exponential growth as enablers of AI, Machine Learning (ML), and High-Performance Computing (HPC), with AI models relying on GPU-equipped servers to train faster.

The next generation of computing is rapidly moving from CPU to GPU. The CPU will remain, but the industry will need to adapt to the challenge of supporting both legacy, air-cooled environments and the up-and-coming CPU and GPU loads that are taking the world by storm.

From an infrastructure perspective, this growth means the industry is seeing an increase in heat generated at the chip level (TDP – Thermal Design Power), an increase in capacity, and a densification of rack kW loads. When we consider that IT refresh cycles are far quicker than the lifecycle of a physical data center facility, data centers need to rethink their architecture and build future-proofing into their designs. To that point, Airedale by Modine ran a poll on our Data Centre World exhibition stand, asking, ‘What percentage of data centers do you think will adopt hybrid cooling systems (a mix of air and liquid cooling) by 2030?’ The results were compelling: 42.7% of respondents believe 51-100% of data centers will adopt a mix of air and liquid cooling by 2030, and a further 33% think 20-50% will take a hybrid approach. Delivering on that requires liquid-ready infrastructure in forthcoming data center designs, connecting fluid networks into the mechanical network.
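To make the densification point concrete, here is a minimal, illustrative sketch in Python. The rack counts, densities, and the notional 20kW per-rack limit for air cooling are all assumptions made for the sake of the arithmetic, not figures from the show or from any Airedale project.

# Illustrative only: how the heat load of a hypothetical hybrid data hall
# might split between air and liquid cooling. All figures are assumptions.

AIR_COOLING_LIMIT_KW = 20  # assumed practical per-rack limit for air cooling

# (rack count, average IT load per rack in kW) for a hypothetical hall
rack_groups = [
    (200, 6),   # legacy racks at typical densities
    (40, 40),   # AI/HPC racks served by direct-to-chip liquid cooling
]

air_kw = sum(n * kw for n, kw in rack_groups if kw <= AIR_COOLING_LIMIT_KW)
liquid_kw = sum(n * kw for n, kw in rack_groups if kw > AIR_COOLING_LIMIT_KW)
total_kw = air_kw + liquid_kw

print(f"Air-cooled load:    {air_kw} kW ({air_kw / total_kw:.0%})")
print(f"Liquid-cooled load: {liquid_kw} kW ({liquid_kw / total_kw:.0%})")

Even in this invented example, a small number of high-density racks accounts for the majority of the heat load, which is the argument for building liquid-ready infrastructure into new designs.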

Liquid Cooling

As you would expect, given the above, liquid cooling was discussed at length, with delegates focusing on hybrid solutions. Operators were keen to understand how they could integrate liquid cooling into their existing data halls and facilities, whether that be direct-to-chip cooling, in single-phase or two-phase form (where the fluid changes state from liquid to vapour during the cooling process), or single-phase or two-phase immersion cooling. Stand visitors were looking to keep the proven air cooling systems that deliver excellent PUEs on their existing heat loads, while incorporating liquid cooling systems to serve particular high-density applications in high-performance computing areas. In discussions, the view was that direct-to-chip is easier to retrofit and install in existing facilities, targeting high TDPs at the chip.

In recent years, liquid cooling discussions have centred on proofs of concept (POCs); those requests remain, but there was a definite shift at this year’s DCW, with operators wanting to take hybrid cooling into large-scale applications, including colocation, hyperscale, and telecoms.

Discussions also focused on the management of fluid networks. Traditionally, primary fluid networks have run at lower temperatures, but the secondary fluid networks now being connected for liquid cooling run warmer. A range of options was discussed in this context, including traditional air cooling, air handling units, rear door heat exchangers (RDHx), cooling distribution units (CDUs), direct-to-chip cooling, and immersion cooling.

Standardisation of fluids

Regarding liquid cooling, discussions centred on fluid use and the pros and cons of single-phase and two-phase fluids. It was apparent that, as with liquid cooling more broadly, there is no one-size-fits-all answer; the selection is use-case dependent, driven by the compute and load requirements. Delegates were, however, looking for guidance and potential standardisation around fluids, so that legislation and quality standards are upheld across the board, the performance and efficiency of their IT assets are optimised, and material compatibility, safe handling of fluids, service, maintenance, and signal integrity are not compromised.

Pockets of discussion focused on the media scrutiny of two-phase fluids due to PFAS. It is important to note here that additional two-phase fluids are well into development, and that PFAS covers some ~70,000 chemicals, not all of which are harmful or will be banned via regulation. The industry needs to establish which fluids will actually face restrictions under regulation, rather than treating PFAS as one blanket term.

Project sizes

Moving beyond liquid cooling systems, there was discussion around data center project sizes. As an industry, we continue to witness significant growth in load throughout Europe, Africa, and Asia, and a shift into data center locations beyond the pure Tier 1 regions, as access to power, space requirements, and demand for guaranteed uptime all soar. Historically, a 5 to 10-megawatt (MW) facility was considered a ‘large’ data center, but as demand for digital services continues to grow, so too does capacity. At the show and in conversations globally, we continue to see projects of 100MW, 150MW, and even 200MW and beyond, with rack densities also increasing to support the rise in data processing and predictions required by AI, along with the Internet of Things (IoT) and data analytics for research purposes. During the Uptime Institute (UI) fireside chat entitled “The Transitional State of Direct Liquid Cooling”, the UI stated that, according to its surveys, typical average rack densities still stand at 4-6kW to 8-10kW. However, it is beginning to see growth in outliers, and estimated that 10% of its survey respondents are using some form of liquid cooling for some of their densest racks, typically for HPC.

Sustainability 

Sustainability continued to be a key theme at Data Centre World 2024, as metrics for measuring environmental impact become priorities at tender. The industry is well aware of its responsibility to lessen its carbon footprint and continues to seek technologies that mitigate any adverse effect it may have on the environment in which it operates. One further catalyst is the forthcoming European Energy Efficiency Directive, which requires mandatory reporting of energy efficiency figures from May 2024 for data centers with an IT power demand of at least 500kW. The Directive aims to reduce energy use in Europe by 11.7 per cent by 2030, helping to meet the EU Green Deal goal of a 55 per cent cut in carbon emissions by 2030.

Heat Reclaim

Heat reclaim was discussed as a growing trend in data centers and other industrial applications, as a way to make use of waste heat and potentially give back to the areas in which these facilities reside. It was encouraging to discuss multiple district heating projects around London benefitting from data center waste heat. Whether it requires a top-up via heat pump (HP) chillers or industrial-sized heat exchangers (HX), the heat was considered usable, even at lower-grade temperatures. Liquid cooling also provides an opportunity for heat reclaim, but the goals must be outlined at the start of any project to ascertain whether the primary aim is performance at the chip, as it likely will be given the cost of next-generation GPUs (~$40,000+), or whether the main driver is heat reclaim and running fluids at higher temperatures.
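As a rough illustration of why a heat pump top-up still makes lower-grade heat usable, here is a back-of-the-envelope sketch in Python. The IT load, capture fraction, and heat pump coefficient of performance (COP) are all assumed values, not figures from any project discussed at the show.

# Illustrative only: rough heat-reclaim sizing for a hypothetical liquid-cooled
# deployment feeding a district heating network via a heat pump top-up.
# All figures below are assumptions made for the sake of the arithmetic.

it_load_kw = 1000          # assumed IT load served by direct-to-chip cooling
capture_fraction = 0.7     # assumed share of IT heat captured in the fluid loop
heat_pump_cop = 3.0        # assumed heat pump coefficient of performance

captured_kw = it_load_kw * capture_fraction
# A heat pump delivers the captured heat plus its own compressor work:
# Q_delivered = Q_captured * COP / (COP - 1)
delivered_kw = captured_kw * heat_pump_cop / (heat_pump_cop - 1)
compressor_kw = delivered_kw / heat_pump_cop

print(f"Heat captured from IT load: {captured_kw:.0f} kW")
print(f"Heat delivered to network:  {delivered_kw:.0f} kW")
print(f"Heat pump electrical input: {compressor_kw:.0f} kW")

In this invented case, roughly 350kW of heat pump electricity turns 700kW of captured IT heat into around 1,050kW delivered to the heat network; the exact numbers depend entirely on the assumptions above.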

Environmental product declarations (EPDs)

Demand for EPDs is rising, and consultants in particular were keen to gather complete life cycle data on a cooling product’s environmental impact. An EPD acts rather like a calorie counter for a product’s carbon footprint across the emissions scopes. There was insightful discussion about the validity, availability, and importance of EPDs.

Controls and system optimisation 

As part of the sustainability conversation, controls and system optimisation featured heavily, with delegates looking for ways to integrate systems at both the facility level and the individual system level to maximise efficiency and operational performance.

Latency in Edge Data Centers

Latency came up in numerous discussions, specifically in relation to edge computing and edge data centers, which, for obvious reasons, are keen to minimise latency as use cases at the edge intensify and demand near-instantaneous data access. One provider explained that they were targeting latency of no more than 0.1 milliseconds for some use cases, and that, when outsourcing compute requirements to cloud service providers (CSPs), figures of 0.6 milliseconds were not uncommon and were unfit for those use cases.
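To put those figures in context, here is a small back-of-the-envelope sketch. It assumes signals travel at roughly two-thirds of the speed of light in optical fibre, treats the quoted figures as round-trip budgets, and ignores switching, queuing, and processing delays; these are assumptions, not details given by the provider.

# Illustrative only: how a round-trip latency budget limits fibre distance.
# Assumes ~2/3 of the speed of light in fibre and ignores switching,
# queuing and processing delays, which would shrink the distance further.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # approx. two-thirds of c

def max_one_way_distance_km(round_trip_budget_ms: float) -> float:
    """Upper bound on one-way fibre distance for a round-trip latency budget."""
    one_way_ms = round_trip_budget_ms / 2
    return one_way_ms * SPEED_IN_FIBRE_KM_PER_MS

for budget_ms in (0.1, 0.6):
    distance = max_one_way_distance_km(budget_ms)
    print(f"{budget_ms} ms budget -> compute within ~{distance:.0f} km")

Under those assumptions, a 0.1 millisecond budget keeps the compute within roughly 10km of the workload even before any processing time, which is why placement at the edge matters for these use cases.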

Conclusion

There were no large-scale surprises in the key topics at DCW 2024. Sustainability continues to take a front seat, with emerging cooling technologies such as liquid cooling shifting from concept to reality in the face of generative AI, HPC, higher density, and higher capacity data halls.

The one thing we witnessed that makes all of the above a reality is collaboration and partnership. There was real energy from solution providers, consultants, and end users to come together, solve problems, and make things happen for the good of the industry. As one person framed it, this is potentially a shift in humanity, not just technology; as next-generation HPC and AI arrive rapidly, collaboration is likely the only way to meet the challenge.

At Airedale by Modine, we often talk about how our strength is our people, who bring the energy, share the knowledge and innovate to develop new technology that keeps us moving forward. As a complete solution provider, we are keen to get involved in projects early on so we can assist with design and implementation.