Richard Burcher, Product Manager specialising in Liquid Cooling, shares his insights on the prevailing trends, challenges and innovations in data centres in 2023, and how they are poised to shape the data centre industry throughout 2024.

Trends, challenges and innovations: Artificial Intelligence (AI)

Unsurprisingly, one of the most significant trends throughout 2023 and a driver for additional capacity and the continuation of the data centre boom is Artificial Intelligence (AI) and Generative AI. AI, as a theme, has dominated many headlines throughout 2023 and will continue to do so throughout 2024.

According to a recently published report from the Dell’Oro Group, worldwide data centre capex will see an 18% CAGR over the next five years, with AI as the catalyst. Baron Fung, Senior Research Director at the Dell’Oro Group, stated, “Accelerated computing optimised for domain-specific workloads such as AI is forecast to exceed $200 billion by 2028, with the majority of the investments deployed by the hyperscale cloud service providers”.

One previously reported barrier to AI adoption has been the fear that AI will negatively impact children’s education, and the question of how children can be stopped from copying and pasting AI output without forming their own opinions on the content. In reality, educational institutions view AI as an opportunity rather than a threat. Some of the top universities in the UK are looking to adapt teaching and assessment to incorporate the ethical use of Generative AI, ensuring that academic integrity is upheld. The Russell Group, a consortium of some of the most selective universities in the UK, has published a set of principles to help capitalise on the opportunities of AI.

The reality is that AI is here to stay, and there is a need to teach how to use it effectively and responsibly, reimagining teaching for the next generation with suitable parameters so that students understand AI and adopt it correctly.

In the coming year, there will likely be an increase in AI activity, as the money invested in AI start-ups converts to viable use cases and adoption by large enterprises. In 2023, we saw AI adoption centred on large language models, imagery and chatbots. Throughout 2024, predictions suggest we will begin to see the first wave of Generative AI producing video from keywords – complex, innovative and compute-intensive, but ground-breaking for advertising, education, and even mainstream film and TV production.

In 2024, we will also likely see AI chatbots become more normalised and increasingly personalised. As they are adopted across the service industry, consumers will begin to see and use greater customisation, with live translation into multiple languages and more humanisation in the experience driving adoption. The robotic voices of returned outputs will become more human, blurring the line between speaking to a human and speaking to a robot.

As AI adoption grows, regulation and legislation will follow, seeking to balance innovation with measurement, privacy protection and fraud detection, and Europe and the UK will want to be central to that as governance leaders. Indeed, recent ‘deepfake’ incidents in the US have seen the head of Microsoft and the White House call for immediate action to tackle ‘deepfakes’, including a push towards federal law.

To recap, 2023 was an exciting year in which AI began to capture the imagination of the consumer and quickly drove investors to think about the infrastructure needed to support mass utilisation of AI tools and machine learning language models, all of which require increased processing capacity and next-generation graphics processing units (GPUs) designed for parallel processing.

Again, 2024 will be the year enterprises adapt and adopt specific AI use cases. Sam Altman, the American entrepreneur and CEO of OpenAI, recently stated at Davos that the AIverse would be bigger than just a technical revolution and that there was no “magic red button” to stop AI. Mark Zuckerberg has recently decided to merge two of Meta’s leading AI research groups (FAIR and GenAI), stating that it “becomes clear that the next generation of services will require building full general intelligence.” That newly declared focus will involve 350,000 Nvidia H100 GPUs by the end of 2024, or almost 600,000 H100 equivalents of compute if other GPUs are included.

This high-density, AI-driven computing has a downstream effect, impacting power and power availability and creating a need for more advanced, innovative cooling, driving towards sustainable infrastructure. The data centre market has seen significant growth via hyperscale and large colocation. As a result, power demand is outpacing the grid’s ability to service it. The industry is seeing supply shortages and sizeable delays in connection lead times and access to capacity. Demand, accelerated by AI, is pressuring the power supply chain, creating a mismatch between supply and demand, while the acceleration in data-intensive technologies requiring lower latency means computing consumption is needed closer to the source.

Power availability is critical for data centres and gives rise to on-site generation and microgrids. The industry is seeing workloads shift and intensify; in Europe, there is growth in traditional areas and regions, such as the well-documented FLAP-D markets (Frankfurt, London, Amsterdam, Paris, Dublin), dependent upon access to power and infrastructure materialising. The industry is also witnessing a rise in Edge-based, local and regional data centres, with strategic data highway routes, such as Amsterdam and Frankfurt, growing at pace. High development rates remain with hyperscale providers, who continue to exhibit platform-based expansion and are looking at Edge-based data centre rollouts.

FLAP-D will always be integral to Europe. However, due to power constraints and extending data flows via high-speed cable connectivity, continued growth can be seen in additional markets beyond the traditional hub locations. New markets are being evaluated against the traditional Tier I hubs; there is expansion in Milan, Madrid, Berlin, Warsaw, and Lisbon. Madrid, Milan, and the German regions have seen high growth, predominantly via hyperscale. The Middle East also continues to ramp up at pace. Power and physical space remain constraints, leading to additional growth in other areas. Scrutiny of regulations and directives is also becoming a factor in location decisions – e.g. the German Energy Efficiency Act and the EU Energy Efficiency Directive (EED), under which the majority of power is required to come from renewables and there are requirements for heat reclaim. Such regulation may lead to data centres in areas with access to renewable energy, for instance the Nordics. Key hubs for latency-sensitive and potential AI workloads can be located in more disparate locations, supported by their ambient climate profiles, renewable capability and access to space.

Trends, challenges and innovations: Securing Power

Access to power and MW capacity remains the number one issue in the data centre sector. Accessibility to power in a reasonable timeframe is imperative and challenging, with the longest interconnection lead times stretching out for years – speed to power is a critical driver. This may lead to requirements for on-site power generation. Hyperscalers and other providers are looking at bridging solutions, such as fuel cells, to supplement power requirements. Hyperscalers require predictable times to power, and that predictability is less stable than ever; therefore, bridging the time to interconnection with on-site fuel cells is a growing trend.

Sustainable infrastructure will likely be more distributed moving forward, and in the grid of the future, the industry could see microgrids deployed at scale. Speed to market remains one of hyperscale’s top priorities. Traditional grids are heavily regulated and have grown at a steady, measured rate, well behind the aggressive CAGR witnessed in the data centre sector. The grid therefore cannot catch up at the same rate with the transition to electrification and digitalisation, and that transition will have to be creative to meet the demand.

Trends, challenges and innovations: ESG & Sustainability

At a macro level, industries, including the data centre sector, have been looking at ways to invest in clean energy, smart energy solutions and renewables, and to steer all elements of design, installation and operation towards a more sustainable purpose. The world is arguably far behind the level of decarbonisation needed to hold the rise in global temperatures to 1.5˚C.

ESG (Environmental, Social and Governance) KPIs and the associated reporting are increasingly required for investment and regulatory purposes, and there are encouraging signs from the EU. The industry requires a combination of regulation and incentives to make early investments in sustainably led initiatives, sometimes before the ROI is agreed.

Regarding power, there is a proposed transition to renewables over time, alongside nuclear and a transitional path for natural gas. Hyperscale can lead the way, with access to capital and proof of concept (POC)/pilot projects and deployments which, in turn, will lead to rollout at scale. Hydrogen is an additional transition fuel that requires investment in infrastructure and balance. Sustainability can be a driver of data centre location, and regulation can drive change, e.g. German sustainability targets (EED), in turn influencing operator locations and use cases, including the potential for heat reuse. For instance, high densities create the opportunity to capture high-grade heat and repurpose it for additional use cases.

It is important to note that in 2024, there will be increased geopolitical risks and opportunities. There is the potential for over sixty government elections worldwide in 2024, which can significantly affect forward-looking investments and strategies. For example, we could see a refocus on basic measures and services, such as transport, medicine, education, and healthcare, at the cost of some unproven, innovative R&D start-ups, including those looking at sustainability and associated innovation. The hope would be that governments see the long game and the continued need for R&D-led innovation.   

As a business, Airedale continues to drive more sustainable practices through its operations and the systems it provides to the marketplace. Airedale continues to measure and drive down Scope 1, 2 and 3 emissions and works hand in glove with its customers to arrive at the best-fit system solution, maximised for operational capability and efficiency. Airedale prides itself on a complete system approach through the thermal chain, using intuitive controls at product, system and site level and the creative use of smart insights and data to manage scale and operations. Managing control of the system right through the chain is integral: it allows us to tie the system together, control, monitor, maximise and adapt – optimising performance and efficiency and helping us to deliver our mission to engineer a cleaner, healthier world.

Trends, challenges and innovations: High-Density Computing

Chips and next-generation GPUs are forcing changes. All elements of the thermal management system in the data centre are connected and need to be viewed as a system. Airedale can collect the heat produced at the chip within the server and reject or repurpose that heat right the way through to the outdoors.

The message here is one of high power density: increasing thermal loads, or Thermal Design Power (TDP), at the chip are driving up rack densities and placing strain on heat dissipation within the server.

The world is transitioning with technology and digitalisation, and we are seeing accelerated traffic growth. There is an ongoing transition from general-purpose, traditional computing, which uses Central Processing Unit (CPU) based servers for workloads with moderate demands such as communications, data storage and processing, towards accelerated, high-density computing and Generative AI: developing AI language models, Machine Learning and Big Data Analytics.

These high-density computing applications bring greater complexity, workload and strain. They require accelerated computation using Graphics Processing Units (GPUs), which can draw 500 watts or more per chip, with an onward trajectory towards 1,000 watts (1 kW) generated at the chip, at surface temperatures in the range of 80˚C – 90˚C (176˚F – 194˚F), with the majority of these GPU shipments going to hyperscale and Cloud Service Providers (CSPs).
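
To put that trajectory into rack-level terms, the short sketch below multiplies a per-chip TDP by an assumed server and rack configuration. The GPU counts per server and rack and the non-GPU overhead factor are illustrative assumptions, not Airedale or vendor figures.

```python
# Illustrative only: how per-chip TDP feeds rack-level thermal load.
# gpus_per_server, servers_per_rack and the non-GPU overhead factor are
# hypothetical assumptions chosen for illustration.

def rack_thermal_load_kw(gpu_tdp_w: float,
                         gpus_per_server: int = 8,
                         servers_per_rack: int = 4,
                         other_it_overhead: float = 0.30) -> float:
    """Estimate rack heat load in kW: GPU heat plus a share for CPUs, memory, PSUs and fans."""
    gpu_heat_w = gpu_tdp_w * gpus_per_server * servers_per_rack
    return gpu_heat_w * (1 + other_it_overhead) / 1000.0

for tdp_w in (500, 700, 1000):  # today's parts through to the 1 kW trajectory
    print(f"{tdp_w} W chips -> ~{rack_thermal_load_kw(tdp_w):.0f} kW per rack")
```

Even under these modest assumptions, rack heat loads climb from roughly 20 kW towards 40 kW and beyond as chips approach 1 kW, illustrating the strain on heat dissipation described above.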

According to Omdia, Nvidia shipped nearly half a million H100 and A100 GPUs in 3Q23 and was expected to cross the half-a-million mark in 4Q23. If you review the scale being ordered by the top players, at approximately 700 watts of TDP per chip, you can estimate a thermal load of around 105 MW per player. That figure does not consider additional heat-generating IT components. So, as these mission-critical data centre applications and use cases become more central, the need for more advanced, innovative cooling and power becomes more integral than ever.
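
As a rough cross-check, that estimate is simply GPU count multiplied by TDP. Note that the 150,000-unit count below is inferred by working the quoted 105 MW back through the 700 W figure; it is not a number taken from the Omdia report.

```python
# Minimal cross-check of the per-player figure quoted above.
# gpus_per_player is inferred from 105 MW / 700 W, not an Omdia statistic.

gpus_per_player = 150_000   # implied by ~105 MW at ~700 W per chip
tdp_w = 700                 # approximate TDP of an H100-class GPU

thermal_load_mw = gpus_per_player * tdp_w / 1e6
print(f"~{thermal_load_mw:.0f} MW of GPU heat per player")  # ~105 MW, excluding other IT load
```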

With this increased density, you must be able to take the heat away from the chip and reject it right the way through to the external heat rejection outside the facility, and this is where liquid cooling begins to play. Airedale sees from the market that systems can and will be hybrid. When looking at AI, there is often a perceived idealism that facilities will be either 100% air-cooled or 100% liquid-cooled. In practice, Airedale is seeing a hybrid approach and a mix of compute density in the data centre; customers are looking at facilities with a mix of purposes, combining traditional computing with higher-density computing and dedicated High-Performance Computing (HPC) halls or zones, where air cooling exists alongside liquid cooling.

In some instances, there is a mixed methodology of hybrid air- and liquid-cooled systems. For example, with a direct-to-chip cold plate system in a rack, you will likely dissipate ~70–75% of the rack’s heat to liquid at the chip, meaning there is an additional ~25–30% to reject into the air from other IT components, which needs to be managed via CRAC/CRAH/OnRak (Rear Door Heat Exchanger) cooling systems.
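
A minimal sketch of that split, assuming a hypothetical 40 kW rack and a 72% liquid capture ratio (both illustrative values chosen to sit within the ranges above):

```python
# Hedged illustration of the direct-to-chip heat split described above.
# The 40 kW rack load and 0.72 capture ratio are assumptions for illustration only.

def split_rack_heat(rack_load_kw: float, liquid_capture: float = 0.72):
    """Return (heat to the liquid loop via cold plates, residual heat to air) in kW."""
    to_liquid = rack_load_kw * liquid_capture   # ~70-75% captured at the chip
    to_air = rack_load_kw - to_liquid           # ~25-30% left for CRAC/CRAH/rear door units
    return to_liquid, to_air

liquid_kw, air_kw = split_rack_heat(40.0)
print(f"Liquid loop: ~{liquid_kw:.1f} kW, air side: ~{air_kw:.1f} kW")  # ~28.8 kW vs ~11.2 kW
```

The point is simply that even a liquid-cooled rack leaves a meaningful air-side load that still needs conventional cooling.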

It is essential to note that there is no one-size-fits-all approach, and solutions are application- and customer-dependent. Airedale is making great strides in this area, taking an agile and agnostic approach to the solution adopted in the server, whether that be Direct-to-Chip, 1-phase Immersion or 2-phase Immersion, and providing liquid-enabled, air-cooled and hybrid systems. That approach enables the creation of a best-fit solution, working with customers and tracking the entire system from the internal chip requirement to external heat rejection.