The Challenges and Opportunities in Data Centre Cooling

When will the allowable actually be allowed?

Airedale International recently exhibited at Data Centres Ireland, during which our Data Centre expert Matt Evans took part in a panel discussion on the future of data centre cooling. Here he shares his thoughts on the continued industry confusion and resistance around the ASHRAE server inlet air temperature guidelines.

Maybe it was the stimulating discussion. Maybe it was the Guinness. Regardless, I’ve had an epiphany (on a plane back from Ireland) and I need to share it.

The biggest challenge and the biggest opportunity in the Data Centre industry are in fact – wait for it – the same thing. The ASHRAE TC9.9 Thermal Guidelines.

Shock. Horror. I know that’s somewhat controversial, but hear me out.

Rewind. Let me set the scene:

A few days ago I was lucky enough to participate in a panel discussion at Data Centres Ireland, where conversation was primarily focused on the challenges and opportunities in cooling data centres. As anyone who knows me will likely guess, I threw a few curveballs out there and made some strong statements; however, it wasn’t until I was sitting on the flight home that I had this almighty epiphany.

Unpacking said epiphany

Ok, now that it’s out in the open and I’ve explained why I was on a plane back from Ireland, let’s delve a little deeper. We all know that cooling is one of the biggest aspects of data centre design, and the choices made at this stage impact the other big aspect: power. Primary power requirements, backup power requirements, power distribution (and almost everything else) revolve almost entirely around the cooling infrastructure you specify; heck, sometimes even the battery technology you select depends on it.

So, if cooling design drives power design, what’s driving cooling design? Well, that’d be supply air temperature.

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) publishes guidelines for the temperature and humidity operating ranges of IT equipment. The ASHRAE TC9.9 guidelines cover server inlet air temperatures, not air conditioning temperatures. They define both a narrower “recommended” envelope and wider “allowable” ranges; the allowable ranges give IT equipment manufacturers and data centre designers a simple way to define product specification limits.

TC9.9 has done an incredible job in providing standardised and structured information within our industry so that we are all harmoniously singing from the same proverbial hymn sheet. But, and it’s a big but, in opening up this information to the masses, ASHRAE has also opened up a window of interpretation, and interpretation in such a context is really just a posh word for misunderstanding.
Here’s an example. When we look at typical co-lo supply air temperature SLAs, what do we see? 24°C? It’s always around that level, and it’s near criminal. Why? Because that temperature band sits within the “recommended” TC9.9 zone.
Recommended is a subjective word, and it couldn’t be truer in this instance: it’s a recommendation made by a group of people, ultimately driven by consensus. The “allowable” ratings, on the other hand, are factual and based on manufacturers’ information, and perhaps surprisingly they are far higher.
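
To put some indicative numbers on that distinction, here is a minimal sketch of how the commonly quoted TC9.9 dry-bulb envelopes compare. The figures below are the widely published class limits rather than values lifted from a specific edition of the guidelines, so treat them as illustrative and always check the current TC9.9 publication and your hardware datasheets before designing to them.

```python
# Indicative ASHRAE TC9.9 dry-bulb envelopes (°C) for air-cooled IT equipment.
# These are the commonly quoted class limits, shown for illustration only;
# confirm against the current edition of the guidelines before relying on them.
RECOMMENDED = (18.0, 27.0)      # recommended envelope, classes A1-A4
ALLOWABLE = {
    "A1": (15.0, 32.0),
    "A2": (10.0, 35.0),
    "A3": (5.0, 40.0),
    "A4": (5.0, 45.0),
}

def classify_supply_temp(temp_c: float) -> str:
    """Report where a proposed server inlet air temperature sits."""
    lo, hi = RECOMMENDED
    if lo <= temp_c <= hi:
        return f"{temp_c}°C sits inside the 'recommended' envelope ({lo}-{hi}°C)"
    classes = [cls for cls, (a, b) in ALLOWABLE.items() if a <= temp_c <= b]
    if classes:
        return f"{temp_c}°C is outside 'recommended' but allowable for {', '.join(classes)}"
    return f"{temp_c}°C is outside every allowable class envelope"

print(classify_supply_temp(24.0))   # the typical SLA figure discussed above
print(classify_supply_temp(32.0))   # still allowable for every class, A1 through A4
```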

Answer me this, if you will:

The ASHRAE A ratings cover different types of IT equipment, classified as A1-A4. Most current data centre equipment can operate across all of these classes (with some time-bound limitations on excursions into A3/A4 conditions), and a growing number of server manufacturers are introducing equipment suited to continuous operation in class A3 or A4. The problem is that we are still designing data centres around A1 guidelines. Studies have shown that properly designed servers do not experience statistically higher failure rates when operating at higher temperatures, so reliability is not impaired; in fact, running at dynamic set-points that float with ambient temperature typically improves this server “X” factor! If we were to realign the industry and drive SLAs up to A2 levels, which, no exaggeration, would be perfectly fine for every single piece of hardware manufactured since Alan Sugar tried to sell video phones, would we still be reading articles such as this one regarding a proposed Google data centre in Luxembourg?
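
As a back-of-the-envelope illustration of why those class limits matter for the cooling plant, consider how much more ambient headroom a higher supply air set-point buys before compressors have to run. The 10 K approach between outdoor ambient and server inlet air assumed below is a round illustrative figure, not a project value; real systems vary widely with coil, heat exchanger and containment design.

```python
# Rough sketch: ambient headroom for compressor-free cooling at different
# supply air set-points. The 10 K approach temperature is an assumption chosen
# purely for illustration; it is not a design figure.
APPROACH_K = 10.0

def max_ambient_for_free_cooling(supply_setpoint_c: float) -> float:
    """Highest outdoor dry-bulb (°C) at which the set-point can still be met
    without mechanical cooling, under the assumed approach temperature."""
    return supply_setpoint_c - APPROACH_K

scenarios = [
    ("typical 24°C SLA", 24.0),
    ("top of the recommended envelope (27°C)", 27.0),
    ("top of the A2 allowable envelope (35°C)", 35.0),
]
for label, setpoint in scenarios:
    limit = max_ambient_for_free_cooling(setpoint)
    print(f"{label}: free cooling available up to roughly {limit:.0f}°C ambient")
```

The absolute numbers matter less than the trend: every degree added to the supply air set-point is a little more of the year in which the compressors can stay off.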

The resource consumption (both energy and water) of data centres is under the microscope more than ever before, and the colossal energy and water use of this particular project could potentially be avoided through a more intelligent approach to temperature management, which is surprising given that Google is known for running its data centres hot. Adiabatic cooling in particular could certainly be avoided, negating the need for such large amounts of water.

My final thoughts on the matter (for now!)

We’ve been delivering PUE levels of 1.1x in large co-lo facilities for a while now, and while that has become the new standard, it is getting harder and harder to drive further improvements. The biggest opportunity, however, is staring us all in the face. We’re hung up on recommendations based on opinions based on outdated notions. We need to break free of the shackles of “recommended” and start listening to what the manufacturers are telling us.
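
For anyone less familiar with the metric, PUE is simply total facility energy divided by IT equipment energy, so a PUE of 1.10 means that for every kilowatt delivered to the servers, roughly another 100 W goes on cooling, distribution losses and everything else. A quick sketch, with a purely hypothetical 10 MW IT load:

```python
# PUE = total facility energy / IT equipment energy.
# The 10 MW IT load and 1 MW overhead below are hypothetical, for illustration only.
def pue(it_kw: float, overhead_kw: float) -> float:
    """Power Usage Effectiveness for a given IT load and facility overhead."""
    return (it_kw + overhead_kw) / it_kw

it_load_kw = 10_000.0     # 10 MW of IT load (hypothetical)
overhead_kw = 1_000.0     # cooling, UPS and distribution losses, lighting, etc.
print(f"PUE = {pue(it_load_kw, overhead_kw):.2f}")   # prints: PUE = 1.10
```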

The limiting factor is the same as it’s always been, and it’s not technology: it’s us. I genuinely feel for the co-lo operators who are being forced to comply with irrelevant standards pushed upon them by customers who should be more concerned about the sustainability of not only their business, but the industry as a whole. If this changed overnight, we could reduce the DC industry’s global carbon emissions by over 20% without breaking a sweat… now where is Greta’s number?

 

Author:
Matthew Evans
Technical Account Manager, Airedale International
Connect on LinkedIn