The era of space-based data center capacity planning is over

It has been suggested, in this post and recently elsewhere, that data center cooling systems in their current configuration cannot really adapt to the changing demands of more modern, workload-oriented computing environments. The danger of taking a workload-driven approach in a facility with an older cooling system, the argument continues, is that workloads end up being spread over ever larger floor areas. This can eventually lead to stranded capacity, as the inability to scale up forces operators to compensate by scaling out instead.

The solution offered by one vendor is an adaptive cooling system that adds cooling capacity as growing workloads place more demands on the compute systems. It would rely to a large extent on strategies for removing heat from the air by more natural means, integrated into existing installations. But is such a solution real, and is it realistic?

Data Center Knowledge put the question to four world-class experts. Their responses appear below, lightly edited for clarity.

Chris Brown, Technical Director, Uptime Institute

It is true that any data center, even one that supports HPC, is designed around an average number of watts per unit of area (square foot or square meter, depending on the region). This does two things. First, it defines the total cooling load expected at full build-out; there must be a set cap, otherwise how do you know how much cooling to put in place? Second, it defines the cooling strategy.
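As a rough illustration of how an area-based design density caps the cooling plant, the sketch below works through the arithmetic. The floor area, design density, and safety margin are assumptions chosen for the example, not figures from the article.

```python
# Illustrative sketch: a watts-per-area design density fixes the total
# cooling load expected at full build-out. All values are assumptions.

WHITE_SPACE_SQFT = 20_000        # assumed data hall area
DESIGN_DENSITY_W_PER_SQFT = 150  # assumed average design density

total_it_load_kw = WHITE_SPACE_SQFT * DESIGN_DENSITY_W_PER_SQFT / 1_000

# Cooling is typically sized to the full IT load plus a margin;
# the 10% margin here is an assumption, not a standard.
cooling_capacity_kw = total_it_load_kw * 1.1

print(f"IT load at full build-out: {total_it_load_kw:,.0f} kW")
print(f"Cooling plant sized for:  {cooling_capacity_kw:,.0f} kW")
```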

As density increases, there is a breaking point where air-based cooling becomes not impossible but impractical. You can keep installing more fans, but at some point they will move so much air and be so loud that it will sound like standing on an airport tarmac with jets passing by. Not very practical. If the airflow is high enough, the pressure becomes a problem just opening and closing doors. So, depending on the density, the design will entail different solutions. Some data centers have ducted the exhaust air from 20 kW and 30 kW racks directly to the cooling units to solve some of these problems and increase cooling efficiency. Others opt for water-based cooling and use rear-door heat exchangers, direct liquid cooling of the equipment, and even immersion cooling at very high densities.

Now, from a practical operational standpoint, I have never come across a data center designed for 200 watts per square foot and then populated uniformly at that density. In other words, operators will have racks with higher densities and racks with lower densities, as long as they don't cross the density threshold where a different cooling solution is needed. They are always limited by the installed power and cooling capacity. If they start to exceed that limit, they will install additional power and cooling infrastructure to increase the available capacity. At some point, real estate becomes the problem: adding fans and cooling capacity means more space, and if there is no more space, there can be no capacity expansion.

In summary, data center design has been and will continue to be a balance between space, power, and cooling. Different approaches to power and cooling infrastructure are used to increase capacity within a small footprint, but any addition will always require space, whether horizontal or vertical. We agree that any data center design should anticipate future capacity increases to support demand as density rises (we are not making more space on Earth, so density will have to increase), but there is no way to decouple the three elements of space, power, and cooling; they will always be linked. Design choices can, however, maximize capacity within any given footprint.

Steve Madara, Vice President Thermal, Data Centers, Vertiv

Ultimately, rack densities are increasing. For a data room designed for, say, 6 MW, the rack density (and therefore the number of racks in the space) determines the total square footage of the room. If rack density increases, you need less square footage. The challenge today is that if you build assuming too high a rack density, you may run out of rack positions before you reach design capacity. Conversely, if you build for the right rack density today, you might not end up using the entire square footage of the data room. Whether or not you run short of cooling capacity depends on how the room has been laid out. As density increases, you end up with unused floor space.
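The trade-off Madara describes can be made concrete with a quick sketch: for a fixed 6 MW design capacity, higher rack density means fewer racks and less floor space. The rack footprint and the density values below are assumptions for illustration only.

```python
# Hedged sketch: floor space needed for a fixed 6 MW data room at
# different rack densities. Rack footprint (incl. aisles) is assumed.

DESIGN_CAPACITY_KW = 6_000
SQFT_PER_RACK = 25  # assumed footprint per rack including aisle space

for density_kw_per_rack in (6, 12, 24, 48):
    racks = DESIGN_CAPACITY_KW / density_kw_per_rack
    floor_sqft = racks * SQFT_PER_RACK
    print(f"{density_kw_per_rack:>2} kW/rack -> {racks:>5.0f} racks, "
          f"~{floor_sqft:>7,.0f} sq ft of white space")
```

If the room was laid out for the low-density case and densities rise, most of that floor space ends up unused; build for the high-density case too early and you may fill every rack position before reaching 6 MW.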

To provision higher-density racks, you typically need a cooling unit that delivers more kW of cooling capacity per linear foot of wall space. Are there solutions today with more kW per linear foot of wall? Yes. In non-raised-floor applications, we see thermal wall / thermal grid designs that pack more coil area per linear foot of wall because the units get taller. For raised-floor applications, the added capacity tends to come from larger units that sit deeper in a mechanical gallery. Is modular expansion a solution? Not necessarily. It really depends on whether the additional electrical and mechanical systems can handle it. If you integrate it all from day one, you underutilize the infrastructure until you get there.
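The "kW per linear foot of wall" metric can be sanity-checked with simple arithmetic, as in the sketch below. The room load, perimeter length, and unit dimensions are assumed values, not data from Vertiv.

```python
# Rough sketch of the kW-per-linear-foot-of-wall sizing metric.
# All figures are assumptions for illustration.

required_cooling_kw = 6_000  # assumed cooling load for the data hall
usable_wall_ft = 300         # assumed wall length available for cooling units

required_kw_per_ft = required_cooling_kw / usable_wall_ft
print(f"Room needs ~{required_kw_per_ft:.0f} kW of cooling per linear foot of wall")

# A hypothetical unit delivering 400 kW across 12 ft of wall provides ~33 kW/ft;
# higher-density rooms push toward taller units with more coil area per foot.
unit_kw, unit_width_ft = 400, 12
print(f"Example unit linear density: {unit_kw / unit_width_ft:.0f} kW/ft")
```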

Much of the above assumes we are continuing with air-cooled servers. Additional cooling capacity can be supplemented with rear-door cooling at minimal added power. However, the industry is starting to see the advent of large numbers of liquid-cooled, fluid-to-the-chip servers. An existing data room can fairly easily add the capacity to supply fluid to the rack for the additional cooling load, and the remaining air-cooling capacity will then match the remaining air-cooled load. The challenge at that point is that you might run out of power for the extra load in the data room.
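A small sketch of that split between the liquid loop and the residual air load is below. The 60 kW rack and the 75% heat-capture fraction for fluid-to-the-chip cooling are assumptions for illustration; real capture fractions vary by server design.

```python
# Hedged sketch: splitting a rack's heat load between direct-to-chip
# liquid cooling and the remaining air-cooled portion. Values assumed.

rack_power_kw = 60
liquid_capture_fraction = 0.75  # assumed share of heat removed at the chip

liquid_load_kw = rack_power_kw * liquid_capture_fraction
air_load_kw = rack_power_kw - liquid_load_kw

print(f"Liquid loop must remove:        {liquid_load_kw:.1f} kW per rack")
print(f"Existing air cooling handles:   {air_load_kw:.1f} kW per rack")
```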

Changing the metrics does not change the cooling provisioning methodology. It is about knowing the future roadmap for air-cooled server capacity and the future requirements for liquid cooling, and planning for the transition. Many customers today are building in this flexibility and these solutions ahead of that transition. No one can predict when it will come, but the key is a plan for future density.

Steven Carlini, Vice President, Innovation and Data Center, Schneider Electric

Most designs today are based on rack density. The historic method of specifying data center density in watts per square foot provides very little useful guidance for the critical questions facing data center operators today. In particular, it does not answer the key question: what happens when a rack is deployed that exceeds the density specification? Specifying capacity based on rack density helps ensure compatibility with high-density IT equipment, avoids wasted power, space, and capital expenditure, and provides a means to validate IT deployment plans against the design of the cooling and power capacity.

Moises Levy, PhD, Senior Analyst, Data Center Power and Cooling, Cloud & Data Center Research Practice, Omdia

Power consumption in data centers is a matter of workloads. A workload is the amount of work assigned to IT equipment over a period of time, spanning IT applications such as data analytics, collaboration, and productivity software. Workloads that do not produce business value contribute to wasted, inefficient energy consumption.

Let's look at the impact of workloads on data center power consumption and cooling capacity. Workloads can be measured in different ways, such as tasks per second or FLOPS (floating-point operations per second). Next, we need to measure or estimate server utilization, which is the share of capacity used to handle the workloads: the ratio of the workload processed to the processing rate. We must also measure the server's power requirement, or estimate it from its utilization (0% to 100%) on a scale between idle power and maximum power. The heat generated must then be extracted by the cooling system, and the cooling requirement can be estimated by dividing the required server power by the SCOP (sensible coefficient of performance) of the cooling system. Managing the workload is not a simple strategy, and we must plan for a successful outcome.
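The chain Levy describes, from workload to utilization to power to cooling, can be sketched in a few lines. The task rates, idle/maximum power figures, and SCOP value below are assumptions chosen for illustration, not measured data.

```python
# Minimal sketch of: workload -> server utilization -> server power ->
# cooling requirement (per the SCOP formula quoted above). Values assumed.

def server_power_kw(utilization, idle_kw, max_kw):
    """Linear interpolation between idle and maximum power vs. utilization (0..1)."""
    return idle_kw + utilization * (max_kw - idle_kw)

# Utilization as the ratio of workload processed to processing capacity.
workload_tps = 7_500    # assumed tasks per second demanded
capacity_tps = 10_000   # assumed tasks per second the server can process
utilization = min(workload_tps / capacity_tps, 1.0)

power_kw = server_power_kw(utilization, idle_kw=0.2, max_kw=0.8)

SCOP = 3.5  # assumed sensible coefficient of performance of the cooling system
cooling_estimate_kw = power_kw / SCOP  # per the SCOP-based estimate cited above

print(f"Utilization: {utilization:.0%}, server power: {power_kw:.2f} kW, "
      f"cooling estimate: {cooling_estimate_kw:.2f} kW")
```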

We have long cooled servers by convection, but air cooling hits a limit at higher power densities. Using an air cooling system for power densities greater than 10 or 20 kW per cabinet is already inefficient, and the practical limit is around 40 kW per cabinet. Rack densities have increased in recent years, and we can now reach 50 kW, 100 kW or more per cabinet. A liquid-cooled approach is a way to extract heat more efficiently and sustainably.
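The density thresholds Levy cites suggest a simple decision rule, sketched below. The exact cut-offs and the mapping to cooling approaches are a simplification for illustration, not a prescriptive standard.

```python
# Sketch of the density thresholds cited above: air cooling grows inefficient
# past roughly 10-20 kW/cabinet, with a practical limit near 40 kW/cabinet.

def cooling_approach(rack_kw):
    if rack_kw <= 20:
        return "air cooling (efficient range)"
    if rack_kw <= 40:
        return "air cooling possible but increasingly inefficient"
    return "liquid cooling (rear-door, direct-to-chip, or immersion)"

for density in (8, 15, 30, 50, 100):
    print(f"{density:>3} kW/cabinet -> {cooling_approach(density)}")
```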

In summary, in a data center there is tight coupling between the servers and their physical environment, which usually means that handling a larger workload leads to higher server utilization and higher power consumption. That, in turn, increases the need to dissipate the heat generated.

