Effects of Doors on Airflow and Cooling
WHITE PAPER
800-834-4969
techsupport@chatsworth.com
www.chatsworth.com
©2005 Chatsworth Products, Inc. All rights reserved. CPI and MegaFrame are registered trademarks of
Chatsworth Products, Inc. All other trademarks belong to their respective companies.
MKT-60020-315/NB 10/05
Data Center Design
Running an efficient data center is a challenge, especially when you are managing legacy
installations while also planning for future applications. The principles of data center design for effective
thermal management of high-density data communications equipment heat loads are frequently violated,
and data center managers typically inherit these violations through no fault of their own. More often
than not, the violations arrive via acquisition of previously developed space or occupation of space built
for yesterday's heat loads. Fortunately, data center managers have access to a number of standard practices
and a few creative patches and band-aids for minimizing or neutralizing the resultant hot spots.
These patches range from adding high static pressure blowers to the bottom
spaces of equipment cabinets, to plugging all sources of bypass air, to creating barriers against hot air re-
circulation (such as internal cabinet air dams, cabinet-top return air isolation panels, and closed-duct return
air paths), to adding floor fans to deliver more cold air to the fronts of cabinets. While there is situational merit
to all of these approaches, removing high-flow perforated cabinet front doors, or tasking cabinet vendors to
deliver doors with an even greater percent-open mesh, is not a viable patch for improving the thermal
performance of a data center.
As counter-intuitive as it may seem, there appears to be no realizable value in increasing the percent-open
area beyond the 63% that most electronic equipment manufacturers have specified in their guidelines for
deploying equipment in third-party cabinets. According to data on pressure drop through different
percent-open metal mesh materials, developed by the material vendors, those server OEMs did not simply
invent the 63% open requirement – they settled on the optimum balance between maximum physical security
and maximum airflow.
[Figure 1: pressure drop versus airflow (0 to 2400 CFM) curves for perforated metals of varying percent-open area]
Figure 1, Source: Designers, Specifiers and Buyer’s Handbook for Perforated Metals, The Industrial Perforators
Association, 1993
Figure 1, from The Industrial Perforators Association, shows that pressure loss improves significantly
across the cubic-feet-per-minute (CFM) range seen in most data center cabinets as percent-open area
increases up to the 63% open level, beyond which the pressure-drop curve is practically a straight line.
As counter-intuitive as it is that more open space would not necessarily allow greater airflow, Chatsworth
Products, Inc. took advantage of an opportunity to verify this anomaly in a customer's data center. During a
confirmation audit, conducted to confirm that various corrective actions had actually delivered the anticipated
cooling benefit, CPI monitored airflow through equipment and through a cabinet with various percent-open
door configurations. A little background on this particular data center problem, and a general review of the
principles for cooling servers, will assist in understanding the data and its importance.
First, there are various ways to describe the cooling that happens with server equipment in cabinets. One
description is the equation for forced-air convection heat transfer from a surface to an airflow:

Q = hA(Tw – Tf)

where Q = heat transferred, h = the convection heat transfer coefficient, A = the surface area, Tw = the
surface temperature, and Tf = the temperature of the incoming air.

Another way to describe this cooling is the equation for sensible cooling:

BTU/hr = ∆T • CFM • 1.08

And finally, the equation for CFM describes this same relationship:

Q = 1.67W / ∆Tc

where Q = airflow in CFM, W = heat load in watts, and ∆Tc = the temperature rise across the equipment in
degrees Celsius.
It is instructive to review what these equations are actually describing. In the heat transfer equation, every
factor except the Tf is basically a constant defined by the manufacturer, so the only variable over which the
data center manager has any control is the temperature of the input air. In the sensible cooling equation, every
factor is controlled by the equipment performance specification – the CFM is controlled by the fans in the
servers (this will only vary to the degree those fans are variable speed or to the degree that those fans are
choked by an inadequate air supply) and the ∆T is a factor of the equipment and the air crossing it. While there
are effects the data center manager can apply to sensible cooling at the room level, there is nothing variable
about sensible cooling at the equipment level. In the CFM equation, again, that temperature rise is a constant
defined internal to the server. In summary, server fans will pull air from somewhere and the only variable over
which the user has any control is the temperature of the air those fans will be drawing.
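To make these relationships concrete, the short numeric sketch below (illustrative only, not from the paper) applies the sensible-cooling and CFM equations with the constants quoted above; the 9.4 kW load and 14°C rise are example inputs, not measured values:

```python
# Airflow and sensible-cooling relationships as quoted in the text.
# Constants: 1.08 (BTU/hr per CFM per deg F) and 1.67 (CFM per watt
# per deg C of temperature rise).

def required_cfm(watts: float, delta_t_c: float) -> float:
    """Airflow (CFM) needed to remove `watts` at a rise of `delta_t_c` deg C (Q = 1.67W / dTc)."""
    return 1.67 * watts / delta_t_c

def sensible_cooling_btu_hr(cfm: float, delta_t_f: float) -> float:
    """Sensible cooling in BTU/hr = dT(deg F) * CFM * 1.08."""
    return delta_t_f * cfm * 1.08

# Example: a 9.4 kW cabinet load with a 14 deg C rise across the equipment
cfm = required_cfm(9400, 14.0)
print(round(cfm))  # ~1121 CFM
```

As the equations themselves imply, the only input the data center manager controls here is the supply air temperature; the heat load and temperature rise are fixed by the equipment.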
In the data center under study, a site audit revealed that the high-density blade PCs were consuming more air
than the room air handlers were producing, and hot spots were being created in cabinets because hot exhaust
make-up air was being drawn over the tops of cabinets and into the blade servers located toward
the tops of those cabinets. The patches deployed included sealing off bypass air in the room, replacing the
perforated floor tiles with 50% open floor grates to increase the amount of chilled air delivered into the cold
aisles, and building a barrier between the hot and cold aisles so that any make-up air the servers drew would
at least not come from the hottest air in the room. The audit, which included actual microprocessor
temperatures, revealed that the patches were successful.
Furthermore, the audit provided an opportunity to test the validity of the "63% Solution." The test was run in a
raised-floor data center to quantify the significance of a server cabinet door's percent-open area relative to
the net flow through the cabinet. Temperature and air velocity data were acquired at the exhaust of the
ClearCube blade PCs loaded in the cabinet. Intake air temperature was also acquired.
At the flow rates tested (~2800 CFM through a 7' MegaFrame cabinet), the results from the acquired data
suggested that increasing the percent-open area of the door perforation above 63% does not significantly
affect the net airflow rate through the blade servers.
CPI M-Series MegaFrame® Cabinets were used to house ClearCube R-Series blade PCs. The R-Series
ClearCube system allows 8 blade PCs to be installed in a 3 RMU chassis. For this customer's installation, up to
12 of the 8-blade chassis were installed in a 7' MegaFrame cabinet. Each chassis requires a peak inrush
current of 10 A, and a nominal draw of 4.5–6.5 A is typical. The actual electrical current supplied
to the cabinet could not be measured; however, based on the product literature, the cabinet load is estimated
to be in the range of 6.5 kW to 9.4 kW.
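The 6.5–9.4 kW range appears consistent with 12 chassis drawing 4.5–6.5 A each at a 120 V supply; the voltage (and unity power factor) are assumptions for this plausibility check, not figures stated in the paper:

```python
# Rough cabinet load estimate from per-chassis current draw.
# ASSUMPTIONS: 120 V supply and unity power factor; neither is
# stated in the paper, so this is only a plausibility check.

CHASSIS_COUNT = 12
VOLTS = 120.0  # assumed supply voltage

def cabinet_load_kw(amps_per_chassis: float) -> float:
    """Estimated total cabinet load in kW for a given per-chassis current."""
    return CHASSIS_COUNT * amps_per_chassis * VOLTS / 1000.0

low = cabinet_load_kw(4.5)   # nominal low draw
high = cabinet_load_kw(6.5)  # nominal high draw
print(f"{low:.1f} kW to {high:.1f} kW")  # 6.5 kW to 9.4 kW
```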
The test cabinet selected had the blade PC chassis installed in RMU 10 through 45.
Airflow and temperature measurements were acquired using a turbine type blade anemometer at the rear of
the cabinet and at the server inlets.
Figure 2
Results: The data collected during the test is summarized in Tables 1 and 2 below. An unexpected result was
that the server exhaust temperatures actually decreased with the 63% door closed; this was attributed to the
timing differential between tests, which coincided with both the heat of the day and a heavier transaction load
on the PCs.
Table 1

Case 6 (RMU 30-32)        | Fan 1 | Fan 2 | Fan 3 | Fan 4 | Chassis Total
Exhaust temp (°C)         | 21.3  | 22.7  | 26.2  | 28.8  |
Exhaust temp (°F)         | 70.4  | 72.8  | 79.1  | 83.9  |
Fan velocity (ft/sec)     | 22.1  | 21.1  | 21.0  | 21.5  |
Flow (CFM)                | 55.69 | 53.17 | 52.92 | 54.18 | 215.97
Intake surf. temp (°C)    | 13.9  | 13.3  | 17.2  |       |
Intake surf. temp (°F)    | 57.0  | 56.0  | 63.0  |       |

Case 12 (RMU 10-12)       | Fan 1 | Fan 2 | Fan 3 | Fan 4 | Chassis Total
Exhaust temp (°C)         | 21.2  | 22.9  | 23.4  | 22.0  |
Exhaust temp (°F)         | 70.2  | 73.3  | 74.2  | 71.6  |
Fan velocity (ft/sec)     | 21.5  | 23.1  | 22.5  | 23.1  |
Flow (CFM)                | 54.18 | 58.21 | 56.70 | 58.21 | 227.31
Intake surf. temp (°C)    | 12.8  | 13.3  | 15.6  |       |
Intake surf. temp (°F)    | 55.0  | 56.0  | 60.0  |       |

Table 2

Case 6 (RMU 30-32)        | Fan 1 | Fan 2 | Fan 3 | Fan 4 | Chassis Total
Exhaust temp (°C)         | 20.7  | 21.8  | 24.6  | 26.7  |
Exhaust temp (°F)         | 69.3  | 71.3  | 76.3  | 80.1  |
Fan velocity (ft/sec)     | 21.5  | 21.8  | 20.2  | 21.7  |
Flow (CFM)                | 54.18 | 54.94 | 50.90 | 54.68 | 214.71
Intake surf. temp (°C)    | 15.0  | 13.9  | 20.0  |       |
Intake surf. temp (°F)    | 59.0  | 57.0  | 68.0  |       |

Case 12 (RMU 10-12)       | Fan 1 | Fan 2 | Fan 3 | Fan 4 | Chassis Total
Exhaust temp (°C)         | 21.7  | 23.3  | 23.4  | 22.6  |
Exhaust temp (°F)         | 71.0  | 74.0  | 74.1  | 72.6  |
Fan velocity (ft/sec)     | 22.1  | 22.6  | 22.1  | 22.1  |
Flow (CFM)                | 55.69 | 56.95 | 55.69 | 55.69 | 224.03
Intake surf. temp (°C)    | 15.6  | 15.0  | 16.1  | 17.2  |
Intake surf. temp (°F)    | 60.0  | 59.0  | 61.0  | 63.0  |
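The CFM values in Tables 1 and 2 appear to follow from the measured fan exhaust velocities multiplied by an effective sensing area of about 0.042 sq ft (roughly 6 sq in); that area is back-calculated here from the tabulated values, not stated in the paper:

```python
# Convert a fan exhaust velocity (ft/sec) to volumetric flow (CFM).
# The effective area of ~0.042 sq ft is INFERRED from the table
# values (e.g., 22.1 ft/sec -> 55.69 CFM); it is not given in the paper.

EFFECTIVE_AREA_SQFT = 0.042  # inferred from tabulated velocity/CFM pairs

def velocity_to_cfm(ft_per_sec: float) -> float:
    """CFM = velocity (ft/sec) * 60 sec/min * area (sq ft)."""
    return ft_per_sec * 60.0 * EFFECTIVE_AREA_SQFT

print(round(velocity_to_cfm(22.1), 2))  # 55.69, matching Table 1, Case 6, Fan 1
```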
CPU temperatures and fan speeds were monitored directly through the PCs' operating systems and showed
no improvement (i.e., no operating temperature decrease) with the change from 63% open doors to a 100%
open situation (i.e., no door).
Figure 3
Figure 4
A subsequent controlled test was designed and executed to verify the anticipated poorer performance below
63% open area and to double-check the results from the data center test. In these tests, only restriction to
airflow was considered, since it had already been established that CPU temperatures are affected only by
input air temperature, provided the server fans are not choked.
Test Approach
The test environment was the center section of an empty multi-compartment co-location cabinet, shown in
the picture below. Four different door configurations were tested: 8% open (vented plexiglass), 40% open
(perforated metal), 63% open (perforated metal), and 100% open (door off).
Figure 5
A plate was fabricated to hold 12 fans within the 14 RMU available in the center cabinet section. Cardboard
and duct tape were used to seal off all of the leaks (See Figure 2). Each of the fans was capable of ~100 CFM
in a free condition.
Figure 6
In the rear of the cabinet we attached a horn to a turbine anemometer so that we could measure flow through
one of the fans (See Figure 6). Data was taken with 12, 10, 8 and 6 fans active in the cabinet to simulate flows
of ~3600, 3000, 2400, and 1800 CFM through a 45 RMU cabinet. The inactive fans were duct taped over to
prevent leakage for the tests with fewer than 12 fans active.
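The "Simulated Flow / 45 RMU CAB" figures in Table 3 appear to scale the measured per-RMU flow by the 45 RMU of a full-height cabinet (e.g., 79.5 CFM/RMU × 45 ≈ 3580 CFM); the multiplier below is an interpretation of the table, not a procedure spelled out in the paper:

```python
# Scale the per-RMU flow measured in the 14 RMU test section up to a
# full 45 RMU cabinet, matching the "Simulated Flow / 45 RMU CAB" rows
# of Table 3. The x45 multiplier is inferred from the tabulated values.

CABINET_RMU = 45

def simulated_cabinet_cfm(cfm_per_rmu: float) -> float:
    """Simulated whole-cabinet flow from a per-RMU measurement."""
    return cfm_per_rmu * CABINET_RMU

print(simulated_cabinet_cfm(79.5))  # 3577.5, i.e. the ~3580 CFM shown in Table 3
```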
Table 3. Each cell lists the free-flow (door off) value, the with-door value, and the resulting percent
reduction. The free-flow readings serve as the 100% open (no door) baseline.

8% Open (Vented Plexiglass)
Fans active            | 12                  | 10                  | 8                   | 6
Velocity 1 (ft/sec)    | 37.5 / 16.4 / 56.3% | 37.3 / 18.4 / 50.7% | 38.4 / 23.2 / 39.6% | 37.7 / 25.0 / 33.7%
Velocity 2 (ft/sec)    | 37.5 / 16.2 / 56.8% | 37.3 / 18.0 / 51.7% | 38.2 / 23.0 / 39.8% | 37.4 / 25.0 / 33.2%
Est. flow/RMU (CFM)    | 79.5 / 34.6 / 56.5% | 65.9 / 32.2 / 51.2% | 54.2 / 32.7 / 39.7% | 39.8 / 26.5 / 33.4%
Sim. flow/45 RMU (CFM) | 3580 / 1556         | 2967 / 1448         | 2437 / 1470         | 1792 / 1193

40% Open (Perforated Metal)
Fans active            | 12                  | 10                  | 8                   | 6
Velocity 1 (ft/sec)    | 37.5 / 34.6 / 7.7%  | 37.2 / 34.8 / 6.5%  | 37.8 / 36.7 / 2.9%  | 37.3 / 36.5 / 2.1%
Velocity 2 (ft/sec)    | 37.8 / 34.8 / 7.9%  | 37.0 / 35.0 / 5.4%  | 37.8 / 36.7 / 2.9%  | 37.0 / 36.4 / 1.6%
Est. flow/RMU (CFM)    | 79.9 / 73.6 / 7.8%  | 65.6 / 61.7 / 5.9%  | 53.5 / 51.9 / 2.9%  | 39.4 / 38.7 / 1.9%
Sim. flow/45 RMU (CFM) | 3594 / 3312         | 2951 / 2776         | 2406 / 2336         | 1773 / 1740

63% Open (Perforated Metal)
Fans active            | 12                  | 10                  | 8                   | 6
Velocity 1 (ft/sec)    | 37.5 / 37.0 / 1.3%  | 36.8 / 36.2 / 1.6%  | 37.7 / 37.4 / 0.8%  | 37.0 / 36.8 / 0.5%
Velocity 2 (ft/sec)    | 37.5 / 36.8 / 1.9%  | 36.8 / 36.2 / 1.6%  | 37.7 / 37.3 / 1.1%  | 37.0 / 36.8 / 0.5%
Est. flow/RMU (CFM)    | 79.5 / 78.3 / 1.6%  | 65.1 / 64.0 / 1.6%  | 53.3 / 52.8 / 0.9%  | 39.2 / 39.0 / 0.5%
Sim. flow/45 RMU (CFM) | 3580 / 3522         | 2927 / 2880         | 2399 / 2377         | 1766 / 1756
Table 3 shows the data collected during the test.
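The percent-reduction figures in Table 3 follow directly from each paired free-flow and with-door reading; for example:

```python
# Percent flow reduction caused by a door, from a free-flow (door off)
# reading and a with-door reading, as tabulated in Table 3.

def pct_reduction(free: float, with_door: float) -> float:
    """Percent reduction in flow relative to the free-flow baseline."""
    return (free - with_door) / free * 100.0

# 8% open door, 12 fans: 37.5 -> 16.4 ft/sec
print(round(pct_reduction(37.5, 16.4), 1))  # 56.3
# 63% open door, 12 fans: 37.5 -> 37.0 ft/sec
print(round(pct_reduction(37.5, 37.0), 1))  # 1.3
```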
[Figures 7 and 8: charts of percent flow reduction versus door percent-open area, each plotted at ~2950,
~2400, and ~1780 CFM free flow; Figure 7 spans 0-100% open on a 0-30% reduction scale, Figure 8 spans
40-100% open on a 0-7% scale]
Figures 7 and 8 show the relative effect of percentage flow reduction versus door percentage open area at
various free flow conditions.
Results:
The data confirms the findings from the less controlled data center tests, and specifically indicates that we
can expect a maximum flow-efficiency improvement of only 1.6% in going from 63% open to 100% open
when moving 3600 CFM through a 45 RMU cabinet. At 1800 CFM, the improvement drops to 0.5%. As a point
of reference, a 7' cabinet fully populated with Dell PowerEdge 1855 blade chassis would require 3120 CFM of
flow capacity (six 7 RMU chassis at 520 CFM each). If that 1.6% reduction in airflow actually represented
choking of the server fans, it would reduce the total airflow to 3070 CFM. At the maximum PowerEdge
configuration load of 26.6 kW in a cabinet, using the airflow equation Q = 1.67W/∆Tc, the 63% open door
would deliver "hypothetical" cooling only 0.23°C worse than the 100% open solution: statistically and
practically inconsequential. For a 70% open door, that difference would be only 0.04°C.
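That 0.23°C figure can be reproduced from the paper's own numbers; the sketch below reruns the arithmetic:

```python
# Reproduce the closing arithmetic: the hypothetical temperature-rise
# penalty of a 63% open door versus no door on a 26.6 kW cabinet,
# using Q = 1.67 * W / dTc rearranged to dTc = 1.67 * W / Q.

WATTS = 26600.0        # maximum PowerEdge configuration load (W)
CFM_FULL = 3120.0      # six 7 RMU chassis at 520 CFM each
REDUCTION_63 = 0.016   # measured 1.6% flow reduction for the 63% door

def temp_rise_c(watts: float, cfm: float) -> float:
    """Temperature rise (deg C) across the equipment at a given airflow."""
    return 1.67 * watts / cfm

cfm_choked = CFM_FULL * (1 - REDUCTION_63)  # ~3070 CFM if fans were choked
penalty = temp_rise_c(WATTS, cfm_choked) - temp_rise_c(WATTS, CFM_FULL)
print(round(cfm_choked), round(penalty, 2))  # 3070 0.23
```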
In conclusion, testing in both a controlled environment and in a live data center confirmed The Industrial
Perforators Association’s data that there is no meaningful airflow improvement to be achieved beyond 63%
open perforated server cabinet doors.