An Approach for the Robust Design of Turbulent Convective Systems
Nathan Rolander, Jeffrey Rambo, Yogendra Joshi, Janet K. Allen, Farrokh Mistree1
G. W. Woodruff School of Mechanical Engineering,
Georgia Institute of Technology, GA - 30332-0405, USA
ABSTRACT
The complex turbulent flow regimes encountered in many thermal-fluid engineering applications have
proven resistant to the effective application of systematic design because of the computational expense of
model evaluation and the inherent variability of turbulent systems. In this paper the integration of a novel
reduced order turbulent convection modeling approach based upon the Proper Orthogonal Decomposition
technique with the application of robust design principles implemented using the compromise Decision
Support problem is investigated as an effective design approach for this domain. In the illustrative example
application considered, thermally efficient computer server cabinet configurations that are insensitive to
variations in operating conditions are determined. The computer servers are cooled by turbulent convection
and have unsteady heat generation and cooling air flows, yielding substantial variability, yet have some of
the most stringent operational requirements of any engineering system. Results of the application of this
approach to an enclosed cabinet example show that the resulting robust thermally efficient configurations
are capable of dissipating up to a 50% greater heat load and a 60% decrease in the temperature variability
using the same cooling infrastructure.
NOMENCLATURE
Symbols
ai          weighting factor
A(x)        achievement function
C           coefficient matrix
di+, di−    deviation variables
F(u, β)     flux function
G           flux goal vector
gi(x)       inequality constraint function
Gi          design goal target
hi(x)       equality constraint function
ṁ           mass flow rate
m           number of observations / number of goals
n           degrees of freedom / number of design variables
p           number of inequality constraints
q           number of equality constraints
Q           heat generation rate
R, R'       covariance matrix
s           number of servers
T           temperature
u(x)        observed phenomena
U           observation ensemble
Vo          observation set
Wi          goal weighting factor
x           design variables
xi,L, xi,U  lower/upper bounds of design variable xi
Z           Archimedean objective function
φ           basis function
Γ           control surface
Ω, ∂Ω       system domain and boundary
Subscripts
o           ensemble average
r           reconstruction
1 Corresponding author. Phone: (404) 385-2810; Fax: (404) 894-8496; E-mail: yogendra.joshi@me.gatech.edu
1 DESIGNING ROBUST COMPLEX TURBULENT FLUID SYSTEMS - CHALLENGES
The complex turbulent flow regimes encountered in many thermal-fluid engineering applications have
proven resistant to the effective application of systematic design. This is because the Computational Fluid
Dynamics (CFD) models required for analysis are computationally expensive, particularly for the latter
stages of design where more accurate solutions are required, making the application of iterative
optimization algorithms extremely time consuming. Furthermore, turbulent flow regimes are inherently
complex, requiring significant modeling simplifications and assumptions to be made in their simulation [1],
resulting in approximate solutions only. The Reynolds averaged Navier-Stokes based CFD approach
employed in simulation of engineering systems is based upon the mean flow field, with the turbulent
perturbations modeled as Reynolds stresses [1, 2]. Finally, in any complex system design, multiple
objectives must be considered in a mathematically rigorous fashion that also accurately reflects the
designer’s preferences. In many thermal-fluid applications the tradeoffs between energy efficiency, system
size, cost, and potential performance variability must be considered.
A representative example of a complex turbulent convective system in need of effective design is the
configuration of data centers. Data centers are computing infrastructures housing large quantities of data
processing equipment. This equipment is currently air cooled, and the resulting turbulent flow distribution
is both highly complex and variable. Furthermore, the reliability requirements of data centers are
exceedingly high, as discussed further in Section 3. Previous application of simulation based design for
data centers is limited to ad-hoc analyses based on experience and simple correlations [3, 4], simple data
center level CFD modeling with some comparison of configurations [5-10], and some limited geometric
optimization using design of experiments to create coarse response surface models with very few variables
[11-13]. All previous work utilizes the single objective of temperature minimization.
The development of an effective design approach for complex turbulent thermal-fluid systems, such as the
data center example, is thus hindered by three specific challenges:
1. Flow complexity – The CFD models required to analyze the systems are impractical to use in
iterative optimization algorithms, particularly in the presence of geometrical complexity and
multiple length scales.
2. Inherent variability – In complex three-dimensional turbulent flows, modeling uncertainties and
choice of turbulence closure models lead to variability in predictions.
3. Multiple objectives – The multiple design objectives in a complex system should represent the
designer’s preferences accurately.
These challenges are addressed in this paper through the application of three constructs: (1) the Flux-
Matching Procedure (FMP) augmenting the Proper Orthogonal Decomposition technique (POD), (2) robust
design principles, and (3) the compromise Decision Support Problem (cDSP). The POD is a highly
computationally efficient meta-modeling approach, providing the foundation for the development of
reduced order turbulent convective simulations [14], including the FMP. The principle of robust design is
used to find solutions that are insensitive to changes in both internal and external operating conditions.
This yields solutions that maintain their desired performance accounting for variability in both the system
and inaccuracies in the model of the system [15]. The cDSP, a hybrid formulation of mathematical
programming and goal programming, enables multi-objective solution finding through the specification of
multiple goals, and thus is well suited to engineering applications [16].
The challenge in applying robust design is computing the non-linear numerical derivatives required to
determine the system variance, which demands many function evaluations of computationally expensive
CFD models. Simple response surface models are inadequate, as the non-linearity of these systems is not
well represented by linear or quadratic approximations, as shown by the analyses in [8, 12, 13]. Kriging,
multivariate adaptive regression splines, and other more advanced
interpolation approaches offer superior approximations [17]; however, these methods also require a large
number of data points for interpolation, a number which increases exponentially with the number of design
variables [17].
In Figure 1, the requirements and constructs for an approach to the robust design of turbulent convective
systems are presented. The problem presents three requirements: reduced order modeling, accounting for
variability, and multi-objective trade-offs. These are instantiated in the approach by adopting three
constructs: FMP augmented POD, robust design and the cDSP.
Figure 1 - Requirements, constructs, and integration for a robust server cabinet design approach
The approach illustrated in Figure 1 is demonstrated through application to the robust design of data center
server cabinets. The outline of this paper is as follows. In Section 2 the conceptual description and
explanation of the three constructs used are presented. In Section 3 the background information and a
description of the example problem are given. In Sections 4 and 5 the formulation of the design problem
using the developed approach is described. In Section 6 a presentation and discussion of the results of the
example problem is given. Lastly, in Section 7 the discussion and review of the overall effectiveness of the
approach is presented.
Solution methods based on Eq. (1) are generally classified as Galerkin or spectral methods, where u ( x ) is
the function to be approximated, such as the flow field, ϕi are the basis functions and ai are the weighting
factors. The utility of the POD is that it is a stochastic tool, which uses principal component analysis to
find the optimal linear basis for the modal decomposition presented in Eq. (1). The POD is well-suited for
CFD modeling as the complete flow field reconstruction is obtained; the solution is not a black box single
response value. Therefore, direct analysis of the solution can be made to ascertain the reasons behind a
response to the change in input parameters.
The concept of the POD computation is best explained graphically. Given a set of multi-dimensional data,
the aim of the POD is to accurately represent the complete data set in the most efficient manner possible by
using the minimum number of basis functions. This is accomplished through finding the principal axes of
the data set, representing the directions of maximum scatter. The orientation of these principal axes is
found through orthogonal distance regression, which is represented graphically versus traditional vertical
distance regression in Figure 2. This orthogonal fit produces a smaller sum of the squares of the residuals
than any other linear fitting approach [20].
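This distinction can be sketched numerically; in the sketch below (a minimal NumPy illustration with arbitrary synthetic data, not data from this work), the first principal axis of the mean-centered points gives the orthogonal fit, and its sum of squared orthogonal residuals is never larger than that of the ordinary least squares line.

```python
import numpy as np

# Synthetic 2-D data with scatter in both coordinates (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
x = t + rng.normal(0, 0.5, t.size)
y = 0.8 * t + 2 + rng.normal(0, 0.5, t.size)
data = np.column_stack([x, y])

# Ordinary least squares: minimizes vertical residuals only.
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

# Orthogonal distance regression via principal component analysis: the
# first principal axis of the mean-centered data is the direction of
# maximum scatter, i.e. the orthogonal best-fit line through the mean.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
direction = vt[0]

def orth_ssr(d):
    """Sum of squared orthogonal distances to the line through the mean
    with direction d (both fitted lines pass through the data mean)."""
    d = d / np.linalg.norm(d)
    resid = centered - np.outer(centered @ d, d)
    return float((resid ** 2).sum())

ssr_pca = orth_ssr(direction)            # orthogonal fit
ssr_ols = orth_ssr(np.array([1.0, slope]))  # least squares line direction
```

Because the principal axis minimizes the orthogonal residuals over all lines through the mean, `ssr_pca` cannot exceed `ssr_ols`.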
Figure 2 - Least squares (vertical residual) fit versus orthogonal fit of raw data
process. This investigation focuses upon Type II, as the dominant system variables are considered as
design variables, and sources of noise are insignificant, as discussed later in Section 4.4.
Figure 3 - Type II Robust Design (a) goals & (b) constraints representation
A more in depth explanation of robust design is presented in [15]. The application of Type II robust design
is shown in Figure 3 (a). To reduce the variation of system response, y, through changes in the design
variable, x, the designer is interested in finding a flat region of the curve near the performance target. The
shallow slope of the response curve at the robust solution translates to a solution that still performs as
expected, despite variation in the design variables. The tradeoff between finding the robust or optimizing
solution is based upon the level of variation of each design variable and the designer’s preferences.
Constraints incur an added layer of complexity because the variation of system response must be
considered on top of the nominal response value. This variance consideration is represented in Figure 3
(b). At the optimal solution point the solution violates the constraint, since part of the area created by the
variability in the control variables lies outside of the feasible region, despite having a feasible average
value. The entire area surrounding the robust solution point lies fully inside the feasible region and hence is
viable even in the worst case variability scenario. This consideration of variability through robust design is
important, as the RANS CFD calculations do not capture the inherent modeling variability.
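The optimizing-versus-robust tradeoff of Figure 3 can be sketched in one dimension (a hypothetical response function and variability interval, chosen purely for illustration): the nominal optimum sits in a deep, narrow valley, while the robust solution sits in a flatter, slightly higher region whose worst-case response over the variability interval is far better.

```python
import numpy as np

def response(x):
    """Hypothetical response: a deep, narrow optimum near x = 0.2 and a
    shallower but much flatter region near x = 0.7 (illustrative only)."""
    return (1.0
            - 0.9 * np.exp(-((x - 0.2) / 0.03) ** 2)
            - 0.6 * np.exp(-((x - 0.7) / 0.25) ** 2))

xs = np.linspace(0.0, 1.0, 1001)
dx = 0.1                                  # variability of the design variable

# Optimizing solution: minimize the nominal response only.
x_opt = xs[np.argmin(response(xs))]

# Robust solution: minimize the worst response over the interval x +/- dx.
def worst_case(x):
    window = np.linspace(max(x - dx, 0.0), min(x + dx, 1.0), 201)
    return response(window).max()

x_rob = xs[np.argmin([worst_case(x) for x in xs])]
```

Here `x_opt` lands near 0.2 but performs poorly once the design variable drifts, whereas `x_rob` lands near the flat region around 0.7 with a much smaller worst-case penalty.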
representing the overachievement and underachievement of each goal respectively. These deviations are
constrained to positive values, and no simultaneous over and under achievement is allowed.
Table 1 - Mathematical formulation of the compromise DSP
Given
An alternative to be improved through modification
Assumptions used to model the domain of interest
The system parameters:
n number of system variables
p number of inequality constraints
q number of equality constraints
m number of system goals
Find
Design Variables xi i = 1,…,n
Deviation Variables di+ , di− i = 1,…,m
Satisfy
Inequality Constraints gi ( x ) ≤ 0 i = 1,…,p
Equality Constraints hi ( x ) = 0 i = 1,…,q
Goals Ai ( x ) − di+ + di− = Gi i = 1,…,m
Bounds xi , L ≤ xi ≤ xi ,U i = 1,…,n
di+ ≥ 0; di− ≥ 0; di+ idi− = 0 i = 1,…,m
Minimize
Deviation Function: Archimedean formulation
Z = Σ_{i=1}^{m} Wi ( di+ + di− )
This cDSP template formulation shown in Table 1 constitutes the interface of the approach; yielding an
augmented cDSP construct for the robust design of turbulent convective systems. Further detail on the
formulation and solution of the cDSP is given in the application to the server cabinet configuration example
in Section 5.
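The deviation-variable bookkeeping of Table 1 can be sketched in a few lines (the achieved values, targets, and weights below are hypothetical placeholders, not values from this study): each goal's over- and underachievement are split into non-negative deviation variables, and the Archimedean deviation function is their weighted sum.

```python
# Hypothetical achieved goal values A_i, targets G_i, and weights W_i.
A = [0.35, 312.0, 0.18]
G = [0.10, 300.0, 0.00]
W = [0.3, 0.4, 0.3]

# Deviation variables: d+ measures overachievement, d- underachievement.
# Defining them this way guarantees d+ >= 0, d- >= 0, and d+ * d- = 0,
# i.e. no simultaneous over- and underachievement.
d_plus  = [max(a - g, 0.0) for a, g in zip(A, G)]
d_minus = [max(g - a, 0.0) for a, g in zip(A, G)]

# Archimedean deviation function Z = sum_i W_i (d_i+ + d_i-).
Z = sum(w * (dp + dm) for w, dp, dm in zip(W, d_plus, d_minus))
```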
dissipating several MW of power2. The data processing equipment is stored in 2 m high enclosures known
as cabinets. The demand for increased computational performance has led to very high power density
cabinet design, with a single cabinet dissipating up to 30 kW2. Thermal management is provided by
computer room air conditioning (CRAC) units that deliver cold air to the cabinets through perforated tiles
placed over an under-floor plenum. The cooling costs of data centers represent up to 40% of the energy
consumption of center operation3.
Thermal management difficulties in data centers, caused by the rapidly increasing power densities of
modern computational equipment, have led to very high flow rates of cooling air, resulting in turbulent flow
regimes with large variability in velocity magnitude. In data center server cabinets this variability is caused
by variable speed fans in the servers and CRAC units, and by unsteady heat generation by the processors,
yielding a highly variable problem. However, these computers are required to operate with near 99.9999%
reliability. Furthermore, the high thermal gradients lead to hot spots and thermal inefficiency as hot
exhaust air is drawn into the cooling air stream, resulting in overheating. A desired objective in data center
design is uniformity in the temperature distribution, as there are few effective modeling approaches to cope
with variability or temperature gradients. This uniformity approach is not only thermally and economically
inefficient, but also often impractical to implement [8-10, 12, 13, 28].
The approach taken in this investigation is to create energy efficient and reliable solutions through effective
application of robust design to create server configurations that allow the designer to trade off between
ultimate thermal efficiency and operational stability. The thermal efficiency measures apply primarily to
the cooling air supplied by the CRAC units, as this is directly proportional to the continual operating cost of
the facility. Addressing these thermal management and reliability challenges will contribute significantly
towards increasing the data center’s thermal and economic efficiency.
2 The Uptime Institute, 2004, "Heat Density Trends in Data Processing, Computer Systems and Telecommunications Equipment", http://www.upsite.com/TUIpages/tuiwhite.html, accessed 2/16/2004.
3 Lawrence Berkeley National Laboratory and Rumsey Engineers, 2003, "Data Center Energy Benchmarking Case Study", http://datacenters.lbl.gov/, accessed 11/20/2003.
The following design reconfiguration possibilities are considered. (1) Equipment of differing power
density can be distributed within the cabinets for more efficient cooling. This can be implemented through
physical relocation of the hardware, and/or by distributing the processing tasks to reduce the load on critical
equipment [29-31]. (2) The volume of cooling air supplied to the cabinet can be increased, accomplished
via a CRAC unit output increase. A combination of these reconfiguration options is explored through the
following problem geometry.
Figure 4 - Cabinet configuration & variables
The cabinet dimensions are height H = 1.93 m and width W = 0.87 m. Air enters the server cabinet
enclosure from the bottom cutout, Lc = 0.39 m at velocity Vin with temperature Tin, supplied through the
under floor plenum from the CRAC unit. The flow output of the CRAC units can be controlled resulting in
increased or decreased Vin; however, the complex flow patterns in the under floor plenum result in
significant variation. This variation is not accurately predicted by the RANS CFD codes used to model
plenum flow distributions [5, 32, 33], and thus this data must be estimated or empirically gathered. This
can be accomplished using a flow hood as used in [20], or other flow transducers such as a Pitot tube or hot
wire anemometer.
The cooling air is distributed within the cabinet and drawn through the various servers, as shown by the
flow arrows in Figure 4. Although internal flow patterns are complex, a mass balance exists under steady
state conditions between the air entering the cabinet and leaving through the top exhaust vent. The shaded
areas in Figure 4 (a) represent unfilled server racks where no air can flow. All solid surfaces are considered
no-slip, impermeable, and adiabatic. The system is analyzed at steady state, as transients are not of concern
in continually operational data center environments.
The individual server geometry is shown above in Figure 4 (b), where Ls = 0.61 m and Hs = 0.09 m. This
model has two isoflux blocks that act as flow obstructions, each representing a chip in a dual processor
server. Both blocks have a constant heat generation rate Q, which is dissipated through convective heat
transfer to the air flowing through the server. Note that these heated blocks are referred to as “chips” for
this illustrative design problem, although the two dimensional nature of the simulation means the heated
blocks are the same unit depth as the entire server. This simulated power dissipation requires lower heat
generation levels to maintain realistic chip temperatures, as enhanced chip level thermal management is not
being considered. The flow through the server is provided through a 130 CFM fan (0.0613 m3/s), modeled
by a cubic pressure–velocity relationship.
The cabinet is divided into three sections: a, b and c, corresponding to the lower two, middle three, and
upper five servers as shown in Figure 4. Qa, Qb, and Qc denote the heat generation of each processor in the
respective cabinet section. This sectioning of the cabinet was performed in order to reduce the number of
design variables to simplify the illustrative example considered but is not a limitation of the approach.
Figure 5 - Cabinet (a) velocity field (b) chip temperature profile
The cabinet temperature profile was found to be essentially isothermal, except for the thin thermal
boundary layers surrounding the chips. The resulting server chip temperatures for a parameter sweep of Vin
with all chip powers set to 60 W/m are shown in Figure 5 (b). The server temperature profile shows that the three
sections have unique responses, as seen in Figure 5 (b). These clusters of server responses were used to
establish the cabinet sections a, b, and c shown in Figure 4 to arrive at a more manageable design problem.
u(x) = u_o(x) + Σ_{i=1}^{∞} a_i φ_i(x)    (2)
The empirical basis ϕi is found by maximizing the projection of the observations u ( x ) onto the basis
functions, solving the following constrained variational problem through extremitizing the functional:
⟨ (u, φ)² ⟩ − λ ( ‖φ‖² − 1 )    (3)
where ⟨·⟩ denotes ensemble averaging, (·,·) is the L2 inner product, and ‖·‖ is the standard L2 norm; the
constraint ‖φ‖ = 1 enforces a normalized basis. Variational calculus can be applied to express the functional
in Eq. (3) as the integral equation:

∫_Ω R(x, x′) φ(x′) dx′ = λ φ(x)    (4)
where R( x, x ') ≡< u ( x) ⊗ u * ( x ') > is the cross-correlation function. To compute R ( x, x ') , an ensemble of m
system observations containing n DOF each are assembled as a matrix U. For the server cabinet example
these observations are the FLUENT CFD velocity and turbulent viscosity fields, for the set of inlet
velocities, V o = {0, 0.25, 0.5, 0.75,1.0,1.25,1.5,1.75, 2.0} m/s , creating the ensemble of observations:
U = {u₁, u₂, …, u_m} ∈ ℝ^{n×m}    (5)
The eigenvectors of R ( x, x ') are the basis functions ϕi , called POD modes, and the eigenvalues determine
in decreasing magnitude the order of the modes. The eigenvalue spectrum is typically used as an 'energy
criterion', where the magnitude of each eigenvalue determines what portion of the total variation of the
system the corresponding eigenvector captures.
The basis produced by the POD can be proven to be the optimal linear decomposition, in the sense that more
energy is captured for a given number of modes than any other linear decomposition [14]. Therefore in
general the first p ≤ m POD modes will better represent a system than the first p modes of any other linear
decomposition. The POD is able to create such a large reduction in the number of DOF in a system
because the eigenvalue spectrum exhibits a sharp decay, implying that only a few modes are needed to
create an accurate system representation. Further accuracy enhancements and computational discussions
are presented in [19].
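The POD computation described above can be sketched with a synthetic snapshot ensemble (the "flow fields" below are artificial stand-ins for CFD observations, built from two dominant spatial structures plus noise): mean-center the ensemble, take its singular value decomposition, and read the POD modes and energy spectrum off the factors.

```python
import numpy as np

# Synthetic ensemble: m snapshots of an n-DOF field built from two
# dominant spatial structures (illustrative stand-ins for CFD data).
rng = np.random.default_rng(1)
n, m = 200, 9
xgrid = np.linspace(0, 1, n)
coeff_a = np.linspace(0.0, 2.0, m)
coeff_b = np.cos(np.linspace(0.0, 6.0, m))
U = np.column_stack([
    a * np.sin(np.pi * xgrid) + b * np.sin(2 * np.pi * xgrid)
    + 0.01 * rng.normal(size=n)
    for a, b in zip(coeff_a, coeff_b)
])                                      # U in R^{n x m}, as in Eq. (5)

# Mean-center: POD modes describe fluctuations about the ensemble mean u_o.
u_o = U.mean(axis=1, keepdims=True)
Up = U - u_o

# POD modes are the left singular vectors of the centered ensemble
# (equivalently, eigenvectors of the covariance R); the squared singular
# values give the decreasing eigenvalue 'energy' spectrum.
phi, svals, _ = np.linalg.svd(Up, full_matrices=False)
energy = svals**2 / (svals**2).sum()

# Sharp spectral decay: the first two modes capture nearly all variation.
captured = energy[:2].sum()
```

The rapid decay of `energy` is exactly the property that lets the FMP truncate the modal summation after only a handful of modes.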
F(u, β) = ∫_{Γi} ρ β u · n̂ ds    (7)
Depending upon the transport phenomena being modeled, the parameter β can be changed to describe the
flow of mass ( β = 1 ), momentum ( β = u ), or energy ( β = E ). The mass flux case is used for the
reconstruction of the velocity field, and thus the application of Eq. (7) to a control surface Γi yields the
mass flow rate ṁ. To reconstruct an approximate solution the fluxes are expressed as a vector of goals
G ∈ ℝ^q, for which a specific mass flux goal is desired through each of the set of q corresponding control
surfaces Γ = {Γ₁, Γ₂, …, Γ_q}. This flux function defines the desired reconstructed flow field u_r such that
G = F(u_r), thus achieving the desired mass flow rates across the surfaces Γ. The solution procedure
is thus to find the set of weight coefficients that minimize the error on the set Γ :
min_a ‖ G′ − Σ_{i=1}^{p} a_i F(φ_i) ‖ ,   where G′ = G − F(u_o)    (8)
The corrected mass flux goal vector G′ is required because the POD modes are mean centered; as such, the
goals must also be defined as deviations from the mean. The modal summation is carried to p modes, where
q ≤ p ≤ m, because the optimal reconstruction may require fewer than the full spectrum of modes, but
always at least as many modes as there are goals to match. This is because the summation in Eq. (8) is not
necessarily convergent, and thus is truncated at the point giving the lowest error with respect to the mass
flow rate goals. The weight
coefficients ai are found by assembling a coefficient matrix, C, by applying Eq. (8) to the q surfaces of the
p POD modes:
C = F(φ) ∈ ℝ^{q×p}    (9)
Eq. (10) can then be applied, where (⋅) + is the Moore-Penrose pseudo-inverse, yielding the least squares
approximation.
a = C⁺ G′    (10)
The strength of the FMP is that only enough POD modes need to be generated in order to accurately
represent the system dynamics, as no interpolative procedures are employed as have been used in previous
POD based reconstruction approaches [36-39]. Furthermore, this approach avoids the computationally
expensive Galerkin projection procedure, which is less efficient and can produce erroneous reconstructions
[19]. Because the POD modes satisfy the governing equations [19], their superposition creates a solution
that most closely matches the desired goals, yet still constrained by the system physics. Thus an accurate
boundary profile for the flux specified is retained in the reconstruction, despite using an integral
formulation.
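The flux-matching solve of Eqs. (8)-(10) reduces to a small least-squares problem; a minimal sketch (with hypothetical mode fluxes and goals, q = 2 surfaces and p = 3 modes) is:

```python
import numpy as np

# C[j, i] holds the flux of POD mode i through control surface j
# (hypothetical values; in the paper these come from applying Eq. (7)
# to each mode on each surface).
C = np.array([[1.0,  0.5, -0.2],
              [0.3, -1.0,  0.4]])        # C in R^{q x p}

# Mean-corrected mass flow rate goals G' = G - F(u_o) on the surfaces.
G_prime = np.array([0.8, -0.1])

# Least-squares weights via the Moore-Penrose pseudo-inverse, Eq. (10).
a = np.linalg.pinv(C) @ G_prime

# Reconstructed fluxes C @ a match the goals in the least-squares sense;
# with p >= q and full row rank the match is exact.
residual = np.linalg.norm(C @ a - G_prime)
```

The reconstructed field would then be assembled as u_r = u_o + Σ a_i φ_i using these weights.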
The resulting FMP based cabinet flow model has only 9 DOF, representing a five-order-of-magnitude
decrease from the CFD model. Computation of this reduced order model takes under 1 second, compared to
roughly half an hour for the CFD model, measured on a high end desktop PC4. Comparing the flow vector fields from the
4 Single Intel P4 2.4 GHz processor with 2 GB of RAM
FMP solution to a CFD generated case not part of the original observations reveals the FMP solution to
have less than ~5-10% difference over the entire domain [19]. In this section only the fundamentals of the
POD and FMP methods are presented. Further accuracy investigations and validation of the FMP can be
found in [19].
In this equation, cp is the specific heat, S the volumetric heat generation, and ρ the fluid density. The
effective thermal conductivity, keff, is computed using Eq. (12),
k_eff = k + c_p μ_t / Pr_t    (12)
where the turbulent effective viscosity, µt , is computed using the FMP and the turbulent Prandtl number
Prt = 0.85 [34]. After validating the thermal model against analytical and accepted numerical solutions,
the model is implemented for the cabinet geometry for a heat generation of 60 W/m per chip. The average
difference in maximum chip temperatures between the finite volume model and FLUENT CFD model is
found to be less than 2.5%, and thus adequate for this application.
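For reference, Eq. (12) is a one-line computation; the property values below are generic illustrative values for air, not values taken from this study:

```python
# Effective conductivity per Eq. (12): k_eff = k + c_p * mu_t / Pr_t.
# All numbers are illustrative air properties (assumptions, not paper data).
k = 0.026        # molecular thermal conductivity of air, W/(m K)
cp = 1005.0      # specific heat of air, J/(kg K)
Pr_t = 0.85      # turbulent Prandtl number [34]
mu_t = 1.5e-4    # turbulent viscosity from the FMP flow model, kg/(m s)

k_eff = k + cp * mu_t / Pr_t
```

Even this modest turbulent viscosity raises the effective conductivity roughly an order of magnitude above the molecular value, which is why the turbulent viscosity field must be reconstructed along with the velocity.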
These goals are explained and derived in detail in their application in the cDSP formulation. The next step
is to classify the control variables, noise factors, and constants, and to identify the appropriate system
responses. These variables and the system model schematic are shown in Figure 6.
Control Variables (x):
  Inlet air velocity, Vin [0, 1] m/s
  Section a chip power, Qa [0, 200] W
  Section b chip power, Qb [0, 200] W
  Section c chip power, Qc [0, 200] W
Constants (c):
  Total Cabinet Power, Qtotal [1.8, 2.4] kW
Goals:
  Minimize Inlet Air Velocity
  Minimize Chip Temperatures
  Minimize Chip Temperature Variation
Constraints:
  Total Cabinet Power Qtotal = Gpower
  All Chip Temperatures < 85 °C
Response Parameters (y):
  Chip Temperatures, Ti (°C)
Given
Response model of Total Cabinet Power, Inlet Air Velocity, and
Server Temperature as functions of x1, x2, x3, x4 = Vin, Qa, Qb, Qc
∆Vin = 0.1 m/s
∆Qa, ∆Qb, ∆Qc = f(xi) = -0.1xi + 22 W/m, i = 2,3,4 (13)
Collected vector of variability bounds, ∆ j = {∆Vin , ∆Qa , ∆Qb , ∆Qc } (14)
Target for total cabinet power, Gpower = 1800-2400 W/m
Target for inlet velocity, Gvin = 0.1 m/s
Target for total chip temperature sum and their total maximum
possible variation Gtemp = 300 oC, δTmax = 7657 oC
Number of design variables, n = 4
Number of inequality constraints, p = 1
Number of equality constraints, q = 1
Number of system goals, m = 3
Number of servers, s = 10
Find
The values of control factors:
x1, Inlet velocity, Vin
x2, Chip power for Section a, Qa
x3, Chip power for Section b, Qb
x4, Chip power for Section c, Qc
The values of deviation variables di+ , di− , i = 1,…,m
Satisfy
The constraints:
The individual server chip temperatures cannot exceed 85 oC
T_j + Σ_{i=1}^{n} | δT_j / δx_i | · Δ_i ≤ 85 ,   j = 1,…,s    (15)
Given
Using the system model identified in Figure 6 and the computational models developed, a response model
of the server cabinet is developed of the form:
y = f ( x) (24)
where y is a system response as a function of the control variables5. This model uses the FMP based flow
model with input x1, the inlet air velocity. The flow field generated is passed to the finite difference heat
transfer model with inputs x2, x3, x4, the chip heat generation rates for each cabinet section.
The variation of the control variables is determined through literature review and experience.
Manufacturers’ or experimental statistical data can also be used if available for more accurate
representation. For this investigation, a value of ∆Vin = 0.1 m/s corresponds to a ±5% velocity at the upper
bound of 1 m/s. The variation of ∆Qa, ∆Qb, and ∆Qc is given by Eq. (13) to determine the heat generation
variation in the different cabinet sections. Processors that are running continually have a fairly constant
heat generation rate. To reduce the workload, and hence the heat generation, of a processor, its
computational load is staggered, creating a cyclic heat generation pattern as the processor alternates
between computing and waiting; this cycling increases the variation of the heat generation rate. Equation
(13) represents this increased variation with a simple linear function. With the interval bounds representing the maximum
variation of each design variable defined, they are collected into a vector ∆j. Target values for the
responses are determined for the minimization goals by using the lower bound of the response; as such this
goal cannot be exceeded. This is 15 oC for the chip temperatures and 0.2 m/s for the inlet velocity. The
chip temperature goal, Gtemp is computed using the sum of the minimum server chip temperatures and
rounding down. For goals with a target of 0, such as the chip temperature variation goal, the maximum
total chip temperature variation of the system with respect to all design variables is computed using Eq.
(25).
δT(x)² = Σ_{j=1}^{n} Σ_{i=1}^{s} ( δT_i / δx_j )² Δ_j²    (25)
In this equation, i indexes the servers and j the design variables. The value δT_max in the Given section of
the cDSP is found by applying Eq. (25) using the upper bounds of x2, x3, and x4 and the lower bound of x1.
Find
The design variables, and the deviation variables measuring the deviation from each goal's target value, as
discussed in Section 4.4, are the parameters to be found.
Satisfy
5 In the literature this equation is often of the form y = f(x, z); however, in this application there are no noise variables (z).
For Type II robust design the mean and variability of the response are obtained using Taylor expansions of
the system response given in Eq. (24), yielding:
Mean of the Response:   μ_y = f(x, z)    (26)

Variance of the Response:   σ_y² = Σ_{i=1}^{n} ( ∂f / ∂x_i )² Δx_i²    (27)
Because the response model is deterministic, the mean in Eq. (26) is simply the value of the response. This
form of the variance in Eq. (27) is known as the Mean Value First Order Second Moment (MVFOSM)
method [41], and the combination of Eqs. (26)-(27) and the cDSP goal formulation given in Table 1 is used
to derive all of the goals and constraints, Eqs. (15)-(19). The derivatives are computed using the central
difference technique, as no closed form solution exists. The rationale behind this mean
and variance approach for goals is given in Figure 3 and the accompanying text.
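The MVFOSM computation of Eqs. (26)-(27) with central differences can be sketched as follows; the response function, nominal point, and variability intervals are hypothetical stand-ins for the FMP-based cabinet model:

```python
import numpy as np

# Hypothetical deterministic response standing in for the cabinet model,
# e.g. a chip temperature as a function of (Vin, Q). Illustrative only.
def f(x):
    return 40.0 + 30.0 / x[0] + 0.1 * x[1]

x0 = np.array([0.6, 120.0])        # nominal design point
dx = np.array([0.1, 10.0])         # variability intervals Delta x_i

# Central-difference derivative in coordinate i: no closed-form gradient
# exists for the actual CFD-based response, so it is differenced numerically.
def central_diff(f, x, i, h=1e-4):
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)

grads = np.array([central_diff(f, x0, i) for i in range(x0.size)])

mu_y = f(x0)                        # Eq. (26): mean = nominal response value
var_y = np.sum(grads**2 * dx**2)    # Eq. (27): MVFOSM variance
```

The square root of `var_y` is the response variability used in the temperature-variation goal.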
All goal equations in Table 1, Eqs. (17)-(19), are formulated using the approach described in [16]. For a
data center server cabinet, reliability and operational stability are of utmost concern. Therefore, the server
configuration should minimize the potential impact of one server’s thermal load on the rest of the system.
Through the consideration of the minimization of the chip temperature variation with respect to all system
parameters, the consequences of one server overheating are greatly reduced. This goal is reflected by Eq.
(19). The temperature variation is to be minimized for all servers, accounting for variation in all design
variables. Therefore the summation of the variation of the response for each server is computed, and
repeated for all design variables, resulting in the double summation in Eq. (19). Following the formulation
of absolute minimization goals for the cDSP, this value is divided by the maximum possible variation, as
computed in the Given section of the cDSP in Table 2.
It has been shown that processors are more reliable when kept cool; hence the goal of achieving chip
temperatures of Gtemp, given in Eq. (18). Note that the response is computed using the sum of the server
chip temperatures, as the minimization of this summation is equivalent to the minimization of each server
individually with equal emphasis, ensuring the most energy efficient solution is found. Lastly, as the costs
associated with cooling a data center can represent up to 40% of the operating cost, the goal of minimizing
the flow rate of air used to cool the processors, which is proportional to the inlet air velocity, should be pursued.
This conservation goal is embodied in Eq. (17).
As discussed in Section 2, the worst case scenario handling of the constraints is modeled as:
gj(x) + ∆gj ≤ 0 , j = 1,…,p (28)
Here the function gj(x) yields the value of the constraint function, in this application the chip temperatures of the servers. This mean value is added to the maximum response variation attainable through the variability of the control variables, given by ∆gj:
∆gj = Σ_{i=1}^{n} |∂gj/∂xi| ∆xi , j = 1,…,p (29)
This worst case treatment of the constraints is appropriate in this application as violation of a constraint is
serious, resulting in a potentially disastrous overheating of the servers. Equations (28) and (29) are applied
directly to the server chip temperatures forming Eq. (15). Here the absolute value of the variation of the
server temperature response is computed for each of the design variables and added together, yielding the
maximum possible temperature. This is computed for all servers to ensure this constraint is met for the
entire cabinet.
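The worst-case treatment of Eqs. (28)-(29) can be sketched as follows; the linear chip-temperature constraint and the variability bound used here are hypothetical illustrations, not the paper's FMP model.

```python
import numpy as np

def worst_case_value(g, x, dx, h=1e-4):
    """Worst-case constraint value g(x) + ∆g, Eqs. (28)-(29).

    Returns the constraint at the nominal point plus the maximum
    response variation attainable through the design-variable
    variability dx: the sum of absolute first-order sensitivities.
    The constraint is satisfied when the result is <= 0.
    """
    x = np.asarray(x, dtype=float)
    dg = 0.0
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        dgdx = (g(x + e) - g(x - e)) / (2.0 * h)  # central difference
        dg += abs(dgdx) * dx[i]  # Eq. (29)
    return g(x) + dg  # Eq. (28)

# Hypothetical chip-temperature constraint T(x) - 85 <= 0
worst = worst_case_value(lambda x: 80.0 + 2.0 * x[0] - 85.0, [1.0], [0.5])
```

With the nominal value at -3 and a worst-case variation of |2|·0.5 = 1, the constraint remains feasible at -2.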
The equality constraint, the total cabinet power level, is computed using only the nominal response values of the constraint function. Because it is an equality constraint, including variability in a worst case scenario does not make sense: there is no way to ensure the constraint is always met, only that it is met under average conditions, hence its form in Eq. (16). The bounds
on the control factors keep the problem from diverging during the search, as well as providing simple
constraints. These bounds were established as shown in Eqs. (20)-(21) by evaluating sensible limits based
on the FMP flow model requirements and system response.
Minimize
The solution to the cDSP is the combination of control factors that minimizes the total deviation function, Eq. (23), representing the objectives of thermal efficiency and reliability. The priority of the multiple goals is implemented through weighting each deviation variable. These weights can be varied to reflect designer preferences for one goal over another, yielding different solutions.
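The Archimedean (weighted-sum) deviation function described above can be sketched as follows; the weight and deviation values are hypothetical illustrations, with the weights chosen to match the form of Eq. (30).

```python
def deviation_function(weights, deviations):
    """Archimedean deviation function minimized in the cDSP.

    weights    : designer preference W_i for each goal (summing to one)
    deviations : deviation variable d_i of each goal from its target
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to one"
    return sum(w * d for w, d in zip(weights, deviations))

# Weighting in the style of Eq. (30): half the emphasis on energy
# conservation, the rest split between the two reliability goals
Z = deviation_function([0.5, 0.25, 0.25], [0.2, 0.1, 0.3])
```

The search then seeks the control-factor combination that drives Z toward zero; changing the weights changes which goal's deviation dominates the total.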
within the cabinet, simply supplying more cold air from the CRAC units is not an effective cooling
solution.
To investigate this problem, the total cabinet heat generation was incremented from 1800 to 2400 W/m, beyond which the problem constraints could not be met. This heat load range spans from the lower bound, where the minimum flow rate of cooling air is required, to the maximum total cabinet power that can be sustained. For each of these incremental heat loads the most energy efficient configuration is found that
simultaneously minimizes the volume of cooling air, the chip temperatures, and the variation of the chip
temperatures, as established by objective Eqs. (17)-(19). The weighting of the goals was established as:
W = {0.5, 0.25, 0.25} (30)
This weighting puts equal emphasis on the cooling energy conservation objective and server reliability
objectives. The resulting values of inlet air velocity and chip power for each cabinet section for increasing
total cabinet power levels are presented in Figure 7 (a).
Figure 7 – (a) Inlet air velocity and power distribution and (b) maximum chip temperature and bounds vs. total cabinet power
From Figure 7 (a), the volume of cooling air required to maintain reliable server operation increases in an
exponential fashion. This increase is to be expected, and from this curve a general estimate of cooling
costs for various heat loads can be extrapolated based on CRAC unit operating costs for the facility. Also
in Figure 7 it is evident that as the total power level increases, the server power distribution also must
change, adapting to the new flow conditions and resulting temperature fields for maximum efficiency. At
the inlet velocity of 0.54 m/s used in the most efficient baseline case, the cabinet dissipates nearly 2250 W/m when using a more thermally efficient power distribution. This shows that, by efficiently utilizing the airflow distribution within the server cabinet, much more power can be reliably dissipated with the same volume of cooling air than with a uniform power distribution.
To check that the optimization algorithm has converged correctly, the maximum temperature bounds are presented in Figure 7 (b). In this figure the maximum chip temperature over all the servers is plotted versus total cabinet power level. It is evident that the maximum chip temperature constraint, set by the worst case scenario constraint in Eq. (15), is never violated: the temperature upper bound remains at 85 oC, not the mean value. It is also evident in this figure how the temperature mean and variability respond to increasing cabinet heat loads and the resulting changes in power distribution and inlet air velocity.
To validate the solutions of the cDSP, converged cases for 1800, 2100, and 2400 W/m power levels were
simulated using the CFD model, testing the full range of solutions produced. It was found that the CFD
results yielded chip temperatures within an average of 5% of the FMP computed solution. At a higher level of validation, the server power distribution found to be most efficient approximates a hyperbolic tangent, demonstrated to be a highly efficient configuration in [43]. This result is encouraging, as that investigation used a very high fidelity three-dimensional CFD analysis of a cabinet with close to 2 million nodes.
The Pareto frontier is traced out by changing the weights in the Archimedean objective function in the cDSP. This approach of plotting a Pareto curve between the optimal and robust solution points is investigated in [44] for simple design problems; however, the focus there is on developing the frontier for problems where a linear weighting may not identify all points along it. In this application the linear weighting approach was found to provide an adequate mapping of the frontier.
A Pareto frontier for a constant total cabinet power, Qtotal = 2300 W/m, is constructed showing the feasible limit of each design variable as the goal changes from an optimal to a robust solution. To generate this frontier, the weighting of the inlet air velocity minimization goal and of the chip temperature variation minimization goal are varied from 1 to 0 and from 0 to 1 respectively, while the chip temperature minimization goal is weighted with a 0, defining W as:
W(i) = {1 − i, 0, i} , i = 0, 0.1, …, 1 (31)
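The weight sweep of Eq. (31) can be generated programmatically, as in this minimal sketch (the step of 0.1 and the rounding to one decimal follow the equation as stated):

```python
# Weight vectors of Eq. (31): trade the inlet-air-velocity goal (first
# entry) against the chip-temperature-variation goal (third entry) in
# steps of 0.1, with the chip-temperature goal itself weighted zero.
weights = [(round(1.0 - 0.1 * k, 1), 0.0, round(0.1 * k, 1)) for k in range(11)]
```

Each tuple sums to one, and running the cDSP once per tuple yields the eleven points along the frontier, from the fully optimal weighting {1, 0, 0} to the fully robust weighting {0, 0, 1}.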
The resulting Pareto frontier is plotted in Figure 8 for the response and all variable combinations.
Figure 8 – Pareto frontiers of average chip temperature (oC) and section chip power (W) vs. inlet air velocity (m/s), with the feasible space indicated for each variable combination
This plot demonstrates the differences in design parameters between a data center that is highly efficient with little variability, lending itself to a more optimal solution, and one that is more loosely controlled or needs a high level of reliability, requiring a more robust solution. The purpose of the Pareto frontier is to investigate the requirements of obtaining this more robust solution.
Viewing Figure 8, as the priority changes from optimal to robust, the point spacing increases slightly, showing that more cooling air flow is required for only a slightly more robust solution. The chip temperature subplot further shows that the chip temperatures do not decrease linearly either. This means that a point towards the middle
of the curve represents the best balance of minimization of cooling air flow rate and temperature variation
minimization. The designer, accounting for the amount of variability in the system under consideration,
specifies the location of this point, yielding the final design parameters.
More important than the analysis of the server chip temperatures is the amount of variability in the temperature response. To create a measure of this variability for the entire cabinet, the sum of the absolute values of the slopes of the temperature response with respect to the design variables is computed:
SVin = Σ_{i=1}^{s} |∂Ti/∂x1| (32)

SQ = Σ_{j=2}^{n} Σ_{i=1}^{s} |∂Ti/∂xj| (33)
where n is the number of design variables and s is the number of servers. The measure is divided into two functions because the units of the slopes differ. Equation (32) computes the slope of the temperature response with respect to Vin, and Eq. (33) with respect to the sectional chip powers Qa,b,c, assuming a worst case scenario.
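Under these definitions, Eqs. (32)-(33) amount to column sums of absolute sensitivities; a minimal sketch follows, in which the slope values for two servers and three design variables are hypothetical.

```python
import numpy as np

def variability_measures(dTdx):
    """Cabinet temperature-variability measures, Eqs. (32)-(33).

    dTdx : s-by-n array of slopes of each server's chip temperature
           with respect to each design variable; column 0 is the inlet
           velocity Vin, the remaining columns the sectional chip powers.
    """
    dTdx = np.abs(np.asarray(dTdx, dtype=float))
    S_Vin = dTdx[:, 0].sum()   # Eq. (32): sum over servers for Vin
    S_Q = dTdx[:, 1:].sum()    # Eq. (33): sum over servers and powers
    return S_Vin, S_Q

# Hypothetical slopes for two servers and three design variables
S_Vin, S_Q = variability_measures([[20.0, -0.3, 0.2],
                                   [-15.0, 0.1, 0.4]])
```

The two measures are kept separate because the column units differ: S_Vin carries oC per m/s, while S_Q carries oC per W/m.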
Plotting these measures as a function of the weighting value i as it is changed from optimal to robust yields Figure 9.
Figure 9 - Cabinet chip temperature variability (SVin and SQ) vs. weighting value i, from the optimal to the robust solution
Viewing Figure 9, the rough average temperature variability per server can be computed by dividing S by 10. The more robust solution point reduces the
potential variation in chip temperatures by an average of 7 oC per m/s change in Vin and 0.4 oC per W/m
change in Q. This means that, with the fairly conservative bounds of variability used in this investigation, the average variability is reduced by close to 5 oC, or 20%. Although this may seem insignificant, it is
important to remember that the CRAC units can accurately control the room temperature to within a single degree and operate continuously, 24 hours a day, 365 days a year; thus this reduction
constitutes significant savings. Note that this curve was generated for a cabinet power close to the upper
limit of the system, and by using a lower total cabinet power of 2000 W/m the average variability is
reduced by close to 15 oC, or 60%.
This increased operational stability is obtained not by changing the source of the variability, but only by re-configuring the cabinet. The cost of this increased stability is a redistribution of the power load, which carries no penalty, and an increase in the output of the CRAC units to provide the server cabinet with a 0.2 m/s increase in supply air flow rate. A further benefit of this configuration is a reduction in chip temperatures of 3 oC. Therefore the final tradeoffs between a robust solution, an optimal
solution, or anywhere in between are known to the designer. The final decision will be based upon the
amount of variability in the data center, and the cost of increasing the flow rate of the CRAC units versus
the cost of lowering the supply air temperature; there is no universal “degradation” of the solution moving
along the Pareto frontier. Overall, this Pareto approach gives the designer far more information and freedom in configuring the data center cabinets for the desired goals than a single application of the weighted sum approach.
7 CLOSURE
The results of using the proposed approach to design a robust server cabinet configuration are promising.
The key results are:
• 50% more power than a uniform distribution can be reliably dissipated while maintaining
equal emphasis on energy efficiency and stability.
• 20-60% reduction in the average potential variability of the processors can be achieved
through emphasizing design robustness.
• Any solution between the optimal and robust can be selected from the family of solutions
along the Pareto frontier generated by the cDSP.
• The small degree of analysis error incurred through assumptions and approximate models is
nullified through the robustness of the solutions obtained, verified through CFD analysis.
In our opinion, the proposed approach represents a step towards addressing the challenge of reliable data center thermal management. Further, we assert that the proposed approach can be used to increase thermal efficiency, considerably reducing the energy costs and environmental impact of operating a data center, while simultaneously increasing the operational stability of the center, reducing the costs associated with downtime and backup system maintenance.
The approach presented is founded upon the integration of three constructs: the FMP augmented POD,
robust design principles, and the cDSP, to solve the challenges of flow complexity, system variability, and
multiple objective tradeoffs, as shown in Figure 1 and described in Section 2. The viability of the approach
is demonstrated through the application to the data center server cabinet example in Section 5. Analysis of
the results obtained shows that the approach enables the computation of superior solutions, in both ultimate power dissipation and reduction in variability, over the traditionally implemented method described in Section 6. Although the robust design implementation is simple, the results are still effective, and the
meta-model can be further integrated with any more complex robust design implementation. In this paper
only a single, albeit complex, example is presented. However, the FMP meta-modeling approach has been
applied to many problems of varying scale and complexity [20], as have the cDSP and robust design
methods. Hence there is no fundamental reason this proposed approach cannot be extended to the more
general domain of the robust design of thermal-fluid systems with equally successful results.
8 ACKNOWLEDGEMENTS
The authors acknowledge the support of the Consortium for Energy Efficient Thermal Management
(CEETHERM), a joint initiative between Georgia Institute of Technology and the University of Maryland.
9 REFERENCES
[1] Pope, S.B., Turbulent Flows. 2000, New York: Cambridge University Press.
[2] Launder, B.E. and Spalding, D.B., Lectures in Mathematical Models of Turbulence. 1972,
London, England: Academic Press.
[3] Schmidt, R. and Iyengar, M. "Effect of Data Center Layout on Rack Inlet Air Temperatures".
ASME InterPACK. 2005. San Francisco, California, USA: ASME, IPACK2005-73385.
[4] Schmidt, R., Karki, K.C., Kelkar, K.M., Radmehr, A., and Patankar, S.V. "Measurements and
Predictions of the Flow Distribution Through Perforated Tiles in Raised Floor Data Centers". The
Pacific Rim / ASME International Electronics Packaging Technical Conference and Exhibition.
2001. Kauai, Hawaii, IPACK2001-15728.
[5] Patel, C., Bash, C., Belady, C., Stahl, L., and Sullivan, D. "Computational Fluid Dynamics
Modeling of High Compute Density Data Centers to Assure System Inlet Air Specifications".
IPACK'01 - The Pacific Rim/ASME International Electronics Packaging Technical Conference
and Exhibition. 2001. Kauai, Hawaii: ASME, IPACK2001-15622.
[6] Iwasaki, H. and Ishizuka, M. "Natural Convection Air Cooling Characteristics of Plate Fins in a
Ventilated Electronic Cabinet". ITHERM 1998 - Eighth Intersociety Conference on Thermal and
Thermomechanical Phenomena in Electronic Systems. 1998. Seattle, Washington, p. 124-129.
[7] Patel, C.D., Sharma, R., Bash, C., and Beitelmal, M. "Thermal Considerations in Cooling of Large
Scale High Compute Density Data Centers". ITHERM 2002 - Eighth Intersociety Conference on
Thermal and Thermomechanical Phenomena in Electronic Systems. 2002. San Diego, California,
p. 767- 776.
[8] Shrivastava, S., Sammakia, B., Schmidt, R., and Iyengar, M. "Comparative Analysis of Different
Data Center Airflow Management Configurations". ASME InterPACK. 2005. San Francisco,
California, USA: ASME, IPACK2005-73234.
[9] Rambo, J. and Joshi, Y. "Multi-Scale Modeling of High Power Density Data Centers".
InterPACK03 - The Pacific Rim / ASME International Electronics Packaging Technical
Conference and Exhibition. 2003. Kauai, Hawaii, InterPack2003-35297.
[10] Rambo, J. and Joshi, Y., Thermal Modeling of Technology Infrastructure Facilities: A Case Study
of Data Centers, in The Handbook of Numerical Heat Transfer, p. 821-849, W.J. Minkowycz, E.M.
Sparrow, and J.Y. Murthy, Editors. New York: Taylor and Francis, 2006.
[11] Shah, A., Carey, V., Bash, C., and Patel, C. "Exergy-Based Optimization Strategies for Multi-
Component Data Center Thermal Management: Part I, Analysis". ASME InterPACK. 2005. San
Francisco, California, USA: ASME, IPACK2005-73137.
[12] Iyengar, M., Schmidt, R., Sharma, A., McVicker, G., Shrivastava, S., Sri-Jayantha, S., Amemiya,
Y., Dang, H., Chainer, T., and Sammakia, B. "Thermal Characterization of Non-Raised Floor Air
Cooled Data Centers Using Numerical Modeling". ASME InterPACK. 2005. San Francisco,
California, USA: ASME, IPACK2005-73387.
[13] Bhopte, S., Agonafer, D., Schmidt, R., and Sammakia, B. "Optimization of Data Center Room
Layout to Minimize Rack Inlet Air Temperature". ASME InterPACK. 2005. San Francisco,
California, USA: ASME, IPACK2005-73027.
[14] Holmes, P., Lumley, J.L., and Berkooz, G., Turbulence, Coherent Structures, Dynamical Systems
and Symmetry. 1996, Great Britain: Cambridge University Press.
[15] Chen, W., Allen, J.K., Tsui, K., and Mistree, F., 1996, "A Procedure for Robust Design:
Minimizing Variations Caused by Noise Factors and Control Factors". ASME Journal of
Mechanical Design. 118: p. 478-485.
[16] Mistree, F., Hughes, O.F., and Bras, B., The Compromise Decision Support Problem and the
Adaptive Linear Programming Algorithm, in AIAA Structural Optimization: Status and Promise, p.
247-286, M.P. Kamat, Editor. Washington, D.C.: AIAA, 1993.
[17] Simpson, T., Peplinski, J., Koch, P., and Allen, J., 2001, "Metamodels for Computer-Based
Engineering Design: Survey and Recommendations". Engineering With Computers. 17: p. 129-
150.
[18] Loeve, M., Probability Theory. 1955, Princeton, NJ: Van Nostrand.
[19] Rambo, J. and Joshi, Y. "Reduced Order Modeling of Steady Turbulent Flows Using the POD".
ASME Summer Heat Transfer Conference. 2005. San Francisco, California, USA: ASME,
HT2005-72143.
[20] Rolander, N., 2005 "An Approach for the Design of Data Center Server Cabinets for Thermal
Efficiency," MS Thesis, MS, George W. Woodruff School of Mechanical Engineering, Georgia
Institute of Technology, Atlanta, GA.
[21] Lumley, J., The Structure of Inhomogeneous Turbulent Flows, in Atmospheric Turbulence and
Radio Wave Propagation, p. 166-178, A.M. Yaglom and V.I. Tatarsky, Editors. Nauka, Moscow,
1967.
[22] Aubry, N., Holmes, P., Lumley, J., and Stone, E., 1988, "The Dynamics of Coherent Structures in
the Wall Region of a Turbulent Boundary Layer". Journal of Fluid Mechanics. 192: p. 155-173.
[23] Sirovich, L., 1987, "Turbulence and the Dynamics of Coherent Structures, Part II: Symmetries and
Transformations". Quart. Appl. Math. XLV(N3): p. 573-582.
[24] Berkooz, G., Holmes, P., Lumley, J., and Mattingly, J., 1997, "Low-Dimensional Models of
Coherent Structures in Turbulence". Physics Reports - Review Section of Physics Letters. 287(N4):
p. 338-384.
[25] Webber, G., Handler, R., and Sirovich, L., 1997, "The Karhunen-Loeve Decomposition of
Minimal Channel Flow". Physics of Fluids. 9(4): p. 1054-1066.
[26] Moin, P. and Moser, R., 1989, "Characteristic-Eddy Decomposition of Turbulence in a Channel".
Journal of Fluid Mechanics. 200: p. 417-509.
[27] Ball, K., Sirovich, L., and Keefe, L., 1991, "Dynamical Eigenfunction Decomposition of
Turbulent Channel Flow". International Journal for Numerical Methods in Fluids. 12: p. 585-604.
[28] Rambo, J. and Joshi, Y. "Physical Models in Data Center Airflow Simulations". IMECE-03 -
ASME International Mechanical Engineering Congress and R&D Exposition. 2003. Washington
D.C., IMECE03-41381.
[29] Boucher, T.D., Auslander, D.M., Bash, C.E., Federspiel, C.C., and Patel, C.D. "Viability of
Dynamic Cooling Control in a Data Center Environment". Inter Society Conference on Thermal
Phenomena. 2004: IEEE, p. 593-600.
[30] Sharma, R.K., Bash, C., Patel, C.D., Friedrich, R.J., and Chase, J.S., "Balance of Power: Dynamic
Thermal Management for Internet Data Centers". 2003, Whitepaper issued by Hewlett-Packard
Laboratories Palo Alto, Technical Report: HPL-2003-5.
[31] Patel, C., Sharma, R., Bash, C., and Graupner, S. "Energy Aware Grid: Global Workload
Placement based on Energy Efficiency". International Mechanical Engineering Congress and
Exposition. 2003. Washington, D.C., IMECE 2003-41443.
[32] VanGilder, J.W. and Schmidt, R. "Airflow Uniformity Through Perforated Tiles in a Raised-Floor
Data Center". ASME InterPACK. 2005. San Francisco, California, USA: ASME, IPACK2005-
73375.
[33] Radmehr, A., Schmidt, R., Karki, K.C., and Patankar, S.V. "Distributed Leakage Flow in Raised-
Floor Data Centers". ASME InterPACK. 2005. San Francisco, California, USA: ASME,
IPACK2005-73273.
[34] Fluent Incorporated, Fluent v. 6.1 Users Manual. 2001, Lebanon, New Hampshire: Fluent
Incorporated.
[35] Rolander, N., Rambo, J., Joshi, Y., and Mistree, F. "Robust Design of Air-Cooled Server Cabinets
for Thermal Efficiency". ASME InterPACK. 2005. San Francisco, California, USA: ASME,
IPACK2005-73171.
[36] Deane, A.E., Kevrekidis, I.G., Karniadakis, G.E., and Orszag, S.A., 1991, "Low-Dimensional
Models for Complex Geometry Flows: Application to Grooved Channels and Circular Cylinders".
Physics of Fluids A. 3(10): p. 2337-2354.
[37] Ma, X. and Karniadakis, G.E., 2002, "A Low-Dimensional Model for Simulating Three-
Dimensional Cylinder Flows". Journal of Fluid Mechanics. 458: p. 181-190.
[38] Park, H.M. and Cho, D.H., 1996, "Low Dimensional Modeling of Flow Reactors". International
Journal of Heat and Mass Transfer. 36: p. 359-368.
[39] Sirovich, L. and Tarman, I.H., 1998, "Extensions to the Karhunen-Loeve based Approximations of
Complicated Phenomena". Computer Methods in Applied Mechanics and Engineering. 155: p.
359-368.
[40] Patankar, S.V., Numerical Heat Transfer and Fluid Flow. 1980, New York: McGraw Hill.
[41] Phadke, M.S., Quality Engineering using Robust Design. 1989, Englewood Cliffs, New Jersey:
Prentice Hall.
[42] Gill, P., Murray, E.W., and Wright, M.H., Practical Optimization. 1981, London: Academic Press.
[43] Rambo, J. and Joshi, Y., 2005, "Arranging Servers in a Data Processing Cabinet to Optimize
Thermal Performance". ASME Journal of Electronics Packaging. (Publication appearing in Dec
2005).
[44] Mourelatos, Z.P. and Liang, J. "An Efficient Unified Approach for Reliability and Robustness in
Engineering Design". NSF Workshop on Reliable Engineering Computing. 2004. Savannah,
Georgia, p. 127-138.