DC Training Manual_Part_1_DC_Design_Basics_v4

The 'Datacenter Design Reference Guide' provides a comprehensive overview of data center design, covering essential concepts, technologies, and methodologies for creating efficient and reliable data centers. It emphasizes the importance of a structured design approach, addressing challenges such as power density, redundancy, and security. The guide serves as a practical tool for professionals involved in data center planning and management, offering insights into modern trends and best practices.

Datacenter Design Reference Guide

Part 1 of 4
Datacenter Design
Basics

Produced by:
Hashim Ahmed Almansoor
Senior IT/Datacenter Consultant
Email: almansoorh@hotmail.com
Mobile: +967 771 600 555
Sana’a - Yemen

Issued by:
KUN Center for Business Development Solutions & Services
Email: info@kuncenterye.com
Office Phone: +967 1 422 999
Sana’a - Yemen
Table of Contents

Preface
Datacenter Introduction
    BACKGROUND/HISTORY
    MAJOR CHARACTERISTICS
Datacenter Technologies
    BLADE TECHNOLOGY
    BLADE SERVERS
    BLADE ENCLOSURE
    UNIFIED FABRIC TECHNOLOGY
    WHAT IS CISCO UNIFIED FABRIC?
    CISCO UNIFIED FABRIC FOR YOUR BUSINESS
    TECHNOLOGY AND TRENDS ENABLED BY CISCO UNIFIED FABRIC
    SERVER VIRTUALIZATION
    WHAT IS SERVER VIRTUALIZATION
    WHY VIRTUALIZE SERVERS
    HOW DOES VIRTUALIZATION WORK?
    SUMMARY
    DATACENTER VIRTUALIZATION/CLOUD
Datacenter Design Concepts
    ARCHITECTING A PRODUCTIVE DATA CENTER
    MAKE IT ROBUST
    MAKE IT MODULAR
    MAKE IT FLEXIBLE
    STANDARDIZE
    PROMOTE GOOD HABITS
    ESTABLISHING DATA CENTER DESIGN CRITERIA
    AVAILABILITY
    ONE ROOM OR SEVERAL?
    LIFE SPAN
    BUDGET DECISIONS
    MANAGING A DATA CENTER PROJECT
    THE DESIGN PACKAGE
    WORKING WITH EXPERTS
    TIPS FOR A SUCCESSFUL PROJECT
DC Site Design
    ASSESSING VIABLE LOCATIONS FOR YOUR DATA CENTER
    BUILDING CODES AND THE DATA CENTER SITE
    NATURAL DISASTERS
    EVALUATING PHYSICAL ATTRIBUTES OF THE DATA CENTER SITE
    RELATIVE LOCATION
    DISASTER RECOVERY OPTIONS
    PRE-EXISTING INFRASTRUCTURE
    CONFIRMING SERVICE AVAILABILITY TO THE DATA CENTER SITE
    PRIORITIZING NEEDS FOR THE DATA CENTER SITE
    SIZING THE DATA CENTER
    FINANCIAL AND OTHER CONSIDERATIONS WHEN SIZING THE DATA CENTER
    EMPLOYEE-BASED SIZING METHOD
    EQUIPMENT-BASED SIZING METHOD
    OTHER INFLUENCING FACTORS WHEN SIZING YOUR DATA CENTER
    ASSOCIATED DATA CENTER SUPPORT ROOMS
    DEFINING SPACES FOR PHYSICAL ELEMENTS OF YOUR DATA CENTER



Preface
Designing a data center, whether a new facility or retrofitting an existing one, is no easy, simple task. If
you don’t interact with people well, if you can’t communicate effectively with people who are not in
your area of expertise, if you don’t enjoy solving difficult problems, if you want a simple, stress-free
work life, don’t design a data center!!!

Okay, now that all the loafing cowards have stopped reading, we can start talking about what this
manual, along with the training lectures and labs, hopes to accomplish.

This manual attempts to walk you through the design process and offers a method that can be used to
create a design that meets the requirements of your data center. This manual is not a manual of designs
or specific to any technology. It is a tool to work through your requirements and find solutions to create
the best design for those requirements.

Early in my career as a datacenter deployment manager, someone said to me, “Data centers are
black magic. They are not understandable or discernible by mere mortals.” I can’t print my response to
that person, but that brief confrontational conversation stuck in my brain. I can tell you, designing data
centers isn’t “black magic.” A data center is a complex and interdependent environment; however, it can
be broken down into smaller, more manageable pieces. Methodologies can be used that make designing
data centers understandable and discernible by mere mortals. To that person many years ago who tried
to tell me otherwise, I have this to say: “You were wrong, and this manual proves it!”

Over the years, I’ve worked in a number of different data centers, and in that time I’ve had the
opportunity to talk to many of my customers about their centers and take tours through them. What I
repeatedly found, with very few exceptions, was that there was no overall design methodology used
when planning these centers. If there was a methodology, it usually came out of overcoming one or two
problems that had bitten these people in previous data centers. Sometimes the problem areas were so
over-designed that it forced other design areas to suffer.

Often, the people who designed the space had never worked in data center environments. They
typically designed commercial spaces like offices and warehouses and they used one basic method or
formula for the design criteria: watts per square foot. This method assumes that the equipment load
across the entire space is uniform. In every data center I have seen, the equipment load has never been
uniform. Add to this that the pieces that make up a data center (power, cooling, floor load,
connectivity, etc.) are all interrelated and dependent on each other. It became very clear that this old
method of watts per square foot was not an effective or efficient design method. A better method that
could address these issues was needed.
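
To illustrate why a single watts-per-square-foot figure can mislead, the short sketch below (with a room
size and per-cabinet loads that are purely assumed example values) compares the room average against
the actual per-cabinet loads:

    # Why a single watts-per-square-foot figure misleads: the average hides hotspots.
    # Room size and per-cabinet loads below are assumed example values only.
    room_area_sqft = 2000
    cabinets = {
        "storage row":   [3_000] * 10,    # watts per cabinet
        "1U server row": [6_000] * 10,
        "blade row":     [25_000] * 4,    # dense cluster cabinets
    }

    total_watts = sum(sum(row) for row in cabinets.values())
    print(f"Room average: {total_watts / room_area_sqft:.0f} W/sq ft")

    # Sizing power and cooling to the room average starves the blade row
    # while over-provisioning the storage row.
    for name, row in cabinets.items():
        print(f"{name:14s}: {max(row) / 1000:.0f} kW peak cabinet")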

When I started compiling this training manual, I looked at other sources from different vendors,
technologies, articles, blogs, and books related to the datacenter design process. Some of the
information will always apply in datacenter design, while other information will change over time. So let’s start by
introducing datacenters.



Datacenter Introduction
Background/History
A data center or computer centre (also datacenter) is a facility used to house computer systems and
associated components, such as telecommunications and storage systems. It generally includes
redundant or backup power supplies, redundant data communications connections, environmental
controls (e.g., air conditioning, fire suppression) and security devices.

Data centers have their roots in the huge computer rooms of the early ages of the computing industry.
Early computer systems were complex to operate and maintain, and required a special environment in
which to operate. Many cables were necessary to connect all the components, and methods to
accommodate and organize these were devised, such as standard racks to mount equipment, elevated
floors, and cable trays (installed overhead or under the elevated floor). Also, a single mainframe
required a great deal of power, and had to be cooled to avoid overheating. Security was important –
computers were expensive, and were often used for military purposes. Basic design guidelines for
controlling access to the computer room were therefore devised.

During the boom of the microcomputer industry, and especially during the 1980s, computers started to
be deployed everywhere, in many cases with little or no care about operating requirements. However,
as information technology (IT) operations started to grow in complexity, companies grew aware of the
need to control IT resources. With the advent of client-server computing, during the 1990s,
microcomputers (now called "servers") started to find their places in the old computer rooms. The
availability of inexpensive networking equipment, coupled with new standards for network structured
cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the
company. The use of the term "data center," as applied to specially designed computer rooms, started
to gain popular recognition about this time.

The boom of data centers came during the dot-com bubble. Companies needed fast Internet
connectivity and nonstop operation to deploy systems and establish a presence on the Internet.
Installing such equipment was not viable for many smaller companies. Many companies started building
very large facilities, called Internet data centers (IDCs), which provide businesses with a range of
solutions for systems deployment and operation. New technologies and practices were designed to
handle the scale and the operational requirements of such large-scale operations. These practices
eventually migrated toward the private data centers, and were adopted largely because of their
practical results. With an increase in the uptake of cloud computing, business and government
organizations are scrutinizing data centers to a higher degree in areas such as security, availability,
environmental impact and adherence to standards. Standard Documents from accredited professional
groups, such as the Telecommunications Industry Association, specify the requirements for data center
design. Well-known operational metrics for data center availability can be used to evaluate the business
impact of a disruption. There is still a lot of development being done in operation practice, and also in
environmentally friendly data center design. Data centers are typically very expensive to build and
maintain.



Major Characteristics

IT operations are a crucial aspect of most organizational operations. One of the main concerns is
business continuity; companies rely on their information systems to run their operations. If a system
becomes unavailable, company operations may be impaired or stopped completely. It is necessary to
provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption.
Information security is also a concern, and for this reason a data center has to offer a secure
environment which minimizes the chances of a security breach. A data center must therefore keep high
standards for assuring the integrity and functionality of its hosted computer environment. This is
accomplished through redundancy of both fiber optic cables and power, which includes emergency
backup power generation.

The Telecommunications Industry Association's TIA-942, Telecommunications Infrastructure Standard
for Data Centers, specifies the minimum requirements for telecommunications infrastructure of data
centers and computer rooms, including single-tenant enterprise data centers and multi-tenant Internet
hosting data centers. The topology proposed in this document is intended to be applicable to any size
data center.[3]

Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces,
provides guidelines for data center spaces within telecommunications networks, and environmental
requirements for the equipment intended for installation in those spaces. These criteria were developed
jointly by Telcordia and industry representatives. They may be applied to data center spaces housing
data processing or Information Technology (IT) equipment. The equipment may be used to:

• Operate and manage a carrier’s telecommunication network
• Provide data center based applications directly to the carrier’s customers
• Provide hosted applications for a third party to provide services to their customers
• Provide a combination of these and similar data center applications.

Effective data center operation requires a balanced investment in both the facility and the housed
equipment. The first step is to establish a baseline facility environment suitable for equipment
installation. Standardization and modularity can yield savings and efficiencies in the design and
construction of telecommunications data centers.

Standardization means integrated building and equipment engineering. Modularity has the benefits of
scalability and easier growth, even when planning forecasts are less than optimal. For these reasons,
telecommunications data centers should be planned in repetitive building blocks of equipment, and
associated power and support (conditioning) equipment when practical. The use of dedicated
centralized systems requires more accurate forecasts of future needs to prevent expensive
overbuilding or, perhaps worse, underbuilding that fails to meet future needs.

The "lights-out" data center, also known as a darkened or a dark data center, is a data center that,
ideally, has all but eliminated the need for direct access by personnel, except under extraordinary
circumstances. Because of the lack of need for staff to enter the data center, it can be operated without
lighting. All of the devices are accessed and managed by remote systems, with automation programs
used to perform unattended operations. In addition to the energy savings, reduction in staffing costs
and the ability to locate the site further from population centers, implementing a lights-out data center
reduces the threat of malicious attacks upon the infrastructure.[4][5]

There is a trend to modernize data centers in order to take advantage of the performance and energy
efficiency increases of newer IT equipment and capabilities, such as cloud computing. This process is
also known as data center transformation.[6]

Organizations are experiencing rapid IT growth but their data centers are aging. Industry research
company International Data Corporation (IDC) puts the average age of a data center at nine years old.[6]
Gartner, another research company, says data centers older than seven years are obsolete.[7]

In May 2011, the data center research organization Uptime Institute reported that 36 percent of the large
companies it surveyed expect to exhaust IT capacity within the next 18 months.[8]

Data center transformation takes a step-by-step approach through integrated projects carried out over
time. This differs from a traditional method of data center upgrades that takes a serial and siloed
approach.[9] The typical projects within a data center transformation initiative include
standardization/consolidation, virtualization, automation and security.

• Standardization/consolidation: The purpose of this project is to reduce the number of data
  centers a large organization may have. This project also helps to reduce the number of
  hardware and software platforms, tools, and processes within a data center. Organizations
  replace aging data center equipment with newer equipment that provides increased capacity
  and performance. Computing, networking, and management platforms are standardized so they
  are easier to manage.[10]

• Virtualize: There is a trend to use IT virtualization technologies to replace or consolidate multiple
  pieces of data center equipment, such as servers. Virtualization helps to lower capital and
  operational expenses,[11] and reduce energy consumption.[12] Virtualization technologies are also
  used to create virtual desktops, which can then be hosted in data centres and rented out on a
  subscription basis.[13] Data released by investment bank Lazard Capital Markets reports that 48
  percent of enterprise operations will be virtualized by 2012. Gartner views virtualization as a
  catalyst for modernization.[14]

• Automating: Data center automation involves automating tasks such as provisioning,
  configuration, patching, release management, and compliance. As enterprises suffer from a
  shortage of skilled IT workers,[10] automating tasks makes data centers run more efficiently.

• Securing: In modern data centers, the security of data on virtual systems is integrated with the
  existing security of physical infrastructures.[15] The security of a modern data center must take
  into account physical security, network security, and data and user security.



Datacenter Technologies
Blade Technology:
With the high cost of data center floor space and current advances in technology, new installations with
denser cabinets that require more power and cooling continue to be the trend. Besides the challenges
that new installations present, equipment cabinet upgrades can also be a problem, as the existing power
and cooling currently provided may not support the new cabinet configuration. Surveys show that
Information Technology equipment is typically replaced every 2 to 5 years, depending on the individual
organization and its needs. Surveys also show (see Chart 1) that, when asked about their top three
concerns, heat/power density is the number one concern of data center management.

High-density applications like cluster server configurations have in some cases pushed power
demands as high as 40 kW per cabinet. The required power depends on the equipment, how dense the
cabinet is, and whether redundancy is required. This has led to new and innovative solutions for
providing cabinet-level power utilizing cabinet distribution units (CDUs).
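
As a rough illustration of the arithmetic behind per-cabinet power planning, the sketch below estimates
cabinet load from an assumed blade count and per-blade draw; the 64-blade, 500 W figures are
illustrative assumptions, not vendor specifications:

    # Rough per-cabinet power estimate; blade count and per-blade draw are assumed values.
    def cabinet_power_kw(servers, watts_per_server):
        """Nominal IT load of one cabinet, in kW."""
        return servers * watts_per_server / 1000.0

    load_kw = cabinet_power_kw(servers=64, watts_per_server=500)
    print(f"Nominal IT load: {load_kw:.1f} kW per cabinet")
    # With 2N (A+B) feeds, each feed and its CDU must be sized to carry the full load alone.
    print(f"Required capacity per redundant feed: {load_kw:.1f} kW")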

Blade Servers:
A blade server is a stripped-down server computer with a modular design optimized to minimize the use
of physical space and energy. Whereas a standard rack-mount server can function with (at least) a
power cord and network cable, blade servers have many components removed to save space, minimize
power consumption and other considerations, while still having all the functional components to be
considered a computer. A blade enclosure, which can hold multiple blade servers, provides
services such as power, cooling, networking, various interconnects and management. Together, blades
and the blade enclosure form a blade system (also the name of a proprietary solution from Hewlett-
Packard). Different blade providers have differing principles regarding what to include in the blade itself,
and in the blade system altogether.



In a standard server-rack configuration, 1U (one rack unit, 19" [48 cm] wide and 1.75" [4.45 cm] tall)
defines the minimum possible size of any equipment. The principal benefit and justification of blade
computing relates to lifting this restriction so as to reduce size requirements. The most common
computer rack form-factor is 42U high, which limits the number of discrete computer devices directly
mountable in a rack to 42 components. Blades do not have this limitation. As of 2009, densities of up to
128 discrete servers per rack are achievable with blade systems.[1]
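
The density figures above can be checked with simple rack-unit arithmetic; the 10U enclosure holding 16
half-height blades in this sketch is an assumed example configuration:

    # Back-of-the-envelope rack density comparison; the enclosure size is an assumed example.
    RACK_HEIGHT_U = 42

    servers_1u = RACK_HEIGHT_U // 1                    # one 1U server per rack unit -> 42

    enclosure_height_u, blades_per_enclosure = 10, 16  # assumed 10U enclosure, 16 half-height blades
    enclosures_per_rack = RACK_HEIGHT_U // enclosure_height_u
    servers_blade = enclosures_per_rack * blades_per_enclosure

    print(f"1U servers per 42U rack: {servers_1u}")
    print(f"Blades per 42U rack    : {servers_blade}")
    # Denser enclosures and double-density blades push this toward the ~128-per-rack figure cited above.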

Blade enclosure
The blade enclosure (or chassis) performs many of the non-core computing services found in most computers.
Non-blade systems typically use bulky, hot and space-inefficient components, and may duplicate these
across many computers that may or may not perform at capacity. By locating these services in one place
and sharing them between the blade computers, the overall utilization becomes more efficient. The
specifics of which services are provided may vary by vendor.

HP BladeSystem c7000 enclosure (populated with 16 blades), with two 3U UPS units below.

1.1.1.1 Power
Computers operate over a range of DC voltages, but utilities deliver power as AC, and at higher voltages
than required within computers. Converting this current requires one or more power supply units (or
PSUs). To ensure that the failure of one power source does not affect the operation of the computer,
even entry-level servers may have redundant power supplies, again adding to the bulk and heat output
of the design.

The blade enclosure's power supply provides a single power source for all blades within the enclosure.
This single power source may come as a power supply in the enclosure or as a dedicated separate PSU
supplying DC to multiple enclosures.[2][3] This setup reduces the number of PSUs required to provide a
resilient power supply.



The popularity of blade servers, and their own appetite for power, has led to an increase in the number
of rack-mountable uninterruptible power supply (or UPS) units, including units targeted specifically
towards blade servers (such as the BladeUPS).

2.1.1.1 Cooling
During operation, electrical and mechanical components produce heat, which a system must dissipate to
ensure the proper functioning of its components. Most blade enclosures, like most computing systems,
remove heat by using fans.

A frequently underestimated problem when designing high-performance computer systems involves the
conflict between the amount of heat a system generates and the ability of its fans to remove the heat.
The blade's shared power and cooling means that it does not generate as much heat as traditional
servers. Newer blade-enclosures feature variable-speed fans and control logic, or even liquid cooling-
systems[4][5] that adjust to meet the system's cooling requirements.

At the same time, the increased density of blade-server configurations can still result in higher overall
demands for cooling with racks populated at over 50% full. This is especially true with early-generation
blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling
capacity than a fully populated rack of standard 1U servers. This is because one can fit up to 128 blade
servers in the same rack that will only hold 42 1U rack mount servers.[6]

3.1.1.1 Networking
Blade servers generally include integrated or optional network interface controllers for Ethernet or host
adapters for fibre channel storage systems or Converged Network Adapter for a combined solution of
storage and data via one FCoE interface. In many blades at least one NIC or CNA is embedded on the
motherboard (NOB) and extra interfaces can be added using mezzanine cards.

A blade enclosure can provide individual external ports to which each network interface on a blade will
connect. Alternatively, a blade enclosure can aggregate network interfaces into interconnect devices
(such as switches) built into the blade enclosure or in networking blades.

4.1.1.1 Storage
While computers typically use hard disks to store operating systems, applications and data, these are
not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, E-SATA, SCSI,
SAS DAS, FC and iSCSI) are readily moved outside the server, though not all are used in enterprise-level
installations. Implementing these connection interfaces within the computer presents similar challenges
to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be
removed from the blade and presented individually or aggregated either on the chassis or through other
blades.

The ability to boot the blade from a storage area network (SAN) allows for an entirely disk-free blade, an
example of which implementation is the Intel Modular Server System. This allows more board space to
be devoted to extra memory or additional CPUs.



Depending on vendors, some blade servers may include or exclude internal storage devices.

5.1.1.1 Other blades


Since blade enclosures provide a standard method for delivering basic services to computer devices,
other types of devices can also utilize blade enclosures. Blades providing switching, routing, storage,
SAN and fibre-channel access can slot into the enclosure to provide these services to all members of the
enclosure.

Systems administrators can use storage blades where a requirement exists for additional local storage.

6.1.1.1 Blade Uses


Blade servers function well for specific purposes such as web hosting, virtualization, and cluster
computing. Individual blades are typically hot-swappable. As users deal with larger and more diverse
workloads, they add more processing power, memory and I/O bandwidth to blade servers.

Although blade server technology in theory allows for open, cross-vendor solutions, the stage of
development of the technology as of 2009 is such that users encounter fewer problems when using
blades, racks and blade management tools from the same vendor.

Eventual standardization of the technology might result in more choices for consumers;[12][13] as of 2009
increasing numbers of third-party software vendors have started to enter this growing field.[14]

Blade servers do not, however, provide the answer to every computing problem. One can view them as
a form of productized server-farm that borrows from mainframe packaging, cooling, and power-supply
technology. Very large computing tasks may still require server farms of blade servers, and because of
blade servers' high power density, can suffer even more acutely from the heating, ventilation, and air
conditioning problems that affect large conventional server farms.

Unified Fabric Technology:


CIOs today are the primary interface between the business and the IT department. CIOs understand
what the business needs are and how the IT department can service those needs, both in the short and
the long term as a true partner rather than as a service bureau.

While challenged by budgets and the pressures of business needs, CIOs know that unceasing
technological change due to burgeoning trends such as video, convergence, public/private cloud, and
workload mobility have to be accounted for in data center design and practices. Operational silos and
infrastructure not optimized for virtualized and cloud environments hamper the data center from
becoming the engine of enablement for the business. Complexity from the human and technological
sides of the equation hamper efficiency and impede progress in the data center. With all of this
knowledge, the CIO wants to bring the services that the business needs. But how?

IT department budgets are likely to remain at current levels or decline. In the traditional data center
environment, IT staff focuses about 70 percent or more of its activity on maintenance tasks required to
keep existing infrastructure operating properly. If that ratio can be reversed - to 70 percent of staff time
spent on new projects that focus on the business and 30 percent spent on maintenance tasks - the
needs of the business can be served without costly additional staffing. In addition, because IT staff will
be able to work on new projects rather than simply day-to-day maintenance, employee morale and job
satisfaction are improved, which can reduce costly employee turnover.

IT is the strategic business enabler that must evolve as the business evolves. To do this, IT must be
ahead of emerging trends, and to be a real partner to the business, the IT department needs to increase
the speed at which projects are rolled out. To meet business goals, deployment times that range from 6
to 36 months are unacceptable. In the past, CIOs could increase staff to meet business needs, but
today's budget constraints make that solution no longer feasible. The real solution is to shift the activity
of the current IT staff from the current maintenance of ongoing operations to more business-oriented
projects without endangering the current operations. The impact of IT on overall revenue is reduced
with efficiencies enabled by evolving trends such as virtual desktop infrastructure (VDI).

Evolution in the data center can help transform IT departments, break down organizational silos, and
reduce technological complexity. Private clouds, public clouds, and hybrid combinations of the two,
along with other strategies, can automate the data center and enable self-service for both IT and the
business units. However, making the transition to any of these models is not a simple task, but a
journey that requires multiple steps. Every data center has different requirements to serve the
business. Most IT departments have started on server virtualization and consolidation projects, which
constitute the first steps. 10 Gigabit Ethernet and the evolution of the data center network into a
virtualization-enabling environment are also part of this journey. The next step is to prepare the
network for the journey to the cloud, whether a private or a hybrid private and public cloud environment.

What Is Cisco Unified Fabric?


A key building block for general-purpose, virtualized, and cloud-based data centers, Cisco Unified Fabric
provides the foundational connectivity and unifies storage, data networking, and network services,
delivering architectural flexibility and consistent networking across physical, virtual, and cloud
environments. Cisco Unified Fabric enables CIOs to address the challenges of the data center in a
comprehensive and complete manner. Cisco Unified Fabric creates a true multiprotocol environment on
a single network that enables efficient communication between data center resources. Cisco Unified
Fabric provides the architectural flexibility needed for companies to match their data centers to business
needs and change as technology and the business changes. The functions of the data center are
becoming more automated, shifting the focus from the maintenance of infrastructure to the servicing of
business needs. CIOs need faster application response time not only in the headquarters office, but also
for remote employees and often on a worldwide basis. They also need critical business applications
deployed and upgraded quickly while providing consistency of experience for the end user and the IT
administrator. Operating costs for the data center, including energy (both electrical costs and heating,
ventilation, and air conditioning [HVAC] costs), need to be reduced, or at least not increase as energy
prices increase. For IT to meet these CIO goals, it needs a strong and flexible foundation to run on, and
Cisco Unified Fabric provides the architectural flexibility necessary. Cisco Unified Fabric provides the
networking foundation for the Cisco Unified Data Center on which you can build the data center
architecture, whether you run a traditional data center or are on the journey to full private cloud
computing or hybrid private and public cloud computing.

Cisco Unified Fabric is built on three main pillars: convergence, scalability, and intelligence. Together,
these pillars bring solutions when you need them, enabling optimized resources, faster application
rollout, greater application performance, and lower operating costs. Cisco Unified Fabric can help you
reduce costs, migrate to the next-generation data center, and bring value to your business.

7.1.1.1 Convergence
Convergence of the data center network is the melding of the storage network (SAN) with the general
data network (LAN). Convergence is not an all-or-nothing exercise, despite fears to the contrary.
Convergence, just like the private cloud data center, is a journey that has many steps. Companies need
to keep using their current SAN infrastructure while extending it gradually, transparently, and non-
disruptively into the Ethernet network. The traditionally separate LAN and SAN fabrics evolve into a
converged, unified storage network through normal refresh cycles that replace old servers containing
host bus adapters (HBAs) with new ones containing converged network adapters (CNAs), and storage
devices undergo a similar refresh process. Customer investments are protected throughout their service
and financial life; transitions are gradual and managed.

One concern about the converged network is that Ethernet networks are not reliable enough to handle
sensitive storage traffic: storage traffic needs to arrive with dependable regularity, in order, and
without dropping any frames. However, with fully standardized IEEE Data Center Bridging (DCB), Cisco
provides lossless, in-order reliability for data center environments in conjunction with Cisco's work with
INCITS on Fibre Channel over Ethernet (FCoE). Cisco customers can deploy an Ethernet network for the
data center that conforms to the needs of storage traffic, with a lossless, in-order, highly reliable
network for the data center. The reliability features that are so necessary for storage traffic are also
becoming increasingly necessary for general data center traffic, proving that these protocols and
implementations are not merely storage specific, but a valuable part of the overall effort to increase the
reliability of the data center environment as it evolves.

One of the main ways in which Cisco Unified Fabric brings about a converged network in the data center
is through transparent integration. Transparency means that not only does a Cisco Unified Fabric bring
the essential convergence, but it also integrates with the existing infrastructure, preserving the
customer's investment in current SAN technology. Both the Cisco MDS 9000 Family and the Cisco
Nexus® product family have features that facilitate network convergence. For instance, the Cisco MDS
9000 Family can provide full, bidirectional bridging for FCoE traffic to older Fibre Channel-only storage
arrays and SANs. Similarly, servers attached through HBAs can access newer storage devices connected
through FCoE ports. The Cisco Nexus 5548UP and 5596UP Switches have unified ports. These ports can
support 10 Gigabit Ethernet (including FCoE) or Fibre Channel. With this flexibility, customers can deploy
these Cisco Nexus models now connected to traditional systems with HBAs and convert to FCoE or other
Ethernet- and IP-based storage protocols such as Small Computer System Interface over IP (iSCSI) or
network-attached storage (NAS) as those servers are refreshed.



Because the Cisco Nexus Family and Cisco MDS 9000 Family run the same operating system, Cisco NX-OS
Software, IT staff knowledge and scripts apply across all switching platforms in the converged network.
For example, a new server with an HBA is zoned the same way on both a Cisco MDS 9000 Family and
Cisco Nexus Family switch, and it would be zoned in the same way on both switch families if it had a
CNA.

The transparent nature of convergence with Cisco Unified Fabric also extends to management. Cisco
Data Center Network Manager (DCNM), Cisco's primary management tool, is optimized to work with
both the Cisco MDS 9000 and Cisco Nexus Families. Cisco DCNM provides a secure management
environment that can manage a fully converged network and also monitor and automate common
network administration tasks. With Cisco DCNM, customers can manage storage and general data
center networks as a single entity. By using a familiar tool to manage both networking and storage, Cisco
eases the transition from separate storage and data networks to a converged environment.

Consolidation of the general data and storage network can save customers a lot of money. For example,
customers can significantly decrease the number of physical cables and ports by moving to a converged
10 Gigabit Ethernet network because the number of cables required for reliability and application
bandwidth is significantly reduced. A standard server requires at least four networking cables: two for
the SAN and two for the LAN with current 1 Gigabit Ethernet and Fibre Channel technology. Often, more
1 Gigabit Ethernet ports are needed to meet bandwidth requirements and to provide additional
connections for server management and for a private connection for server clusters. Two 10 Gigabit
Ethernet converged ports can replace all these ports, providing a cable savings of at least 2:1. From a
larger data center perspective, this cable reduction means fewer ports and the capability to decrease
the number of switches and layers in the data center, correspondingly reducing the amount of network
oversubscription. Reducing cabling saves both acquisition cost and the cost of running the cables, and it
reduces cooling costs by improving airflow.
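
The cable arithmetic described above can be made concrete with a small sketch; the per-server cable
counts follow the typical layout mentioned in the text, with the extra management and cluster links
added as assumptions:

    # Cable-count comparison per server, before and after LAN/SAN convergence.
    legacy = {
        "1 GbE LAN (redundant pair)": 2,
        "Fibre Channel SAN (redundant pair)": 2,
        "management / cluster links (assumed extras)": 2,
    }
    converged = {
        "10 GbE converged, FCoE-capable (redundant pair)": 2,
    }

    legacy_total = sum(legacy.values())
    converged_total = sum(converged.values())
    print(f"Legacy cables per server   : {legacy_total}")
    print(f"Converged cables per server: {converged_total}")
    print(f"Savings ratio              : {legacy_total // converged_total}:1")

    # Across an assumed row of 40 racks with 16 servers each, the reduction multiplies out quickly.
    servers = 40 * 16
    print(f"Cables removed across the row: {(legacy_total - converged_total) * servers}")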

Also, by eliminating or reducing the second network, customers end up with less equipment in the data
center, saving on costly rack space, power, and cooling and making the overall data center much more
efficient. However, the biggest cost savings is the capability for administrators to shift their time from
maintenance of two separate networks and their associated cables and hardware to working on projects
that directly benefit the business.

8.1.1.1 Scalability
A simple definition of scalability is the ability to grow as needs change, often described by the number of
nodes that a given architecture can ultimately support. Cisco Unified Fabric brings an out-of-the-box
perspective and offers multidimensional scalability, encompassing device performance, fabric and
system scalability, and geographic span. The network has to be able to scale not only within the data
center, but also to encompass all data centers to create a true unified network. Cisco Unified Fabric
delivers true scalability: not just enabling increased port count as needed, but doing so without
compromising on performance, manageability, or cost.



Scalability begins with 10 Gigabit Ethernet. 10 Gigabit Ethernet allows customers to consolidate their
networks, which means fewer tiers to the network and fewer overall ports while providing exponentially
more usable bandwidth for servers and storage. By moving to 10, 40, and 100 Gigabit Ethernet
technologies, customers will be able to consolidate the number of ports and cables dedicated to servers
as well as the overall number of switches under management in the data center. The reduction of
devices reduces management overhead and comes with a concomitant reduction in rack space use,
power, and cooling. In many cases, the consolidation of the network is in concert with or directly follows
server consolidation through server virtualization.

The capability to grow the network is a crucial aspect of scalability. Just as important is the capability to
grow the network in a manner that causes little disruption of the data center and conforms to the needs
of the particular business and data center environment. Cisco believes that each customer network has
its unique characteristics and needs solutions that fit those characteristics, rather than subscribing to
any single rigid architecture. Growth of the network depends on two factors: the capability to upgrade
hardware and the capability to support new protocols as they arise. Cisco's well-known investment
protection philosophy covers the first factor. Cisco consistently upgrades platforms, usually in place,
preserving customers' investment in existing equipment and reducing disruption. The Cisco Nexus 7000
Series Switches reflect the kind of upgrades that support growth that customers can expect. With the
new Fabric 2 cards and Fabric 2 I/O modules, the Cisco Nexus 7000 Series has doubled its capacity from
its introductory switches. Similarly, Cisco MDS 9500 Series Multilayer Directors sold in 2002 can be field
upgraded to support the newest FCoE and high-performance 8-Gbps modules. Cisco also designs its
switches to support new protocols and networking capabilities as they are introduced. The Cisco Nexus
and Cisco MDS 9000 Families both use Cisco NX-OS, a modern modular operating system that facilitates
easy upgrades to the latest features and protocols as they become available.

Growth in the past has meant simply adding capacity. Growth today means taking into consideration
virtualized servers, which may transport a great deal of east-west server-to-server traffic in addition to
the more conventional north-south server-to-client traffic. Cisco FabricPath combines the simplicity of
Layer 2 with the functions of Layer 3 without the problems of spanning tree. Cisco FabricPath allows
multiple paths between endpoints, increasing redundancy and allowing much larger Layer 2 domains.
The more complex network patterns that are created in virtualized and public or private cloud data
centers require a flexible approach to work. With Cisco FabricPath, workloads can be easily moved from
blade to blade, frame to frame, and rack to rack without the difficulty of blocked links. This capability
increases workload reliability as well as resiliency. Cisco Nexus 2000 Series Fabric Extenders create a
single infrastructure for both physical and virtual environments at the top of the rack and simplify
networking by extending the line cards to the top of the rack.

Cisco has also been working with industry partners on virtual extensible LANs (VXLAN), which enable
Layer 2 networking over Layer 3 and so allow Layer 2 domains to be isolated from one another while
allowing them to be extended across Layer 3 boundaries, creating the capability to move workloads
from data center to data center without assigning new IP addresses.
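
To make the encapsulation idea concrete, the sketch below builds the 8-byte VXLAN header (a flags byte
with the I bit set, reserved fields, and a 24-bit VNI) that a VTEP would prepend before sending the frame
over UDP port 4789, per RFC 7348; the inner frame here is just placeholder bytes:

    import struct

    VXLAN_UDP_PORT = 4789        # IANA-assigned destination port for VXLAN (RFC 7348)

    def vxlan_header(vni):
        """Build the 8-byte VXLAN header: flags ('I' bit set), reserved bits, 24-bit VNI, reserved byte."""
        assert 0 <= vni < 2**24, "the VNI is a 24-bit value"
        flags = 0x08             # 'I' flag: the VNI field is valid
        return struct.pack("!II", flags << 24, vni << 8)

    # A VTEP wraps the inner Ethernet frame (placeholder bytes here) in VXLAN + UDP + IP,
    # letting the Layer 2 segment identified by the VNI stretch across a routed Layer 3 core.
    inner_frame = b"\x00" * 60
    payload = vxlan_header(10000) + inner_frame
    print(len(payload), "bytes of VXLAN payload for UDP port", VXLAN_UDP_PORT)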



The cross-data center space is where the benefits of Cisco Unified Fabric are particularly apparent. For
example, the Cisco Overlay Transport Virtualization (OTV) feature can extend your Ethernet network
between data centers without creating static tunnels that require the configuration of each data
connection (Figure 1). OTV encapsulates standard LAN traffic and moves it through the IP infrastructure
in between data centers. OTV prevents common events such as unknown unicast packets or spanning-
tree events from crossing the OTV link, providing connectivity with operational isolation. OTV supports
multi-homing and virtual PortChannel (vPC) technology, making it well-suited for next-generation
networks. In addition, OTV is simple to deploy and maintain, especially when compared to traditional
methods such as Multiprotocol Label Switching (MPLS). As part of the Cisco Unified Fabric, OTV allows
customers to scale their data centers beyond the walls, linking together geographically distant data
centers with relative ease, enabling the fast movement of workloads between data centers. For
connecting storage networks across data centers, the Cisco MDS 9000 I/O Accelerator (IOA) feature
reduces the effect of distance-induced latency, enabling reduced backup and replication windows and
making optimal use of expensive long-haul bandwidth.

Figure 1. OTV

9.1.1.1 Intelligence
Cisco Unified Fabric intelligence is what makes everything come together. True efficiency and usability
come from intelligence in the network. The intelligence in the Cisco Nexus and Cisco MDS Families comes
from their common operating system, Cisco NX-OS. Cisco NX-OS provides the OS consistency and
common feature set that are necessary to a truly integrated switching solution. Cisco NX-OS allows
intelligent services to be delivered directly to the network in a consistent and even manner, regardless
of whether the application is a standard physical server or a virtual server workload.

The intelligence in Cisco Unified Fabric is implemented with policy-based network services. By using
policy, data center managers can achieve several advantages. After a policy is set, it can be applied the
same way to any workload. This feature is particularly advantageous in a virtualized environment, where
workloads tend to proliferate, particularly in the application development area.



Security is one area in which policy-based network services can enable operations. With consistent
policy, every workload can have the proper security settings for its security class. Network-based policy
services are enabled by the Cisco Nexus 1000V or its hardware counterpart, the Cisco Nexus 1010. The
Nexus 1000V is a soft switch designed to be integrated with VMware vCloud Director. The 1000V
comprises two components: a soft switch embedded into the machine hypervisor and the Virtual
Supervisor Module, which enables and manages per-VM policies.

Security audits are much simpler to perform, and overall security of the data center environment is
significantly increased. Security of stored data can be protected using Cisco Storage Media Encryption
(SME) for the Cisco MDS 9000 Family so that organizations no longer have to worry about data loss if
backup tapes are lost or failing disk drives are replaced. Cisco Data Mobility Manager improves
application availability by allowing applications to continue to run while data is migrated from one
storage array to another.

Cisco Unified Fabric contains a complete portfolio of security and Layer 4 through 7 application-
networking services that are completely virtualization aware. These services run as virtual workloads to
provide the scalable, cloud-ready services that your critical applications demand.

Cisco provides consistency from the physical network to the cloud, with consistent policies for virtual
and standard workloads and consistent policy management across physical and virtual appliances,
such as the Cisco ASA 1000V Cloud Firewall (virtual) and the Cisco ASA Adaptive Security Appliances
(physical). Other products, such as the Cisco Virtual Security Gateway (VSG) virtual firewall, provide
logical isolation of virtual machines in trust zones on the basis of traditional firewall policies as well as
virtual machine attributes that correspond to the application type, tenant, etc. As a virtual firewall node,
Cisco VSG scales easily and allows security policies to migrate easily with application mobility. Cisco
offers a number of other virtualized appliances. The Cisco ASA 1000V Cloud Firewall provides tenant-
edge security services in multi-tenant environments and is operationally consistent with the physical
Cisco ASA security appliances and blades for a transparent transition from a physical to a cloud
environment. Cisco Virtual Wide Area Application Services (vWAAS) provides WAN optimization for
improved performance of virtual data center applications to client desktops, and the virtual Cisco
Network Analysis Module (NAM) provides deep insight into application and network performance
problems, allowing administrators to efficiently identify bottlenecks and optimize resources. For storage
backup and replication traffic, the Cisco MDS 9000 IOA feature improves reliability, performance, and
bandwidth utilization for business continuance and disaster recovery solutions.

The use of policy also enables faster deployment of applications. In the past, deploying an application
required considerable effort to configure the overall physical infrastructure. With Cisco Unified Fabric,
policy can be set with standard availability, security, and performance characteristics while maintaining
the capability to tune those features to the needs of a specialized application if necessary. In that case,
policies for that application can be built using the standard policies, with the policies retained, making
reinstallation or expansion of even a specialized application easy. In addition, with policies, if the
performance characteristics of the network change, rolling the change to every workload is as simple as
changing the policy and applying it. With consistent policies, application uptime is significantly
increased. The potential for human error is essentially eliminated.
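
As a conceptual illustration of the define-once, apply-everywhere model (not the actual Cisco DCNM or
port-profile interface; the policy fields and workload names below are invented), a policy can be treated
as a single data object that every workload references:

    # Conceptual sketch only: one policy definition applied uniformly to many workloads.
    web_tier_policy = {
        "vlan": 120,
        "qos_class": "gold",
        "acl": ["permit tcp any any eq 443", "deny ip any any"],
    }
    workloads = ["web-vm-01", "web-vm-02", "web-vm-17"]    # hypothetical VM names

    def apply_policy(workload, policy):
        # A real management tool would push this to the switch port or vEth interface.
        print(f"{workload}: vlan={policy['vlan']} qos={policy['qos_class']} acl_rules={len(policy['acl'])}")

    for vm in workloads:
        apply_policy(vm, web_tier_policy)

    # Changing one field rolls out consistently to every workload on the next apply.
    web_tier_policy["qos_class"] = "silver"
    for vm in workloads:
        apply_policy(vm, web_tier_policy)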

Management of the network is the core of the network's intelligence. With Cisco DCNM, Cisco Nexus
and Cisco MDS 9000 Family products, including the virtual Cisco Nexus 1000V Switch for server
virtualized environments, can be managed from a single pane. Cisco DCNM can be used to set policy and
to automatically provision that policy in converged LAN and SAN environments. Cisco DCNM also
proactively monitors performance and can perform path analytics for both physical and virtual machine
environments.

The features provided by Cisco NX-OS can all be deployed with Cisco DCNM, and it provides multiple
dashboards for ease of use. These dashboards include operational features and can also include network
topological views. Cisco DCNM allows customers to analyze the network from end to end, including
virtualized elements, and record historical performance and capacity trends. Cisco DCNM has been
updated to include FCoE, handling provisioning and monitoring of FCoE deployments, including paths
containing a mix of Fibre Channel and FCoE.

Cisco DCNM has extensive reporting features that allow you to build custom reports specific to your
environment or use reports from preconfigured templates. Cisco DCNM can build these reports across
specific fabrics or across the entire infrastructure. These reports can be sent by email or exported for
further processing by another application, all on a user-defined schedule. Cisco DCNM also provides
automated discovery of the network, keeping track of all physical and logical network device
information. Cisco DCNM discovery data can be used for audit verification for asset tracking or imported
into change-management software.

Cisco Unified Fabric For Your Business


While Cisco Unified Fabric can bring quite a bit of cost savings to your data center, and that is a business
benefit, the true benefits are in the long-term results. Cisco Unified Fabric changes the focus of data
center personnel from mainly maintaining infrastructure to deploying new
applications and processes that directly benefit the business. At Cisco, we use our own technologies and
philosophies in our internal IT department and data centers. With Cisco Unified Fabric, Cisco IT was able
to shift data center administrator focus from 70 percent maintenance work and 30 percent new projects
for the business to 40 percent maintenance work and 60 percent new projects for the business. This
shift in operational focus means that new applications, upgrades, and other business-urgent projects are
now installed and running in days rather than weeks or months. The business side of Cisco directly
benefits from the new speed and responsiveness of Cisco IT. CIOs are always looking for the cost savings
that go with greater automation and consolidation, but the real benefit is the increased productivity
of the IT department as it relates to the rest of the business. The business side of the organization will
notice the increased service and understand that IT is a vital part of the business, not simply an expense
center to be managed. CIOs can accomplish the goal of making IT truly an equal with the rest of the
business by concentrating on operational excellence, and that excellence can be enabled with Cisco
Unified Fabric.



Technology and Trends Enabled by Cisco Unified Fabric
Cisco Unified Fabric supports numerous IT trends, including server virtualization, network consolidation,
private cloud, and data center consolidation. In many cases, the unified fabric functions as the basis for
the trend, providing the bandwidth, automation, and intelligence required to implement the trend in
the organization. These trends require a network that is not merely good enough, but one that is data
center class and ready for more change in the future as technology and trends continue to evolve.

10.1.1.1 Server Virtualization


The Cisco Unified Data Center Server supports virtualization with Cisco's fabric extender technology and
the Cisco Nexus 1000v. The Cisco Nexus 1000v provides a soft switch at the hypervisor and management
control on a per-VM basis with services profiles. Cisco Data Center Virtual Machine Fabric Extender (VM-
FEX) creates a single infrastructure containing both physical and virtual switching at the top of the rack.
Cisco Adapter FEX creates many virtual network interface cards (vNICs) from a single adapter, extending
control and visibility from the rest of the network to the server level.

For every vNIC created, a corresponding virtual Ethernet (vEth) port on the switch is created, eliminating
the need for a separate virtual machine switching infrastructure. Every virtual machine has a dedicated
virtual port on the host switch rather than on a local virtual machine software switch. This approach
moves the switching from the CPU of the server to the specialized application-specific integrated circuits
(ASICs) in the switch itself while maintaining the flexibility of a local soft-switch architecture. The vEth
and vNICs are treated just like normal physical ports, allowing the use of port profiles and all the familiar
network management tools on them.
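
The one-to-one pairing of vNICs and switch-side vEth ports can be pictured with the small sketch below;
the interface names, profiles, and data structures are invented for illustration and are not a Cisco API:

    # Conceptual sketch: each vNIC on a server maps to its own switch-side vEth port,
    # so per-VM ports can carry ordinary port configuration. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class VEthPort:
        name: str
        port_profile: str

    switch_ports = {}            # vNIC identifier -> switch-side vEth port

    def register_vnic(vm, vnic_index, port_profile):
        vnic_id = f"{vm}-vnic{vnic_index}"
        switch_ports[vnic_id] = VEthPort(name=f"vEth{len(switch_ports) + 1}", port_profile=port_profile)

    register_vnic("app-vm-01", 0, "web-profile")
    register_vnic("app-vm-01", 1, "backup-profile")
    register_vnic("db-vm-07", 0, "db-profile")

    for vnic, veth in switch_ports.items():
        print(f"{vnic} -> {veth.name} ({veth.port_profile})")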

This approach simplifies the infrastructure and improves application performance. Both physical and
virtual network traffic can be monitored, managed, and provisioned as a whole, without a self-contained
local server hypervisor switch. CNAs that support Cisco Adapter FEX can also create virtual HBAs (vHBAs)
for FCoE deployments and network consolidation. Cisco has its own cards enabled for Cisco Adapter FEX
for the Cisco Unified Computing System™ (Cisco UCS™) and is working with third-party vendors such as
Broadcom, QLogic, Emulex, and Intel to support Cisco Adapter FEX.

Cisco Data Center VM-FEX and Adapter FEX communicate with the upstream parent switch, a Cisco
Nexus 5000 Series Switch, using a prestandard implementation of the IEEE 802.1BR standard. Cisco Data
Center VM-FEX has two primary modes: regular and high performance. In regular mode, traffic traverses
the server hypervisor as usual. In high-performance mode, traffic bypasses the server hypervisor and
goes directly to the switching source. I/O performance in this state is very near bare-metal I/O
performance. Cisco Data Center VM-FEX has more than 3000 production customers and more than 3
million ports deployed. Cisco Data Center VM-FEX and Adapter FEX together with a Cisco Nexus 5500
platform switch upstream bring networking visibility all the way to each individual virtual machine on
the server. These technologies combine to empower the networking administrator to create port
profiles, apply switching features (quality of service [QoS], access control lists [ACLs], etc.) and get
exposure to the traffic generated by each individual virtual machine. Virtual machine mobility then
becomes a simple process for the network administrator since these port profiles also move with the
virtual machine within the same switch domain.

With the end-to-end visibility and enablement that Cisco Data Center VM-FEX and Adapter FEX provide,
workloads can much more easily be moved from machine to machine or rack to rack. Automation can be
achieved with Cisco DCNM and server hypervisor management tools using service profiles to
automatically create the network environment needed for a given workload. Networking and server
workload become a single process rather than separate segments that need to be managed in a linear
way.

11.1.1.1 Inter-Data Center Communication


As companies accelerate their journey to private cloud architecture, they want to be able to move
workloads between data centers with the same ease that they move workloads from server to server or
rack to rack. Inter-data center communication offers numerous benefits. It aids disaster recovery,
helping restore data center functions after a system failure; although server virtualization makes it much
easier to reinstall operating systems and applications in a different data center, the networking
infrastructure is not so easy to re-create, and inter-data center communication helps address this
challenge. Inter-data center communication also enables organizations to move applications across the
world for a follow-the-sun approach or burst capacity to other corporate data centers in response to
usage spikes. Cisco helps provide these capabilities with the OTV feature, discussed earlier, and Cisco
Locator/ID Separation Protocol (LISP).

LISP provides a new way to route and to address IP. The current IP routing infrastructure uses a single
number to identify a device's location and identity. LISP separates the device identity from the device
location. In simple terms, by separating location and identity, LISP meets routing scalability challenges
because a central location holds all the location information, eliminating the need for every router to
know the entire routing table. It simplifies multihoming and eliminates the need to renumber IP
addresses. By eliminating the need to renumber IP addresses, LISP allows customers to move entire
workloads even onto foreign IP subnets and still have connectivity. LISP is also an enabler for the
transition from IPv4 to IPv6. It allows customers to incrementally deploy IPv6 or run IPv4 over an IPv6
infrastructure. In this way, LISP decreases complexity created by older methodologies and enables easier
operations. Cisco is working with the IETF LISP Working Group to continue the development of LISP and
to create a standard for it.
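
As a purely conceptual illustration (not Cisco's implementation, the LISP wire format, or any real router behavior), the following Python sketch models the core idea behind LISP: endpoint identifiers (EIDs) are looked up in a mapping system to find their current routing locators (RLOCs), so moving a workload updates only the mapping rather than renumbering the endpoint. All addresses below are invented example values.

    # Conceptual sketch of LISP-style identity/location separation.
    # EIDs, RLOCs, and the mapping contents are illustrative only.
    mapping_system = {
        "10.1.1.5": "203.0.113.1",   # EID of a workload -> RLOC of data center A
        "10.1.1.6": "203.0.113.1",
    }

    def locate(eid: str) -> str:
        """Return the current locator (RLOC) for an endpoint identifier (EID)."""
        return mapping_system[eid]

    def migrate(eid: str, new_rloc: str) -> None:
        """Move a workload: only the mapping changes, the EID stays the same."""
        mapping_system[eid] = new_rloc

    print(locate("10.1.1.5"))            # 203.0.113.1 (data center A)
    migrate("10.1.1.5", "198.51.100.7")  # workload moves to data center B
    print(locate("10.1.1.5"))            # 198.51.100.7, no renumbering of the EID

The real protocol adds map servers, map resolvers, and ingress/egress tunnel routers, but the principle is the same: the identity stays fixed while the location changes.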

Of course, moving a workload to a remote data center does no good unless the data that the workload
needs is also there. The SAN extension and IOA features of the Cisco MDS 9000 Family can interconnect
unified storage networks at multiple sites, providing Fibre Channel traffic with data compression
(typically 4:1 or better), encryption for data in flight, and protocol acceleration to reduce latency and
increase throughput and bandwidth utilization.
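
As a rough, back-of-the-envelope illustration of what a 4:1 compression ratio can mean in practice, the Python sketch below compares replication times over a WAN link. The link speed and data volume are invented example figures, and real results depend heavily on how compressible the data is.

    # Effective replication time over a WAN link, with and without 4:1 compression.
    # Link speed and data volume are invented example values.
    link_gbps = 1.0      # 1 Gbps SAN extension link
    data_tb   = 10       # 10 TB to replicate
    ratio     = 4        # typical compression ratio quoted above

    data_gbits = data_tb * 8 * 1000              # TB -> gigabits (decimal units)
    hours_raw  = data_gbits / link_gbps / 3600
    hours_comp = hours_raw / ratio

    print(f"Without compression:  {hours_raw:.1f} hours")   # ~22.2 hours
    print(f"With 4:1 compression: {hours_comp:.1f} hours")  # ~5.6 hours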

12.1.1.1 Disaster Recovery and Business Continuity
The goals of disaster recovery and business continuity are to help ensure that the most business-critical
processes handled by IT can be restored with little downtime. The amount of downtime that is tolerable
for a given organization varies greatly, but for most organizations more than a few hours of downtime
adversely affects business. For some organizations, any downtime is an adverse business condition.
Nevertheless, even disaster recovery and business continuity plans have to fit within the budget, and they can be costly, especially when a dual-hot data center strategy is used.

One of the most compelling reasons to deploy Cisco Unified Fabric is the capability to quickly recover in
the event of a disaster. With the profiles enabled by Cisco DCNM and Cisco UCS Manager, bringing up
critical processes at an alternate hot-site location is much easier. Cisco Unified Fabric can provide the
bandwidth needed for cross-data center replication through WAN acceleration in combination with
OTV. Data can be quickly replicated between data centers, with one designated as a hot spare. Cisco
Unified Fabric, coupled with server virtualization technologies that abstract the workload from the
physical server hardware, makes ensuring disaster recovery and business continuity much easier.

Cisco NX-OS supports in-service software upgrade (ISSU). The overall modular software architecture of
Cisco NX-OS supports plug-in-based services and features. This framework makes it possible to perform
complete image upgrades without affecting the data-forwarding plane. This transparent upgrade
capability enables nonstop forwarding (NSF) during a software upgrade, including upgrades between full
image versions (for example, from Release 4.0 to Release 4.1).

ISSU is initiated manually either through the command-line interface (CLI) by an administrator, or (in
future releases) through the management interface of the Cisco DCNM software platform. The upgrade
process consists of several phased stages designed to reduce the impact on the overall system, with no
impact on data traffic forwarding.

13.1.1.1 Private Cloud


Server virtualization and the consolidation of data center equipment that has accompanied it have
created leaner, more efficient data centers. For customers who have completed consolidation, network
updates, and server virtualization, the stage is set for private cloud. Private cloud gives customers a
service portal that is serviced by an orchestrator. The service portal allows IT and even non-IT personnel,
depending on how the portal is configured, to request and automatically deploy IT resources. For
example, through the service portal a developer can request a Microsoft Windows server on the
development infrastructure. The orchestration layer then automatically creates the workload and
installs Microsoft Windows on it, allocating server, network, and storage resources. This new workload
can also be decommissioned on a timed basis to prevent the creation of orphaned resources. This kind
of automation can be used to create access for new employees and expand resources for existing
applications; a service portal and orchestrator can handle anything that can be automated. However, for
a service portal and orchestrator to be able to function with little intervention, they need to be on a
network infrastructure that facilitates that automation. Cisco Unified Fabric provides the network fabric
layer that enables the private cloud.
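
The Python sketch below is a deliberately simplified model of the portal-and-orchestrator flow described above. The class and method names are hypothetical and do not correspond to Cisco IAC, Cisco Process Orchestrator, or any other product's API; it only illustrates the pattern of requesting a workload, allocating compute, network, and storage, and reclaiming it on a timed basis.

    from datetime import date, timedelta

    # Hypothetical orchestrator model: a portal request triggers automated
    # allocation of server, network, and storage resources, with an expiry
    # date to avoid orphaned workloads.
    class Orchestrator:
        def __init__(self):
            self.workloads = []

        def deploy(self, name, os_image, vcpus, memory_gb, vlan, storage_gb, lease_days=30):
            workload = {
                "name": name,
                "os": os_image,              # e.g. a Windows development image
                "compute": (vcpus, memory_gb),
                "network": vlan,             # port profile / VLAN to attach
                "storage_gb": storage_gb,
                "expires": date.today() + timedelta(days=lease_days),
            }
            self.workloads.append(workload)
            return workload

        def reclaim_expired(self, today=None):
            """Decommission workloads past their lease to prevent orphaned resources."""
            today = today or date.today()
            self.workloads = [w for w in self.workloads if w["expires"] > today]

    # A developer requests a Windows development server through the portal:
    portal = Orchestrator()
    portal.deploy("dev-web-01", "windows-server", vcpus=2, memory_gb=8,
                  vlan=120, storage_gb=100, lease_days=60)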

Cisco Unified Fabric enables the private cloud with advanced automation and hooks designed to
facilitate private cloud implementation in Cisco NX-OS. Cisco NX-OS supports not only Cisco cloud
automation tools such as Cisco Intelligent Automation for Cloud (IAC) and Cisco Process Orchestrator,
but also a wide range of third-party automation and orchestration software. Technologies such as Cisco FEX Technology and the advanced features built into the Cisco Nexus and Cisco MDS 9000 Families are made available through Cisco NX-OS to both Cisco and third-party cloud orchestrators and service portals.

14.1.1.1 Virtual Desktop Infrastructure


Cisco Virtualization Experience Infrastructure (VXI) relies on Cisco Unified Fabric as one of its
foundational elements. Cisco VXI is Cisco's agile data center infrastructure for virtual desktop
deployment. Virtual desktops are becoming more common at enterprises not just for the normal
maintenance, technical support, and security benefits, but also as a means to project enterprise
applications onto personal devices such as tablets and smartphones (through bring-your-own-device
[BYOD] initiatives), enabling employees to be productive in the field regardless of the device they are
using. However, a virtual desktop deployment is only as successful as the data center infrastructure on
which it runs. As part of the overall Cisco VXI solution, Cisco Unified Fabric enables the use of port
profiles and service profiles to help ensure performance and security across the virtual desktop
infrastructure. It also offers scalability, with the capability to easily add capacity based on need, which is
crucial to VDI. Cisco Unified Fabric also can easily handle any number of storage connections for VDI,
including iSCSI, Fibre Channel, FCoE, and standard NAS shares.

Cisco has partnered with several companies in the VDI space, including Citrix, Microsoft, and VMware.
On the storage side, Cisco has extensive partnerships with EMC and NetApp to facilitate the
implementation of virtual desktops. Cisco's tested and validated designs with Cisco Unified Fabric at
their core create stable, secure, and scalable virtual desktop infrastructure.

15.1.1.1 Bandwidth Expansion


Growth in the data center, particularly the increase in overall use of bandwidth, has been a challenge for
as long as there have been data center networks. With the continued exponential growth of data, and
with ever more devices accessing the network in the data center (virtual machines) and in the access
layer (tablets, smartphones, and laptops), data growth is unlikely to abate. Both customers and
employees now expect to have the world at their fingertips, and slow response times are unacceptable
to people who have become accustomed to nearly universal, ubiquitous access. To maintain customer
and employee satisfaction, the expansion of bandwidth must continue. Cisco Unified Fabric helps assure
customers that new technologies and switch capacity improvements will be available well before they
need them. Customers can also be assured that Cisco's upgrades are developed with the goal of
preserving customer investment as much as possible. The Cisco Unified Fabric architecture is designed
to be easily upgraded and expanded, scaling to fit customer needs.

Recently there has been much discussion about tiering in the data center. In traditional data center
designs, a three-tier strategy yielded the most benefits for the customer, and many customers adopted
this design. In today's modern, fully virtualized or private cloud data center, three-tier designs may not
be the most efficient. Cisco has always prided itself on providing the solution that best fits the customer.
For some customers, that will mean a single-tier design; for other customers, it will mean a two- or
three-tier design. Cisco can accommodate any of these design choices and will advise the customer on
what is best for the customer's unique environment. Whatever design best fits a customer's business
needs is the one that Cisco recommends for the customer. One size does not fit all, and Cisco's wide
range of products and services helps ensure that customers get what they need, not a solution
prescribed because that is the way a particular vendor does things.

16.1.1.1 Data Center Consolidation


Advancements in not only network density but also server density, with blade servers such as Cisco UCS
products, have accelerated consolidation in the data center. Virtualization and ongoing efforts to
standardize on fewer pieces of software have also reduced the overall data footprint. It is now possible,
and likely desirable, both to consolidate within the data center and to reduce the total number of data
centers. Many large organizations will have more data center space than they need after consolidation.
The benefits of closing data centers include not only the reduction in facilities costs, but also the
simplification of the company's overall IT strategy and disaster recovery plans.

Cisco Unified Fabric can help facilitate the consolidation of data centers by improving the
interconnection between existing data centers and through Cisco Services. Cisco Services can help you
plan for the network changes needed to help ensure that mission-critical data and processes are not lost
in the transition. Cisco's expertise with unified fabric technologies and Cisco's long experience with
routers can smooth the path to data center consolidation. Technologies such as LISP and OTV can create
the necessary stable links to enable data center consolidation as well as help ensure a solid connection
between the data centers retained.

Server Virtualization
In computing, virtualization (or virtualisation) is the creation of a virtual (rather than actual) version of
something, such as a hardware platform, operating system (OS), storage device, or network resources.[1]

While a physical computer in the classical sense is clearly a complete and actual machine, both
subjectively (from the user's point of view) and objectively (from the hardware system administrator's
point of view), a virtual machine is subjectively a complete machine (or very close), but objectively
merely a set of files and running programs on an actual, physical machine (which the user need not
necessarily be aware of).

Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic
computing, a scenario in which the IT environment will be able to manage itself based on perceived
activity, and utility computing, in which computer processing power is seen as a utility that clients can
pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while
improving scalability and overall hardware-resource utilization. With virtualization, several operating
systems can be run in parallel on a single central processing unit (CPU). This parallelism tends to reduce
overhead costs and differs from multitasking, which involves running several programs on the same OS.

Virtualization is a key piece of modern data center design. Virtualization occurs on many devices within the data center; conceptually, it is the ability to create multiple logical devices from one physical device. We have been virtualizing hardware for years: VLANs and VRFs on the network, volumes and LUNs on storage, and even servers as far back as the 1970s with LPARs. Server virtualization hit the mainstream in the data center when VMware began effectively partitioning clock cycles on x86 hardware, allowing virtualization to move from big iron to commodity servers.

What is server virtualization:


Server virtualization is the ability to take a single physical server system and carve it up, like a pie, into multiple virtual hardware subsets.

Each virtual machine (VM), once created, or carved out, operates in a similar fashion to an independent physical server. Typically each VM is provided with a set of virtual hardware on which an operating system and a set of applications can be installed as if it were a physical server.

Why virtualize servers:

Virtualization has several benefits when done correctly:

 Reduction in infrastructure costs, due to less required server hardware.


o Power
o Cooling
o Cabling (dependent upon design)
o Space
 Availability and management benefits
o Many server virtualization platforms provide automated failover for virtual machines.
o Centralized management and monitoring tools exist for most virtualization platforms.
 Increased hardware utilization
o Standalone servers traditionally suffer from utilization rates as low as 10%. By placing multiple virtual machines with separate workloads on the same physical server, much higher utilization rates can be achieved. This means you are actually using the hardware you purchased and are powering and cooling.

How does virtualization work?


Typically within an enterprise data center, servers are virtualized using a bare-metal hypervisor. This is a virtualization operating system that installs directly on the server without the need for a supporting operating system. In this model the hypervisor is the operating system and the virtual machine is the application.

Each virtual machine is presented with a set of virtual hardware upon which an operating system can be installed. The fact that the hardware is virtual is transparent to the operating system. The key components of a physical server that are virtualized are:

 CPU cycles
 Memory
 I/O connectivity
 Disk

At a very basic level, memory and disk capacity, I/O bandwidth, and CPU cycles are shared amongst the virtual machines. This allows multiple virtual servers to utilize a single physical server's capacity while maintaining a traditional OS-to-application relationship. The reason this does such a good job of increasing utilization is that you are spreading several applications across one set of hardware. Applications typically peak at different times, allowing for a more constant state of utilization.

For example, imagine an email server: typically it will peak at 9 a.m., possibly again after lunch, and once more before quitting time. The rest of the day it is greatly underutilized (which is why marketing email is typically sent late at night). Now picture a traditional backup server: these historically run at night, when other servers are idle, to prevent performance degradation. In a physical model each of these servers would have been architected for peak capacity to support the maximum load, but most of the day they would be underutilized. In a virtual model they can both run on the same physical server and complement one another due to their varying peak times.
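
A quick back-of-the-envelope sketch in Python (with invented hourly load figures) shows why workloads with complementary peaks consolidate well: the shared host only needs to be sized for the peak of the combined load, not for the sum of each server's individual peak.

    # Illustrative hourly CPU demand (percent of one physical server), hours 0-23.
    # The figures are invented purely to show the consolidation effect.
    email  = [5]*8 + [70, 40, 30, 30, 60, 40, 30, 30, 25, 50, 10, 5, 5, 5, 5, 5]
    backup = [60, 60, 60, 60, 5, 5] + [5]*14 + [5, 5, 60, 60]

    combined_peak = max(e + b for e, b in zip(email, backup))
    sum_of_peaks  = max(email) + max(backup)

    print(f"Peak of combined load: {combined_peak}%")  # what the shared host must handle
    print(f"Sum of separate peaks: {sum_of_peaks}%")   # what two dedicated servers are sized for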

Another use of virtualization is hardware refresh. DHCP servers are a great example: they provide an automatic IP addressing system by leasing IP addresses to requesting hosts, and these leases are typically held for 30 days. DHCP is not an intensive workload. In a physical server environment it would not be uncommon to have two or more physical DHCP servers for redundancy. Because of the light workload, these servers would use minimal hardware, for instance:

 800 MHz processor
 512 MB RAM
 1x 10/100 Ethernet port
 16 GB internal disk

If this physical server were 3-5 years old, replacement parts and service contracts would be hard to come by; additionally, because of hardware advancements, the server may be more expensive to keep than to replace. When looking to refresh this server, the same hardware would not be available today; a typical minimal server today would be:

 1+ GHz dual- or quad-core processor
 1 GB or more of RAM
 2x onboard 1 GbE ports
 136 GB internal disk

The application requirements haven’t changed but hardware has moved on. Therefore refreshing the
same DHCP server with new hardware results in even greater underutilization than
before. Virtualization solves this by placing the same DHCP server on a virtualized host and tuning the
hardware to the application requirements while sharing the resources with other applications.

Summary:
Server virtualization offers a great many benefits in the data center, and as such companies are adopting more and more virtualization every day. The overall reduction in overhead costs such as power, cooling, and space, coupled with the increased hardware utilization, makes virtualization a no-brainer for most workloads. Depending on the virtualization platform chosen, there are additional benefits such as increased uptime, distributed resource utilization, and improved manageability.

Datacenter Virtualization/Cloud
The pressure is on for business and information technology services to produce 100% available environments with an equally high return on the capital investment allocated to the infrastructure used to support and operate their technology environments. Despite businesses' desire for 100% availability and an "availability-as-a-utility" model, a highly available IT infrastructure should not be architected as a utility. The availability-as-a-utility model currently lacks standards and its implementation architectures are complex; it is also interdependent on many components, and the level of people and process complexity in IT service delivery increases the risk of downtime when compared to technology adoption risks. These components are not easily quantified and their interactions are not well understood, which is preventing practical development of the availability-as-a-utility model.

While availability-as-a-utility may not be practical, architecting your IT environment to be part of an active / active cloud is practical. A recent study published by Gartner Research suggests that if the
business impact of downtime can be considered significant for some business processes, such as those
affecting revenue, regulatory compliance, customer loyalty, health, and safety, then the owners of
enterprise technology infrastructure should invest in continuous availability architectures whose
operating context is active / active (Scott, 2010).

Creating an active / active environment can be accomplished by using application-level clustering or cloud-based virtual mobile workloads. The traditional approach of application-level clustering does not scale at the same rate as virtualization-based application platforms. In most cases, application-level clusters need to be architected and coded on a case-by-case basis. At the same time, hosting these applications on a virtualized server platform typically requires no changes to the application-level configuration or metadata. Many third-party analysts recommend emerging technologies that enable mobile workloads to replace fragile, script-based or application-dependent recovery routines. These new technologies are easier to maintain, can provide more granularity and greater consistency, and can increase efficiencies in the pursuit of this goal. Because emerging tools in this space tend to be loosely coupled rather than tightly coupled (like traditional application clustering), enterprises will be more likely to reduce the "spare" infrastructure required for recovery, and thus reduce the overall cost of providing highly available recovery infrastructures. In addition, as more virtualized cloud environments are deployed into production, these tools will be able to make use of the underlying virtual platform to provide something close to availability-as-a-utility via virtual server mobility (Witty & Morency, 2010). Therefore, both large and small organizations gain a greater ROI by virtualizing the hosted application and relying on virtualized mobile workloads to provide availability, versus investing in an application-level active / active deployment.

Keep in mind that automated utility compute environments, a subset of cloud, do not by themselves improve availability. To deliver high-performing and highly available services and applications, storage and networking infrastructures must also be designed to support these environments via support for workload mobility (Filks & Passmore, 2010). For this, the best solution is to prepare your applications and infrastructure to exist within a virtual datacenter environment or to utilize fabric computing. This type of strategy can offer a number of advantages to an organization, such as improved time to deployment, greater infrastructure efficiencies, and increased resource utilization in the datacenter. In addition, recent studies recommend placing fabric computing and the creation of a virtualized datacenter on the priority list of data center architecture planning when your virtualization plans call for a dynamic infrastructure (Weiss & Butler, February 2011). Highly available, highly efficient multiple-datacenter implementations are prime examples of the previously mentioned dynamic infrastructure.

One of the tools to implement virtualized mobile workloads is the use of long-distance live migration of
virtualized workloads through one of the various types of datacenter bridging technologies. The live
migration of virtualized workloads enables an IT organization to move workloads as required. This can
be a manual process, undertaken in anticipation of a disaster, a datacenter move, a workload migration, or planned maintenance. It can also be triggered automatically to rebalance capacity across
datacenters. Architecting your application infrastructure to support mobile workloads will reduce or
eliminate the downtime associated with these initiatives or projects. Moreover, the support for long-
distance live migration could be used to enable live workload migration across internal and external
service providers. An example of this is leveraging additional utility compute resources of cloud
datacenters and hybrid private / public cloud architectures.

Consider a VDI deployment in a virtualized datacenter model spanning two geographic locations. This deployment would leverage long-distance live migration of workloads, first-hop redundancy protocol localization for egress traffic, an application delivery network for ingress traffic selection, and active / active SAN extensions to ensure storage consistency.

In this scenario:

The operations team is able to migrate workloads between datacenters and perform routine maintenance without the need for specialized maintenance windows. This allows for an increased level of operational productivity by way of more efficient time management.

The need to maintain the state of infrastructure metadata and configuration revisions is diminished significantly, as the active / active virtualized datacenter provides continuous validation of operational consistency. This also increases productivity and reduces the task load of the operations team.

The investment in the compute, network, and storage infrastructure at both sites is realized on a continual basis; one whole set of infrastructure is not sitting dormant for lengthy periods of time.

The need for periodic full-scale "failover tests" is eliminated. Both sites' operational veracity is validated through continuous use. Again, this reduces operational staff requirements and workload. It can also eliminate the capital required to secure large recovery centers for testing purposes only.

This short example demonstrates where ROI can be increased while simultaneously providing for
increased application performance and utilization.

The purposeful design and integration of workload mobility technologies into an organization’s IT
strategy has significant potential business benefits. Most enterprises approach availability in an
opportunistic way after they have put their IT infrastructure into production. However, achieving 100% or near-100% availability and infrastructure efficiency requires comprehensive planning and integration; ad-hoc or point-in-time designs and implementations will not suffice. When constructing your cloud or virtualized datacenter environment, it is critical not just to enable specific pieces of workload migration and automation, but also to enable the entire end-to-end information technology service, including network and storage infrastructures (Witty & Morency, 2010).

In some security circles there are the sayings, “secure by design” and “an environment that is 99%
secure is eventually 100% insecure,” which are lessons directly related to the deployment of clouds and
virtualized datacenters (in addition to the direct implications of the obvious InfoSec
context). Specifically, a cloud environment should be designed with location agnosticism via virtualized mobile workloads from the start. It should not rely on legacy scripting, warm-standby modes, or offline migration processes that work 99% of the time. Doing so raises the probability of a costly redesign to improve infrastructure productivity, or worse, of failure, to 100% of the time.

Datacenter Design Concepts
Architecting a Productive Data Center
A server environment designed with your company's long-term needs in mind increases productivity and
avoids downtime. When your Data Center continues functioning during a utility power outage thanks to
a well-designed standby power system and servers avoid connectivity interruptions due to properly
managed cable runs, your employees keep working and your business remains productive. To create
such a resilient and beneficial server environment, you must follow five essential design strategies.

Make It Robust
Above all, your Data Center has to be reliable. Its overarching reason for existence is safeguarding your
company's most critical equipment and applications. Regardless of what catastrophes happen outside—
inclement weather, utility failures, natural disasters, or something else unforeseen—you want your Data
Center up and running so your business continues to operate.

To ensure this, your Data Center infrastructure must have depth: standby power supplies to take over
when commercial electricity fails, and redundant network stations to handle the communication needs
if a networking device malfunctions, for example. Primary systems are not the only ones susceptible to
failure, so your Data Center's backup devices might need backups of their own.

Additionally, the infrastructure must be configured so there is no Achilles Heel, no single component or
feature that makes it vulnerable. It does little good to have multiple standby power systems if they are
all wired through a single circuit, or to have redundant data connections if their cable runs all enter the
building at one location. In both examples, a malfunction at a single point can bring the entire Data
Center offline.

Make It Modular
Your Data Center must not only have a depth of infrastructure, it must also have breadth. You want
sufficient power, data, and cooling throughout the room so that incoming servers can be deployed
according to a logical master plan, not at the mercy of wherever there happens to be enough electrical
outlets or data ports to support them.

To achieve this uniform infrastructure, design the room in interchangeable segments. Stock server
cabinet locations with identical infrastructure and then arrange those locations in identical rows.
Modularity keeps your Data Center infrastructure simple and scalable. It also provides redundancy, on a smaller scale than the standby systems mentioned previously. If a component fails in one section of the
Data Center, users can simply plug in to the same infrastructure in another area and immediately be
operational again.

Make It Flexible
It is safe to assume that routers, switches, servers, and data storage devices will advance and change in
the coming years. They will feature more of something than they do now, and it will be your Data
Center's job to support it. Maybe they will get bigger and heavier, requiring more power and floor space.
Maybe they will get smaller, requiring more data connections and cooling as they are packed tighter into
the Data Center. They might even incorporate different technology than today's machines, requiring
alternate infrastructure. The better your server environment responds to change, the more valuable and
cost-effective it is for your business. New equipment can be deployed quicker and easier, with minimal
cost or disruption to the business.

Data Centers are not static, so their infrastructure should not be either. Design for flexibility. Build
infrastructure systems using components that are easily changed or moved. This means installation of
patch panels that can house an array of connector types and pre-wiring electrical conduits so they can
accommodate various electrical plugs by simply swapping their receptacle. It also means avoiding items
that inhibit infrastructure mobility. Deploy fixed cable trays sparingly, and stay away from proprietary
solutions that handcuff you to a single brand or product.

Inflexible infrastructure invariably leads to more expense down the road. Assume, for example, that you
need to install a large data storage unit that requires different data connections and more electrical
outlets than your Data Center already provides. If the room's existing patch panels can house the new
cable connectors and its electrical conduits simply need their receptacles swapped to another type, it is
straightforward and inexpensive to modify a server cabinet location to accept the unit. It requires
significantly more effort and money if the Data Center contains proprietary patch panels, incompatible
electrical conduits, and cable trays; each will need to be removed or maneuvered around to
accommodate the new unit.

Part of a Data Center's flexibility also comes from whether it has enough of a particular type of
infrastructure to handle an increased need in the future. You therefore make your server environment
more adaptable by providing buffer capacity—more data ports, electrical circuits, or cooling capacity
than it otherwise seems to require, for example. Boosting these quantities makes a Data Center more
expensive during initial construction, but also better prepared for future server requirements.

Standardize
Make the Data Center a consistent environment. This provides stability for the servers and networking
equipment it houses, and increases its usability. The room's modularity provides a good foundation for
this, because once a user understands how infrastructure is configured at one cabinet location, he or
she will understand it for the entire room. Build on this by implementing uniform labeling practices,
consistent supplies, and standard procedures for the room. If your company has multiple server
environments, design them with a similar look and feel. Even if one Data Center requires infrastructure
absolutely different from another, use identical signage, color-coding, and supplies to make them
consistent. Standardization makes troubleshooting easier and ensures quality control.

When building a new facility, it might be tempting to try something different, to experiment with an
alternate design philosophy or implement new technology. If there are new solutions that truly provide
quantifiable benefits, then by all means use them. Do not tinker with the design just to tinker, though.
There are many situations in which it is appropriate to experiment with new ideas and infrastructure—
your Data Center project is not one of them. (If you are really interested in trying out a new technology,
consider deploying it in a lab environment first. Labs are built for testing, so experimenting with
different materials or designs is more in line with their purpose.)

Once you find a design model or infrastructure component that provides the functions and features you
are looking for, make it your standard. Avoid variety for variety's sake. While it is good to know that
several products can solve a particular problem for your Data Center, it is a bad idea to deploy several of
them in the same room, at least not unless they are providing another benefit as well. The more
different components in the Data Center, the more complex the environment. The more complex the
environment, the greater the chance that someone will misunderstand the infrastructure and make a
mistake, most likely in an emergency. It is also much easier to support a Data Center when fewer
materials have to be stocked—a single universal power strip rather than a different model in every
country, for example.

NOTE

Establish standards for your Data Centers, but also be ready for those standards to evolve over time. The
server cabinet that so perfectly meets your needs today may not work so well in five years if server
dimensions or power requirements change, for example. Standardize for clarity and consistency, but
make sure that even your Data Center standards exercise some flexibility.

Promote Good Habits


Finally, the Data Center should be engineered to encourage desirable behavior. This is a subtle element,
rarely noticed even by those who work regularly in the environment. Incorporating the right
conveniences into the Data Center and eliminating the wrong ones definitely make the space easier to
manage, though.

Data Center users are busy people. They are looking for the fastest solution to their problems, especially
when they are rushing to bring a system online and are up against a deadline. Given a choice, most of
them follow the path of least resistance. You want to make sure that path goes where you want it to go.

Construct a nearby Build Room where system administrators can unbox servers to keep the Data Center
free of boxes and pallets, for example. Make primary Data Center aisles larger than those between
server rows, creating an obvious path for users to follow when rolling refrigerator-sized servers through
the room for deployment. Install wall-mounted telephones with long receiver cords throughout the Data
Center if you are concerned about interference from cellular phones and want to reduce their usage.
Provide pre-tested patch cords to promote standardized cabling practices. Design the Data Center so
users can easily exercise good habits and they will.

Establishing Data Center Design Criteria
Armed with the knowledge of what your clients need and want, the essentials of good Data Center
design, and the general infrastructure that a Data Center includes, you are ready to define the final
factors driving the design of your server environment. You need to decide upon its scope.

How many layers of infrastructure should your Data Center possess? Will it be the only server
environment for your company or one of several? Will the room house production servers and be a
business-critical site or contain a minimum of equipment for disaster recovery purposes and serve as a
failover location? How long is its initial construction expected to meet your company's needs? And, the
bottom line question for many projects: What is it all going to cost? Addressing these issues provides the
framework for your Data Center's design.

Availability
As stated earlier, the most important aspect of a well-designed Data Center is its ability to protect a
company's critical equipment and applications. The degree to which Data Center devices function
continuously is known as the room's availability or its uptime.

NOTE

The term availability is commonly applied in several different ways. When network engineers talk about
availability, they are referring to the routers and switches that form their company's networks. When
system administrators speak of availability, it is in regards to the uptime of a particular server or
application. When facilities personnel talk about availability, they are referring to the electrical
infrastructure that powers all devices in the Data Center and the mechanical systems that keep them
cool. The focus of this book is the Data Center's physical infrastructure, and therefore the third use of
the term. It is also relevant to note that, because the Data Center's networking devices, applications,
and mechanical equipment are all dependent upon the room's electrical infrastructure—routers,
servers, and air handlers obviously cannot function without power—a company's network and server
availabilities can never be higher than its Data Center availability.

Availability is represented as a percentage of time. How many days, hours, and minutes is the Data
Center's electrical infrastructure operational and supplying power over a given time period? Just as a
baseball player's batting average drops any time he or she fails to hit and safely reach base, so does a
Data Center's availability number suffer whenever the electrical infrastructure fails to provide power to
the room. Unlike in baseball, a .400 average does not make you an all-star.

Most companies want extremely high availability for their Data Center, because downtime affects their
ability to be productive and perform business functions. How high, though, can vary significantly and is
represented by the concept of nines. The more nines of availability, the closer to 100% uptime a system
has achieved. Say, for example, that your company brings the Data Center's electrical system offline for
one hour of maintenance every month. Assuming there are no additional outages of any kind, that
means that the Data Center is running for all but 12 of the 8760 hours in the year. That's 99.863% of the
time, or two nines of availability.

For some, that's a perfectly acceptable amount of downtime. Other companies that rely the most upon
Data Center availability—financial institutions, government agencies, hospitals, companies with a sizable
Internet presence or that do business across multiple time zones, for example—set five nines of
availability as their standard. That's 99.999% uptime, or little more than five minutes of downtime in a
year.

Table 1-1 outlines the amount of downtime involved at the highest availability levels.

1.1.1.16.1 Table 1-1. Data Center Availability


Level of Availability    Percent     Downtime per Year
Six Nines                99.9999     32 seconds
Five Nines               99.999      5 minutes, 15 seconds
Four Nines               99.99       52 minutes, 36 seconds
Three Nines              99.9        8 hours, 46 minutes
Two Nines                99          3 days, 15 hours, 40 minutes
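
The downtime figures in Table 1-1 follow directly from the availability percentages. The short Python sketch below (not part of the source text) derives the allowed downtime per 365-day year for each level; minor differences from the table are due to rounding.

    # Convert an availability percentage into allowed downtime per 365-day year.
    HOURS_PER_YEAR = 365 * 24  # 8760

    def downtime_per_year(availability_percent: float) -> str:
        down_seconds = (1 - availability_percent / 100) * HOURS_PER_YEAR * 3600
        days, rem = divmod(int(round(down_seconds)), 86400)
        hours, rem = divmod(rem, 3600)
        minutes, seconds = divmod(rem, 60)
        return f"{days}d {hours}h {minutes}m {seconds}s"

    for nines, pct in [("Six Nines", 99.9999), ("Five Nines", 99.999),
                       ("Four Nines", 99.99), ("Three Nines", 99.9), ("Two Nines", 99.0)]:
        print(f"{nines:<12} {pct:>8} -> {downtime_per_year(pct)}")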

When discussing availability, remember that any downtime, even if scheduled beforehand so that it
affects fewer clients, is a reduction in the room's uptime. On the other hand, if a utility power outage
occurs and the Data Center runs on electricity from backup batteries, that does not reduce the room's
availability because there is no interruption to devices in the Data Center.

17.1.1.1 Infrastructure Tiers


The higher the availability you want your Data Center to achieve, the more layers of infrastructure it
must have. Logically, if one standby generator keeps the Data Center running when utility power fails,
then two provide even more protection. The second generator is there to take over in case a problem
occurs with the first during a power outage.

The amount of infrastructure required to support all servers or networking devices in the Data Center,
assuming that the space is filled to maximum capacity and all devices are functioning, is referred to as N
capacity. N stands for need. The term can apply to all types of Data Center infrastructure, but is most
commonly used when discussing standby power, cooling, and the room's network.

Exactly how many infrastructure components are required to achieve N capacity for your Data Center
depends upon several factors, including the room's size, how many electrical circuits it contains, and the
maximum number of servers and networking devices the environment can house. For a small server
environment, N capacity might consist of one air handler to adequately cool the room, one small
generator to hold its electrical load in the event commercial power fails, and three networking devices
to route all network traffic. For a large Data Center, providing that same functionality might require 15
air handlers, two generators with much larger capacity, and 20 networking devices. Remember, the Data
Center's capacity refers to the level of functionality it provides, not the number of its infrastructure
components.

N is the lowest tier a Data Center's infrastructure is typically designed and built to. It is possible to equip
a Data Center with infrastructure that can adequately support the room only when it is partially full of
servers, but that is not good design. Imagine an expectant couple buying a two-seater automobile. The
car might meet their transportation needs in the short term, but a future upgrade is inevitable.

N+1 is the next tier. N+1 infrastructure can support the Data Center at full server capacity and includes
an additional component, like an automobile with a spare tire. If the large Data Center mentioned
previously requires 15 air handlers, two generators, and 20 networking devices to function at maximum
capacity, it can be designed at N+1 by adding a 16th air handler, a third generator, and at least a 21st
networking device—maybe more depending on the design and need. A Data Center built to this tier can
continue functioning normally while a component is offline, either because of regular maintenance or a
malfunction. Higher tiers of N+2, N+3, and beyond can be likewise achieved by increasing the number of
redundant components.

An even higher tier is N * 2. Alternately called a 2N or system-plus-system design, it involves fully doubling the required number of infrastructure components. Still using our earlier example, designing that large Data Center N * 2 means installing 30 air handlers, four generators, and 40 networking devices.

Because components come in many different configurations and capacities, a Data Center can achieve
an infrastructure tier in several different ways. For example, say your Data Center requires 1500
kilowatts of generator support. This room can be designed to N by installing one 1500-kilowatt
generator. It can also achieve N by sharing the load between two 750-kilowatt generators or among
three 500-kilowatt generators. The configuration options become more important as you achieve a
higher tier. Adding a single generator will make the Data Center N+1, which means two 1500-kilowatt
generators, three 750-kilowatt generators, or four 500-kilowatt generators. If you choose to install the
two largest generators, you are actually providing the room with N * 2 infrastructure.
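
The relationship between required capacity, generator size, and redundancy tier in the example above can be expressed as a small calculation. The Python sketch below is an illustrative simplification only; real designs must also account for load growth, derating, and maintenance concurrency.

    import math

    def redundancy_tier(required_kw: float, unit_kw: float, units_installed: int) -> str:
        """Classify an installation as N, N+k, or 2N for a given generator size."""
        n = math.ceil(required_kw / unit_kw)      # units needed to carry the full load
        if units_installed < n:
            return "below N (cannot carry full load)"
        if units_installed >= 2 * n:
            return "2N (system-plus-system)"
        spare = units_installed - n
        return "N" if spare == 0 else f"N+{spare}"

    # The 1500-kilowatt example from the text:
    print(redundancy_tier(1500, 1500, 2))  # 2N  (two 1500-kilowatt units)
    print(redundancy_tier(1500, 750, 3))   # N+1 (three 750-kilowatt units)
    print(redundancy_tier(1500, 500, 4))   # N+1 (four 500-kilowatt units)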

Even higher tiers exist or can be created: 3N, 4N, and so on. There is theoretically no limit to how many
redundant systems you can install. As you consider how deep you want your infrastructure to be,
however, be aware that just because you can build a Data Center with quadruple-redundant power
systems and state-of-the-art connectivity doesn't mean you should. You want infrastructure tiered to
best meet your company's needs, now and in the foreseeable future.

It is quite possible to have too much redundant infrastructure. Although each extra layer adds protection, it also adds complexity. The more complex the system, the greater the chance of a mistake
occurring through human error, whether during installation of the system or during an emergency when
the standby system is needed. There's also a point of diminishing returns. While it is possible that during
a power outage your primary, secondary, and tertiary generators might all develop problems and your
quaternary generator is the one that keeps the room running, the odds are much higher of someone
misunderstanding the complicated system and causing an outage by accident. There is also the issue of
cost—quadrupling the number of generators that support your Data Center also quadruples what you
spend when building the room in the first place.

NOTE

I have maintained Data Center incident logs for years and consistently find that more than half of the
unplanned downtimes are caused by human error. It is an observation corroborated by several Data
Center industry groups and in conversations with dozens of other Data Center managers. From a janitor
tripping a full server cabinet's power strip by plugging a vacuum cleaner into it, to a security guard
flipping a circuit breaker to silence an irritating server alarm, to a maintenance worker crashing an entire
Data Center after mistaking an Emergency Power Off button for an automatic door-opener, people are a
Data Center's worst enemy. It is impractical to keep everyone out of your server environment all of the
time, though, and in fact all of the people in the listed incidents had permission to do the work they
were doing, if not in the questionable way they went about it. The lesson to take away from these
incidents is to make your Data Center infrastructure as simple and straightforward as you can. Balance
the benefit of greater redundancy against the hazards of a more complicated system.

One Room or Several?


Although this book generally refers to your server environment as the Data Center, it is just as likely that
your company has multiple rooms to host servers and networking equipment. They might all be fully
functioning server environments, or some might be standby facilities intended to come online only in
the event a primary Data Center is affected by a catastrophic event. Depending upon the size of your
company, they might be distributed among several buildings on a single campus or among several
countries around the world. If you have the ability to choose whether your Data Centers are centralized
within one location or decentralized among many, it is important to understand the advantages and
disadvantages of each configuration. Actually, even if you cannot choose because the arrangement is
already in place, it is helpful to be aware of the strengths and weaknesses of the arrangement of your
hosting space as a whole.

One large Data Center is simpler to manage than several smaller ones. Consistent standards can be
applied more easily to a single, uniform environment, and all of its support personnel can be located at
the site. One large Data Center is also generally less expensive per square foot or square meter than
several smaller environments because construction materials cost less per unit when bought in greater
quantities. In addition, the greater block of floor space is more forgiving for designing around
obstructions such as structural columns. Any upgrades to the Data Center environment, such as
increasing the room's cooling capacity or installing additional security measures, are also maximized
because the improvements benefit all of the company's servers.

On the other hand, having only one server environment puts all of your eggs in one basket. A natural
disaster, major infrastructure failure, or act of sabotage can cripple your business functions. Multiple
smaller Data Centers, whether several miles or kilometers apart or even in different buildings on the
same company site, are less likely to fall victim to a single catastrophic event. Servers with the same
functions can be placed in more than one room, creating an additional form of redundancy.

On the other hand, smaller Data Centers don't achieve the economy of scale that larger rooms do. If building
codes require a wide walkway through your Data Center, for example, you sacrifice more usable space
providing aisles in several rooms rather than just one. It is also a greater challenge to standardize the
construction of server environments located in multiple countries or states. Supplies are not universally
available or even allowed in all regions, and building practices can vary from one city to another, let
alone from one country to another. For example, Heptafluoropropane, known commercially as FM-200
or HFC-227, is commonly used in the United States as a Data Center fire suppression agent, but is
prohibited in some European countries.

The overriding factor for whether your company's Data Center space should be centralized or
distributed depends upon where employees are located and what level of Data Center connectivity they
require to perform their jobs. Connection speeds are limited by geographic distances, and some
computing functions tolerate only a limited amount of latency. This can be improved to a degree by
installing more media to provide greater bandwidth, but requires additional networking hardware and
higher performance connection lines from service providers.

Ideally, a company is large enough that a few large or moderate Data Centers in total can be located at
various company sites where employees require server access to perform their jobs. Functions can be
consolidated at these few locations, providing the redundancy of multiple rooms while still achieving the
economy of scale that larger installations provide.

Life Span
Another factor that helps define the scope of your Data Center is how long it is expected to support your
company's needs without having to be expanded or retrofitted, or otherwise undergo major changes. A
server environment that is expected to handle a company's hosting and computing requirements for
one year should be designed differently than a Data Center to support those functions for 10 years.

When does it make sense to build a Data Center for a shorter time period? This would be when there is
uncertainty surrounding the room or site, such as if the Data Center is constructed in a leased building
that your company is not guaranteed to renew in the future. Perhaps your company is large and has
acquired another business, and your mission is to create a server environment that will serve its needs
only until all of its employees, equipment, and functions are transferred to a new site. Perhaps your
company is a startup, and your goal is to design a temporary Data Center, enabling your young business
to delay the design and construction of a permanent one until growth warrants a larger room and more
funds are available.

As with the decisions about how many Data Centers to build and what level of infrastructure should be
employed, your Data Center's projected life span depends upon the needs of your company, and the
ideal is likely between the extremes. Equipping a server environment that is going to exist for only
several months with abundant infrastructure is not advisable because your business would see only a
short-term benefit. On the other hand, designing a Data Center to last at least a decade without
alteration understandably requires the commitment of significantly more floor space and infrastructure
to accommodate future growth and technology.

The most effective strategy, then, is to design a Data Center with a projected life span of a few years,
with the intention of expanding it when it appears close to being filled with servers.

Budget Decisions
It is understandable to want a utopian Data Center, an impenetrable bunker with ample floor space,
abundant power, and scorching fast connectivity, capable of withstanding any catastrophe and meeting
all of your company's hosting needs for decades to come. The deep infrastructure needed to create that
theoretical ideal costs very real money, however, so it is important to understand what expenses you
are incurring or avoiding based on the design choices you make. It is no good to spend millions of dollars
on a server environment to protect your company's assets if that cost drives your business into
bankruptcy. You want to spend money on the amount of infrastructure that is appropriate for your
business needs—no more and no less.

The most obvious costs for a Data Center are labor and materials associated with its initial construction,
which, even for a room smaller than 1000 square feet or 100 square meters, normally runs into
hundreds of thousands of dollars. Consulting fees accrued during the design portion of the project add
tens of thousands of dollars to the price. For brand-new sites, there is also the cost of real estate, which
varies greatly depending upon the property's location and the physical characteristics of the building.
After initial construction, ongoing operational expenses associated with the Data Center normally
include utility power costs for providing the room with power and cooling. There is also the running tally
for servers and networking devices that are installed into the room over time.

So, how much is acceptable to spend on the construction of your Data Center? That depends. To
determine the answer, you need to know the value of what your Data Center is protecting. This is not
the purchase price of the servers and networking equipment, although that in itself can far outstrip the
cost of the Data Center. It is how much money your company loses when devices in your Data Center go
offline. Depending on what task an individual server performs, an outage could shut down your
company's website and thereby halt all online ordering, or it could lose data that was the result of
thousands of hours of work by employees. Downtime might also shut down your company's e-mail and
print capabilities. Your business might even face financial penalties if it is unable to provide contracted
services during a Data Center outage.

There are several ways to measure downtime costs. One is to define the cost of a generic employee at
your business and then multiply this by the length of the outage and by how many employees are
unable to work during downtime. An employee's total cost includes every expense they cause the company to incur, directly or indirectly: salary, medical plans, retirement benefits, telephone bills, even the fraction of operational costs for lights, air conditioning, and cubicle or office space. The
personnel expenses, the three listed first, can be calculated by your human resources department, while
the operational costs can be figured by your facilities or real estate organization.

Say, for example, a generic employee costs your company a total of $150,000 a year. (Remember, this is all costs combined, not just salary.) That is roughly $72 an hour, assuming the employee works a traditional 40-hour work week and a 52-week calendar year. If your Data Center goes offline for two hours and stops the work of 100 employees at that site, that is $14,400 for that single outage. It is fair to
argue that the length of the downtime should be calculated beyond two hours, because once the Data
Center is online it takes more time before all of the affected servers are back on and their applications
are running again. (It takes only a second for machines to lose power and go offline, but bringing them
all back up again can take hours.) The more servers are involved, the longer it takes for them to be
brought back up and the more staff time it takes to do so. For the purpose of this example, let us say
that all of the servers are up and running another two hours after the Data Center itself comes
back online. That doubles the cost of the outage to $24,000 in soft dollars.

There is also the time that Facilities personnel spend on the outage and its aftermath, rather than doing
their other job functions. Facilities employees might not require the Data Center servers to be
operational to do their jobs, but their time spent identifying, fixing, and reporting on Data Center
infrastructure malfunctions associated with the outage is certainly a relevant cost. If just 20 hours of
staff time is occupied with the outage, that is another $1200, bringing the cost of this one event to more
than $25,000.
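
To make this arithmetic easy to rerun with your own figures, here is a minimal calculation sketch in
Python. The numbers are the illustrative ones from this example (an approximately $60-per-hour fully
loaded employee cost, 100 affected employees, a two-hour outage plus two hours of recovery, and 20
hours of facilities staff time); they are placeholders, not recommendations.

# Rough soft-dollar estimate of an outage, using the example figures above.
hourly_cost = 60                        # approximate fully loaded cost per employee-hour

affected_employees = 100
outage_hours = 2                        # Data Center offline
recovery_hours = 2                      # servers and applications coming back up
facilities_staff_hours = 20             # time spent diagnosing, fixing, and reporting

productivity_loss = hourly_cost * affected_employees * (outage_hours + recovery_hours)
facilities_cost = hourly_cost * facilities_staff_hours

print(f"Lost productivity:      ${productivity_loss:,.0f}")    # $24,000
print(f"Facilities staff time:  ${facilities_cost:,.0f}")      # $1,200
print(f"Total soft-dollar cost: ${productivity_loss + facilities_cost:,.0f}")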

If your company's business transactions are handled via a website whose servers are housed in the Data
Center, then the downtime is also affecting your incoming revenue. Your finance department can tally
how much online revenue is traditionally processed through your website during a typical month or
quarter. Divide that by the number of hours that the website is online in that time period, and you have
its hourly income rate. Multiply that by the number of hours it takes for the Data Center and the web-
related servers to come back online, and you have a second data point regarding the outage's cost. For
instance, assume that your company typically brings in $1 million a year in online business. If the
website accepts orders around the clock, then divide $1 million by 8760, the number of hours in a year.
That works out to $114 an hour, which means that the four hours of downtime also disrupted about
$500 in sales.
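
The same kind of quick estimate works for the revenue side. The short Python sketch below uses the
$1 million annual online revenue and four-hour downtime figures from the example above; substitute
your finance department's actual numbers.

# Revenue disrupted while the website is unreachable, using the example figures above.
annual_online_revenue = 1_000_000       # typical yearly online sales
hours_per_year = 24 * 365               # the website accepts orders around the clock
hourly_revenue = annual_online_revenue / hours_per_year   # roughly $114 per hour

downtime_hours = 4                      # outage plus server recovery time
disrupted_sales = hourly_revenue * downtime_hours

print(f"Hourly revenue rate: ${hourly_revenue:,.0f}")
print(f"Disrupted sales:     ${disrupted_sales:,.0f}")          # roughly $500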

The most difficult value of all to quantify arises when a server crashes and data is destroyed. When this
happens, not only are the man-hours that went into creating that data gone, but there is also
a chance that difficult-to-replace intellectual property has been destroyed. Some of this can be
protected and later restored by regular data backups, but at many companies such backups are
performed only weekly. Such loss can also prolong how long it takes a business to bring a product to
market, which in turn leads to missed opportunities for sales or gaining an advantage over a competitor
or both.

All three of these costs—lost employee productivity, disrupted sales transaction revenue, and missing
intellectual property—are soft dollars. They are challenging to evaluate because they do not appear as
concrete expenses on your company's financial records. They do affect your business, though, and it is
important to weigh them against the price tag of various Data Center infrastructures.

Installing a generator to provide standby power to your server environment might cost $200,000, and
providing a second one for redundancy doubles the expense to $400,000—significant increases to the
overall cost of the project. The price for a single generator is easy to justify if power outages occur even
a few times a year and cost the $25,000-plus in lost productivity in the previous example. Your company
might not want to spend the additional funds for a second generator, however, unless it is for a much
larger Data Center hosting additional servers that, in turn, support many more employees and
customers.
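
A simple payback comparison can help frame that decision. The Python sketch below assumes, purely
for illustration, three utility outages per year at the roughly $25,000 soft-dollar cost estimated earlier;
your own outage history and loss figures will drive the real answer.

# Back-of-the-envelope payback check for a single standby generator.
generator_cost = 200_000        # installed price of one generator (example above)
cost_per_outage = 25_000        # soft-dollar loss per power outage (example above)
outages_per_year = 3            # assumed frequency of utility outages

annual_avoided_loss = cost_per_outage * outages_per_year
payback_years = generator_cost / annual_avoided_loss

print(f"Avoided losses per year: ${annual_avoided_loss:,.0f}")
print(f"Simple payback period:   {payback_years:.1f} years")   # about 2.7 years here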

Managing a Data Center Project


As you have undoubtedly concluded, designing and constructing a Data Center is an immense task
involving myriad decisions and details. It is also brimming with opportunities to make a mistake that can
cost your company millions of dollars. It is no surprise, then, if it feels overwhelming, especially for
anyone who has never managed a Data Center project before.

Fortunately, such an undertaking does not rest solely on the shoulders of one person. There are experts
who can and should be tapped for your Data Center project, tips that can help you avoid problems, and
one very useful tool to help you guide the project to success.

The Design Package


Once decisions are made about the design of your Data Center, the information must be assembled,
documented, and ultimately given to the contractors tasked with performing the work. This is done by
first creating a design package. This document can be as minimal as a sketch jotted on a napkin or as
involved as a multimedia package of written guidelines, blueprint schematics, and videotaped
installation practices. The important thing is that it include clear instructions about how the Data Center
is to be constructed and what infrastructure it must include. Pay careful attention to detail and accuracy.
The design package is your most powerful tool for ensuring that your server environment is built to your
specifications. Mistakes or ambiguity in this document lead to installation errors and can cost your
company hundreds of thousands of dollars to correct.

NOTE

In 2000 I was involved in the construction of a 964 square foot (90 square meter) Data Center in Dallas,
Texas. The parent company was building the new environment for an acquired company that specialized
in software systems for IP-based wireless infrastructure. During construction, the cabling contractor
misunderstood the amount of fiber cabling to be installed. He ran 12 strands of fiber to each server
cabinet location instead of the 12 ports that were specified. Fiber ports consist of two strands each, so
the room's 40 server cabinet locations ended up with only half of the connectivity they needed.
Installing the missing fiber could have cost the client company an extra $150,000, twice what was first
quoted for the work. Because the quantities were clearly spelled out in the design package, the
contractor kept to his original bid for the project.

At minimum, design guidelines for a Data Center must have basic instructions for installation of the
room's infrastructure, calling out how much and what types of pre-structured cabling media and
electrical receptacles are required. More thorough packages include testing procedures, relevant
building codes, part numbers for preferred materials, and even illustrative drawings. Whatever form
your design package takes, it must be detailed enough that workers unfamiliar with your Data Center
design philosophy can follow its instructions.

Working with Experts

As with any construction project, designing and building a Data Center involves many people from
several different fields. Some ensure that the construction is done in accordance to the law. Others add
value by providing knowledge and guidance in areas that are critical to the successful design and
operation of a Data Center. Here is an overview of common Data Center project participants, their
functions, and what expertise they provide:

 The facilities manager— This person's specialty includes all mechanical devices within the Data
Center infrastructure, from air handlers and power distribution units to fire sprinklers and
standby generators. The manager can provide information about your company's infrastructure-
related standards. These might include preferred vendors or suppliers, standardized wiring
schemes, existing service contracts, or other design philosophies your company follows when
building Data Centers, labs, or similar specialized environments. Once the Data Center is online,
the facilities department will provide ongoing maintenance of the mechanical systems.
 The IT manager— This person is responsible for the servers installed in the Data Center. This
manager has insight into the power and data connectivity requirements of these devices. Once
servers are online, the IT department supports, monitors, and upgrades them as needed.
 The network engineer— This person designs, supports, and manages the Data Center's network.
Just as the IT department supports servers, the Networking group is responsible for all
networking devices. Some companies have multiple networks—perhaps one internal network, a
second external network, and a third dedicated entirely to backup functions. In that instance,
each network may be represented by a different engineer.
 The Data Center manager— This person designs, supports, and manages the Data Center's
physical architecture and oversees the layout and installation of incoming servers. He or she
governs physical access into the room and enforces its standards of operation. This manager
also serves as a bridge among the facilities, IT, and networking organizations, ensuring that the
Data Center infrastructure meets the needs of its users. Some companies do not have a distinct
Data Center manager role, instead splitting responsibility for the architecture among the three
roles listed previously.
 The real estate manager or building planner— This person governs how company building space
is used. In a Data Center project, this manager/planner coordinates the floor space
requirements of the server environment and its associated rooms with the floor space needs of
other rooms and departments.
 The project manager— This person manages the Data Center construction project as a whole,
including its budget, timelines, and supervision of outside contractors. His or her project might
cover an entire building or company site, making the Data Center only one portion of what he or
she must supervise. Some companies outsource this role, but most often this person is a
facilities manager.
 The architectural firm— This outside company ensures that your Data Center design complies
with local building codes. They are also a conduit to specialized subcontracting work, such as a
structural engineer to confirm the weight bearing ability of a Data Center floor or a seismic
engineer to approve its proposed earthquake safeguards. After receiving a design package and
other instructions from the client company, the architectural firm creates formal construction
documents that local municipal officials review and that the project's various contractors follow
when building the Data Center.
 The general contractor— This person oversees and acts as a single point of contact for all other
contractors on the project. Project changes are normally directed in writing to the general contractor
rather than to individual subcontractors.
 The electrical contractor— This contractor installs, labels, and tests all of the Data Center's
electrical and standby equipment.
 The mechanical contractor— This contractor installs and tests all of the Data Center's cooling
equipment. Ducting is typically the contractor's responsibility as well.
 The cabling contractor— Not surprisingly, the cabling contractor installs and tests all of the Data
Center's structured cabling. Its staff also installs any racks or cabinets that cabling terminates
into, and labels the room's cable runs.

Tips for a Successful Project

Although each Data Center project has its own quirks, all of them generally have to overcome similar
challenges in order to succeed. Budgets must be followed, materials must be installed, and timelines
must be adhered to. People must be managed, work must be inspected, and unanticipated issues must
be dealt with as they arise. Fortunately, because the challenges are the same, often their solutions can
be as well. Several fundamental practices have proven useful in keeping a Data Center project on track
and avoiding possible pitfalls.

 Define expectations and communicate them early and often— It is hard to have a successful
project if everyone involved does not understand what's expected of them. Establish clear
deadlines and provide thorough instruction to all contractors. The design package is your most
powerful tool for doing this. Also have a formal kickoff meeting early in the project. Involve all of
the principal members of the project to make sure that the design package is thoroughly read
and that any potential problems are identified and discussed up front.
 Expect long lead times on infrastructure items— Certain components used in the construction of
a Data Center can take months to arrive from their manufacturers, so it is important that the
person responsible for obtaining materials, either the project manager or a particular
contractor, order them early. Call this out directly to the contractors, who often prefer to wait
as long as possible to order infrastructure components. This is understandable because they
themselves often do not get paid until near the end of the project, and delaying purchases helps
their cash flow. It can cause problems for your project, though, so it should be discouraged.
Additional delays occur when working in countries that have stringent customs procedures. If
you know ahead of time what specific infrastructure items are difficult to obtain, it might be
worth purchasing and storing spares. This probably is not practical for high-priced or large items,
like a generator, but can be effective for smaller components such as patch panels or fiber
housings.

NOTE

It is amazing what infrastructure can be difficult to obtain. Generators, raised-floor tiles, server
cabinets, and fiber cabling are the most common culprits, but they are not alone. During
construction of the Data Center in Dallas, mentioned previously, the cabling contractor located
and purchased all of the project's thousands of individual parts and materials, except for a mere
handful of violet jacks. They were required for the copper cabling that terminated into the
room, and the contractor spent weeks unsuccessfully trying to order them. The violet jacks
miraculously appeared on the day the Data Center came online. I never asked the contractor
where they came from, and he never volunteered the information. I suspect a lab somewhere
on the site was missing a few connections for several weeks, however.

 Establish deadline-based incentives for time-sensitive projects— If your Data Center project
absolutely must be completed quickly, include incentives in your vendor contracts that reward
for the timely completion of key tasks and penalize for delays. Tasks can almost always be
expedited if the right incentives exist. If you take this approach, do not allow safety to suffer in
the rush to meet deadlines. It is better to have a project take longer than to put workers at risk
or skip procedures that exist to ensure Data Center infrastructure works correctly.

 Document everything— Although the design package is intended to cover all details of the
project, questions inevitably arise during the course of construction. Can a different product be
substituted for the one specified in the design package? Is it acceptable to route cables along a
different path? Is the wording on a particular sign acceptable? No matter how minor the
clarifications or changes, document them thoroughly. The Data Center is large and complex and
might be only one part of a larger project. With all of the tasks everyone is trying to accomplish
it is easy to forget or misunderstand a verbal agreement made weeks earlier about a minor
issue. Also, although most people in the construction industry are honest and professional,
some attempt to profit by taking shortcuts or creating more work for themselves and passing on
additional fees to the client company. Clear and thorough documentation is the best weapon
against both honest confusion and questionable practices. E-mail is particularly effective
because messages are dated and simple to archive, and can include the entire thread of a
conversation.
 Visit the construction site frequently— No matter how many phone calls are made, e-mails are
written, meetings are held, and documentation is kept in association with the project, there is
no substitute for walking the site to make sure your Data Center is being built according to the
intended design. If budget or scheduling limitations prohibit regular visits, arrange to have
someone on the site take pictures at least once a week and send them to the project's key
representatives. Digital cameras are ideal for this. There is no cost or time spent to process the
images, and they can be distributed quickly.

DC Site Design
Assessing Viable Locations for Your Data Center

When the time comes for your business to build a server environment, it is essential that the people
responsible for the Data Center's design have an opportunity to provide input into where it is
constructed. Traditionally, upper management decides what property to purchase, based upon a variety
of a company's wants, needs, and business drivers. Other purchase considerations might include a
parcel's price tag, its proximity to a talented labor pool, advantageous tax rates, or the desire to have a
corporate presence in a particular geographic area. Whatever the drivers are, a property's suitability to
house a Data Center must be among them. Purchasing or leasing a site without considering this greatly
hampers the Data Center's capability to protect company servers and networking devices. Not making
this a consideration also invariably leads to additional expense, either to retrofit the land's undesirable
characteristics or to add more infrastructure to compensate for them.

An ideal Data Center location is one that offers many of the same qualities that a Data Center itself
provides a company:

 Protection from hazards
 Easy accessibility
 Features that accommodate future growth and change

These qualities are fairly obvious, like saying that it is easier for an ice chest to keep drinks chilled when
it is also cold outside. Less apparent are what specific characteristics improve or hamper a property's
usability as a Data Center location and why.

Building Codes and the Data Center Site

The first step when evaluating an undeveloped property's suitability as a Data Center site is a
determination of how the property is zoned. Zoning controls whether a server environment is allowed
to be built there at all. Zoning is done in a majority of countries and reflects how the local government
expects a parcel of land to be used. Some classifications prohibit a Data Center.

Site Risk Factors

Every parcel of land comes with unique hazards. Knowing the hazards associated with any property
upon which you consider placing a Data Center is very useful and should be a serious consideration.
Maybe the site is in a region known for earthquakes. Maybe it is in a flood plain. Perhaps it is close to an
electrical tower that generates electromagnetic interference. Regardless of whether the dangers are
naturally occurring or man-made, it helps to understand how they can affect a server environment, how
to alter your Data Center's design to prepare for them, and who can provide information about whether
a hazard applies to a particular property. In many cases, the local planning or public works department is
an excellent resource. Your company can also hire a risk management firm to gather applicable hazard
information about a site.

As you read the descriptions that follow about various hazards and the suggestions for how to mitigate
their influence, keep in mind that the absolute best way to avoid a threat to your Data Center is by
keeping it out of harm's way altogether. If a property has multiple risk factors, your company needs to
decide if the merits of the site outweigh the cost of additional infrastructure to compensate for those
hazards and the possibility that a colossal disaster can still overwhelm those preparations.

Natural Disasters

When considering risk factors connected to a property, most people think of natural disasters—
catastrophes that devastate a broad geographic area. That's understandable. These events affect
countless lives, do tremendous property damage, and garner significant media coverage. The following
sections describe several that can threaten a potential Data Center location.

Seismic Activity

Earthquakes are caused when tectonic plates within the earth shift, releasing tremendous amounts of
stored energy and transmitting powerful shock waves through the ground. The closer to the surface a
shift occurs, the stronger the quake that is felt. Earthquakes are measured in two ways:

 Magnitude refers to its size, which remains the same no matter where you are or how strong
the shaking is.
 Intensity refers to the shaking, and varies by location.

The most powerful earthquakes can topple buildings, buckle freeways, and cause secondary disasters,
including fires, landslides, and flash floods—all extremely hazardous conditions that you want your Data
Center to be well away from, or at least as insulated as possible against. Even a moderate quake that
causes minimal property damage can tip over Data Center server cabinets, sever underground data
cabling, or induce utility power outages.

If your Data Center site is in an area known for seismic activity, the entire building should be designed to
lessen earthquake impacts. Limit the planned heights of buildings, consolidate weight onto the lowest
floors, and use high-quality building materials that can withstand shaking and won't easily catch fire.
Anchor the building's structure to the foundation and use earthquake-resistant technologies such as
steel frames and shear walls. Finally, limit the number of glass exterior walls and, no matter what
architectural style is applied to the building, make sure that all balconies, chimneys, and exterior
ornamentation are securely braced.

Ice Storms

When weather conditions are right, freezing rain can blanket a region with ice, making roads impassable
and triggering widespread utility power outages for hundreds of square miles or kilometers. These ice
storms occur when relative humidity is near 100 percent and alternating layers of cold and warm air
form. Unlike some natural disasters that occur suddenly and dissipate, severe ice storms can last for
days. Because they cover a huge area and make it difficult for repair crews to get around, it can take
several weeks for normal utility service to be restored to an area.

If your Data Center site is in a region susceptible to ice storms, operate under the assumption that the
room might need to run on standby power for extended periods of time and that contracted services for
refueling your standby generator, assuming you have one, might be unreliable. Consider this when
deciding what tier of infrastructure to build your Data Center to. Additional battery backups or standby
generators with greater fuel capacity might be in order.

Be aware that the wintry cold that contributes to an ice storm can itself threaten your building's
infrastructure. When temperatures approach 32° Fahrenheit (0° Celsius), ice blockages can form within
water pipes. High pressure then occurs between the blockage and an end faucet or valve, which can
cause the pipe to burst. Because the liquid inside is frozen, a break in a pipe might go unnoticed until it
thaws. Thoroughly insulate your building's piping and perform regular maintenance to reduce the
likelihood of a burst pipe.

Hurricanes

Hurricanes, alternatively known in parts of the world as tropical cyclones or typhoons, are severe
tropical storms capable of generating winds up to 160 miles per hour (257.5 kilometers per hour). (A
tropical storm is not officially considered a hurricane until its winds reach at least 74 miles per hour
[119.1 kilometers per hour].) Hurricanes form over all of the world's tropical oceans except for the South
Atlantic and Southeastern Pacific. Although they do not start on land, powerful hurricanes have been
known to come inland for hundreds of miles or kilometers before dissipating, causing widespread utility
power outages and sometimes spawning tornadoes.

If your Data Center site might be in the path of a hurricane in the future, design the room without
exterior windows. Transparent views into your server environment are not a good idea at any time,
because they create an unnecessary security risk, and should especially be avoided should the building
be struck by a hurricane. A hurricane's high winds can propel large debris through a glass window, even
one that has been taped or boarded over.

Locate the server environment at the center of the building, if possible, and surround it with cement
interior walls. If the Data Center must be near an external wall, surround it with a service corridor. All of
your site's major infrastructure components should likewise be sheltered to withstand high winds.

Additionally, because hurricanes often cause power failures that last for days, design your Data Center
with adequate standby power to continue functioning for that long.

Besides high winds, hurricanes carry tremendous amounts of water. If a hurricane passes anywhere in
the vicinity of your Data Center site, there is an increased chance of moisture entering the buildings
there. For instance, external air vents on a building are typically oriented downward and covered with a
protective lip. Although this is sufficient to keep out moisture from even a heavy rainstorm, a storm
driven by hurricane winds projects water in all directions—including up into a downward-facing vent.
Install additional barriers in the Data Center building to make it more water resistant. Consider having a
subroof, for example, that can continue to protect the Data Center if a storm damages the main roof.

Tornadoes

A tornado is an intense rotating column of air. Created by thunderstorms and fed by warm, humid air,
they extend from the base of a storm cloud to the ground. They contain winds up to 300 miles per hour
(482.8 kilometers per hour), and can inflict great swaths of damage 50 miles (80.5 kilometers) long and
more than a mile (1.6 kilometers) wide. Tornadoes can cause significant property damage, trigger utility
power outages, and generate large hail. The most powerful tornadoes are capable of throwing cars and
other large debris great distances, leveling homes, and even stripping bark off of trees.

If your Data Center site is in an area where tornadoes occur, it should be designed with the same
safeguards as for a hurricane—avoid external windows on the Data Center and provide enough standby
power systems to do without commercial power for extended periods of time.

Flooding

Flooding most often occurs because of torrential rains. The rains either cause rivers and oceans to rise
dramatically and threaten nearby structures or else trigger flash flooding in places with non-absorbent
terrain, such as pavement, hard-packed dirt, or already saturated soil. Although less frequent, flooding
can also occur from a break in a dam or other water control system. Severe flooding can uproot trees
and move parked cars, breach walls, and make roadways impassable. Flooding can also trigger utility
outages and cause landslides.

If your Data Center site is in an area prone to flooding, make sure that the building's walls are
watertight, reinforce the structure to resist water pressure, and build on elevated ground. If the
property has no elevated ground, then consider building the Data Center above the ground floor. This
keeps your company's most important equipment out of harm's way if water does reach the structure.

Placing the Data Center above the ground floor, however, affects other elements of the building's
design. First, a building's weight-bearing capability is less on its upper levels than on the ground floor. To
compensate for this, either structurally reinforce the Data Center area or else accept significant
limitations upon the acceptable weight of incoming server equipment. Current trends in server design
are for more compact form factors that make for heavier weight loads, so in most instances
reinforcement is the better option.

Second, if the Data Center is not on the ground floor, the building must have a freight elevator to
accommodate incoming equipment and supplies. The elevator must be tall, wide, and deep enough to
accept server cabinets, tape libraries, or pallets of materials. The elevator must also have the ability to
support the equipment's weight as well as that of the pallet jack and people transporting them.

Landslides

A landslide occurs when a hill or other major ground slope collapses, bringing rock, dirt, mud, or other
debris sliding down to lower ground. These flows can cause significant property damage, either in a
single fast-moving event or gradually over time. Slides, also known as earthflows or mudflows, are
propelled by gravity and occur when inclined earth is no longer stable enough to resist its downward
pull. Earthquakes, heavy rainfall, soil erosion, and volcanic eruptions commonly trigger landslides.

If your Data Center site is in an area prone to slides, the environment should be designed with
safeguards similar to those for flooding—make exterior walls watertight and strong to withstand sliding
muck and debris and build on elevated ground. Other advisable practices are the construction of a
retention wall or channel to direct flows around the Data Center building and the planting of
groundcover on nearby slopes.

Parcels at the base of a steep slope, drainage channel, or developed hillside are more susceptible to
landslides. Slopes that contain no vegetation, such as those burned by fire, are also more vulnerable to
them. Trees, fences, power lines, walls, or other structures that are tilted on a site might be an
indication of a gradual slide. Local geologists as well as those in the planning or public works department
can tell you whether a particular property is vulnerable to landsliding.

Fire

Fires are the most common of natural disasters. They cause significant property damage, spread quickly,
and can be started by anything from faulty wiring to lightning strikes to intentional arson. Even a coffee
maker in a break room is a potential source of a fire. Large fires can span tens of thousands of acres and
threaten numerous buildings. Even the act of extinguishing a fire once it has entered a structure can
lead to millions of dollars in losses from water damage. Additionally, a fire that fails to reach your Data
Center can still cause problems. Minor amounts of smoke from a blaze can clog the sensitive
mechanisms within servers and networking devices, causing them to malfunction later.

The best ways to deal with fire in the design of your Data Center are prevention and early detection.
Install fire-resistant walls and doors, smoke detection devices, and fire suppression systems, both in the
Data Center and throughout the building. It is also desirable for the building to have adjustable dampers
on its ventilation and air conditioning system. This enables you to prevent outside air from entering the
server environment during a nearby brush or building fire.

Once your server environment is online, remove potential fuel for a fire by equipping the room with
fireproof trash cans and prohibiting combustible materials in the Data Center such as cardboard. Be sure
to keep brush and other flammable items cleared away from the building, too.

Pollution

Just as smoke particles from a fire can interfere with the proper functioning of servers and networking
devices, so too can other airborne contaminants such as dust, pesticides, and industrial byproducts.
Over time, these pollutants can cause server components to short-circuit or overheat.

If your Data Center is built in a region where contaminants are present, protect your equipment by
limiting the amount of outside air that is cycled into the room. The percentage of external air that must
be circulated into a Data Center is normally controlled by regional building codes or building control
standards. The ratios of internal and external air are based upon the size of the server environment and
its expected occupancy. A Data Center that has personnel working in it throughout the day is typically
required to incorporate more outside air than a Data Center whose staff are in the room less
frequently. Some municipalities even allow zero external air if no employees work throughout the day in
the server environment.

A second method of protecting your Data Center is incorporation of high efficiency air filtration into the
environment's air conditioning system. Be sure to schedule frequent and regular filter changes for all
Data Center air handlers.

Electromagnetic Interference

Electromagnetic interference, or radio frequency interference, occurs when an electromagnetic field
interrupts or degrades the normal operation of an electronic device. Such interference is generated on a
small scale by everyday items ranging from cellular phones to fluorescent lights. Large sources of
interference, such as telecommunication signal facilities, airports, or electrical railways, can interfere
with Data Center servers and networking devices if they are in close proximity.

Electromagnetic interference is particularly challenging because it's not always easy to tell that your
Data Center devices are being subjected to it. Even when that is known, you may not be able to
immediately ascertain what the source of interference is. System administrators, network engineers,
and others who work directly with the equipment are most likely to see symptoms first, even if they
don't realize their cause. If you learn of a server experiencing unexplained data errors and standard
troubleshooting doesn't resolve the problem, check around for possible sources of electromagnetic
interference.

If your property is near an identified source of interference, locate the Data Center as far away as
possible to limit the effects. All manner of shielding products—coatings, compounds, and metals;
meshes, strips, and even metalized fabric—are available to block electromagnetic interference, but most
of them are intended for use on individual devices rather than over a large Data Center. Again, distance
from the source of interference is the best protection. That's because electromagnetic interference
works according to the inverse square law of physics, which states that the intensity of a quantity
radiating from a point source is inversely proportional to the square of the distance from that source.
The law applies to gravity,
electric fields, light, sound, and radiation.

So, if a Data Center is located twice as far from a source of electromagnetic interference, it receives only
1/4 of the radiation. Likewise, if a Data Center is 10 times as far away, it receives only 1/100. To see an
example of this effect, shine a flashlight (torch) against a wall. Back away from the wall, increasing the
wall's distance from the light source (the mouth of the flashlight), and the circle of light against the wall
becomes larger and fainter. Move closer, reducing the distance between wall and the light source, and
that circle of light becomes smaller and more intense.
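
The relationship is easy to tabulate. The short Python sketch below prints the relative interference
intensity at a few multiples of an arbitrary reference distance, illustrating the 1/4 and 1/100 figures
mentioned above.

# Relative interference intensity under the inverse square law.
# Values are relative to the intensity measured at the reference distance.
for multiple in (1, 2, 5, 10):
    relative_intensity = 1 / multiple ** 2
    print(f"{multiple:>2}x the distance -> {relative_intensity:.0%} of the interference")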

Vibration

Servers and networking devices, like other complex and sensitive electronic equipment, are vulnerable
to vibrations as well. As when dealing with electromagnetic interference, there are several commercial
products available to inhibit vibrations from reaching Data Center servers—from springs to gel-filled
mats to rubber mounts—but the most effective solution is simply to avoid locating your Data Center near
large vibration sources. Airports, railroads, major thoroughfares, industrial tools, and road construction
are common sources of vibrations.

Political Climates

Among the most challenging risk factors to diagnose and prepare a potential Data Center site for are the
man-made kind. Political instability in a region can delay the delivery of Data Center equipment and
materials, make utility services unreliable, and—worst of all—threaten the safety of employees.
Depending upon how contentious conditions are, workers of certain nationalities might even be
prohibited from traveling into the region.

When dealing in an area with conflict, adjust your Data Center project timelines to accommodate delays.
Design the server environment itself with standby power systems to support the room if utility services
fail. Reinforce building walls to withstand explosions. Install safety bollards around entrances and any
external infrastructure, such as generators, to protect against someone ramming a vulnerable area with
a car or truck. Consider placing security fencing around the entire site.

Flight Paths

If there's an airport in the region of a potential Data Center site, be aware of the flight paths that
incoming and outgoing planes regularly follow. Although crashes or debris falling from aircraft are rare,
the effect can be devastating if something does strike your Data Center.

How should you prepare for this unlikely event? Even if your property lies in the path of a busy airport, it
is probably not cost effective to make your Data Center an impenetrable bunker. A more practical
solution is to distribute your servers. Build two smaller server environments and place them in separate
locations, even if just two different buildings on the same property. As unlikely as it is for your Data
Center to be struck by an out-of-control plane, it is that much less likely for two rooms to suffer the
same fate.

Evaluating Physical Attributes of the Data Center Site

Once you are aware of the risk factors facing a potential Data Center site, it is time to assess the physical
features of the property by answering the following questions:

 Where is the site?
 Is it easy to reach?
 Does it have existing structures?
 If so, how suited are they to housing a server environment?
 Specifically, how well does the site support the key design strategies for constructing a
productive Data Center?

Remember, you want your Data Center to be robust, modular, flexible, and standardized, and to
intuitively promote good practices by users.

Relative Location

There's an old saying in real estate that the three most important features about a property are location,
location, location. The saying is equally true when evaluating a potential Data Center site, albeit for a
different reason. Whereas a home buyer might care about location because of a residence's vicinity to a
posh neighborhood, a Data Center designer cares because of how easy it is to reach the property and
where it is in relation to the company's other server environments.

Accessibility

When examining a property, make note of how easy it is to enter and leave by answering questions such
as the following:

 Is the site visible from a major roadway?
 Are there multiple routes to reach the property, or just one?
 Could a hazardous materials spill or major traffic accident at a single intersection block access to
the site?

Treat the property's accessibility the same as other Data Center infrastructure details—look for
redundancy and stay away from single points of failure.

An ideal Data Center site can be reached easily and has several means of ingress and egress. A property
with limited access affects the everyday delivery of equipment, because large trucks might be unable to
reach the site. Limited access also influences the response time for emergency service vehicles to reach
the site in a crisis.

Finally, determine if the property is located near large population centers. This influences how close
your employees live and therefore how long it might take someone to reach the Data Center after hours
if an emergency occurs.

Disaster Recovery Options

There are countless publications that thoroughly explain how and why to create a business continuation
strategy for your company. While that topic isn't the focus of this guide, it is a good idea to think about
how a potential Data Center site fits into your company's disaster recovery plan.

If your plan calls for transferring business functions from one Data Center to another, for example, note
the distance between the property you are evaluating and your company's other server environments
and answer the following questions:

 Are the locations close enough that network latency won't be a problem?
 Can employees travel from one site to another in a reasonable amount of time, even if major
roadways are blocked or airline flights aren't operating normally?
 Are the locations far enough apart that they are both unlikely to be affected by a single disaster?

Likewise, if your company backs up information from the servers in your Data Center and stores the data
tapes off-site, where are those facilities in relation to your potential Data Center property? The greater
the distance between your Data Center and off-site storage facility, the longer it will take to retrieve and
restore the data after a disaster.

Pre-Existing Infrastructure

Many sites evaluated for housing a Data Center are at least partially developed, whether they have little
more than an empty building shell or a fully equipped office building with a pre-existing server
environment. Whatever the building was previously used for, diagnose if the infrastructure that's
already in place can accommodate your needs or at least be retrofitted to do so. Important
infrastructure considerations are power systems, cooling systems, and structured cabling, as described
in the sections that follow.

Power Analysis

Assess the property's power systems, including its electrical infrastructure and standby systems by
answering the following questions:

 How much power is readily available?
 Are there enough electrical circuits to support your Data Center?
 If not, is there enough physical capacity at the site to add more?
 Do power feeds come in to the building at more than one location?
 What alterations must be made to accommodate battery backup systems and standby
generators?
 If the site already has standby systems, are they of sufficient capacity to support your Data
Center?
 If the site doesn't have them, does it at least have the physical space and structural support for
them to be installed?

Make note of how much redundancy is present in the electrical infrastructure and what single points of
failure exist.

Cooling Capabilities

Data Centers require significantly more cooling infrastructure than the equivalent amount of office
space. Therefore, measuring the cooling capacity of a potential Data Center site is important. To assess
the cooling capacity of the site, determine the following:

 Can the building's existing cooling infrastructure provide adequate cooling for a Data Center?
 Is there adequate space and structural support on the site to support air chillers, condenser
units, or cooling towers?
 How much modification must be done to the building's existing air ducting to reroute cooling?

Structured Cabling

Determine how much and what type of structured cabling already exists in and to the building.
Determine if enough connections exist to support your Data Center and if cabling comes in to the
building at more than one location.

Certain cabling media have distance limitations, so it is a good idea to measure how far cable runs must
travel, both for the Data Center and throughout the building. Also make note of how much redundancy
is present in the cabling infrastructure and what single points of failure exist.

Amenities and Obstacles

Aside from whatever power, cooling, and cabling infrastructure a building already possesses, there are
several less obvious features that make a structure more or less amenable for housing a Data Center,
including the following:

 Clearances
 Weight issues
 Loading dock placement
 Freight elevator specifications
 Miscellaneous problem areas
 Distribution of key systems

Some of these elements can make a site completely unsuitable to housing a Data Center, while others
are merely matters of convenience. The sections that follow examine these elements in greater detail.

Clearances

One of the most basic features to examine about an existing structure is its physical dimensions. Some
of the questions you need to answer about the site's dimensions are as follows:

 Is there enough contiguous floor space to house your Data Center?
 How tall are the doorways?
 How wide are the halls?
 What's the distance from floor to ceiling?

These dimensions all need to be sufficient to enable Data Center equipment to pass through easily.

The area for the Data Center itself normally requires a minimum of about 13 feet (4 meters) from floor
to ceiling, and much more is preferable. The clearance is to accommodate the raised floor, the height of
most server cabinets, the minimum buffer space between the cabinet and the room's drop ceiling that is
typically required by local fire codes, and space above the drop ceiling where ducting is routed.
Additional space above the drop ceiling allows for easier and more effective cooling of the server
environment—more area means that a greater volume of cold air can be pumped in to the Data
Center—and so is desirable.
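
As a rough illustration of where that 13 feet (4 meters) goes, the Python sketch below adds up one
plausible set of allowances. Every figure is an assumption for illustration only; your raised-floor depth,
cabinet heights, fire code, and cooling design determine the real numbers.

# Illustrative floor-to-ceiling budget for a Data Center space (all values assumed).
raised_floor_ft = 2.0        # depth of the raised-floor plenum
cabinet_height_ft = 7.0      # tallest server cabinet expected
fire_code_buffer_ft = 1.5    # clearance required between cabinet tops and the drop ceiling
ceiling_plenum_ft = 2.5      # space above the drop ceiling for ducting

total_ft = raised_floor_ft + cabinet_height_ft + fire_code_buffer_ft + ceiling_plenum_ft
print(f"Minimum floor-to-ceiling height: about {total_ft:.0f} feet ({total_ft * 0.3048:.1f} meters)")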

An unobstructed pathway must also exist among the Data Center, its corresponding storage room, and
the exterior of the building, for transporting equipment. All entrances, corridors, doorways, and other
openings along this path must be at least 8 feet (2.4 meters) high and at least 4 feet (1.2 meters) wide.
These measurements are chosen to enable your tallest server cabinets and widest pallets of supplies to
be transported within the building and into the server environment easily. If you have Data Center-
related items that are larger in size, look for larger building clearances accordingly. That brand-new disk
library you purchase to perform data backups can't do you much good if it does not fit through the Data
Center doors.

Weight Issues

Once you've determined whether server cabinets and pallets of materials can be transported without
difficulty through the building, you need to make sure that none of them damage or crash through the
floor. Consider the structural capacity of the building and how much weight the floor is designed to
support, especially in the Data Center area. Pay particular attention to this if you intend to place the
server environment on an upper level—its weight-bearing capability is normally less than that of the
ground floor.

Loading Dock

Servers, cabinets, networking devices, or backup storage units can sometimes be damaged during
transport to your Data Center. When this does happen, it is often attributed to the equipment being
shaken while rolled across uneven ground or dragged over the lip of an entrance and having the item
thump forcefully to the ground under its own weight. Although you can't control what happens during
shipment, you can safeguard how equipment is treated once it arrives at your site.

Having a loading dock in close proximity to your Data Center reduces the chance of equipment damage,
so it is very helpful if a property you are evaluating has one. Equipment can be rolled a short distance
across level ground, either directly into the server environment or an associated storage room, rather
than having to be offloaded from an elevated truck bed and shuttled a longer distance.

Freight Elevators

As stated earlier in the chapter, a freight elevator is mandatory if your Data Center is located anywhere
but on the ground floor. As with the doorways and corridors, the freight elevator must be at least 8 feet
(2.4 meters) high and at least 4 feet (1.2 meters) wide so as to accommodate everything from tall server
cabinets to wide pallets of equipment. The freight elevator must also have enough weight-bearing
capability to carry a fully loaded server cabinet. Today's heavier systems can exceed 1500 pounds (680
kilograms) per server cabinet location, and it is reasonable to assume that that number will increase.

If your company site doesn't have a suitable freight elevator, you might be forced to take drastic
measures to bring large equipment in and out. Without a suitable freight elevator, large equipment
bound for an upper-floor Data Center must be hoisted or carried up by hand.

Problem Areas

A key reason to have someone with Data Center design and operation experience help evaluate a
building is to identify non-obvious trouble spots. Determining whether a structure has adequate
infrastructure or tangible facilities such as a loading dock or freight elevator is a straightforward
exercise; however, some buildings might have problem areas—from a Data Center perspective—that
are not as easily noticed.

Carefully examine all aspects of the building, large and small, to ensure that nothing can interfere with
the operation of a server environment. Consider issues such as the following:

 Where are immovable building elements such as structural columns and stairwells?— These
might restrict how much floor space is usable for a Data Center.
 Does the building have a kitchen or cafeteria?— This is a potential fire hazard, and if a site has
multiple structures, kitchens or cafeterias should be located in a different building from the Data
Center.
 Where are the building's water pipes?— Plumbing can leak and therefore shouldn't be routed
above the server environment.

Distribution of Key Systems

As you examine the site's existing infrastructure, look closely at how the systems are configured. You
ideally want important systems, such as power feeds and data cabling, to be spread out, each entering
the building at more than one location. Such physical separation helps protect infrastructure systems—
two cable runs following different paths are less likely to both be damaged by a single event than if they
each follow the same path, for example. Standby power systems such as generators or backup batteries
make the site more robust, and are even more beneficial if they are dispersed on a property rather than
clustered together.

Confirming Service Availability to the Data Center Site

Arguably more important than what infrastructure already exists at a potential Data Center site is
what utility services can be provided to it. It is fairly simple to have a contractor come out and install
data cabling if a property lacks it, for example, but you still can't communicate with the outside world if
there's no service provider offering connectivity. Make sure that the property has—or can be provided
with—adequate power and data connections for the Data Center, along with the standard water,
telephone, gas, and other utilities that any office environment requires.

Aside from power outages that can be caused by natural disasters, some parts of the world simply have
less reliable electrical infrastructure than others. Brownouts or momentary dips in power might be
common in these regions, which increases the need for your Data Center to have dependable standby
power. Just as a car engine undergoes the most stress when it is first started, so too does a standby
power system experience the most strain when a server environment's electrical load is first placed
upon it. Frequently cranking a car's engine—or transferring a Data Center's electrical load—causes much
more wear and tear than if the same equipment ran continuously for an extended time.

The corresponding local service providers can tell you what power and data lines exist on and around a
property. When talking to the electric company, ask if it is possible to have the Data Center fed by more
than one substation or power grid, thereby providing your facility with another layer of redundancy.
When talking to the Internet service provider, determine what types and quantities of cabling are in the
ground, both on the property and in the surrounding area.

Prioritizing Needs for the Data Center Site

As you review potential Data Center sites, you'll find that there are no perfect properties, that is, parcels
with zero risk factors, all of the physical features you want, and the specific types and amounts of
infrastructure you are looking for. Many properties are completely inappropriate for housing a Data
Center, while even the most suitable are a mixed bag. Perhaps a site is in a seismically stable area and
well away from sources of pollution, electromagnetic interference, and vibration, but is vulnerable to
hurricanes or tornadoes. Maybe a property has an existing building that's easily accessible and
possesses adequate electrical capacity and incoming data connectivity, but has no loading dock.
Whatever the details, all parcels have their unique features and conditions, advantages and drawbacks.

Prioritize what characteristics are most important based upon the specific needs of your company. If you
know your business uses large, floor-standing servers, for example, then a building with ample
clearances and a loading dock is essential. If your business strictly employs high-density, low-profile
servers, then those characteristics are less valuable than a building with abundant cooling capacity and
available electrical circuits. Both scenarios, however, require a structure with high weight tolerances.

During the process of selecting a site, you have to answer the Data Center design version of "which
came first, the chicken or the egg?" In this case, the question involves a property's risk factors versus
your Data Center's infrastructure. Do you opt to add extra layers of infrastructure because the Data
Center must be built in a more hazardous area, or do you agree to build in a more hazardous area
because the room is equipped with additional infrastructure? You might be less concerned with locating
your server environment in a region with less reliable commercial power if you already plan to build a
Data Center with 3N standby power, for example.

Sizing the Data Center

Nothing has a greater influence on a Data Center's cost, lifespan, and flexibility—even its capability to
impress clients—than its size. Determining the size of your particular Data Center is a
challenging and essential task that must be done correctly if the room is to be productive and cost-
effective for your business. Determining size is challenging because several variables contribute to how
large or small your server environment must be, including:

 How many people the Data Center supports
 The number and types of servers and other equipment the Data Center hosts
 The size that non-server areas should be depending upon how the room's infrastructure is
deployed

Determining Data Center size is essential because a Data Center that is too small won't adequately meet
your company's server needs, consequently inhibiting productivity and requiring more to be spent on
upgrading or expansion, which in turn puts the space and the services within it at risk. A room that is too big
wastes money, both on initial construction and ongoing operational expenses.

Financial and Other Considerations When Sizing the Data Center

A smaller Data Center is obviously less expensive in the short term to build, operate, and maintain than
a larger one. Data and electrical cables are routed shorter distances, less fire suppression material is
needed to provide coverage over a reduced area, and—as anyone who has ever moved from a cozy
home to a spacious one has discovered with their utility bill—it costs less every month to power and
regulate temperature in a small space than a large one. From a risk management perspective, there is
also a benefit from constructing a smaller Data Center and then, when the Data Center fills up and more
hosting space is needed, building a second Data Center at another location. For these reasons, it is no
surprise that many companies decide to dedicate the least building space possible to host their servers,
only devoting greater area when absolutely necessary. While a small server environment is completely
appropriate in many cases, it can prove limiting—and ultimately more expensive in the long run—in
others.

The best practice is to design a Data Center with a lifespan of a few years and plan to expand that
server environment when it nears full capacity. If it is certain your company must
have a significant amount of Data Center space eventually, consider building a larger room up front. The
larger a server environment, the greater the economy of scale that is achieved. Just as having a house
painter complete two bedrooms doesn't cost twice as much as working on one, having a cabling vendor
run 10,000 yards or meters of data cabling doesn't cost twice as much as running 5,000. There are basic
expenses that are always incurred on a construction job, large or small. Many materials are also less
expensive per unit when purchased in greater quantities, which can save hundreds of thousands of
dollars over time. Quantity price breaks can apply to not only infrastructure components used in the
construction of the Data Center, but also to consumables and other supplies that are used on a day-to-
day basis in a functioning server environment, including:

 Server cabinets
 Patch cords
 Custom signage
 Multimedia boxes and patch panels into which structured cabling terminates
 Even rolls of hazard tape for marking electrical infrastructure are frequently eligible for discount
when bought in bulk

A bigger Data Center also addresses a company's server hosting needs for a greater period of time,
because it obviously takes longer to fill the available floor space. You don't want to spend months
constructing a new Data Center, only to have it fill up within a year and require you to start the process
all over again. Running two back-to-back Data Center projects instead of one is an inefficient use of staff
time and, assuming you are expanding the same server environment, unnecessarily exposes servers to
the chaos and downtime risks that any major construction effort brings.

With the costs of labor and material normally rising over time, a business is likely to spend less money
overall to build a large Data Center all at once, rather than building a small room to start and then
expanding it to the larger size within a couple of years. When laying out a Data Center, a larger footprint
also provides greater flexibility. Structural columns, dogleg spaces, and mandatory clearances are all
easier to accommodate when the Data Center floor space is larger. It is like trying to put a suitcase into a
car trunk that already contains a spare tire. The bigger the trunk, the easier it is to work around that tire.

Finally, don't forget that the size of a Data Center has a psychological effect upon those who tour it or
work within it. While psychological impact should not be the overriding factor
for determining the size of this facility, be aware of it. A large Data Center teeming with rows of servers
and networking devices presents an image of technology, substance, and productivity—more so than a
small server environment. If your business provides tours of its facilities to clients, prospective
employees, or members of the public, a sizable Data Center can be a showpiece that helps reinforce
your company's professional image and credibility.

So, as you size your Data Center, keep in mind the advantages and disadvantages provided by both
smaller and larger footprints. Ideally, you want to create a server environment that is large enough to
accommodate your company's server needs for a reasonable length of time and achieve an economy of
scale, but not so large that money is wasted on higher operational expenses and portions of the Data
Center that are unoccupied.

Employee-Based Sizing Method

A good initial approach to sizing a Data Center is to determine the number of employees the Data
Center is intended to support and allocate a certain amount of floor space per person. The premise is
that the more employees your company has, the more critical equipment that is required for them to
perform their work; the more critical equipment your company possesses, the larger the Data Center
must be to host the equipment.

This method is most appropriate for Data Centers that house engineering or development servers. Such
machines are directly supported and worked upon by company employees—the more employees
working to develop future products, the more servers that are typically necessary. Production or
business Data Centers that host predominantly information technology (IT) functions are less influenced
by employee populations—your company needs servers to perform certain core business functions
whether 50 people are employed or 5000.

When using the employee-based method, count only those employees whose roles are associated with
Data Center servers and networking devices. Your administrative staff is undoubtedly composed of
indispensable people who keep your business functioning like a well-oiled machine, but if their work
does not involve your company's servers, don't count them when sizing the room that houses that
equipment. Either a human resources representative or the IT manager working on the Data Center
project can provide a tally of people whose work depends upon your company's server environment.

Understanding that the proportion of Data Center floor space to the number of employees the Data Center
supports is not linear is the key to sizing a Data Center on a per capita basis—the ratio changes. A large
Data Center can not only support more employees than a small one, it can support more employees per
square foot (square meter). This is because a minimum amount of floor space in any Data Center has to be devoted to non-server
functions, regardless of whether the room is large or small. This non-server space includes areas for
infrastructure equipment, such as air handlers and power distribution units, as well as areas to transport
equipment through, such as entrance ramps (assuming that the Data Center has a raised floor) and
walkways. Once non-server areas are established in the Data Center's design, they do not grow
proportionally as the rest of the room does.

For instance, assume that a Data Center has five server rows. Wide aisles surround the rows, and an
entrance ramp is located inside the room. The space is sufficiently cooled by two air handlers, but
because the room is designed to N+1 capacity, it has a total of three air handlers. Compare that with
another Data Center, designed to the same level of infrastructure, but containing 15 server rows instead
of five. Because the larger Data Center has triple the number of server rows, does it need to be three
times the size of the smaller Data Center? No. The larger server environment does need a proportional
increase in space for its server rows and aisles, but not for other non-server areas. For one, only six air
handlers are needed to provide adequate cooling and a seventh to achieve N+1 capacity—not nine.
Also, the entrance ramp doesn't occupy any more floor space, even if there are more server rows. The
cumulative space saved from not having to proportionally increase these non-server areas is
considerable.

The smaller Data Center is 1301 square feet (120.9 square meters) in size and contains 36 cabinet
locations, whereas the larger Data Center is 2270 square feet (210.9 square meters) and contains 78
cabinet locations. That is more than twice the cabinet locations while only increasing the size of the
server environment by about three-quarters. To put it another way, a company devotes 36.1 square feet of
floor space per server cabinet in the smaller room—1301 ÷ 36 = 36.1—and only 29.1 square feet of floor
space per server cabinet in the larger room—2270 ÷ 78 = 29.1. (The metric equivalents are 3.4 square
meters of floor space per server cabinet in the smaller room—120.9 ÷ 36 = 3.4—and only 2.7 square
meters of floor space per server cabinet in the larger room—210.9 ÷ 78 = 2.7.)
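
To make the comparison concrete, the following short Python sketch reproduces the arithmetic above. The room sizes, cabinet counts, and the two-air-handlers-per-five-rows ratio are the illustrative values from this example rather than fixed standards, so treat the function defaults as assumptions to be replaced with your own figures.

import math

def space_per_cabinet(room_sqft, cabinets):
    """Floor space devoted to each cabinet location, in square feet."""
    return room_sqft / cabinets

def air_handlers_needed(server_rows, rows_per_handler=2.5, spares=1):
    """Handlers required for cooling plus N+1 spares (two handlers per five rows assumed)."""
    return math.ceil(server_rows / rows_per_handler) + spares

rooms = {"smaller": {"sqft": 1301, "cabinets": 36, "rows": 5},
         "larger":  {"sqft": 2270, "cabinets": 78, "rows": 15}}

for name, room in rooms.items():
    print(f"{name}: {space_per_cabinet(room['sqft'], room['cabinets']):.1f} sq ft per cabinet, "
          f"{air_handlers_needed(room['rows'])} air handlers including the N+1 unit")
# smaller: 36.1 sq ft per cabinet, 3 air handlers including the N+1 unit
# larger: 29.1 sq ft per cabinet, 7 air handlers including the N+1 unit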

If you are asked to provide an approximate size for a Data Center before having a chance to put pen to
paper—or more likely mouse to AutoCAD drawing—begin with an estimate of about 10 square feet (1
square meter) for every Data Center-related employee. The formula is easy to remember because it allocates
roughly one server cabinet of equipment per person and accounts for a similar amount of non-server
space, such as walkways or places for infrastructure components. Use this formula when dealing with
about 100 employees or fewer. Adjust the formula steadily downward for larger employee counts,
because bigger Data Centers can accommodate a greater proportion of servers.
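
As a rough planning aid, this rule of thumb can be expressed in a few lines of Python. The 10 square feet (1 square meter) per person starting point comes from the guideline above; the 20 percent reduction applied beyond 100 employees is an illustrative assumption, since the text only says to adjust the figure steadily downward.

def estimate_datacenter_sqft(dc_related_employees, base_sqft_per_person=10.0):
    """Approximate Data Center floor area from the employee-based sizing method."""
    if dc_related_employees <= 100:
        return dc_related_employees * base_sqft_per_person
    # Larger rooms host proportionally more servers, so allocate less area per
    # additional person beyond 100 (an assumed 20 percent reduction).
    extra = dc_related_employees - 100
    return 100 * base_sqft_per_person + extra * base_sqft_per_person * 0.8

print(estimate_datacenter_sqft(60))    # 600.0 sq ft
print(estimate_datacenter_sqft(250))   # 2200.0 sq ft, reflecting the downward adjustment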

NOTE

Sizing a Data Center strictly by the number of employees obviously isn't an exact science. After working
on dozens of Data Center design projects, though, I've found that 10 square feet (1 square meter) per
employee is a reliable starting point for sizing smaller server environments. The rare times that the
formula has been off, it has designated slightly too much area for the Data Center, providing me the
luxury of giving back floor space during the design phase. This has always been welcomed by other
project representatives, who usually need additional room for something.



Equipment-Based Sizing Method

Although the number of people your Data Center supports should influence the size of the room, don't
let it be your only guide. Refine your calculations based on the number and type of servers the room
must host.

The more you know about what servers are coming in to the Data Center, both when it first opens and
over time, the more accurately you can size the room. The IT manager and network engineer involved
with the Data Center project are excellent resources for what servers and networking devices are to be
ordered and must be hosted in the server environment in the future. They can also tell you about the
physical dimensions of these items, which can come in a variety of shapes, sizes, and weights.

Some servers, for example, are bigger than a refrigerator while others are no larger than a pizza box.
Most servers, fortunately, are configured to fit within one of a few standard cabinet profiles prevalent in
the server industry. Some devices are intentionally designed to fit only in proprietary cabinets,
however—forcing you to buy the manufacturer's cabinets along with its servers—and these unique
cabinets can be oversized or irregularly shaped. Still other servers require that they be installed on rails
and then pulled out entirely from the cabinet—imagine fully extending a drawer on a filing cabinet—to
perform maintenance or to upgrade their internal components. Don't forget that the size and type of
servers used in your Data Center affect not only the depth of the server rows, but also the space needed
between those rows.

The current trend in server design is to make them more compact—smaller in height but also deeper
than older models, which can increase the depth needed for your Data Center's server rows. The lower
profile of these servers enables more to be installed in a cabinet—if your server environment can
accommodate the greater port density, heat generation, and weight that comes with having so many
servers in a small area. If your Data Center is designed to support these more infrastructure-intensive
machines, then your company can take advantage of their space-saving potential and likely house them
in a smaller room. If not, you might need to spread out the servers so that they take up about as much
space as their larger predecessors.

When considering incoming equipment for purposes of sizing a Data Center, pay special attention to any
devices that aren't specifically designed to go in to a server environment. These miscellaneous items
frequently require greater floor space due to their irregular footprints, inability to fit into standard
server cabinets, or access panels that require a large clearance area to open and perform maintenance.

Use data about incoming equipment to estimate how many server cabinet locations are to be occupied
when the Data Center first opens, and how rapidly more servers are to arrive. You can learn a lot about
future hardware by talking with your IT manager. Ask what major projects are budgeted for in the
coming fiscal year, including their timelines and what specific servers are expected to be ordered. The
more you know, the better you can size the Data Center, not to mention ensure that cabinet locations
have the appropriate electrical and data infrastructure in place.

For example, suppose that your initial equipment—servers, networking devices, and backup gear—fills
20 cabinets and that your IT manager expects to buy servers to fill three more cabinets every month for
the next six months. If this is considered typical growth, extrapolate that to estimate 56 cabinets
occupied by the end of the year, 112 in two years, and 168 in three. You want your Data Center to
accommodate incoming equipment for a few years before needing to expand, so in this example the
server environment should definitely have enough floor space for 112 cabinet locations and likely closer
to 168.

Note that, when extending the equipment growth rate from the first year, you include the devices that
were initially installed in the Data Center. The assumption is that demand for that equipment is going to
grow at the same rate as for servers that are installed later. Even if the equipment that's arriving in the
first few months is for a special project rather than typical growth, it is probably valid to include the
equipment from the special project in your estimate because future special projects are likely to crop up
as well.
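
The extrapolation described above can be sketched in a few lines of Python. Following the text's assumption, the first-year total (the initial install plus the continued monthly growth) is treated as the annual growth rate for later years; the numbers are the chapter's illustrative values.

def project_cabinets(initial_cabinets, cabinets_per_month, years=3):
    """Projected occupied cabinet counts at the end of each year."""
    first_year_total = initial_cabinets + cabinets_per_month * 12
    # The year-one total, including the initial install, becomes the yearly growth rate.
    return [first_year_total * year for year in range(1, years + 1)]

print(project_cabinets(initial_cabinets=20, cabinets_per_month=3))   # [56, 112, 168]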

When in doubt for sizing, round up. It is better to have a handful of extra spaces for servers in your Data
Center than to prematurely run out of capacity.

Other Influencing Factors When Sizing Your Data Center

Once you have an idea of how many people your Data Center is to support and of the number and size
of incoming servers and networking devices, there are still other elements to consider when sizing the
room:

 Do you want to locate major infrastructure components within the Data Center or elsewhere?—
Traditionally, air handlers and power distribution units are located in the server environment,
but it is possible to place them somewhere else. If you choose to put air handlers and power
distribution units in a space adjacent to the Data Center rather than inside, for example, less
space is needed in the server environment itself but more must be set aside immediately next to
the room to house those infrastructure components. In addition, fire suppression containers are
most often located in a dedicated closet area, off of the Data Center, but can also be placed
within the server environment.
 How much space do you want around server rows?— Building codes often require a minimum
of 36 or 42 inches (91.4 or 106.7 centimeters) for walkways. If you plan to give tours of the Data
Center on a regular basis, consider making the main thoroughfares wider. The additional space
makes it easier to accommodate large tour groups and reduces the chances of a visitor
accidentally snagging a dangling patch cord or power cable. If your Data Center is in a seismically
active area and you choose to install seismic isolation platforms, you must provide additional
clearance to enable the platforms to sway in an earthquake.
 Do structural reinforcements need to be accommodated?— If the Data Center is in a region at
risk for hurricanes, earthquakes, or terrorist attacks, the room may require thicker walls and
structural columns. A secondary roof may also be appropriate. Reinforcements add to the size of
these Data Center elements, which can in turn alter the overall size of the room.
 Assuming that your Data Center has a raised floor, is the entrance ramp going to be located
inside the Data Center or in a corridor leading up to it?— Alternatively, the room can be sunken
so that the surface of the floor is level with the entrance, and no ramp is required at all. This
approach requires significantly more depth for the floor, but saves the need to dedicate floor
space for a ramp.



Each of these decisions can occupy or free up considerable floor space, affecting the size of your server
environment.

Associated Data Center Support Rooms

When allocating building space for your Data Center, set aside an additional area for several support
rooms. Some of these rooms are simply convenience spaces that make tasks easier for Data Center
users, while others are integral to the proper operation of a server environment. Most of the
infrastructure that runs in to your Data Center must first terminate into another area—power in to an
electrical room or structured cabling into a networking room, for example. Just as a chain is only as
strong as its weakest link, so is your Data Center only as productive and secure as its associated rooms.

These dedicated areas include the following:

 Electrical room
 Networking room
 Loading dock
 Build room
 Storage room
 Operations command center
 Backup room
 Media storage area
 Vendor service areas

Figure 3-5 represents a Data Center and its associated support rooms as they might be arranged within
a building. The room marked with the grid is the Data Center, while the areas with hatch marks are
occupied by miscellaneous building features such as elevators, stairwells, bathrooms, janitorial closets,
or conference rooms. Numbered spaces are explained in the paragraphs that follow.

Figure 3-5. A Data Center and its Support Rooms



Electrical Room

The main electrical equipment that supports your Data Center is located in a dedicated electrical room,
separate from the server environment to avoid electromagnetic interference. Switchgear for the
Data Center's primary and standby electrical systems is located in the electrical room. The Data
Center's backup batteries, part of its uninterruptible power supply (UPS), are traditionally placed in this
room as well. Area 7 in Figure 3-5 is the electrical room.

Networking Room

The networking room is the centralized area where all structured data cabling for the site—not just one
building—terminates. Other names for this area include data room, communications (comms) room, or
campus distributor. All things networking, including those in the Data Center and others (such as a
separate network for desktop computers), route through here. Aside from its obligatory structured
cabling, this room is traditionally equipped with the same level of standby power, cooling, and physical
security measures as the Data Center.

Many companies support their Data Center and networking room from the same standby power system.
Because devices in the Data Center cannot communicate with the outside world if the networking room
goes offline, the theory is that there is little point in having distributed standby power. Depending upon
what additional networks are routed through your network room and how much functionality your Data
Center servers still have when they are isolated from an external network, there may or may not be an
advantage to having distinct standby power systems for each room.

The networking room does not have to be located immediately near the Data Center to support it,
although doing so can save money during initial construction because cable runs are shorter. Area 3 in
Figure 3-5 is the networking room.

Loading Dock

As explained previously, having a loading dock in the building where your Data Center is located is very
useful. Incoming servers, networking devices, server cabinets, and other large supplies can be easily
received and transported to either a storage area or build room, as required. The dock should be able to
accommodate trucks of varying lengths and configurations and possess a leveler, enabling incoming
items to be rolled directly from the truck bed into the receiving area.

It is convenient if the loading dock is located close to whatever rooms are designated to receive
incoming items for the Data Center, although this is not mandatory. Do not incorporate a dock into the
Data Center space itself, or have your server environment open directly into the dock. This receiving
area is likely to contain a lot of dust and dirt because of all of the incoming cardboard boxes, wooden
pallets, and other shipping materials. You don't want contaminants from this area tracked or blown in to
the Data Center. Loading dock rollup doors, through which equipment is brought in, also have a way of
being left open for significant periods of time. If a loading dock is part of the Data Center, that creates a
huge opportunity for unauthorized personnel to gain access into the room.

Area 6 in Figure 3-5 is the loading dock.

Build Room

The build room, alternatively called a fitup room, staging area, or burn-in room, is a dedicated area for
system administrators and network engineers to unpack, set up, and pre-configure equipment that is
ultimately bound for the Data Center.

At minimum, this space is merely an empty room where machines sit temporarily until ready for
installation. Having users open their servers and networking devices in this area keeps boxes, pallets,
dirt, paperwork, and other miscellaneous trash out of the Data Center.

More sophisticated build rooms are equipped with their own power, data cabling, cooling, and fire
suppression infrastructure. Some build rooms are essentially mini Data Centers, right down to having
their own raised floor and standby power.

The build room should be located in close proximity to the Data Center and the loading dock, so
incoming equipment can be conveniently transported to and from the room. The amount of floor space
required for the build room depends upon how many devices it is expected to house and for how long.
The room should be at least large enough to accept several large servers arriving on a pallet jack. Area 5
in Figure 3-5 is the build room.

If possible, try to match the build room's ambient temperature to that of the Data Center. This helps
acclimate incoming equipment to the server environment, especially if the two rooms are near one
another. If you have ever brought a fish home from the pet store, you are familiar with this process.
After transporting the fish home in a bag of water, it is recommended that you put the entire bag into
your aquarium for a while rather than dump the fish in immediately. This protects the fish from being
shocked by any temperature difference between the water in the bag and that in the aquarium. Due to
the high dollar value of the equipment that resides in the build room—a single server can cost tens of
thousands of dollars—provide the build room with the same physical security measures as the Data
Center. (Specific recommendations about what physical safeguards to place on your server environment
are provided in Chapter 13, "Safeguarding the Servers.")

Storage Room

While the build room can accept incoming items on a short-term basis, designate another space to store
Data Center–related materials for longer periods of time. There are many items that can and do
accumulate in and around a server environment. It is best to store the spare items in a separate area
rather than allow them to take up valuable Data Center floor space. The stored materials can include:

 Decommissioned servers, waiting to be turned in for credit to the manufacturer when newer
systems are purchased during an upgrade cycle.
 Custom shipping crates for servers still under evaluation.
 Excess consumables, such as cabinet shelves or filters for the Data Center air handlers.

The size of the storage room is driven by the size of the Data Center—a large server environment is
going to accept more incoming equipment, go through more consumable items, and decommission
more servers than a small server environment and therefore needs more storage space. A good rule of
thumb is to size the storage room at about 15 percent of the Data Center's floor space. As with other sizing
guidelines in this chapter, fine-tune the footprint of your storage room based on the actual physical factors
of your company site.
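
A minimal sketch of that rule of thumb in Python, assuming the 15 percent ratio is only a starting point to be tuned to your site:

def storage_room_sqft(datacenter_sqft, ratio=0.15):
    """Suggested storage room area as a fraction of Data Center floor space."""
    return datacenter_sqft * ratio

print(storage_room_sqft(2270))   # 340.5 sq ft for the larger example room in this chapter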

Equip the Data Center storage room with the same physical security measures as the Data Center and
build room. Although your company undoubtedly has other storage needs, stay away from using a
common storage area and co-mingling Data Center–related materials with other items. Even if you have
an excellent inventory control system, materials in a common storage area are more likely to be
borrowed or scavenged and therefore unavailable when needed for the Data Center.

Area 4 in Figure 3-5 is the storage room. Note that the loading dock, storage room, and build room are
situated so that incoming equipment can be conveniently received (Area 6), moved in to storage for a
time if needed (Area 4), moved in to the build room to be unboxed (Area 5), and then transported to the
Data Center.

Operations Command Center

The operations command center, alternatively known as a call center or control room, is a workspace
where employees remotely monitor Data Center servers. If a problem arises with a device, command
center staff contacts the appropriate employee or vendor resources to address the issue, and helps
coordinate remediation efforts until the problem is resolved. Some companies have a separate network
operations center to monitor and address networking devices, while others consolidate them into a
single space.

Command center facilities are typically equipped with large, wall-mounted display screens, multiple
telephones, and computer console furniture with recessed display screens at each seating location. This
configuration enables employees to conveniently monitor multiple servers at one time.

Because command center tasks all involve monitoring devices that are in another location, this room
does not have to be near the Data Center. In fact, it is preferable if this facility is located a significant
distance away for disaster recovery purposes. Some companies choose to outsource their call center
functions. When outsourcing, no facility needs to be constructed at all.

Area 1 in Figure 3-5 is the operations command center. Although the room doesn't have to be adjacent
to your server environment, its relative location to the Data Center and lobby in this arrangement
provides an interesting opportunity. If you equip the command center with physical access controls and
provide a good line of sight from that room in to the Data Center, you can use the command center as a
viewing area for tours. This enables you to show your server environment without bringing people in to
the space.



Backup Room

A backup room is a workspace for support personnel who perform and monitor backups for the servers
in the Data Center. Providing a backup room enables the appropriate employees and vendors to
perform backup functions without having to be stationed full time in the server environment. As with
any operational practice that reduces how many people are in the Data Center and for how long, having
backup personnel do their work outside of the Data Center reduces the risk of accidental downtime.
Providing a backup room may also help justify a lower occupancy rating for your Data Center. As
mentioned in Chapter 2, a lower occupancy rating can reduce how much external air must be cycled in to
the Data Center, thereby decreasing the Data Center's exposure to outside contaminants.

A backup room is normally equipped with the same level of power and connectivity as standard desktop
locations. Although a backup room has its benefits, consider it a convenience rather than a mandatory
space associated with a Data Center. If limited space is available in a building for Data Center-related
functions, this is one of the first rooms to sacrifice.

The backup room can be located anywhere, but because workers have to enter the server environment
to perform certain tasks, it is most convenient if located a short walk from the Data Center. Area 2 in
Figure 3-5 is the backup room.

Media Storage Area

A media storage area is for the storage of magnetic, optical, or whatever other media is employed to
regularly back up content from the servers in your Data Center. Much like the build room, establishing a
separate area for these backup materials keeps dirt, debris, and unnecessary equipment weight out of
the Data Center.

Most disaster recovery strategies recommend storing backup media several miles (kilometers) away
from your server environment, so that a single catastrophic event doesn't destroy both your original
data and backup copies. Not all businesses choose to do that, however, due to the associated costs. If
your company maintains its backup media on site, the materials are likely stored in fireproof safes,
which can weigh thousands of pounds. A media storage area can accommodate these safes. Your Data
Center must already support increasingly heavy servers, and there is no reason to add unnecessarily to
the room's weight-load. Even if your company does store its backup media off site and has no heavy
safes, it is still useful to have a media storage area. Backup tapes have to be collected and sorted before
they are transported off site, and the media storage room enables the work to be performed outside of
the Data Center. Incoming tapes can also be stored in the media storage room and then unboxed when
needed.

It is not necessary for a media storage room to be placed in the immediate vicinity of your Data
Center. If you expect the media storage room to house media safes, locate it on the first floor to better
accommodate their excessive weight. Area 2 in Figure 3-5 also serves as the media storage room.



Vendor Service Areas

If vendors do a significant amount of work in your Data Center, you might want to dedicate areas for
them to work out of or store materials in. This can be appropriate if vendor representatives, rather than
your company's own system administrators, install and perform maintenance upon many of your Data
Center servers. A provided vendor service area gives these non-employees a place to work other than in
your most sensitive environment, the Data Center. If you have people from competing vendor
companies working at your site, consider creating more than one of these areas. While it is unlikely that
one vendor might try to sabotage a competitor, establishing separate work areas can remove the
opportunity altogether.

No vendor storage area is shown in Figure 3-5.

Defining Spaces for Physical Elements of Your Data Center

Laying out all of the physical elements of a Data Center is like working on a complicated, three-
dimensional jigsaw puzzle. Mechanical equipment, mandatory clearances, walkways, server rows, and
miscellaneous obstacles are all pieces that must be interconnected properly for the server environment
to function efficiently. Force one piece into the wrong place and others won't fit well, if at all. Data
Center layouts are trickier than the hardest puzzle, though. There is no number on the side of the box
saying how many pieces your server environment has or what it must look like when finished. You are
just left to fit as many servers, networking devices, and infrastructure elements into the room as you
can.

Also unlike the puzzle, there are several ways to arrange Data Center items:

 Do you place air handlers in the middle of the room or against a wall?
 What about power distribution units?
 How wide do you make the areas surrounding each server row?
 Which direction should the rows face?
 Do you orient all of them the same way, or alternate their direction?

You might think having multiple options makes the task of designing the room easier. In one sense,
having several options does make the task easier, because more than one solution is possible. However,
some layouts maximize floor space and coordinate Data Center infrastructure better than others, and it
takes an experienced eye to know the difference between a mediocre solution and a great one.

To best lay out your Data Center, define the amount of floor space that each item must occupy in the
room and arrange each strategically. Whenever possible, overlap clearance areas so that they do
double-duty in the room. For example, if a power distribution unit requires a buffer area to protect
surrounding equipment from electromagnetic interference and an air handler needs a clearance area to
swing open an access panel on the side, place these two items in mutual proximity. The buffer for one
can serve as the clearance area for the other. This conserves space in the Data Center and enables it to
be used for other purposes.



For smaller items that can conceivably be tucked anywhere within the server environment, consider
how frequently and where in the room the items are most often needed. Storage cylinders for the Data
Center's fire suppression system, for example, need to be accessed only occasionally for maintenance,
so it makes sense to put the cylinders in a remote corner rather than the middle of the room. In
contrast, having access to a telephone while working on a server is helpful, so phones should be placed
within easy reach of the room's server rows.

Mechanical Equipment

The largest individual objects in a Data Center are typically its major infrastructure components:

 Power distribution units that provide electrical power
 Air handlers that regulate cooling
 Fire suppressant containers

Because this mechanical equipment is essential for a server environment and can take up large chunks
of floor space, place it on your Data Center map first.

Power Distribution Units

Electrical equipment of varying shapes and sizes is employed to provide power to your Data Center.
Typically, power feeds in to your building from a utility source, where the power is conditioned and then
routed into your Data Center. Server cabinet locations are provided power by way of individual electrical
conduits, which run back to banks of circuit breakers. The breakers are within floor-standing power
distribution units, known as PDUs, or distributed circuit panel boards that are essentially industrial
versions of the type of circuit breaker panels found in your home.

For purposes of laying out the Data Center, assume that PDUs will be used and set aside space for them.
You can reclaim that floor space if you ultimately don't use PDUs. Power distribution units vary in size by
model, based upon how many circuit breakers they contain. A typical size used in server environments is
about 7 feet wide and 3 feet deep (2.1 meters wide and 91.4 centimeters deep). Circuit panel boards
also vary in size, and can be either wall-mounted or free-standing units on the Data Center floor. If you
opt for free-standing circuit panel boards, choose a model that is less than 24 inches (61 centimeters)
wide. This lets you place the free-standing unit within a single floor tile location. The height of the panel
board varies, again depending upon how many circuits it holds, but the depth is typically no more than
about 8 inches (20.3 centimeters).

When placing PDUs, you must balance two factors. First, the closer a unit is located to the server cabinet
locations it feeds, obviously the shorter its electrical conduits need to be. As with most Data Center
infrastructure, shorter conduits are easier to route neatly and less expensive than longer ones. Second,
PDUs generate electromagnetic interference and, even with shielding, shouldn't be placed within close
proximity to servers or networking devices. While it is possible to place PDUs in the middle of the Data
Center floor, locating them along a Data Center wall has the advantage of reducing how much floor
space must be provided as a buffer around the unit. With the back edge of the PDU up against a wall, a
buffer area need only be provided on three sides rather than four.



Circuit panel boards serve the same function as fully loaded PDUs but hold fewer circuits. With less
electrical capacity running through them, they are also much less a source of electromagnetic
interference. This enables you to place the panel boards much closer to servers and use shorter
electrical conduits. If your Data Center is short on floor space, consider using wall-mounted panel boards
because they are located off the floor altogether.

Air Handlers

Air handlers, the large cooling units that regulate temperatures in the Data Center, are typically installed
along the walls at regular intervals to provide even cooling throughout the server environment.
Although they can provide cooling anywhere in the room, it is best to place the air handlers
perpendicular to your server rows. The structured data cabling and electrical conduits that are installed
under the rows may inhibit airflow if the handlers are placed parallel to them.

Figure 4-4 shows potential locations for air handlers within a Data Center.

Figure 4-4. Air Handler Placement Options

Air handler A is in the middle of the Data Center floor. Placing the unit here occupies floor space that
might otherwise hold server cabinets.

Air handler B is against a Data Center wall. Although the placement is an improvement over the
placement of Air handler A, the unit is parallel to the room's server rows and will therefore be less
efficient at cooling.

Air handler C shows the preferred placement—against a wall, perpendicular to server rows.



Air handlers' physical dimensions vary from model to model, depending upon capacity. Some of the
largest units traditionally used in a Data Center are 10 feet wide and just under 4 feet deep (about 3
meters wide and just under 1.2 meters deep).

An alternative design involves building secure corridors on either side of the Data Center, placing the air
handlers in them, and providing pass-through areas under the raised floor and overhead for airflow.
enables maintenance workers to have access to the equipment without needing to enter the server
environment. Such a configuration is uncommon, though, probably due to the extra building space that
is needed for the corridors. The air handlers have the same footprint no matter where they are placed,
but the corridor requires additional space for someone to walk around, which is already included in the
Data Center through its aisles.

Fire Suppression Tanks

If your design includes a fire suppression system that uses stored suppressant, also set aside space for the
cylinders containing the suppressant that is to be dispersed into the Data Center in the event of a fire. Their size, and therefore the area needed to
house the cylinders, varies based upon how much and what type of suppressant they contain. The larger
the Data Center space the fire suppression system must cover, the larger the cylinders are likely to be.

Ideally you want to place these tanks in a lockable closet outside of but immediately adjacent to the
server environment. This keeps the cylinders protected and enables them to be serviced without
maintenance workers having to enter the Data Center. If such space outside the room isn't available,
create it inside. Although it is possible to install them in an empty corner of the Data Center, a lockable
closet is again preferred. This prevents the storage tanks from being intentionally tampered with or
accidentally disturbed.

A Data Center can contain fire suppression cylinders under the raised floor, but this is less desirable. The
cylinders are more exposed to damage and can restrict airflow.

Buffer Zones

When laying out Data Center objects, don't forget to provide necessary clearance areas. Power
distribution units, air handlers, and storage closets all require enough space for doors and access panels
to swing open. Building codes in many areas prohibit Data Center doors from opening outward into a
main corridor, so clearances must be provided inside the room.

Because PDUs can be a source of electromagnetic interference, provide a clearance area of at least 4
feet (1.2 meters) around them. The units are shielded to block such emissions, but the shielding may
need to be removed during maintenance or when taking power readings.

Air handlers normally require a buffer of 36 to 42 inches (91.4 to 106.7 centimeters) between them and
the Data Center servers they cool. This cushion of space optimizes how cold air reaches cabinet
locations in the room, preventing air from passing too quickly by cabinet locations or short-cycling by
the air handler. Air handlers also require a buffer area of 8 to 10 feet (2.4 to 3 meters) to enable the
periodic replacement of their main shaft.
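
The clearance figures above translate into a surprisingly large reserved footprint. The sketch below estimates the floor area consumed by a single PDU and its buffer, using the typical dimensions cited in this chapter (a 7 foot by 3 foot unit with a 4 foot buffer); treat the defaults as planning assumptions rather than equipment specifications.

def pdu_reserved_area_sqft(width_ft=7.0, depth_ft=3.0, buffer_ft=4.0, against_wall=True):
    """Footprint of a PDU plus its electromagnetic-interference buffer, in square feet."""
    if against_wall:
        # Buffer on the front and both sides only; the back edge touches the wall.
        return (width_ft + 2 * buffer_ft) * (depth_ft + buffer_ft)
    # A free-standing unit needs the buffer on all four sides.
    return (width_ft + 2 * buffer_ft) * (depth_ft + 2 * buffer_ft)

print(pdu_reserved_area_sqft(against_wall=True))    # 105.0 sq ft against a wall
print(pdu_reserved_area_sqft(against_wall=False))   # 165.0 sq ft in the middle of the room

The difference between the two results is the floor space recovered simply by backing the unit against a wall, which is the same point made earlier in the Power Distribution Units discussion.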



Aisles

Perhaps surprisingly, the preponderance of floor space in most Data Centers isn't occupied by
computing equipment, cabinets, or infrastructure components. It is made up of empty space that
surrounds these items. Although devoid of costly equipment, aisles are a key part of your Data Center.
Don't overlook the aisles when laying out the room and don't skimp on them when allocating floor
space. Walkways that appear adequately sized on a map can often seem much smaller once you are in
the constructed room. When designed properly, aisles enable people and equipment to move or be
moved easily through your server environment and promote good air circulation. When planned poorly,
these thoroughfares become the first trouble spots as a Data Center fills with servers.

So, how large do you make your Data Center aisles? Building codes in many regions require minimum
walkways of 36 or 42 inches (91.4 or 106.7 centimeters). This is adequate for one person (internal
doorways in private homes are smaller) but fairly narrow for maneuvering a pallet jack or oversized
equipment through, especially because many aisles are between server rows where devices can
protrude from their cabinets or cables can dangle and present a snagging hazard. If possible, set aside 4
feet (1.2 meters) for aisles between server rows and 5 feet (1.5 meters) or more for any major
thoroughfares where you expect frequent equipment and people traffic.

If you're used to packing as much equipment in to a Data Center footprint as possible, large aisles might
seem like a waste of space. Don't underestimate their value. A server environment with insufficient aisle
space is more likely to have difficulty regulating temperature and hosting large equipment. And while it
is relatively simple to add power or cabling to an operational Data Center that was not made robust
enough in its original design, it is practically impossible to carve out additional space between server
rows without incurring downtime. Even if your Data Center has to be a few hundred square feet (few
dozen square meters) larger to accommodate larger aisles between rows, that does not add much to the
overall cost of the room. It is the labor and materials cost associated with the infrastructure that are
expensive, whether that infrastructure is placed in a smaller or larger room.

Figure 4-5 illustrates the initial stage of a Data Center layout, in which a floor grid is drawn and major
mechanical equipment, clearances, and aisles are placed strategically along the walls. Aisles and buffer
zones are hatched, and areas that serve dual purposes are cross-hatched.



Figure 4-5. Mechanical Equipment, Buffer Areas and Aisles

Note that, as shown in Figure 4-5, overlapping clearance spaces and aisles onto one another conserves
Data Center floor space. Placing the PDUs near walls additionally reduces how much area must be
reserved for their buffer space.

Equipment Rows

The final space to lay out is, ironically, the one most people probably think of first when discussing a
Data Center—the equipment rows. These rows are where your company's servers and networking
devices are installed and to which all of the room's other infrastructure systems ultimately connect.
Electrical and data cabling run to these rows, air handlers blow cooled air at them, and the fire
suppression system provides coverage for them. The equipment that goes there is what the room is ultimately
all about, so plan the space carefully.

Form Versus Function

A key influence on the layout of your Data Center is how you opt to physically arrange your servers, such
as:

 Do you cluster them by task so that devices performing similar functions are together?
 Do you group them according to your company's internal organization so that machines
associated with a given department are together?
 Do you organize them by type so identical models are together, creating so-called server farms?

All are valid approaches.



If you group your servers by function—either of the first two methods—or even if you do not
particularly organize your Data Center devices at all, you generally end up with a heterogeneous mix of
gear in your rows. This tends to even out the need for infrastructure over an area—some servers in a
row might need a lot of connectivity, with others requiring very little. You therefore want to lay out your
rows consistently across the Data Center, so that any cabinet location in any server row can
accommodate incoming equipment. This may require the rows to be somewhat deeper to handle a
variety of server footprints but makes the design of your Data Center uniform and straightforward.

If you choose to bunch similar servers together, be aware that this creates uneven demand for both
infrastructure and physical space in the room. For example, if Row 1 consists entirely of large servers, it
requires greater depth than Row 2 that contains only small servers. Row 2 may in turn need many more
cabling ports, because it is hosting so many more devices in the same amount of floor space. This goes
against the design principle of making your Data Center infrastructure consistent, so any server can be
installed in any room location. If you plan to strictly organize incoming servers by type, you can
customize sections of the Data Center—laying out rows to differing depths and even scaling
infrastructure. This can maximize the use of floor space, structured cabling, and electrical circuits, but
makes the room inflexible in the long run. You may eventually find yourself having to completely retrofit
any specialized server rows as technology changes and new machines arrive with different demands.

Setting Row Dimensions

Knowing that your Data Center rows might have to accommodate equipment of various sizes and
shapes, how large should you make them? The answer depends. In most cases, the majority of servers
and networking devices in your Data Center are going to be installed into server cabinets that are one of
a few dimensions common to the server industry—generally about 24 inches wide and anywhere from
30 to 48 inches deep (about 61 centimeters wide and 76 to 122 centimeters deep). Use these cabinet
measurements as the basic building blocks for your rows, and then be sure that those devices can fit
easily within your Data Center. To accommodate the deepest servers, set your row depth at 48 inches
(122 centimeters).

For the width of your rows, decide how many cabinets you want each row to house and set aside the
appropriate amount of space. Many Data Center infrastructure components—fiber housings, copper
patch panels, multimedia faceplates—are grouped into multiples of 12. Clustering 12 or 24 server
cabinets per row therefore leads to a matching number of infrastructure components. This avoids half-
filled patch panels or fiber housings wasting valuable cabinet space that could otherwise hold additional
servers.

Depending upon whether you want your Data Center's structured cabling to connect directly to each
server cabinet location or to be distributed through a substation, you may need to include one or more
cabinets in the server row to act as a networking substation.

Networking devices for a given server row are installed in a substation, and from here Data Center users
can plug patch cords into a patching field to connect to individual server cabinet locations. If you choose
not to incorporate them into your design, you can reduce the width of the row accordingly.
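
To see how these building blocks add up, the sketch below estimates the footprint of one server row plus one adjacent aisle, using cabinets roughly 24 inches wide, the 48 inch row depth suggested above, an optional extra cabinet position for a networking substation, and the 4 foot aisle recommended earlier for space between rows; all of the defaults are planning assumptions to be adjusted for your own equipment.

def row_footprint_sqft(cabinets_per_row=12, cabinet_width_in=24, row_depth_in=48,
                       substation_cabinets=1, aisle_width_in=48):
    """Floor area of one server row plus one adjacent aisle, in square feet."""
    row_width_in = (cabinets_per_row + substation_cabinets) * cabinet_width_in
    pitch_depth_in = row_depth_in + aisle_width_in   # row depth plus one aisle
    return (row_width_in * pitch_depth_in) / 144.0   # square inches to square feet

print(row_footprint_sqft())                      # 208.0 sq ft for a 12-cabinet row plus substation
print(row_footprint_sqft(cabinets_per_row=24))   # 400.0 sq ft for a 24-cabinet row plus substation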



Networking Rows

Not all of the cabinet locations within your Data Center are for servers. Some house networking
equipment that enables your servers to communicate with one another. While it is feasible to distribute
networking devices through your Data Center, it is more common to cluster the major devices together
in their own row and then have servers throughout the room connect to them.

At this point, just be aware that you need to include a networking row in the Data Center layout. Its
depth and surrounding aisles are the same as for server rows. Because the networking row houses less
equipment, it doesn't need to be as wide. If you have the available floor space, however, it is not a bad
idea to match its width to that of the server rows. You can use any spare cabinet locations in the
networking row to house those rare devices that do not fit in to the rest of your Data Center's
organizational scheme, such as IP telephony switches.

Figure 4-6 illustrates the second stage of a Data Center layout, in which server and networking rows are
positioned in the room. The H-like symbols at the end of each row are circuit panel boxes placed back to
back.

Figure 4-6. Data Center Networking and Server Rows

Server cabinet locations are placed in parallel rows in Figure 4-6. The networking row is perpendicular to
them in this design, but could alternatively be placed parallel to the server rows.



Orienting Rows

The final layout detail for your server rows is how to orient them. You want all devices within a row to
face the same direction for simplicity and consistent organization. Orientation of the devices affects
where users in the Data Center work on equipment as well as airflow patterns in the room.

One popular strategy is to alternate the direction of server rows, so that the backs of servers in one
row face the backs of servers in an adjacent row. This locates all of the patch cords connecting to those
devices in the same aisle while keeping the aisles on either side free of them. Because many servers and
networking devices vent their exhaust out of their backside, this configuration can create distinct hot
and cold areas in your Data Center. While this may sound like a problem, the practice of creating hot
and cold aisles is a popular tactic for regulating temperature in a server environment. If you design your
Data Center's cooling infrastructure to address such a configuration up front, there are advantages to
this layout. In fact, this server row arrangement is the only one endorsed by the Telecommunications
Industry Association for Data Center design.

A second approach to orienting server rows is to have all of them face a single direction, like books in a
bookcase. This creates fewer concentrated hot spots than the alternating approach, but the exhaust of
one row vents toward the intake of the row behind it, which makes the need for abundant aisle space in
between crucial. It also cuts in half how many patch cords and power cables are located in any given
aisle—the overall amount of cabling obviously remains the same, but it is distributed more evenly. This
additional physical separation, although minor, can help in the event of a mishap that potentially
damages cabling in an aisle. The biggest asset of this design, though, is a purely intangible one. Having
all Data Center server rows and equipment face the same direction is simpler and therefore easier to
navigate for many people. Think of a gasoline (petrol) station in which the fuel nozzles are all on one
side of the gasoline pumps versus a station where the nozzles are in different locations. You can get
gasoline at either station, but at the second station there may be some guesswork or hesitation when
you try to orient your car so that its gas cap is closest to the pump nozzle.
