ANSI/TIA-942-B-2017
Approved: July 12, 2017
TIA Standard — Telecommunications Infrastructure Standard for Data Centers

4.2 Relationship of data center spaces to other building spaces
Figure 2 illustrates the major spaces of a typical data center and how they relate to each other and to the spaces outside of the data center. See clause 6 for information concerning the telecommunications spaces within the data center. This Standard addresses telecommunications infrastructure for the data center spaces, which is the computer room and its associated support spaces.

[Figure 2: Relationship of spaces in a data center — shows the building site, the building envelope, general office space, telecommunications and equipment rooms serving spaces outside the data center, and the data center spaces (computer room, entrance room(s), operations center, electrical and mechanical rooms, storage rooms, and loading docks).]

5 DATA CENTER CABLING SYSTEM INFRASTRUCTURE
This Standard establishes a structure for the data center cabling system based on the generic cabling system structure in ANSI/TIA-568.0-D.

[Figure 3: Functional elements of generic cabling topology — legend: DA = Distributor A; DB = Distributor B; DC = Distributor C; EO = equipment outlet; CP = optional consolidation point; Cabling Subsystem 1, 2, and 3 cables; optional tie cabling. NOTE — All elements shown represent cables and connecting hardware, not spaces or pathways.]

Figure 3 provides a representation of the functional elements that comprise a generic cabling system. It depicts the relationships between the elements and how they may be configured to create a total system. The functional elements are "equipment outlets", "distributors", and "cabling subsystems", which together comprise a generic telecommunications cabling system.

[Figure 4: Examples of interconnections and cross-connections for Distributor A — shows Distributor A configured as an interconnection, as an equipment/splitter connection, and as a cross-connection between Cabling Subsystem 1 and Cabling Subsystem 2 or Cabling Subsystem 3 within a distributor room.]

Figure 4 shows examples of interconnections and cross-connections for Distributor A. Similar configurations may be present for Distributor B and Distributor C.

[Figure 5: Data center cabling topology — shows access provider or campus cabling entering the entrance rooms, backbone cabling among the main, intermediate, and horizontal distribution areas, horizontal cabling to zone and equipment distribution areas and to spaces outside the computer room, and the telecommunications rooms. Legend includes: main cross-connect, intermediate cross-connect, horizontal cross-connect, consolidation point, equipment outlet. Note: a ZDA is not part of the category 8 channel topology.]

Figure 5 illustrates a representative model for the various functional elements that comprise a cabling system for a data center. It depicts the relationships between the elements and how they are configured to create the total system.
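To make the relationships among these functional elements concrete, the Python sketch below models equipment outlets, distributors, and cabling subsystems and prints one possible channel. It is not part of the Standard; the class names, the channel() helper, and the example identifiers (EO-1, CP-1) are assumptions made for illustration only.

```python
# Illustrative model of the generic cabling functional elements shown in Figure 3.
# Names and structure are assumptions for illustration, not defined by TIA-942-B.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class EquipmentOutlet:
    name: str
    consolidation_point: Optional[str] = None  # optional CP between the EO and Distributor A


@dataclass
class Distributor:
    name: str                                                     # e.g., "DA", "DB", "DC"
    outlets: List[EquipmentOutlet] = field(default_factory=list)  # served via Cabling Subsystem 1
    children: List["Distributor"] = field(default_factory=list)   # served via Subsystem 2 or 3


def channel(eo: EquipmentOutlet, *distributors: Distributor) -> str:
    """Describe one cabling channel: EO -> (optional CP) -> DA -> DB -> DC."""
    hops = [eo.name]
    if eo.consolidation_point:
        hops.append(eo.consolidation_point)
    hops.extend(d.name for d in distributors)
    return " -> ".join(hops)


# Example: an EO cabled through an optional consolidation point to DA, DB, and DC.
da = Distributor("DA")
db = Distributor("DB", children=[da])
dc = Distributor("DC", children=[db])
eo = EquipmentOutlet("EO-1", consolidation_point="CP-1")
da.outlets.append(eo)

print(channel(eo, da, db, dc))
```

Running the sketch prints "EO-1 -> CP-1 -> DA -> DB -> DC", mirroring a channel through Cabling Subsystems 1, 2, and 3; the interconnection and cross-connection variants of figure 4 would change only how Distributor A terminates the cabling, not this structure.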
The basic elements of the data center cabling system structure are the following:
a) Horizontal cabling (Cabling Subsystem 1; see clause 7.3)
b) Backbone cabling (Cabling Subsystem 2 and Cabling Subsystem 3; see clause 7.4)
c) Cross-connect in the entrance room or main distribution area (Distributor C, Distributor B, or Distributor A)
d) Main cross-connect (MC) in the main distribution area (Distributor C, or could also be Distributor B or Distributor A)
e) Optional intermediate cross-connect (IC) in the intermediate distribution area (Distributor B)
f) Horizontal cross-connect (HC) in the telecommunications room, horizontal distribution area, or main distribution area (Distributor A, or could also be Distributor B or Distributor C)
g) Consolidation point in the zone distribution area (optional)
h) Equipment outlet (EO) located in the equipment distribution area or zone distribution area

6 DATA CENTER TELECOMMUNICATIONS SPACES AND RELATED TOPOLOGIES
6.1 General
The data center requires spaces dedicated to supporting the telecommunications infrastructure. Telecommunications spaces shall be dedicated to supporting telecommunications cabling and equipment. Typical spaces found within a data center generally include the entrance room, main distribution area (MDA), intermediate distribution area (IDA), horizontal distribution area (HDA), zone distribution area (ZDA), and equipment distribution area (EDA). With the exception of the MDA and EDA, not all of these spaces may be present within the data center. These spaces shall be sized to accommodate the anticipated end-state size and demand forecast for all data center phases. These spaces should also be planned to provide for growth and transition to evolving technologies. They may or may not be walled off or otherwise separated from the other computer room spaces.

6.2 Data center structure
6.2.1 Major elements
The data center telecommunications spaces include the entrance room, main distribution area (MDA), intermediate distribution area (IDA), horizontal distribution area (HDA), zone distribution area (ZDA), and equipment distribution area (EDA).

The entrance room is the space used for the interface between the data center structured cabling system and inter-building cabling, for both access provider and customer-owned cabling. This space includes the access provider demarcation hardware and access provider equipment. The entrance room may be located outside the computer room if the data center is in a building that includes general-purpose offices or other types of spaces outside the data center. The entrance room may also be outside the computer room for improved security, as it avoids the need for access provider technicians to enter the computer room. Data centers may have multiple entrance rooms to provide additional redundancy or to avoid exceeding maximum cable lengths for access provider-provisioned circuits. The entrance room interfaces with the computer room through the MDA. In some cases, secondary entrance rooms may have cabling to IDAs or HDAs to avoid exceeding maximum cable lengths for access provider-provisioned circuits. The entrance room may be adjacent to or combined with the MDA.

The MDA includes the main cross-connect (MC), which is the central point of distribution for the data center structured cabling system and may include a horizontal cross-connect (HC) when equipment areas are served directly from the MDA.
This space is inside the computer room; it may be located in a dedicated room in a multi-tenant data center for security. Every data center shall have at least one MDA. The computer room core routers, core LAN switches, and core SAN switches are often located in the MDA, because this space is the hub of the cabling infrastructure for the data center. Access provider provisioning equipment is often located in the MDA rather than in the entrance room to avoid the need for a second entrance room due to circuit length restrictions. The MDA may serve one or more IDAs, HDAs, and EDAs within the data center, and one or more telecommunications rooms located outside the computer room space to support office spaces, the operations center, and other external support rooms.

The IDA may serve one or more HDAs and EDAs within the data center, and one or more telecommunications rooms located outside the computer room space.

The HDA is used to serve the EDAs when the HC is not located in the MDA or an IDA. Therefore, when used, the HDA may include the HC, which is the distributor for cabling to the EDAs. The HDA is inside the computer room, but may be located in a dedicated room within the computer room for additional security. The HDA typically includes LAN switches, SAN switches, and keyboard/video/mouse (KVM) switches for the end equipment located in the EDAs. A data center may have computer room spaces located on multiple floors, with each floor being serviced by its own HC. Some data centers may require no HDAs, as the entire computer room may be able to be supported from the MDA; however, a typical data center will have several HDAs.

The EDA is the space allocated for end equipment, including computer systems and telecommunications equipment (e.g., servers, mainframes, and storage arrays). These areas shall not serve the purposes of an entrance room, MDA, IDA, or HDA.

There may be an optional interconnection within a ZDA that is called a consolidation point (see figure 5). This consolidation point is between the horizontal cross-connect and the equipment outlet to facilitate moves, adds, and changes.

6.2.2 Basic data center topology
The basic data center includes a single entrance room, possibly one or more telecommunications rooms, one MDA, and several HDAs. Figure 6 illustrates the basic data center topology.

[Figure 6: Basic data center topology — access providers connect to the entrance room (carrier equipment and demarcation); backbone cabling runs from the entrance room to the MDA (routers, backbone LAN/SAN switches, PBX, M13 multiplexers), from the MDA to the telecommunications room serving work areas in offices, the operations center, and support rooms (office and operations center LAN switches), and from the MDA to the HDAs (LAN/SAN/KVM switches); horizontal cabling runs from the HDAs to the EDAs (racks/cabinets) in the computer room.]

6.2.3 Reduced data center topologies
Data center designers can consolidate the main cross-connect and horizontal cross-connect in a single MDA, possibly as small as a single cabinet or rack. The telecommunications room for cabling to the support areas and the entrance room may also be consolidated into the MDA in a reduced data center topology. The reduced data center topology is illustrated in figure 7.

[Figure 7: Example of a reduced data center topology — access providers connect directly to the consolidated MDA in the computer room (e.g., KVM switches, PBX, M13 multiplexers); horizontal cabling runs from the MDA to work areas in offices, the operations center, and support rooms, and to the EDAs (racks/cabinets).]
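As a way to reason about the basic and reduced topologies in 6.2.2 and 6.2.3, the sketch below represents each topology as a mapping from a space to the spaces it serves and checks two points made above: every data center has at least one MDA, and each EDA is served from an MDA, IDA, or HDA. It is illustrative only; the space names, dictionaries, and validation rules are assumptions, not normative requirements.

```python
# Illustrative sketch of the basic and reduced data center topologies (6.2.2, 6.2.3).
# Space names, the dictionaries, and the validation rules are assumptions for illustration.

# Each key is a space; its list holds the spaces it serves with backbone or horizontal cabling.
basic_topology = {
    "Entrance Room": ["MDA"],
    "MDA": ["TR", "HDA-1", "HDA-2"],   # backbone cabling
    "HDA-1": ["EDA-1", "EDA-2"],       # horizontal cabling
    "HDA-2": ["EDA-3"],
}

# Reduced topology: main and horizontal cross-connects consolidated in a single MDA.
reduced_topology = {
    "Entrance Room": ["MDA"],
    "MDA": ["EDA-1", "EDA-2"],
}


def space_type(name: str) -> str:
    """Map an instance name such as 'HDA-1' to its space type ('HDA')."""
    return name.split("-")[0]


def validate(topology: dict) -> None:
    spaces = set(topology) | {s for children in topology.values() for s in children}

    # Every data center shall have at least one MDA (see 6.2.1).
    assert any(space_type(s) == "MDA" for s in spaces), "at least one MDA is required"

    parent_of = {child: parent for parent, children in topology.items() for child in children}

    # Each EDA is served from a space that can hold the horizontal cross-connect.
    for space in spaces:
        if space_type(space) == "EDA":
            assert space_type(parent_of[space]) in {"MDA", "IDA", "HDA"}, space


validate(basic_topology)
validate(reduced_topology)
print("example topologies pass the sketch's checks")
```

The same mapping could be extended with "IDA-n" entries between the MDA and the HDAs to describe the distributed topology discussed in the next subclause.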
6.2.4 Distributed data center topologies
Large data centers, such as data centers located on multiple floors or in multiple rooms, may require intermediate cross-connects located in IDAs. Each room or floor may have one or more IDAs. Multiple telecommunications rooms may be required for data centers with large or widely separated office and support areas. In very large data centers, circuit length restrictions may require multiple entrance rooms. The data center topology with multiple entrance rooms and IDAs is shown in figure 8. The primary entrance room shall not have direct connections to IDAs and HDAs. Although cabling from the secondary entrance room directly to the IDAs and HDAs is not common practice or encouraged, it is allowed to meet certain circuit length limitations and redundancy needs.

[Figure 8: Example of a distributed data center topology with multiple entrance rooms — access providers connect to both the primary and secondary entrance rooms (carrier equipment and demarcation); backbone cabling runs from the entrance rooms to the MDA (routers, backbone LAN/SAN switches, PBX, M13 multiplexers), from the MDA to the telecommunications rooms serving work areas in offices, the operations center, and support rooms (office and operations center LAN switches), and from the MDA through IDAs (LAN/SAN switches) to HDAs (LAN/SAN/KVM switches); horizontal cabling runs from the HDAs to the EDAs in the computer room.]

6.2.5 Topologies for broadband coaxial cabling
See ANSI/TIA-568.4-D for broadband coaxial cabling system topologies that can be used within data centers.

6.3 Energy efficient design
6.3.1 General
Energy efficiency should be considered in the design of the data center. Clause 6.3.2 provides recommendations for design of telecommunications cabling, pathways, and spaces that can improve energy efficiency. Other methods involving other aspects of the data center design are described in other publications, including the following documents:
* ASHRAE, Best Practices for Datacom Facility Energy Efficiency, Second Edition (2009)
* ASHRAE, Design Considerations for Data and Communications Equipment Centers, Second Edition (2009)
* ASHRAE, Thermal Guidelines for Data Processing Environments, Fourth Edition (2015)
* European Commission, 2017 Best Practices for EU Code of Conduct on Data Centres, Version 8.1.0 (2017)
* European Commission, European Code of Conduct on Data Centre Energy Efficiency, Introductory guide for applications 2016, Version 3.1.2

6.3.2 Energy efficiency recommendations
6.3.2.1 General
By their nature, data centers consume large amounts of energy, most of which is converted to heat, requiring serious consideration of cooling efficiencies. There is no single thermal management architecture that is most energy efficient for all installations. Critical factors unique to the customer, application, and environment should be carefully evaluated in the start-up and operational analysis.

6.3.2.2 Telecommunications cabling
Overhead telecommunications cabling may improve cooling efficiency and is a best practice where ceiling heights permit, because it can substantially reduce airflow losses due to airflow obstruction and turbulence caused by under-floor cabling and cabling pathways.
See ANSI/TIA-569-D for additional guidance regarding overhead pathways (e.g., structural load).

If telecommunications cabling is installed in an under-floor space that is also used for cooling, under-floor air obstructions can be reduced by:
* using network and cabling designs (e.g., top-of-rack switching) that require less cabling;
* selecting cables with smaller diameters to minimize the volume of under-floor cabling;
* utilizing higher strand count optical fiber cables instead of several lower count optical fiber cables to minimize the volume of under-floor cabling;
* designing the cabling pathways to minimize adverse impact on under-floor airflow (e.g., routing cabling in hot aisles rather than cold aisles so as not to block airflow to ventilated tiles in cold aisles);
* designing the cabling layout such that the cabling routes are opposite to the direction of airflow, so that at the origin of airflow there is the minimal amount of cabling to impede flow (see figure 9 for examples); and
* properly sizing pathways and spaces to accommodate cables with minimal obstruction (e.g., shallower and wider trays; see the fill-ratio sketch at the end of this subclause).

[Figure 9: Examples of routing cables and air flow contention — shows a computer room with cabinets and computer room air conditioners, with cable paths routed opposite to the direction of airflow to minimize contention. Legend: cable path, air flow.]

Routing of telecommunications cabling within cabinets, racks, and other enclosure systems should not hamper the proper cooling of the equipment within the enclosures (e.g., avoid routing cabling in front of vents). Sufficient airflow as required by the equipment manufacturer shall be maintained. In all cases, change management procedures should be in place and should include the removal of abandoned cable in accordance with best practices or the AHJ. This assures that pathways remain neat, so as not to create a weight issue overhead or air dams in under-floor systems.
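One way to quantify the guidance above on cable diameters and pathway sizing is a simple tray fill estimate: the fraction of a tray's cross-section occupied by cable. The sketch below is illustrative only; the tray dimensions, cable diameters, and counts are placeholder assumptions, and actual fill limits should be taken from the applicable cabling and pathway standards rather than from this example.

```python
# Illustrative under-floor cable tray fill estimate (see the 6.3.2.2 list above).
# All numbers are placeholder assumptions, not values from TIA-942-B.
import math


def cable_area_mm2(diameter_mm: float) -> float:
    """Cross-sectional area of a round cable in square millimetres."""
    return math.pi * (diameter_mm / 2.0) ** 2


def tray_fill_ratio(cable_diameters_mm, tray_width_mm: float, tray_depth_mm: float) -> float:
    """Total cable cross-section divided by the tray cross-section."""
    cable_area = sum(cable_area_mm2(d) for d in cable_diameters_mm)
    return cable_area / (tray_width_mm * tray_depth_mm)


# Compare 200 larger-diameter cables with 200 smaller-diameter cables in a
# wide, shallow 450 mm x 50 mm tray (dimensions are illustrative).
for diameter_mm in (6.5, 5.5):
    ratio = tray_fill_ratio([diameter_mm] * 200, tray_width_mm=450, tray_depth_mm=50)
    print(f"200 cables at {diameter_mm} mm: fill ratio = {ratio:.0%}")
```

Comparing the two diameters shows how smaller cables, higher-strand-count fiber, or wider and shallower trays reduce the under-floor volume that obstructs airflow.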
6.3.2.3 Telecommunications pathways
Telecommunications pathways should be placed in such a manner as to minimize disruption of airflow to and from equipment. For example, if placed under the access floor, they should not be placed under ventilated tiles or where they disrupt the flow of air into or out of air conditioning equipment. Consider computational fluid dynamics (CFD) models for large data centers to optimize the location of telecommunications pathways, air conditioning equipment, equipment enclosures, air returns, air vents, and ventilated tiles.

6.3.2.4 Telecommunications spaces
Consider use of enclosures or enclosure systems that improve cooling efficiency:
* cabinets with isolated air supply;
* cabinets with isolated air return;
* cabinets with in-cabinet cooling systems;
* hot-aisle containment or cold-aisle containment systems.

Routing of cabling and cable pathways should not compromise the efficiency of the enclosure or enclosure system. For example, cable openings in the enclosure or enclosure system should use brushes or grommets to minimize loss of air.

Commercially produced hardware and accessories specifically intended to prevent cold and hot air from mixing should be installed to improve energy efficiency. This may include:
* blanking panels in unused rack unit positions;
* blanks in open port or module locations in patch panels;
* angled covers above and below angled patch panels or a group of continuous angled patch panels;
* panels, seals, and grommets to prevent air bypass between the cabinet rails and the sides of the cabinets;
* seals between the floor and the bottom of the cabinets (front, side, and rear, depending on the containment and cabinet design);
* seals or side panels between cabinets with different front rail depths.

Equipment with different environmental requirements should be segregated into different spaces to allow equipment with less restrictive environmental requirements to be operated in a more energy-efficient environment. Consider allocating and designing separate spaces dedicated to high-density equipment so that the entire data center is not powered and cooled for the equipment with the greatest energy demands.

It is recommended to use a cooling system that is able to vary the volume of air as needed. Equipment should match the airflow design for the enclosures and computer room space in which it is placed. This generally means that equipment should be mounted in cabinets/racks with air intakes facing the cold aisle and air exhausts facing the hot aisle. Equipment with non-standard airflow may require specially designed enclosures or air baffles to avoid disruption of proper airflow.

Cabinets and racks should be provisioned with power strips that permit monitoring of power and cooling levels to ensure that enclosures do not exceed designed power and cooling levels.

Use energy efficient lighting and lighting schemes (see 6.4.2.5). Avoid exterior windows and skylights in computer rooms and other environmentally controlled telecommunications spaces.

Consider operation and design practices that minimize the need to cool unneeded equipment and spaces:
* Build the computer room in phases or zones, only building and occupying spaces as needed.
* In occupied data centers, institute a process to identify and remove equipment that is no longer needed, or to identify and consolidate (e.g., virtualize) underutilized equipment.
* Install monitoring equipment and perform periodic reporting of total data center energy use and energy use of individual systems such as power distribution units, air conditioning units, and IT equipment cabinets/racks (see the reporting sketch after this list).
* Consider air baffles and temporary room dividers that can be moved and adjusted as needed. Any temporary room dividers shall not create a code violation.
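To illustrate the monitoring and reporting recommendation referenced in the list above, the sketch below sums metered power by system type and flags any cabinet drawing more than its designed power level. The readings, thresholds, and identifiers are assumed example data, not values or requirements from the Standard.

```python
# Illustrative energy-use reporting sketch for the monitoring recommendation in 6.3.2.4.
# Readings, designed power levels, and names are assumed example data.
from collections import defaultdict

# (system type, identifier, measured power in kW)
readings = [
    ("PDU", "PDU-1", 120.0),
    ("CRAC", "CRAC-1", 45.0),
    ("cabinet", "CAB-A01", 4.8),
    ("cabinet", "CAB-A02", 6.3),
]

# Designed per-cabinet power levels in kW (illustrative values).
designed_kw = {"CAB-A01": 5.0, "CAB-A02": 5.0}

# Periodic report: total power grouped by system type.
totals = defaultdict(float)
for system_type, _, kw in readings:
    totals[system_type] += kw
for system_type, kw in sorted(totals.items()):
    print(f"{system_type}: {kw:.1f} kW")

# Flag cabinets that exceed their designed power level.
for system_type, name, kw in readings:
    if system_type == "cabinet" and kw > designed_kw.get(name, float("inf")):
        print(f"WARNING: {name} draws {kw:.1f} kW but is designed for {designed_kw[name]:.1f} kW")
```

Grouping the readings by system type mirrors the recommendation to report both total data center energy use and the use of individual systems; the per-cabinet comparison reflects the earlier recommendation that enclosures not exceed their designed power and cooling levels.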
