
Dell EMC Elastic Cloud Storage (ECS)

D- and U-Series

Hardware Guide
302-003-477
09
Copyright © 2014-2018 Dell Inc. or its subsidiaries. All rights reserved.

Published April 2018

Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.

Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.

Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America 1-866-464-7381)
www.DellEMC.com



CONTENTS

Figures

Tables

Chapter 1  Hardware Components and Configurations
    ECS Appliance hardware components
        U-Series components
        D-Series components
        C-Series components
    U-Series Appliance (Gen2) configurations and upgrade paths
    U-Series Appliance (Gen1) configurations and upgrade paths
    D-Series Appliance configurations and upgrade paths
    C-Series Appliance (Gen2) configurations and upgrade paths
    C-Series Appliance (Gen1) configurations and upgrade paths
    Certified hardware in support of ECS 3.2

Chapter 2  Servers
    ECS Appliance servers
        Server front views
        Server rear view
    Rack and node host names

Chapter 3  Switches
    ECS Appliance switches
        Private switch: Cisco 3048 48-P
        Private switch: Arista 7010T-48
        Private switch: Arista 7048T-48
        Public switch: Arista 7050SX-64
        Public switch: Arista 7050S-52
        Public switch: Arista 7150S-24
        Public switch: Arista 7124SX

Chapter 4  Disk Drives
    Integrated disk drives
    Storage disk drives
    Disk array enclosures
        Pikes Peak (dense storage)
        Voyager DAE

Chapter 5  Third Party Rack Requirements
    Third-party rack requirements

Chapter 6  Power Cabling
    ECS power calculator
    U-Series single-phase AC power cabling
    U-Series three-phase AC power cabling
    D-Series single-phase AC power cabling
    D-Series three-phase AC power cabling
    C-Series single-phase AC power cabling
    C-Series 3-phase AC power cabling

Chapter 7  SAS Cabling
    U-Series SAS cabling
    D-Series SAS cabling

Chapter 8  Network Cabling
    Connecting ECS appliances in a single site
    Network cabling


FIGURES

1   U-Series minimum and maximum configurations
2   D-Series minimum and maximum configurations
3   C-Series minimum and maximum configurations
4   Phoenix-16 (Gen1) and Rinjin-16 (Gen2) server chassis front view
5   Phoenix-12 (Gen1) and Rinjin-12 (Gen2) server chassis front view
6   Server chassis rear view (all)
7   Rear ports on nodes (all)
8   Cisco 3048 ports (rear)
9   Cisco 3048 ports (front)
10  Arista 7010T-48 ports
11  Arista 7048T-48 ports
12  Arista 7050SX-64 ports
13  Arista 7050S-52 ports
14  Arista 7150S-24 ports
15  Arista 7124SX
16  C-Series (Gen1) integrated disks with node mappings
17  Pikes Peak chassis
18  Pikes Peak chassis with I/O module and power supplies removed, sleds extended
19  Enclosure LEDs from the front
20  Sled letter designations
21  Drive designations and sled LEDs
22  Disk drive in carrier
23  Empty drive carrier
24  I/O module separated from enclosure
25  SAS link LEDs
26  Power supply separated from I/O module
27  Power supply LEDs
28  Enclosure fan locations
29  U-Series disk layout for 10-disk configurations (Gen2 only)
30  U-Series disk layout for 15-disk configurations (Gen1, Gen2 full-rack only)
31  U-Series disk layout for 30-disk configurations (Gen1, Gen2)
32  U-Series disk layout for 45-disk configurations (Gen1, Gen2 full-rack)
33  U-Series disk layout for 60-disk configurations
34  LCC with LEDs
35  LCC location
36  Fan control module with LED
37  Location of fan modules
38  ICM LEDs
39  DAE power supply
40  U-Series single-phase AC power cabling for eight-node configurations
41  Cable legend for three-phase delta AC power diagram
42  Three-phase AC delta power cabling for eight-node configuration
43  Cable legend for three-phase WYE AC power diagram
44  Three-phase WYE AC power cabling for eight-node configuration
45  D-Series single-phase AC power cabling for eight-node configurations
46  Three-phase AC delta power cabling for eight-node configuration
47  Three-phase WYE AC power cabling for eight-node configuration
48  C-Series single-phase AC power cabling for eight-node configurations: Top
49  C-Series single-phase AC power cabling for eight-node configurations: Bottom
50  C-Series 3-phase AC power cabling for eight-node configurations: Top
51  C-Series 3-phase AC power cabling for eight-node configurations: Bottom
52  U-Series (Gen2) SAS cabling for eight-node configurations
53  U-Series (Gen2) SAS cabling
54  U-Series (Gen1) SAS cabling for eight-node configurations
55  D-Series SAS cabling for eight-node configurations
56  Linear or daisy-chain topology
57  Linear or daisy-chain split-brain
58  Ring topology
59  Star topology
60  Public switch cabling for U- and D-Series
61  U-Series and D-Series network cabling
62  Network cabling labels
63  Private switch cabling for U- and D-Series
64  C-Series public switch cabling for the lower segment from the rear
65  C-Series public switch cabling for the upper segment from the rear
66  C-Series private switch cabling for the lower segment from the rear
67  C-Series private switch cabling for the upper segment from the rear


TABLES

1   U-Series hardware components
2   D-Series hardware components
3   C-Series hardware components
4   U-Series (Gen2) configurations
5   U-Series (Gen2) disk upgrades
6   U-Series (Gen2) node upgrades
7   U-Series (Gen1) configurations
8   U-Series (Gen1) upgrades
9   D-Series configurations
10  D-Series upgrades
11  C-Series (Gen2) configurations
12  C-Series (Gen2) upgrades
13  C-Series (Gen1) configurations
14  C-Series (Gen1) upgrades
15  ECS Certified hardware
16  Server LEDs
17  Rack ID 1 to 50
18  Default node names
19  ECS Appliance switch summary
20  Cisco 3048 switch configuration detail
21  Arista 7010T-48 switch configuration detail
22  Arista 7048T-48 switch configuration detail
23  7050SX-64 switch port connections used on the top 10 GbE switch (hare)
24  7050SX-64 switch port connections used on the bottom 10 GbE switch (rabbit)
25  7050S-52 switch port connections used on the top 10 GbE switch (hare)
26  7050S-52 switch port connections used on the bottom 10 GbE switch (rabbit)
27  7150S switch port connections used on the top 10 GbE switch (hare)
28  7150S switch port connections used on the bottom 10 GbE switch (rabbit)
29  7124SX switch port connections used on the top 10 GbE switch (hare)
30  7124SX switch port connections used on the bottom 10 GbE switch (rabbit)
31  Storage disk drives
32  Enclosure LEDs
33  Sled and drive LEDs
34  SAS link LEDs
35  SAS link LEDs
36  DAE LCC status LED
37  Fan control module fan fault LED
38  ICM bus status LEDs
39  ICM 6 Gb/s port LEDs
40  DAE AC power supply/cooling module LEDs
41  Third-party rack requirements
42  U- and D-Series 10 GB public switch network cabling for all Arista models
43  U- and D-Series 10 GB public switch MLAG cabling for all Arista models
44  U- and D-Series 1 GB private switch network cabling
45  U- and D-Series 1 GB private switch management and interconnect cabling


CHAPTER 1
Hardware Components and Configurations

• ECS Appliance hardware components
• U-Series Appliance (Gen2) configurations and upgrade paths
• U-Series Appliance (Gen1) configurations and upgrade paths
• D-Series Appliance configurations and upgrade paths
• C-Series Appliance (Gen2) configurations and upgrade paths
• C-Series Appliance (Gen1) configurations and upgrade paths
• Certified hardware in support of ECS 3.2



ECS Appliance hardware components


Describes the hardware components that make up ECS Appliance hardware models.
ECS Appliance series
The ECS Appliance series includes:
• D-Series: A dense object storage solution with servers and separate disk array enclosures (DAEs).
• U-Series: A commodity object storage solution with servers and separate DAEs.
• C-Series: A dense compute and storage solution of servers with integrated disks.

Hardware generations
ECS appliances are characterized by hardware generation:
• U-Series Gen2 models featuring 12 TB disks became available in March 2018.
• The D-Series was introduced in October 2016 featuring 8 TB disks. D-Series models featuring 10 TB disks became available in March 2017.
• The original U-Series appliance (Gen1) was replaced in October 2015 with second generation hardware (Gen2).
• The original C-Series appliance (Gen1) was replaced in February 2016 with second generation hardware (Gen2).

Statements made about a series in this document apply to all generations except where noted.

U-Series components
The U-Series ECS Appliance includes the following hardware components.

Table 1 U-Series hardware components

40U rack: Titan D racks that include:
  • Single-phase PDUs with four power drops (two per side). The high availability (HA) configuration of four power drops is mandatory, and any deviation requires that an RPQ be submitted and approved.
  • Optional three-phase WYE or delta PDUs with two power drops (one per side)
  • Front and rear doors
  • Racking by Dell EMC manufacturing

Private switch: One 1 GbE switch

Public switch: Two 10 GbE switches

Nodes: Intel-based unstructured servers in four- and eight-node configurations. Each server chassis contains four nodes (blades). Gen2 also has the option of five- and six-node configurations.

Disk array enclosure (DAE): The U-Series DAE drawers hold up to 60 3.5-inch disk drives. Features include:
  • Gen1 hardware uses 6 TB disks; Gen2 hardware uses 8 TB and 12 TB disks
  • Two 4-lane 6 Gb/s SAS connectors
  • SAS bandwidth of 3500 MB/s
  • Drive service: hot swappable

Figure 1 U-Series minimum and maximum configurations

Note: For more robust data protection, a five-node configuration is the recommended minimum.

D-Series components
The D-Series ECS Appliance includes the following hardware components.

Table 2 D-Series hardware components

40U rack: Titan D racks that include:
  • Single-phase PDUs with six power drops (three per side). The high availability (HA) configuration of six power drops is mandatory, and any deviation requires that an RPQ be submitted and approved.
  • Optional three-phase WYE or delta PDUs with two power drops (one per side)
  • Front and rear doors
  • Racking by Dell EMC manufacturing

Private switch: One 1 GbE switch

Public switch: Two 10 GbE switches

Nodes: Intel-based unstructured servers in eight-node configurations. Each server chassis contains four nodes.

Disk array enclosure (DAE): The D-Series DAE drawers hold up to 98 3.5-inch disk drives. Features include:
  • Models featuring 8 TB disks and models featuring 10 TB disks
  • Two 4-lane 12 Gb/s SAS 3.0 connectors
  • SAS bandwidth of 5600 MB/s
  • Drive service: cold service

Service tray: 50-lb capacity service tray

Figure 2 D-Series minimum and maximum configurations

Note: These rack configurations are available with either 8 TB or 10 TB disks.

C-Series components
The C-Series ECS Appliance includes the following hardware components.

Table 3 C-Series hardware components

40U rack: Titan D Compute racks that include:
  • Two single-phase PDUs in a 2U configuration with two power drops. The high availability (HA) configuration of two power drops is mandatory, and any deviation requires that an RPQ be submitted and approved.
  • Optional: two three-phase WYE or delta PDUs in a 2U configuration with two power drops
  • Front and rear doors
  • Racking by Dell EMC manufacturing

Private switch: One or two 1 GbE switches. The second switch is required for configurations with more than six servers.

Public switch: Two or four 10 GbE switches. The third and fourth switches are required for configurations with more than six servers.

Nodes: Intel-based unstructured servers in 8- through 48-node configurations. Each server chassis contains four nodes (blades).

Disks: The C-Series has 12 3.5-inch disk drives integrated with each server. Gen1 hardware uses 6 TB disks. Gen2 hardware uses 8 TB disks.

Service tray: 50-lb capacity service tray

Figure 3 C-Series minimum and maximum configurations

U-Series Appliance (Gen2) configurations and upgrade paths

Describes the second generation U-Series ECS Appliance configurations and the upgrade paths between the configurations. The Gen2 hardware became generally available in October 2015.

U-Series configurations (Gen2)
The U-Series Appliance is a commodity object storage solution.


Table 4 U-Series (Gen2) configurations

Model number                   Nodes  DAEs  Disks in DAE  Capacity (8 TB disks)  Capacity (12 TB disks)  Switches
U400 (minimum configuration)   4      4     10            320 TB                 480 TB                  One private and two public
U400-E                         5      5     10            400 TB                 600 TB                  One private and two public
U480-E                         6      6     10            480 TB                 720 TB                  One private and two public
U400-T                         8      8     10            640 TB                 960 TB                  One private and two public
U2000                          8      8     30            1.92 PB                2.88 PB                 One private and two public
U2800                          8      8     45            2.88 PB                4.32 PB                 One private and two public
U4000 (maximum configuration)  8      8     60            3.84 PB                5.76 PB                 One private and two public
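The capacities in Table 4 are straightforward products of DAE count, disks per DAE, and disk size (decimal units, 1 PB = 1000 TB). A minimal sketch of that arithmetic, for illustration only (the function name is ours, not from any ECS tool):

```python
def raw_capacity_tb(daes: int, disks_per_dae: int, disk_tb: int) -> int:
    """Raw (pre-protection) capacity in decimal TB: DAEs x disks x disk size."""
    return daes * disks_per_dae * disk_tb

# Spot-check two rows of Table 4:
assert raw_capacity_tb(8, 60, 8) == 3840   # U4000 with 8 TB disks = 3.84 PB
assert raw_capacity_tb(4, 10, 12) == 480   # U400 with 12 TB disks = 480 TB
```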

Note: Five-node configurations are the smallest configurations that can tolerate a node failure and still maintain the EC protection scheme. A four-node configuration that suffers a node failure changes to a simple data-mirroring protection scheme. Five-node configurations are the recommended minimum configuration.

U-Series (Gen2) upgrade paths

U-Series Gen2 upgrades can be applied flexibly to eligible configurations. Multiple upgrades can be applied in one service call.

Upgrade rules for an appliance with all 8 TB or all 12 TB disks (the upgrade will not introduce mixed disk capacities in the rack); a sketch of these checks appears after the lists:
• The minimum number of disks in a DAE is 10.
• Disk upgrade kits are available in 5- or 10-disk increments.
• All DAEs in the appliance must have the same number of disks, in increments of 5 (10, 15, 20, and so on, up to 60), with NO empty slots between disks.
• Upgrades are flexible, meaning you can upgrade to any disk level even if that level does not correspond to a named model. For example, you can upgrade the original appliance to have 35 disks per DAE even though this configuration does not have an official label like the U2000 (30 disks per DAE) or the U2800 (45 disks per DAE).
• To upgrade a half-rack configuration to a full-rack configuration, order the upgrade kit consisting of one server chassis containing 4 nodes and 4 DAEs with 10, 20, 30, 45, or 60 disks. To achieve any configuration between those disk levels, add 5- or 10-disk upgrade kits in the required quantity to match the disk quantities per DAE in nodes 1-4.
• All empty drive slots must be filled with a disk filler.
• The best practice is to have only one storage pool in a VDC, unless you have more than one storage use case at the site. In a site with a single storage pool, each DAE in each rack must have the same number of disks.

Upgrade rules for systems with either four nodes/DAEs of 8 TB disks or four nodes/DAEs of 12 TB disks (the upgrade will introduce mixed disk capacities in the rack):
• The minimum number of disks in a DAE is 10.
• Disk upgrade kits are available in 5- or 10-disk increments.
• No mixing of disk capacities within a DAE.
• Each four-node/DAE group must have the same disk capacity and number of disks, in increments of 5 (10, 15, 20, and so on, up to 60), with NO empty slots between disks in a DAE. Example: nodes 1-4 with 30 6 TB disks in each DAE, and nodes 5-8 with 20 12 TB disks in each DAE.
• All empty drive slots must be filled with a disk filler.
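These population rules reduce to small arithmetic checks. A minimal illustrative sketch, assuming a list of per-DAE disk counts for the rack (not an ECS tool; it covers the uniform, single-capacity case only, not the mixed four-node/DAE groups):

```python
def valid_dae_layout(disks_per_dae: list[int]) -> bool:
    """U-Series Gen2 rules: every DAE holds 10-60 disks, in increments
    of 5, and all DAEs in the appliance hold the same count."""
    if not disks_per_dae:
        return False
    first = disks_per_dae[0]
    return all(d == first and 10 <= d <= 60 and d % 5 == 0
               for d in disks_per_dae)

print(valid_dae_layout([35, 35, 35, 35]))  # True: valid even without a model name
print(valid_dae_layout([30, 45, 30, 45]))  # False: DAE counts must match
```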

Table 5 U-Series (Gen2) disk upgrades

5-Disk Upgrade: Used to supplement other disk upgrade kits to make up a valid configuration.

10-Disk Upgrade: Used to supplement other disk upgrade kits to make up a valid configuration.

40-Disk Upgrade:
  • Add 10 disks to each DAE in a four-node configuration.
  • Add 5 disks to each DAE in an eight-node configuration.
  • Populate a new DAE in a configuration with 40-disk DAEs.

60-Disk Upgrade:
  • Add 10 disks to each DAE in a six-node configuration.
  • Populate a new DAE in a configuration with 60-disk DAEs.

Table 6 U-Series (Gen2) node upgrades

Current nodes: Four (all four nodes have 12 TB disks)
  • Upgrade to 5 nodes: One server chassis with one node and three fillers, plus one DAE with the same number of disks as one of the current DAEs. Disks must be Gen2 12 TB.
  • Upgrade to 6 nodes: One server chassis with two nodes and two fillers, plus two DAEs with the same number of disks as one of the current DAEs. Disks must be Gen2 12 TB.
  • Upgrade to 8 nodes: One server chassis with four nodes, plus four DAEs with the same number of disks as one of the current DAEs. Disks must be Gen2 12 TB.

Current nodes: Four (all four nodes have 8 TB disks)
  • Upgrade to 5 nodes: One server chassis with one node and three fillers, plus one DAE with the same number of disks as one of the current DAEs. Disks must be Gen2 8 TB.
  • Upgrade to 6 nodes: One server chassis with two nodes and two fillers, plus two DAEs with the same number of disks as one of the current DAEs. Disks must be Gen2 8 TB.
  • Upgrade to 8 nodes: One server chassis with four nodes. With 8 TB expansion disks: four DAEs being added with the same number of disks as one of the current DAEs. With 12 TB expansion disks: four DAEs being added with the same number of disks.

Current nodes: Five (all five nodes have either all 8 TB or all 12 TB disks)
  • Upgrade to 6 nodes: One node, plus one DAE with the same number of disks as one of the current DAEs.
  • Upgrade to 8 nodes: Three nodes, plus three DAEs with the same number of disks as one of the current DAEs.
  • You cannot intermix 8 TB and 12 TB disks.

Current nodes: Six (all six nodes have either all 8 TB or all 12 TB disks)
  • Upgrade to 8 nodes: Two nodes, plus two DAEs with the same number of disks as one of the current DAEs.
  • You cannot intermix 8 TB and 12 TB disks.

Note: Seven-node configurations are not supported.

When you are planning to increase the number of drives in the DAEs and add nodes to the appliance, order the disks first. Then order the node upgrades. The new DAEs are shipped with the correct number of disks preinstalled.

U-Series Appliance (Gen1) configurations and upgrade paths

Describes the first generation ECS Appliance configurations and the upgrade paths between the configurations. Gen1 hardware became generally available in June 2014.

U-Series configurations (Gen1)
The U-Series Appliance is a dense storage solution using commodity hardware.

Table 7 U-Series (Gen1) configurations

Model number                   Nodes  DAEs  Disks in DAEs 1-4  Disks in DAEs 5-8  Storage capacity  Switches
U300 (minimum configuration)   4      4     15                 Not applicable     360 TB            One private and two public
U700                           4      4     30                 Not applicable     720 TB            One private and two public
U1100                          4      4     45                 Not applicable     1080 TB           One private and two public
U1500                          4      4     60                 Not applicable     1440 TB           One private and two public
U1800                          8      8     60                 15                 1800 TB           One private and two public
U2100                          8      8     60                 30                 2160 TB           One private and two public
U2500                          8      8     60                 45                 2520 TB           One private and two public
U3000 (maximum configuration)  8      8     60                 60                 2880 TB           One private and two public

U-Series (Gen1) upgrade paths

U-Series upgrades consist of the disks and infrastructure hardware that is needed to move from the existing model number to the next higher model number. To upgrade by more than one model level, order the upgrades for each level and apply them in one service call.

Table 8 U-Series (Gen1) upgrades

Model number                   Disk upgrade (to the next higher model)  Hardware upgrade (to the next higher model)
U300 (minimum configuration)   Not applicable                           Not applicable
U700                           One 60-disk kit                          Not applicable
U1100                          One 60-disk kit                          Not applicable
U1500                          One 60-disk kit                          Not applicable
U1800                          One 60-disk kit                          One server chassis (four nodes) and four DAEs
U2100                          One 60-disk kit                          Not applicable
U2500                          One 60-disk kit                          Not applicable
U3000 (maximum configuration)  One 60-disk kit                          Not applicable
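Because each row of Table 8 lists the kits for one model step, a multi-level upgrade is just the concatenation of the per-level kits, all applied in one service call. A minimal sketch of that accumulation (model ladder and kit names transcribed from Table 8, interpreted as the kits needed to reach each model from the one below it; the helper itself is hypothetical):

```python
# Gen1 ladder, with the kits needed to reach each model from the one below it.
LADDER = ["U300", "U700", "U1100", "U1500", "U1800", "U2100", "U2500", "U3000"]
KITS = {
    "U700":  ["one 60-disk kit"],
    "U1100": ["one 60-disk kit"],
    "U1500": ["one 60-disk kit"],
    "U1800": ["one 60-disk kit",
              "one server chassis (four nodes) and four DAEs"],
    "U2100": ["one 60-disk kit"],
    "U2500": ["one 60-disk kit"],
    "U3000": ["one 60-disk kit"],
}

def kits_needed(current: str, target: str) -> list[str]:
    """Accumulate the kits for every model level between current and target."""
    i, j = LADDER.index(current), LADDER.index(target)
    return [kit for model in LADDER[i + 1 : j + 1] for kit in KITS[model]]

print(kits_needed("U1100", "U1800"))
# ['one 60-disk kit', 'one 60-disk kit',
#  'one server chassis (four nodes) and four DAEs']
```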

D-Series Appliance configurations and upgrade paths

Describes the D-Series ECS Appliance configurations and the upgrade paths. The D-Series hardware became generally available in October 2016. 10 TB models became available in March 2017.

D-Series configurations
The D-Series Appliance is a dense object storage solution using commodity hardware.

Table 9 D-Series configurations

Model number  Nodes  DAEs  Disks in each DAE  Disk size  Storage capacity  Switches
D4500         8      8     70                 8 TB       4.5 PB            One private and two public
D5600         8      8     70                 10 TB      5.6 PB            One private and two public
D6200         8      8     98                 8 TB       6.2 PB            One private and two public
D7800         8      8     98                 10 TB      7.8 PB            One private and two public

D-Series upgrade paths

The D-Series Appliances can be upgraded as shown in the table.

Table 10 D-Series upgrades

224 8 TB Disk Upgrade Kit (upgrades D4500 to D6200): 224 disks, 16 sleds. Adds 2 sleds of 14 disks each (28 disks total) to each DAE.

224 10 TB Disk Upgrade Kit (upgrades D5600 to D7800): 224 disks, 16 sleds. Adds 2 sleds of 14 disks each (28 disks total) to each DAE.
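The kit arithmetic is worth spelling out: 2 sleds x 14 disks = 28 disks added per DAE, and 8 DAEs x 28 = 224 disks (16 sleds) per kit. A quick illustrative check of the D4500-to-D6200 result, assuming decimal units:

```python
disks_added_per_dae = 2 * 14            # two sleds of 14 disks each
kit_disks = 8 * disks_added_per_dae     # 224 disks across 8 DAEs (16 sleds)

# D4500 -> D6200: 70 + 28 = 98 disks per DAE; 8 DAEs x 98 disks x 8 TB
print(kit_disks, 8 * (70 + 28) * 8 / 1000, "PB")  # 224 6.272 PB (sold as 6.2 PB)
```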

C-Series Appliance (Gen2) configurations and upgrade paths

Describes the second generation C-Series ECS Appliance configurations and the upgrade paths between the configurations. Gen2 hardware became generally available in February 2016.

C-Series (Gen2) configurations
The C-Series Appliance is a dense compute solution using commodity hardware.

Table 11 C-Series (Gen2) configurations

Phoenix-12 Compute Servers  Nodes  Storage capacity  Switches
2 (minimum configuration)   8      144 TB            One private and two public
3                           12     216 TB            One private and two public
4                           16     288 TB            One private and two public
5                           20     360 TB            One private and two public
6                           24     432 TB            One private and two public
7                           28     504 TB            Two private and four public
8                           32     576 TB            Two private and four public
9                           36     648 TB            Two private and four public
10                          40     720 TB            Two private and four public
11                          44     792 TB            Two private and four public
12 (maximum configuration)  48     864 TB            Two private and four public

C-Series (Gen2) upgrade paths

C-Series upgrades consist of the disks and infrastructure hardware that is needed to move from the existing model number to the next higher model number. To upgrade by more than one model level, order the upgrades for each level and apply them in one service call.

Table 12 C-Series (Gen2) upgrades

Model number                                           Disk upgrade (to the next higher model)  Hardware upgrade (to the next higher model)
2 Phoenix-12 Compute Servers (minimum configuration)   Not applicable                           Not applicable
3 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
4 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
5 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
6 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
7 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes) and one private and two public switches
8 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
9 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
10 Phoenix-12 Compute Servers                          12 integrated disks                      One server chassis (four nodes)
11 Phoenix-12 Compute Servers                          12 integrated disks                      One server chassis (four nodes)
12 Phoenix-12 Compute Servers (maximum configuration)  12 integrated disks                      One server chassis (four nodes)

C-Series Appliance (Gen1) configurations and upgrade paths

Describes the first generation C-Series ECS Appliance configurations and the upgrade paths between the configurations. Gen1 hardware became generally available in March 2015.

C-Series (Gen1) configurations
The C-Series Appliance is a dense compute solution using commodity hardware.

Table 13 C-Series (Gen1) configurations

Phoenix-12 Compute Servers  Nodes  Storage capacity  Switches
2 (minimum configuration)   8      144 TB            One private and two public
3                           12     216 TB            One private and two public
4                           16     288 TB            One private and two public
5                           20     360 TB            One private and two public
6                           24     432 TB            One private and two public
7                           28     504 TB            Two private and four public
8                           32     576 TB            Two private and four public
9                           36     648 TB            Two private and four public
10                          40     720 TB            Two private and four public
11                          44     792 TB            Two private and four public
12 (maximum configuration)  48     864 TB            Two private and four public

C-Series (Gen1) upgrade paths

C-Series upgrades consist of the disks and infrastructure hardware that is needed to move from the existing model number to the next higher model number. To upgrade by more than one model level, order the upgrades for each level and apply them in one service call.

Table 14 C-Series (Gen1) upgrades

Model number                                           Disk upgrade (to the next higher model)  Hardware upgrade (to the next higher model)
2 Phoenix-12 Compute Servers (minimum configuration)   Not applicable                           Not applicable
3 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
4 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
5 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
6 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
7 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes) and one private and two public switches
8 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
9 Phoenix-12 Compute Servers                           12 integrated disks                      One server chassis (four nodes)
10 Phoenix-12 Compute Servers                          12 integrated disks                      One server chassis (four nodes)
11 Phoenix-12 Compute Servers                          12 integrated disks                      One server chassis (four nodes)
12 Phoenix-12 Compute Servers (maximum configuration)  12 integrated disks                      One server chassis (four nodes)

Certified hardware in support of ECS 3.2

The following table lists the latest hardware pre-qualified for a Certified installation.

Note: All Arista switch models listed also ship standard with the ECS Appliance.

Table 15 ECS Certified hardware

Server models:
  • Dell DSS7000
  • Dell R730xd
  • HP Proliant SL4540 Gen8

Switch models:
One 1 GbE private switch is required to handle management traffic:
  • Arista 7010T-48
  • Arista 7048T
  • Dell S3048-ON
  • Cisco Nexus 3048
Two 10 GbE switches are required to handle data traffic:
  • Arista 7050SX-64
  • Arista 7050S-52
  • Arista 7150S-24
  • Arista 7124SX


CHAPTER 2
Servers

• ECS Appliance servers
• Rack and node host names

ECS Appliance servers

Provides a quick reference of servers.

ECS has the following server types:
• D-Series Gen2 Rinjin-16 for Object and HDFS (October 2016)
• U-Series Gen2 Rinjin-16 for Object and HDFS (November 2015)
• U-Series Gen1 Phoenix-16 for Object and HDFS (June 2014)
• C-Series Gen2 Rinjin-12 for Object and HDFS (February 2016)
• C-Series Gen1 Phoenix-12 for Object and HDFS (March 2015)
D-Series
D-Series Rinjin-16 nodes have the following standard features:
• Four-node servers (2U) with two CPUs per node
• 2.4 GHz six-core Haswell CPUs
• Eight 8 GB DDR4 RDIMMs
• One system disk per node (400 GB SSD)
• LED indicators for each node
• Dual hot-swap chassis power supplies
• One high-density SAS cable with one connector

U-Series
U-Series Gen2 Rinjin-16 nodes have the following standard features:
• Four-node servers (2U) with two CPUs per node
• 2.4 GHz six-core Haswell CPUs
• Eight 8 GB DDR4 RDIMMs
• One system disk per node (400 GB SSD)
• LED indicators for each node
• Dual hot-swap chassis power supplies
• One SAS adapter with two SAS ports per node

U-Series Gen1 Phoenix-16 nodes have the following standard features:
• Four-node servers (2U) with two CPUs per node
• 2.4 GHz four-core Ivy Bridge CPUs
• Four channels of native DDR3 (1333) memory
• One system disk per node (either a 200 GB or 400 GB SSD)
• LED indicators for each node
• Dual hot-swap chassis power supplies
• One SAS adapter with one SAS port per node

C-Series
C-Series Gen2 Rinjin-12 nodes have the following standard features:
• Four-node servers (2U) with two CPUs per node
• 2.4 GHz six-core Haswell CPUs
• Eight 8 GB DDR4 RDIMMs
• One system disk per node
• LED indicators for each node
• Dual hot-swap chassis power supplies

C-Series Gen1 Phoenix-12 nodes have the following standard features:
• Four-node servers (2U) with two CPUs per node
• 2.4 GHz four-core Ivy Bridge CPUs
• Four channels of native DDR3 (1333) memory
• The first disk that is assigned to each node is a 6 TB hybrid system/storage disk
• LED indicators for each node
• Dual hot-swap chassis power supplies. Supports N+1 power.
• Twelve 3.5-inch hot-swap SATA hard drives per server (three for each node)

Server front views

The following figure shows the server chassis front with the four nodes identified.

Figure 4 Phoenix-16 (Gen1) and Rinjin-16 (Gen2) server chassis front view

The following figure shows the server chassis front identifying the integrated disks assigned to each node.

Figure 5 Phoenix-12 (Gen1) and Rinjin-12 (Gen2) server chassis front view

LED indicators are on the left and right side of the server front panels.


Table 16 Server LEDs

1. System Power Button with LED for each node.
2. System ID LED Button for each node.
3. System Status LED for each node.
4. LAN Link/Activity LED for each node.

Server rear view

The Rinjin-16, Phoenix-16, Rinjin-12, and Phoenix-12 server chassis provide dual hot-swappable power supplies and four nodes.

The chassis uses a common redundant power supply (CRPS) that enables HA power in each chassis, shared across all nodes. The nodes are mounted on hot-swappable trays that fit into the four corresponding node slots accessible from the rear of the server.

Figure 6 Server chassis rear view (all)

1. Node 1
2. Node 2
3. Node 3
4. Node 4

Note: In the second server chassis in a five- or six-node configuration, the nodes (blades) must be populated starting with the node 1 slot. Empty slots must have blank fillers.

Figure 7 Rear ports on nodes (all)

1. 1 GbE: Connected to one of the data ports on the 1 GbE switch
2. RMM: A dedicated port for hardware monitoring (per node)
3. SAS to DAE: Used on U- and D-Series servers only. U-Series Gen1 has a single port. U-Series Gen2 hardware has two ports. The D-Series has one high-density SAS cable with one connector.
4. 10 GbE SW2 (hare): The left 10 GbE data port of each node is connected to one of the data ports on the 10 GbE (SW2) switch
5. 10 GbE SW1 (rabbit): The right 10 GbE data port of each node is connected to one of the data ports on the 10 GbE (SW1) switch

Rack and node host names


Lists the default rack and node host names for an ECS appliance.
Default rack IDs and color names are assigned in installation order as shown below:

Table 17 Rack ID 1 to 50

Rack ID  Color       Rack ID  Color       Rack ID  Color
1        red         18       carmine     35       cornsilk
2        green       19       auburn      36       ochre
3        blue        20       bronze      37       lavender
4        yellow      21       apricot     38       ginger
5        magenta     22       jasmine     39       ivory
6        cyan        23       army        40       carnelian
7        azure       24       copper      41       taupe
8        violet      25       amaranth    42       navy
9        rose        26       mint        43       indigo
10       orange      27       cobalt      44       veronica
11       chartreuse  28       fern        45       citron
12       pink        29       sienna      46       sand
13       brown       30       mantis      47       russet
14       white       31       denim       48       brick
15       gray        32       aquamarine  49       avocado
16       beige       33       baby        50       bubblegum
17       silver      34       eggplant

Nodes are assigned node names based on their order within the server chassis and
within the rack itself. The following table lists the default node names.

Table 18 Default node names

Node  Name     Node  Name      Node  Name
1     provo    9     boston    17    memphis
2     sandy    10    chicago   18    seattle
3     orem     11    houston   19    denver
4     ogden    12    phoenix   20    portland
5     layton   13    dallas    21    tucson
6     logan    14    detroit   22    atlanta
7     lehi     15    columbus  23    fresno
8     murray   16    austin    24    mesa

Nodes positioned in the same slot in different racks at a site have the same node name. For example, node 4 is always called ogden, assuming you use the default node names.

The getrackinfo command identifies nodes by a unique combination of node name and rack name. For example, node 4 in rack 2 and node 4 in rack 3 are identified as:

ogden-green
ogden-blue

and can be pinged using their NAN-resolvable (via mDNS) names:

ogden-green.nan.local
ogden-blue.nan.local
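Given Tables 17 and 18, the qualified name is simply the node name joined to the rack color. A minimal sketch of that lookup (tables abbreviated; this is not the getrackinfo implementation):

```python
# Abbreviated copies of Tables 17 and 18 (the full tables run to 50 and 24).
RACK_COLORS = {1: "red", 2: "green", 3: "blue", 4: "yellow", 5: "magenta"}
NODE_NAMES = {1: "provo", 2: "sandy", 3: "orem", 4: "ogden"}

def nan_hostname(node: int, rack: int) -> str:
    """Return the NAN-resolvable (mDNS) host name for a node in a rack."""
    return f"{NODE_NAMES[node]}-{RACK_COLORS[rack]}.nan.local"

print(nan_hostname(4, 2))  # ogden-green.nan.local
print(nan_hostname(4, 3))  # ogden-blue.nan.local
```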



CHAPTER 3
Switches

• ECS Appliance switches

ECS Appliance switches

Provides a quick reference of private and public switches.
• Private switch: One 1 GbE private switch to handle management traffic. In a C-Series appliance with more than six servers, a second private switch is added.
• Public switch: Two 10 GbE switches to handle data traffic. In a C-Series appliance with more than six servers, two more public switches are added.

Table 19 ECS Appliance switch summary

Arista 7010T-48, part number 100-400-120-xx, Private 1 GbE (Turtle). Used in:
  • D-Series
  • U-Series Gen2
  • U-Series Gen1
  • C-Series Gen2
  • C-Series Gen1

Arista 7048T, part number 100-585-063-xx, Private 1 GbE (Turtle). Used in:
  • U-Series Gen2
  • U-Series Gen1
  • C-Series Gen2
  • C-Series Gen1

Cisco 3048 48-P, part number 100-400-130-xx, Private 1 GbE (Turtle). This switch is available when customers are supplying their own public Cisco switches through an RPQ. Used in:
  • D-Series
  • U-Series Gen2

Arista 7050SX-64, part number 100-400-065-xx, Public 10 GbE (Hare and Rabbit). Used in:
  • D-Series
  • U-Series Gen2
  • C-Series Gen2
  • C-Series Gen1

Arista 7050S-52, part number 100-585-062-xx, Public 10 GbE (Hare and Rabbit). Used in:
  • U-Series Gen2
  • C-Series Gen2
  • C-Series Gen1

Arista 7150S-24, part number 100-564-196-xx, Public 10 GbE (Hare and Rabbit). Used in: U-Series Gen1

Arista 7124SX, part number 100-585-061-xx, Public 10 GbE (Hare and Rabbit). Used in: U-Series Gen1


Private switch: Cisco 3048 48-P

The private switch is used for management traffic. It has 52 ports and dual power supply inputs. The switch is configured in the factory.

Figure 8 Cisco 3048 ports (rear)

Figure 9 Cisco 3048 ports (front)

Table 20 Cisco 3048 switch configuration detail

Figure label  Ports           Connection description
1             1-24            Connected to the MGMT (eth0) network ports on the nodes (blue cables).
2             25-48           Connected to the RMM network ports on the nodes (gray cables).
3             49              The 1 GbE management port. This port is connected to the rabbit (bottom) 10 GbE switch management port. See Note 2.
3             50              The 1 GbE management port. This port is connected to the hare (top) 10 GbE switch management port. See Note 2.
4             51              Rack/Segment Interconnect IN. See Notes 1 and 2.
5             52              Rack/Segment Interconnect OUT. See Notes 1 and 2.
6             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch. This port is on the front of the switch.

Note:
1. The NAN (Nile Area Network) links all ECS Appliances at a site.
2. Ports 49 through 52 use Cisco 1G BASE-T SFPs (part number 100-400-141). In an ECS Appliance, these four SFPs are installed in the 1 GbE switch. In a customer-supplied rack order, these SFPs need to be installed.
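The private-switch port plan in Tables 20 through 22 splits cleanly: MGMT (eth0) on ports 1-24 and RMM on ports 25-48. A minimal sketch of the mapping, assuming the straight node-to-port layout that split implies (node N on port N and port N + 24; the helper itself is hypothetical):

```python
def private_switch_port(node: int, interface: str) -> int:
    """Map a node's interface to a 1 GbE private-switch port (Tables 20-22).
    Assumes node N patches to port N (MGMT, blue cable) and N + 24 (RMM, gray)."""
    if not 1 <= node <= 24:
        raise ValueError("the private switch serves nodes 1-24 only")
    return node if interface == "MGMT" else node + 24

print(private_switch_port(5, "MGMT"))  # 5
print(private_switch_port(5, "RMM"))   # 29
```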

Private switch: Arista 7010T-48

The private switch is used for management traffic. It has 52 ports and dual power supply inputs. The switch is configured in the factory.

Figure 10 Arista 7010T-48 ports

Table 21 Arista 7010T-48 switch configuration detail

Figure label  Ports           Connection description
1             1-24            Connected to the MGMT (eth0) network ports on the nodes (blue cables).
2             25-48           Connected to the RMM network ports on the nodes (gray cables).
3             49              The 1 GbE management port. This port is connected to the rabbit (bottom) 10 GbE switch management port. See Note 2.
3             50              The 1 GbE management port. This port is connected to the hare (top) 10 GbE switch management port. See Note 2.
4             51              Rack/Segment Interconnect IN. See Notes 1 and 2.
5             52              Rack/Segment Interconnect OUT. See Notes 1 and 2.
6             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.

Note:
1. The NAN (Nile Area Network) links all ECS Appliances at a site.
2. Ports 49 through 52 contain SFPs (RJ45 copper). In an ECS Appliance or a customer-supplied rack order, these four SFPs are installed in the 1 GbE switch.


Private switch: Arista 7048T-48

The private switch is used for management traffic. It has 52 ports and dual power supply inputs. The switch is configured in the factory.

Figure 11 Arista 7048T-48 ports

Table 22 Arista 7048T-48 switch configuration detail

Figure label  Ports           Connection description
1             1-24            Connected to the MGMT (eth0) network ports on the nodes (blue cables).
2             25-48           Connected to the RMM network ports on the nodes (gray cables).
3             49              The 1 GbE management port. This port is connected to the rabbit (bottom) 10 GbE switch management port. See Note 2.
3             50              The 1 GbE management port. This port is connected to the hare (top) 10 GbE switch management port. See Note 2.
4             51              Rack/Segment Interconnect IN. See Notes 1 and 2.
5             52              Rack/Segment Interconnect OUT. See Notes 1 and 2.
6             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.

Note:
1. The NAN (Nile Area Network) links all ECS Appliances at a site.
2. Ports 49 through 52 contain SFPs (RJ45 copper). In an ECS Appliance or a customer-supplied rack order, these four SFPs are installed in the 1 GbE switch.


Public switch: Arista 7050SX-64

The 7050SX-64 switch is a 52-port switch. The switch is equipped with 52 SFP+ ports, dual hot-swap power supplies, and redundant, field-replaceable fan modules.

Figure 12 Arista 7050SX-64 ports

Table 23 7050SX-64 switch port connections used on the top 10 GbE switch (hare)

Figure label  Ports           Connection description
1             1-8             The 10 GbE uplink data ports. These ports provide the connection to the customer's 10 GbE infrastructure. SR Optic. See note.
2             9-32            The 10 GbE node data ports; only ports 9-16 are used in the U- and D-Series. These ports are connected to the left 10 GbE (P02) interface on each node. SR Optic.
3             33-44           Unused.
4             45-48           The 10 GbE LAG ports. These ports are connected to the LAG ports on the other 10 GbE switch (rabbit). SR Optic.
5             49-52           Unused.
6             <...>           The 1 GbE management port. This port is connected to port 50 of the management switch (turtle). RJ-45.
7             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.

Table 24 7050SX-64 switch port connections used on the bottom 10 GbE switch (rabbit)

Figure label  Ports           Connection description
1             1-8             The 10 GbE uplink data ports. These ports provide the connection to the customer's 10 GbE infrastructure. SR Optic. See note.
2             9-32            The 10 GbE node data ports; only ports 9-16 are used in the U- and D-Series. These ports are connected to the right 10 GbE (P01) interface on each node. SR Optic.
3             33-44           Unused.
4             45-48           The 10 GbE LAG ports. These ports are connected to the LAG ports on the other 10 GbE switch (hare). SR Optic.
5             49-52           Unused.
6             <...>           The 1 GbE management port. This port is connected to port 49 of the management switch (turtle). RJ-45.
7             Serial console  The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.

Note: 10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs can be ordered through Dell EMC. An ECS appliance that is shipped in a Dell EMC rack has all SFPs installed; for a customer rack installation, they are not preinstalled. In either case, the switch may require additional SFPs to be installed or reconfigured in ports 1-8 based on the customer uplink configuration.
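Tables 23 and 24 amount to a port-role map that a cabling check can be written against. A minimal sketch encoding the hare map (role names are ours; port ranges are from Table 23, with the node ports 17-32 that are unused in U- and D-Series folded into "unused"):

```python
# Port roles for the top 10 GbE switch (hare), per Table 23.
HARE_PORT_ROLES = {
    "uplink": list(range(1, 9)),                         # 1-8: customer uplinks
    "node":   list(range(9, 17)),                        # 9-16: left (P02) node ports
    "lag":    list(range(45, 49)),                       # 45-48: LAG to rabbit
    "unused": list(range(17, 45)) + list(range(49, 53)),
}

def role_of(port: int) -> str:
    """Return the cabling role of a hare switch port."""
    for role, ports in HARE_PORT_ROLES.items():
        if port in ports:
            return role
    raise ValueError(f"port {port} is not on this switch")

print(role_of(12))  # node
print(role_of(46))  # lag
```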

Public switch: Arista 7050S-52


The 7050S-52 switch is a 52-port switch. The switch is equipped with 52 SFP+ ports,
dual hot-swap power supplies, and redundant, field-replaceable fan modules.
Figure 13 Arista 7050S-52 ports

Table 25 7050S-52 switch port connections used on the top 10 GbE switch (hare)

Figure label Ports Connection description


1 1–8 The 10 GbE uplink data ports. These ports provide the
connection to the customer's 10 GbE infrastructure. SR Optic.
See note.

2 9–32 The 10 GbE node data ports, only ports 9–16 are used in the
U- and D-Series. These ports are connected to the left 10 GbE
(P02) interface on each node. SR Optic.

3 33–44 Unused.

4 45–48 The 10 GbE LAG ports. These ports are connected to the LAG
ports on the other 10 GbE switch (rabbit). SR Optic.

5 49–52 Unused.

6 <...> The 1 GbE management port. This port is connected to port


50 of the management switch (turtle). RJ-45.

Public switch: Arista 7050S-52 39


Switches

Table 25 7050S-52 switch port connections used on the top 10 GbE switch (hare) (continued)

Figure label Ports Connection description


7 Serial console The console port is used to manage the switch through a
serial connection. The Ethernet management port is
connected to the 1 GbE management switch.

Table 26 7050S-52 switch port connections used on the bottom 10 GbE switch (rabbit)

Figure label Ports Connection description


1 1–8 The 10 GbE uplink data ports. These ports provide the
connection to the customer's 10 GbE infrastructure. SR Optic.
See note.

2 9–32 The 10 GbE node data ports, only ports 9–16 are used in the
U- and D-Series. These ports are connected to the right 10
GbE (P01) interface on each node. SR Optic.

3 33–44 Unused.

4 45–48 The 10 GbE LAG ports. These ports are connected to the LAG
ports on the other 10 GbE switch (hare). SR Optic.

5 49–52 Unused.

6 <...> The 1 GbE management port. This port is connected to port
49 of the management switch (turtle). RJ-45.

7 Serial console The console port is used to manage the switch through a
serial connection. The Ethernet management port is
connected to the 1 GbE management switch.

Note

10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs
can be ordered through Dell EMC. An ECS appliance that is shipped in a Dell EMC rack
has all SFPs installed; for a customer rack installation, the SFPs ship uninstalled.
In either case, the switch may require additional SFPs to be installed or
reconfigured in ports 1–8 based on the customer uplink configuration.

Public switch: Arista 7150S-24


The Arista 7150S-24 is a 24-port switch equipped with 24 SFP+ ports,
dual hot-swap power supplies, and redundant, field-replaceable fan modules.
Figure 14 Arista 7150S-24 ports


Table 27 7150S switch port connections used on the top 10 GbE switch (hare)

Figure label Ports Connection description


1 1–8 The 10 GbE uplink data ports. These ports provide the
connection to the customer's 10 GbE infrastructure. SR Optic.
See note.

2, 3 9–20 The 10 GbE node data ports. Only ports 9–16 are used in U-
and D-Series. These ports are connected to the left (P02) 10
GbE interface on each node. SR Optic.

4, 5 21–24 The 10 GbE LAG ports. These ports are connected to the LAG
ports on the bottom 10 GbE switch (rabbit). SR Optic.

6 <...> The 1 GbE management port. This port is connected to port
50 of the management switch (turtle). RJ-45.

7 Serial console The console port is used to manage the switch through a
serial connection. The Ethernet management port is
connected to the 1 GbE management switch.

Table 28 7150S switch port connections used on the bottom 10 GbE switch (rabbit)

Figure label Ports Connection description


1 1–8 The 10 GbE uplink data ports. These ports provide the
connection to the customer's 10 GbE infrastructure. SR Optic.
See note.

2, 3 9–20 The 10 GbE node data ports. Only ports 9–16 are used in U-
and D-Series. These ports are connected to the right (P01) 10
GbE interface on each node. SR Optic.

4, 5 21–24 The 10 GbE LAG ports. These ports are connected to the LAG
ports on the top 10 GbE switch (hare). SR Optic.

6 <...> The 1 GbE management port. This port is connected to port
49 of the management switch (turtle). RJ-45.

7 Serial console The console port is used to manage the switch through a
serial connection. The Ethernet management port is
connected to the 1 GbE management switch.

Note

10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs
can be ordered through Dell EMC. An ECS appliance that is shipped in a Dell EMC rack
has all SFPs installed; for a customer rack installation, the SFPs ship uninstalled.
In either case, the switch may require additional SFPs to be installed or
reconfigured in ports 1–8 based on the customer uplink configuration.


Public switch: Arista 7124SX


The Arista 7124SX switch is equipped with 24 SFP+ ports, dual hot-swap power
supplies, and redundant, field-replaceable fan modules.
Figure 15 Arista 7124SX

Table 29 7124SX switch port connections used on the top 10 GbE switch (hare)

Figure label Ports Connection description


1 1-8 The 10 GbE uplink data ports. These ports provide the
connection to the customer's 10 GbE infrastructure. SR Optic.
See note.

2, 3 9-20 The 10 GbE node data ports. Only ports 9-16 are used in U-
and D-Series. These ports are connected to the left (P02) 10
GbE interface on each node. SR Optic.

4, 5 21-24 The 10 GbE LAG ports. These ports are connected to the LAG
ports on the bottom 10 GbE switch (rabbit). SR Optic.

6 <...> The 1 GbE management port. This port is connected to port
50 of the management switch (turtle). RJ-45.

7 Serial console The console port is used to manage the switch through a
serial connection. The Ethernet management port is
connected to the 1 GbE management switch.

Table 30 7124SX switch port connections used on the bottom 10 GbE switch (rabbit)

Figure label Ports Connection description


1 1-8 The 10 GbE uplink data ports. These ports provide the
connection to the customer's 10 GbE infrastructure. SR Optic.
See note.

2 9-20 The 10 GbE node data ports. Only ports 9-16 are used in U-
and D-Series. These ports are connected to the right (P01) 10
GbE interface on each node. SR Optic.

3, 4 21-24 The 10 GbE LAG ports. These ports are connected to the LAG
ports on the top 10 GbE switch (hare). SR Optic.

5 <...> The 1 GbE management port. This port is connected to port
49 of the management switch (turtle). RJ-45.

6 Serial console The console port is used to manage the switch through a
serial connection. The Ethernet management port is
connected to the 1 GbE management switch.

Note

10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs
can be ordered through Dell EMC. An ECS appliance that is shipped in a Dell EMC rack
has all SFPs installed; for a customer rack installation, the SFPs ship uninstalled.
In either case, the switch may require additional SFPs to be installed or
reconfigured in ports 1–8 based on the customer uplink configuration.



CHAPTER 4
Disk Drives

• Integrated disk drives.........................................................................................46
• Storage disk drives.............................................................................................46
• Disk array enclosures......................................................................................... 47


Integrated disk drives


Describes disk drives that are integrated into the server chassis of the ECS Appliance.
D-Series
In D-Series servers, OS disks are integrated into the server chassis and are accessible
from the front of the server chassis. Each node has one OS SSD drive.
U-Series
In U-Series servers, OS disks are integrated into the server chassis and are accessible
from the front of the server chassis. Each node has one OS SSD drive.

Note

Early Gen1 appliances had two mirrored disks per node.

C-Series
In C-Series servers with integrated storage disks, the disks are accessible from the
front of the server chassis. The disks are assigned equally to the four nodes in the
chassis. All disks must be the same size and speed. Gen1 uses 6 TB disks and Gen2
uses 8 TB and 12 TB disks.

Note

In Gen1 only, the first integrated disk that is assigned to each node is called disk drive
zero (HDD0). These storage drives contain some system data.

Figure 16 C-Series (Gen1) Integrated disks with node mappings

Storage disk drives


Describes the disk drives used in ECS Appliances.

Table 31 Storage disk drives

Series and Generation Service Size RPM Type


D-Series D5600 and D7800 Object 10 TB 7200 SAS

D-Series D4500 and D6200, U-Series Gen2, C-Series Gen2 Object 8 TB 7200 SAS

U-Series Gen1, C-Series Gen1 Object 6 TB 7200 SATA

U-Series Gen2 Object 12 TB 7200 SAS

All disks integrated into a server chassis or in a DAE must conform to these rules:


• All disk drives must be the same size within a DAE
• All disk drives must be the same speed

Disk array enclosures


The D-Series and U-Series Appliance include disk array enclosures (DAEs). The DAE is
a drawer that slides in and out of the 40U rack. The storage disk drives, I/O modules,
and cooling modules are located inside of the DAE.

Note

Use the power and weight calculator to plan for the weight of the configuration.

ECS Appliances use two types of DAE:


• The D-Series includes the Pikes Peak (dense storage) enclosure, which can hold
up to 98 disks.
• The U-Series includes the Voyager DAE, which can hold up to 60 disks.
The C-Series does not use DAEs. C-Series servers have integrated disks: twelve
3.5-inch disk drives accessible from the front of each server.
Pikes Peak (dense storage)
The Pikes Peak enclosure has the following features:
• Seven sleds with up to 14 3.5-inch disk drives each in a single 4U drawer (up to 98
disk drives total). Serviced from the front, after removing the I/O module.
• One I/O module containing two replaceable power supply units (PSUs). Serviced
from the front.
• Three exhaust fans or cooling modules; n+1 redundant. Serviced from the rear.
• Two power supplies; n+1 redundant. Serviced from within the I/O module in front.
• Blank filler sleds for partially populated configurations.
• Two 4-lane 12 Gb/s SAS 3.0 interconnects.
• 19" 4U 1m deep chassis.
Voyager DAE
The Voyager DAE has the following features:
• 3.5-inch disk drives in a single 4U drawer. Serviced from the front.
• One Link Control Card (LCC). Serviced from the front.
• One Inter-Connect Module (ICM). Serviced from the back.
• Three fans or cooling modules; n+1 redundant. Serviced from the front.
• Two power supplies; n+1 redundant. Serviced from the back.

Pikes Peak (dense storage)


The Pikes Peak enclosure is used in D-Series ECS Appliances.

Chassis, sleds, and disks


Chassis
The chassis is composed of:
• Seven sleds with up to 14 3.5-inch disk drives each in a single 4U drawer (up to 98
disk drives total). Serviced from the front, after removing the I/O module.


• One I/O module containing two replaceable power supply units (PSUs). Serviced
from the front.
• Three exhaust fans or cooling modules; n+1 redundant. Serviced from the rear.
• Two power supplies; n+1 redundant. Serviced from within the I/O module in front.
• Blank filler sleds for partially populated configurations.
• Two 4-lane 12 Gb/s SAS 3.0 interconnects.
• 19" 4U 1m deep chassis.
Replacing a sled, a drive, or the I/O module requires taking the DAE offline (cold
service). All drives in the DAE are inaccessible during the cold service. However, the
identify LEDs will continue to operate for 15 minutes after power is disconnected.
Figure 17 Pikes Peak chassis


Figure 18 Pikes Peak chassis with I/O module and power supplies removed, sleds extended

Figure 19 Enclosure LEDs from the front

Table 32 Enclosure LEDs

LED Color State Description


Enclosure "OK" Green Solid Enclosure operating normally

Enclosure Fail Yellow Fast flashing Enclosure failure

Enclosure Identify Blue Slow flashing Enclosure received an identify command


Sleds and disks


The seven sleds are designated by letters A through G.
Figure 20 Sleds letter designations

Each sled must be fully populated with 14 drives of the same size and speed. The D6200
and D7800 use seven sleds; the D4500 and D5600 use five sleds. In five-sled
configurations, sled positions C and E are populated by blank filler sleds. Sleds are
serviced by pulling the sled forward and removing the cover.
Drives are designated by the sled letter plus the slot number. The following figure
shows the drive designators for sled A.


Figure 21 Drive designations and sled LEDs
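Drive designators follow directly from the sled layout. The following is a minimal
sketch (illustrative only, not an ECS tool) that enumerates the designators for a
model; the sled counts follow the text above, and slot numbering from 1 to 14 is an
assumption made for illustration:

```python
# Enumerate Pikes Peak drive designators (sled letter + slot number).
# Blank filler sled positions follow the five-sled description above;
# 1-14 slot numbering is an assumption for illustration.
BLANK_SLEDS = {"D4500": {"C", "E"}, "D5600": {"C", "E"},
               "D6200": set(), "D7800": set()}

def drive_designators(model: str) -> list:
    blanks = BLANK_SLEDS[model]
    return [f"{sled}{slot}"
            for sled in "ABCDEFG" if sled not in blanks
            for slot in range(1, 15)]

print(len(drive_designators("D7800")))  # 98 drives across seven sleds
print(len(drive_designators("D4500")))  # 70 drives across five sleds
print(drive_designators("D4500")[:3])   # ['A1', 'A2', 'A3']
```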

Each sled and drive slot has an LED to indicate failure or to indicate that the LED was
enabled by an identify command.

Table 33 Sled and drive LEDs

LED Color State Description


Sled Identify/Fail Amber Slow Flashing Sled received an identify command

HDD Identify/Fail Amber Fast Flashing SAS link failure

Each drive is enclosed in a tool-less carrier before it is inserted into the sled.


Figure 22 Disk drive in carrier

Figure 23 Empty drive carrier

I/O module and power supplies


I/O module
At the front of the enclosure is a removable base that includes the I/O module on the
bottom and two power supplies on top. The I/O module contains all of the SAS
functionality for the DAE. The I/O module is replaceable after the DAE is powered off.


Figure 24 I/O module separated from enclosure

The front of the I/O module has a set of status LEDs for each SAS link.
Figure 25 SAS link LEDs


Table 34 SAS link LEDs

LED Color State Description


Mini-SAS Link OK Green Solid Valid SAS link detected

Mini-SAS Identify/Fail Amber Slow flashing SAS link received an identify command

Mini-SAS Identify/Fail Amber Fast flashing SAS link failure

Note

The Link OK and SAS A and B Fail LEDs are not green and amber fast flashing when
the DAE is powered on but the node/SAS HBA is not online (NO LINK).

While the I/O module hardware used in the D-Series is identical between the 8 TB and
10 TB models, the software configuration of the I/O module is different depending on
the disks used in the model. Consequently, the I/O module field-replaceable unit (FRU)
number is different depending on disk size:
• I/O module FRU for 8 TB models (D4500 and D6200): 05-000-427-01
• I/O module FRU for 10 TB models (D5600 and D7800): 105-001-028-00
Power supplies
Two power supplies (n + 1 redundant) sit on top of the I/O module in front. A single
power supply can be swapped without removing the I/O module assembly or powering
off the DAE.
Figure 26 Power supply separated from I/O module

At the top of each power supply is a set of status LEDs.


Figure 27 Power supply LEDs

Table 35 Power supply LEDs

LED Color State Description


PSU Fail Amber Solid There is a fault in the power supply

PSU Identify Blue Solid The power supply received an identify command

AC OK Green Solid AC power input is within regulation

DC OK Green Solid DC power output is within regulation

Fan modules
The Pikes Peak DAE has three hot-swappable managed system fans at the rear in a
redundant 2-plus-1 configuration. Logic in the DAE will gracefully shut down the DAE
if the heat becomes too high after a fan failure. A failed fan must be left in place until
the fan replacement service call. Each fan has an amber fault LED. The fans are
labeled A, B, and C from right to left.


Figure 28 Enclosure fan locations

Voyager DAE
The Voyager DAE is used in U-Series ECS Appliances.

Disk drives in Voyager DAEs


Disk drives are encased in cartridge-style enclosures. Each cartridge has a latch that
allows you to snap a disk drive out for removal and snap it in for installation.
The inside of each Voyager DAE has printed labels on the left and front sides that
describe the rows (or banks) and columns (or slots) where the disk drives are
installed.
The banks are labeled from A to E and the slots are labeled from 0 to 11. When
describing the layout of disk drives within the DAE, the interface format is E_D,
where E indicates the enclosure and D the disk. For example, the location 1_B11 is
interpreted as enclosure 1, row (bank) B, slot 11.
Enclosures are numbered from 1 through 8 starting at the bottom of the rack. Rear
cable connections are color-coded.
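
The E_D format is simple to parse mechanically. Below is a minimal sketch
(illustrative only, not part of any ECS tooling) that validates a location string
against the ranges described above (enclosures 1-8, banks A-E, slots 0-11):

```python
import re

def parse_disk_location(location: str):
    """Parse an E_D location such as '1_B11' into (enclosure, bank, slot)."""
    match = re.fullmatch(r"([1-8])_([A-E])(\d{1,2})", location)
    if not match:
        raise ValueError(f"not a valid E_D location: {location!r}")
    enclosure, bank, slot = int(match[1]), match[2], int(match[3])
    if not 0 <= slot <= 11:
        raise ValueError(f"slot must be 0-11, got {slot}")
    return enclosure, bank, slot

print(parse_disk_location("1_B11"))  # (1, 'B', 11)
```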
The arrangement of disks in a DAE must match the prescribed layouts that are shown
in the figures that follow. Not all layouts are available for all hardware.
Looking at the DAE from the front and above, the following figure shows the disk drive
layout of the DAE.
Disk population rules:
• The first disk must be placed at Row A Slot 0 with each subsequent disk placed
next to it. When Row A is filled, the next disk must be placed in Row B Slot 0. (Do
not skip a slot.)
• (Gen2) For a full-rack, each DAE must have the same number of disks from 10 to
60 in increments of 5.
• (Gen2) For a half-rack, each DAE must have the same number of disks from 10 to
60 in increments of 10.
• (Gen2) To upgrade a half-rack, add the "1 server, 4 DAEs, and 40 disk upgrade
kit." Each DAE in the full rack must have the same number of disks. Add enough
40-disk upgrade kits to match the disks in the original DAEs.


• (Gen1) A DAE can contain 15, 30, 45, or 60 disks.
• (Gen1) The lower four DAEs must contain the same number of disks.
• (Gen1) The upper DAEs are added only after the lower DAEs contain 60 disks.
• (Gen1) The upper DAEs must contain the same number of disks.
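
The per-DAE counts can be checked mechanically. The following is a minimal sketch
(illustrative only, not an ECS tool); the rack-level rules, such as equal counts
across DAEs and filling lower DAEs first, are left to the caller:

```python
def valid_dae_disk_count(disks: int, gen: int, half_rack: bool = False) -> bool:
    """Check one DAE's disk count against the population rules above."""
    if gen == 1:
        return disks in (15, 30, 45, 60)
    # Gen2: 10-60 disks, in fives for a full rack and in tens for a half rack.
    step = 10 if half_rack else 5
    return 10 <= disks <= 60 and disks % step == 0

print(valid_dae_disk_count(45, gen=2))                  # True (full rack)
print(valid_dae_disk_count(45, gen=2, half_rack=True))  # False
```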
The figures show example layouts.
Figure 29 U-Series disk layout for 10-disk configurations (Gen2 only)


Figure 30 U-Series disk layout for 15-disk configurations (Gen1, Gen2 full-rack only)


Figure 31 U-Series disk layout for 30-disk configurations (Gen1, Gen2)


Figure 32 U-Series disk layout for 45-disk configurations (Gen1, Gen2 full-rack)


Figure 33 U-Series disk layout for 60-disk configurations

Link control cards


Each DAE includes a link control card (LCC) whose main function is to be a SAS
expander and provide enclosure services. The LCC independently monitors the
environment status of the entire enclosure and communicates the status to the
system. The LCC includes a fault LED and a power LED.

Note

Remove the power from the DAE before replacing the LCC.

Table 36 DAE LCC status LED

LED Color State Description


Power Green On Power on

— Off Power off

Power fault Amber On Fault

— Off No fault or power off


Figure 34 LCC with LEDs

Figure 35 LCC Location


Fan control module


Each DAE includes three fan control modules (cooling modules) on the front of the
DAE. The fan control module augments the cooling capacity of each DAE. It plugs
directly into the DAE baseboard from the top of the DAE. Inside the fan control
module, sensors measure the external ambient temperatures to ensure even cooling
throughout the DAE.


Table 37 Fan control module fan fault LED

LED Color State Description


Fan fault Amber On Fault detected. One or more
fans faulted.

— Off No fault. Fans operating normally.

Figure 36 Fan control module with LED

Figure 37 Location of fan modules

Interconnect Module
The Interconnect Module (ICM) is the primary interconnect management element.
It is a plug-in module that includes a USB connector, RJ-12 management adapter, Bus
ID indicator, enclosure ID indicator, two input SAS connectors and two output SAS
connectors with corresponding LEDs. These LEDs indicate the link and activity of each
SAS connector for input and output to devices.

Note

Disconnect power to the DAE when changing the ICM.

Table 38 ICM bus status LEDs

LED Color State Description


Power Green On Power on

— Off Power off

Power fault Amber On Fault

— Off No fault or power off

The ICM supports the following I/O ports on the rear:
• Four 6 Gb/s SAS x8 ports (two input and two output)
• One management (RJ-12) connector to the SPS (field service diagnostics only)
• One USB connector
Of the four SAS ports on the rear of the ICM, one is used in Gen1 hardware and two
are used in Gen2 hardware. These ports provide the interface for SAS and NL-SAS
drives in the DAE.

Table 39 ICM 6 Gb/s port LEDs

LED Color State Description


Link/Activity Blue On Indicates a 4x or 8x connection with all lanes
running at 6 Gb/s.

Green On Indicates that a wide port width other than 4x or 8x has been
established or one or more lanes is not running at full speed or
disconnected.

— Off Not connected.


Figure 38 ICM LEDs

1. Power fault LED (amber)
2. Power LED (green)
3. Link activity LEDs (blue/green)
4. Single SAS port that is used for Gen1 hardware.
5. Two SAS ports that are used for Gen2 hardware.

Power supply
The power supply is hot-swappable. It has a built-in thumbscrew for ease of
installation and removal. Each power supply includes a fan to provide cooling to the
power supply. The power supply is an auto-ranging, power-factor-corrected, multi-
output, offline converter with its own line cord. Each supply supports a fully
configured DAE and shares load currents with the other supply. The power supplies
provide four independent power zones. Each of the hot-swappable power supplies can
deliver 1300 W at 12 V in its load-sharing, highly available configuration. Control and
status are implemented through the I2C interface.


Table 40 DAE AC power supply/cooling module LEDs

LED Color State Description


AC power on (12 V power); one LED for each power cord Green On OK. AC or SPS
power applied. All output voltages are within respective operating ranges, not
including fan fault.

— Off 12 V power is out of operating range, or in shutdown, or a fault is detected
within the unit.

Power fault Amber On Under ICM control. LED is on if any fans or outputs are
outside the specified operating range while the unit is not in low power mode.

— Off All outputs are within the specified range, or in shutdown, or a fault is
detected within the unit.

Figure 39 DAE power supply



CHAPTER 5
Third Party Rack Requirements

• Third-party rack requirements........................................................................... 68


Third-party rack requirements


Customers who want to assemble an ECS Appliance using their own racks must
ensure that the racks meet the requirements listed in Table 41.
An RPQ is required for the following additional scenarios related to a customer-
provided rack:
• A single model that is installed in multiple racks.
• The U-Series DAE Cable Management Arms (CMA) cannot be installed due to
third-party rack limitations.
• Transfers from a Dell EMC rack to a customer rack.
Option: the customer rack allows the rear rails to be adjusted to 24 inches so that
Dell EMC fixed rails can be used. An RPQ is not required if all third-party rack
requirements in Table 41 are met.

Table 41 Third-party rack requirements

Requirement Category Description


Cabinet 44 inches minimum rack depth.

Recommended 24 inches wide cabinet to provide room for cable routing on the sides of
the cabinet.

Sufficient contiguous space anywhere in the rack to install the components in the
required relative order.

If a front door is used, it must maintain a minimum of 1.2 inches of clearance to the
bezels. It must be perforated with 50% or more evenly distributed air opening. It should
enable easy access for service personnel and allow the LEDs to be visible through it.

If a rear door is used, it must be perforated with 50% or more evenly distributed air
opening.

Blanking panels should be used as required to prevent air recirculation inside the cabinet.

There is a recommended minimum of 42 inches of clearance in the front and 36 inches of
clearance in the rear of the cabinet to allow for service area and proper airflow.

NEMA rails 19 inches wide rail with 1U increments.

Between 24 inches and 34 inches deep.

NEMA round and square hole rails are supported.

NEMA threaded hole rails are NOT supported.

NEMA round holes must accept M5 size screws.

Special screws (036-709-013 or 113) are provided for use with square-hole rails.

Square-hole rails require M5 nut clips, which are provided by the customer for a
third-party rack.

Power The AC power requirements are 200–240 VAC +/- 10% 50–60 Hz.

Vertical PDUs and AC plugs must not interfere with the DAE and Cable Management
Arms, which require a depth of 42.5 inches.


The customer rack should have redundant power zones, one on each side of the rack with
separate PDU power strips. Each redundant power zone should have capacity for the
maximum power load. NOTE: Dell EMC is not responsible for any failures, issues, or
outages resulting from failure of the customer-provided PDUs.

Cabling Cables for the product must be routed in such a way that they mimic the standard ECS
Appliance offering coming from the factory. This includes dressing cables to the sides to
prevent drooping and interfering with service of field-replaceable units (FRUs).

Optical cables should be dressed to maintain a 1.5-inch bend radius.

Cables for third-party components in the rack cannot cross or interfere with ECS logic
components in such a way that they block front to back air flow or individual FRU service
activity.

Disk Array Enclosures (DAEs) All DAEs should be installed in sequential order from bottom to top to prevent a tipping
risk.

WARNING

Opening more than one DAE at a time creates a tip hazard. ECS racks provide an
integrated solution to prevent more than one DAE from being open at a time.
Customer racks will not be able to support this feature.

Weight Customer rack must be capable of supporting the weight of ECS equipment.

Note

Use the power and weight calculator to refine the power and heat values to more
closely match the hardware configuration for the system. The calculator contains the
latest information for power and weight planning.

ECS support personnel can refer to the Elastic Cloud Storage Third-Party Rack
Installation Guide for more details on installing in customer racks.



CHAPTER 6
Power Cabling

• ECS power calculator.........................................................................................72
• U-Series single-phase AC power cabling ........................................................... 72
• U-Series three-phase AC power cabling.............................................................74
• D-Series single-phase AC power cabling ........................................................... 77
• D-Series three-phase AC power cabling.............................................................79
• C-Series single-phase AC power cabling ........................................................... 83
• C-Series 3-phase AC power cabling .................................................................. 84


ECS power calculator


Use the power and weight calculator to refine the power and heat values to more
closely match the hardware configuration for your system. The calculator contains the
latest information for power and weight planning.

U-Series single-phase AC power cabling


Provides the single-phase power cabling diagram for the U-Series ECS Appliance.
The switches plug into the front of the rack and route through the rails to the rear.


Figure 40 U-Series single-phase AC power cabling for eight-node configurations



Note

For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5
through 8 and server chassis 2.

U-Series three-phase AC power cabling


Provides cabling diagrams for three-phase AC delta and wye power.
Three-phase Delta AC power cabling
The legend maps colored cables that are shown in the diagram to part numbers and
cable lengths.
Figure 41 Cable legend for three-phase delta AC power diagram


Figure 42 Three-phase AC delta power cabling for eight-node configuration



Note

For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5–8
and server chassis 2.

Three-phase WYE AC power cabling


The legend maps colored cables that are shown in the diagram to part numbers and
cable lengths.
Figure 43 Cable legend for three-phase WYE AC power diagram


Figure 44 Three-phase WYE AC power cabling for eight-node configuration

Note

For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5–8
and server chassis 2.

D-Series single-phase AC power cabling


Provides the single-phase power cabling diagram for the D-Series ECS Appliance.
The switches plug into the front of the rack and route through the rails to the rear.


Figure 45 D-Series single-phase AC power cabling for eight-node configurations


D-Series three-phase AC power cabling


Provides cabling diagrams for three-phase AC delta and wye power.
Three-phase Delta AC power cabling
The legend maps colored cables shown in the diagram to part numbers and cable
lengths.


Figure 46 Three-phase AC delta power cabling for eight-node configuration



Note

For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5–8
and server chassis 2.

Three-phase WYE AC power cabling


The legend maps colored cables shown in the diagram to part numbers and cable
lengths.


Figure 47 Three-phase WYE AC power cabling for eight-node configuration



Note

For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5–8
and server chassis 2.

C-Series single-phase AC power cabling


Provides the single-phase power cabling diagram for the C-Series ECS Appliance.
The switches plug into the front of the rack and route through the rails to the rear.
Figure 48 C-Series single-phase AC power cabling for eight-node configurations: Top


Figure 49 C-Series single-phase AC power cabling for eight-node configurations: Bottom

C-Series 3-phase AC power cabling


Provides the 3-phase power cabling diagrams for the C-Series ECS Appliance.
The switches plug into the front of the rack and route through the rails to the rear.


Figure 50 C-Series 3-phase AC power cabling for eight-node configurations: Top


Figure 51 C-Series 3-phase AC power cabling for eight-node configurations: Bottom





CHAPTER 7
SAS Cabling

• U-Series SAS cabling......................................................................................... 90
• D-Series SAS cabling......................................................................................... 93


U-Series SAS cabling


Provides wiring diagrams for the SAS cables that connect nodes to Voyager DAEs.
Gen2
Gen2 hardware uses two SAS cables for each node-to-DAE connection.
The top port on the DAE is port 0 and always connects to the SAS adapter's left port
on the node. The bottom port is port 1 and always connects to the SAS adapter's right
port on the node.
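
The pairing is fixed, which makes miscabling easy to check. A minimal sketch of the
rule above (illustrative only; the names are hypothetical):

```python
# DAE port -> expected node SAS adapter port, per the rule above.
EXPECTED_HBA_PORT = {0: "left", 1: "right"}

def cable_ok(dae_port: int, hba_port: str) -> bool:
    return EXPECTED_HBA_PORT.get(dae_port) == hba_port

print(cable_ok(0, "left"))   # True
print(cable_ok(1, "left"))   # False: DAE port 1 goes to the right HBA port
```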


Figure 52 U-Series (Gen2) SAS cabling for eight-node configurations


Figure 53 U-Series (Gen2) SAS cabling

Gen1

Note

Hardware diagrams number nodes starting with zero. In all other discussions of ECS
architecture and software, nodes are numbered starting with one.


Figure 54 U-Series (Gen1) SAS cabling for eight-node configurations

D-Series SAS cabling


Provides wiring diagrams for the SAS cables that connect nodes to Pikes Peak DAEs.
The D-Series uses one high-density SAS cable (two cables bound together), with one
connector on the HBA and one connector on the I/O module.
The top port on the DAE is port 0 and always connects to the SAS adapter's left port
on the node. The bottom port is port 1 and always connects to the SAS adapter's right
port on the node.


Figure 55 D-Series SAS cabling for eight-node configurations



CHAPTER 8
Network Cabling

• Connecting ECS appliances in a single site ........................................................96
• Network cabling................................................................................................. 97


Connecting ECS appliances in a single site


The ECS appliance management networks are connected together through the Nile
Area Network (NAN). The NAN is created by connecting either port 51 or 52 to
another turtle switch of another ECS appliance. Through these connections, nodes
from any segment can communicate with any other node in the NAN.
The simplest topology to connect the ECS appliances together does not require extra
switch hardware. All the turtle switches can be connected together in a linear or daisy
chain fashion.
Figure 56 Linear or daisy-chain topology

In this topology, a single loss of connectivity can cause a split-brain.


Figure 57 Linear or daisy-chain split-brain

For a more reliable network, the ends of the daisy chain topology can be connected
together to create a ring network. The ring topology is more stable because it would
require two cable link breaks in the topology for a split-brain to occur. The primary
drawback to the ring topology is that the RMM ports cannot be connected to the
customer network unless an external customer or aggregation switch is added to the ring.
Figure 58 Ring topology


The daisy-chain or ring topologies are not recommended for large installations. When
there are four or more ECS appliances, an aggregation switch is recommended. The
addition of an aggregation switch in a star topology can provide better fail over by
reducing split-brain issues.
Figure 59 Star topology
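
The difference between the chain and the ring can be illustrated with a simple
connectivity check. The following minimal sketch (illustrative only, not an ECS
tool) counts the isolated NAN segments that remain after a cable break; the chain
splits after one break, while the ring stays whole:

```python
def segments(racks, links):
    """Count connected groups of racks given the intact turtle-switch links."""
    parent = {r: r for r in racks}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]  # path halving
            r = parent[r]
        return r

    for a, b in links:
        parent[find(a)] = find(b)
    return len({find(r) for r in racks})

racks = ["rack1", "rack2", "rack3", "rack4"]
chain = [("rack1", "rack2"), ("rack2", "rack3"), ("rack3", "rack4")]
ring = chain + [("rack4", "rack1")]

broken = ("rack2", "rack3")  # one failed interconnect cable
print(segments(racks, [l for l in chain if l != broken]))  # 2 -> split-brain
print(segments(racks, [l for l in ring if l != broken]))   # 1 -> still one NAN
```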

Network cabling
The network cabling diagrams apply to U-Series, D-Series, and C-Series ECS Appliances
in a Dell EMC or customer-provided rack.
To distinguish between the three switches, each switch has a nickname:
• Hare: 10 GbE public switch at the top of the rack in a U- or D-Series, or the top
switch in a C-Series segment.
• Rabbit: 10 GbE public switch located just below the hare at the top of the rack in
a U- or D-Series, or below the hare switch in a C-Series segment.
• Turtle: 1 GbE private switch located below the rabbit at the top of the rack in a
U- or D-Series, or below the rabbit switch in a C-Series segment.
U- and D-Series network cabling
The following figure shows a simplified network cabling diagram for an eight-node
configuration for a U- or D-Series ECS Appliance as configured by Dell EMC or a
customer in a supplied rack. Following this figure, other detailed figures and tables
provide port, label, and cable color information.


Figure 60 Public switch cabling for U- and D-Series


Figure 61 U-Series and D-Series network cabling


Figure 62 Network cabling labels

Table 42 U- and D-Series 10 GB public switch network cabling for all Arista models

Chassis / node / Switch port / label Switch port / label Label color
10GB adapter port (rabbit, SW1) (hare, SW2)
1 / Node 1 P01 (Right) 10G SW1 P09 Orange

1 / Node 1 P02 (Left) 10G SW2 P09

1 / Node 2 P01 (Right) 10G SW1 P10 Blue

1 / Node 2 P02 (Left) 10G SW2 P10

1 / Node 3 P01 (Right) 10G SW1 P11 Black

1 / Node 3 P02 (Left) 10G SW2 P11

1 / Node 4 P01 (Right) 10G SW1 P12 Green
1 / Node 4 P02 (Left) 10G SW2 P12

2 / Node 5 P01 (Right) 10G SW1 P13 Brown

2 / Node 5 P02 (Left) 10G SW2 P13

2 / Node 6 P01 (Right) 10G SW1 P14 Light Blue

2 / Node 6 P02 (Left) 10G SW2 P14

2 / Node 7 P01 (Right) 10G SW1 P15 Purple

2 / Node 7 P02 (Left) 10G SW2 P15

2 / Node 8 P01 (Right) 10G SW1 P16 Magenta

2 / Node 8 P02 (Left) 10G SW2 P16

Note

1.5m (U-Series) or 3m (C-Series) Twinax network cables are provided for 10GB.
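
The table follows a regular pattern: node N (1-8) lands on port 8 + N of each
public switch, with P01 (right) on the rabbit and P02 (left) on the hare. A minimal
sketch of that mapping (illustrative only, not an ECS tool):

```python
def public_switch_port(node: int, nic: str) -> str:
    """Return the public switch and port for a node's 10 GbE interface."""
    if not 1 <= node <= 8:
        raise ValueError("U- and D-Series racks hold nodes 1-8")
    switch = {"P01": "rabbit (SW1)", "P02": "hare (SW2)"}[nic]
    return f"{switch} port {8 + node}"

print(public_switch_port(5, "P02"))  # hare (SW2) port 13
```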

Table 43 U- and D-Series 10 GB public switch MLAG cabling for all Arista models

Connection 10 GB SW1 (rabbit) port 10 GB SW2 (hare) port Port number labels

MLAG cables (71xx 10 GB switches) 23 23 10G SW1 P23 / 10G SW2 P23

24 24 10G SW1 P24 / 10G SW2 P24

MLAG cables (7050x 10 GB switches) 45 45 10G SW1 P45 / 10G SW2 P45

46 46 10G SW1 P46 / 10G SW2 P46

47 47 10G SW1 P47 / 10G SW2 P47

48 48 10G SW1 P48 / 10G SW2 P48

Note

1m Twinax network cables are provided to cable the 10 GB switch-to-switch MLAG.


Figure 63 Private switch cabling for U- and D-Series

Table 44 U- and D-Series 1 GB private switch network cabling

Chassis / Node RMM Port / Label (Grey Cable) Switch Port / Label (Grey Cable)
eth0 Port / Label (Blue Cable) Switch Port / Label (Blue Cable) Label Color
1 / Node 1 Node 01 RMM 1GB SW P25 Node01 P01 1GB SW P01 Orange

1 / Node 2 Node 02 RMM 1GB SW P26 Node02 P02 1GB SW P02 Blue

1 / Node 3 Node 03 RMM 1GB SW P27 Node03 P03 1GB SW P03 Black

1 / Node 4 Node 04 RMM 1GB SW P28 Node04 P04 1GB SW P04 Green

2 / Node 5 Node 05 RMM 1GB SW P29 Node05 P05 1GB SW P05 Brown

2 / Node 6 Node 06 RMM 1GB SW P30 Node06 P06 1GB SW P06 Light Blue

2 / Node 7 Node 07 RMM 1GB SW P31 Node07 P07 1GB SW P07 Purple

2 / Node 8 Node 08 RMM 1GB SW P32 Node08 P08 1GB SW P08 Magenta

Table 45 U- and D-Series 1 GB private switch management and interconnect cabling

1 GB switch port 10GB SW1 (rabbit) port 10GB SW2 (hare) port Labels Color

49 <...> - mgmt port — 10G SW1 MGMT / 1G SW P49 White

50 — <...> - mgmt port 10G SW2 MGMT / 1G SW P50 White

51 Rack/Segment Interconnect IN or first rack empty

52 Rack/Segment Interconnect OUT

Note

Ports 49 and 50 use 1 meter white cables. RJ-45 SFPs are installed in ports 49 to 52.
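
The private-switch tables also follow a regular pattern: node N (1-8) takes turtle
port N for eth0 and port 24 + N for its RMM. A minimal sketch (illustrative only,
not an ECS tool):

```python
def turtle_ports(node: int) -> dict:
    """Return the turtle switch ports for a node's eth0 and RMM links."""
    if not 1 <= node <= 8:
        raise ValueError("U- and D-Series racks hold nodes 1-8")
    return {"eth0": node, "RMM": 24 + node}

print(turtle_ports(3))  # {'eth0': 3, 'RMM': 27}
```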

C-Series network cabling


A full rack configuration in the C-Series is made up of two segments: lower and upper.
Each segment has a hare, rabbit, and turtle switch, and the two segments are
connected. A configuration of six or fewer servers is a single-segment appliance and has
one set of switches. Cabling information for the lower and upper segments for public
and private switches is provided below.


Figure 64 C-Series public switch cabling for the lower segment from the rear






Figure 65 C-Series public switch cabling for the upper segment from the rear




Figure 66 C-Series private switch cabling for the lower segment from the rear






Figure 67 C-Series private switch cabling for the upper segment from the rear





Customer network connections


Customers connect to an ECS Appliance by way of 10 GbE ports and their own
interconnect cables. When multiple appliances are installed in the same data center,
connect the private switches to a customer-provided switch in either a daisy-chain
or home-run fashion.

