ECS D- and U-Series
Hardware Guide
302-003-477
Rev 09
Copyright © 2014-2018 Dell Inc. or its subsidiaries. All rights reserved.
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America: 1-866-464-7381)
www.DellEMC.com
Contents
Figures  5
Tables  7
Chapter 2  Servers  25
    ECS Appliance servers  26
        Server front views  27
        Server rear view  28
    Rack and node host names  30
Chapter 3  Switches  33
    ECS Appliance switches  34
        Private switch: Cisco 3048 48-P  35
        Private switch: Arista 7010T-48  36
        Private switch: Arista 7048T-48  37
        Public switch: Arista 7050SX-64  38
        Public switch: Arista 7050S-52  39
        Public switch: Arista 7150S-24  40
        Public switch: Arista 7124SX  42
U-Series components
The U-Series ECS Appliance includes the following hardware components.

40U rack: Titan D racks that include:
• Single-phase PDUs with four power drops (two per side). The high availability (HA) configuration of four power drops is mandatory; any deviation requires that an RPQ be submitted and approved.
• Optional three-phase WYE or delta PDUs with two power drops (one per side)
• Front and rear doors
• Racking by Dell EMC manufacturing

Disk array enclosure (DAE): The U-Series DAE drawers hold up to 60 3.5-inch disk drives. Features include:
• Gen1 hardware uses 6 TB disks; Gen2 hardware uses 8 TB and 12 TB disks
• Two 4-lane 6 Gb/s SAS connectors
• SAS bandwidth of 3500 MB/s
• Drive service: hot swappable
D-Series components
The D-Series ECS Appliance includes the following hardware components.

40U rack: Titan D racks that include:
• Single-phase PDUs with six power drops (three per side). The high availability (HA) configuration of six power drops is mandatory; any deviation requires that an RPQ be submitted and approved.
• Optional three-phase WYE or delta PDUs with two power drops (one per side)
• Front and rear doors
• Racking by Dell EMC manufacturing

Disk array enclosure (DAE): The D-Series DAE drawers hold up to 98 3.5-inch disk drives. Features include:
• Models featuring 8 TB disks and models featuring 10 TB disks
• Two 4-lane 12 Gb/s SAS 3.0 connectors
• SAS bandwidth of 5600 MB/s
• Drive service: cold service
C-Series components
The C-Series ECS Appliance includes the following hardware components.

40U rack: Titan D Compute racks that include:
• Two single-phase PDUs in a 2U configuration with two power drops. The high availability (HA) configuration of two power drops is mandatory; any deviation requires that an RPQ be submitted and approved.

Private switch: One or two 1 GbE switches. The second switch is required for configurations with more than six servers.

Public switch: Two or four 10 GbE switches. The third and fourth switches are required for configurations with more than six servers.

Disks: The C-Series has 12 3.5-inch disk drives integrated with each server. Gen1 hardware uses 6 TB disks. Gen2 hardware uses 8 TB disks.
Storage capacity

Model number: U400 (minimum configuration)
Nodes: 4
DAEs: 4
Disks in each DAE: 10
Storage capacity (8 TB disks): 320 TB
Storage capacity (12 TB disks): 480 TB
Switches: One private and two public
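As a quick cross-check of the capacities above, raw capacity is simply the number of DAEs multiplied by the disks per DAE and the disk size. The following minimal Python sketch (a hypothetical helper, not a Dell EMC sizing tool) reproduces the U400 figures; it reports raw capacity only and ignores erasure-coding overhead.

    # Hypothetical helper for cross-checking the raw capacities in the table above.
    # It is not a Dell EMC sizing tool and ignores erasure-coding overhead.

    def raw_capacity_tb(daes: int, disks_per_dae: int, disk_size_tb: int) -> int:
        """Raw capacity in TB: DAEs x disks per DAE x disk size."""
        return daes * disks_per_dae * disk_size_tb

    # U400 minimum configuration: 4 nodes, 4 DAEs, 10 disks in each DAE
    print(raw_capacity_tb(4, 10, 8))   # 320 TB (8 TB disk column)
    print(raw_capacity_tb(4, 10, 12))  # 480 TB (12 TB disk column)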
Note

A five-node configuration is the smallest configuration that can tolerate a node failure and still maintain the EC protection scheme. A four-node configuration that suffers a node failure changes to a simple data mirroring protection scheme. A five-node configuration is the recommended minimum.
• The best practice is to have only one storage pool in a VDC, unless you have more than one storage use case at the site. In a site with a single storage pool, each DAE in each rack must have the same number of disks.
Upgrade rules for systems with either four nodes/DAEs with 8 TB disks or four nodes/DAEs with 12 TB disks (the upgrade will include mixed disk capacities in the rack):
• The minimum number of disks in a DAE is 10.
• Disk Upgrade Kits are available in 5- or 10-disk increments.
• Do not mix disk capacities within a DAE.
• Each four-node/DAE group must have the same disk capacity and the same number of disks, in increments of 5 (10, 15, 20, and so on, up to 60), with no empty slots between disks in a DAE. Example: nodes 1–4 have 30 6 TB disks in each DAE; nodes 5–7 have 20 12 TB disks in each DAE.
• All empty drive slots must be filled with a disk filler.
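The disk-count rules above lend themselves to a simple validation routine. The sketch below is illustrative only; the function name and structure are assumptions and not part of any ECS tool.

    # Illustrative check of a four-node/DAE group against the upgrade rules above.
    # Not an ECS utility; names and structure are assumptions.

    VALID_DISK_COUNTS = set(range(10, 61, 5))  # 10, 15, 20, ... 60 disks per DAE

    def validate_dae_group(disk_counts, disk_sizes_tb):
        """Return a list of rule violations for one four-node/DAE group."""
        errors = []
        if any(count not in VALID_DISK_COUNTS for count in disk_counts):
            errors.append("Each DAE must hold 10 to 60 disks in increments of 5.")
        if len(set(disk_counts)) > 1:
            errors.append("Every DAE in the group must hold the same number of disks.")
        if len(set(disk_sizes_tb)) > 1:
            errors.append("Disk capacities cannot be mixed within a group or a DAE.")
        return errors

    # A compliant group: four DAEs, each with 30 disks of 12 TB
    print(validate_dae_group([30, 30, 30, 30], [12, 12, 12, 12]))  # []
    # A non-compliant group: mixed disk counts
    print(validate_dae_group([30, 25, 30, 30], [12, 12, 12, 12]))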
10-Disk Upgrade Kit: Used to supplement other disk upgrade kits to make up a valid configuration.
Current nodes: Four (all four nodes have 12 TB disks)
• Kit for upgrade to 5 nodes: One server chassis with one node and three fillers. One DAE with the same number of disks as one of the current DAEs. Disks must be Gen2 12 TB.
• Kit for upgrade to 6 nodes: One server chassis with two nodes and two fillers. Two DAEs with the same number of disks as one of the current DAEs. Disks must be Gen2 12 TB.
• Kit for upgrade to 8 nodes: One server chassis with four nodes. Four DAEs with the same number of disks as one of the current DAEs. Disks must be Gen2 12 TB.

Current nodes: Four (all four nodes have 8 TB disks)
• Kit for upgrade to 5 nodes: One server chassis with one node and three fillers. One DAE with the same number of disks as one of the current DAEs. Disks must be Gen2 8 TB.
• Kit for upgrade to 6 nodes: One server chassis with two nodes and two fillers. Two DAEs with the same number of disks as one of the current DAEs. Disks must be Gen2 8 TB.
• Kit for upgrade to 8 nodes: One server chassis with four nodes. With 8 TB expansion disks: four DAEs being added with the same number of disks as one of the current DAEs. With 12 TB expansion disks: four DAEs being added with the same number of disks.

Current nodes: Five (all five nodes have either all 8 TB disks or all 12 TB disks)
• Kit for upgrade to 5 nodes: Not applicable.
• Kit for upgrade to 6 nodes: One node. One DAE with the same number of disks as one of the current DAEs. You cannot intermix 8 TB and 12 TB disks.
• Kit for upgrade to 8 nodes: Three nodes. Three DAEs with the same number of disks as one of the current DAEs. You cannot intermix 8 TB and 12 TB disks.
Note
When you are planning to increase the number of drives in the DAEs and add nodes to
the appliance, order the disks first. Then order the node upgrades. The new DAEs are
shipped with the correct number of disks preinstalled.
Model number: U300 (minimum configuration)
Disk upgrade (to the next higher model): Not applicable
Hardware upgrade (to the next higher model): Not applicable
Model number: D4500
Nodes: 8
DAEs: 8
Disks in each DAE: 70
Disk size: 8 TB
Storage capacity: 4.5 PB
Switches: One private and two public
224 10 TB Disk Upgrade Kit (upgrades D5600 to D7800): 224 disks in 16 sleds. Adds 2 sleds of 14 disks each (28 disks total) to each DAE.
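The D-Series figures above can be cross-checked with the same kind of arithmetic: eight DAEs per rack, 14 disks per sled, and either five or seven populated sleds per DAE. The sketch below is illustrative only (raw, decimal terabytes; not a sizing tool).

    # Illustrative D-Series arithmetic (raw capacity, decimal TB; not a sizing tool).

    DAES_PER_RACK = 8
    DISKS_PER_SLED = 14

    # D4500: five populated sleds per DAE, 8 TB disks
    d4500_disks = DAES_PER_RACK * 5 * DISKS_PER_SLED   # 560 disks
    print(d4500_disks * 8 / 1000)                      # ~4.48 PB raw, listed as 4.5 PB

    # 224 x 10 TB Disk Upgrade Kit: 2 extra sleds per DAE across the rack
    print(DAES_PER_RACK * 2 * DISKS_PER_SLED)          # 224 disks (16 sleds)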
Model number: 2 Phoenix-12 Compute Servers (minimum configuration)
Disk upgrade (to the next higher model): Not applicable
Hardware upgrade (to the next higher model): Not applicable

Model number: 11 Phoenix-12 Compute Servers
Disk upgrade (to the next higher model): 12 integrated disks
Hardware upgrade (to the next higher model): One server chassis (four nodes)
Note
All Arista switch models listed also ship standard with the ECS Appliance.
• Dell S3048-ON
• Cisco Nexus 3048

Two 10 GbE switches are required to handle data traffic:
• Arista 7050SX-64
• Arista 7050S-52
• Arista 7150S-24
• Arista 7124SX
Servers
The following figure shows the server chassis front identifying the integrated disks
assigned to each node.
Figure 5 Phoenix-12 (Gen1) and Rinjin-12 (Gen2) server chassis front view
LED indicators are on the left and right side of the server front panels.
1. Node 1
2. Node 2
3. Node 3
4. Node 4
Note
In the second server chassis in a five- or six-node configuration, the nodes (blades)
must be populated starting with the node 1 slot. Empty slots must have blank fillers.
Table 17 Rack ID 1 to 50 (excerpt)
Rack ID 17: silver. Rack ID 34: eggplant.
Nodes are assigned node names based on their order within the server chassis and
within the rack itself. The following table lists the default node names.
Nodes positioned in the same slot in different racks at a site will have the same node name. For example, node 4 will always be called ogden, assuming you use the default node names.
The getrackinfo command identifies nodes by a unique combination of node name and rack name. For example, node 4 in rack 4 and node 4 in rack 5 are identified as:
ogden-green
ogden-blue
and can be pinged using their NAN-resolvable (mDNS) names:
ogden-green.nan.local
ogden-blue.nan.local
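A small sketch can make the naming convention concrete. The dictionaries below are illustrative stand-ins (only the examples from the text are included, not the full default node-name or rack-name lists); the hostname format follows the ogden-green.nan.local example above.

    # Minimal sketch of how NAN-resolvable node names are composed. The lookup
    # tables hold only the examples from the text, not the full default lists.

    DEFAULT_NODE_NAMES = {4: "ogden"}        # node 4 uses the default name "ogden"
    RACK_NAMES = {4: "green", 5: "blue"}     # example rack color names

    def nan_hostname(node: int, rack: int) -> str:
        """Build the mDNS name <node-name>-<rack-name>.nan.local."""
        return f"{DEFAULT_NODE_NAMES[node]}-{RACK_NAMES[rack]}.nan.local"

    print(nan_hostname(4, 4))  # ogden-green.nan.local
    print(nan_hostname(4, 5))  # ogden-blue.nan.local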
Switches
6 | Serial console | The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch. This port is on the front of the switch.
Note
1. The NAN (Nile Area Network) links all ECS Appliances at a site.
2. Ports 49 through 52 use CISCO 1G BASE-T SFPs (part number 100-400-141). In
an ECS Appliance, these four SFPs are installed in the 1 GbE switch. In a
customer-supplied rack order, these SFPs need to be installed.
6 | Serial console | The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
1. The NAN (Nile Area Network) links all ECS Appliances at a site.
2. Ports 49 through 52 contain SFPs (RJ45 copper). In an ECS Appliance or a customer-supplied rack order, these four SFPs are installed in the 1 GbE switch.
6 | Serial console | The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
1. The NAN (Nile Area Network) links all ECS Appliances at a site.
2. Ports 49 through 52 contain SFPs (RJ45 copper). In an ECS Appliance or a customer-supplied rack order, these four SFPs are installed in the 1 GbE switch.
Table 23 7050SX-64 switch port connections used on the top 10 GbE switch (hare)
2 | Ports 9–32 | The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the left 10 GbE (P02) interface on each node. SR Optic.
3 | Ports 33–44 | Unused.
4 | Ports 45–48 | The 10 GbE LAG ports. These ports are connected to the LAG ports on the other 10 GbE switch (rabbit). SR Optic.
5 | Ports 49–52 | Unused.
7 | Serial console | The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Table 24 7050SX-64 switch port connections used on the bottom 10 GbE switch (rabbit)
2 | Ports 9–32 | The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the right 10 GbE (P01) interface on each node. SR Optic.
3 | Ports 33–44 | Unused.
4 | Ports 45–48 | The 10 GbE LAG ports. These ports are connected to the LAG ports on the other 10 GbE switch (hare). SR Optic.
5 | Ports 49–52 | Unused.
7 | Serial console | The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs can be ordered through Dell EMC. An ECS Appliance shipped in a Dell EMC rack has all SFPs installed; for a customer rack installation, the SFPs are not installed. In either case, the switch may require additional SFPs to be installed or reconfigured in ports 1–8 based on the customer uplink configuration.
Table 25 7050S-52 switch port connections used on the top 10 GbE switch (hare)
2 | Ports 9–32 | The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the left 10 GbE (P02) interface on each node. SR Optic.
3 | Ports 33–44 | Unused.
4 | Ports 45–48 | The 10 GbE LAG ports. These ports are connected to the LAG ports on the other 10 GbE switch (rabbit). SR Optic.
5 | Ports 49–52 | Unused.
Table 26 7050S-52 switch port connections used on the bottom 10 GbE switch (rabbit)
2 | Ports 9–32 | The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the right 10 GbE (P01) interface on each node. SR Optic.
3 | Ports 33–44 | Unused.
4 | Ports 45–48 | The 10 GbE LAG ports. These ports are connected to the LAG ports on the other 10 GbE switch (hare). SR Optic.
5 | Ports 49–52 | Unused.
7 | Serial console | The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs can be ordered through Dell EMC. An ECS Appliance shipped in a Dell EMC rack has all SFPs installed; for a customer rack installation, the SFPs are not installed. In either case, the switch may require additional SFPs to be installed or reconfigured in ports 1–8 based on the customer uplink configuration.
Table 27 7150S switch port connections used on the top 10 GbE switch (hare)
2, 3 | Ports 9–20 | The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the left (P02) 10 GbE interface on each node. SR Optic.
4, 5 | Ports 21–24 | The 10 GbE LAG ports. These ports are connected to the LAG ports on the bottom 10 GbE switch (rabbit). SR Optic.
7 | Serial console | The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Table 28 7150S switch port connections used on the bottom 10 GbE switch (rabbit)
2, 3 | Ports 9–20 | The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the right (P01) 10 GbE interface on each node. SR Optic.
4, 5 | Ports 21–24 | The 10 GbE LAG ports. These ports are connected to the LAG ports on the top 10 GbE switch (hare). SR Optic.
7 | Serial console | The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs can be ordered through Dell EMC. An ECS Appliance shipped in a Dell EMC rack has all SFPs installed; for a customer rack installation, the SFPs are not installed. In either case, the switch may require additional SFPs to be installed or reconfigured in ports 1–8 based on the customer uplink configuration.
Table 29 7124SX switch port connections used on the top 10 GbE switch (hare)
2, 3 | Ports 9–20 | The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the left (P02) 10 GbE interface on each node. SR Optic.
4, 5 | Ports 21–24 | The 10 GbE LAG ports. These ports are connected to the LAG ports on the bottom 10 GbE switch (rabbit). SR Optic.
7 | Serial console | The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Table 30 7124SX switch port connections used on the bottom 10 GbE switch (rabbit)
2 | Ports 9–20 | The 10 GbE node data ports; only ports 9–16 are used in the U- and D-Series. These ports are connected to the right (P01) 10 GbE interface on each node. SR Optic.
3, 4 | Ports 21–24 | The 10 GbE LAG ports. These ports are connected to the LAG ports on the top 10 GbE switch (hare). SR Optic.
6 | Serial console | The console port is used to manage the switch through a serial connection. The Ethernet management port is connected to the 1 GbE management switch.
Note
10 GbE switches ship with one RJ-45 copper SFP installed in port 1. Fibre SFPs can be ordered through Dell EMC. An ECS Appliance shipped in a Dell EMC rack has all SFPs installed; for a customer rack installation, the SFPs are not installed. In either case, the switch may require additional SFPs to be installed or reconfigured in ports 1–8 based on the customer uplink configuration.
Disk Drives
C-Series
In C-Series servers with integrated storage disks, the disks are accessible from the
front of the server chassis. The disks are assigned equally to the four nodes in the
chassis. All disks must be the same size and speed. Gen1 uses 6 TB disks and Gen2
uses 8 TB and 12 TB disks.
Note
In Gen1 only, the first integrated disk that is assigned to each node is called disk drive
zero (HDD0). These storage drives contain some system data.
All disks integrated into a server chassis or in a DAE must conform to these rules:
Note
Use the power and weight calculator to plan for the weight of the configuration.
• One I/O module containing two replaceable power supply units (PSUs). Serviced from the front.
• Three exhaust fans or cooling modules; n+1 redundant. Serviced from the rear.
• Two power supplies; n+1 redundant. Serviced from within the I/O module in front.
• Blank filler sleds for partially populated configurations.
• Two 4-lane 12 Gb/s SAS 3.0 interconnects.
• 19-inch, 4U, 1 m deep chassis.
Replacing a sled, a drive, or the I/O module requires taking the DAE offline (cold
service). All drives in the DAE are inaccessible during the cold service. However, the
identify LEDs will continue to operate for 15 minutes after power is disconnected.
Figure 17 Pikes Peak chassis
Figure 18 Pikes Peak chassis with I/O module and power supplies removed, sleds extended
Each sled must be fully populated with 14 8 TB drives of the same speed. The D6200
uses seven sleds and the D4500 uses five sleds. In the D4500 configuration, sled
positions C and E are populated by blank filler sleds. Sleds are serviced by pulling the
sled forward and removing the cover.
Drives are designated by the sled letter plus the slot number. The following figure
shows the drive designators for sled A.
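As a rough illustration of the designator scheme, the sketch below enumerates sled/slot combinations. The sled letters A through G and the 0-based slot numbering are assumptions for the example; in the D4500, sled positions C and E hold blank filler sleds.

    # Illustrative enumeration of Pikes Peak drive designators (sled letter + slot).
    # Sled letters A-G and 0-based slot numbers are assumptions used for this example.

    def drive_designators(model: str):
        sleds = ["A", "B", "C", "D", "E", "F", "G"]          # seven sled positions
        fillers = {"C", "E"} if model == "D4500" else set()  # blank sleds in the D4500
        return [f"{sled}{slot}"
                for sled in sleds if sled not in fillers
                for slot in range(14)]                       # 14 drives per sled

    print(len(drive_designators("D6200")))   # 98 drives per DAE (seven sleds)
    print(len(drive_designators("D4500")))   # 70 drives per DAE (five sleds)
    print(drive_designators("D6200")[:3])    # ['A0', 'A1', 'A2']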
Each sled and drive slot has an LED to indicate failure or to indicate that the LED was
enabled by an identify command.
Each drive is enclosed in a tool-less carrier before it is inserted into the sled.
The front of the I/O module has a set of status LEDs for each SAS link.
Figure 25 SAS link LEDs
Note
The Link OK and SAS A and B Fail LEDs do not fast-flash green and amber when the DAE is powered on and the node SAS HBA is not online (NO LINK).
While the I/O module hardware used in the D-Series is identical between 8TB and 10
TB models, the software configuration of the I/O module is different depending on the
disks used in the model. Consequently, the I/O module field-replaceable unit (FRU)
number is different depending on disk size:
• I/O module FRU for 8 TB models (D4500 and D6200): 05-000-427-01
• I/O module FRU for 10 TB models (D5600 and D7800): 105-001-028-00
Power supplies
Two power supplies (n + 1 redundant) sit on top of the I/O module in front. A single
power supply can be swapped without removing the I/O module assembly or powering
off the DAE.
Figure 26 Power supply separated from I/O module
Fan modules
The Pikes Peak DAE has three hot-swappable managed system fans at the rear in a
redundant 2-plus-1 configuration. Logic in the DAE will gracefully shut down the DAE
if the heat becomes too high after a fan failure. A failed fan must be left in place until
the fan replacement service call. Each fan has an amber fault LED. The fans are
labeled A, B, and C from right to left.
Voyager DAE
The Voyager DAE is used in U-Series ECS Appliances.
Figure 30 U-Series disk layout for 15-disk configurations (Gen1, Gen2 full-rack only)
Figure 32 U-Series disk layout for 45-disk configurations (Gen1, Gen2 full-rack)
Note
Remove the power from the DAE before replacing the LCC.
Interconnect Module
The Interconnect Module (ICM) is the primary interconnect management element.
It is a plug-in module that includes a USB connector, RJ-12 management adapter, Bus
ID indicator, enclosure ID indicator, two input SAS connectors and two output SAS
connectors with corresponding LEDs. These LEDs indicate the link and activity of each
SAS connector for input and output to devices.
Power supply
The power supply is hot-swappable. It has a built-in thumbscrew for ease of
installation and removal. Each power supply includes a fan to provide cooling to the
power supply. The power supply is an auto-ranging, power-factor-corrected, multi-
output, offline converter with its own line cord. Each supply supports a fully
configured DAE and shares load currents with the other supply. The power supplies
provide four independent power zones. Each of the hot-swappable power supplies can
deliver 1300 W at 12 V in its load-sharing, highly available configuration. Control and status are implemented through the I2C interface.
• A cabinet that is at least 24 inches wide is recommended to provide room for cable routing on the sides of the cabinet.
• Sufficient contiguous space anywhere in the rack to install the components in the required relative order.
• If a front door is used, it must maintain a minimum of 1.2 inches of clearance to the bezels. It must be perforated with 50% or more evenly distributed air openings. It should enable easy access for service personnel and allow the LEDs to be visible through it.
• If a rear door is used, it must be perforated with 50% or more evenly distributed air openings.
• Blanking panels should be used as required to prevent air recirculation inside the cabinet.
• Special screws (036-709-013 or 113) are provided for use with square-hole rails. Square-hole rails require M5 nut clips, which the customer provides for a third-party rack.
Power: The AC power requirements are 200–240 V AC ±10%, 50–60 Hz. Vertical PDUs and AC plugs must not interfere with the DAEs and cable management arms, which require a depth of 42.5 inches.

Cabling: Cables for the product must be routed in a way that mimics the standard ECS Appliance offering coming from the factory. This includes dressing cables to the sides to prevent drooping and interference with the service of field-replaceable units (FRUs). Cables for third-party components in the rack cannot cross or interfere with ECS logic components in such a way that they block front-to-back airflow or individual FRU service activity.

Disk Array Enclosures (DAEs): All DAEs should be installed in sequential order from bottom to top to prevent a tipping risk.
WARNING
Opening more than one DAE at a time creates a tip hazard. ECS racks provide an
integrated solution to prevent more than one DAE from being open at a time.
Customer racks will not be able to support this feature.
Weight: The customer rack must be capable of supporting the weight of the ECS equipment.
Note
Use the power and weight calculator to refine the power and heat values to more closely match the hardware configuration for the system. The calculator contains the latest information for power and weight planning.
ECS support personnel can refer to the Elastic Cloud Storage Third-Party Rack
Installation Guide for more details on installing in customer racks.
Power Cabling
Note
For a four-node configuration, counting from the bottom of the rack, ignore DAEs 5
through 8 and server chassis 2.
SAS Cabling
Gen1
Note
Hardware diagrams number nodes starting with zero. In all other discussions of ECS
architecture and software, nodes are numbered starting with one.
Network Cabling
For a more reliable network, the ends of the daisy-chain topology can be connected together to create a ring network. The ring topology is more stable because two cable link breaks would be required for a split-brain to occur. The primary drawback of the ring topology is that the RMM ports cannot be connected to the customer network unless an external customer or aggregation switch is added to the ring.
Figure 58 Ring topology
The daisy-chain and ring topologies are not recommended for large installations. When there are four or more ECS Appliances, an aggregation switch is recommended. The addition of an aggregation switch in a star topology can provide better failover by reducing split-brain issues.
Figure 59 Star topology
Network cabling
The network cabling diagrams apply to a U-Series, D-Series, or C-Series ECS Appliance in a Dell EMC or customer-provided rack.
To distinguish between the three switches, each switch has a nickname:
• Hare: the 10 GbE public switch at the top of the rack in a U- or D-Series, or the top switch in a C-Series segment.
• Rabbit: the 10 GbE public switch located just below the hare at the top of the rack in a U- or D-Series, or below the hare switch in a C-Series segment.
• Turtle: the 1 GbE private switch located below the rabbit at the top of the rack in a U-Series, or below the hare switch in a C-Series segment.
U- and D-Series network cabling
The following figure shows a simplified network cabling diagram for an eight-node
configuration for a U- or D-Series ECS Appliance as configured by Dell EMC or a
customer in a supplied rack. Following this figure, other detailed figures and tables
provide port, label, and cable color information.
Table 42 U- and D-Series 10 GB public switch network cabling for all Arista models (excerpt)
Chassis 1 / Node 1 / adapter port P01 (Right): rabbit (SW1), label 10G SW1 P09, label color Orange
Chassis 1 / Node 4 / adapter port P02 (Left): hare (SW2), label 10G SW2 P12
Note
1.5m (U-Series) or 3m (C-Series) Twinax network cables are provided for 10GB.
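The two rows shown above suggest a simple pattern: node N cables its right (P01) port to rabbit and its left (P02) port to hare, landing on switch port 8 + N in both cases. The sketch below captures that inferred pattern; treat it as illustrative rather than authoritative cabling data.

    # Node-to-public-switch port pattern inferred from Table 42 and the switch
    # port tables (node data on ports 9-16). Illustrative only.

    def public_switch_ports(node: int) -> dict:
        """Return the rabbit/hare ports used by node 1-8 in a U- or D-Series rack."""
        port = 8 + node                   # node 1 -> port 9, ..., node 8 -> port 16
        return {
            "P01 (right)": f"rabbit (SW1) port {port}",
            "P02 (left)": f"hare (SW2) port {port}",
        }

    print(public_switch_ports(1))  # P01 -> rabbit port 9  (label 10G SW1 P09)
    print(public_switch_ports(4))  # P02 -> hare port 12   (label 10G SW2 P12)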
Table 43 U- and D-Series 10 GB public switch MLAG cabling for all Arista models

1 GB private switch cabling (excerpt):
Chassis 2 / Node 6: Node 06 1GB SW P30; RMM: Node06 P06 1GB SW P06; label color Light Blue
Note
Ports 49 and 50 use 1-meter white cables. RJ45 SFPs are installed in ports 49 to 52.
Figure 64 C-Series public switch cabling for the lower segment from the rear
Figure 65 C-Series public switch cabling for the upper segment from the rear
Figure 66 C-Series private switch cabling for the lower segment from the rear
Figure 67 C-Series private switch cabling for the upper segment from the rear