Cisco HyperFlex
HX220c M4 Node
OVERVIEW
Cisco HyperFlex Systems unlock the full potential of hyperconvergence. The systems are based on an
end-to-end software-defined infrastructure, combining software-defined computing in the form of Cisco
Unified Computing System (Cisco UCS) servers; software-defined storage with the powerful Cisco HX Data
Platform; and software-defined networking with the Cisco UCS fabric that will integrate smoothly with Cisco
Application Centric Infrastructure (Cisco ACI). Together with a single point of connectivity and hardware
management, these technologies deliver a preintegrated and adaptable cluster that is ready to provide a
unified pool of resources to power applications as your business needs dictate.
DETAILED VIEWS
Chassis Front View
Figure 2 shows the front view of the Cisco HyperFlex HX220c M4 Node (with front bezel removed).
[Figure 2: chassis front view, showing drive bays 01-08, two USB 3.0 ports, and other front-panel
connectors. A companion rear view shows PCIe slots 01 and 02, the mLOM slot, and power supplies
PSU 01 and PSU 02 (up to two, redundant as 1+1).]
Notes . . .
1. For details of the serial port pinout, see Serial Port Details, page 42.
Capability/Feature | Description

Embedded NIC | Two embedded (on the motherboard) Intel i350 GbE ports, supporting iSCSI boot and NIC teaming
PCIe slots | One full-height profile, 3/4-length slot with x24 connector and x16 lane; one half-height profile, half-length slot with x24 connector and x16 lane. An internal slot is reserved for use by the Cisco 12 Gbps Modular SAS HBA.
Internal storage devices | Drives are installed into front-panel drive bays that provide hot-pluggable access for Small Form Factor (SFF) drives.
Cisco Flexible Flash drives | The system supports two internal 64 GB Cisco Flexible Flash drives (SD cards). The SD cards are mirrored to each other and are used for booting.
Video | The Cisco Integrated Management Controller (CIMC) provides video using the Matrox G200e video/graphics controller. Front panel: one KVM console connector (supplies two USB 2.0 connectors, one VGA DB15 connector, and one serial port (RS232) RJ45 connector).
Storage controller | Cisco 12 Gbps Modular SAS HBA with internal SAS connectivity
WoL | The 1-Gb Base-T Ethernet LAN ports support the wake-on-LAN (WoL) standard.
Front panel | A front panel controller provides status indications and control buttons.
ACPI | This system supports the advanced configuration and power interface (ACPI) 4.0 standard.
Fans | Chassis: six hot-swappable fans for front-to-rear cooling
HX220C-M4S¹ | HX220c M4 Node, with two CPUs, memory, six HDDs, two SSDs, two power supplies, two SD cards, one VIC 1227 mLOM card, no PCIe cards, and no rail kit
HX-M4S-HXDP | This major line bundle (MLB) consists of the Server Nodes (HX220C-M4S and HX240C-M4SX) with HXDP software spare PIDs.
HX2X0C-M4S | This major line bundle (MLB) consists of the Server Nodes (HX220C-M4S and HX240C-M4SX), Fabric Interconnects (HX-FI-6248UP and HX-FI-6296UP), and HXDP software spare PIDs.
Notes . . .
1. This product may not be purchased outside of the approved bundles (must be ordered under the MLB).
Includes two power supplies, two CPUs, memory, hard disk drives (HDDs), solid-state drives
(SSDs), VIC 1227 mLOM card, and SD cards
NOTE: Use the steps on the following pages to see or change the
configuration of the system.
Cache size of up to 55 MB
Select CPUs
Product ID (PID) | Intel Number | Clock Freq (GHz) | Power (W) | Cache Size (MB) | Cores | QPI | Highest DDR4 DIMM Clock Support (MHz)¹

E5-2600 v4 Series Processor Family CPUs
HX-CPU-E52699E E5-2699 v4 2.20 145 55 22 9.6 GT/s 2400
HX-CPU-E52699AE E5-2699A v4 2.40 145 55 22 9.6 GT/s 2400
HX-CPU-E52698E E5-2698 v4 2.20 135 50 20 9.6 GT/s 2400
HX-CPU-E52697AE E5-2697A v4 2.60 145 40 16 9.6 GT/s 2400
HX-CPU-E52697E E5-2697 v4 2.30 145 45 18 9.6 GT/s 2400
HX-CPU-E52695E E5-2695 v4 2.10 120 45 18 9.6 GT/s 2400
HX-CPU-E52690E E5-2690 v4 2.60 135 35 14 9.6 GT/s 2400
HX-CPU-E52683E E5-2683 v4 2.10 120 40 16 9.6 GT/s 2400
HX-CPU-E52680E E5-2680 v4 2.40 120 35 14 9.6 GT/s 2400
HX-CPU-E52667E E5-2667 v4 3.20 135 25 8 9.6 GT/s 2400
HX-CPU-E52660E E5-2660 v4 2.00 105 35 14 9.6 GT/s 2400
HX-CPU-E52650E E5-2650 v4 2.20 105 30 12 9.6 GT/s 2400
HX-CPU-E52650LE E5-2650L v4 1.70 65 35 14 9.6 GT/s 2400
HX-CPU-E52640E E5-2640 v4 2.40 90 25 10 8.0 GT/s 2133
HX-CPU-E52630E E5-2630 v4 2.20 85 25 10 8.0 GT/s 2133
HX-CPU-E52630LE E5-2630L v4 1.80 55 25 8 8.0 GT/s 2133
HX-CPU-E52620E E5-2620 v4 2.10 85 20 8 8.0 GT/s 2133
HX-CPU-E52609E E5-2609 v4 1.70 85 20 8 6.4 GT/s 1866
Notes . . .
1. If higher or lower speed DIMMs are selected than what is shown in the table for a given CPU, the DIMMs will be
clocked at the lowest common denominator of CPU clock and DIMM clock.
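As a rough illustration of this rule, the following sketch (illustrative only, not part of the spec sheet) computes the effective DIMM clock:

```python
# A minimal sketch (not from the spec sheet): DIMMs are clocked at the lower
# of the CPU's highest supported DIMM clock and the DIMM's native clock.
def effective_dimm_clock_mhz(cpu_max_dimm_mhz: int, dimm_native_mhz: int) -> int:
    """Return the speed (MHz) at which the installed DIMMs will actually run."""
    return min(cpu_max_dimm_mhz, dimm_native_mhz)

# Example: 2400-MHz DIMMs paired with an E5-2640 v4 (2133-MHz support) run at 2133 MHz.
assert effective_dimm_clock_mhz(2133, 2400) == 2133
```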
Approved Configurations
For the HX-CPU-E52630LE and higher-numbered CPUs, you can select 1 or 2 identical CPUs from
Table 3 on page 12.
NOTE: The 1-CPU configuration is supported only for the HX Edge configuration.
For the HX-CPU-E52609E or HX-CPU-E52620E CPUs, you must select two identical CPUs from
Table 3 on page 12.
Caveats
You can select one or two identical processors (depending on the CPU selected).
NOTE: The 1-CPU configuration is supported only for the HX Edge configuration.
For optimal performance, select DIMMs with the highest clock speed for a given processor (see
Table 3 on page 12). If you select DIMMs whose speeds are lower or higher than that shown in
the tables, suboptimal performance will result.
DIMMs
Memory is organized with four memory channels per CPU, with up to three DIMMs per channel,
as shown in Figure 4.
[Figure 4: DIMM slot layout. Each channel has three slots (slot 1, slot 2, slot 3). CPU 1 drives
channels A-D (slots A1-A3 through D1-D3); CPU 2 drives channels E-H (slots E1-E3 through H1-H3).]

24 DIMMs
1.5 TB maximum memory (with 64 GB DIMMs)
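(That is, 24 DIMM slots x 64 GB per LRDIMM = 1,536 GB, or 1.5 TB.)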
Select DIMMs
Product ID (PID) | PID Description | Voltage | Ranks/DIMM

DIMM Options

2400-MHz DIMM Options
UCS-ML-1X644RV-A 64 GB DDR4-2400-MHz LRDIMM/PC4-19200/quad rank/x4 1.2 V 4
UCS-MR-1X322RV-A 32 GB DDR4-2400-MHz RDIMM/PC4-19200/dual rank/x4 1.2 V 2
UCS-MR-1X161RV-A 16 GB DDR4-2400-MHz RDIMM/PC4-19200/single rank/x4 1.2 V 1
2133-MHz DIMM Options
UCS-MR-1X648RU-A 64 GB DDR4-2133-MHz TSV-RDIMM/PC4-17000/octal rank/x4 1.2 V 8
Approved Configurations

For UCS-MR-1X322RV-A RDIMMs:
Can only be multi-selected with UCS-MR-1X161RV-A. Cannot select any other memory
option or mix with any other PID.

For UCS-MR-1X161RV-A RDIMMs:
For 1-CPU systems: min = 8 and max = 12.
For 2-CPU systems: min = 8 and max = 24 (even quantities only - for example,
8, 10, 12, ..., 24).
For 2-CPU systems where UCS-MR-1X322RV-A is also selected: max total quantity = 24;
quantity allowed per PID = 8 or 12 each.
Can only be multi-selected with UCS-MR-1X322RV-A. Cannot select any other memory
option.
NOTE: System performance is optimized when the DIMM type and quantity are equal
for both CPUs, and when all channels are filled equally across the CPUs.
Caveats
System speed depends on how many DIMMs are populated per channel and on the DIMM
speed supported by the CPU. See Table 5 and Table 6 for details.
Notes . . .
1. 2133-MHz DIMMs are the only offered and supported DIMMs for the HX220c M4 Node.
DIMM and CPU Frequencies | 1DPC | 2DPC
DIMM = 2400 MHz, CPU = 2400 MHz | 2400 MHz | 2400 MHz
DIMM = 2400 MHz, CPU = 2133 MHz | 2133 MHz | 2133 MHz
DIMM = 2400 MHz, CPU = 1866 MHz | 1866 MHz | 1866 MHz
The HX220c M4 Node supports memory reliability, availability, and serviceability
(RAS) modes, subject to the following rules:
Mixing of Independent and Lockstep channel modes is not allowed on this platform.
Pairs of DIMMs (A1/B1, A2/B2, and so on) must be identical (same PID, revision, and
DIMM loading order).
Cisco memory from previous generation systems (DDR3) is not compatible with this system.
For more information regarding memory, see CPUs and DIMMs, page 39.
Cisco 12 Gbps Modular SAS HBA, which plugs into a dedicated RAID controller slot.
Approved Configurations
The Cisco 12 Gbps Modular SAS HBA supports up to 8 internal drives, operating in non-RAID mode.
STEP 5 SELECT HARD DISK DRIVES (HDDs) or SOLID STATE DRIVES (SSDs)
The standard disk drive features are:
Hot-pluggable
NOTE:
All SED HDDs are FIPS 140-2 compliant
SED SSDs (10X endurance) are FIPS 140-2 compliant
SED SSDs (3X and 1X endurance) are not FIPS 140-2 compliant
Select Drives
Product ID (PID) | PID Description | Drive Type | Capacity
HDD Data Drives
HX-HD12TB10K12G 1.2 TB 12G SAS 10K RPM SFF HDD SAS 1.2 TB
HX-HD18TB10KS4K 1.8 TB 12G SAS 10K RPM SFF HDD SAS 1.8 TB
SSD Caching Drives
HX-SD480G12S3-EP 480 GB 2.5 inch Enterprise Performance 6G SATA SSD (3X endurance) SATA 480 GB
HX-SD800GSAS3-EP 800 GB 2.5 inch Enterprise Performance 12G SAS SSD (3X DWPD) SAS 800 GB
SATA SSD Boot Drives
HX-SD120GBKS4-EV 120 GB 2.5 inch Enterprise Value 6G SATA SSD SATA 120 GB
HX-SD240GBKS4-EV 240 GB 2.5 inch Enterprise Value 6G SATA SSD SATA 240 GB
SED Persistent Drives
HX-HD12G10K9 1.2 TB 12G SAS 10K RPM SFF HDD (SED) SAS 1.2 TB
SED Cache/WL Drives
HX-SD800GBEK9 800 GB Enterprise Performance SAS SSD (10X DWPD, SED) SAS 800 GB
Approved Configurations
Three to six 1.2 TB 12G SAS 10K RPM SFF HDD data drives (HX-HD12TB10K12G) OR
three to six 1.8 TB 12G SAS 10K RPM SFF HDD data drives (HX-HD18TB10KS4K).
NOTE: Fewer than six HDDs is supported only for the HX Edge configuration.
One 480 GB 2.5 inch Enterprise Performance 6G SATA SSD caching drive
(HX-SD480G12S3-EP) or one 800 GB 2.5 inch Enterprise Performance 12G SAS SSD
caching drive (HX-SD800GSAS3-EP).
One 120 GB 2.5 inch Enterprise Value 6G SATA SSD boot drive (HX-SD120GBKS4-EV)
or one 240 GB 2.5 inch Enterprise Value SSD boot drive (HX-SD240GBKS4-EV).
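A minimal sketch (illustrative only, not part of the spec sheet) that checks a drive selection against these approved-configuration rules:

```python
# A minimal sketch (not from the spec sheet) that validates a drive selection
# against the approved configurations above.
def validate_drive_selection(data_hdds: int, caching_ssds: int, boot_ssds: int,
                             hx_edge: bool = False) -> list[str]:
    """Return a list of rule violations (empty list = approved)."""
    problems = []
    min_hdds = 3 if hx_edge else 6  # fewer than six HDDs only for HX Edge
    if not min_hdds <= data_hdds <= 6:
        problems.append(f"choose {min_hdds}-6 HDD data drives, got {data_hdds}")
    if caching_ssds != 1:
        problems.append("choose exactly one SSD caching drive")
    if boot_ssds != 1:
        problems.append("choose exactly one SSD boot drive")
    return problems

# Example: a standard (non-Edge) node with six data drives passes.
assert validate_drive_selection(6, 1, 1) == []
# Example: three data drives are valid only in an HX Edge configuration.
assert validate_drive_selection(3, 1, 1, hx_edge=True) == []
```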
Caveats
You must choose three to six HDD data drives, one caching drive, and one boot drive.
If you select SED drives (HX-HD12G10K9), you must adhere to the following:
Product ID (PID) | PID Description | Card Height
Notes . . .
1. The mLOM card does not plug into any of the riser 1 or riser 2 card slots; instead, it plugs into a connector
inside the chassis.
2. The NIC is supported for HyperFlex Edge configurations.
Caveats
VIC 1227 supports 10G SFP+ optical and copper twinax connections
The VIC 1227 is supported with the following software releases: 2.0.8h and above
(CIMC) and 2.2.6f (UCSM).
http://ucspowercalc.cisco.com
Approved Configurations

Product ID (PID) | PID Description
CAB-AC-L620-C13 | AC Power Cord, NEMA L6-20 - C13, 2 M/6.5 ft (connector: IEC60320/C13)
CAB-C13-C14-AC | CORD, PWR, JMP, IEC60320/C14, IEC60320/C13, 3.0 M
CAB-9K10A-AU | Power Cord, 250 VAC 10 A 3112 Plug, Australia (cordset rating: 10 A, 250 V/500 V MAX; length: 2500 mm)
SFS-250V-10A-ID | Power Cord, SFS, 250 V, 10 A, India (cordset rating: 16 A, 250 V; length: 2500 mm; plug: EL 208; connector: EL 701)
SFS-250V-10A-IS | Power Cord, SFS, 250 V, 10 A, Israel (plug: EL 212 (SI-32), 16 A, 250 V; connector: EL 701B (IEC60320/C13))
CAB-9K10A-UK | Power Cord, 250 VAC 10 A BS1363 Plug (13 A fuse), UK (cordset rating: 10 A, 250 V/500 V MAX; length: 2500 mm; plug: EL 210 (BS 1363A, 13 AMP fuse); connector: EL 701C (EN 60320/C15))

[Plug and connector illustrations omitted.]
The reversible cable management arm mounts on either the right or left slide rails at the rear of
the HX220c Node and is used for cable management. Use Table 13 to order a cable management
arm.
UCSC-CMAF-M4 Reversible CMA for tool-less friction and ball bearing rail kit
For more information about the tool-less rail kit and cable management arm, see the Cisco UCS
C220 M4 Installation and Service Guide at this URL:
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/c/hw/C220M4/install/C220M4.html
NOTE: If you plan to rackmount your UCS HX220c M4 Node, you must order one of
the tool-less rail kits.
NOTE: The TPM module used in this system conforms to TPM v1.2 and 2.0, as
defined by the Trusted Computing Group (TCG). It is also SPI-based.
UCS-SD-64G-S 64 GB SD Card
UCS-SD-32G-S 32 GB SD Card
Caveats
Notes . . .
1. Although VMware 6.0 is installed at the factory, both VMware 5.5 and VMware 6.0 are supported.
If you have noncritical implementations and choose to have no service contract, the following
coverage is supplied:
Next business day (NBD) onsite parts replacement eight hours a day, five days a week.
UCSM updates for systems with Unified Computing System Manager. These updates
include minor enhancements and bug fixes that are designed to maintain the
compliance of UCSM with published specifications, release notes, and industry
standards.
For support of the entire Unified Computing System, Cisco offers the Cisco Smart Net Total Care
for UCS Service. This service provides expert software and hardware support to help sustain
performance and high availability of the unified computing environment. Access to Cisco
Technical Assistance Center (TAC) is provided around the clock, from anywhere in the world.
For systems that include Unified Computing System Manager, the support service includes
downloads of UCSM upgrades. The Cisco Smart Net Total Care for UCS Service includes flexible
hardware replacement options, including replacement in as little as two hours. There is also
access to Cisco's extensive online technical resources to help maintain optimal efficiency and
uptime of the unified computing environment. You can choose a desired service listed in
Table 17.
Cisco Partner Support Service (PSS) is a Cisco Collaborative Services service offering that is
designed for partners to deliver their own branded support and managed services to enterprise
customers. Cisco PSS provides partners with access to Cisco's support infrastructure and assets
to help them:
Expand their service portfolios to support the most complex network environments
Partner Unified Computing Support Options enable eligible Cisco partners to develop and
consistently deliver high-value technical support that capitalizes on Cisco intellectual assets.
This helps partners to realize higher margins and expand their practice.
The Partner Unified Computing Support Option for the Cisco HX220C Node is:
Partner Support Service for UCS provides hardware and software support, including triage
support for third party software, backed by Cisco technical resources and level three support.
See Table 18.
Product ID (PID) | Service Level GSP | On Site? | Description
Combined Services makes it easier to purchase and manage required services under one
contract. Smart Net Total Care services for UCS help increase the availability of your vital data
center infrastructure and realize the most value from your unified computing investment. The
more benefits you realize from the Cisco Unified Computing System (Cisco UCS), the more
important the technology becomes to your business. These services allow you to:
Protect your vital business applications by rapidly identifying and addressing issues
Improve operational efficiency by allowing UCS experts to augment your internal staff
resources
Enhance business agility by diagnosing potential issues before they affect your
operations
Product ID (PID) | Service Level GSP | On Site? | Description
With the Smart Net Total Care for UCS with Drive Retention (UCSDR) Service, you can obtain a
new disk drive in exchange for a faulty drive without returning the faulty drive. In exchange for
a Cisco replacement drive, you provide a signed Certificate of Destruction (CoD) confirming that
the drive has been removed from the system listed, is no longer in service, and has been
destroyed.
Sophisticated data recovery techniques have made classified, proprietary, and confidential
information vulnerable, even on malfunctioning disk drives. The UCSDR service enables you to
retain your drives and ensures that the sensitive data on those drives is not compromised, which
reduces the risk of any potential liabilities. This service also enables you to comply with
regulatory, local, and federal requirements.
If your company has a need to control confidential, classified, sensitive, or proprietary data, you
might want to consider one of the Drive Retention Services listed in Table 20 on page 33.
NOTE: Cisco does not offer a certified drive destruction service as part of this
service.
Service Program Name | Service Description | Service Level GSP | Service Level | Product ID (PID)
For a complete listing of available services for Cisco Unified Computing System, see this URL:
http://www.cisco.com/en/US/products/ps10312/serv_group_home.html
Notes . . .
1. Use these same base PIDs to order spare racks (available only as next-day replacements).
For more information about the R42610 rack, see RACKS, page 45.
For more information about the PDU, see PDUs, page 47.
SUPPLEMENTAL MATERIAL
Hyperconverged Systems
Cisco HyperFlex Systems let you unlock the full potential of hyperconvergence and adapt IT to the needs of
your workloads. The systems use an end-to-end software-defined infrastructure approach, combining
software-defined computing in the form of Cisco HyperFlex HX-Series nodes; software-defined storage with
the powerful Cisco HX Data Platform; and software-defined networking with the Cisco UCS fabric that will
integrate smoothly with Cisco Application Centric Infrastructure (Cisco ACI). Together with a single point of
connectivity and management, these technologies deliver a preintegrated and adaptable cluster with a
unified pool of resources that you can quickly deploy, adapt, scale, and manage to efficiently power your
applications and your business.
[Figure: HyperFlex system topology. A minimum of three Cisco HX220c M4 Nodes connect over
converged 10 GbE links to a pair of Cisco UCS 6248UP Fabric Interconnects (vPC), with optional
uplinks to Cisco Nexus 9000 Series Switches (vPC peer link). Shared services include vCenter,
DHCP, NTP, DNS, and Active Directory.]
Chassis
An internal view of the HX220c M4 Node with the top cover removed is shown in Figure 6. The
location of the two SD cards is marked with callout #7.
[Figure 6: internal view, top cover removed; numbered callouts are listed below.]

1 Drives (SAS/SATA drives are hot-swappable)
2 Cooling fan modules (six)
3 SuperCap backup unit mounting location (not used in this system)
4 DIMM sockets on motherboard (either 16 or 24 DIMMs populated)
5 CPUs and heatsinks (two)
6 Embedded SATA RAID header for RAID 5 key (not used)
7 Cisco SD card bays on motherboard (two)
8 Internal USB 3.0 port on motherboard (not used)
9 Power supplies (two, hot-swappable when redundant as 1+1)
10 Trusted platform module (TPM) socket on motherboard (not visible in this view)
11 PCIe riser 2 (half-height PCIe slot 2)
12 PCIe riser 1 (full-height PCIe slot 1)
13 Modular LOM (mLOM) connector on chassis floor
14 Cisco 12 Gbps Modular SAS HBA disk controller PCIe riser (dedicated riser with horizontal socket)
15 Cisco 12 Gbps Modular SAS HBA controller card
16 Embedded SATA RAID mini-SAS connectors on motherboard (not visible in this view and not used)
17 RTC battery on motherboard
A simplified block diagram of the HX220c M4 Node is shown in Figure 7.

[Figure 7: HX220c M4 Node Block Diagram (simplified). Two Intel Xeon CPUs, linked by QPI
(9.6 GT/s), connect to the DIMM channels (A-D on CPU 1, E-H on CPU 2, three slots per channel),
to PCIe riser 1 (PCIe x16 Gen3, full-height slot 1), to PCIe riser 2 (option 1: PCIe x16 Gen3
half-height slot 2; option 2: PCIe x8 Gen3 slot 2), to the Gen3 Modular RAID card (12 Gb/s
SAS/SATA) on its dedicated riser, and to the mLOM module connector (accepts a variety of
network cards). CPU 1 also connects over DMI2 (x4) to the Wellsburg PCH, which supplies
embedded SATA (6 Gb/s, not used), USB 3.0, the SD controller (SD1/SD2), the BMC (management),
VGA, serial, the front-panel KVM connector, and the two-port Intel i350 1 GB-T NIC. Two drive
backplane options are shown: UCSC-C220-M4S (1U, 8 x 2.5-inch drives; PCIe drives allowed in
slots 1 and 2 only) and UCSC-C220-M4L (1U, 4 x 3.5-inch drives), cabled to the modular RAID card.]
Physical Layout
Each CPU has four DIMM channels: CPU 1 has channels A, B, C, and D; CPU 2 has channels E, F, G, and H.
Each DIMM channel has three slots: slot 1, slot 2, and slot 3. The blue-colored DIMM slots are for slot 1, the
black-colored slots for slot 2, and the white slots for slot 3.
As an example, DIMM slots A1, B1, C1, and D1 belong to slot 1, while A2, B2, C2, and D2 belong to slot 2.
Figure 8 shows how slots and channels are physically laid out on the motherboard. The DIMM slots on the
right half of the motherboard (channels A, B, C, and D) are associated with CPU 1, while the DIMM slots on
the left half of the motherboard (channels E, F, G, and H) are associated with CPU 2. The slot 1 (blue) DIMM
slots are always located farther away from a CPU than the corresponding slot 2 (black) and slot 3 (white)
slots. Slot 1 slots (blue) are populated before slot 2 slots (black) and slot 3 (white) slots.
Each channel has three DIMM slots (for example, channel A = slots A1, A2, and A3).
When both CPUs are installed, populate the DIMM slots of each CPU identically.
Fill blue slots in the channels first: A1, E1, B1, F1, C1, G1, D1, H1
Fill black slots in the channels second: A2, E2, B2, F2, C2, G2, D2, H2
Fill white slots in the channels third: A3, E3, B3, F3, C3, G3, D3, H3
Any DIMM installed in a DIMM socket for which the CPU is absent is not recognized.
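A minimal sketch (illustrative only, not from the installation guide) that generates the population order described above:

```python
# A minimal sketch (not from the spec sheet): generate the DIMM population
# order described above -- slot 1 (blue) first, then slot 2 (black), then
# slot 3 (white), alternating CPU 1 and CPU 2 channels.
def dimm_population_order(dimm_count: int) -> list[str]:
    cpu1_channels = "ABCD"  # CPU 1 channels
    cpu2_channels = "EFGH"  # CPU 2 channels
    order = []
    for slot in (1, 2, 3):  # blue, black, white
        for c1, c2 in zip(cpu1_channels, cpu2_channels):
            order += [f"{c1}{slot}", f"{c2}{slot}"]
    return order[:dimm_count]

# Example: the first eight DIMMs land in A1, E1, B1, F1, C1, G1, D1, H1.
print(dimm_population_order(8))
```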
DIMM Parameter | DIMMs in the Same Channel | DIMMs in the Same Slot¹

DIMM Capacity (RDIMM = 16, 32, or 64 GB):
Same channel - DIMMs in the same channel (for example, A1, A2, and A3) can have different capacities.
Same slot - For best performance, DIMMs in the same slot (for example, A1, B1, C1, D1) should have the same capacity.

DIMM Speed (2133 MHz²):
Same channel or same slot - DIMMs will run at the lowest speed of the DIMMs/CPUs installed.

DIMM Type:
RDIMMs
Notes . . .
1. Although you can have different DIMM capacities in the same slot, this will result in less than optimal
performance. For optimal performance, all DIMMs in the same slot should be identical.
2. Only 2133-MHz DIMMs are currently available for the HX220c M4 node.
Serial Port Details

Pin | Signal
1 RTS (Request to Send)
2 DTR (Data Terminal Ready)
3 TxD (Transmit Data)
4 GND (Signal Ground)
5 GND (Signal Ground)
6 RxD (Receive Data)
7 DSR (Data Set Ready)
8 CTS (Clear to Send)
UCS-CPU-LPCVR= CPU load plate dust cover (for unpopulated CPU sockets)
UCSX-HSCK= UCS Processor Heat Sink Cleaning Kit For Replacement of CPU
UCSC-CMAF-M4= Reversible CMA for friction & ball bearing rail kits
Notes . . .
1. A drive blanking panel must be installed if you remove a disk drive from the system. These panels are
required to maintain system temperatures at safe operating levels, and to keep dust away from system
components.
2. Required if ordering the RAID controller as a spare or to replace damaged cables
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/c/hw/C220M4/install/C220M4.html
CAUTION: Use only the thermal grease specified for this system
(UCS-CPU-GREASE3=). This thermal grease comes in a white-tipped
syringe and is to be used only in the HX220c M4 and HX240c M4 Nodes.
Other systems use thermal grease in a blue-tipped syringe
(UCS-CPU-GREASE=).
Thermal grease for other systems may have different thermal
conductivity properties and may cause overheating if used in the HX220c
M4 or HX240c M4 Nodes.
DO NOT use thermal grease available for purchase at any commercial
electronics store. If these instructions are not followed, the CPU may
overheat and be destroyed.
NOTE: When you purchase a spare CPU, the thermal grease with syringe applicator
is included.
RACKS
The Cisco R42610 rack (see Figure 10) is certified for Cisco UCS installation at customer sites and is
compatible with hardware designed for EIA-standard 19-inch racks. Rack specifications are listed in
Table 26.
NOTE: The AC input connector is an IEC 320 C-14 15 A/250 VAC power inlet.
[Figure 10: rack front views - door closed, door open, and door removed.]
PDUs
Cisco RP Series Power Distribution Units (PDUs) offer power distribution with branch circuit protection.
Cisco RP Series PDU models distribute power to up to 24 outlets. The architecture organizes power
distribution, simplifies cable management, and enables you to move, add, and change rack equipment
without an electrician.
With a Cisco RP Series PDU in the rack, you can replace up to two dozen input power cords with just one.
The fixed input cord connects to the power source from overhead or under-floor distribution. Your IT
equipment is then powered by PDU outlets in the rack using short, easy-to-manage power cords.
The C-Series servers accept the zero-rack-unit (0RU) PDU (see Figure 11).
Cisco RP Series PDU models provide two 20-ampere (A) circuit breakers for groups of receptacles. The
effects of a tripped circuit are limited to a receptacle group. Simply press a button to reset that circuit.
KVM CABLE
The KVM cable provides a connection into the system, providing a DB9 serial connector, a VGA connector for
a monitor, and dual USB ports for a keyboard and mouse. With this cable, you can create a direct
connection to the operating system and the BIOS running on the system.
TECHNICAL SPECIFICATIONS
Dimensions and Weight
Parameter | Value

Weight¹ (maximum: 8 drives, 2 CPUs, 24 DIMMs, two power supplies) | 37.9 lbs (17.2 kg)
Notes . . .
1. Weight includes inner rail, which is attached to the system. Weight does not include outer rail, which is
attached to the rack.
Power Specifications
The general power specifications for the HX220c M4 Node 770 W (AC) power supply are listed in Table 29.
Description Specification
Power supply efficiency Climate Savers Platinum Efficiency (80Plus Platinum Certified)
For configuration-specific power specifications, use the Cisco UCS Power Calculator at this URL:
http://ucspowercalc.cisco.com
Environmental Specifications
The environmental specifications for the HX220c M4 Node are listed in Table 30.
Parameter | Minimum

Temperature, operating | 41 to 95 °F (5 to 35 °C); derate the maximum temperature by 1 °C for every 1000 ft (305 m) of altitude above sea level
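As a worked example of the derating rule, the following sketch (illustrative only, not part of the spec sheet) computes the derated ceiling:

```python
# A minimal sketch (not from the spec sheet): derate the 35 C operating
# ceiling by 1 C per 305 m (1000 ft) of altitude above sea level.
def max_operating_temp_c(altitude_m: float) -> float:
    """Return the derated maximum operating temperature in degrees Celsius."""
    return 35.0 - altitude_m / 305.0

# Example: at 1525 m (5000 ft), the maximum drops from 35 C to 30 C.
print(max_operating_temp_c(1525))  # 30.0
```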
Compliance Requirements
The regulatory compliance requirements for C-Series systems are listed in Table 31.
Parameter Description