HX 220c m5 Specsheet PDF
Cisco HyperFlex
HX220c M5 Node
(HYBRID)
OVERVIEW
Cisco HyperFlex™ Systems unlock the full potential of hyperconvergence. The systems are based on an
end-to-end software-defined infrastructure, combining software-defined computing in the form of Cisco
Unified Computing System (Cisco UCS) servers; software-defined storage with the powerful Cisco HX Data
Platform; and software-defined networking with the Cisco UCS fabric, which integrates smoothly with Cisco
Application Centric Infrastructure (Cisco ACI™). Together with a single point of connectivity and hardware
management, these technologies deliver a preintegrated and adaptable cluster that is ready to provide a
unified pool of resources to power applications as your business needs dictate.
The HX220c M5 servers extend the capabilities of Cisco's HyperFlex portfolio in a 1U form factor with the
addition of the Intel® Xeon® Processor Scalable Family, 24 DIMM slots with configuration options ranging
from 128 GB up to 3 TB of DRAM, and a footprint of cache and capacity drives for highly available,
high-performance storage.
The latest update adds support for 2nd Generation Intel® Xeon® Scalable Processors and 2933-MHz DDR4
memory.
DETAILED VIEWS
Chassis Front View
Figure 2 shows the front view of the Cisco HyperFlex HX220c M5 Node.
3   Dual 1/10GE ports (LAN1 and LAN2); LAN1 is the left connector and LAN2 is the right connector
5   1GE dedicated management port
9   PCIe riser 2 (slot 2) (half-height, x16). NOTE: Use of PCIe riser 2 requires a dual-CPU configuration.
11  Threaded holes for dual-hole grounding lug
Capability/Feature Description
CPU One or two Intel® Xeon® Scalable Family CPUs, or one or two 2nd Generation
Intel® Xeon® Scalable Family CPUs
Video The Cisco Integrated Management Controller (CIMC) provides video using the
ASPEED Pilot 4 video/graphics controller:
■ Integrated 2D graphics core with hardware acceleration
■ DDR4 memory interface; up to 16 MB is directly accessible from the host, and
the entire DDR memory is indirectly accessible from the host processor
■ Supports all display resolutions up to 1920 x 1200 x 32bpp resolution at 60Hz
■ High-speed integrated 24-bit RAMDAC
■ Single lane PCI-Express host interface
■ eSPI processor to BMC support
Front Panel A front panel controller provides status indications and control buttons
ACPI This server supports the advanced configuration and power interface (ACPI) 4.0
standard.
Internal storage devices  Up to 10 drives are installed into front-panel drive bays that provide
hot-swappable access for SAS/SATA drives. The 10 drives are used as follows:
• Three to eight SAS HDDs or three to eight SED SAS HDDs (for capacity)
• One SATA/SAS SSD or one SED SAS SSD (for caching)
• One SATA/SAS SSD (system drive for HyperFlex operations)
A mini-storage module connector on the motherboard accepts an M.2 module with one M.2
SATA SSD, used as follows:
• ESXi boot and HyperFlex storage controller VM
One socket for one micro-SD card on PCIe riser 1, used as follows:
• The micro-SD card serves as a dedicated local resource for utilities such as the
host upgrade utility (HUU). Images can be pulled from a file share
(NFS/CIFS) and uploaded to the card for future use. Cisco Intersight
leverages this card for advanced server management.
Modular LAN on Motherboard (mLOM) slot  The dedicated mLOM slot on the motherboard can flexibly
accommodate the following cards:
■ Cisco 1457 Quad Port Virtual Interface Card (10GE/25GE)
Additional NICs (optional)  PCIe slot 1 and PCIe slot 2 on the motherboard can flexibly accommodate
the following cards:
UCSM Unified Computing System Manager (UCSM) runs in the Fabric Interconnect and
automatically discovers and provisions some of the server components.
HX-M5S-HXDP This major line bundle (MLB) consists of the Server Nodes (HX220C-M5SX and
HX240C-M5SX) with HXDP software spare PIDs. Use this PID for creating
estimates and placing orders.
HX220C-M5SX1 HX220c M5 Node, with one or two CPUs, memory, eight HDDs for data storage,
one SSD (HyperFlex system drive), one SSD for caching, two power supplies, one
M.2 SATA SSD, one micro-SD card, ESXi boot, one VIC 1387 mLOM card, no PCIe
cards, and no rail kit.
HX2X0C-M5S This major line bundle (MLB) consists of the Server Nodes (HX220C-M5SX and
HX240C-M5SX), Fabric Interconnects (HX-FI-6248UP, HX-FI-6296UP, HX-FI-6332,
HX-FI-6332-16UP) and HXDP software spare PIDs.
Notes:
1. This product may not be purchased outside of the approved bundles (must be ordered under the MLB).
• Requires configuration of one or two power supplies, one or two CPUs, recommended
memory sizes, 1 SSD for Caching, 1 SSD for system logs, up to 8 data HDDs, 1 VIC mLOM
card, 1 M.2 SATA SSD and 1 micro-SD card.
• Provides option to choose 10G QSAs to connect with HX-FI-6248UP and HX-FI-6296UP
• Provides option to choose rail kits.
NOTE: Use the steps on the following pages to configure the server with
the components that you want to include.
■ Intel® Xeon® Processor Scalable Family CPUs and 2nd Generation Intel® Xeon® Scalable Family CPUs
■ From 8 cores up to 28 cores per CPU
■ Intel C620 series chipset
■ Cache size of up to 38.5 MB
Select CPUs

Product ID (PID)  Clock Freq (GHz)  Power (W)  Cache Size (MB)  Cores  UPI1 Links (GT/s)  Highest DDR4 DIMM Clock Support (MHz)  Workload/Processor type
Cisco Recommended CPUs (2nd Generation Intel® Xeon® Processors)
HX-CPU-I8276 2.2 165 38.50 28 3 x 10.4 2933 Oracle, SAP
HX-CPU-I8260 2.4 165 35.75 24 3 x 10.4 2933 Microsoft Azure Stack
HX-CPU-I6262V 1.9 135 33.00 24 3 x 10.4 2400 Virtual Server Infrastructure (VSI)
HX-CPU-I6248 2.5 150 27.50 20 3 x 10.4 2933 VDI, Oracle, SQL, Microsoft Azure Stack
HX-CPU-I6238R 2.2 165 38.50 28 2 x 10.4 2933 Oracle, SAP (2-Socket TDI only), Microsoft Azure Stack
HX-CPU-I6238 2.1 140 30.25 22 3 x 10.4 2933 SAP
HX-CPU-I6230R 2.1 150 35.75 26 2 x 10.4 2933 Virtual Server Infrastructure, Data Protection, Big Data, Splunk, Microsoft Azure Stack
HX-CPU-I6230 2.1 125 27.50 20 3 x 10.4 2933 Big Data, Virtualization
HX-CPU-I5220R 2.2 125 35.75 24 2 x 10.4 2666 Virtual Server Infrastructure, Splunk, Microsoft Azure Stack
HX-CPU-I5220 2.2 125 24.75 18 2 x 10.4 2666 HCI
HX-CPU-I5218R 2.1 125 27.50 20 2 x 10.4 2666 Virtual Server Infrastructure, Data Protection, Big Data, Splunk, Scale-out Object Storage, Microsoft Azure Stack
HX-CPU-I5218 2.3 125 22.00 16 2 x 10.4 2666 Virtualization, Microsoft Azure Stack, Splunk, Data Protection
HX-CPU-I4216 2.1 100 22.00 16 2 x 9.6 2400 Data Protection, Scale-out Storage
HX-CPU-I4214R 2.4 100 16.50 12 2 x 9.6 2400 Data Protection, Splunk, Scale-out Object Storage, Microsoft Azure Stack
HX-CPU-I4214 2.2 85 16.75 12 2 x 9.6 2400 Data Protection, Scale-out Storage
8000 Series Processor
HX-CPU-8164 2.0 150 35.75 26 3 x 10.4 2666 Intel® Xeon®
HX-CPU-8160 2.1 150 33.00 24 3 x 10.4 2666 Intel® Xeon®
HX-CPU-8158 3.0 150 24.75 12 3 x 10.4 2666 Intel® Xeon®
HX-CPU-8153 2.0 125 22.00 16 3 x 10.4 2666 Intel® Xeon®
6000 Series Processor
HX-CPU-I6262V 1.9 135 33.00 24 3 x 10.4 2400 2nd Gen Intel® Xeon®
HX-CPU-I6254 3.1 200 24.75 18 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6252N 2.3 150 35.75 24 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6252 2.1 150 35.75 24 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6248 2.5 150 27.50 20 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6246 3.3 165 24.75 12 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6244 3.6 150 24.75 8 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6242 2.8 150 22.00 16 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6240R 2.4 165 35.75 24 2 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6240Y 2.6 150 24.75 18/14/8 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6240M 2.6 150 24.75 18 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6240L 2.6 150 24.75 18 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6240 2.6 150 24.75 18 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6238R 2.2 165 38.50 28 2 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6238M 2.1 140 30.25 22 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6238L 2.1 140 30.25 22 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6238 2.1 140 30.25 22 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6234 3.3 130 24.75 8 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6230R 2.1 150 35.75 26 2 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6230N 2.3 125 27.50 20 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6230 2.1 125 27.50 20 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6226R 2.8 150 22.00 16 2 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6226 2.7 125 19.25 12 3 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I6222V 1.8 115 27.50 20 3 x 10.4 2400 2nd Gen Intel® Xeon®
HX-CPU-6142M 2.6 150 22.00 16 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6140M 2.3 140 24.75 18 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6134M 3.2 130 24.75 8 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6154 3.0 200 24.75 18 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6152 2.1 140 30.25 22 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6150 2.7 165 24.75 18 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6148 2.4 150 27.50 20 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6146 3.2 165 24.75 12 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6144 3.5 150 24.75 8 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6142 2.6 150 22.00 16 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6140 2.3 140 24.75 18 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6138 2.0 125 27.50 20 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6136 3.0 150 24.75 12 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6134 3.2 130 24.75 8 3 X 10.4 2666 Intel® Xeon®
HX-CPU-6132 2.6 140 19.25 14 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6130 2.1 125 22.00 16 3 x 10.4 2666 Intel® Xeon®
HX-CPU-6126 2.6 125 19.25 12 3 x 10.4 2666 Intel® Xeon®
5000 Series Processor
HX-CPU-I5220S 2.6 125 19.25 18 2 x 10.4 2666 2nd Gen Intel® Xeon®
HX-CPU-I5220R 2.2 150 35.75 24 2 x 10.4 2666 2nd Gen Intel® Xeon®
HX-CPU-I5220 2.2 125 24.75 18 2 x 10.4 2666 2nd Gen Intel® Xeon®
HX-CPU-I5218R 2.1 125 27.50 20 2 x 10.4 2666 2nd Gen Intel® Xeon®
HX-CPU-I5218B 2.3 125 22.00 16 2 x 10.4 2933 2nd Gen Intel® Xeon®
HX-CPU-I5218N 2.3 105 22.00 16 2 x 10.4 2666 2nd Gen Intel® Xeon®
HX-CPU-I5218 2.3 125 22.00 16 2 x 10.4 2666 2nd Gen Intel® Xeon®
HX-CPU-I5217 3.0 115 11.00 8 2 x 10.4 2666 2nd Gen Intel® Xeon®
HX-CPU-I5215M 2.5 85 13.75 10 2 x 10.4 2666 2nd Gen Intel® Xeon®
HX-CPU-I5215L 2.5 85 13.75 10 2 x 10.4 2666 2nd Gen Intel® Xeon®
HX-CPU-I5215 2.5 85 13.75 10 2 x 10.4 2666 2nd Gen Intel® Xeon®
HX-CPU-5120 2.2 105 19.25 14 2 x 10.4 2400 Intel® Xeon®
HX-CPU-5118 2.3 105 16.50 12 2 x 10.4 2400 Intel® Xeon®
HX-CPU-5117 2.0 105 19.25 14 2 x 10.4 2400 Intel® Xeon®
HX-CPU-5115 2.4 85 13.75 10 2 x 10.4 2400 Intel® Xeon®
4000 Series Processor
HX-CPU-I4216 2.1 100 22.00 16 2 x 9.6 2400 2nd Gen Intel® Xeon®
HX-CPU-I4215R 3.2 130 11.00 8 2 x 9.6 2400 2nd Gen Intel® Xeon®
HX-CPU-I4215 2.5 85 11.00 8 2 x 9.6 2400 2nd Gen Intel® Xeon®
HX-CPU-I4214R 2.4 100 16.50 12 2 x 9.6 2400 2nd Gen Intel® Xeon®
HX-CPU-I4214Y 2.2 105 16.75 12/10/8 2 x 9.6 2400 2nd Gen Intel® Xeon®
HX-CPU-I4214 2.2 85 16.75 12 2 x 9.6 2400 2nd Gen Intel® Xeon®
HX-CPU-I4210R 2.4 100 13.75 10 2 x 9.6 2400 2nd Gen Intel® Xeon®
HX-CPU-I4210 2.2 85 13.75 10 2 x 9.6 2400 2nd Gen Intel® Xeon®
HX-CPU-I4208 2.1 85 11.00 8 2 x 9.6 2400 2nd Gen Intel® Xeon®
HX-CPU-4116 2.1 85 16.50 12 2 x 9.6 2400 Intel® Xeon®
HX-CPU-4114 2.2 85 13.75 10 2 x 9.6 2400 Intel® Xeon®
HX-CPU-4110 2.1 85 11.00 8 2 x 9.6 2400 Intel® Xeon®
HX-CPU-4108 1.8 85 11.00 8 2 x 9.6 2400 Intel® Xeon®
3000 Series Processor
HX-CPU-I3206R 1.9 85 11.00 8 2 x 9.6 2133 2nd Gen Intel® Xeon®
HX-CPU-3106 1.7 85 11.00 8 2 x 9.6 2133 Intel® Xeon®
Notes:
1. UPI = Ultra Path Interconnect. 2-socket servers use only 2 UPI links, even if the CPU
supports 3 UPI links.
Approved Configurations
NOTE: The 1-CPU configuration is only supported for CPU SKUs HX-CPU-4114 and
above. 1-CPU configuration is not supported for HX-CPU-3106, HX-CPU-4108
or HX-CPU-4110 due to the low core count on those processors.
■ Select two identical CPUs from any one of the rows of Table 3 on page 9.
NOTE: The 1-CPU configuration is only supported for the HX Edge configuration
CAUTION:
■ HX-CPU-I6242R, I6248R, I6246R, and I6258R are not offered on HX220, HXAF220, or
Edge products due to their lower ambient-temperature requirement.
■ These four CPUs operate at much higher temperatures and may not be
supportable in all customer environments.
■ They are orderable on HX240, HXAF240, and HX All NVMe.
For more info, please review the KB article:
https://hxkb.cisco.com/article.php?id=150
■ DIMMs
NOTE: The compatibility of Intel® Xeon® scalable processor family CPUs and 2nd
Generation Intel® Xeon® Scalable CPUs with different DIMM memory speeds and
production servers is as shown below:
CPU Family                   DIMM Speed (MHz)  Configuration
Intel Scalable CPUs          2666              2666-MHz DIMMs are supported for all production servers
2nd Gen Intel Scalable CPUs  2666              2666-MHz DIMMs are supported only when upgrading from Intel Scalable CPUs to 2nd Gen Intel Scalable CPUs
DIMM slot layout (figure): each CPU has six memory channels with two DIMM slots per channel. CPU 1 uses channels A through F (slots A1/A2 through F1/F2) and CPU 2 uses channels G through M (slots G1/G2 through M1/M2).
24 DIMMs
3072 GB maximum memory (with 128 GB DIMMs)
6 memory channels per CPU, up to 2 DIMMs per channel
Select DIMMs
NOTE: The memory mirroring feature is not supported with HyperFlex nodes.
Product ID (PID)  PID Description  Ranks/DIMM  Voltage
Approved Configurations
■ Select 4, 6, 8, or 12 identical DIMMs per CPU. The DIMMs will be placed by the factory as
shown in the following table.
12  (A1, A2, B1, B2, C1, C2); (D1, D2, E1, E2, F1, F2)
■ Select 8, 12, 16, or 24 identical DIMMs per server. The DIMMs will be placed by the factory as
shown in the following table.
      CPU 1                                               CPU 2
8     (A1, B1); (D1, E1)                                  (G1, H1); (K1, L1)
12    (A1, B1, C1); (D1, E1, F1)                          (G1, H1, J1); (K1, L1, M1)
16    (A1, A2, B1, B2); (D1, D2, E1, E2)                  (G1, G2, H1, H2); (K1, K2, L1, L2)
24    (A1, A2, B1, B2, C1, C2); (D1, D2, E1, E2, F1, F2)  (G1, G2, H1, H2, J1, J2); (K1, K2, L1, L2, M1, M2)
NOTE: System performance is optimized when the DIMM type and quantity are equal
for both CPUs, and when all channels are filled equally across the CPUs in the server.
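The selection rules above (identical DIMMs in the approved counts, channels filled equally across CPUs) can be expressed as a small validity check. The sketch below is illustrative only, using counts taken from this spec sheet; `valid_memory_config` and `total_memory_gb` are hypothetical helper names, not a Cisco configuration tool.

```python
# Illustrative sketch: check an HX220c M5 DIMM selection against the
# population rules in this spec sheet. Not a Cisco tool.

APPROVED_PER_CPU = {4, 6, 8, 12}   # approved identical-DIMM counts per CPU
CHANNELS_PER_CPU = 6               # channels A-F (CPU 1) and G-M (CPU 2)
SLOTS_PER_CHANNEL = 2              # e.g. slots A1 and A2
MAX_DIMM_GB = 128                  # largest DIMM size listed in this spec sheet

def valid_memory_config(cpus: int, dimms_per_cpu: int, dimm_gb: int) -> bool:
    """True if the DIMM count and size fit the spec-sheet population rules."""
    if cpus not in (1, 2):
        return False
    if dimms_per_cpu not in APPROVED_PER_CPU:
        return False
    if dimms_per_cpu > CHANNELS_PER_CPU * SLOTS_PER_CHANNEL:
        return False
    return 0 < dimm_gb <= MAX_DIMM_GB

def total_memory_gb(cpus: int, dimms_per_cpu: int, dimm_gb: int) -> int:
    return cpus * dimms_per_cpu * dimm_gb

# Maximum configuration from this spec sheet: 24 x 128 GB = 3072 GB
assert valid_memory_config(2, 12, 128)
assert total_memory_gb(2, 12, 128) == 3072
assert not valid_memory_config(2, 5, 32)   # 5 DIMMs per CPU is not approved
```

The 3072 GB figure matches the stated maximum: 24 DIMM slots populated with 128 GB DIMMs.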
Table 5 2933-MHz DIMM Memory Speeds with Different 2nd Generation Intel® Xeon® Scalable Processors
DIMM and CPU Frequencies (MHz)  DPC  LRDIMM (4Rx4) 128 GB (MHz)  LRDIMM (4Rx4) 64 GB (MHz)  RDIMM (2Rx4) 64 GB (MHz)  RDIMM (2Rx4) 32 GB (MHz)  RDIMM (1Rx4) 16 GB (MHz)
Table 6 2666-MHz DIMM Memory Speeds with Different Intel® Xeon® Scalable Processors
DIMM and CPU Frequencies (MHz)  DPC  TSV-RDIMM (8Rx4) 128 GB (MHz)  TSV-RDIMM (4Rx4) 64 GB (MHz)  LRDIMM (4Rx4) 64 GB (MHz)  RDIMM (2Rx4) 32 GB (MHz)
■ The Cisco 12G SAS HBA, which plugs into a dedicated RAID controller slot.
Approved Configurations
Select Drives
Product ID (PID)  PID Description  Drive Type  Capacity
Capacity Drives
HX-HD12T10NK9** 1.2TB 2.5 inch 12G SAS 10K RPM HDD SED SAS 1.2 TB
HX-HD12TB10K12N 1.2TB 2.5 inch 12G SAS 10K RPM HDD SAS 1.2 TB
HX-HD18TB10K4KN 1.8 TB 12G SAS 10K RPM SFF HDD SAS 1.8 TB
HX-HD24TB10K4KN 2.4 TB 12G SAS 10K RPM SFF HDD (4K) (HyperFlex Release 4.0(1a) and later) SAS 2.4 TB
Caching Drives
Enterprise Performance SSDs(High endurance, supports up to 10X or 3X DWPD (drive writes per day))1
SATA SSDs
HX-SD480G63X-EP 480GB 2.5 inch Enterprise Performance 6G SATA SSD (3X endurance) SATA 480 GB
SAS SSDs
HX-SD800G123X-EP 800GB 2.5 inch Enterprise Performance 12G SAS SSD (3X endurance) SAS 800 GB
HX-SD800GBHNK9** 800GB Enterprise Performance SAS SSD (10X DWPD, SED) (HyperFlex Release 3.5(2g) or later) SAS 800 GB
HyperFlex System Drive / Log Drives
Enterprise Value SATA SSDs (Low endurance, supports up to 1X DWPD (drive writes per day))2
HX-SD240GM1X-EV 240GB 2.5 inch Enterprise Value 6G SATA SSD (HyperFlex Release 3.5(2a) and later) SATA 240 GB
HX-SD480G6I1X-EV 480GB 2.5 inch Enterprise Value 6G SATA SSD (HyperFlex Release 4.0(2a) and later) SATA 480 GB
HX-SD480GM1X-EV 480GB 2.5 inch Enterprise Value 6G SATA SSD (HyperFlex Release 4.0(2a) and later) SATA 480 GB
Boot Drives
HX-M2-240GB 240GB SATA M.2 SSD SATA 240 GB
HX-M2-960GB 960GB SATA M.2 (HyperFlex Release 4.0(2a) and later) SATA 960 GB
NOTE:
■ Cisco uses solid state drives (SSDs) from a number of vendors. All solid state drives (SSDs) are subject to
physical write limits and have varying maximum usage limitation specifications set by the manufacturer.
Cisco will not replace any solid state drives (SSDs) that have exceeded any maximum usage specifications
set by Cisco or the manufacturer, as determined solely by Cisco.
■ ** SED drive components are not supported with Microsoft Hyper-V
Notes:
1. Targeted for write-centric IO applications. Supports endurance of 10 or 3 DWPD (drive writes per day). Target applications
are caching, online transaction processing (OLTP), data warehousing, and virtual desktop infrastructure (VDI).
2. Targeted for read-centric IO applications. Supports endurance of 1 DWPD (drive write per day). Target applications
are boot, streaming media, and collaboration.
Approved Configurations
NOTE: Fewer than six capacity drives are supported only for the HX Edge configuration.
SED drives are not supported for the HX Edge configuration.
If you select 'SED capacity' drives, you must choose 'SED cache' drives below.
NOTE: A 'SED cache' drive can be selected only if you have selected 'SED capacity'
drives.
SED drives are not supported with Microsoft Hyper-V.
Caveats
You must choose up to eight capacity drives, one caching drive, one system drive, and one boot
drive. If you select SED drives, you must adhere to the following:
— You must select a minimum of 6 'capacity' drives
— All selected 'cache' and 'capacity' drives must be SED drives
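These caveats lend themselves to a simple rule check. The sketch below encodes the drive counts from this spec sheet for illustration; `valid_drive_config` and its parameters are hypothetical names, not part of any Cisco ordering tool.

```python
# Illustrative sketch: check an HX220c M5 (hybrid) drive selection against
# the caveats in this spec sheet. Not a Cisco ordering tool.

def valid_drive_config(capacity: int, caching: int, system: int, boot: int,
                       sed: bool = False, hx_edge: bool = False) -> bool:
    """Apply the spec sheet's drive-selection rules (sketch only)."""
    if not (3 <= capacity <= 8):          # three to eight capacity drives
        return False
    if (caching, system, boot) != (1, 1, 1):
        return False                      # exactly one caching, system, boot drive
    if sed:
        if hx_edge:                       # SED drives not supported for HX Edge
            return False
        if capacity < 6:                  # SED requires at least 6 capacity drives
            return False
    if capacity < 6 and not hx_edge:      # fewer than 6 capacity drives: Edge only
        return False
    return True

assert valid_drive_config(8, 1, 1, 1)                 # full hybrid configuration
assert valid_drive_config(3, 1, 1, 1, hx_edge=True)   # minimal HX Edge configuration
assert not valid_drive_config(5, 1, 1, 1)             # <6 capacity drives, not Edge
assert not valid_drive_config(8, 1, 1, 1, sed=True, hx_edge=True)
```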
Notes:
1. The mLOM card does not plug into any of the riser 1 or riser 2 card slots; instead, it plugs into a connector inside
the chassis.
2. The NIC is supported for HyperFlex Edge configurations.
3. The NIC is not supported with Microsoft Hyper-V.
Caveats
— Breakout cables cannot be used to connect to 6200 series FI. Please use QSA.
— VIC 1387 supports Cisco QSA Modules for connecting to HX-FI-6248UP, HX-FI-6296UP
— Cisco QSA Module is available as an option under 'Accessories -> SFP'
— When choosing QSA option, order 2 QSA per server.
— PID for QSA is CVR-QSFP-SFP10G.
— Use of 10GbE is not permitted with 6300 series FI
NOTE: All GPU cards must be procured from Cisco, as there is a unique SBIOS ID
required by CIMC and UCSM.
To determine the power requirements of your configuration, use the Cisco UCS Power Calculator at:
http://ucspowercalc.cisco.com
Select one or two power supplies from the list in Table 11.
NOTE: In a server with two power supplies, both power supplies must be identical.
Power cord options (plug and connector illustrations omitted; orderable PIDs and identifiable ratings listed):

CAB-AC-L620-C13  AC Power Cord, NEMA L6-20 to IEC60320/C13, 2M/6.5ft
CAB-C13-C14-AC   Jumper power cord, IEC60320/C14 to IEC60320/C13, 3.0M
CAB-9K10A-AU     Power Cord, 250VAC 10A 3112 Plug, Australia (cordset rating: 10 A, 250 V; length: 2500mm; plug: EL 701C; connector: EL 210 (EN 60320/C15))
CAB-250V-10A-ID  Power Cord, 250V, 10A, India (cordset rating: 16A, 250V; length: 2500mm; plug: EL 208; connector: EL 701)
CAB-250V-10A-IS  Power Cord, SFS, 250V, 10A, Israel (plug: EL 212 (SI-32); connector: EL 701B (IEC60320/C13))
CAB-250V-10A-BR  Power Cord, 250V, 10A, Brazil
Notes:
1. This power cord is rated to 125 V and is supported only for PSUs rated at 1050 W or less.
A chassis intrusion switch gives a notification of any unauthorized mechanical access into the
server.
NOTE:
■ The TPM module used in this system conforms to TPM v1.2 and 2.0, as defined
by the Trusted Computing Group (TCG). It is also SPI-based.
■ TPM installation is supported after-factory. However, a TPM installs with a
one-way screw and cannot be replaced, upgraded, or moved to another server. If
a server with a TPM is returned, the replacement server must be ordered with a
new TPM.
The reversible cable management arm mounts on either the right or left slide rails at the rear of
the server and is used for cable management. Use Table 15 to order a cable management arm.
For more information about the tool-less rail kit and cable management arm, see the Cisco UCS
C220 M5 Installation and Service Guide at this URL:
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/c/hw/C220M5/install/C220M5.html
NOTE: If you plan to rackmount your HX220c M5 Node, you must order a tool-less
rail kit. The same rail kits and CMAs are used for M4 and M5 servers.
VMware1
HX-VSP-6-5-EPL-D Factory Installed - VMware vSphere 6.5 Ent Plus SW+Lic (2 CPU)
HX-VSP-6-5-STD-D Factory Installed - VMware vSphere 6.5 Std SW and Lic (2 CPU)
HX-VSP-6-7-EPL-D Factory Installed - VMware vSphere 6.7 Ent Plus SW+Lic 2-CPU
HX-VSP-6-7-STD-D Factory Installed - VMware vSphere 6.7 Std SW and Lic (2CPU)
HX-VSP-EPL-1A VMware vSphere 6 Ent Plus (1 CPU), 1-yr, Support Required Cisco
HX-VSP-EPL-3A VMware vSphere 6 Ent Plus (1 CPU), 3-yr, Support Required Cisco
HX-VSP-EPL-5A VMware vSphere 6 Ent Plus (1 CPU), 5-yr, Support Required Cisco
Microsoft Hyper-V3,4
HX-19-ST16C-NS Windows Server 2019 Standard (16 Cores/2 VMs) - No Cisco SVC
Notes:
1. Although VMware 6.0 is installed at the factory, VMware 6.5 is also supported.
2. Choose quantity of two when choosing PAC licensing for dual CPU systems.
3. Microsoft Windows Server with Hyper-V will NOT be installed at the Cisco factory. Customers need to bring their own
Windows Server ISO image to be installed at the deployment site.
4. To ensure the best possible Day 0 Installation experience, mandatory Installation Services are required with all
Hyper-V orders. Details on PIDs can be found in HyperFlex Ordering Guide.
For support of the entire Unified Computing System, Cisco offers the Cisco Smart Net Total Care
for UCS Service. This service provides expert software and hardware support to help sustain
performance and high availability of the unified computing environment. Access to the Cisco
Technical Assistance Center (TAC) is provided around the clock, from anywhere in the world.
For systems that include Unified Computing System Manager, the support service includes
downloads of UCSM upgrades. The Cisco Smart Net Total Care for UCS Service includes flexible
hardware replacement options, including replacement in as little as two hours. There is also
access to Cisco's extensive online technical resources to help maintain optimal efficiency and
uptime of the unified computing environment. For more information please refer to the following
url: http://www.cisco.com/c/en/us/services/technical/smart-net-total-care.html?stickynav=1
An enhanced offer over traditional Smart Net Total Care, this service provides onsite
troubleshooting expertise to aid in the diagnosis and isolation of hardware issues within our
customers' Cisco Hyper-Converged environment. It is delivered by a Cisco Certified field engineer
(FE) in collaboration with a remote TAC engineer and a Virtual Internetworking Support Engineer (VISE).
You can choose a desired service listed in Table 20.
**Includes Local Language Support (see below for full description) – Only available in China and Japan
***Includes Local Language Support and Drive Retention – Only available in China and Japan
Solution Support
Solution Support includes both Cisco product support and solution-level support, resolving
complex issues in multivendor environments, on average, 43% more quickly than product
support alone. Solution Support is a critical element in data center administration, to help
rapidly resolve any issue encountered, while maintaining performance, reliability, and return on
investment.
This service centralizes support across your multivendor Cisco environment for both our
products and solution partner products you've deployed in your ecosystem. Whether there is an
issue with a Cisco or solution partner product, just call us. Our experts are the primary point of
contact and own the case from first call to resolution. For more information please refer to the
following url:
http://www.cisco.com/c/en/us/services/technical/solution-support.html?stickynav=1
You can choose a desired service listed in Table 21.
Cisco Partner Support Service (PSS) is a Cisco Collaborative Services service offering that is
designed for partners to deliver their own branded support and managed services to enterprise
customers. Cisco PSS provides partners with access to Cisco's support infrastructure and assets
to help them:
■ Expand their service portfolios to support the most complex network environments
■ Lower delivery costs
■ Deliver services that increase customer loyalty
PSS options enable eligible Cisco partners to develop and consistently deliver high-value
technical support that capitalizes on Cisco intellectual assets. This helps partners to realize
higher margins and expand their practice.
PSS provides hardware and software support, including triage support for third party software,
backed by Cisco technical resources and level three support. You can choose a desired service
listed in Table 22.
Combined Services makes it easier to purchase and manage required services under one
contract. The more benefits you realize from the Cisco HyperFlex System, the more important
the technology becomes to your business. These services allow you to:
With the Cisco Drive Retention Service, you can obtain a new disk drive in exchange for a faulty
drive without returning the faulty drive.
Sophisticated data recovery techniques have made classified, proprietary, and confidential
information vulnerable, even on malfunctioning disk drives. The Drive Retention service enables
you to retain your drives and ensures that the sensitive data on those drives is not compromised,
which reduces the risk of any potential liabilities. This service also enables you to comply with
regulatory, local, and federal requirements.
If your company has a need to control confidential, classified, sensitive, or proprietary data, you
might want to consider one of the Drive Retention Services listed in the above tables (where
available).
NOTE: Cisco does not offer a certified drive destruction service as part of this
service.
Where available, and subject to an additional fee, local language support for calls on all assigned
severity levels may be available for specific product(s) – see tables above.
For a complete listing of available services for Cisco HyperFlex System, see the following URL:
https://www.cisco.com/c/en/us/services/technical.html?stickynav=1
SUPPLEMENTAL MATERIAL
Hyperconverged Systems
Cisco HyperFlex Systems let you unlock the full potential of hyperconvergence and adapt IT to the needs of
your workloads. The systems use an end-to-end software-defined infrastructure approach, combining
software-defined computing in the form of Cisco HyperFlex HX-Series nodes; software-defined storage with
the powerful Cisco HX Data Platform; and software-defined networking with the Cisco UCS fabric that will
integrate smoothly with Cisco Application Centric Infrastructure (Cisco ACI). Together with a single point of
connectivity and management, these technologies deliver a preintegrated and adaptable cluster with a
unified pool of resources that you can quickly deploy, adapt, scale, and manage to efficiently power your
applications and your business.
CHASSIS
An internal view of the HX220c M5 Node chassis with the top cover removed is shown in Figure 6.
1   Drive bays 1–10 (hot-swappable)
2   Cooling fan modules (seven)
3   N/A
4   DIMM sockets on motherboard (up to 12 per CPU; 24 total)
5   CPUs and heatsinks (up to two)
6   Mini-storage module connector (for M.2 module with SATA M.2 SSD slots)
7   Internal USB 3.0 port on motherboard
8   RTC battery vertical socket on motherboard
9   Power supplies (hot-swappable when redundant as 1+1)
10  Trusted platform module (TPM) socket on motherboard (not visible in this view)
11  PCIe slot 2 (half-height, x16); includes PCIe cable connector for SFF NVMe SSDs (x8)
12  PCIe slot 1 (full-height, x16); includes socket for micro-SD card
13  Modular LOM (mLOM) card bay on chassis floor (x16) (not visible in this view)
14  Cisco 12 Gbps Modular SAS HBA controller card
15  PCIe cable connectors for front-panel NVMe SSDs on PCIe riser 2
16  Micro-SD card socket on PCIe riser 1
Block Diagram
Figure 7 HX220c M5 SFF Block Diagram
Pin Signal
1 RTS (Request to Send)
2 DTR (Data Terminal Ready)
3 TxD (Transmit Data)
4 GND (Signal Ground)
5 GND (Signal Ground)
6 RxD (Receive Data)
7 DSR (Data Set Ready)
8 CTS (Clear to Send)
RACKS
The Cisco R42612 rack is certified for Cisco UCS installation at customer sites and is suitable for the
following equipment:
The rack is compatible with hardware designed for EIA-standard 19-inch racks. See the Cisco
RP-Series Rack and Rack PDU specification for more details at
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/r-series-racks/rack-
pdu-specsheet.pdf
PDUs
Cisco RP Series Power Distribution Units (PDUs) offer power distribution with branch circuit protection.
Cisco RP Series PDU models distribute power to up to 42 outlets. The architecture organizes power
distribution, simplifies cable management, and enables you to move, add, and change rack equipment
without an electrician.
With a Cisco RP Series PDU in the rack, you can replace up to two dozen input power cords with just one.
The fixed input cord connects to the power source from overhead or under-floor distribution. Your IT
equipment is then powered by PDU outlets in the rack using short, easy-to-manage power cords.
The C-Series servers accept the zero-rack-unit (0RU) or horizontal PDU. See Cisco RP-Series Rack and Rack
PDU specification for more details at
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/r-series-racks/rack-
pdu-specsheet.pdf
KVM CABLE
The KVM cable provides a connection into the server, providing a DB9 serial connector, a VGA connector for
a monitor, and dual USB ports for a keyboard and mouse. With this cable, you can create a direct
connection to the operating system and the BIOS running on the server.
TECHNICAL SPECIFICATIONS
Dimensions and Weight
Parameter Value
Height 1.7 in. (4.32 cm)
Width 16.89 in. (43.0 cm); including handles: 18.98 in. (48.2 cm)
Depth 29.8 in. (75.6 cm); including handles: 30.98 in. (78.7 cm)
Front Clearance 3 in. (76 mm)
Side Clearance 1 in. (25 mm)
Rear Clearance 6 in. (152 mm)
Weight
Maximum (8 HDDs, 2 CPUs, 16 DIMMs, two power supplies) 37.5 lbs (17.0 kg)
Minimum (1 HDD, 1 CPU, 1 DIMM, one power supply) 29.0 lbs (13.2 kg)
Bare (0 HDD, 0 CPU, 0 DIMM, one power supply) 26.7 lbs (12.1 kg)
Power Specifications
The server is available with four types of power supplies: 770 W (AC), 1050 W (AC), 1050 W (DC), and
1600 W (AC).
770 W (AC) power supply
Parameter Specification
Input Connector IEC320 C14
Input Voltage Range (V rms) 100 to 240
Maximum Allowable Input Voltage Range (V rms) 90 to 264
Frequency Range (Hz) 50 to 60
Maximum Allowable Frequency Range (Hz) 47 to 63
Maximum Rated Output (W) 770
Maximum Rated Standby Output (W) 36
Nominal Input Voltage (V rms) 100 120 208 230
Nominal Input Current (A rms) 8.8 7.4 4.2 3.8
Maximum Input at Nominal Input Voltage (W) 855 855 855 846
Maximum Input at Nominal Input Voltage (VA) 882 882 882 872
Minimum Rated Efficiency (%) (Note 1) 90 90 90 91
Minimum Rated Power Factor (Note 1) 0.97 0.97 0.97 0.97
Maximum Inrush Current (A peak) 15
Maximum Inrush Current (ms) 0.2
Minimum Ride-Through Time (ms) (Note 2) 12
Notes:
1. This is the minimum rating required to achieve 80 PLUS Platinum certification; see test reports published at
http://www.80plus.org/ for certified values.
2. Time the output voltage remains within regulation limits at 100% load during input voltage dropout.
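As a sanity check, the input figures in the 770 W (AC) table above follow from the output rating, efficiency, and power factor. The sketch below uses standard power-supply arithmetic; the relationships themselves are assumptions, not taken from the spec sheet:

```python
# Sanity-check the 770 W (AC) power-supply table using standard PSU
# relationships (assumed, not stated in the spec sheet):
#   input W  = rated output W / efficiency
#   input VA = input W / power factor
#   input A  = input VA / input V

rated_output_w = 770        # Maximum Rated Output (W)
efficiency = 0.90           # Minimum Rated Efficiency at 100-208 V
power_factor = 0.97         # Minimum Rated Power Factor

input_w = rated_output_w / efficiency      # ~855.6 W (table: 855 W)
input_va = input_w / power_factor          # ~882.0 VA (table: 882 VA)
input_a_at_100v = input_va / 100           # ~8.8 A (table: 8.8 A at 100 V)

print(input_w, input_va, input_a_at_100v)
```

The same arithmetic reproduces the 120 V and 208 V rows; the 230 V row uses the 91% efficiency figure.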
1050 W (AC) power supply
Parameter Specification
Input Connector IEC320 C14
Input Voltage Range (V rms) 100 to 240
Maximum Allowable Input Voltage Range (V rms) 90 to 264
Frequency Range (Hz) 50 to 60
Notes:
1. Maximum rated output is limited to 800 W when operating at low-line input voltage (100-127 V).
2. This is the minimum rating required to achieve 80 PLUS Platinum certification; see test reports published at
http://www.80plus.org/ for certified values.
3. Time the output voltage remains within regulation limits at 100% load during input voltage dropout.
1050 W (DC) power supply
Parameter Specification
Input Connector Molex 42820
Input Voltage Range (V rms) -48
Maximum Allowable Input Voltage Range (V rms) -40 to -72
Frequency Range (Hz) NA
Maximum Allowable Frequency Range (Hz) NA
Maximum Rated Output (W) 1050
Maximum Rated Standby Output (W) 36
Nominal Input Voltage (V rms) -48
Nominal Input Current (A rms) 24
Maximum Input at Nominal Input Voltage (W) 1154
Maximum Input at Nominal Input Voltage (VA) 1154
Minimum Rated Efficiency (%) (Note 1) 91
Minimum Rated Power Factor (Note 1) NA
Maximum Inrush Current (A peak) 15
Maximum Inrush Current (ms) 0.2
Minimum Ride-Through Time (ms) (Note 2) 5
Notes:
1. This is the minimum rating required to achieve 80 PLUS Platinum certification; see test reports published at
http://www.80plus.org/ for certified values.
2. Time the output voltage remains within regulation limits at 100% load during input voltage dropout.
1600 W (AC) power supply
Parameter Specification
Input Connector IEC320 C14
Input Voltage Range (V rms) 200 to 240
Maximum Allowable Input Voltage Range (V rms) 180 to 264
Frequency Range (Hz) 50 to 60
Maximum Allowable Frequency Range (Hz) 47 to 63
Maximum Rated Output (W) (Note 1) 1600
Maximum Rated Standby Output (W) 36
Nominal Input Voltage (V rms) 100 120 208 230
Nominal Input Current (A rms) NA NA 8.8 7.9
Maximum Input at Nominal Input Voltage (W) NA NA 1778 1758
Maximum Input at Nominal Input Voltage (VA) NA NA 1833 1813
Minimum Rated Efficiency (%) (Note 2) NA NA 90 91
Minimum Rated Power Factor (Note 2) NA NA 0.97 0.97
Maximum Inrush Current (A peak) 30
Maximum Inrush Current (ms) 0.2
Minimum Ride-Through Time (ms) (Note 3) 12
Notes:
1. Maximum rated output is limited to 800 W when operating at low-line input voltage (100-127 V).
2. This is the minimum rating required to achieve 80 PLUS Platinum certification; see test reports published at
http://www.80plus.org/ for certified values.
3. Time the output voltage remains within regulation limits at 100% load during input voltage dropout.
For configuration-specific power specifications, use the Cisco UCS Power Calculator at this URL:
http://ucspowercalc.cisco.com
Environmental Specifications
The environmental specifications for the HX220c M5 server are listed in Table 33.
Parameter Minimum
Operating Temperature 10°C to 35°C (50°F to 95°F) with no direct sunlight
Maximum allowable operating temperature de-rated
1°C/300 m (1°F/547 ft) above 950 m (3117 ft)
Extended Operating Temperature 5°C to 40°C (41°F to 104°F) with no direct sunlight
Maximum allowable operating temperature de-rated
1°C/175 m (1°F/319 ft) above 950 m (3117 ft)
5°C to 45°C (41°F to 113°F) with no direct sunlight
Maximum allowable operating temperature de-rated
1°C/125 m (1°F/228 ft) above 950 m (3117 ft)
System performance may be impacted when operating in the
extended operating temperature range.
Operation above 40°C is limited to less than 1% of annual
operating hours.
Hardware configuration limits apply to the extended
operating temperature range.
Non-Operating Temperature -40°C to 65°C (-40°F to 149°F)
Maximum rate of change (operating and non-operating):
20°C/hr (36°F/hr)
Operating Relative Humidity 8% to 90% and 24°C (75°F) maximum dew-point temperature,
non-condensing environment
Non-Operating Relative Humidity 5% to 95% and 33°C (91°F) maximum dew-point temperature,
non-condensing environment
Operating Altitude 0 m to 3050 m (10,000 ft)
Sound Power Level (A-weighted per ISO7779, LWAd, in Bels), measured during operation at 73°F (23°C): 5.8
Sound Pressure Level (A-weighted per ISO7779, LpAm, in dBA), measured during operation at 73°F (23°C): 43
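The altitude de-rating rules in the environmental table above reduce to simple arithmetic. A minimal sketch (the function name and parameter defaults are illustrative; the 35°C ceiling, 950 m threshold, and 1°C/300 m rate are the standard-range figures from the table):

```python
def max_operating_temp_c(altitude_m, base_temp_c=35.0,
                         m_per_deg_c=300.0, threshold_m=950.0):
    """De-rated maximum operating temperature at a given altitude.

    Defaults model the standard operating range (35 C ceiling,
    de-rated 1 C per 300 m above 950 m). For the first extended
    range, pass base_temp_c=40.0, m_per_deg_c=175.0.
    """
    if altitude_m <= threshold_m:
        return base_temp_c
    return base_temp_c - (altitude_m - threshold_m) / m_per_deg_c

print(max_operating_temp_c(950))    # 35.0 (no de-rating at or below 950 m)
print(max_operating_temp_c(2150))   # 31.0 (1200 m above threshold -> -4 C)
print(max_operating_temp_c(3050))   # 28.0 (at the 3050 m altitude limit)
```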
Compliance Requirements
The regulatory compliance requirements for HX220c M5 servers are listed in Table 35.
Parameter Description
Regulatory Compliance Products should comply with CE marking per Directives
2014/30/EU and 2014/35/EU
Safety UL 60950-1 Second Edition
CAN/CSA-C22.2 No. 60950-1 Second Edition
EN 60950-1 Second Edition
IEC 60950-1 Second Edition
AS/NZS 60950-1
GB4943-2001
EMC - Emissions 47CFR Part 15 (CFR 47) Class A
AS/NZS CISPR32 Class A
CISPR32 Class A
EN55032 Class A
ICES003 Class A
VCCI Class A
EN61000-3-2
EN61000-3-3
KN32 Class A
CNS13438 Class A
EMC - Immunity EN55024
CISPR24
EN300386
KN35