SR650 Maintenance Manual
Before using this information and the product it supports, be sure to read and understand the safety
information and the safety instructions, which are available at:
http://thinksystem.lenovofiles.com/help/topic/safety_documentation/pdf_files.html
In addition, ensure that you are familiar with the terms and conditions of the Lenovo warranty for your server,
which can be found at:
http://datacentersupport.lenovo.com/warrantylookup
Before installing this product, read the safety information in Safety Information.
Notes:
1. The product is not suitable for use at visual display workplaces according to §2 of the Workplace
Regulations.
2. The server is to be set up in a server room only.
CAUTION:
This equipment must be installed or serviced by trained personnel, as defined by the NEC, IEC 62368-1, and IEC 60950-1, the standards for safety of electronic equipment within the field of audio/video, information technology, and communication technology. Lenovo assumes you are qualified in the servicing of equipment and trained in recognizing hazardous energy levels in products. Access to the equipment is by the use of a tool, lock and key, or other means of security, and is controlled by the authority responsible for the location.
Important: Electrical grounding of the server is required for operator safety and correct system function.
Proper grounding of the electrical outlet can be verified by a certified electrician.
Use the following checklist to verify that there are no potentially unsafe conditions:
1. Make sure that the power is off and the power cord is disconnected.
2. Check the power cord.
• Make sure that the third-wire ground connector is in good condition. Use a meter to measure third-wire ground continuity for 0.1 ohm or less between the external ground pin and the frame ground.
• Make sure that the power cord is the correct type.
To view the power cords that are available for the server:
a. Go to:
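The continuity check in step 2 can be sketched as a simple threshold test. This is an illustrative helper, not part of any Lenovo tool; the function name and the idea of feeding it a meter reading in ohms are assumptions, while the 0.1-ohm limit comes from the checklist above.

```python
# Sketch of the step-2 power-cord ground check. The 0.1-ohm limit is
# from the checklist above; the function name is illustrative only.
GROUND_CONTINUITY_LIMIT_OHMS = 0.1

def ground_check_passes(measured_ohms: float) -> bool:
    """Return True if third-wire ground continuity is 0.1 ohm or less."""
    return measured_ohms <= GROUND_CONTINUITY_LIMIT_OHMS

print(ground_check_passes(0.05))  # reading within limit -> True
print(ground_check_passes(0.3))   # reading out of limit -> False
```

Remember that, per the checklist, proper grounding of the electrical outlet itself should be verified by a certified electrician, not by this check alone.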
Performance, ease of use, reliability, and expansion capabilities were key considerations in the design of the
server. These design features make it possible for you to customize the system hardware to meet your needs
today and provide flexible expansion capabilities for the future.
The server comes with a limited warranty. For details about the warranty, see:
https://support.lenovo.com/us/en/solutions/ht503310
The machine type and serial number are on the ID label on the right rack latch in the front of the server.
Figure 3. QR code
Dimension
• 2U
• Height: 86.5 mm (3.4 inches)
• Width:
  – With rack latches: 482.0 mm (19.0 inches)
  – Without rack latches: 444.6 mm (17.5 inches)
• Depth: 763.7 mm (30.1 inches)
Note: The depth is measured with rack latches installed, but without the security bezel installed.
Weight
Up to 32.0 kg (70.6 lb), depending on the server configuration
For a list of supported processors, see:
https://static.lenovo.com/us/en/serverproven/index.shtml
Notes:
• Intel Xeon 6137, 6242R, 6246R, 6248R, 6250, 6256, or 6258R processor is
supported only when the following requirements are met:
– The server chassis is the twenty-four 2.5-inch-bay chassis.
– The operating temperature is equal to or less than 30°C.
– Up to eight drives are installed in the drive bays 8–15.
• Intel Xeon 6144, 6146, 8160T, 6126T, 6244, and 6240Y processors, or processors with a TDP of 200 watts or 205 watts (excluding 6137, 6242R, 6246R, 6248R, 6250, 6256, and 6258R), are supported only when the following requirements are met:
– The server chassis is the twenty-four 2.5-inch-bay chassis.
– Up to eight drives are installed in the drive bays 8–15 if the operating
temperature is equal to or less than 35°C, or up to sixteen drives are installed in
the drive bays 0–15 if the operating temperature is equal to or less than 30°C.
• For server models with sixteen/twenty/twenty-four NVMe drives, two processors
are needed, and the maximum supported processor TDP is 165 watts.
• For server models with twenty-four 2.5-inch and twelve 3.5-inch-drive bays, if Intel Xeon 6144 and 6146 processors are installed, the operating temperature must be equal to or less than 27°C.
• Intel Xeon 6154, 8168, 8180, and 8180M processors support the following server models: eight 3.5-inch-drive bays, eight 2.5-inch-drive bays, or sixteen 2.5-inch-drive bays. For server models with sixteen 2.5-inch and eight 3.5-inch drive bays, the operating temperature must be equal to or less than 30°C.
• Intel Xeon 6246, 6230T, and 6252N processors support the following server models: eight 3.5-inch-drive bays, eight 2.5-inch-drive bays, or sixteen 2.5-inch-drive bays.
• If two TruDDR4 2933, 128 GB 3DS RDIMMs are installed in one channel, the
operating temperature is equal to or less than 30°C.
Chapter 1. Introduction 3
Table 1. Server specifications (continued)
Specification Description
Memory For 1st Generation Intel Xeon Scalable Processor (Intel Xeon SP Gen 1):
• Slots: 24 memory module slots
• Minimum: 8 GB
• Maximum:
– 768 GB using registered DIMMs (RDIMMs)
– 1.5 TB using load-reduced DIMMs (LRDIMMs)
– 3 TB using three-dimensional stack registered DIMMs (3DS RDIMMs)
• Type (depending on the model):
– TruDDR4 2666, single-rank or dual-rank, 8 GB/16 GB/32 GB RDIMM
– TruDDR4 2666, quad-rank, 64 GB LRDIMM
– TruDDR4 2666, octa-rank, 128 GB 3DS RDIMM
For 2nd Generation Intel Xeon Scalable Processor (Intel Xeon SP Gen 2):
• Slots: 24 DIMM slots
• Minimum: 8 GB
• Maximum:
– 1.5 TB using RDIMMs
– 3 TB using 3DS RDIMMs
– 6 TB using DC Persistent Memory Module (DCPMM) and RDIMMs/3DS
RDIMMs in Memory Mode
• Type (depending on the model):
– TruDDR4 2666, single-rank or dual-rank, 16 GB/32 GB RDIMM
– TruDDR4 2933, single-rank or dual-rank, 8 GB/16 GB/32 GB/64 GB RDIMM
– TruDDR4 2933, single-rank or dual-rank, 16 GB/32 GB/64 GB Performance+
RDIMM
– TruDDR4 2666, quad-rank, 64 GB 3DS RDIMM
– TruDDR4 2933, quad-rank, 128 GB 3DS RDIMM
– TruDDR4 2933, quad-rank, 128 GB Performance+ 3DS RDIMM
– 128 GB/256 GB/512 GB DCPMM
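The RDIMM and 3DS RDIMM maximums above follow directly from the 24-slot count. A quick sketch, using module sizes from the type list above (the helper name is illustrative, not from any Lenovo tool):

```python
# Arithmetic behind the listed maximums: all 24 DIMM slots populated
# with the largest supported module of a given type.
SLOTS = 24

def max_capacity_tb(module_gb: int, slots: int = SLOTS) -> float:
    """Total capacity in TB (1 TB = 1024 GB) with every slot populated."""
    return slots * module_gb / 1024

print(max_capacity_tb(64))   # 24 x 64 GB RDIMMs -> 1.5 TB
print(max_capacity_tb(128))  # 24 x 128 GB 3DS RDIMMs -> 3.0 TB
```

The 6 TB DCPMM figure is not a simple single-module product, because it mixes DCPMMs with RDIMMs/3DS RDIMMs in Memory Mode.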
Notes:
• A memory dummy is required when any of the following hardware configuration requirements is met:
– Processors with TDP more than 125 watts installed
– Any of following processors installed: 5122, 8156, 6128, 6126, 4112, 5215,
5217, 5222, 8256, 6226, 4215, 4114T, 5119T, 5120T, 4109T, 4116T, 6126T,
6130T, 6138T, 5218T, 6238T
– GPU installed
– Server model: twenty-four 2.5-inch-drive bays, twelve 3.5-inch-drive bays
(except for Chinese Mainland)
• For server models with processors with a TDP of less than 125 watts installed and no memory dummy installed, memory performance might be degraded if one fan fails.
• Operating speed and total memory capacity depend on the processor model and
UEFI settings.
• For a list of supported memory modules, see:
https://static.lenovo.com/us/en/serverproven/index.shtml
Graphics processing unit (GPU)
Your server supports the following GPUs or processing adapters:
• Full-height, full-length, double-slot GPUs or processing adapters: AMD MI25, AMD V340, NVIDIA® M10, NVIDIA M60, NVIDIA P40, NVIDIA P100, NVIDIA P6000, NVIDIA RTX5000, NVIDIA RTX A6000, NVIDIA V100, NVIDIA V100S, NVIDIA A100, A16, and A30.
• Full-height, full-length, single-slot GPU: NVIDIA P4000, NVIDIA RTX4000, and
Cambricon MLU100-C3
• Full-height, half-length, single-slot GPU: NVIDIA V100, NVIDIA A10
• Half-height, half-length, single-slot GPU: NVIDIA A2
• Low-profile, half-length, single-slot GPUs: NVIDIA P4, NVIDIA P600, NVIDIA P620, NVIDIA T4, and Cambricon MLU270-S4
Note: The NVIDIA V100 GPU comes in two form factors: full-height, full-length (FHFL) and full-height, half-length (FHHL). Hereinafter, the full-height, full-length V100 GPU is called the FHFL V100 GPU; the full-height, half-length V100 GPU is called the FHHL V100 GPU.
• If three NVIDIA P4 GPUs are installed in PCIe slot 1, PCIe slot 5, and PCIe slot 6 at the same time, the operating temperature must be equal to or less than 35°C.
• If up to five NVIDIA P4 GPUs are installed, the server models support no more
than eight 2.5-inch hot-swap SAS/SATA/NVMe drives and the operating
temperature must be equal to or less than 35°C.
• For server models installed with FHHL V100 GPU, NVIDIA T4 or Cambricon
MLU270-S4 GPU, the operating temperature must be equal to or less than 30°C.
• If one NVIDIA T4 or Cambricon MLU270-S4 GPU is installed, install it in slot 1.
• For server models installed with one CPU, if two NVIDIA T4 or Cambricon
MLU270-S4 GPUs are installed, install in slot 1 and slot 2. For server models
installed with two CPUs, if two NVIDIA T4 or Cambricon MLU270-S4 GPUs are
installed, install in slot 1 and slot 5.
• For server models installed with one CPU, if three NVIDIA T4 or Cambricon
MLU270-S4 GPUs are installed, install in slot 1, slot 2 and slot 3. For server
models installed with two CPUs, if three NVIDIA T4 or Cambricon MLU270-S4
GPUs are installed, install in slot 1, slot 5 and slot 6.
• Four NVIDIA T4 or Cambricon MLU270-S4 GPUs are supported only for server
models installed with two CPUs, and installed in slot 1, slot 2, slot 5, and slot 6.
• Five NVIDIA T4 or Cambricon MLU270-S4 GPUs are supported only for server
models installed with two CPUs, and installed in slot 1, slot 2, slot 3, slot 5, and
slot 6.
• NVIDIA T4 GPU cannot be mixed with NVIDIA A2 GPU.
• If an NVIDIA P600, NVIDIA P620, NVIDIA P4000, NVIDIA RTX4000, NVIDIA P6000, NVIDIA RTX A6000, or NVIDIA RTX5000 GPU is installed, the fan redundancy function is not supported. If one fan fails, power off the system immediately to prevent the GPU from overheating, and replace the fan with a new one.
• Cambricon MLU100-C3 processing adapter supports CentOS 7.6 when used in
combination with Intel Xeon SP Gen 2, and supports CentOS 7.5 when used in
combination with Intel Xeon SP Gen 1.
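The NVIDIA T4 / Cambricon MLU270-S4 slot-placement rules above can be encoded as a lookup table. This is an illustrative sketch, not a Lenovo API; the function name and the table layout are assumptions, while the slot assignments come directly from the list above.

```python
# Placement rules from the list above, keyed by (gpu_count, cpu_count).
# Combinations absent from the table are unsupported per the rules
# (e.g. four or five of these GPUs require two CPUs).
PLACEMENT = {
    (1, 1): [1],
    (1, 2): [1],
    (2, 1): [1, 2],
    (2, 2): [1, 5],
    (3, 1): [1, 2, 3],
    (3, 2): [1, 5, 6],
    (4, 2): [1, 2, 5, 6],
    (5, 2): [1, 2, 3, 5, 6],
}

def t4_slots(gpu_count: int, cpu_count: int) -> list:
    """Return the PCIe slots to use, or raise if the combination is unsupported."""
    try:
        return PLACEMENT[(gpu_count, cpu_count)]
    except KeyError:
        raise ValueError(
            f"{gpu_count} GPU(s) with {cpu_count} CPU(s) is not a supported combination"
        ) from None

print(t4_slots(3, 2))  # two-CPU, three-GPU case -> [1, 5, 6]
```

Note that the operating-temperature and TDP restrictions listed elsewhere in this table still apply on top of the slot placement.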
A GPU is supported only when the following hardware configuration requirements are met at the same time:
• Server model: eight 3.5-inch-drive bays, eight 2.5-inch-drive bays, or sixteen 2.5-
inch-drive bays
• Processor: High Tcase type; TDP less than or equal to 150 watts
Notes:
– For server models with eight 2.5-inch-drive bays, if the server is installed with
GPUs (except for GPU model NVIDIA P4, NVIDIA T4, NVIDIA V100 FHHL,
NVIDIA P600, NVIDIA P620, NVIDIA P4000, NVIDIA RTX4000, NVIDIA P6000,
NVIDIA RTX A6000, and NVIDIA RTX5000) and the operating temperature is
equal to or less than 30°C, the TDP should be less than or equal to 165 watts.
– For server models with eight 3.5-inch-drive bays or sixteen 2.5-inch-drive bays,
if server is installed with NVIDIA T4 or Cambricon MLU270-S4 GPU, the TDP
should be less than or equal to 150 watts.
– For server models with eight 2.5-inch-drive bays, if the server is installed with up to four NVIDIA T4 or Cambricon MLU270-S4 GPUs, the TDP can be more than 150 watts; if the server is installed with five NVIDIA T4 or Cambricon MLU270-S4 GPUs, the TDP should be less than or equal to 150 watts.
• Drive: no more than four NVMe drives installed, and no PCIe NVMe add-in-card
(AIC) installed.
• Power supply: for one GPU, 1100-watt or 1600-watt power supplies installed; for
two or three GPUs, 1600-watt power supplies installed
RAID adapters (depending on the model)
• Onboard SATA ports with software RAID support (Intel VROC SATA RAID, formerly known as Intel RSTe)
CAUTION:
• 240 V dc input (input range: 180-300 V dc) is supported in Chinese Mainland ONLY. Power supplies with 240 V dc input do not support hot-plugging of the power cord. Before removing a power supply with dc input, turn off the server or disconnect the dc power sources at the breaker panel or by turning off the power source. Then, remove the power cord.
• For ThinkSystem products to operate error-free in either a dc or ac electrical environment, a TN-S earthing system that complies with the IEC 60364-1 (2005) standard must be present or installed.
Important information about the air baffle and GPU
There are two types of air baffle for your server. Depending on the GPU model, select the appropriate air
baffle for your server.
Notes:
• For server models without GPU installed, select the standard air baffle.
• Before installing the large-size air baffle, ensure that the height of the installed heat sinks is 1U to leave
adequate space for installing the large-size air baffle.
Risks that are posed by the presence of excessive particulate levels or concentrations of harmful gases
include damage that might cause the device to malfunction or cease functioning altogether. This
specification sets forth limits for particulates and gases that are intended to avoid such damage. The limits
must not be viewed or used as definitive limits, because numerous other factors, such as temperature or
moisture content of the air, can influence the impact of particulates or environmental corrosives and gaseous
contaminant transfer. In the absence of specific limits that are set forth in this document, you must
implement practices that maintain particulate and gas levels that are consistent with the protection of human
health and safety. If Lenovo determines that the levels of particulates or gases in your environment have
caused damage to the device, Lenovo may condition provision of repair or replacement of devices or parts
on implementation of appropriate remedial measures to mitigate such environmental contamination.
Implementation of such remedial measures is a customer responsibility.
Contaminant Limits
Reactive gases
Severity level G1 as per ANSI/ISA 71.04-1985¹:
• The copper reactivity level shall be less than 200 Angstroms per month (Å/month ≈ 0.0035 μg/cm²-hour weight gain).²
• The silver reactivity level shall be less than 200 Angstroms per month (Å/month ≈ 0.0035 μg/cm²-hour weight gain).³
• The reactive monitoring of gaseous corrosivity must be conducted approximately 5 cm (2 in.) in front of the rack on the air inlet side at one-quarter and three-quarter frame height off the floor, or where the air velocity is much higher.
Airborne particulates
Data centers must meet the cleanliness level of ISO 14644-1 class 8.
For data centers without an airside economizer, the ISO 14644-1 class 8 cleanliness might be met by choosing one of the following filtration methods:
• The room air might be continuously filtered with MERV 8 filters.
• Air entering a data center might be filtered with MERV 11 or preferably MERV 13 filters.
For data centers with airside economizers, the choice of filters to achieve ISO class 8 cleanliness depends on the specific conditions present at that data center.
• The deliquescent relative humidity of the particulate contamination should be more than 60% RH.⁴
• Data centers must be free of zinc whiskers.⁵
¹ ANSI/ISA-71.04-1985. Environmental conditions for process measurement and control systems: Airborne contaminants. Instrument Society of America, Research Triangle Park, North Carolina, U.S.A.
² The derivation of the equivalence between the rate of copper corrosion growth in the thickness of the corrosion product in Å/month and the rate of weight gain assumes that Cu₂S and Cu₂O grow in equal proportions.
³ The derivation of the equivalence between the rate of silver corrosion growth in the thickness of the corrosion product in Å/month and the rate of weight gain assumes that Ag₂S is the only corrosion product.
⁴ The deliquescent relative humidity of particulate contamination is the relative humidity at which the dust absorbs enough water to become wet and promote ionic conduction.
⁵ Surface debris is randomly collected from 10 areas of the data center on a 1.5 cm diameter disk of sticky electrically conductive tape on a metal stub. If examination of the sticky tape in a scanning electron microscope reveals no zinc whiskers, the data center is considered free of zinc whiskers.
Firmware updates
Several options are available to update the firmware for the server.
You can use the tools listed here to install the most current firmware for your server and the devices that are installed in the server.
http://lenovopress.com/LP0656
http://datacentersupport.lenovo.com/products/servers/thinksystem/sr650/7X05/downloads
See the following table to determine the best Lenovo tool to use for installing and setting up the firmware:
Each tool is compared on the following attributes: update methods supported, core system firmware updates, I/O devices firmware updates, graphical user interface, command-line interface, and support for UXSPs.
• Lenovo XClarity Provisioning Manager (LXPM)
  – Update methods supported: In-band², On-Target
  – Core system firmware updates: √
  – Graphical user interface: √
• Lenovo XClarity Integrator (LXCI) for Microsoft System Center Configuration Manager
  – Update methods supported: In-band, On-Target
  – Core system firmware updates: √
  – I/O devices firmware updates: All I/O devices¹
  – Graphical user interface: √
  – Supports UXSPs: √
Notes:
1. For I/O firmware updates.
2. For BMC and UEFI firmware updates.
Note: By default, the Lenovo XClarity Provisioning Manager Graphical User Interface is displayed when
you press F1. If you have changed that default to be the text-based system setup, you can bring up the
Graphical User Interface from the text-based system setup interface.
Additional information about using Lenovo XClarity Provisioning Manager to update firmware is available
at:
http://sysmgt.lenovofiles.com/help/topic/LXPM/platform_update.html
• Lenovo XClarity Controller
If you need to install a specific update, you can use the Lenovo XClarity Controller interface for a specific
server.
Notes:
– To perform an in-band update through Windows or Linux, the operating system driver must be installed
and the Ethernet-over-USB (sometimes called LAN over USB) interface must be enabled.
Additional information about configuring Ethernet over USB is available at:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/NN1ia_c_
configuringUSB.html
– If you update firmware through the Lenovo XClarity Controller, make sure that you have downloaded
and installed the latest device drivers for the operating system that is running on the server.
Specific details about updating firmware using Lenovo XClarity Controller are available at:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/NN1ia_c_
manageserverfirmware.html
• Lenovo XClarity Essentials OneCLI
Lenovo XClarity Essentials OneCLI is a collection of command-line applications that can be used to manage Lenovo servers. Its update application can be used to update firmware and device drivers for your
servers. The update can be performed within the host operating system of the server (in-band) or remotely
through the BMC of the server (out-of-band).
Specific details about updating firmware using Lenovo XClarity Essentials OneCLI are available at:
http://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_lenovo/onecli_c_update.html
• Lenovo XClarity Essentials UpdateXpress
Tech Tips
Lenovo continually updates the support Web site with the latest tips and techniques that you can use to
solve issues that you might have with your server. These Tech Tips (also called retain tips or service bulletins)
provide procedures to work around issues related to the operation of your server.
Security advisories
Lenovo is committed to developing products and services that adhere to the highest security standards in
order to protect our customers and their data. When potential vulnerabilities are reported, it is the
responsibility of the Lenovo Product Security Incident Response Team (PSIRT) to investigate and provide
information to our customers so they may put mitigation plans in place as we work toward providing
solutions.
The list of current advisories is available at the following location:
https://datacentersupport.lenovo.com/product_security/home
The server can be turned on (power LED on) in any of the following ways:
• You can press the power button.
• The server can restart automatically after a power interruption.
• The server can respond to remote power-on requests sent to the Lenovo XClarity Controller.
For information about powering off the server, see “Turn off the server” on page 18.
To place the server in a standby state (power status LED flashes once per second):
Note: The Lenovo XClarity Controller can place the server in a standby state as an automatic response to a
critical system failure.
• Start an orderly shutdown using the operating system (if supported by your operating system).
• Press the power button to start an orderly shutdown (if supported by your operating system).
• Press and hold the power button for more than 4 seconds to force a shutdown.
When in a standby state, the server can respond to remote power-on requests sent to the Lenovo XClarity
Controller. For information about powering on the server, see “Turn on the server” on page 18.
Front view
The front view of the server varies by model.
The illustrations in this topic show the server front views based on the supported drive bays.
Notes:
• Your server might look different from the illustrations in this topic.
• The chassis for sixteen 2.5-inch-drive bays cannot be upgraded to the chassis for twenty-four 2.5-inch-
drive bays.
Figure 4. Front view of server models with eight 2.5-inch drive bays (0–7)
Figure 5. Front view of server models with sixteen 2.5-inch drive bays (0–15)
Figure 7. Front view of server models with twenty-four 2.5-inch drive bays (0–23)
Figure 8. Front view of server models with eight 3.5-inch drive bays (0–7)
Figure 9. Front view of server models with twelve 3.5-inch drive bays (0–11)
1 Pull-out information tab
2 Front I/O assembly
The XClarity Controller network access label is attached on the top side of the pull-out information tab.
For information about the controls, connectors, and status LEDs on the front I/O assembly, see “Front I/O
assembly” on page 22.
3, 5 Rack latches
If your server is installed in a rack, you can use the rack latches to help you slide the server out of the rack.
You also can use the rack latches and screws to secure the server in the rack so that the server cannot slide
out, especially in vibration-prone areas. For more information, refer to the Rack Installation Guide that comes
with your rail kit.
4 Drive bays
The number of the installed drives in your server varies by model. When you install drives, follow the order of
the drive bay numbers.
The EMI integrity and cooling of the server are protected by having all drive bays occupied. The vacant drive
bays must be occupied by drive bay fillers or drive fillers.
6 VGA connector
Used to attach a high-performance monitor, a direct-drive monitor, or other devices that use a VGA connector.
7 Drive activity LED
• Solid green: The drive is powered but not active.
• Blinking yellow (blinking slowly, about one flash per second): The drive is being rebuilt.
• Blinking yellow (blinking rapidly, about four flashes per second): The RAID adapter is locating the drive.
The following illustrations show the controls, connectors, and LEDs on the front I/O assembly of the server.
To locate the front I/O assembly, see “Front view” on page 19.
Figure 10. Front I/O assembly for server models with eight 3.5-inch-drive bays, eight 2.5-inch-drive bays, and sixteen 2.5-
inch-drive bays
Figure 11. Front I/O assembly for server models with twelve 3.5-inch-drive bays and twenty-four 2.5-inch-drive bays
1 XClarity Controller USB connector
2 USB 3.0 connector
Depending on the setting, this connector supports USB 2.0 function, XClarity Controller management
function, or both.
• If the connector is set for USB 2.0 function, you can attach a device that requires a USB 2.0 connection,
such as a keyboard, a mouse, or a USB storage device.
• If the connector is set for XClarity Controller management function, you can attach a mobile device that has the Lenovo XClarity Mobile app installed to view the XClarity Controller event logs.
• If the connector is set to have both functions, you can press the system ID button for three seconds to
switch between the two functions.
2 USB 3.0 connector
Used to attach a device that requires a USB 2.0 or 3.0 connection, such as a keyboard, a mouse, or a USB storage device.
You can press the power button to turn on the server when you finish setting up the server. You also can hold
the power button for several seconds to turn off the server if you cannot turn off the server from the operating
system. The power status LED helps you to determine the current power status.
• Slow blinking green (about one flash per second): The server is off and is ready to be powered on (standby state).
• Fast blinking green (about four flashes per second): The server is off, but the XClarity Controller is initializing, and the server is not ready to be powered on.
The network activity LED on the front I/O assembly helps you identify the network connectivity and activity.
Use this system ID button and the blue system ID LED to visually locate the server. A system ID LED is also
located on the rear of the server. Each time you press the system ID button, the state of both the system ID
LEDs changes. The LEDs can be changed to on, blinking, or off. You can also use the Lenovo XClarity
Controller or a remote management program to change the state of the system ID LEDs to assist in visually
locating the server among other servers.
If the XClarity Controller USB connector is set to have both the USB 2.0 function and XClarity Controller
management function, you can press the system ID button for three seconds to switch between the two
functions.
The system error LED provides basic diagnostic functions for your server. If the system error LED is lit, one or
more LEDs elsewhere in the server might also be lit to direct you to the source of the error.
On (yellow): An error has been detected on the server. Causes might include, but are not limited to, the following errors:
• The temperature of the server reached the non-critical temperature threshold.
• The voltage of the server reached the non-critical voltage threshold.
• A fan has been detected to be running at low speed.
• A hot-swap fan has been removed.
• The power supply has a critical error.
• The power supply is not connected to the power.
Check the event log to determine the exact cause of the error. Alternatively, follow the light path diagnostics to determine whether additional LEDs are lit that will direct you to the cause of the error. For information about light path diagnostics, see “Light path diagnostics” on page 285.
Rear view
The rear of the server provides access to several connectors and components.
Figure 12. Rear view of server models with six PCIe slots
1 Ethernet connectors on the LOM adapter (available on some models)
2 XClarity Controller network connector
9 PCIe slot 6 (on riser 2)
10 PCIe slot 4 (with a serial port module installed on some models)
The LOM adapter provides two or four extra Ethernet connectors for network connections.
The leftmost Ethernet connector on the LOM adapter can be set as the XClarity Controller network connector. To set the Ethernet connector as the XClarity Controller network connector, start the Setup utility, go to BMC Settings ➙ Network Settings ➙ Network Interface Port and select Shared. Then, go to Shared NIC on and select PHY card.
2 XClarity Controller network connector
Used to attach an Ethernet cable to manage the system using the XClarity Controller.
3 VGA connector
Used to attach a high-performance monitor, a direct-drive monitor, or other devices that use a VGA
connector.
Used to attach a device that requires a USB 2.0 or 3.0 connection, such as a keyboard, a mouse, or a USB
storage device.
5 NMI button
Press this button to force a nonmaskable interrupt (NMI) to the processor. In this way, you can blue-screen the server and take a memory dump. You might have to use a pen or the end of a straightened paper clip to press the button.
The hot-swap redundant power supplies help you avoid significant interruption to the operation of the
system when a power supply fails. You can purchase a power supply option from Lenovo and install the
power supply to provide power redundancy without turning off the server.
On each power supply, there are three status LEDs near the power cord connector. For information about the
status LEDs, see “Rear view LEDs” on page 27.
8, 9, 10, 11, 12, 13 PCIe slots
You can find the PCIe slot numbers on the rear of the chassis.
Notes:
• Your server supports PCIe slot 5 and PCIe slot 6 when two processors are installed.
• Do not install PCIe adapters with small form factor (SFF) connectors in PCIe slot 6.
• Observe the following PCIe slot selection priority when installing an Ethernet card or a converged network adapter:
  – One processor: 4, 2, 3, 1
  – Two processors: 4, 2, 6, 3, 5, 1
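The slot selection priority above can be sketched as a small lookup. This is an illustrative helper, not a Lenovo tool; the function name and the idea of tracking occupied slots are assumptions, while the priority orders come from the note above.

```python
# Slot-priority orders from the note above, keyed by processor count.
SLOT_PRIORITY = {
    1: [4, 2, 3, 1],
    2: [4, 2, 6, 3, 5, 1],
}

def next_free_slot(processor_count, occupied):
    """Return the highest-priority free PCIe slot, or None if all are taken."""
    for slot in SLOT_PRIORITY[processor_count]:
        if slot not in occupied:
            return slot
    return None

print(next_free_slot(1, set()))   # empty server, one CPU -> slot 4 first
print(next_free_slot(2, {4, 2}))  # slots 4 and 2 taken -> slot 6 next
```

Keep in mind the other slot restrictions in this section still apply, for example that slots 5 and 6 require two processors.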
There are five different riser cards that can be installed in riser 1.
• Type 1
– Slot 1: PCIe x16 (x8, x4, x1), full-height, half-length/full-height, full-length
– Slot 2: PCIe x16 (x8, x4, x1), full-height, half-length/full-height, full-length
– Slot 3: PCIe x16 (x8, x4, x1), full-height, half-length
• Type 2
– Slot 1: PCIe x16 (x8, x4, x1), full-height, half-length/full-height, full-length
– Slot 2: PCIe x16 (x8, x4, x1), full-height, half-length/full-height, full-length
– Slot 3: ML2 x8 (x8, x4, x1), full-height, half-length
• Type 3
– Slot 1: PCIe x16 (x16, x8, x4, x1), full-height, half-length/full-height, full-length
– Slot 2: Not available
– Slot 3: PCIe x16 (x8, x4, x1), full-height, half-length
• Type 4
– Slot 1: PCIe x16 (x8, x4, x1), full-height, half-length/full-height, full-length
– Slot 2: Not available
– Slot 3: ML2 x16 (x16, x8, x4, x1), full-height, half-length
• Type 5
– Slot 1: PCIe x16 (x16, x8, x4, x1), full-height, half-length/full-height, full-length
Used to install up to two 3.5-inch hot-swap drives on the rear of the server. The rear 3.5-inch drive bays are
available on some models.
The number of the installed drives in your server varies by model. The EMI integrity and cooling of the server
are protected by having all drive bays occupied. The vacant drive bays must be occupied by drive bay fillers
or drive fillers.
1 System ID LED
The blue system ID LED helps you to visually locate the server. A system ID LED is also located on the front
of the server. Each time you press the system ID button, the state of both the system ID LEDs changes. The
LEDs can be changed to on, blinking, or off. You can also use the Lenovo XClarity Controller or a remote
management program to change the state of the system ID LEDs to assist in visually locating the server
among other servers.
The system error LED provides basic diagnostic functions for your server. If the system error LED is lit, one or
more LEDs elsewhere in the server might also be lit to direct you to the source of the error. For more
information, see “Front I/O assembly” on page 22.
5 Power input LED
• Green: The power supply is connected to the ac power source.
• Off: The power supply is disconnected from the ac power source, or a power problem occurs.
6 Power output LED
• Green: The server is on and the power supply is working normally.
• Blinking green: The power supply is in zero-output mode (standby). When the server power load is low, one of the installed power supplies enters the standby state while the other one delivers the entire load. When the power load increases, the standby power supply switches to the active state to provide sufficient power to the server.
  To disable zero-output mode, start the Setup utility, go to System Settings ➙ Power ➙ Zero Output and select Disable. If you disable zero-output mode, both power supplies will be in the active state.
• Off: The server is powered off, or the power supply is not working properly. If the server is powered on but the power output LED is off, replace the power supply.
7 Power supply error LED
• Yellow: The power supply has failed. To resolve the issue, replace the power supply.
• Off: The power supply is working normally.
1 Riser 2 slot
2 Serial-port-module connector
23 System fan 5 connector
24 System fan 6 connector
Notes:
• 1 Trusted Cryptography Module
• 2 Trusted Platform Module
1 System power LED
2 System ID LED

1 System power LED
When this LED is lit, it indicates that the server is powered on.
2 System ID LED
The blue system ID LED helps you to visually locate the server. A system ID LED is also located on the front
of the server. Each time you press the system ID button, the state of both the system ID LEDs changes. The
LEDs can be changed to on, blinking, or off. You can also use the Lenovo XClarity Controller or a remote
management program to change the state of the system ID LEDs to assist in visually locating the server
among other servers.
When this yellow LED is lit, one or more LEDs elsewhere in the server might also be lit to direct you to the
source of the error. For more information, see “Front I/O assembly” on page 22.
When a memory module error LED is lit, it indicates that the corresponding memory module has failed.
When a fan error LED is lit, it indicates that the corresponding system fan is operating slowly or has failed.
3 Boot backup XClarity Controller jumper (J47)
• Pins 1 and 2: The jumper is in the default setting.
• Pins 2 and 3: The server boots by using a backup of the XClarity Controller firmware.
6 Force XCC update jumper (J45)
• Pins 1 and 2: The jumper is in the default setting.
• Pins 2 and 3: Force the Lenovo XClarity Controller to update to the latest firmware version.
Important:
• Before you move any jumpers, turn off the server; then, disconnect all power cords and external cables.
Do not open your server or attempt any repair before reading and understanding the following information:
– http://thinksystem.lenovofiles.com/help/topic/safety_documentation/pdf_files.html
– “Handling static-sensitive devices” on page 154
• Any system-board switch or jumper block that is not shown in the illustrations in this document is reserved.
Note: Disengage all latches, release tabs, or locks on cable connectors when you disconnect cables from
the system board. Failing to release them before removing the cables will damage the cable sockets on the
system board, which are fragile. Any damage to the cable sockets might require replacing the system board.
VGA cable on the left rack latch: connects to the front VGA connector on the system board.
Figure 19. Cable routing for the front I/O assembly on the chassis
1 Operator-information-panel cable: connects to the operator-information-panel connector on the system board.
2 Front USB cable: connects to the front USB connector on the system board.
Figure 20. Cable routing for the front I/O assembly on the right rack latch
Front-I/O-assembly cable: connects to the operator-information-panel connector and the front USB connector on the system board.
GPU
Use this section to understand the cable routing for the GPUs.
Figure 21. Cable routing for server models with up to two GPUs
1 GPU power cable: from the power connector on the GPU installed in PCIe slot 5 to GPU power connector 1 on the system board.
2 GPU power cable: from the power connector on the GPU installed in PCIe slot 1 to GPU power connector 2 on the system board.
Figure 22. Cable routing for server models with up to three GPUs
1 GPU power cable: from the power connectors on the GPUs installed in PCIe slots 5 and 6 to GPU power connector 1 on the system board.
2 GPU power cable: from the power connector on the GPU installed in PCIe slot 1 to GPU power connector 2 on the system board.
Figure 23. Cable routing for server models with two Cambricon MLU100-C3 processing adapters
1 GPU power cable: from the power connectors on the adapters installed in PCIe slots 5 and 6 to GPU power connector 1 on the system board.
Figure 24. Cable routing for server models with four Cambricon MLU100-C3 processing adapters
1 GPU power cable: from the power connectors on the adapters installed in PCIe slots 5 and 6 to GPU power connector 1 on the system board.
2 GPU power cable: from the power connectors on the adapters installed in PCIe slots 1 and 2 to GPU power connector 2 on the system board.
Backplane
Use this section to understand the cable routing for backplanes.
Before you route cables for backplanes, observe the adapter priority and the PCIe slot selection priority when
installing the NVMe switch adapter or a RAID adapter.
• Adapter priority: NVMe switch adapter, 24i RAID adapter, 8i HBA/RAID adapter, 16i HBA/RAID adapter
• PCIe slot selection priority when installing the NVMe switch adapter:
– One processor: 1
– Two processors: 1, 5, 6
– For server models with sixteen/twenty/twenty-four NVMe drives (with two processors installed):
One processor: 1, 2, 3
Two processors: 1, 2, 3, 5, 6
• PCIe slot selection priority when installing an HBA/RAID adapter:
– One processor: 7, 4, 2, 3, 1
– Two processors: 7, 4, 2, 3, 1, 5, 6
Notes:
• PCIe slot 7 refers to the RAID adapter slot on the system board.
• If the rear hot-swap drive assembly is installed, PCIe slots 1, 2, and 3 become unavailable because the space is occupied by the rear hot-swap drive assembly.
• The 530-16i or 930-16i RAID adapter can have a higher priority than the 930-8i RAID adapter when both a 16i RAID adapter and an 8i RAID adapter are chosen.
• Oversubscription may exist in some configurations installed with NVMe 810-4P/1610-8P/1611-8P switch
adapters. For details, see https://lenovopress.lenovo.com/lp1050-thinksystem-sr650-server#controllers-for-
internal-storage.
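As a planning aid, the slot-priority rules above can be encoded in a small lookup. This is an illustrative sketch only: the slot lists are taken from this section, while the function name, dictionary layout, and the rear-assembly flag are our own and not part of any Lenovo tooling.

```python
# Illustrative sketch of the PCIe slot selection priorities described above.
# The slot lists come from this section; names and structure are our own.

NVME_SWITCH_PRIORITY = {1: [1], 2: [1, 5, 6]}                    # by processor count
NVME_SWITCH_PRIORITY_16_24 = {1: [1, 2, 3], 2: [1, 2, 3, 5, 6]}  # 16/20/24 NVMe drives
HBA_RAID_PRIORITY = {1: [7, 4, 2, 3, 1], 2: [7, 4, 2, 3, 1, 5, 6]}

def candidate_slots(priority, processors, rear_assembly_installed=False):
    """Return the slot priority list for the given processor count.

    Slot 7 is the RAID adapter slot on the system board. If the rear
    hot-swap drive assembly is installed, PCIe slots 1, 2, and 3 are
    unavailable, so they are dropped from the list.
    """
    slots = priority[processors]
    if rear_assembly_installed:
        slots = [s for s in slots if s not in (1, 2, 3)]
    return slots

print(candidate_slots(HBA_RAID_PRIORITY, 2))  # [7, 4, 2, 3, 1, 5, 6]
print(candidate_slots(HBA_RAID_PRIORITY, 2, rear_assembly_installed=True))  # [7, 4, 5, 6]
```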
Figure 25. Cable routing for server models with eight 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
one 16i HBA/RAID adapter
2 SAS signal cable for front backplane*: from the SAS 0 and SAS 1 connectors on the front backplane to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
3 SAS signal cable for the rear hot-swap drive assembly*: from the signal connector on the rear hot-swap drive assembly to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: C2; Gen 4: C1).
Figure 26. Cable routing for server models with eight 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
one 24i RAID adapter
1 Power cable for front backplane: from the power connector on the front backplane to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane: from the SAS 0 and SAS 1 connectors on the front backplane to the C0 and C1 connectors on the 24i RAID adapter.
3 SAS signal cable for the rear hot-swap drive assembly: from the signal connector on the rear hot-swap drive assembly to the C2 connector on the 24i RAID adapter.
Figure 27. Cable routing for server models with eight 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
one 32i RAID adapter
2 SAS signal cable for front backplane*: from the SAS 0 and SAS 1 connectors on the front backplane to the C0 connector on the 32i RAID adapter.
3 SAS signal cable for the rear hot-swap drive assembly*: from the signal connector on the rear hot-swap drive assembly to the C1 connector on the 32i RAID adapter.
Server model: eight 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, two 8i HBA/RAID
adapters
Notes:
• The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is installed.
Depending on the model, the rear hot-swap drive assembly and the 8i HBA/RAID adapter in PCIe slot 4
might not be available on your server.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use Gen 4 SAS signal cables:
– Cable 2: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 3: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
1 Power cable for front backplane: from the power connector on the front backplane to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane*: from the SAS 0 and SAS 1 connectors on the front backplane to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
3 SAS signal cable for the rear hot-swap drive assembly*: from the signal connector on the rear hot-swap drive assembly to the 8i HBA/RAID adapter on PCIe slot 4 (Gen 3: C0; Gen 4: C0).
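The Gen 3 versus Gen 4 connector choices that repeat throughout these routing tables can be collected into one quick-reference lookup. This is a sketch for convenience only: the dictionary keys are our own shorthand, while the connector values are quoted from the tables in this section.

```python
# Adapter connector used by each SAS signal cable, collected from the
# routing tables in this section. Keys are our own shorthand; the
# connector values are quoted from the tables.
SAS_CONNECTOR = {
    ("16i", "front backplane"): {"Gen 3": "C0C1", "Gen 4": "C0"},
    ("16i", "rear hot-swap drive assembly"): {"Gen 3": "C2", "Gen 4": "C1"},
    ("8i", "front backplane"): {"Gen 3": "C0C1", "Gen 4": "C0"},
    ("8i", "rear hot-swap drive assembly"): {"Gen 3": "C0", "Gen 4": "C0"},
}

print(SAS_CONNECTOR[("16i", "rear hot-swap drive assembly")]["Gen 4"])  # C1
```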
Server model: eight 2.5-inch SAS/SATA drives, one 730-8i 4G Flash SAS/SATA RAID adapter with
CacheCade
Note: This configuration is available for some models only.
1 Power cable for front backplane: from the power connector on the front backplane to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane: from the SAS 0 and SAS 1 connectors on the front backplane to the C0 and C1 connectors on the 8i HBA/RAID adapter installed in PCIe slot 4.
Server model: eight 2.5-inch SAS/SATA drives, Intel Xeon 6137, 6242R, 6246R, 6248R, 6250, 6256, or
6258R processors, one 8i HBA/RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
1 SAS signal cable for front backplane*: from the SAS 0 and SAS 1 connectors on the front backplane to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
2 Power cable for front backplane: from the power connector on the front backplane to backplane power connector 2 on the system board.
Server model: four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-swap
drive assembly, two 8i HBA/RAID adapters
Notes:
• The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is installed.
Depending on the model, the rear hot-swap drive assembly and the 8i HBA/RAID adapter in PCIe slot 4
might not be available on your server.
Figure 31. Cable routing for server models with four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and two 8i HBA/RAID adapters
1 Power cable for front backplane: from the power connector on the front backplane to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane*: from the SAS 0 and SAS 1 connectors on the front backplane to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
4 SAS signal cable for the rear hot-swap drive assembly*: from the signal connector on the rear hot-swap drive assembly to the 8i HBA/RAID adapter installed in PCIe slot 4 (Gen 3: C0; Gen 4: C0).
Server model: four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-swap
drive assembly, one 16i HBA/RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use Gen 4 SAS signal cables:
– Cable 2: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 4: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Figure 32. Cable routing for server models with four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and one 16i HBA/RAID adapter
1 Power cable for front backplane: from the power connector on the front backplane to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane*: from the SAS 0 and SAS 1 connectors on the front backplane to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
4 SAS signal cable for the rear hot-swap drive assembly*: from the signal connector on the rear hot-swap drive assembly to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: C2; Gen 4: C1).
Server model: four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-swap
drive assembly, one 24i RAID adapter
Note: The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly and cable 4 might not be available on your server.
Figure 33. Cable routing for server models with four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and one 24i RAID adapter
2 SAS signal cable for front backplane: from the SAS 0 and SAS 1 connectors on the front backplane to the C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 5.
3 NVMe signal cable for front backplane: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on the front backplane to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
4 SAS signal cable for the rear hot-swap drive assembly: from the signal connector on the rear hot-swap drive assembly to the C2 connector on the 24i RAID adapter installed in PCIe slot 5.
Server model: four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-swap
drive assembly, one 32i RAID adapter
Notes:
• The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly and cable 4 might not be available on your server.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *Ensure that you use Gen 4 SAS signal cables:
– Cable 2: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 4: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
1 Power cable for front backplane: from the power connector on the front backplane to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane*: from the SAS 0 and SAS 1 connectors on the front backplane to the C0 connector on the 32i RAID adapter on PCIe slot 5.
3 NVMe signal cable for front backplane: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on the front backplane to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
4 SAS signal cable for the rear hot-swap drive assembly*: from the signal connector on the rear hot-swap drive assembly to the C1 connector on the 32i RAID adapter on PCIe slot 5.
Server model: four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, Intel Xeon 6137,
6242R, 6246R, 6248R, 6250, 6256, or 6258R processors, one 8i HBA/RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
Figure 35. Cable routing for server models with four 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
Intel Xeon 6137, 6242R, 6246R, 6248R, 6250, 6256, or 6258R processors, and one 8i HBA/RAID adapter
1 NVMe signal cable for front backplane: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on the front backplane to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
2 SAS signal cable for front backplane*: from the SAS 0 and SAS 1 connectors on the front backplane to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
3 Power cable for front backplane: from the power connector on the front backplane to backplane power connector 2 on the system board.
Server model: sixteen 2.5-inch SAS/SATA drives, one 16i HBA/RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Figure 36. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives and one 16i HBA/RAID adapter
2 SAS signal cable for front backplane 1*: from the SAS 0 and SAS 1 connectors on front backplane 1 to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
3 SAS signal cable for front backplane 2*: from the SAS 0 and SAS 1 connectors on front backplane 2 to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: C2C3; Gen 4: C1).
4 Power cable for front backplane 2: from the power connector on front backplane 2 to backplane power connector 2 on the system board.
Figure 37. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one
Gen 3 8i HBA/RAID adapter, and one Gen 3 16i HBA/RAID adapter
1 Power cable for front backplane 1: from the power connector on front backplane 1 to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane 1: from the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors of the Gen 3 16i HBA/RAID adapter on the RAID adapter slot.
3 SAS signal cable for front backplane 2: from the SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors of the Gen 3 16i HBA/RAID adapter on the RAID adapter slot.
4 Power cable for front backplane 2: from the power connector on front backplane 2 to backplane power connector 2 on the system board.
5 SAS signal cable for the rear hot-swap drive assembly: from the signal connector on the rear hot-swap drive assembly to the C0 connector of the 8i HBA/RAID adapter on PCIe slot 4.
Figure 38. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one
Gen 4 8i HBA/RAID adapter, and one Gen 4 16i HBA/RAID adapter
1 Power cable for front backplane 1: from the power connector on front backplane 1 to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane 1: from the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 connector of the Gen 4 16i HBA/RAID adapter on PCIe slot 4.
3 SAS signal cable for front backplane 2: from the SAS 0 and SAS 1 connectors on front backplane 2 to the C1 connector of the Gen 4 16i HBA/RAID adapter on PCIe slot 4.
5 SAS signal cable for the rear hot-swap drive assembly: from the signal connector on the rear hot-swap drive assembly to the C0 connector of the Gen 4 8i HBA/RAID adapter on PCIe slot 2.
Server model: sixteen 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one 24i RAID
adapter
Note: The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly and cable 5 might not be available on your server.
Figure 39. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
one 24i RAID adapter
2 SAS signal cable for front backplane 1: from the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 5.
3 SAS signal cable for front backplane 2: from the SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 5.
4 Power cable for front backplane 2: from the power connector on front backplane 2 to backplane power connector 2 on the system board.
5 SAS signal cable for the rear hot-swap drive assembly: from the signal connector on the rear hot-swap drive assembly to the C4 connector on the 24i RAID adapter installed in PCIe slot 5.
Server model: sixteen 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one 32i RAID
adapter
Notes:
• The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly and cable 5 might not be available on your server.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *Ensure that you use Gen 4 SAS signal cables:
– Cable 2 / 3: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 5: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
1 Power cable for front backplane 1: from the power connector on front backplane 1 to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane 1*: from the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 connector on the 32i RAID adapter on PCIe slot 5.
3 SAS signal cable for front backplane 2*: from the SAS 0 and SAS 1 connectors on front backplane 2 to the C1 connector on the 32i RAID adapter on PCIe slot 5.
4 Power cable for front backplane 2: from the power connector on front backplane 2 to backplane power connector 2 on the system board.
5 SAS signal cable for the rear hot-swap drive assembly*: from the signal connector on the rear hot-swap drive assembly to the C2 connector on the 32i RAID adapter on PCIe slot 5.
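The Gen 4 cable-kit notes repeated throughout this section always pair the same kit with the same destination: cables to the front 2.5-inch backplanes use the 8-bay kit, and cables to the rear 2-bay hot-swap drive assembly use the rear BP kit. As a quick-reference sketch (the mapping keys are our own shorthand; the kit names are quoted verbatim from the notes):

```python
# Gen 4 SAS signal cable kit selection, per the notes repeated throughout
# this section. Keys are our own shorthand; kit names are verbatim.
GEN4_CABLE_KIT = {
    "front 2.5-inch backplane":
        'ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit',
    "rear hot-swap drive assembly":
        'ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit',
}

print(GEN4_CABLE_KIT["rear hot-swap drive assembly"])
```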
Server model: sixteen 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, three 8i HBA/RAID
adapters
Notes:
Figure 41. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
three 8i HBA/RAID adapters
1 Power cable for front backplane 1: from the power connector on front backplane 1 to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane 1*: from the SAS 0 and SAS 1 connectors on front backplane 1 to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
5 SAS signal cable for the rear hot-swap drive assembly*: from the signal connector on the rear hot-swap drive assembly to the 8i HBA/RAID adapter installed in PCIe slot 5 (Gen 3: C0; Gen 4: C0).
Server model: twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, one 16i HBA/
RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Figure 42. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
and one 16i HBA/RAID adapter
2 SAS signal cable for front backplane 1*: from the SAS 0 and SAS 1 connectors on backplane 1 to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
3 NVMe signal cable for front backplane 1: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
4 SAS signal cable for front backplane 2*: from the SAS 0 and SAS 1 connectors on front backplane 2 to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: C2C3; Gen 4: C1).
5 Power cable for front backplane 2: from the power connector on front backplane 2 to backplane power connector 2 on the system board.
Server model: twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, one 24i RAID
adapter
Figure 43. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
and one 24i RAID adapter
2 SAS signal cable for front backplane 1: from the SAS 0 and SAS 1 connectors on backplane 1 to the C0 and C1 connectors on the 24i RAID adapter on the riser assembly.
3 NVMe signal cable for front backplane 1: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
4 SAS signal cable for front backplane 2: from the SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 24i RAID adapter on the riser assembly.
5 Power cable for front backplane 2: from the power connector on front backplane 2 to backplane power connector 2 on the system board.
Note: The 24i RAID adapter can be installed in riser assembly 1 or riser assembly 2.
Server model: twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, one 32i RAID
adapter
Notes:
• The 32i RAID adapter can be installed in riser assembly 1 or riser assembly 2.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *Ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
1 Power cable for front backplane 1: from the power connector on backplane 1 to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane 1*: from the SAS 0 and SAS 1 connectors on backplane 1 to the C0 connector on the 32i RAID adapter on the riser assembly.
3 NVMe signal cable for front backplane 1: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
4 SAS signal cable for front backplane 2*: from the SAS 0 and SAS 1 connectors on front backplane 2 to the C1 connector on the 32i RAID adapter on the riser assembly.
5 Power cable for front backplane 2: from the power connector on front backplane 2 to backplane power connector 2 on the system board.
Server model: twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-
swap drive assembly, one 8i HBA/RAID adapter, one 16i HBA/RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use Gen 4 SAS signal cables:
– Cable 2 / 4: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
Figure 45. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, one 8i HBA/RAID adapter, and one 16i HBA/RAID adapter
1 Power cable for front backplane 1: from the power connector on front backplane 1 to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane 1*: from the SAS 0 and SAS 1 connectors on front backplane 1 to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
3 NVMe signal cable for front backplane 1: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
4 SAS signal cable for front backplane 2*: from the SAS 0 and SAS 1 connectors on front backplane 2 to the 16i HBA/RAID adapter on PCIe slot 4 (Gen 3: C0C1; Gen 4: C0).
5 Power cable for front backplane 2: from the power connector on front backplane 2 to backplane power connector 2 on the system board.
6 SAS signal cable for the rear hot-swap drive assembly*: from the signal connector on the rear hot-swap drive assembly to the 16i HBA/RAID adapter on PCIe slot 4 (Gen 3: C2; Gen 4: C1).
Figure 46. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and one 24i RAID adapter
1 Power cable for front backplane 1: from the power connector on front backplane 1 to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane 1: from the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 5.
3 NVMe signal cable for front backplane 1: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
4 SAS signal cable for front backplane 2: from the SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 5.
5 Power cable for front backplane 2: from the power connector on front backplane 2 to backplane power connector 2 on the system board.
6 SAS signal cable for the rear hot-swap drive assembly: from the signal connector on the rear hot-swap drive assembly to the C4 connector on the 24i RAID adapter installed in PCIe slot 5.
Server model: twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-
swap drive assembly, one 32i RAID adapter
Notes:
Figure 47. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and one 32i RAID adapter
1 Power cable for front backplane 1: from the power connector on front backplane 1 to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane 1*: from the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 connector on the 32i RAID adapter on PCIe slot 5.
3 NVMe signal cable for front backplane 1: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
4 SAS signal cable for front backplane 2*: from the SAS 0 and SAS 1 connectors on front backplane 2 to the C1 connector on the 32i RAID adapter on PCIe slot 5.
5 Power cable for front backplane 2: from the power connector on front backplane 2 to backplane power connector 2 on the system board.
6 SAS signal cable for the rear hot-swap drive assembly*: from the signal connector on the rear hot-swap drive assembly to the C2 connector on the 32i RAID adapter on PCIe slot 5.
Figure 48. Cable routing for server models with eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives,
one 16i HBA/RAID adapter, and one NVMe switch adapter
1 Power cable for front backplane 1: from the power connector on front backplane 1 to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane 1*: from the SAS 0 and SAS 1 connectors on front backplane 1 to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
3 NVMe signal cable for front backplane 1: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
6 NVMe signal cable for front backplane 2: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 1.
Server model: eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives, one 16i HBA/
RAID adapter, one NVMe 1611–8P switch adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Figure 49. Cable routing for server models with eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives,
one 16i HBA/RAID adapter, and one NVMe 1611–8P switch adapter
2 SAS signal cable for front backplane 1*: from the SAS 0 and SAS 1 connectors on front backplane 1 to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
3 NVMe signal cable for front backplane 1: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on backplane 1 to the C2 and C3 connectors on the NVMe 1611–8P switch adapter installed in PCIe slot 1.
4 SAS signal cable for front backplane 2*: from the SAS 0 and SAS 1 connectors on front backplane 2 to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: C2C3; Gen 4: C1).
5 Power cable for front backplane 2: from the power connector on front backplane 2 to backplane power connector 2 on the system board.
6 NVMe signal cable for front backplane 2: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0 and C1 connectors on the NVMe 1611–8P switch adapter installed in PCIe slot 1.
Server model: eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives, the rear hot-
swap drive assembly, one 8i HBA/RAID adapter, one 16i HBA/RAID adapter, one NVMe switch adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use Gen 4 SAS signal cables:
– Cable 2 / 5: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 7: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
1 Power cable for front backplane 1: from the power connector on front backplane 1 to backplane power connector 1 on the system board.
2 SAS signal cable for front backplane 1*: from the SAS 0 and SAS 1 connectors on front backplane 1 to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0).
3 NVMe signal cable for front backplane 1: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
4 NVMe signal cable for front backplane 2: from the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5.
5 SAS signal cable for front backplane 2*: from the SAS 0 and SAS 1 connectors on front backplane 2 to the 16i HBA/RAID adapter on PCIe slot 4 (Gen 3: C0C1; Gen 4: C0).
7 SAS signal cable for the rear hot-swap drive assembly*: from the signal connector on the rear hot-swap drive assembly to the 16i HBA/RAID adapter on PCIe slot 4 (Gen 3: C2; Gen 4: C1).
Server model: eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives, the rear hot-
swap drive assembly, three 8i HBA/RAID adapters, one NVMe switch adapter
Notes:
• The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is installed.
Depending on the model, the rear hot-swap drive assembly and the 8i HBA/RAID adapter in PCIe slot 6
might not be available on your server.
• Depending on the model, if the NVMe switch adapter is installed in PCIe slot 1, route the NVMe signal
cable along the right side of the chassis.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use Gen 4 SAS signal cables:
– Cable 2 / 5: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 7: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Figure 51. Cable routing for server models with eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, three 8i HBA/RAID adapters, and one NVMe switch adapter
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: 8i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0C1
  – Gen 4: C0
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 4: NVMe signal cable for front backplane 2
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2
• To: C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5
Cable 5: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: 8i HBA/RAID adapter on PCIe slot 4
  – Gen 3: C0C1
  – Gen 4: C0
Cable 6: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 7: SAS signal cable for the rear hot-swap drive assembly*
• From: Signal connector on the rear hot-swap drive assembly
• To: 8i HBA/RAID adapter installed in PCIe slot 6
  – Gen 3: C0
  – Gen 4: C0
Figure 52. Cable routing for server models with eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, one 24i RAID adapter, and one NVMe switch adapter
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 6
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 4: NVMe signal cable for front backplane 2
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2
• To: C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5
Cable 5: SAS signal cable for front backplane 2
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 6
Cable 7: SAS signal cable for the rear hot-swap drive assembly
• From: Signal connector on the rear hot-swap drive assembly
• To: C4 connector on the 24i RAID adapter installed in PCIe slot 6
Server model: eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives, the rear hot-
swap drive assembly, one 32i RAID adapter, one NVMe switch adapter
Notes:
• The cable routing illustration assumes that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly and cable 7 might not be available on your server.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *Ensure that you use the Gen 4 SAS signal cables:
– Cable 2 / 5: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 7: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Figure 53. Cable routing for server models with eight 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, one 32i RAID adapter, and one NVMe switch adapter
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: C0 connector on the 32i RAID adapter on PCIe slot 6
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 4: NVMe signal cable for front backplane 2
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2
• To: C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5
Cable 5: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: C1 connector on the 32i RAID adapter on PCIe slot 6
Cable 6: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 7: SAS signal cable for the rear hot-swap drive assembly*
• From: Signal connector on the rear hot-swap drive assembly
• To: C2 connector on the 32i RAID adapter on PCIe slot 6
Server model: sixteen 2.5-inch NVMe drives, two NVMe 810-4P switch adapters, two NVMe 1610-4P
switch adapters
Figure 54. Cable routing for server models with sixteen 2.5-inch NVMe drives, two NVMe 810-4P switch adapters, and
two NVMe 1610-4P switch adapters
Cable 2: NVMe signal cable for front backplane 1
• From: NVMe 0 and NVMe 1 connectors on front backplane 1
• To: NVMe 2–3 and NVMe 0–1 connectors on the system board
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 2 and NVMe 3 connectors on front backplane 1
• To: C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 6
Cable 4: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 5: NVMe signal cable for front backplane 2
• From: NVMe 0 connector on front backplane 2
• To: C0 and C1 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 4
Cable 6: NVMe signal cable for front backplane 2
• From: NVMe 1 connector on front backplane 2
• To: C0 and C1 connectors on the NVMe 810-4P switch adapter installed in the RAID adapter slot on the system board
Cable 7: NVMe signal cable for front backplane 2
• From: NVMe 2 and NVMe 3 connectors on front backplane 2
• To: C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 1
Server model: sixteen 2.5-inch NVMe drives, two NVMe 1611-8P switch adapters
Figure 55. Cable routing for server models with sixteen 2.5-inch NVMe drives, and two NVMe 1611-8P switch adapters
Cable 2: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: C0, C1, C2, and C3 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 6
Cable 3: NVMe signal cable for front backplane 2
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2
• To: C0, C1, C2, and C3 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 5
Cable 4: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Server model: twenty 2.5-inch NVMe drives, two NVMe 810-4P switch adapters, three NVMe 1610-4P
switch adapters
Figure 56. Cable routing for server models with twenty 2.5-inch NVMe drives, two NVMe 810-4P switch adapters, and
three NVMe 1610-4P switch adapters
Cable 2: NVMe signal cable for front backplane 1
• From: NVMe 0 and NVMe 1 connectors on front backplane 1
• To: NVMe 2–3 and NVMe 0–1 connectors on the system board
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 2 and NVMe 3 connectors on front backplane 1
• To: C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 6
Cable 4: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 5: NVMe signal cable for front backplane 2
• From: NVMe 0 and NVMe 1 connectors on front backplane 2
• To: C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 5
Cable 6: NVMe signal cable for front backplane 2
• From: NVMe 2 connector on front backplane 2
• To: C0 and C1 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 4
Cable 7: NVMe signal cable for front backplane 2
• From: NVMe 3 connector on front backplane 2
• To: C0 and C1 connectors on the NVMe 810-4P switch adapter installed in the RAID adapter slot on the system board
Cable 8: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 9: NVMe signal cable for front backplane 3
• From: NVMe 0 and NVMe 1 connectors on front backplane 3
• To: C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 1
Server model: twenty-four 2.5-inch SAS/SATA drives, one 8i HBA/RAID adapter, one 16i HBA/RAID
adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: 8i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0C1
  – Gen 4: C0
Cable 3: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: 16i HBA/RAID adapter on PCIe slot 4
  – Gen 3: C0C1
  – Gen 4: C0
Cable 4: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 5: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 6: SAS signal cable for front backplane 3*
• From: SAS 0 and SAS 1 connectors on front backplane 3
• To: 16i HBA/RAID adapter on PCIe slot 4
  – Gen 3: C2C3
  – Gen 4: C1
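The pattern that repeats across these tables is that a Gen 3 wide connector pair (for example C0C1) collapses to a single connector on a Gen 4 adapter. The sketch below is illustrative only (not Lenovo tooling) and simply encodes that mapping as it appears in the starred rows.

```python
# Illustrative mapping (hypothetical helper, not Lenovo tooling): a Gen 3
# connector pair collapses to a single connector on a Gen 4 HBA/RAID adapter,
# which also requires the Gen 4 SAS cable kit named in the notes above.
GEN3_TO_GEN4 = {"C0C1": "C0", "C2C3": "C1"}

def gen4_connector(gen3_connectors: str) -> str:
    """Map a Gen 3 connector pair to its single Gen 4 connector."""
    return GEN3_TO_GEN4[gen3_connectors]

# Front backplane 3 attaches to C2C3 on a Gen 3 16i adapter,
# but to C1 when a Gen 4 adapter is installed.
print(gen4_connector("C2C3"))  # C1
```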
Figure 58. Cable routing for server models with twenty-four 2.5-inch SAS/SATA drives and one 24i RAID adapter
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: C0 and C1 connectors on the 24i RAID adapter on riser 1 assembly
Cable 3: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 4: SAS signal cable for front backplane 2
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: C2 and C3 connectors on the 24i RAID adapter on riser 1 assembly
Cable 5: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 6: SAS signal cable for front backplane 3
• From: SAS 0 and SAS 1 connectors on front backplane 3
• To: C4 and C5 connectors on the 24i RAID adapter on riser 1 assembly
Figure 59. Cable routing for server models with twenty-four 2.5-inch SAS/SATA drives and one 32i RAID adapter
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: C0 connector on the 32i RAID adapter on riser 1 assembly
Cable 3: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 5: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 6: SAS signal cable for front backplane 3*
• From: SAS 0 and SAS 1 connectors on front backplane 3
• To: C2 connector on the 32i RAID adapter on riser 1 assembly
Server model: twenty-four 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, four 8i HBA/
RAID adapters
Notes:
• The cable routing illustration assumes that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly and the 8i HBA/RAID adapter in PCIe slot 6 might not be available on your server.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cable 2 / 3 / 6: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 7: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Figure 60. Cable routing for server models with twenty-four 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly,
and four 8i HBA/RAID adapters
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: 8i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0C1
  – Gen 4: C0
Cable 3: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: 8i HBA/RAID adapter on PCIe slot 4
  – Gen 3: C0C1
  – Gen 4: C0
Cable 4: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 5: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 6: SAS signal cable for front backplane 3*
• From: SAS 0 and SAS 1 connectors on front backplane 3
• To: 8i HBA/RAID adapter on PCIe slot 5
  – Gen 3: C0C1
  – Gen 4: C0
Cable 7: SAS signal cable for the rear hot-swap drive assembly*
• From: Signal connector on the rear hot-swap drive assembly
• To: 8i HBA/RAID adapter installed in PCIe slot 6
  – Gen 3: C0
  – Gen 4: C0
Server model: twenty-four 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, two 8i HBA/
RAID adapters, one 16i HBA/RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cable 2 / 3 / 6: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 7: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: 8i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0C1
  – Gen 4: C0
Cable 3: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: 8i HBA/RAID adapter on PCIe slot 4
  – Gen 3: C0C1
  – Gen 4: C0
Cable 4: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 5: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Server model: twenty-four 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one 8i HBA/
RAID adapter, one 24i RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit).
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 5
Cable 3: SAS signal cable for front backplane 2
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 5
Cable 4: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 5: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 7: SAS signal cable for the rear hot-swap drive assembly*
• From: Signal connector on the rear hot-swap drive assembly
• To: 8i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0
  – Gen 4: C0
Server model: twenty-four 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one 8i HBA/
RAID adapter, one 32i RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cable 2 / 3 / 6: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 7: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: C0 connector on the 32i RAID adapter on PCIe slot 5
Cable 3: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: C1 connector on the 32i RAID adapter on PCIe slot 5
Cable 4: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 5: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 7: SAS signal cable for the rear hot-swap drive assembly*
• From: Signal connector on the rear hot-swap drive assembly
• To: 8i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0
  – Gen 4: C0
Server model: twenty-four 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly, two 16i HBA/
RAID adapters
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cable 2 / 3 / 6: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 7: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Figure 64. Cable routing for server models with twenty-four 2.5-inch SAS/SATA drives, the rear hot-swap drive assembly,
and two 16i HBA/RAID adapters
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: 16i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0C1
  – Gen 4: C0
Cable 3: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: 16i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C2C3
  – Gen 4: C1
Cable 4: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 5: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 6: SAS signal cable for front backplane 3*
• From: SAS 0 and SAS 1 connectors on front backplane 3
• To: 16i HBA/RAID adapter on PCIe slot 4
  – Gen 3: C0C1
  – Gen 4: C0
Cable 7: SAS signal cable for the rear hot-swap drive assembly*
• From: Signal connector on the rear hot-swap drive assembly
• To: 16i HBA/RAID adapter on PCIe slot 4
  – Gen 3: C2
  – Gen 4: C1
Server model: twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, one 8i HBA/
RAID adapter, one 16i HBA/RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: 8i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0C1
  – Gen 4: C0
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 4: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: 16i HBA/RAID adapter on PCIe slot 4
  – Gen 3: C0C1
  – Gen 4: C0
Cable 5: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 7: SAS signal cable for front backplane 3*
• From: SAS 0 and SAS 1 connectors on front backplane 3
• To: 16i HBA/RAID adapter on PCIe slot 4
  – Gen 3: C2C3
  – Gen 4: C1
Server model: twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, one 24i RAID
adapter
Figure 66. Cable routing for server models with twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
and one 24i RAID adapter
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: C0 and C1 connectors on the 24i RAID adapter on riser 1 assembly
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 5: SAS signal cable for front backplane 2
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: C2 and C3 connectors on the 24i RAID adapter on riser 1 assembly
Cable 6: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 7: SAS signal cable for front backplane 3
• From: SAS 0 and SAS 1 connectors on front backplane 3
• To: C4 and C5 connectors on the 24i RAID adapter on riser 1 assembly
Server model: twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, one 32i RAID
adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *Ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Figure 67. Cable routing for server models with twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
and one 32i RAID adapter
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: C0 connector on the 32i RAID adapter on riser 1 assembly
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 4: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 5: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: C1 connector on the 32i RAID adapter on riser 1 assembly
Cable 6: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 7: SAS signal cable for front backplane 3*
• From: SAS 0 and SAS 1 connectors on front backplane 3
• To: C2 connector on the 32i RAID adapter on riser 1 assembly
Server model: twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-
swap drive assembly, four 8i HBA/RAID adapters
Notes:
• The cable routing illustration assumes that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly and the 8i HBA/RAID adapter in PCIe slot 6 might not be available on your server.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cable 2 / 4 / 7: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 8: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: 8i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0C1
  – Gen 4: C0
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 4: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: 8i HBA/RAID adapter on PCIe slot 4
  – Gen 3: C0C1
  – Gen 4: C0
Cable 5: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 6: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Server model: twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-
swap drive assembly, two 8i HBA/RAID adapters, one 16i HBA/RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cable 2 / 4 / 7: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 8: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Figure 69. Cable routing for server models with twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, two 8i HBA/RAID adapters, and one 16i HBA/RAID adapter
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: 8i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0C1
  – Gen 4: C0
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 4: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: 8i HBA/RAID adapter on PCIe slot 4
  – Gen 3: C0C1
  – Gen 4: C0
Cable 5: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 6: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 7: SAS signal cable for front backplane 3*
• From: SAS 0 and SAS 1 connectors on front backplane 3
• To: 16i HBA/RAID adapter on PCIe slot 5
  – Gen 3: C0C1
  – Gen 4: C0
Cable 8: SAS signal cable for the rear hot-swap drive assembly*
• From: Signal connector on the rear hot-swap drive assembly
• To: 16i HBA/RAID adapter on PCIe slot 5
  – Gen 3: C2
  – Gen 4: C1
Server model: twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-
swap drive assembly, one 8i HBA/RAID adapter, one 24i RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit).
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 5
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 4: SAS signal cable for front backplane 2
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 5
Cable 5: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 6: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 8: SAS signal cable for the rear hot-swap drive assembly*
• From: Signal connector on the rear hot-swap drive assembly
• To: 8i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0
  – Gen 4: C0
Server model: twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-
swap drive assembly, one 8i HBA/RAID adapter, one 32i RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cable 2 / 4 / 7: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 8: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Figure 71. Cable routing for server models with twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, one 8i HBA/RAID adapter, and one 32i RAID adapter
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: C0 connector on the 32i RAID adapter on PCIe slot 5
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 4: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: C1 connector on the 32i RAID adapter on PCIe slot 5
Cable 5: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 6: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 7: SAS signal cable for front backplane 3*
• From: SAS 0 and SAS 1 connectors on front backplane 3
• To: C2 connector on the 32i RAID adapter on PCIe slot 5
Cable 8: SAS signal cable for the rear hot-swap drive assembly*
• From: Signal connector on the rear hot-swap drive assembly
• To: 8i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0
  – Gen 4: C0
Server model: twenty 2.5-inch SAS/SATA drives, four 2.5-inch SAS/SATA/NVMe drives, the rear hot-
swap drive assembly, two 16i HBA/RAID adapters
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cable 2 / 4 / 7: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 8: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1*
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: 16i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C0C1
  – Gen 4: C0
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 4: SAS signal cable for front backplane 2*
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: 16i HBA/RAID adapter on the RAID adapter slot
  – Gen 3: C2C3
  – Gen 4: C1
Cable 5: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 6: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Server model: sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives, one 24i RAID
adapter, one NVMe switch adapter
Figure 73. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe
drives, one 24i RAID adapter, and one NVMe switch adapter
Cable 1: Power cable for front backplane 1
• From: Power connector on front backplane 1
• To: Backplane power connector 1 on the system board
Cable 2: SAS signal cable for front backplane 1
• From: SAS 0 and SAS 1 connectors on front backplane 1
• To: C0 and C1 connectors on the 24i RAID adapter on an available PCIe slot
Cable 3: NVMe signal cable for front backplane 1
• From: NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1
• To: NVMe 0–1 and NVMe 2–3 connectors on the system board
Cable 5: SAS signal cable for front backplane 2
• From: SAS 0 and SAS 1 connectors on front backplane 2
• To: C2 and C3 connectors on the 24i RAID adapter on an available PCIe slot
Cable 6: Power cable for front backplane 2
• From: Power connector on front backplane 2
• To: Backplane power connector 2 on the system board
Cable 7: Power cable for front backplane 3
• From: Power connector on front backplane 3
• To: Backplane power connector 3 on the system board
Cable 8: SAS signal cable for front backplane 3
• From: SAS 0 and SAS 1 connectors on front backplane 3
• To: C4 and C5 connectors on the 24i RAID adapter on an available PCIe slot
Server model: sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives, one 32i RAID
adapter, one NVMe switch adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *Ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1*): From the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 connector on the 32i RAID adapter on an available PCIe slot.
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter on an available PCIe slot.
Cable 5 (SAS signal cable for front backplane 2*): From the SAS 0 and SAS 1 connectors on front backplane 2 to the C1 connector on the 32i RAID adapter on an available PCIe slot.
Cable 6 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 7 (Power cable for front backplane 3): From the power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 8 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the C2 connector on the 32i RAID adapter on an available PCIe slot.
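For technicians who keep service notes in script form, the routing table above can be captured as a small data structure and printed as a connect-from/connect-to checklist. This is an illustrative sketch only, not part of the Lenovo documentation; the dictionary name and the abbreviated connector labels are our own.

```python
# Illustrative sketch (not from the manual): the cable routing for the model
# with one 32i RAID adapter and one NVMe switch adapter, keyed by cable number.
# Each entry is a (from, to) pair with wording condensed from the table above.
ROUTING_32I = {
    1: ("Power connector on front backplane 1",
        "Backplane power connector 1 on the system board"),
    2: ("SAS 0 and SAS 1 on front backplane 1",
        "C0 on the 32i RAID adapter"),
    3: ("NVMe 0-3 on front backplane 1",
        "NVMe 0-1 and NVMe 2-3 on the system board"),
    4: ("NVMe 0-3 on front backplane 2",
        "C0-C3 on the NVMe switch adapter"),
    5: ("SAS 0 and SAS 1 on front backplane 2",
        "C1 on the 32i RAID adapter"),
    6: ("Power connector on front backplane 2",
        "Backplane power connector 2 on the system board"),
    7: ("Power connector on front backplane 3",
        "Backplane power connector 3 on the system board"),
    8: ("SAS 0 and SAS 1 on front backplane 3",
        "C2 on the 32i RAID adapter"),
}

def checklist(routing):
    """Render the routing as numbered connect-from/connect-to lines."""
    return [f"Cable {n}: {src} -> {dst}"
            for n, (src, dst) in sorted(routing.items())]

for line in checklist(ROUTING_32I):
    print(line)
```

Always verify each connection against the printed manual before cabling; the sketch is a memory aid, not a substitute for the routing illustration.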
Figure 75. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe
drives, one 32i HBA/RAID adapter, and one NVMe 1611-8P switch adapter
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1*): From the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 connector on the 32i RAID adapter on an available PCIe slot.
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the C2 and C3 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 1.
Cable 4 (SAS signal cable for front backplane 2*): From the SAS 0 and SAS 1 connectors on front backplane 2 to the C1 connector on the 32i RAID adapter on an available PCIe slot.
Cable 5 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 7 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the C2 connector on the 32i RAID adapter on an available PCIe slot.
Cable 8 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0 and C1 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 1.
Server model: sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives, the rear hot-swap drive assembly, one 8i HBA/RAID adapter, one 24i RAID adapter, one NVMe switch adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit).
Figure 76. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe
drives, the rear hot-swap drive assembly, one 8i HBA/RAID adapter, one 24i RAID adapter, and one NVMe switch adapter
Cable 2 (SAS signal cable for front backplane 1): From the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 6.
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5.
Cable 5 (SAS signal cable for front backplane 2): From the SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 6.
Cable 6 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 7 (Power cable for front backplane 3): From the power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 8 (SAS signal cable for front backplane 3): From the SAS 0 and SAS 1 connectors on front backplane 3 to the C4 and C5 connectors on the 24i RAID adapter installed in PCIe slot 6.
Cable 9 (SAS signal cable for the rear hot-swap drive assembly*): From the signal connector on the rear hot-swap drive assembly to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: connector C0; Gen 4: connector C0).
Figure 77. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe
drives, one 8i HBA/RAID adapter, one 16i HBA/RAID adapter, and one NVMe 1611-8P switch adapter
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): From the SAS 0 and SAS 1 connectors on front backplane 1 to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the C2 and C3 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 1.
Cable 4 (SAS signal cable for front backplane 2): From the SAS 0 and SAS 1 connectors on front backplane 2 to the 16i HBA/RAID adapter installed in PCIe slot 1 (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 5 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 7 (SAS signal cable for front backplane 3): From the SAS 0 and SAS 1 connectors on front backplane 3 to the 16i HBA/RAID adapter installed in PCIe slot 1 (Gen 3: connectors C2 and C3; Gen 4: connector C1).
Cable 8 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0 and C1 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 1.
Server model: sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives, the rear hot-swap drive assembly, one 8i HBA/RAID adapter, one 32i RAID adapter, one NVMe switch adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cables 2, 5, and 8: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 9: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Figure 78. Cable routing for server models with sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe
drives, the rear hot-swap drive assembly, one 8i HBA/RAID adapter, one 32i RAID adapter, and one NVMe switch adapter
Cable 2 (SAS signal cable for front backplane 1*): From the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 connector on the 32i RAID adapter in PCIe slot 6.
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5.
Cable 5 (SAS signal cable for front backplane 2*): From the SAS 0 and SAS 1 connectors on front backplane 2 to the C1 connector on the 32i RAID adapter in PCIe slot 6.
Cable 6 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 7 (Power cable for front backplane 3): From the power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 8 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the C2 connector on the 32i RAID adapter in PCIe slot 6.
Cable 9 (SAS signal cable for the rear hot-swap drive assembly*): From the signal connector on the rear hot-swap drive assembly to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: connector C0; Gen 4: connector C0).
Server model: sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives, the rear hot-swap drive assembly, two 16i HBA/RAID adapters, one NVMe switch adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cables 2, 5, and 8: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 9: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1*): From the SAS 0 and SAS 1 connectors on front backplane 1 to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5.
Cable 5 (SAS signal cable for front backplane 2*): From the SAS 0 and SAS 1 connectors on front backplane 2 to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: connectors C2 and C3; Gen 4: connector C1).
Cable 6 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 8 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the 16i HBA/RAID adapter installed in PCIe slot 4 (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 9 (SAS signal cable for the rear hot-swap drive assembly*): From the signal connector on the rear hot-swap drive assembly to the 16i HBA/RAID adapter installed in PCIe slot 4 (Gen 3: connector C2; Gen 4: connector C1).
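The Gen 3 versus Gen 4 connector split in the table above follows a simple pattern: two narrow Gen 3 connectors collapse into one wider Gen 4 connector. The following sketch records that mapping for the two-16i configuration so it can be looked up programmatically; it is our own illustration (the names are assumptions), with the data transcribed from the table above.

```python
# Sketch only; data transcribed from the table above (two 16i HBA/RAID
# adapters). Keys are (cable number, adapter generation); values are the
# adapter-side connectors that cable plugs into.
ADAPTER_CONNECTORS = {
    (2, 3): ["C0", "C1"], (2, 4): ["C0"],
    (5, 3): ["C2", "C3"], (5, 4): ["C1"],
    (8, 3): ["C0", "C1"], (8, 4): ["C0"],
    (9, 3): ["C2"],       (9, 4): ["C1"],
}

def connectors_for(cable, gen):
    """Return the adapter connectors for a cable on a Gen 3 or Gen 4 adapter."""
    return ADAPTER_CONNECTORS[(cable, gen)]
```

For example, `connectors_for(5, 4)` returns `["C1"]`, matching the Gen 4 entry for cable 5 above.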
Server model: sixteen 2.5-inch SAS/SATA drives, eight 2.5-inch SAS/SATA/NVMe drives, the rear hot-swap drive assembly, two 8i HBA/RAID adapters, one 16i HBA/RAID adapter, one NVMe switch adapter
Notes:
• The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly and cable 6 might not be available on your server.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cables 2, 5, and 7: ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit
– Cable 6: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1*): From the SAS 0 and SAS 1 connectors on front backplane 1 to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (Power cable for front backplane 3): From the power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 5 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the 16i HBA/RAID adapter installed in PCIe slot 6 (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 6 (SAS signal cable for the rear hot-swap drive assembly*): From the signal connector on the rear hot-swap drive assembly to the 16i HBA/RAID adapter installed in PCIe slot 6 (Gen 3: connector C2; Gen 4: connector C1).
Cable 9 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Server model: twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe drives, one 24i RAID
adapter, two NVMe switch adapters
Figure 81. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe
drives, one 24i RAID adapter, and two NVMe switch adapters
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1): From the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 and C1 connectors on the 24i RAID adapter installed in PCIe slot 6.
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 5 (SAS signal cable for front backplane 2): From the SAS 0 and SAS 1 connectors on front backplane 2 to the C2 and C3 connectors on the 24i RAID adapter installed in PCIe slot 6.
Cable 6 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 7 (SAS signal cable for front backplane 3): From the SAS 0 and SAS 1 connectors on front backplane 3 to the C4 and C5 connectors on the 24i RAID adapter installed in PCIe slot 6.
Cable 8 (Power cable for front backplane 3): From the power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 9 (NVMe signal cable for front backplane 3): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 1.
Server model: twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe drives, one 32i RAID adapter, two NVMe switch adapters
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *Ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1*): From the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 connector on the 32i RAID adapter in PCIe slot 6.
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5.
Cable 5 (SAS signal cable for front backplane 2*): From the SAS 0 and SAS 1 connectors on front backplane 2 to the C1 connector on the 32i RAID adapter in PCIe slot 6.
Cable 6 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 7 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the C2 connector on the 32i RAID adapter in PCIe slot 6.
Cable 9 (NVMe signal cable for front backplane 3): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 1.
Server model: twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe drives, one 32i HBA/RAID adapter, one NVMe 1611-8P switch adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *Ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Figure 83. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe
drives, one 32i HBA/RAID adapter, and one NVMe 1611-8P switch adapter
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1*): From the SAS 0 and SAS 1 connectors on front backplane 1 to the C0 connector on the 32i RAID adapter in PCIe slot 6.
Cable 4 (SAS signal cable for front backplane 2*): From the SAS 0 and SAS 1 connectors on front backplane 2 to the C1 connector on the 32i RAID adapter in PCIe slot 6.
Cable 5 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 6 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the C2 connector on the 32i RAID adapter in PCIe slot 6.
Cable 7 (Power cable for front backplane 3): From the power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 8 (NVMe signal cable for front backplane 3): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 3 to the C2 and C3 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 1.
Cable 9 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0 and C1 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 1.
Server model: twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe drives, three 8i HBA/RAID adapters, two NVMe switch adapters
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1*): From the SAS 0 and SAS 1 connectors on front backplane 1 to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (Power cable for front backplane 3): From the power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 5 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the 8i HBA/RAID adapter installed in PCIe slot 2 (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 6 (NVMe signal cable for front backplane 3): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 1.
Cable 9 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Server model: twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe drives, one 8i HBA/RAID adapter, one 16i HBA/RAID adapter, two NVMe switch adapters
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Figure 85. Cable routing for server models with twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe drives, one 8i HBA/RAID adapter, one 16i HBA/RAID adapter, and two NVMe switch adapters
Cable 2 (SAS signal cable for front backplane 1*): From the SAS 0 and SAS 1 connectors on front backplane 1 to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (Power cable for front backplane 3): From the power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 5 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the 16i HBA/RAID adapter installed in PCIe slot 4 (Gen 3: connectors C2 and C3; Gen 4: connector C1).
Cable 6 (NVMe signal cable for front backplane 3): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 1.
Cable 7 (SAS signal cable for front backplane 2*): From the SAS 0 and SAS 1 connectors on front backplane 2 to the 16i HBA/RAID adapter installed in PCIe slot 4 (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 8 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe switch adapter installed in PCIe slot 5.
Cable 9 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Server model: twelve 2.5-inch SAS/SATA drives, twelve 2.5-inch SAS/SATA/NVMe drives, one 8i HBA/RAID adapter, one 16i HBA/RAID adapter, one NVMe 1611-8P switch adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable for front backplane 1*): From the SAS 0 and SAS 1 connectors on front backplane 1 to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 4 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the 16i HBA/RAID adapter installed in PCIe slot 4 (Gen 3: connectors C2 and C3; Gen 4: connector C1).
Cable 5 (SAS signal cable for front backplane 2*): From the SAS 0 and SAS 1 connectors on front backplane 2 to the 16i HBA/RAID adapter installed in PCIe slot 4 (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 6 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 8 (NVMe signal cable for front backplane 3): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 3 to the C2 and C3 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 1.
Cable 9 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0 and C1 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 1.
Server model: sixteen 2.5-inch NVMe drives, eight 2.5-inch SAS/SATA drives, two NVMe 810-4P switch adapters, two NVMe 1610-4P switch adapters, one 8i HBA/RAID adapter
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Figure 87. Cable routing for server models with sixteen 2.5-inch NVMe drives, eight 2.5-inch SAS/SATA drives, two NVMe 810-4P switch adapters, two NVMe 1610-4P switch adapters, and one 8i HBA/RAID adapter
Cable 2 (NVMe signal cable for front backplane 1): From the NVMe 0 and NVMe 1 connectors on front backplane 1 to the NVMe 2–3 and NVMe 0–1 connectors on the system board.
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 2 and NVMe 3 connectors on front backplane 1 to the C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 6.
Cable 4 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 5 (NVMe signal cable for front backplane 2): From the NVMe 0 connector on front backplane 2 to the C0 and C1 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 4.
Cable 6 (NVMe signal cable for front backplane 2): From the NVMe 1 connector on front backplane 2 to the C0 and C1 connectors on the NVMe 810-4P switch adapter installed in the RAID adapter slot on the system board.
Cable 7 (NVMe signal cable for front backplane 2): From the NVMe 2 and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe 1610-4P switch adapter installed in PCIe slot 1.
Cable 8 (Power cable for front backplane 3): From the power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 9 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the 8i HBA/RAID adapter installed in PCIe slot 3 (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Server model: sixteen 2.5-inch NVMe drives, eight 2.5-inch SAS/SATA drives, one 8i HBA/RAID adapter, two NVMe 1611-8P switch adapters
Notes:
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 2.5" SAS/SATA/AnyBay 8-Bay X40 RAID Cable Kit).
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the C0, C1, C2, and C3 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 6.
Cable 3 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 5.
Cable 4 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 5 (SAS signal cable for front backplane 3*): From the SAS 0 and SAS 1 connectors on front backplane 3 to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 6 (Power cable for front backplane 3): From the power connector on front backplane 3 to backplane power connector 3 on the system board.
Server model: twenty-four 2.5-inch NVMe drives, four NVMe 810-4P switch adapters, one NVMe 1610-8P switch adapter
Figure 89. Cable routing for server models with twenty-four 2.5-inch NVMe drives, four NVMe 810-4P switch adapters, and one NVMe 1610-8P switch adapter
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (NVMe signal cable for front backplane 1): From the NVMe 0 and NVMe 1 connectors on front backplane 1 to the C0, C1, C2, and C3 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 6.
Cable 3 (NVMe signal cable for front backplane 1): From the NVMe 2 and NVMe 3 connectors on front backplane 1 to the C0 and C1 connectors on the NVMe 1610-8P switch adapter installed in PCIe slot 1.
Cable 4 (Power cable for front backplane 2): From the power connector on front backplane 2 to backplane power connector 2 on the system board.
Cable 5 (NVMe signal cable for front backplane 2): From the NVMe 0 and NVMe 1 connectors on front backplane 2 to the C2 and C3 connectors on the NVMe 1610-8P switch adapter installed in PCIe slot 1.
Cable 6 (NVMe signal cable for front backplane 2): From the NVMe 2 and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 4.
Cable 7 (NVMe signal cable for the onboard NVMe connectors): From the NVMe 0–1 and NVMe 2–3 connectors on the system board to the C0 and C1 connectors on riser card 1.
Cable 9 (NVMe signal cable for front backplane 3): From the NVMe 0 and NVMe 1 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe 810-4P switch adapter installed in the RAID adapter slot on the system board.
Cable 10 (NVMe signal cable for front backplane 3): From the NVMe 2 and NVMe 3 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe 810-4P switch adapter installed in PCIe slot 2.
Server model: twenty-four 2.5-inch NVMe drives, three NVMe 1611-8P switch adapters
Figure 90. Cable routing for server models with twenty-four 2.5-inch NVMe drives and three NVMe 1611-8P switch
adapters
Cable 1 (Power cable for front backplane 1): From the power connector on front backplane 1 to backplane power connector 1 on the system board.
Cable 2 (NVMe signal cable for front backplane 1): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 1 to the C0, C1, C2, and C3 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 6.
Cable 3 (NVMe signal cable for front backplane 2): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 2 to the C0, C1, C2, and C3 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 5.
Cable 5 (Power cable for front backplane 3): From the power connector on front backplane 3 to backplane power connector 3 on the system board.
Cable 6 (NVMe signal cable for front backplane 3): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on front backplane 3 to the C0, C1, C2, and C3 connectors on the NVMe 1611-8P switch adapter installed in PCIe slot 1.
Server model: eight 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, two 8i HBA/RAID adapters
Notes:
• The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly and the 8i HBA/RAID adapter in PCIe slot 4 might not be available on your server.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cable 2: ThinkSystem SR550/SR590/SR650 3.5" SAS/SATA 8-Bay X40 RAID Cable Kit
– Cable 3: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Cable 1 (Power cable): From the power connector on the backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable*): From the SAS 0 and SAS 1 connectors on the backplane to the 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 3 (SAS signal cable for the rear hot-swap drive assembly*): From the signal connector on the rear hot-swap drive assembly to the 8i HBA/RAID adapter installed in PCIe slot 4 (Gen 3: connector C0; Gen 4: connector C0).
Server model: eight 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one 16i HBA/RAID
adapter
Figure 92. Cable routing for server models with eight 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
one 16i HBA/RAID adapter
Cable 2 (SAS signal cable*): From the SAS 0 and SAS 1 connectors on the backplane to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: connectors C0 and C1; Gen 4: connector C0).
Cable 3 (SAS signal cable for the rear hot-swap drive assembly*): From the signal connector on the rear hot-swap drive assembly to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: connector C2; Gen 4: connector C1).
Server model: twelve 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one Gen 3 16i
HBA/RAID adapter
Note: The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is
installed. Depending on the model, the rear hot-swap drive assembly might not be available on your server.
Cable 1 (Power cable): From the Power 1 connector on the front backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable): From the SAS 0, SAS 1, and SAS 2 connectors on the backplane to the C0, C1, and C2 connectors on the 16i HBA/RAID adapter on the RAID adapter slot.
Cable 3 (Power cable): From the Power 2 connector on the front backplane to backplane power connector 2 on the system board.
Cable 4 (SAS signal cable for the rear hot-swap drive assembly): From the signal connector on the rear hot-swap drive assembly to the C3 connector on the 16i HBA/RAID adapter on the RAID adapter slot.
Server model: twelve 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one Gen 4 16i
HBA/RAID adapter
Figure 94. Cable routing for server models with twelve 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, and
one Gen 4 16i HBA/RAID adapter
Cable 1 (Power cable): From the Power 1 connector on the front backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable): From the SAS 0 and SAS 1 connectors on the backplane to the C0 connector on the 16i HBA/RAID adapter on the RAID adapter slot.
Cable 4 (SAS signal cable for the rear hot-swap drive assembly): From the SAS 2 connector on the backplane and the signal connector on the rear hot-swap drive assembly to the C1 connector on the 16i HBA/RAID adapter on the RAID adapter slot.
Server model: twelve 3.5-inch SAS/SATA drives, the rear hot-swap drive assembly, one 8i HBA/RAID adapter, one 16i HBA/RAID adapter
Notes:
• The cable routing illustration is based on the scenario that the rear hot-swap drive assembly is installed. Depending on the model, the rear hot-swap drive assembly and the 8i HBA/RAID adapter might not be available on your server.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cables:
– Cable 2: ThinkSystem SR590/SR650 3.5" SAS/SATA/AnyBay 12-Bay X40 RAID Cable Kit
– Cable 4: ThinkSystem SR590/SR650 3.5" SAS/SATA 2-Bay Rear BP X40 RAID Cable Kit
Cable 1 (Power cable): From the Power 1 connector on the front backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable*): From the SAS 0, SAS 1, and SAS 2 connectors on the backplane to the 16i HBA/RAID adapter on the RAID adapter slot (Gen 3: connectors C0, C1, and C2; Gen 4: connectors C0 and C1).
Cable 3 (Power cable): From the Power 2 connector on the front backplane to backplane power connector 2 on the system board.
Cable 4 (SAS signal cable for the rear hot-swap drive assembly*): From the signal connector on the rear hot-swap drive assembly to the 8i HBA/RAID adapter installed in PCIe slot 4 (Gen 3: connector C0; Gen 4: connector C0).
Figure 96. Cable routing for server models with eight 3.5-inch SAS/SATA drives, four 3.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and one Gen 3 16i HBA/RAID adapter
Cable 1 (Power cable): From the Power 1 connector on the front backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable): From the SAS 0, SAS 1, and SAS 2 connectors on the backplane to the C0, C1, and C2 connectors on the 16i HBA/RAID adapter on the RAID adapter slot.
Cable 3 (Power cable): From the Power 2 connector on the front backplane to backplane power connector 2 on the system board.
Cable 4 (NVMe signal cable): From the NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on the front backplane to the NVMe 0–1 and NVMe 2–3 connectors on the system board.
Cable 5 (SAS signal cable for the rear hot-swap drive assembly): From the signal connector on the rear hot-swap drive assembly to the C3 connector on the 16i HBA/RAID adapter on the RAID adapter slot.
Figure 97. Cable routing for server models with eight 3.5-inch SAS/SATA drives, four 3.5-inch SAS/SATA/NVMe drives,
the rear hot-swap drive assembly, and one Gen 4 16i HBA/RAID adapter
Cable 1 (Power cable): From the Power 1 connector on the front backplane to backplane power connector 1 on the system board.
Cable 2 (SAS signal cable): From the SAS 0 and SAS 1 connectors on the backplane to the C0 connector on the 16i HBA/RAID adapter on the RAID adapter slot.
Cable 3 (Power cable): From the Power 2 connector on the front backplane to backplane power connector 2 on the system board.
Cable 5 (SAS signal cable for the rear hot-swap drive assembly): From the SAS 2 connector on the backplane and the signal connector on the rear hot-swap drive assembly to the C1 connector on the 16i HBA/RAID adapter on the RAID adapter slot.
Server model: eight 3.5-inch SAS/SATA drives, four 3.5-inch NVMe drives, one 8i HBA/RAID adapter
Notes:
• Depending on the backplane type, the connector location of the backplanes may vary slightly.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• *When a Gen 4 HBA/RAID adapter is installed, ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 3.5" SAS/SATA 8-Bay X40 RAID Cable Kit).
Figure 98. Cable routing for server models with eight 3.5-inch SAS/SATA drives, four 3.5-inch NVMe drives, and one 8i
HBA/RAID adapter
2 SAS signal cable* | SAS 0 and SAS 1 connectors on the backplane | 8i HBA/RAID adapter on the RAID adapter slot (Gen 3: C0C1; Gen 4: C0)
3 Power cable | Power 2 connector on the backplane | Backplane power connector 2 on the system board
4 NVMe signal cable | NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on the backplane | NVMe 0–1 and NVMe 2–3 connectors on the system board
Server model: eight 3.5-inch SAS/SATA drives, four 3.5-inch NVMe drives, the rear hot-swap drive
assembly, one Gen 3 8i HBA/RAID adapter
Note: This server model is supported in Chinese Mainland only.
Figure 99. Cable routing for server models with eight 3.5-inch SAS/SATA drives, four 3.5-inch NVMe drives, the rear hot-
swap drive assembly, and one Gen 3 8i HBA/RAID adapter
2 SAS signal cable | SAS 0 connector on the backplane | C0 connector on the 8i HBA/RAID adapter on the RAID adapter slot
3 SAS signal cable for the rear hot-swap drive assembly | Signal connector on the rear hot-swap drive assembly | C1 connector on the 8i HBA/RAID adapter on the RAID adapter slot
4 NVMe signal cable | NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on the backplane | NVMe 0–1 and NVMe 2–3 connectors on the system board
Server model: eight 3.5-inch SAS/SATA drives, four 3.5-inch NVMe drives, the rear hot-swap drive
assembly, one Gen 4 8i HBA/RAID adapter
Notes:
• This server model is supported in Chinese Mainland only.
• A Gen 4 HBA/RAID adapter cannot be installed in the inner RAID adapter slot.
• Ensure that you use the Gen 4 SAS signal cable (ThinkSystem SR550/SR590/SR650 3.5" SAS/SATA 8-Bay X40 RAID Cable Kit).
Cable | From | To
1 Power cable | Power 1 connector on the backplane | Backplane power connector 1 on the system board
2 SAS signal cable | SAS 0 connector on the backplane and signal connector on the rear hot-swap drive assembly | C0 connector on the 8i HBA/RAID adapter on the RAID adapter slot
3 NVMe signal cable | NVMe 0, NVMe 1, NVMe 2, and NVMe 3 connectors on the backplane | NVMe 0–1 and NVMe 2–3 connectors on the system board
Parts list
Use the parts list to identify each of the components that are available for your server.
Note: Depending on the model, your server might look slightly different from the illustration.
The parts listed in the following table are identified as one of the following:
• Tier 1 customer replaceable unit (CRU): Replacement of Tier 1 CRUs is your responsibility. If Lenovo
installs a Tier 1 CRU at your request with no service agreement, you will be charged for the installation.
• Tier 2 customer replaceable unit (CRU): You may install a Tier 2 CRU yourself or request Lenovo to
install it, at no additional charge, under the type of warranty service that is designated for your server.
• Field replaceable unit (FRU): FRUs must be installed only by trained service technicians.
For more information about ordering the parts shown in Figure 101 “Server components” on page 147:
http://datacentersupport.lenovo.com/us/en/products/servers/thinksystem/sr650/7x05/parts
It is highly recommended that you check the power summary data for your server using Lenovo Capacity Planner
before purchasing any new parts.
Index Description (√ marks the applicable category for each part: Tier 1 CRU, Tier 2 CRU, FRU, or Consumable and Structural parts)
1 Top cover √
2 RAID adapter √
3 LOM-adapter air baffle √
4 LOM adapter √
5 Memory module (DCPMM might look slightly different from the illustration) √
6 Heat sink √
7 Processor √
8 TCM/TPM adapter (for Chinese Mainland only) √
9 System board √
10 P4 GPU air baffle √
11 FHHL V100 GPU air baffle √
12 Fan √
13 Fan cage √
14 RAID super capacitor module √
15 Standard air baffle √
16 Serial port module √
17 Backplane, eight 2.5-inch hot-swap drives √
18 Backplane, twelve 3.5-inch hot-swap drives √
19 Backplane, eight 3.5-inch hot-swap drives √
20 Right rack latch, with front I/O assembly √
Power cords
Several power cords are available, depending on the country and region where the server is installed.
To view the power cords that are available for the server:
1. Go to:
http://dcsc.lenovo.com/#/
2. Click Preconfigured Model or Configure to order.
3. Enter the machine type and model for your server to display the configurator page.
4. Click Power ➙ Power Cables to see all line cords.
Note: If you replace a part, such as an adapter, that contains firmware, you might also need to update the
firmware for that part. For more information about updating firmware, see “Firmware updates” on page 14.
Installation Guidelines
Before installing components in your server, read the installation guidelines.
Attention: Prevent exposure to static electricity, which might lead to system halt and loss of data, by
keeping static-sensitive components in their static-protective packages until installation, and handling these
devices with an electrostatic-discharge wrist strap or other grounding system.
• Read the safety information and guidelines to ensure that you work safely.
– A complete list of safety information for all products is available at:
http://thinksystem.lenovofiles.com/help/topic/safety_documentation/pdf_files.html
– The following guidelines are available as well: “Handling static-sensitive devices” on page 154 and
“Working inside the server with the power on” on page 153.
• Make sure the components you are installing are supported by the server. For a list of supported optional
components for the server, see https://static.lenovo.com/us/en/serverproven/index.shtml.
• When you install a new server, download and apply the latest firmware. This will help ensure that any
known issues are addressed, and that your server is ready to work with optimal performance. Go to
ThinkSystem SR650 Drivers and Software to download firmware updates for your server.
Important: Some cluster solutions require specific code levels or coordinated code updates. If the
component is part of a cluster solution, verify that the latest level of code is supported for the cluster
solution before you update the code.
• It is good practice to make sure that the server is working correctly before you install an optional
component.
• Keep the working area clean, and place removed components on a flat and smooth surface that does not
shake or tilt.
• Do not attempt to lift an object that might be too heavy for you. If you have to lift a heavy object, read the
following precautions carefully:
– Make sure that you can stand steadily without slipping.
– Distribute the weight of the object equally between your feet.
– Use a slow lifting force. Never move suddenly or twist when you lift a heavy object.
– To avoid straining the muscles in your back, lift by standing or by pushing up with your leg muscles.
• Back up all important data before you make changes related to the disk drives.
Note: See the system specific instructions for removing or installing a hot-swap drive for any additional
procedures that you might need to perform before you remove or install the drive.
• After finishing working on the server, make sure you reinstall all safety shields, guards, labels, and ground
wires.
Notes:
1. The product is not suitable for use at visual display workplaces according to §2 of the Workplace
Regulations.
2. The set-up of the server is made in the server room only.
CAUTION:
This equipment must be installed or serviced by trained personnel, as defined by the NEC, IEC 62368-
1 & IEC 60950-1, the standard for Safety of Electronic Equipment within the Field of Audio/Video,
Information Technology and Communication Technology. Lenovo assumes you are qualified in the
servicing of equipment and trained in recognizing hazardous energy levels in products. Access to the
equipment is by the use of a tool, lock and key, or other means of security, and is controlled by the
authority responsible for the location.
Important: Electrical grounding of the server is required for operator safety and correct system function.
Proper grounding of the electrical outlet can be verified by a certified electrician.
Use the following checklist to verify that there are no potentially unsafe conditions:
1. Make sure that the power is off and the power cord is disconnected.
2. Check the power cord.
• Make sure that the third-wire ground connector is in good condition. Use a meter to measure third-
wire ground continuity for 0.1 ohm or less between the external ground pin and the frame ground.
• Make sure that the power cord is the correct type.
To view the power cords that are available for the server:
a. Go to:
http://dcsc.lenovo.com/#/
b. Click Preconfigured Model or Configure to order.
Attention: The server might stop and loss of data might occur when internal server components are
exposed to static electricity. To avoid this potential problem, always use an electrostatic-discharge wrist
strap or other grounding systems when working inside the server with the power on.
• Avoid loose-fitting clothing, particularly around your forearms. Button or roll up long sleeves before
working inside the server.
• Prevent your necktie, scarf, badge rope, or long hair from dangling into the server.
• Remove jewelry, such as bracelets, necklaces, rings, cuff links, and wrist watches.
• Remove items from your shirt pocket, such as pens and pencils, in case they fall into the server as you
lean over it.
Attention: Prevent exposure to static electricity, which might lead to system halt and loss of data, by
keeping static-sensitive components in their static-protective packages until installation, and handling these
devices with an electrostatic-discharge wrist strap or other grounding system.
Read the “Installation Guidelines” on page 151.
Step 2. Press the release latch 1 and pivot the security bezel outward to remove it from the chassis.
Attention: Before you ship the rack with the server installed, reinstall and lock the security bezel
into place.
Read the “Installation Guidelines” on page 151.
Attention: Before you ship the rack with the server installed, reinstall and lock the security bezel into place.
Step 1. If the key is held inside the security bezel, remove it from the security bezel.
Step 2. Carefully insert the tabs on the security bezel into the slots on the right rack latch. Then, press and
hold the release latch 1 and pivot the security bezel inward until the other side clicks into place.
Note: Depending on the model, the left rack latch might be assembled with a VGA connector and the right
rack latch might be assembled with the front I/O assembly.
Note: If the rack latches are not assembled with a VGA connector or the front I/O assembly, you can remove
the rack latches without powering off the server.
Figure 108. Cable routing for the VGA connector and the front I/O assembly on rack latches
Step 2. On each side of the server, remove the screws that secure the rack latch.
If you are instructed to return the old rack latches, follow all packaging instructions and use any packaging
materials that are provided.
Note: If the rack latches are not assembled with a VGA connector or the front I/O assembly, you can install
the rack latches without powering off the server.
Step 3. Install the screws to secure the rack latch on each side of the server.
Figure 115. Cable routing for the VGA connector and the front I/O assembly on rack latches
2. Complete the parts replacement. See “Complete the parts replacement” on page 282.
S033
CAUTION:
Hazardous energy present. Voltages with hazardous energy might cause heating when shorted with
metal, which might result in spattered metal, burns, or both.
S014
Note: You can remove or install a hot-swap fan without powering off the server, which helps you avoid
significant interruption to the operation of the system.
Step 1. Use a screwdriver to turn the cover lock to the unlocked position as shown.
Step 2. Press the release button on the cover latch and then fully open the cover latch.
Step 3. Slide the top cover to the rear until it is disengaged from the chassis. Then, lift the top cover off the
chassis and place the top cover on a flat clean surface.
Attention:
• Handle the top cover carefully. Dropping the top cover with the cover latch open might damage
the cover latch.
• For proper cooling and airflow, install the top cover before you turn on the server. Operating the
server with the top cover removed might damage server components.
Note: A new top cover comes without a service label attached. If you need a service label, order it
together with the new top cover. The service label is free of charge.
Note: Before you slide the top cover forward, ensure that all the tabs on the top cover engage the chassis
correctly. If the tabs do not engage the chassis correctly, it will be very difficult to remove the top cover later.
Step 1. Ensure that the cover latch is in the open position. Lower the top cover onto the chassis until both
sides of the top cover engage the guides on both sides of the chassis.
After installing the top cover, complete the parts replacement. See “Complete the parts replacement” on
page 282.
The RAID super capacitor module protects the cache memory on the installed RAID adapter. You can
purchase a RAID super capacitor module from Lenovo.
If you are instructed to return the old RAID super capacitor module, follow all packaging instructions and use
any packaging materials that are provided.
Step 1. Gently press and hold the tab on the air baffle as shown.
Step 2. Insert the RAID super capacitor module into the holder on the air baffle.
Step 3. Press down the RAID super capacitor module to install it into the holder.
S033
CAUTION:
Hazardous energy present. Voltages with hazardous energy might cause heating when shorted with
metal, which might result in spattered metal, burns, or both.
S017
CAUTION:
Hazardous moving fan blades nearby. Keep fingers and other body parts away.
Attention: For proper cooling and airflow, install the air baffle before you turn on the server.
Operating the server with the air baffle removed might damage server components.
After removing the standard air baffle, if there is a plastic filler installed in the air baffle, remove the plastic
filler.
S033
CAUTION:
Hazardous energy present. Voltages with hazardous energy might cause heating when shorted with
metal, which might result in spattered metal, burns, or both.
S017
CAUTION:
Hazardous moving fan blades nearby. Keep fingers and other body parts away.
Attention: When removing a system fan without powering off the server, do not touch the system fan cage.
Figure 127. Viewing the fan error LEDs from the top of the system fans
S033
CAUTION:
Hazardous energy present. Voltages with hazardous energy might cause heating when shorted with
metal, which might result in spattered metal, burns, or both.
S017
CAUTION:
Hazardous moving fan blades nearby. Keep fingers and other body parts away.
Attention: When installing a system fan without powering off the server, do not touch the system fan cage.
After installing the system fan, complete the parts replacement. See “Complete the parts replacement” on
page 282.
Step 1. Rotate the levers of the system fan cage to the rear of the server.
Step 2. Lift the system fan cage straight up and out of the chassis.
Step 1. Align both sides of the system fan cage with the corresponding mounting posts in the chassis.
Then, press the system fan cage straight down into the chassis.
Note: If there are system fans installed in the system fan cage, ensure that the system fans are
correctly connected to the system fan connectors on the system board.
Step 2. Rotate the levers of the system fan cage to the front of the server to secure the system fan cage.
Note: Depending on the model, your server and the front I/O assembly might look different from the
illustrations in this topic.
Step 1. Remove the screws that secure the front I/O assembly.
Step 2. Slide the front I/O assembly out of the assembly bay.
If you are instructed to return the old front I/O assembly, follow all packaging instructions and use any
packaging materials that are provided.
Note: The following procedure is based on the scenario that you are installing the front I/O assembly for
server models with eight 3.5-inch drive bays. The installation procedure is similar for the front I/O assembly
for server models with eight or sixteen 2.5-inch drive bays.
For server models with twelve 3.5-inch drive bays or twenty-four 2.5-inch drive bays, the front I/O assembly
is assembled with the right rack latch. See “Install the rack latches” on page 161 for the installation
procedures.
Before installing the front I/O assembly, touch the static-protective package that contains the new front I/O
assembly to any unpainted surface on the outside of the server. Then, take the new front I/O assembly out of
the package and place it on a static-protective surface.
Step 1. Insert the front I/O assembly into the assembly bay.
Step 2. Install the screws to secure the front I/O assembly in place.
Notes:
• The term “hot-swap drive” refers to all the supported types of hot-swap hard disk drives, hot-swap solid-
state drives, and hot-swap NVMe drives.
• Use any documentation that comes with the drive and follow those instructions in addition to the
instructions in this topic. Ensure that you have all the cables and other equipment that are specified in the
documentation that comes with the drive.
• The electromagnetic interference (EMI) integrity and cooling of the server are protected by having all drive
bays covered or occupied. The vacant bays are either covered by an EMI-protective panel or occupied by
drive fillers. When installing a drive, save the removed drive filler in case you later remove the drive
and need the drive filler to cover the bay.
• To avoid damage to the drive connectors, ensure that the top cover is in place and fully closed whenever
you install or remove a drive.
Attention: To ensure that there is adequate system cooling, do not operate the server for more than two
minutes without either a drive or a drive filler installed in each bay.
Figure 134. Opening the drive tray handle of a 2.5-inch hot-swap drive
Figure 135. Opening the drive tray handle of a 3.5-inch hot-swap drive
The following notes describe the type of drives that your server supports and other information that you must
consider when you install a drive.
• Depending on your server models, your server supports the following drive types:
– NVMe SSD
– SAS/SATA SSD
– SAS/SATA HDD
For a list of supported drives, see:
https://static.lenovo.com/us/en/serverproven/index.shtml
• The drive bays are numbered to indicate the installation order (starting from number “0”). Follow the
installation order when you install a drive. See “Front view” on page 19.
• You can mix drives of different types, different sizes, and different capacities in one system, but not in one
RAID array. The following order is recommended when installing the drives:
– Drive type priority: NVMe SSD, SAS SSD, SATA SSD, SAS HDD, SATA HDD
– Drive size priority: 2.5 inch, 3.5 inch
– Drive capacity priority: the lowest capacity first
• The drives in a single RAID array must be the same type, same size, and same capacity.
• Some server models support NVMe drives and the bays for installing NVMe drives vary by model:
Eight-2.5-inch-drive-bay models that support NVMe drives: up to four NVMe drives in bays 4–7
Sixteen-2.5-inch-drive-bay models that support NVMe drives: up to sixteen NVMe drives in bays 0–15
Twenty-2.5-inch-drive-bay models that support NVMe drives: up to sixteen NVMe drives in bays 0–19
Twenty-four-2.5-inch-drive-bay models that support NVMe drives: up to twenty-four NVMe drives in bays 0–23
Twelve-3.5-inch-drive-bay model that supports NVMe drives: up to four NVMe drives in bays 8–11
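The bay restrictions in the table above can be captured in a small lookup table. The following is an illustrative sketch only; the model keys and function name are hypothetical and not part of any Lenovo tool.

```python
# Hypothetical helper encoding the NVMe bay rules from the table above.
# Keys name the drive-bay configuration; values are (allowed bays, max drives).
NVME_BAY_SUPPORT = {
    "8x2.5": (range(4, 8), 4),     # up to four NVMe drives in bays 4-7
    "16x2.5": (range(0, 16), 16),  # up to sixteen NVMe drives in bays 0-15
    "20x2.5": (range(0, 20), 16),  # up to sixteen NVMe drives in bays 0-19
    "24x2.5": (range(0, 24), 24),  # up to twenty-four NVMe drives in bays 0-23
    "12x3.5": (range(8, 12), 4),   # up to four NVMe drives in bays 8-11
}

def nvme_bays_valid(model: str, bays: list) -> bool:
    """Return True if the proposed NVMe bay set obeys the model's limits."""
    allowed, max_drives = NVME_BAY_SUPPORT[model]
    return len(bays) <= max_drives and all(b in allowed for b in bays)
```

For example, on an eight-2.5-inch-bay model, NVMe drives in bays 4 and 5 pass the check, while a drive in bay 0 does not.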
2. Touch the static-protective package that contains the new drive to any unpainted surface on the outside
of the server. Then, take the new drive out of the package and place it on a static-protective surface.
Step 1. Ensure that the drive tray handle is in the open position. Slide the drive into the drive bay until it
snaps into position.
Step 2. Close the drive tray handle to lock the drive in place.
Step 3. Check the drive status LED to verify that the drive is operating correctly.
• If the yellow drive status LED is lit continuously, that drive is faulty and must be replaced.
• If the green drive activity LED is flashing, the drive is being accessed.
Step 4. Continue to install additional hot-swap drives if necessary.
Backplane replacement
Use this information to remove and install a hot-swap-drive backplane.
Note: Depending on the specific type, your backplane might look different from the illustration in this topic.
Watch the procedure
A video of this procedure is available at https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_
wn7D7XTgDS_.
Step 2. Record the cable connections on the backplane and then disconnect all cables from the
backplane. For information about the backplane cable routing, see “Backplane” on page 40.
If you are instructed to return the old backplane, follow all packaging instructions and use any packaging
materials that are provided.
Note:
Your server supports three types of 2.5-inch-drive backplanes: SATA/SAS 8-bay backplane (eight SATA/SAS
drive bays), AnyBay 8-bay backplane (four SATA/SAS drive bays and four NVMe drive bays), and NVMe
8-bay backplane. Depending on the backplane type and quantity, the installation location of the backplanes
varies.
Before installing the 2.5-inch-drive backplane, touch the static-protective package that contains the new
backplane to any unpainted surface on the outside of the server. Then, take the new backplane out of the
package and place it on a static-protective surface.
Step 1. Pull the release pins and slightly slide the backplane in the direction as shown.
Step 2. Pivot the backplane backward slightly to release it from the four hooks on the chassis. Then,
carefully lift the backplane out of the chassis.
Step 3. Record the cable connections on the backplane and then disconnect all cables from the
backplane. For information about the backplane cable routing, see “Backplane” on page 40.
If you are instructed to return the old backplane, follow all packaging instructions and use any packaging
materials that are provided.
Notes:
• The procedure is based on the scenario that you want to install the backplane for up to twelve 3.5-inch
drives. The procedure is similar for the backplane for up to eight 3.5-inch drives.
• If you are installing the 3.5-inch-drive backplane with expander and the 8i HBA/RAID adapter for the
server models with twelve 3.5-inch-drive bays, GPUs are not supported, the maximum supported processor
TDP is 165 watts, and you need to create the RAID volume to avoid HDD sequence disorder. In addition,
if the rear hot-swap drive assembly is installed, server performance might be degraded.
Before installing the 3.5-inch-drive backplane, touch the static-protective package that contains the new
backplane to any unpainted surface on the outside of the server. Then, take the new backplane out of the
package and place it on a static-protective surface.
Attention:
• Disconnect all power cords for this task.
• If you are removing a DCPMM in App Direct or Mixed Memory Mode, make sure to back up the stored
data, and delete any created namespace.
• Memory modules are sensitive to static discharge and require special handling. In addition to the standard
guidelines for “Handling static-sensitive devices” on page 154:
– Always wear an electrostatic-discharge strap when removing or installing memory modules.
Electrostatic-discharge gloves can also be used.
– Never hold two or more memory modules together; do not let them touch each other. Do not stack
memory modules directly on top of each other during storage.
– Never touch the gold memory module connector contacts or allow these contacts to touch the outside
of the memory module connector housing.
– Handle memory modules with care: never bend, twist, or drop a memory module.
– Do not use any metal tools (such as jigs or clamps) to handle the memory modules, because the rigid
metal may damage them.
– Do not insert memory modules while gripping the package or any passive components; the high
insertion force can crack the package or detach passive components.
If you are removing a DCPMM in App Direct or Mixed Memory Mode, do the following:
Note: If one or more DCPMMs are secured with passphrase, make sure security of every unit is
disabled before performing secure erase. In case the passphrase is lost or forgotten, contact Lenovo
service.
If the App Direct capacity is not interleaved:
a. Delete the namespace and filesystem of the DCPMM unit to be replaced in the operating system.
b. Perform secure erase on the DCPMM unit that is to be replaced. Go to Intel Optane DCPMMs ➙
Security ➙ Press to Secure Erase to perform secure erase.
Note: A DCPMM module looks slightly different from a DRAM DIMM in the illustration, but the removal
method is the same.
Watch the procedure
A video of this procedure is available at https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_
wn7D7XTgDS_.
Step 1. Open the retaining clips on each end of the memory module slot.
Attention: To avoid breaking the retaining clips or damaging the memory module slots, handle the
clips gently.
Step 2. Grasp the memory module at both ends and carefully lift it out of the slot.
Your server has 24 memory module slots. It supports up to 12 memory modules when one processor is
installed, and up to 24 memory modules when two processors are installed. It has the following features:
Depending on the memory modules installed, refer to the following topics for detailed installation rules:
• “DRAM DIMM installation rules” on page 198
The following illustration helps you to locate the memory module slots on the system board.
Note: It is recommended to install memory modules with the same rank in each channel.
Independent mode
Independent mode provides high performance memory capability. You can populate all channels with no
matching requirements. Individual channels can run at different memory module timings, but all channels
must run at the same interface frequency.
Notes:
• All memory modules to be installed must be the same type.
• All Performance+ DIMMs in the server must be of the same type, rank, and capacity (the same Lenovo
part number) to operate at 2933 MHz in the configurations with two DIMMs per channel. Performance+
DIMMs cannot be mixed with other DIMMs.
• When you install memory modules with the same rank and different capacities, install the memory module
that has the highest capacity first.
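The capacity-first rule above can be sketched as a simple sort; the dictionary fields below are hypothetical and serve only to illustrate the ordering.

```python
# Illustrative sketch: when modules share a rank but differ in capacity,
# populate the highest-capacity module first.
def population_order(modules):
    """Sort memory modules so the highest capacity is installed first."""
    return sorted(modules, key=lambda m: m["capacity_gb"], reverse=True)

modules = [{"name": "A", "capacity_gb": 16},
           {"name": "B", "capacity_gb": 64},
           {"name": "C", "capacity_gb": 32}]
order = population_order(modules)  # B (64 GB), then C (32 GB), then A (16 GB)
```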
The following table shows the memory module population sequence for independent mode when only one
processor (Processor 1) is installed.
Notes:
• If there are three identical memory modules to be installed for Processor 1, and the three memory
modules have the same Lenovo part number, move the memory module to be installed in slot 8 to slot 1.
• If there are ten identical memory modules to be installed for Processor 1, and the ten memory modules
have the same Lenovo part number, move the memory module to be installed in slot 6 to slot 12.
The following table shows the memory module population sequence for independent mode when two
processors (Processor 1 and Processor 2) are installed.
Notes:
• If there are three identical memory modules to be installed for Processor 1, and the three memory
modules have the same Lenovo part number, move the memory module to be installed in slot 8 to slot 1.
• If there are three identical memory modules to be installed for Processor 2, and the three memory
modules have the same Lenovo part number, move the memory module to be installed in slot 20 to slot
13.
• If there are ten identical memory modules to be installed for Processor 1, and the ten memory modules
have the same Lenovo part number, move the memory module to be installed in slot 2 to slot 12.
• If there are ten identical memory modules to be installed for Processor 2, and the ten memory modules
have the same Lenovo part number, move the memory module to be installed in slot 14 to slot 24.
Mirroring mode
In mirroring mode, each memory module in a pair must be identical in size and architecture. The channels are
grouped in pairs with each channel receiving the same data. One channel is used as a backup of the other,
which provides redundancy.
Notes:
• Partial memory mirroring is a sub-function of memory mirroring, which requires following the installation
rules of mirroring mode.
• All memory modules to be installed must be the same type with the same capacity, frequency, voltage,
and ranks.
• All Performance+ DIMMs in the server must be of the same type, rank, and capacity (the same Lenovo
part number) to operate at 2933 MHz in the configurations with two DIMMs per channel. Performance+
DIMMs cannot be mixed with other DIMMs.
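The pairing constraint described above (each module in a mirrored pair must be identical in size and architecture) can be expressed as a minimal check; the tuple fields are hypothetical, chosen only to represent capacity, type, and rank.

```python
# Minimal sketch of the mirroring-mode constraint: the two modules in each
# mirrored pair must be identical. Modules are modeled as
# (capacity_gb, type, rank) tuples for illustration.
def pairs_valid(pairs):
    """Return True if both modules in every mirrored pair are identical."""
    return all(a == b for a, b in pairs)

good_pair = ((32, "RDIMM", 2), (32, "RDIMM", 2))
bad_pair = ((32, "RDIMM", 2), (16, "RDIMM", 1))
```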
The following table shows the memory module population sequence for mirroring mode when only one
processor (Processor 1) is installed.
The following table shows the memory module population sequence for mirroring mode when two
processors (Processor 1 and Processor 2) are installed.
Notes:
• All memory modules to be installed must be the same type with the same capacity, frequency, voltage,
and ranks.
• All Performance+ DIMMs in the server must be of the same type, rank, and capacity (the same Lenovo
part number) to operate at 2933 MHz in the configurations with two DIMMs per channel. Performance+
DIMMs cannot be mixed with other DIMMs.
• If the installed memory modules are single-rank, follow the installation rules listed in the following
tables. If the installed memory modules have more than one rank, follow the installation rules of
independent mode.
The following table shows the memory module population sequence for rank sparing mode when two
processors (Processor 1 and Processor 2) are installed.
Notes:
• Before installing DCPMMs and DRAM DIMMs, make sure that all the following requirements are met.
• To verify whether the presently installed processors support DCPMMs, examine the four digits in the
processor description. Only processors whose description meets both of the following requirements
support DCPMMs.
– The first digit is 5 or a larger number.
Note: The only exception to this rule is Intel Xeon Silver 4215, which also supports DCPMM.
– The second digit is 2.
• DCPMMs are supported only by Intel Xeon SP Gen 2. For a list of supported processors and memory
modules, see http://www.lenovo.com/us/en/serverproven/
• When you install two or more DCPMMs, all DCPMMs must have the same Lenovo part number.
• All DRAM memory modules installed must have the same Lenovo part number.
• 16 GB RDIMMs have two different types: 16 GB 1Rx4 and 16 GB 2Rx8. The part numbers of the two
types are different.
• Supported memory capacity range varies with the following types of DCPMMs.
– Large memory tier (L): The processors with L after the four digits (for example: Intel Xeon 5215 L)
– Medium memory tier (M): The processors with M after the four digits (for example: Intel Xeon Platinum
8280 M)
– Other: Other processors that support DCPMMs (for example: Intel Xeon Gold 5222)
In addition, you can take advantage of a memory configurator, which is available at the following site:
http://1config.lenovo.com/#/memory_configuration
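The processor check described above can be sketched as a small parser. This is a hedged illustration: the function name is invented, and the second-digit test encodes the Intel Xeon SP Gen 2 naming convention (second digit 2) implied by the Gen 2-only support note above.

```python
# Hypothetical sketch of the DCPMM processor-support rules above:
# - first digit 5 or larger (exception: Intel Xeon Silver 4215)
# - second digit 2 (Intel Xeon SP Gen 2)
# - an L or M suffix selects the memory tier; otherwise "Other"
def dcpmm_support(description: str):
    """Return (supported, tier) for a description like '5215 L' or '8280 M'."""
    parts = description.split()
    digits = parts[0]
    tier = parts[1] if len(parts) > 1 and parts[1] in ("L", "M") else "Other"
    supported = digits == "4215" or (int(digits[0]) >= 5 and digits[1] == "2")
    return supported, tier
```

For example, "5215 L" resolves to a supported large-memory-tier processor, while a Gen 1 part such as "6130" does not support DCPMMs.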
The following illustration helps you to locate the memory module slots on the system board.
P: Only Data Center Persistent Memory Module (DCPMM) can be installed on the corresponding DIMM slots.
Processor 1 (DIMM slots 12 11 10 9 8 7 6 5 4 3 2 1)
Configuration:
• 1 DCPMM and 6 DIMMs: D D D P D D D
• 2 DCPMMs and 4 DIMMs: P D D D D P
• 2 DCPMMs and 6 DIMMs: D D D P P D D D
• 2 DCPMMs and 8 DIMMs: P D D D D D D D D P
• 4 DCPMMs and 6 DIMMs: D D P D P P D P D D
• 6 DCPMMs and 6 DIMMs: D P D P D P P D P D P D
Table 18. Supported DCPMM capacity in App Direct Mode with one processor
Total DCPMMs | Total DIMMs | Processor Family | 128 GB DCPMM | 256 GB DCPMM | 512 GB DCPMM
1 | 6 | L | √ | √ | √
1 | 6 | M | √ | √ | √
1 | 6 | Other | √ | √ | √2
2 | 4 | L | √ | √ | √
2 | 4 | M | √ | √ | √
2 | 4 | Other | √ | √ |
2 | 6 | L | √ | √ | √
2 | 6 | M | √ | √ | √
2 | 6 | Other | √ | √2 |
2 | 8 | L | √ | √ | √
2 | 8 | M | √ | √ | √
2 | 8 | Other | √2 | √2 |
4 | 6 | L | √ | √ | √
4 | 6 | M | √ | √ |
4 | 6 | Other | √2 | |
6 | 6 | L | √ | √ | √
6 | 6 | M | √ | √2 |
6 | 6 | Other | √1 | |
Notes:
1. Supported DIMM capacity is up to 32 GB.
2. Supported DIMM capacity is up to 64 GB.
P: Only Data Center Persistent Memory Module (DCPMM) can be installed on the corresponding DIMM slots.
Table 20. Supported DCPMM capacity in App Direct Mode with two processors
Total DCPMMs | Total DIMMs | Processor Family | 128 GB DCPMM | 256 GB DCPMM | 512 GB DCPMM
1 | 12 | L | √ | √ | √
1 | 12 | M | √ | √ | √
1 | 12 | Other | √ | √ | √2
2 | 12 | L | √ | √ | √
2 | 12 | M | √ | √ | √
2 | 12 | Other | √ | √ | √2
4 | 8 | L | √ | √ | √
4 | 8 | M | √ | √ | √
4 | 8 | Other | √ | √ |
4 | 12 | L | √ | √ | √
4 | 12 | M | √ | √ | √
4 | 12 | Other | √ | √2 |
4 | 16 | L | √ | √ | √
4 | 16 | M | √ | √ | √
4 | 16 | Other | √2 | √2 |
8 | 12 | L | √ | √ | √
8 | 12 | M | √ | √ |
8 | 12 | Other | √2 | |
12 | 12 | L | √ | √ | √
12 | 12 | M | √ | √2 |
12 | 12 | Other | √1 | |
Notes:
1. Supported DIMM capacity is up to 32 GB.
2. Supported DIMM capacity is up to 64 GB.
Memory Mode
In this mode, DCPMMs act as volatile system memory, while DRAM DIMMs act as cache. Ensure that the
ratio of DRAM DIMM capacity to DCPMM capacity is between 1:2 and 1:16.
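The 1:2 to 1:16 ratio rule amounts to simple arithmetic. This sketch uses a hypothetical helper name and takes total capacities in GB:

```python
def memory_mode_ratio_ok(dram_gb, dcpmm_gb):
    """Memory Mode rule above: the ratio of DRAM DIMM capacity to DCPMM
    capacity must be between 1:2 and 1:16, i.e. total DCPMM capacity is
    2x to 16x the total DRAM capacity."""
    if dram_gb <= 0 or dcpmm_gb <= 0:
        return False
    return 2 <= dcpmm_gb / dram_gb <= 16

# 4 x 16 GB DRAM (64 GB) caching 2 x 128 GB DCPMM (256 GB): ratio 1:4
print(memory_mode_ratio_ok(64, 256))   # True
# 6 x 64 GB DRAM (384 GB) with 2 x 128 GB DCPMM (256 GB): below 1:2
print(memory_mode_ratio_ok(384, 256))  # False
```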
P: Only Data Center Persistent Memory Module (DCPMM) can be installed on the corresponding DIMM slots.
Processor 1 (DIMM slots 12 11 10 9 8 7 6 5 4 3 2 1)
Configuration:
• 2 DCPMMs and 4 DIMMs: P D D D D P
• 2 DCPMMs and 6 DIMMs: D D D P P D D D
• 4 DCPMMs and 6 DIMMs: D D P D P P D P D D
• 6 DCPMMs and 6 DIMMs: D P D P D P P D P D P D
Table 22. Supported DCPMM capacity in Memory Mode with one processor
Total DCPMMs | Total DIMMs | Processor Family | 128 GB DCPMM | 256 GB DCPMM | 512 GB DCPMM
2 | 4 | L | √1 | √2 | √3
2 | 4 | M | √1 | √2 | √3
2 | 4 | Other | √1 | √2 |
2 | 6 | L | √1 | √2 |
2 | 6 | M | √1 | √2 |
2 | 6 | Other | √1 | |
4 | 6 | L | √1 | √2 | √4
4 | 6 | M | √1 | √2 |
4 | 6 | Other | √1 | |
6 | 6 | L | √2 | √3 | √5
6 | 6 | M | √2 | √3 |
6 | 6 | Other | √2 | |
Notes:
1. Supported DIMM capacity is 16 GB.
2. Supported DIMM capacity is 16 to 32 GB.
3. Supported DIMM capacity is 16 GB to 64 GB.
4. Supported DIMM capacity is 32 GB to 64 GB.
5. Supported DIMM capacity is 32 GB to 128 GB.
P: Only Data Center Persistent Memory Module (DCPMM) can be installed on the corresponding DIMM slots.
Processor 2 (DIMM slots 24 to 13) and Processor 1 (DIMM slots 12 to 1)
Configuration:
• 4 DCPMMs and 8 DIMMs: P D D D D P P D D D D P
• 4 DCPMMs and 12 DIMMs: D D D P P D D D D D D P P D D D
• 8 DCPMMs and 12 DIMMs: D D P D P P D P D D D D P D P P D P D D
• 12 DCPMMs and 12 DIMMs: D P D P D P P D P D P D D P D P D P P D P D P D
Table 24. Supported DCPMM capacity in Memory Mode with two processors
Total DCPMMs | Total DIMMs | Processor Family | 128 GB DCPMM | 256 GB DCPMM | 512 GB DCPMM
4 | 8 | L | √1 | √2 | √3
4 | 8 | M | √1 | √2 | √3
4 | 8 | Other | √1 | √2 |
4 | 12 | L | √1 | √2 |
4 | 12 | M | √1 | √2 |
4 | 12 | Other | √1 | |
8 | 12 | L | √1 | √2 | √4
8 | 12 | M | √1 | √2 |
8 | 12 | Other | √1 | |
12 | 12 | L | √2 | √3 | √5
12 | 12 | M | √2 | √3 |
12 | 12 | Other | √2 | |
Notes:
1. Supported DIMM capacity is 16 GB.
2. Supported DIMM capacity is 16 to 32 GB.
3. Supported DIMM capacity is 16 GB to 64 GB.
4. Supported DIMM capacity is 32 GB to 64 GB.
5. Supported DIMM capacity is 32 GB to 128 GB.
P: Only Data Center Persistent Memory Module (DCPMM) can be installed on the corresponding DIMM slots.
Processor 1 (DIMM slots 12 11 10 9 8 7 6 5 4 3 2 1)
Configuration:
• 2 DCPMMs and 4 DIMMs: P D D D D P
• 2 DCPMMs and 6 DIMMs: D D D P P D D D
• 4 DCPMMs and 6 DIMMs: D D P D P P D P D D
• 6 DCPMMs and 6 DIMMs: D P D P D P P D P D P D
Table 26. Supported DCPMM capacity in Mixed Memory Mode with one processor
Total DCPMMs | Total DIMMs | Processor Family | 128 GB DCPMM | 256 GB DCPMM | 512 GB DCPMM
2 | 4 | L | √1 | √2 |
2 | 4 | M | √1 | √2 |
2 | 4 | Other | √1 | |
2 | 6 | L | √1 | √2 |
2 | 6 | M | √1 | √2 |
2 | 6 | Other | √1 | |
4 | 6 | L | √1 | √2 | √3
4 | 6 | M | √1 | √2 |
4 | 6 | Other | √1 | |
6 | 6 | L | √1 | √2 | √3
6 | 6 | M | √1 | √2 |
6 | 6 | Other | √1 | |
Notes:
1. Supported DIMM capacity is 16 GB.
2. Supported DIMM capacity is 16 to 32 GB.
3. Supported DIMM capacity is 16 to 64 GB.
P: Only Data Center Persistent Memory Module (DCPMM) can be installed on the corresponding DIMM slots.
Processor 2 (DIMM slots 24 to 13) and Processor 1 (DIMM slots 12 to 1)
Configuration:
• 4 DCPMMs and 8 DIMMs: P D D D D P P D D D D P
• 4 DCPMMs and 12 DIMMs: D D D P P D D D D D D P P D D D
• 8 DCPMMs and 12 DIMMs: D D P D P P D P D D D D P D P P D P D D
• 12 DCPMMs and 12 DIMMs: D P D P D P P D P D P D D P D P D P P D P D P D
Table 28. Supported DCPMM capacity in Mixed Memory Mode with two processors
Total DCPMMs | Total DIMMs | Processor Family | 128 GB DCPMM | 256 GB DCPMM | 512 GB DCPMM
4 | 8 | L | √1 | √2 |
4 | 8 | M | √1 | √2 |
4 | 8 | Other | √1 | |
4 | 12 | L | √1 | √2 |
4 | 12 | M | √1 | √2 |
4 | 12 | Other | √1 | |
8 | 12 | L | √1 | √2 | √3
8 | 12 | M | √1 | √2 |
8 | 12 | Other | √1 | |
12 | 12 | L | √1 | √2 | √3
12 | 12 | M | √1 | √2 |
12 | 12 | Other | √1 | |
Notes:
1. Supported DIMM capacity is 16 GB.
2. Supported DIMM capacity is 16 to 32 GB.
3. Supported DIMM capacity is 16 to 64 GB.
Attention:
• Disconnect all power cords for this task.
• Memory modules are sensitive to static discharge and require special handling. In addition to the standard
guidelines for “Handling static-sensitive devices” on page 154:
– Always wear an electrostatic-discharge strap when removing or installing memory modules.
Electrostatic-discharge gloves can also be used.
– Never hold two or more memory modules together, and do not let them touch each other. Do not stack
memory modules directly on top of each other during storage.
– Never touch the gold memory module connector contacts or allow these contacts to touch the outside
of the memory module connector housing.
– Handle memory modules with care: never bend, twist, or drop a memory module.
– Do not use any metal tools (such as jigs or clamps) to handle the memory modules, because the rigid
metals may damage the memory modules.
– Do not insert memory modules while holding the packages or passive components; the high insertion
force can crack the packages or detach the passive components.
Note: Ensure that you observe the installation rules and sequence in “Memory module installation
rules” on page 197.
3. If you are going to install a DCPMM for the first time, refer to “DC Persistent Memory Module (DCPMM)
setup” in Setup Guide.
Note: A DCPMM module looks slightly different from a DRAM DIMM in the illustration, but the
installation method is the same.
Step 1. Open the retaining clips on each end of the memory module slot.
Attention: To avoid breaking the retaining clips or damaging the memory module slots, open and
close the clips gently.
Note: If there is a gap between the memory module and the retaining clips, the memory module
has not been correctly inserted. In this case, open the retaining clips, remove the memory module,
and then reinsert it.
If you have installed a DRAM DIMM, complete the parts replacement. See “Complete the parts replacement”
on page 282.
Notes:
• For a list of the supported RAID adapters, see:
https://static.lenovo.com/us/en/serverproven/index.shtml
• Depending on the specific type, your RAID adapter might look different from the illustrations in this topic.
• Depending on the specific server model, a NVMe switch adapter might be installed in the RAID adapter
slot. The NVMe switch adapter might be different from the RAID adapter illustration in this topic, but the
installation and removal procedures are the same.
Attention: Replacing the RAID adapter might impact your RAID configurations. Back up your data before
you begin to avoid any data loss due to a RAID configuration change.
Note: The following procedure is based on the scenario that the RAID adapter is installed in the RAID
adapter slot on the system board. For the procedure about removing the RAID adapter from the PCIe slot,
see “Remove a PCIe adapter” on page 224.
To remove the RAID adapter from the RAID adapter slot on the system board, complete the following steps:
Watch the procedure
A video of this procedure is available at https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_
wn7D7XTgDS_.
If you are instructed to return the old RAID adapter, follow all packaging instructions and use any packaging
materials that are provided.
Ensure that you follow the installation order if you install more than one RAID adapter:
• The RAID adapter slot on the system board
• The PCIe slot 4 on the system board if the serial port module is not installed
• A PCIe slot on the riser card
To install the RAID adapter in the RAID adapter slot on the system board, complete the following steps:
Watch the procedure
A video of this procedure is available at https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_
wn7D7XTgDS_.
Note: In the U.S., call 1-800-IBM-4333 for information about battery disposal.
• If you replace the original lithium battery with a heavy-metal battery or a battery with heavy-metal
components, be aware of the following environmental consideration. Batteries and accumulators that
contain heavy metals must not be disposed of with normal domestic waste. They will be taken back free
of charge by the manufacturer, distributor, or representative, to be recycled or disposed of in a proper
manner.
• To order replacement batteries, call 1-800-IBM-SERV within the U.S., and 1-800-465-7999 or 1-800-465-
6666 within Canada. Outside the U.S. and Canada, call your support center or business partner.
Note: After you replace the CMOS battery, you must reconfigure the server and reset the system date
and time.
S004
CAUTION:
When replacing the lithium battery, use only a battery with the Lenovo-specified part number or an
equivalent type of battery recommended by the manufacturer. If your system has a module containing a
lithium battery, replace it only with the same module type made by the same manufacturer. The battery
contains lithium and can explode if not properly used, handled, or disposed of.
Do not:
• Throw or immerse into water
• Heat to more than 100°C (212°F)
• Repair or disassemble
S002
CAUTION:
The power-control button on the device and the power switch on the power supply do not turn off the
electrical current supplied to the device. The device also might have more than one power cord. To
remove all electrical current from the device, ensure that all power cords are disconnected from the
power source.
Attention:
• Failing to remove the CMOS battery properly might damage the socket on the system board.
Any damage to the socket might require replacing the system board.
• Do not tilt or push the CMOS battery by using excessive force.
The following tips describe information that you must consider when installing the CMOS battery.
• Lenovo has designed this product with your safety in mind. The lithium battery must be handled correctly
to avoid possible danger. If you install the CMOS battery, you must adhere to the following instructions.
Note: In the U.S., call 1-800-IBM-4333 for information about battery disposal.
• If you replace the original lithium battery with a heavy-metal battery or a battery with heavy-metal
components, be aware of the following environmental consideration. Batteries and accumulators that
contain heavy metals must not be disposed of with normal domestic waste. They will be taken back free
of charge by the manufacturer, distributor, or representative, to be recycled or disposed of in a proper
manner.
Note: After you install the CMOS battery, you must reconfigure the server and reset the system date and
time.
S004
CAUTION:
When replacing the lithium battery, use only a battery with the Lenovo-specified part number or an
equivalent type of battery recommended by the manufacturer. If your system has a module containing a
lithium battery, replace it only with the same module type made by the same manufacturer. The battery
contains lithium and can explode if not properly used, handled, or disposed of.
Do not:
• Throw or immerse into water
• Heat to more than 100°C (212°F)
• Repair or disassemble
S002
CAUTION:
The power-control button on the device and the power switch on the power supply do not turn off the
electrical current supplied to the device. The device also might have more than one power cord. To
remove all electrical current from the device, ensure that all power cords are disconnected from the
power source.
Note: Depending on the specific type, your riser card might look different from the illustrations in this topic.
Step 2. Remove the PCIe adapters that are installed on the riser card. See “Remove a PCIe adapter from
the riser assembly” on page 225.
Step 3. Remove the two screws that secure the failing riser card. Then, remove the failing riser card from
the bracket.
Before installing a riser card, touch the static-protective package that contains the new riser card to any
unpainted surface on the outside of the server. Then, take the new riser card out of the package and place it
on a static-protective surface.
4. Complete the parts replacement. See “Complete the parts replacement” on page 282.
The PCIe adapter can be an Ethernet card, a host bus adapter, a RAID adapter, a PCIe solid-state drive, or
any other supported PCIe adapters. PCIe adapters vary by type, but the installation and removal procedures
are the same.
Notes:
• Depending on the specific type, your PCIe adapter might look different from the illustration in this topic.
• Use any documentation that comes with the PCIe adapter and follow those instructions in addition to the
instructions in this topic.
To remove a PCIe adapter from the riser assembly, complete the following steps:
Watch the procedure
A video of this procedure is available at https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_
wn7D7XTgDS_.
Step 1. Press the tab to pivot the PCIe adapter retention latch to the open position.
Notes:
• The PCIe adapter might fit tightly into the PCIe slot. If necessary, alternately move each side of
the PCIe adapter by a small, equal amount until the adapter is removed from the slot.
If you are instructed to return the old PCIe adapter, follow all packaging instructions and use any packaging
materials that are provided.
Notes:
• Depending on the specific type, your PCIe adapter might look different from the illustration in this topic.
• Use any documentation that comes with the PCIe adapter and follow those instructions in addition to the
instructions in this topic.
Step 1. Locate the PCIe slot 4. Then, pivot the PCIe adapter retention latch to the open position.
Step 2. Grasp the PCIe adapter by its edges and carefully pull it out of the PCIe slot.
Note: The PCIe adapter might fit tightly into the PCIe slot. If necessary, alternately move each
side of the PCIe adapter by a small, equal amount until the adapter is removed from the slot.
If you are instructed to return the old PCIe adapter, follow all packaging instructions and use any packaging
materials that are provided.
Observe the following PCIe slot selection priority when installing a PCIe adapter:
One processor 1
Two processors 1, 5, 6
– For server models with sixteen/twenty/twenty-four NVMe drives (with two processors installed):
One processor 1, 2, 3
Two processors 1, 2, 3, 5, 6
One processor 7, 4, 2, 3, 1
Two processors 7, 4, 2, 3, 1, 5, 6
One processor 4, 2, 3, 1
Two processors 4, 2, 3, 1, 5
One processor 4, 2, 3, 1
Two processors 4, 2, 6, 3, 5, 1
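Filling slots against a priority list like the ones above can be done mechanically. This is a sketch with a hypothetical helper; the priority list passed in must be the one that matches your server model and processor count:

```python
def next_pcie_slot(priority, occupied):
    """Return the first free slot in the given priority order, or None if
    every slot in the list is already occupied."""
    for slot in priority:
        if slot not in occupied:
            return slot
    return None

# Two processors on a sixteen/twenty/twenty-four NVMe drive model
# (priority 1, 2, 3, 5, 6), with slots 1 and 2 already occupied:
print(next_pcie_slot([1, 2, 3, 5, 6], occupied={1, 2}))  # 3
```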
Notes:
• Depending on the specific type, your PCIe adapter and riser card for the riser assembly might look
different from the illustration in this topic.
• Use any documentation that comes with the PCIe adapter and follow those instructions in addition to the
instructions in this topic.
• Do not install PCIe adapters with small form factor (SFF) connectors in PCIe slot 6.
• The ThinkSystem Xilinx Alveo U50 Data Center Accelerator Adapter is supported only when the following
requirements are met:
To install a PCIe adapter on the riser assembly, complete the following steps:
Watch the procedure
A video of this procedure is available at https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_
wn7D7XTgDS_.
Notes:
• Depending on the specific type, your PCIe adapter might look different from the illustration in this topic.
• Use any documentation that comes with the PCIe adapter and follow those instructions in addition to the
instructions in this topic.
To install a PCIe adapter on the system board, complete the following steps:
Watch the procedure
A video of this procedure is available at https://www.youtube.com/playlist?list=PLYV5R7hVcs-A25P7vBoGa_
wn7D7XTgDS_.
Step 2. Pivot the PCIe adapter retention latch to the closed position to secure the PCIe adapter in position.
GPU replacement
Use this information to remove and install the GPU.
This topic applies only to full-height GPUs and the NVIDIA P4/T4 GPU. For the replacement procedure
for low-profile GPUs, refer to “PCIe adapter replacement” on page 224.
Remove a GPU
Use this information to remove a GPU.
Notes:
• Depending on the specific type, your GPU might look different from the illustration in this topic.
• Use any documentation that comes with the GPU and follow those instructions in addition to the
instructions in this topic.
• For full-height full-length GPUs, remove the GPU holder from the GPU assembly.
• For the NVIDIA A10 GPU, if you are removing one A10 GPU from a riser assembly, remove the riser
assembly first, and then remove the A10 GPU air baffle.
• For the NVIDIA A10 GPU, if you are removing two A10 GPUs from one riser assembly, remove the
riser assembly and the FHFL GPU holder together first, and then remove the FHFL GPU holder.
Install a GPU
Use this information to install a GPU.
Notes:
• Depending on the specific type, your GPU might look different from the illustrations in this topic.
• Use any documentation that comes with the GPU and follow those instructions in addition to the
instructions in this topic.
• For the NVIDIA A10 GPU, if you are installing one A10 GPU on a riser assembly, install an A10
GPU air baffle on the large-size air baffle first.
• For the NVIDIA A10 GPU, if you are installing two A10 GPUs on one riser assembly, install the
FHFL GPU holder on the riser assembly first.
2. Complete the parts replacement. See “Complete the parts replacement” on page 282.
The rear hot-swap drive assembly enables you to install up to two 3.5-inch hot-swap drives in the rear of the
server.
To remove the rear hot-swap drive assembly, complete the following steps:
Step 1. Disconnect the signal cable from the rear hot-swap drive assembly. See “Internal cable routing” on
page 33.
Step 2. Grasp the rear hot-swap drive assembly by its edges and carefully lift it straight up and off the
chassis.
If you are instructed to return the old rear hot-swap drive assembly, follow all packaging instructions and use
any packaging materials that are provided.
Before installing the rear hot-swap drive assembly, touch the static-protective package that contains the new
rear hot-swap drive assembly to any unpainted surface on the outside of the server. Then, take the new rear
hot-swap drive assembly out of the package and place it on a static-protective surface.
Note: If you are installing the ThinkSystem SR650 Rear 3.5 HDD kit Without Fan (provided for Chinese
Mainland only), the maximum supported processor TDP is 125 watts.
To install the rear hot-swap drive assembly, complete the following steps:
Step 1. Align the mounting stud on the system board with the corresponding hole in the rear hot-swap
drive assembly. Meanwhile, align the rear of the rear hot-swap drive assembly with the
corresponding rail guides in the rear of the chassis. Then, carefully press the rear hot-swap drive
assembly straight down into the chassis until it is fully seated.
Step 2. Connect the signal cable to the rear hot-swap drive assembly. See “Internal cable routing” on page
33.
Before removing the LOM adapter, remove the top cover. See “Remove the top cover” on page 165.
Step 1. Remove the LOM-adapter air baffle by pinching the tab 1 and then lifting the air baffle out of the
server.
Step 2. Loosen the thumbscrew that secures the LOM adapter.
Step 3. Push the LOM adapter out of the connector on the system board.
Step 4. Lift the LOM adapter off the server as shown.
If you are instructed to return the old LOM adapter, follow all of the packaging instructions and use any
packaging materials that are provided.
CAUTION:
Use a tool to remove the LOM adapter slot bracket to avoid injury.
2. Touch the static-protective package that contains the new LOM adapter to any unpainted surface on the
outside of the server. Then, take the new LOM adapter out of the package and place it on a static-
protective surface.
After installing the LOM adapter, complete the parts replacement. See “Complete the parts replacement” on
page 282.
Before removing the serial port module, remove the top cover. See “Remove the top cover” on page 165.
Step 2. Connect the cable of the serial port module to the serial-port-module connector on the system
board. For the location of the serial-port-module connector, refer to “System board components”
on page 29.
To remove the M.2 backplane and M.2 drive, complete the following steps:
Note: If the M.2 backplane has two M.2 drives, they will both release outward when you slide
the retainer backward.
c. Rotate the M.2 drive away from the M.2 backplane.
d. Pull it away from the connector 2 at an angle of approximately 30 degrees.
If you are instructed to return the old M.2 backplane or M.2 drive, follow all of the packaging instructions and
use any packaging materials that are provided.
Before adjusting the retainer on the M.2 backplane, locate the correct keyhole that the retainer should be
installed into to accommodate the particular size of the M.2 drive you wish to install.
To adjust the retainer on the M.2 backplane, complete the following steps:
Notes:
• Some M.2 backplanes support two identical M.2 drives. When two M.2 drives are installed, align and
support both M.2 drives when sliding the retainer forward to secure the M.2 drives.
• Install the M.2 drive in slot 0 first.
1 Slot 0
2 Slot 1
To install the M.2 backplane and M.2 drive, complete the following steps:
Step 1. Insert the M.2 drive at an angle of approximately 30 degrees into the connector.
Note: If your M.2 backplane supports two M.2 drives, insert the M.2 drives into the connectors at
both sides.
Step 2. Rotate the M.2 drive down until the notch 1 catches on the lip of the retainer 2 .
Attention: When sliding the retainer forward, ensure that the two nubs 3 on the retainer enter the
small holes 4 on the M.2 backplane. Once they enter the holes, you will hear a soft “click” sound.
2. If you have removed the riser 2 assembly, reinstall it. See “Install a riser card” on page 221.
3. Complete the parts replacement. See “Complete the parts replacement” on page 282.
4. Use the Lenovo XClarity Provisioning Manager to configure the RAID. For more information, see:
http://sysmgt.lenovofiles.com/help/topic/LXPM/RAID_setup.html
CAUTION:
Never remove the cover on a power supply or any part that has this label attached. Hazardous voltage,
current, and energy levels are present inside any component that has this label attached. There are no
serviceable parts inside these components. If you suspect a problem with one of these parts, contact
a service technician.
S002
CAUTION:
The power-control button on the device and the power switch on the power supply do not turn off the
electrical current supplied to the device. The device also might have more than one power cord. To
remove all electrical current from the device, ensure that all power cords are disconnected from the
power source.
S001
DANGER
The following tips describe the information that you must consider when you remove a power supply with dc
input.
CAUTION:
• 240 V dc input (input range: 180-300 V dc) is supported in Chinese Mainland ONLY. A power supply
with 240 V dc input does not support hot plugging of the power cord. Before removing the power
supply with dc input, turn off the server, or disconnect the dc power sources at the breaker panel or
by turning off the power source. Then, remove the power cord.
• For ThinkSystem products to operate error-free in either a DC or AC electrical environment, a
TN-S earthing system that complies with the IEC 60364-1 (2005) standard must be present or
installed.
In the dc input state, if the power supply receptacle does not support hot plugging, never hot-plug the
device power cord; doing so might damage the equipment and cause data loss. Equipment failure or
damage caused by incorrect hot plugging is not covered by the warranty.
NEVER CONNECT OR DISCONNECT THE POWER SUPPLY CABLE AND EQUIPMENT WHILE YOUR
EQUIPMENT IS POWERED ON WITH DC SUPPLY (hot-plugging). Otherwise, you might damage the
equipment and cause data loss; damage and losses resulting from incorrect operation of the equipment
are not covered by the manufacturer's warranty.
S035
CAUTION:
Never remove the cover on a power supply or any part that has this label attached. Hazardous voltage,
current, and energy levels are present inside any component that has this label attached. There are no
serviceable parts inside these components. If you suspect a problem with one of these parts, contact
a service technician.
S019
CAUTION:
The power-control button on the device does not turn off the electrical current supplied to the device.
The device also might have more than one connection to dc power. To remove all electrical current
from the device, ensure that all connections to dc power are disconnected at the dc power input
terminals.
If you have installed the 2U CMA Upgrade Kit for Toolless Slide Rail or Toolless Slide Rail Kit with
2U CMA, do the following:
a. Press down the stop bracket 1 and rotate it to the open position.
b. Rotate the CMA out of the way to gain access to the power supply.
Step 2. Disconnect the power cord from the hot-swap power supply.
Note: If you are replacing two power supplies, replace them one at a time to ensure that the power
supply to the server is not interrupted. Do not disconnect the power cord from the power supply
replaced second until the power output LED for the power supply replaced first is lit. For the location
of the power output LED, refer to “Rear view LEDs” on page 27.
Note:
Slightly pull the power supply upwards when sliding the power supply out of the chassis, if you
have installed one of the following CMA kits:
• 2U CMA Upgrade Kit for Toolless Slide Rail
• Toolless Slide Rail Kit with 2U CMA
Important: To ensure proper cooling during normal server operation, both of the power supply bays
must be occupied: either each bay has a power supply installed, or one bay has a power supply
installed and the other has a power supply filler installed.
2. If you are instructed to return the old hot-swap power supply, follow all packaging instructions and use
any packaging materials that are provided.
The following tips describe the type of power supply that the server supports and other information that you
must consider when you install a power supply:
Notes:
• Ensure that the two power supplies installed on the server have the same wattage.
• If you are replacing the existing power supply with a new power supply of different wattage, attach the
power information label that comes with this option onto the existing label near the power supply.
S035
CAUTION:
Never remove the cover on a power supply or any part that has this label attached. Hazardous voltage,
current, and energy levels are present inside any component that has this label attached. There are no
serviceable parts inside these components. If you suspect a problem with one of these parts, contact
a service technician.
S002
CAUTION:
The power-control button on the device and the power switch on the power supply do not turn off the
electrical current supplied to the device. The device also might have more than one power cord. To
remove all electrical current from the device, ensure that all power cords are disconnected from the
power source.
S001
The following tips describe the information that you must consider when you install a power supply with dc
input.
CAUTION:
• 240 V dc input (input range: 180-300 V dc) is supported in Chinese Mainland ONLY. A power supply
with 240 V dc input does not support hot plugging of the power cord. Before removing the power
supply with dc input, turn off the server, or disconnect the dc power sources at the breaker panel or
by turning off the power source. Then, remove the power cord.
• For ThinkSystem products to operate error-free in either a DC or AC electrical environment, a
TN-S earthing system that complies with the IEC 60364-1 (2005) standard must be present or
installed.
In the dc input state, if the power supply receptacle does not support hot plugging, never hot-plug the
device power cord; doing so might damage the equipment and cause data loss. Equipment failure or
damage caused by incorrect hot plugging is not covered by the warranty.
NEVER CONNECT OR DISCONNECT THE POWER SUPPLY CABLE AND EQUIPMENT WHILE YOUR
EQUIPMENT IS POWERED ON WITH DC SUPPLY (hot-plugging). Otherwise, you might damage the
equipment and cause data loss; damage and losses resulting from incorrect operation of the equipment
are not covered by the manufacturer's warranty.
S035
CAUTION:
Never remove the cover on a power supply or any part that has this label attached. Hazardous voltage,
current, and energy levels are present inside any component that has this label attached. There are no
serviceable parts inside these components. If you suspect a problem with one of these parts, contact
a service technician.
CAUTION:
The power-control button on the device does not turn off the electrical current supplied to the device.
The device also might have more than one connection to dc power. To remove all electrical current
from the device, ensure that all connections to dc power are disconnected at the dc power input
terminals.
Before installing a hot-swap power supply, touch the static-protective package that contains the new hot-
swap power supply to any unpainted surface on the outside of the server. Then, take the new hot-swap
power supply out of the package and place it on a static-protective surface.
If you have installed the 2U CMA Upgrade Kit for Toolless Slide Rail or Toolless Slide Rail Kit with
2U CMA, do the following:
a. Press down the stop bracket 1 and rotate it to the open position.
b. Rotate the CMA out of the way to gain access to the power supply bay.
Step 2. If there is a power-supply filler installed, remove it.
For customers in Chinese Mainland, integrated TPM is not supported. However, customers in Chinese
Mainland can install a Trusted Cryptographic Module (TCM) adapter or a TPM adapter (sometimes called a
daughter card).
Before removing the TCM/TPM adapter, remove the top cover. See “Remove the top cover” on page 165.
Notes:
• Carefully handle the TCM/TPM adapter by its edges.
• Your TCM/TPM adapter might look slightly different from the illustration.
If you are instructed to return the old TCM/TPM adapter, follow all packaging instructions and use any
packaging materials that are provided.
Before installing the TCM/TPM adapter, touch the static-protective package that contains the new TCM/TPM
adapter to any unpainted surface on the outside of the server. Then, take the new TCM/TPM adapter out of
the package and place it on a static-protective surface.
Notes:
• Carefully handle the TCM by its edges.
• Your TCM/TPM adapter might look slightly different from the illustration.
After installing the TCM/TPM adapter, complete the parts replacement. See “Complete the parts
replacement” on page 282.
Attention: Before you begin replacing a processor, make sure that you are using the Lenovo-proven
alcohol cleaning pad and thermal grease.
Attention:
• Intel Xeon SP Gen 2 processors are supported on the system board with part number 01PE847. If you
use a system board with part number 01GV275, 01PE247, or 01PE934, update your system firmware to
the latest level before installing an Intel Xeon SP Gen 2 processor. Otherwise, the system cannot be
powered on.
• Each processor socket must always contain a cover or a PHM. When removing or installing a PHM,
protect empty processor sockets with a cover.
• Do not touch the processor socket or processor contacts. Processor-socket contacts are very fragile and
easily damaged. Contaminants on the processor contacts, such as oil from your skin, can cause
connection failures.
• Remove and install only one PHM at a time. If the system board supports multiple processors, install the
PHMs starting with the first processor socket.
• Do not allow the thermal grease on the processor or heat sink to come in contact with anything. Contact
with any surface can compromise the thermal grease, rendering it ineffective. Thermal grease can damage
components, such as electrical connectors in the processor socket. Do not remove the grease cover from
a heat sink until you are instructed to do so.
• To ensure the best performance, check the manufacturing date on the new heat sink and make sure that it does not exceed 2 years. If it does, wipe off the existing thermal grease and apply new grease to the heat sink for optimal thermal performance.
Note: The heat sink, processor, and processor retainer for your system might be different than those shown
in the illustrations.
1. Remove the top cover. See “Remove the top cover” on page 165.
2. Remove the air baffle. See “Remove the air baffle” on page 170.
3. Remove any parts and disconnect any cables that might impede your access to the PHM.
Attention: To prevent damage to components, make sure that you follow the indicated loosening
sequence.
a. Fully loosen the Torx T30 captive fasteners on the processor-heat-sink module in the removal
sequence shown on the heat-sink label.
b. Lift the processor-heat-sink module from the processor socket.
1. Press the retaining clip at the corner of the processor retainer closest to the pry point; then, gently pry
this corner of the retainer away from the heat sink with a flat-bladed screwdriver, using a twisting
motion to break the processor-to-heat-sink seal.
2. Release the remaining retaining clips and lift the processor and retainer from the heat sink.
3. After separating the processor and retainer from the heat sink, hold the processor and retainer with
the thermal-grease side down and the processor-contact side up to prevent the processor from
falling out of the retainer.
Note: The processor retainer will be removed and discarded in a later step and replaced with a new
one.
• If you are replacing the processor, you will be reusing the heat sink. Wipe the thermal grease from the
bottom of the heat sink using an alcohol cleaning pad.
• If you are replacing the heat sink, you will be reusing the processor. Wipe the thermal grease from the top
of the processor using an alcohol cleaning pad.
If you are instructed to return the old processor or heat sink, follow all packaging instructions and use any
packaging materials that are provided.
Attention:
• Intel Xeon SP Gen 2 processors are supported on the system board with part number 01PE847. If you use a system board with part number 01GV275, 01PE247, or 01PE934, update your system firmware to the latest level before installing an Intel Xeon SP Gen 2 processor. Otherwise, the system cannot be powered on.
• Each processor socket must always contain a cover or a PHM. When removing or installing a PHM,
protect empty processor sockets with a cover.
• Do not touch the processor socket or processor contacts. Processor-socket contacts are very fragile and
easily damaged. Contaminants on the processor contacts, such as oil from your skin, can cause
connection failures.
• Remove and install only one PHM at a time. If the system board supports multiple processors, install the
PHMs starting with the first processor socket.
• Do not allow the thermal grease on the processor or heat sink to come in contact with anything. Contact
with any surface can compromise the thermal grease, rendering it ineffective. Thermal grease can damage
components, such as electrical connectors in the processor socket. Do not remove the grease cover from
a heat sink until you are instructed to do so.
• To ensure the best performance, check the manufacturing date on the new heat sink and make sure that it does not exceed 2 years. If it does, wipe off the existing thermal grease and apply new grease to the heat sink for optimal thermal performance.
Notes:
• PHMs are keyed for the socket where they can be installed and for their orientation in the socket.
• See https://static.lenovo.com/us/en/serverproven/index.shtml for a list of processors supported for your
server. All processors on the system board must have the same speed, number of cores, and frequency.
• Before you install a new PHM or replacement processor, update your system firmware to the latest level.
See “Firmware updates” on page 14.
• Installing an additional PHM can change the memory requirements for your system. See “Memory module
installation rules” on page 197 for a list of microprocessor-to-memory relationships.
• Optional devices available for your system might have specific processor requirements. See the
documentation that comes with the optional device for information.
• The PHM for your system might be different than the PHM shown in the illustrations.
• Intel Xeon 6137, 6242R, 6246R, 6248R, 6250, 6256, and 6258R processors are supported only when the following requirements are met:
– The server chassis is the twenty-four 2.5-inch-bay chassis.
– The operating temperature is equal to or less than 30°C.
– Up to eight drives are installed in the drive bays 8–15.
• Intel Xeon 6144, 6146, 8160T, 6126T, 6244, and 6240Y processors, or processors with a TDP of 200 watts or 205 watts (excluding the 6137, 6242R, 6246R, 6248R, 6250, 6256, and 6258R), are supported only when the following requirements are met:
– The server chassis is the twenty-four 2.5-inch-bay chassis.
– Up to eight drives are installed in the drive bays 8–15 if the operating temperature is equal to or less
than 35°C, or up to sixteen drives are installed in the drive bays 0–15 if the operating temperature is
equal to or less than 30°C.
Note: The heat sink, processor, and processor retainer for your system might be different than those shown
in the illustrations.
1. Remove the existing PHM, if one is installed. See “Remove a processor and heat sink” on page 264.
2. If you are replacing a heat sink, replace the processor retainer. Processor retainers should not be reused.
Note: Replacement processors come with both rectangular and square processor retainers. A
rectangular retainer comes attached to the processor. The square retainer can be discarded.
a. Remove the old processor retainer.
Note: When the processor is out of its retainer, hold the processor by the long edges to prevent
touching the contacts or the thermal grease, if it is applied.
With the processor-contact side up, flex the ends of the retainer down and away from the processor
to release the retaining clips; then, remove the processor from the retainer. Discard the old retainer.
b. Install a new processor retainer.
1) Position the processor on the new retainer so that the triangular marks align; then, insert the
unmarked end of the processor into the retainer.
2) Holding the inserted end of the processor in place, flex the opposite end of the retainer down
and away from the processor until you can press the processor under the clip on the retainer.
To prevent the processor from falling out of the retainer after it is inserted, keep the processor-
contact side up and hold the processor-retainer assembly by the sides of the retainer.
3) If there is any old thermal grease on the processor, gently clean the top of the processor using
an alcohol cleaning pad.
Note: If you are applying new thermal grease on the top of the processor, make sure to do it
after the alcohol has fully evaporated.
3. If you are replacing a processor:
a. Remove the processor identification label from the heat sink and replace it with the new label that
comes with the replacement processor.
b. To ensure the best performance, check the manufacturing date on the new heat sink and make sure that it does not exceed 2 years. If it does, wipe off the existing thermal grease and apply new grease to the heat sink for optimal thermal performance.
c. Apply new thermal grease (1/2-syringe, 0.65 g) to the top of the new processor. If you have cleaned
the top of the processor with an alcohol cleaning pad, make sure to apply the new thermal grease
after the alcohol has fully evaporated.
4. If you are replacing a heat sink, remove the processor identification label from the old heat sink and
place it on the new heat sink in the same location. The label is on the side of the heat sink closest to the
triangular alignment mark.
If you are unable to remove the label and place it on the new heat sink, or if the label is damaged during
transfer, write the processor serial number from the processor identification label on the new heat sink in
the same location as the label would be placed using a permanent marker.
Notes:
• If you are replacing a processor, install the heat sink onto the processor and retainer while the
processor and retainer are in the shipping tray.
• If you are replacing a heat sink, remove the heat sink from its shipping tray and place the processor
and retainer in the opposite half of the heat sink shipping tray with the processor-contact side down.
To prevent the processor from falling out of the retainer, hold the processor-retainer assembly by its
sides with the processor-contact side up until you turn it over to fit in the shipping tray.
a. Align the triangular marks on the processor retainer and the heat sink or align the triangular mark on
the processor retainer with the notched corner of the heat sink.
b. Insert the processor-retainer clips into the holes on the heat sink.
c. Press the retainer into place until the clips at all four corners engage.
a. Align the triangular marks and guide pins on the processor socket with the PHM; then, insert
the PHM into the processor socket.
Attention: To prevent damage to components, make sure that you follow the indicated
tightening sequence.
b. Fully tighten the Torx T30 captive fasteners in the installation sequence shown on the heat-sink
label. Tighten the screws until they stop; then, visually inspect to make sure that there is no
gap between the screw shoulder beneath the heat sink and the microprocessor socket. (For
reference, the torque required for the nuts to fully tighten is 1.4 to 1.6 newton-meters, 12 to 14 inch-pounds.)
Important: Before you return the system board, make sure that you install the processor socket dust covers
from the new system board. To replace a processor socket dust cover:
S017
CAUTION:
Hazardous moving fan blades nearby. Keep fingers and other body parts away.
S012
CAUTION:
Hot surface nearby.
Attention: Disengage all latches, cable clips, release tabs, or locks on cable connectors beforehand.
Failing to release them before removing the cables will damage the cable connectors on the system
board. Any damage to the cable connectors may require replacing the system board.
If you are instructed to return the old system board, follow all packaging instructions and use any packaging
materials that are provided.
Important: Before you return the system board, make sure that you install the processor socket dust covers
from the new system board. To replace a processor socket dust cover:
If you are planning to recycle the system board, follow the instructions in “Disassemble the system board for
recycle” on page 301 for compliance with local regulations.
Ensure that:
• The new system board is engaged by the mounting stud 3 on the chassis.
• The rear connectors on the new system board are inserted into the corresponding holes in the
rear panel.
• The release pin 1 secures the system board in place.
There are two methods available to update the machine type and serial number:
• From Lenovo XClarity Provisioning Manager
To update the machine type and serial number from Lenovo XClarity Provisioning Manager:
1. Start the server and press F1 to display the Lenovo XClarity Provisioning Manager interface.
2. If the power-on Administrator password is required, enter the password.
3. From the System Summary page, click Update VPD.
Where:
<m/t_model>
The server machine type and model number. Type mtm xxxxyyy, where xxxx is the machine type
and yyy is the server model number.
<s/n>
The serial number on the server. Type sn zzzzzzz, where zzzzzzz is the serial number.
[access_method]
The access method that you select to use from the following methods:
xcc_user_id
The BMC/IMM/XCC account name (1 of 12 accounts). The default value is USERID.
xcc_password
The BMC/IMM/XCC account password (1 of 12 accounts).
Example commands are as follows:
onecli config set SYSTEM_PROD_DATA.SysInfoProdName <m/t_model> --bmc-username <xcc_user_id>
--bmc-password <xcc_password>
onecli config set SYSTEM_PROD_DATA.SysInfoSerialNum <s/n> --bmc-username <xcc_user_id>
--bmc-password <xcc_password>
– Online KCS access (unauthenticated and user restricted):
You do not need to specify a value for access_method when you use this access method.
Example commands are as follows:
onecli config set SYSTEM_PROD_DATA.SysInfoProdName <m/t_model>
onecli config set SYSTEM_PROD_DATA.SysInfoSerialNum <s/n>
Where:
xcc_external_ip
The BMC/IMM/XCC IP address. There is no default value. This parameter is required.
xcc_user_id
The BMC/IMM/XCC account (1 of 12 accounts). The default value is USERID.
xcc_password
The BMC/IMM/XCC account password (1 of 12 accounts).
Note: BMC, IMM, or XCC internal LAN/USB IP address, account name, and password are all
valid for this command.
Example commands are as follows:
onecli config set SYSTEM_PROD_DATA.SysInfoProdName <m/t_model>
--bmc <xcc_user_id>:<xcc_password>@<xcc_external_ip>
onecli config set SYSTEM_PROD_DATA.SysInfoSerialNum <s/n>
--bmc <xcc_user_id>:<xcc_password>@<xcc_external_ip>
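As an illustration, the remote LAN form of these commands can be assembled with concrete placeholder values. The machine type/model, serial number, address, and credentials below are hypothetical; substitute your own values.

```shell
# Illustrative only: assemble the two OneCLI commands for remote LAN access.
# All values below are placeholders, not real system data.
MTM="mtm 7X06ABC"                  # xxxx = machine type, yyy = model number
SN="sn 1234567"                    # zzzzzzz = serial number
BMC="USERID:PASSW0RD@192.0.2.10"   # <xcc_user_id>:<xcc_password>@<xcc_external_ip>

# Echo the commands instead of executing them; remove 'echo' on a system
# where OneCLI is installed and the XCC is reachable.
echo "onecli config set SYSTEM_PROD_DATA.SysInfoProdName \"$MTM\" --bmc $BMC"
echo "onecli config set SYSTEM_PROD_DATA.SysInfoSerialNum \"$SN\" --bmc $BMC"
```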
4. Reset the Lenovo XClarity Controller to the factory defaults. Go to https://sysmgt.lenovofiles.com/help/
topic/com.lenovo.systems.management.xcc.doc/NN1ia_c_resettingthexcc.html for more information.
Enable TPM/TCM
The server supports Trusted Platform Module (TPM), Version 1.2 or Version 2.0.
Note: For customers in Chinese Mainland, integrated TPM is not supported. However, customers in Chinese
Mainland can install a Trusted Cryptographic Module (TCM) adapter or a NationZ TPM adapter (sometimes
called a daughter card). Customers in Chinese Mainland should download Lenovo Business Vantage to
enable TCM. For more information, see https://datacentersupport.lenovo.com/en/en/downloads/ds548665-18alenovo_business_vantage_-release_letter-_20171205_v221770130-for-unknown-os and https://download.lenovo.com/servers/mig/2021/02/09/43299/LBV_v2.2.177.0130_readme_20180903.txt.
When a system board is replaced, you must make sure that the TPM/TCM policy is set correctly.
CAUTION:
Take special care when setting the TPM/TCM policy. If it is not set correctly, the system board can
become unusable.
Note: Although the setting undefined is available as a policy setting, it should not be used.
• From Lenovo XClarity Essentials OneCLI
Note: A local IPMI user and password must be set up in Lenovo XClarity Controller for remote access to the target system.
To set the TPM policy from Lenovo XClarity Essentials OneCLI:
1. Read TpmTcmPolicyLock to check whether the TPM_TCM_POLICY has been locked:
OneCli.exe config show imm.TpmTcmPolicyLock --override --imm <userid>:<password>@<ip_address>
Notes:
– If the read back value matches, the TPM_TCM_POLICY has been set correctly.
imm.TpmTcmPolicy is defined as follows:
– Value 0 uses the string “Undefined”, which means the UNDEFINED policy.
– Value 1 uses the string “NeitherTpmNorTcm”, which means TPM_PERM_DISABLED.
– Value 2 uses the string “TpmOnly”, which means TPM_ALLOWED.
– Value 4 uses the string “TcmOnly”, which means TCM_ALLOWED.
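For reference, the value-to-string mapping above can be sketched as a small shell helper. This is an illustrative sketch only, not part of OneCLI; value 3 is not defined in this document, so any unlisted value maps to Unknown here.

```shell
# Map an imm.TpmTcmPolicy numeric value to its policy string,
# following the table above. Illustrative helper, not a OneCLI feature.
tpm_policy_name() {
  case "$1" in
    0) echo "Undefined" ;;         # UNDEFINED policy
    1) echo "NeitherTpmNorTcm" ;;  # TPM_PERM_DISABLED
    2) echo "TpmOnly" ;;           # TPM_ALLOWED
    4) echo "TcmOnly" ;;           # TCM_ALLOWED
    *) echo "Unknown" ;;           # not defined in this document
  esac
}

tpm_policy_name 2   # prints TpmOnly
```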
– The following steps must also be used to ‘lock’ the TPM_TCM_POLICY when using OneCli commands:
5. Read TpmTcmPolicyLock to check whether the TPM_TCM_POLICY has been locked, using the following command:
OneCli.exe config show imm.TpmTcmPolicyLock --override --imm <userid>:<password>@<ip_address>
The value must be 'Disabled', which means TPM_TCM_POLICY is NOT locked and must be set.
During the reset, UEFI reads the value from imm.TpmTcmPolicyLock. If the value is 'Enabled' and the imm.TpmTcmPolicy value is valid, UEFI locks the TPM_TCM_POLICY setting.
The valid values for imm.TpmTcmPolicy are 'NeitherTpmNorTcm', 'TpmOnly', and 'TcmOnly'.
If imm.TpmTcmPolicyLock is set to 'Enabled' but the imm.TpmTcmPolicy value is invalid, UEFI rejects the 'lock' request and changes imm.TpmTcmPolicyLock back to 'Disabled'.
8. Read back the value to check whether the ‘lock’ was accepted or rejected, using the following command:
OneCli.exe config show imm.TpmTcmPolicyLock --override --imm <userid>:<password>@<ip_address>
Note: If the read back value has changed from 'Disabled' to 'Enabled', the TPM_TCM_POLICY has been locked successfully. There is no method to unlock a policy once it has been set other than replacing the system board.
imm.TpmTcmPolicyLock is defined as follows:
Value 1 uses the string “Enabled”, which locks the policy. Other values are not accepted.
This procedure also requires that Physical Presence be enabled. The default value for FRU is enabled.
PhysicalPresencePolicyConfiguration.PhysicalPresencePolicy=Enable
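The check, set, and lock steps described above can be sketched as one OneCLI sequence. This sketch only echoes each command rather than executing it, and the credentials and IP address are hypothetical placeholders; on a real system, confirm the lock state is 'Disabled' before setting the policy, and remember that a locked policy cannot be unlocked without replacing the system board.

```shell
# Sketch of the TPM_TCM_POLICY check-set-lock-verify sequence (echoed only).
IMM="USERID:PASSW0RD@192.0.2.10"   # placeholder <userid>:<password>@<ip_address>
run() { echo "OneCli.exe $*"; }    # swap 'echo' for the real binary to execute

run config show imm.TpmTcmPolicyLock --override --imm "$IMM"    # expect 'Disabled'
run config set imm.TpmTcmPolicy "TpmOnly" --override --imm "$IMM"
run config set imm.TpmTcmPolicyLock "Enabled" --override --imm "$IMM"
run config show imm.TpmTcmPolicyLock --override --imm "$IMM"    # verify 'Enabled'
```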
The Lenovo XClarity Provisioning Manager or the Lenovo XClarity Essentials OneCLI can be used to set the
TPM version.
Note: You can change the TPM version from 1.2 to 2.0 and back again. However, you can toggle
between versions a maximum of 128 times.
To set the TPM version to version 2.0:
OneCli.exe config set TrustedComputingGroup.DeviceOperation "Update to TPM2.0 compliant"
--bmc userid:password@ip_address
where:
• <userid>:<password> are the credentials used to access the BMC (Lenovo XClarity Controller
interface) of your server. The default user ID is USERID, and the default password is PASSW0RD
(zero, not an uppercase o)
• <ip_address> is the IP address of the BMC.
For more information about the Lenovo XClarity Essentials OneCLI set command, see:
http://sysmgt.lenovofiles.com/help/topic/toolsctr_cli_lenovo/onecli_r_set_command.html
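For illustration, the same command with placeholder credentials substituted looks like the following. The values are hypothetical and the command is echoed rather than executed.

```shell
# Placeholder substitution for the TPM 2.0 update command above (echoed only).
CRED="USERID:PASSW0RD"   # <userid>:<password>; PASSW0RD uses a zero, not an uppercase O
IP="192.0.2.10"          # <ip_address> of the BMC (placeholder)

echo "OneCli.exe config set TrustedComputingGroup.DeviceOperation \"Update to TPM2.0 compliant\" --bmc $CRED@$IP"
```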
Note: Make sure that the latest version of the ThinkSystem M.2 with Mirroring Enablement Kit firmware is applied, to avoid missing virtual disks or arrays after system board replacement.
Lenovo servers can be configured to automatically notify Lenovo Support if certain events are generated.
You can configure automatic notification, also known as Call Home, from management applications, such as
the Lenovo XClarity Administrator. If you configure automatic problem notification, Lenovo Support is
automatically alerted whenever a server encounters a potentially significant event.
To isolate a problem, you should typically begin with the event log of the application that is managing the
server:
• If you are managing the server from the Lenovo XClarity Administrator, begin with the Lenovo XClarity
Administrator event log.
• If you are using some other management application, begin with the Lenovo XClarity Controller event log.
Event logs
An alert is a message or other indication that signals an event or an impending event. Alerts are generated by
the Lenovo XClarity Controller or by UEFI in the servers. These alerts are stored in the Lenovo XClarity
Controller Event Log. If the server is managed by the Chassis Management Module 2 or by the Lenovo
XClarity Administrator, alerts are automatically forwarded to those management applications.
Note: For a listing of events, including user actions that might need to be performed to recover from an
event, see the Messages and Codes Reference, which is available at:
http://thinksystem.lenovofiles.com/help/topic/7X05/pdf_files.html
For more information about working with events from XClarity Administrator, see:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/events_vieweventlog.html
The Lenovo XClarity Controller monitors all components of the server and posts events in the Lenovo
XClarity Controller event log.
For more information about accessing the Lenovo XClarity Controller event log, see:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.systems.management.xcc.doc/event_log.html
If you are not sure about the cause of a problem and the power supplies are working correctly, complete the
following steps to attempt to resolve the problem:
1. Turn off the server.
2. Make sure that the server is cabled correctly.
3. Remove or disconnect the following devices if applicable, one at a time, until you find the failure. Turn on
and configure the server each time you remove or disconnect a device.
• Any external devices.
• Surge-suppressor device (on the server).
• Printer, mouse, and non-Lenovo devices.
• Each adapter.
• Hard disk drives.
• Memory modules until you reach the minimum configuration that is supported for the server.
If the problem is solved when you remove an adapter from the server, but the problem recurs when you
install the same adapter again, suspect the adapter. If the problem recurs when you replace the adapter with
a different one, try a different PCIe slot.
If the problem appears to be a networking problem and the server passes all system tests, suspect a network
cabling problem that is external to the server.
Complete the following steps to diagnose and resolve a suspected power problem.
Step 1. Check the event log and resolve any errors related to the power.
Note: Start with the event log of the application that is managing the server. For more information
about event logs, see “Event logs” on page 283.
Step 2. Check for short circuits, for example, if a loose screw is causing a short circuit on a circuit board.
If the server does not start from the minimum configuration, replace the components in the minimum
configuration one at a time until the problem is isolated.
Complete the following steps to attempt to resolve suspected problems with the Ethernet controller.
Step 1. Make sure that the correct device drivers, which come with the server, are installed and that they are at the latest level.
Step 2. Make sure that the Ethernet cable is installed correctly.
• The cable must be securely attached at all connections. If the cable is attached but the problem
remains, try a different cable.
• If you set the Ethernet controller to operate at 100 Mbps or 1000 Mbps, you must use Category
5 cabling.
Step 3. Determine whether the hub supports auto-negotiation. If it does not, try configuring the integrated
Ethernet controller manually to match the speed and duplex mode of the hub.
Step 4. Check the Ethernet controller LEDs on the rear panel of the server. These LEDs indicate whether
there is a problem with the connector, cable, or hub.
• The Ethernet link status LED is lit when the Ethernet controller receives a link pulse from the hub.
If the LED is off, there might be a defective connector or cable or a problem with the hub.
• The Ethernet transmit/receive activity LED is lit when the Ethernet controller sends or receives
data over the Ethernet network. If the Ethernet transmit/receive activity is off, make sure that the
hub and network are operating and that the correct device drivers are installed.
Step 5. Check the Network activity LED on the rear of the server. The Network activity LED is lit when data
is active on the Ethernet network. If the Network activity LED is off, make sure that the hub and
network are operating and that the correct device drivers are installed.
Step 6. Check for operating-system-specific causes of the problem, and also make sure that the operating
system drivers are installed correctly.
Step 7. Make sure that the device drivers on the client and server are using the same protocol.
If the Ethernet controller still cannot connect to the network but the hardware appears to be working, the
network administrator must investigate other possible causes of the error.
Troubleshooting by symptom
Use this information to find solutions to problems that have identifiable symptoms.
To use the symptom-based troubleshooting information in this section, complete the following steps:
1. Check the event log of the application that is managing the server and follow the suggested actions to
resolve any event codes.
The power button does not work (server does not start)
Note: The power button will not function until approximately 1 to 3 minutes after the server has been
connected to ac power.
Memory problems
Use this information to resolve issues related to memory.
• “Displayed system memory less than installed physical memory” on page 288
• “Multiple memory modules in a channel identified as failing” on page 290
• “Attempt to change to another DCPMM mode fails” on page 290
• “Extra namespace appears in an interleaved region” on page 290
Note: Each time you install or remove a memory module, you must disconnect the server from the power
source; then, wait 10 seconds before restarting the server.
1. Make sure that:
Note: DRAM DIMMs in these two modes act as cache, and are not applicable to memory
diagnostics.
5. Reverse the modules between the channels (of the same processor), and then restart the server. If the
problem is related to a memory module, replace the failing memory module.
Note: When DCPMMs are installed, only adopt this method in Memory Mode.
6. Re-enable all memory modules using the Setup Utility, and restart the system.
7. (Trained technician only) Install the failing memory module into a memory module connector for
processor 2 (if installed) to verify that the problem is not the processor or the memory module connector.
Important: Some cluster solutions require specific code levels or coordinated code updates. If the device is
part of a cluster solution, verify that the latest level of code is supported for the cluster solution before you
update the code.
Green hard disk drive activity LED does not represent actual state of associated drive
Complete the following steps until the problem is solved:
1. If the green hard disk drive activity LED does not flash when the drive is in use, run the diagnostics tests
for the hard disk drives. When you start a server and press F1, the Lenovo XClarity Provisioning Manager
interface is displayed by default. You can perform hard disk drive diagnostics from this interface. From
the Diagnostic page, click Run Diagnostic ➙ HDD test.
2. If the drive passes the test, replace the backplane.
3. If the drive fails the test, replace the drive.
Yellow hard disk drive status LED does not represent actual state of associated drive
Complete the following steps until the problem is solved:
1. Turn off the server.
2. Reseat the SAS/SATA adapter.
3. Reseat the backplane signal cable and backplane power cable.
4. Reseat the hard disk drive.
5. Turn on the server and observe the activity of the hard disk drive LEDs.
Screen is blank
1. If the server is attached to a KVM switch, bypass the KVM switch to eliminate it as a possible cause of
the problem: connect the monitor cable directly to the correct connector on the rear of the server.
2. The management controller remote presence function is disabled if you install an optional video adapter.
To use the management controller remote presence function, remove the optional video adapter.
3. If the server has graphical adapters installed, the Lenovo logo is displayed on the screen approximately 3 minutes after the server is turned on. This is normal operation while the system loads.
4. Make sure that:
• The server is turned on and there is power to the server.
The monitor has screen jitter, or the screen image is wavy, unreadable, rolling, or distorted.
1. If the monitor self-tests show that the monitor is working correctly, consider the location of the monitor.
Magnetic fields around other devices (such as transformers, appliances, fluorescents, and other
monitors) can cause screen jitter or wavy, unreadable, rolling, or distorted screen images. If this
happens, turn off the monitor.
Attention: Moving a color monitor while it is turned on might cause screen discoloration.
Move the device and the monitor at least 305 mm (12 in.) apart, and turn on the monitor.
Notes:
a. To prevent diskette drive read/write errors, make sure that the distance between the monitor and any
external diskette drive is at least 76 mm (3 in.).
b. Non-Lenovo monitor cables might cause unpredictable problems.
2. Reseat the monitor cable.
3. Replace the following components one at a time, in the order shown, restarting the server each time:
a. Monitor cable
b. Video adapter (if one is installed)
c. Monitor
d. (Trained technician only) System board
Optional-device problems
Use this information to solve problems related to optional devices.
A Lenovo optional device that was just installed does not work.
1. Make sure that:
• The device is supported for the server (see https://static.lenovo.com/us/en/serverproven/index.shtml).
• You followed the installation instructions that came with the device and the device is installed
correctly.
• You have not loosened any other installed devices or cables.
• You updated the configuration information in system setup. When you start the server, press F1 to display the system setup interface. Whenever memory or any other device is changed, you must update the configuration.
2. Reseat the device that you just installed.
3. Replace the device that you just installed.
A Lenovo optional device that worked previously does not work now.
1. Make sure that all of the cable connections for the device are secure.
2. If the device comes with test instructions, use those instructions to test the device.
3. If the failing device is a SCSI device, make sure that:
• The cables for all external SCSI devices are connected correctly.
• The last device in each SCSI chain, or the end of the SCSI cable, is terminated correctly.
Serial-device problems
Use this information to solve problems with serial ports or devices.
• “Number of displayed serial ports is less than the number of installed serial ports” on page 296
• “Serial device does not work” on page 296
Number of displayed serial ports is less than the number of installed serial ports
Complete the following steps until the problem is solved.
1. Make sure that:
• Each port is assigned a unique address in the Setup utility and none of the serial ports is disabled.
• The serial-port adapter (if one is present) is seated correctly.
2. Reseat the serial port adapter.
3. Replace the serial port adapter.
Intermittent problems
Use this information to solve intermittent problems.
Video problems:
1. Make sure that all cables and the console breakout cable are properly connected and secure.
2. Make sure that the monitor is working properly by testing it on another compute node.
3. Test the console breakout cable on a working compute node to ensure that it is operating properly.
Replace the console breakout cable if it is defective.
Keyboard problems:
Make sure that all cables and the console breakout cable are properly connected and secure.
Mouse problems:
Make sure that all cables and the console breakout cable are properly connected and secure.
Power problems
Use this information to resolve issues related to power.
System error LED is on and event log "Power supply has lost input" is displayed
To resolve the problem, ensure that:
1. The power supply is properly connected to a power cord.
2. The power cord is connected to a properly grounded electrical outlet for the server.
Network problems
Use this information to resolve issues related to networking.
Observable problems
Use this information to solve observable problems.
• “The server immediately displays the POST Event Viewer when it is turned on” on page 298
• “Server is unresponsive (POST is complete and operating system is running)” on page 299
• “Server is unresponsive (cannot press F1 to start System Setup)” on page 299
• “Voltage system board fault is displayed in the event log” on page 299
• “Unusual smell” on page 300
• “Server seems to be running hot” on page 300
• “Cannot enter legacy mode after installing a new adapter” on page 300
• “Cracked parts or cracked chassis” on page 300
The server immediately displays the POST Event Viewer when it is turned on
Complete the following steps until the problem is solved.
1. Correct any errors that are indicated by the light path diagnostics LEDs.
2. Make sure that the server supports all the processors and that the processors match in speed and
cache size.
You can view processor details from system setup.
To determine if the processor is supported for the server, see https://static.lenovo.com/us/en/
serverproven/index.shtml.
3. (Trained technician only) Make sure that Processor 1 is seated correctly.
4. (Trained technician only) Remove Processor 2 and restart the server.
5. Replace the following components one at a time, in the order shown, restarting the server each time:
After a specified number of consecutive failed attempts (automatic or manual), the server reverts to the default
UEFI configuration and starts System Setup so that you can make the necessary corrections to the
configuration and restart the server. If the server is unable to successfully complete POST with the default
configuration, there might be a problem with the system board.
You can specify the number of consecutive restart attempts in System Setup. Restart the server and press
F1 to display the Lenovo XClarity Provisioning Manager system setup interface. Then, click System Settings
➙ Recovery and RAS ➙ POST Attempts ➙ POST Attempts Limit. Available options are 3, 6, 9, and
disable.
Unusual smell
Complete the following steps until the problem is solved.
1. Check whether the smell is coming from newly installed equipment; some new components give off an odor at first.
2. If the problem remains, contact Lenovo Support.
Software problems
Use this information to solve software problems.
1. To determine whether the problem is caused by the software, make sure that:
• The server has the minimum memory that is needed to use the software. For memory requirements,
see the information that comes with the software.
Note: If you have just installed an adapter or memory, the server might have a memory-address
conflict.
• The software is designed to operate on the server.
• Other software works on the server.
• The software works on another server.
2. If you receive any error messages while you use the software, see the information that comes with the
software for a description of the messages and suggested solutions to the problem.
3. Contact your place of purchase of the software.
1. Remove the system board from the server (see “Remove the system board” on page 273).
2. After disassembling the system board, refer to local environmental, waste, or disposal regulations to ensure compliance when recycling it.
On the World Wide Web, up-to-date information about Lenovo systems, optional devices, services, and
support is available at:
http://datacentersupport.lenovo.com
You can find the product documentation for your ThinkSystem products at the following location:
http://thinksystem.lenovofiles.com/help/index.jsp
You can take these steps to try to solve the problem yourself:
• Check all cables to make sure that they are connected.
• Check the power switches to make sure that the system and any optional devices are turned on.
• Check for updated software, firmware, and operating-system device drivers for your Lenovo product. The
Lenovo Warranty terms and conditions state that you, the owner of the Lenovo product, are responsible
for maintaining and updating all software and firmware for the product (unless it is covered by an
additional maintenance contract). Your service technician will request that you upgrade your software and
firmware if the problem has a documented solution within a software upgrade.
• If you have installed new hardware or software in your environment, check https://static.lenovo.com/us/en/
serverproven/index.shtml to make sure that the hardware and software are supported by your product.
• Go to http://datacentersupport.lenovo.com and check for information to help you solve the problem.
– Check the Lenovo forums at https://forums.lenovo.com/t5/Datacenter-Systems/ct-p/sv_eg to see if
someone else has encountered a similar problem.
Contacting Support
You can contact Support to obtain help for your issue.
You can receive hardware service through a Lenovo Authorized Service Provider. To locate a service
provider authorized by Lenovo to provide warranty service, go to https://datacentersupport.lenovo.com/
serviceprovider and use the filter to search by country. For Lenovo support telephone numbers, see
https://datacentersupport.lenovo.com/supportphonelist for support details in your region.
Any reference to a Lenovo product, program, or service is not intended to state or imply that only that
Lenovo product, program, or service may be used. Any functionally equivalent product, program, or service
that does not infringe any Lenovo intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any other product, program, or service.
Lenovo may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document is not an offer and does not provide a license under any patents
or patent applications. You can send inquiries in writing to the following:
Lenovo (United States), Inc.
8001 Development Drive
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing
LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow
disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to
you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. Lenovo may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
The products described in this document are not intended for use in implantation or other life support
applications where malfunction may result in injury or death to persons. The information contained in this
document does not affect or change Lenovo product specifications or warranties. Nothing in this document
shall operate as an express or implied license or indemnity under the intellectual property rights of Lenovo or
third parties. All information contained in this document was obtained in specific environments and is
presented as an illustration. The result obtained in other operating environments may vary.
Lenovo may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this Lenovo product, and use of those Web sites is at your own risk.
Any performance data contained herein was determined in a controlled environment. Therefore, the result
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Important notes
Processor speed indicates the internal clock speed of the microprocessor; other factors also affect
application performance.
CD or DVD drive speed is the variable read rate. Actual speeds vary and are often less than the possible
maximum.
When referring to processor storage, real and virtual storage, or channel volume, KB stands for 1 024 bytes,
MB stands for 1 048 576 bytes, and GB stands for 1 073 741 824 bytes.
When referring to hard disk drive capacity or communications volume, MB stands for 1 000 000 bytes, and
GB stands for 1 000 000 000 bytes. Total user-accessible capacity can vary depending on operating
environments.
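The two unit conventions above explain why, for example, a drive marketed as 1 TB (decimal) is reported by most operating systems as roughly 931 GB (binary). A minimal arithmetic sketch, not part of the manual itself:

```python
# Decimal (marketing) vs. binary (OS-reported) capacity units.
DECIMAL_GB = 1_000_000_000   # 1 GB for drive capacity, per the convention above
BINARY_GB = 1_073_741_824    # 1 GB = 2**30 bytes for processor storage

drive_bytes = 1_000 * DECIMAL_GB   # a "1 TB" hard disk drive

# Capacity as typically reported by an operating system, in binary units:
reported = drive_bytes / BINARY_GB
print(f"A 1 TB drive holds about {reported:.1f} GB in binary units")  # about 931.3
```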
Maximum internal hard disk drive capacities assume the replacement of any standard hard disk drives and
population of all hard-disk-drive bays with the largest currently supported drives that are available from
Lenovo.
Maximum memory might require replacement of the standard memory with an optional memory module.
Each solid-state memory cell has an intrinsic, finite number of write cycles that the cell can incur. Therefore, a
solid-state device has a maximum number of write cycles that it can be subjected to, expressed as total
bytes written (TBW). A device that has exceeded this limit might fail to respond to system-generated
commands or might be incapable of being written to. Lenovo is not responsible for replacement of a device
that has exceeded its maximum guaranteed number of program/erase cycles, as documented in the Official
Published Specifications for the device.
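The TBW rating bounds a solid-state device's useful life. As a hedged illustration of the arithmetic (the rating and workload figures below are hypothetical examples, not Lenovo specifications for any device):

```python
# Rough endurance estimate from a drive's total-bytes-written (TBW) rating.
# Both figures are hypothetical examples, not product specifications.
tbw_rating_tb = 1_000     # endurance rating, in terabytes written
daily_writes_tb = 0.5     # average host writes per day, in terabytes

lifetime_days = tbw_rating_tb / daily_writes_tb
lifetime_years = lifetime_days / 365

print(f"Estimated endurance: {lifetime_days:.0f} days (~{lifetime_years:.1f} years)")
```

Actual endurance also depends on write amplification inside the device, so an estimate like this is an upper bound on host-visible writes, not a guarantee.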
Lenovo makes no representations or warranties with respect to non-Lenovo products. Support (if any) for the
non-Lenovo products is provided by the third party, not Lenovo.
Some software might differ from its retail version (if available) and might not include user manuals or all
program functionality.
Index

A
air baffle
    installing 172
    removing 170
    replacing 170
assert
    physical presence 280

B
backplane
    installing 190, 193
    removing 189, 192
    replacing 189
bezel
    installing 155
    removing 154
    replacing 154

C
cable routing
    backplane 40
    eight 2.5-inch drives 41
    eight 3.5-inch SAS/SATA drives 133
    front I/O assembly 35
    GPU 36
    sixteen 2.5-inch drives 57
    twelve 3.5-inch drives 136
    twenty 2.5-inch drives 82
    twenty-four 2.5-inch drives 83
    VGA connector 34
CMOS battery
    installing 216
    removing 214
    replacing 214
collecting service data 304
completing
    parts replacement 282
contamination, particulate and gaseous 13
cover
    installing 166
    removing 165
    replacing 165
CPU
    installing 266
creating a personalized support web page 303
custom support web page 303

D
DC Persistent Memory Module (DCPMM) 202
DCPMM 288
devices, static-sensitive
    handling 154

E
enable
    TPM 278
Ethernet
    controller
        troubleshooting 286
Ethernet controller problems
    solving 286

F
fan
    installing 177
    removing 175
    replacing 175
fan error LED 30
firmware updates 14
front I/O assembly 19, 22
    installing 181
    removing 180
    replacing 180
front view 19

G
gaseous contamination 13
Getting help 303
GPU
    installing 235
    replacing 232
graphic processing unit
    installing 235
guidelines
    options installation 151
    system reliability 153

H
handling static-sensitive devices 154
hard disk drive
    replacing 183
hard disk drive problems 290
hardware service and support telephone numbers 305
heat sink
    installing 266
    removing 264
    replacing 263
help 303
hot-swap drive
    installing 185
    replacing 183
hot-swap drives
    removing 183
hot-swap power supply
    installing 256

T
Taiwan Region import and export contact information 309
TCM 278
TCM adapter
    installing 262
    removing 261
    replacing 261
TCM policy 278
Tech Tips 17
telecommunication regulatory statement 308
telephone numbers 305
top cover
    installing 166
    removing 165
    replacing 165
TPM 278
TPM 1.2 281
TPM 2.0 281
TPM adapter
    installing 262

U
UEFI Secure Boot 281
update firmware 14
updating,
    machine type 276
USB-device problems 293

V
VGA connector 19
video problems 292

W
warranty 1
working inside the server
    power on 153