System Specifications for Multinode G6 Platforms

Platform: ANY
NX-1065-G6/NX-3060-G6/NX-8035-G6
August 13, 2024
Contents

Visio Stencils

1. Nutanix Hardware Naming Convention

2. System Specifications
   Node Naming (NX-1065-G6)
      NX-1065-G6 System Specifications
   Node Naming (NX-3060-G6)
      NX-3060-G6 System Specifications
   Node Naming (NX-8035-G6)
      NX-8035-G6 System Specifications

3. Component Specifications
   Controls and LEDs for Multinode Platforms
   LED Meanings for Network Cards
   Power Supply Unit (PSU) Redundancy and Node Configuration
   Nutanix DMI Information
   Block Connection in a Customer Environment
      Connecting the Nutanix Block

4. Memory Configurations
   Supported Memory Configurations

Copyright
   License
   Conventions
   Default Cluster Credentials
   Version
VISIO STENCILS
Visio stencils for Nutanix products are available on VisioCafe.



1. NUTANIX HARDWARE NAMING CONVENTION

Every Nutanix block has a unique name that follows the standardized Nutanix naming convention.
The hardware model name uses the following format:
Prefix-Body-Suffix, or NX-WXYZ(G|C|S)-G(4|5|6|7)

Where:

• NX is the Nutanix appliance

• W is the product series

• X is the number of nodes

• Y is the chassis form-factor

• Z is the disk form-factor

• G|C|S indicates, respectively, that the platform uses GPUs, is an AHV-only node, or supports only one CPU
• G(4|5|6|7) is the CPU generation

Example: NX-3460-G7
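As an illustration only (not part of the original specification; the regular expression and function name below are assumptions), the following Python sketch shows how a model name such as NX-3460-G7 decomposes into the fields described above:

import re

# Parses a model name of the form NX-WXYZ(G|C|S)-G(4|5|6|7) into its fields.
MODEL_RE = re.compile(
    r"^NX-(?P<series>\d)(?P<nodes>\d)(?P<chassis>\d)(?P<drive>\d)"
    r"(?P<variant>[GCS])?-G(?P<generation>[4-7])$"
)

def parse_model(name: str) -> dict:
    match = MODEL_RE.match(name)
    if match is None:
        raise ValueError(f"{name!r} does not follow the NX-WXYZ(G|C|S)-G(4|5|6|7) convention")
    return match.groupdict()

print(parse_model("NX-3460-G7"))
# {'series': '3', 'nodes': '4', 'chassis': '6', 'drive': '0', 'variant': None, 'generation': '7'}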



Figure 1: Nutanix Hardware Naming Convention

The following tables provide a detailed description of the Nutanix hardware naming convention:

Table 1: Prefix

Prefix Description

NX Indicates that the platform is sold directly by Nutanix and support calls are handled by Nutanix. NX stands for Nutanix.


Table 2: Body

Body Description

W Indicates the product series and has one of the following values:

• 1 is for small or Remote Office/Branch Office (ROBO) businesses
• 3 is for heavy compute
• 5 is for dedicated Files and Buckets
• 6 is for heavy storage
• 8 is for high performance
• 9 is for high performance and experimental businesses

X Indicates the number of nodes and has one of the following values:

• 1 is for single-node platforms
• 2 is for multinode platforms
• 3 is for multinode platforms
• 4 is for multinode platforms

Note: Although multinode platforms can have two, three, or four nodes, the documentation always shows this digit as a generic 0.

Y Indicates the chassis form-factor and has one of the following values:

• 2 for 1U-1N (one rack unit high with one node)
• 3 for 2U-2N (two rack units high with two nodes)
• 5 for 2U-1N (two rack units high with one node)
• 6 for 2U-4N (two rack units high with four nodes)
• 7 for 1U-1N (one rack unit high with one node)

Z Indicates the drive form-factor and has one of the following values:

• 0 for 2.5-inch drives
• 5 for 3.5-inch drives



Body Description

G | C | S Indicates one of the following:

• G at the end of the body stands for "graphics," meaning the platform is optimized for Graphics Processing Unit (GPU) cards. For example, the NX-3155G-G7 platform.
• C at the end of the body stands for "cold node," meaning the platform is AHV-only. Such platforms should be used only for storage. Cold nodes cannot run VMs; hence, cold nodes are the only exception where mixing nodes with different hypervisors in the same cluster is allowed. For example, the NX-6035C and NX-6035C-G5 platforms.
• S at the end of the body stands for "single socket," meaning the motherboard has only one CPU instead of the usual two. For example, the NX-1175S-G7 platform.

Table 3: Suffix

Suffix Description

G4 Indicates that the platform uses the Intel Haswell CPU

G5 Indicates that the platform uses the Intel Broadwell CPU

G6 Indicates that the platform uses the Intel Skylake CPU

G7 Indicates that the platform uses the Intel Cascade Lake CPU



2. SYSTEM SPECIFICATIONS
Node Naming (NX-1065-G6)
Nutanix assigns a name to each node in a block; the node naming scheme varies by product type.
NX-1065-G6 platforms have one, two, three, or four nodes per block.

• Node A
• Node B
• Node C
• Node D
Physical drives are arranged in the chassis according to the node order shown below. The first drive in each node contains the Controller VM and metadata.

Figure 2: NX-1065-G6 front panel

Table 4: Supported drive configurations

Hybrid SSD + HDD One SSD and two HDDs per node



Figure 3: NX-1065-G6 back panel

One NIC is supported. The supported NIC options are shown below.

• Quad-port 10 GbE
• Dual-port 10 GbE
• Dual-port 10GBase-T
• Dual-port 25 GbE

Figure 4: NIC options for the NX-1065-G6



Figure 5: Exploded view of NX-1065-G6

NX-1065-G6 System Specifications

Table 5: System Characteristics

Nodes 4 x nodes per block

CPU
• 2 x Intel Xeon Silver_4114, 10-core Skylake @ 2.2 GHz (20 cores per node)
• 2 x Intel Xeon Silver_4108, 8-core Skylake @ 1.8 GHz (16 cores per node)

Memory
• DDR4-2666, 1.2V, 32 GB, RDIMM
4 x 32 GB = 128 GB
6 x 32 GB = 192 GB
8 x 32 GB = 256 GB
12 x 32 GB = 384 GB
16 x 32 GB = 512 GB

Storage: Hybrid Carriers: 3.5-inch carriers



1 x SSD

• 960 GB
• 1.92 TB
• 3.84 TB
2 x HDD

• 2 TB
• 4 TB
• 6 TB
• 8 TB
• 12 TB

Storage: Hybrid (SED) Carriers: 3.5-inch carriers

1 x SSD

• 960 GB
• 1.92 TB
2 x HDD

• 2 TB
• 4 TB
• 6 TB
• 8 TB
• 12 TB

Hypervisor Boot Drive 2 x M.2 Device

• 240 GB

Network Serverboard

• 1 x On-board SIOM, Port 1, 10GBase-T (IPMI failover)


• 1 x On-board SIOM, Port 2, 10GBase-T
• 1 x Dedicated IPMI port, 100M/1GbE
NICs in PCIe slot 1

• 1 x Dual-port 10 GbE NIC


• 1 x Quad-port 10 GbE NIC
• 1 x Dual-port 10 GBase-T NIC
• 1 x Dual-port 25 GbE NIC



USB 2 x USB 3.0 on the serverboard

VGA 1 x VGA connector per node (15-pin female)

Expansion slot 2 x (x8) PCIe 3.0 (low-profile) per node (only the slot on the right is active)

Chassis fans 4 x fans per block

Table 6: System Characteristics

Form factor 2 RU rack-mount chassis

Block (standalone) Weight: 35.38 kg (78 lb.)
Depth: 764.75 mm (30.11 in.)
Width: 449 mm (17.68 in.)
Height: 88 mm (3.46 in.)

Note: The tolerance for dimensions with more than 2 mm thickness is +/- 1.5 mm.

Block (package with rack rails and accessories) Weight: 47.6 kg (104.9 lb.)

Node Weight: 3.2 kg (7.1 lb.)
Depth: 594 mm (23.39 in.)
Width: 176 mm (6.93 in.)
Height: 38 mm (1.5 in.)

Rack rail length Minimum: 676 mm (26.61 in.)
Maximum: 1520 mm (59.84 in.)

Table 7: Block power and electrical

Power supplies 2200-watt output @ 220-240 VAC, 9.8-10 A, 50-60 Hz
1200-watt output @ 100-240 VAC, 7.5-9.8 A, 50-60 Hz

Power consumption Maximum: 1519 W
Typical: 968 W
Block with 2 x 4-core CPU, 2.4 GHz, 256 GB x 4 nodes

Thermal dissipation Maximum: 5184 BTU/hr
Typical: 3302 BTU/hr
Block with 2 x 14-core CPU, 2.4 GHz, 256 GB x 4 nodes



Operating environment Operating temperature: 10-35 C
Non-operating temperature: -40-70 C
Operating relative humidity: 20-95% (non-condensing)
Non-operating relative humidity: 5-95% (non-condensing)

Certifications

• Energy Star
• CSAus
• FCC
• CSA
• ICES
• CE
• KCC
• RCM
• VCCI-A
• BSMI
• EAC
• SABS
• INMETRO
• S-MARK
• UKRSEPRO
• BIS
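
The thermal-dissipation figures listed above track the power-consumption figures converted from watts to BTU/hr at roughly 3.412 BTU/hr per watt, within rounding. The following minimal Python sketch of that conversion (the constant and function name are assumptions for illustration, not part of the specification) can be handy for rack-level capacity planning:

WATTS_TO_BTU_PER_HR = 3.412  # 1 W is approximately 3.412 BTU/hr

def btu_per_hr(watts: float) -> float:
    # Convert a block's power draw in watts to thermal output in BTU/hr.
    return watts * WATTS_TO_BTU_PER_HR

print(round(btu_per_hr(1519)))  # ~5183, compare with the 5184 BTU/hr maximum listed above
print(round(btu_per_hr(968)))   # ~3303, compare with the 3302 BTU/hr typical value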

Node Naming (NX-3060-G6)


Nutanix assigns a name to each node in a block; the node naming scheme varies by product type.
NX-3060-G6 platforms have one, two, three, or four nodes per block.

• Node A
• Node B
• Node C
• Node D
Physical drives are arranged in the chassis according to the node order shown below. The first drive in each node contains the Controller VM and metadata.
For all versions of AOS, the NX-3060-G6 supports hard disk drives (HDD) and solid-state drives (SSD).
For versions of AOS 5.6 and later, the NX-3060-G6 also supports non-volatile memory express drives
(NVMe).



The NX-3060-G6 supports partial population of drive slots with SSDs (not HDDs). Any configuration that includes NVMe drives must also include four SSDs, so partial population is not supported for configurations that include NVMe drives.

Figure 6: NX-3060-G6 front panel

Table 8: Supported drive configurations

Hybrid SSD + HDD Two SSDs and four HDDs per node

All-SSD Two SSDs and four empty slots per node;
four SSDs and two empty slots per node; or
six SSDs per node

Note: When partially populating a node with SSDs, you must load the drive slots in order from left to right.

Mixed SSD + NVMe Two NVMe drives and four SSDs per node

Note: The NVMe drives must go in the two leftmost drive slots in each node.

Note: AOS 5.6 or later is required for NVMe drives.



Figure 7: NX-3060-G6 back panel

One or two identical NICs are supported. The supported NIC options are shown below.

• Quad-port 10 GbE
• Dual-port 10 GbE
• Dual-port 10GBase-T
• Dual-port 25GbE

Figure 8: NIC options for the NX-3060-G6



Figure 9: Exploded view of NX-3060-G6

NX-3060-G6 System Specifications

Table 9: System Characteristics

Nodes 4 x nodes per block

CPU
• 2 x Intel Xeon Gold_6138, 20-core Skylake @ 2.0 GHz (40 cores per node)
• 2 x Intel Xeon Gold_6130, 16-core Skylake @ 2.1 GHz (32 cores per node)
• 2 x Intel Xeon Gold_5120, 14-core Skylake @ 2.2 GHz (28 cores per node)
• 2 x Intel Xeon Gold_6126, 12-core Skylake @ 2.6 GHz (24 cores per node)
• 2 x Intel Xeon Silver_4116, 12-core Skylake @ 2.1 GHz (24 cores per node)
• 2 x Intel Xeon Silver_4114, 10-core Skylake @ 2.2 GHz (20 cores per node)
• 2 x Intel Xeon Silver_4108, 8-core Skylake @ 1.8 GHz (16 cores per node)

Memory
• DDR4-2666, 1.2V, 32 GB, RDIMM
6 x 32 GB = 192 GB
8 x 32 GB = 256 GB
12 x 32 GB = 384 GB
16 x 32 GB = 512 GB
24 x 32 GB = 768 GB

Storage: Hybrid Carriers: 2.5-inch carriers



2 x SSD

• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB
4 x HDD

• 2 TB

Storage: Hybrid (SED) Carriers: 2.5-inch carriers

2 x SSD

• 960 GB
• 1.92 TB
4 x HDD

• 2 TB

Storage: All-flash Carriers: 2.5-inch carriers

2, 4, or 6 x SSD

• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB

Storage: All-flash (SED) Carriers: 2.5-inch carriers

2, 4, or 6 x SSD

• 960 GB
• 1.92 TB

Storage: SSD with NVMe Carriers: 2.5-inch carriers

4 x SSD

• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB
2 x NVMe

• 1.6 TB



Hypervisor Boot Drive 2 x M.2 Device
• 240 GB

Network Serverboard

• 1 x On-board SIOM, Port 1, 10GBase-T (IPMI failover)


• 1 x On-board SIOM, Port 2, 10GBase-T
• 1 x Dedicated IPMI port, 100M/1GbE
NICs in PCIe slot 1 or 2

• 1 x Dual-port 10 GbE NIC


• 1 x Quad-port 10 GbE NIC
• 1 x Dual-port 10 GBase-T NIC
• 1 x Dual-port 25 GbE NIC

USB 2 x USB 3.0 on the serverboard

VGA 1 x VGA connector per node (15-pin female)

Expansion slot 2 x (x8) PCIe 3.0 (low-profile) per node (both slots filled with NICs)

Chassis fans 4 x fans per block

Table 10: System Characteristics

Form factor 2 RU rack-mount chassis

Block (standalone) Weight: 35.38 kg (78 lb.)
Depth: 728.93 mm (28.7 in.)
Width: 449 mm (17.68 in.)
Height: 88 mm (3.46 in.)

Note: The tolerance for dimensions with more than 2 mm thickness is +/- 1.5 mm.

Block (package with rack rails and accessories) Weight: 47.6 kg (104.9 lb.)

Node Weight: 3.2 kg (7.1 lb.)
Depth: 594 mm (23.39 in.)
Width: 176 mm (6.93 in.)
Height: 38 mm (1.5 in.)

Rack rail length Minimum: 676 mm (26.61 in.)
Maximum: 1520 mm (59.84 in.)



Table 11: Block power and electrical

Power supplies 2200-watt output @ 220-240 VAC, 9.8-10 A, 50-60 Hz
1200-watt output @ 100-240 VAC, 7.5-9.8 A, 50-60 Hz

Power consumption Maximum: 2080 W
Typical: 1700 W
Block with 2 x 4-core CPU, 2.4 GHz, 256 GB x 4 nodes

Thermal dissipation Maximum: 7098 BTU/hr
Typical: 5800 BTU/hr
Block with 2 x 14-core CPU, 2.4 GHz, 256 GB x 4 nodes

Operating environment Operating temperature: 10-35 C
Non-operating temperature: -40-70 C
Operating relative humidity: 20-95% (non-condensing)
Non-operating relative humidity: 5-95% (non-condensing)

Certifications

• Energy Star
• CSAus
• FCC
• CSA
• ICES
• CE
• KCC
• RCM
• VCCI-A
• BSMI
• EAC
• SABS
• INMETRO
• S-MARK
• UKRSEPRO
• BIS

Node Naming (NX-8035-G6)


Nutanix assigns a name to each node in a block; the node naming scheme varies by product type.
For the NX-8035-G6, the node names are:



• Node A
• Node B
Node A physical drives are located on the left side of the chassis, and Node B drives are located on the right side. The SSDs contain the Controller VM and metadata.
For all versions of AOS, the NX-8035-G6 supports hard disk drives (HDD) and solid-state drives (SSD).
For versions of AOS 5.6 or later, the NX-8035-G6 also supports non-volatile memory express drives
(NVMe).
The NX-8035-G6 supports partial population of drive slots with SSDs (not HDDs). Any configuration that includes NVMe drives must also include four SSDs, so partial population is not supported for configurations that include NVMe drives.

Figure 10: Front panel of NX-8035-G6 block: Hybrid and all-SSD configurations

Figure 11: Front panel of NX-8035-G6 block: NVMe configuration

Table 12: Supported drive configurations

Hybrid SSD + HDD Two SSDs and four HDDs per node



All-SSD Two SSDs and four empty slots per node;
four SSDs and two empty slots per node; or
six SSDs per node

Note: When partially populating a node with SSDs, you must load the drive slots in order from left to right.

Mixed SSD + NVMe Two NVMe drives and four SSDs per node

Note: The NVMe drives must go in the two bottom drive slots in the left column in each node.

Note: AOS 5.6 and later is required for NVMe drives.

Figure 12: Back panel of NX-8035-G6 block

You can install one or two NICs. All installed NICs must be identical. Always populate the NIC slots in
order: NIC1, NIC2. Supported NICs include:

• Quad-port 10 GbE (PXE enabled)


• Dual-port 10 GbE (PXE enabled)
• Dual-port 10GBASE-T (PXE enabled)
• Dual-port 25 GbE ConnectX-4 (at least one in each node required)



Figure 13: NIC options

Figure 14: Exploded view (chassis)



Figure 15: Exploded view (node)

NX-8035-G6 System Specifications

Table 13: System Characteristics

Nodes 2 x nodes per block

CPU
• 2 x Intel Xeon Gold_6152, 22-core Skylake @ 2.1 GHz (44 cores per node)
• 2 x Intel Xeon Gold_6148, 20-core Skylake @ 2.4 GHz (40 cores per node)
• 2 x Intel Xeon Gold_6140, 18-core Skylake @ 2.3 GHz (36 cores per node)
• 2 x Intel Xeon Gold_5120, 14-core Skylake @ 2.2 GHz (28 cores per node)
• 2 x Intel Xeon Silver_4116, 12-core Skylake @ 2.1 GHz (24 cores per node)
• 2 x Intel Xeon Silver_4114, 10-core Skylake @ 2.2 GHz (20 cores per node)
• 2 x Intel Xeon Gold_6134, 8-core Skylake @ 3.2 GHz (16 cores per node)
• 2 x Intel Xeon Silver_4108, 8-core Skylake @ 1.8 GHz (16 cores per node)
• 2 x Intel Xeon Bronze_3106, 8-core Skylake @ 1.7 GHz (16 cores per node)
• 2 x Intel Xeon Gold_6128, 6-core Skylake @ 3.4 GHz (12 cores per node)



Memory
• DDR4-2666, 1.2V, 32 GB, RDIMM
6 x 32 GB = 192 GB
8 x 32 GB = 256 GB
12 x 32 GB = 384 GB
16 x 32 GB = 512 GB
24 x 32 GB = 768 GB
• DDR4-2666, 1.2V, 16 GB, RDIMM
6 x 16 GB = 96 GB

Storage: Hybrid Carriers: 3.5-inch carriers

2 x SSD

• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB
4 x HDD

• 6 TB
• 8 TB
• 12 TB

Storage: Hybrid (SED) Carriers: 3.5-inch carriers

2 x SSD

• 960 GB
• 1.92 TB
4 x HDD

• 6 TB
• 8 TB
• 12 TB

Storage: All-flash Carriers: 3.5-inch carriers

2, 4, or 6 x SSD

• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB



Storage: All-flash (SED) Carriers: 3.5-inch carriers
2, 4, or 6 x SSD

• 960 GB
• 1.92 TB

Storage: SSD with NVMe Carriers: 3.5-inch carriers

4 x SSD

• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB
2 x NVMe

• 1.6 TB

Hypervisor Boot Drive 2 x M.2 Device

• 240 GB

Network Serverboard

• 1 x On-board SIOM, Port 1, 10GBase-T (IPMI failover)


• 1 x On-board SIOM, Port 2, 10GBase-T
• 1 x Dedicated IPMI port, 100M/1GbE
NICs in PCIe slot 1 or 2

• 1 x Dual-port 10 GbE NIC


• 1 x Quad-port 10 GbE NIC
• 1 x Dual-port 10 GBase-T NIC
• 1 x Dual-port 25 GbE NIC

USB 2 x USB 3.0 on the serverboard

VGA 1 x VGA connector per node (15-pin female)

Expansion slot 2 x (x8) PCIe 3.0 (low-profile) per node (both slots filled with NICs)

Chassis fans 4 x fans per block

Table 14: System Characteristics

Form factor 2 RU rack-mount chassis



Block (standalone) Weight: 32 kg (70.5 lb.)
Depth: 764.75 mm (30.11 in.)
Width: 449 mm (17.68 in.)
Height: 88 mm (3.46 in.)

Note: The tolerance for dimensions with more than 2 mm thickness is +/- 1.5 mm.

Block (package with rack rails and accessories) Weight: 45.68 kg (100.7 lb.)

Rack rail length Minimum: 650.25 mm (25.6 in.)
Maximum: 839.5 mm (33.05 in.)

Table 15: Block power and electrical

Power supplies 2200-watt output @ 220-240 VAC, 9.8-10 A, 50-60 Hz
1200-watt output @ 100-240 VAC, 7.5-9.8 A, 50-60 Hz

Power consumption Maximum: 1443 W
Typical: 938 W
Block with 2 x 20-core CPU, 2.4 GHz, 24 x 32 GB DIMM, x 2 nodes

Thermal dissipation Maximum: 4925 BTU/hr
Typical: 3202 BTU/hr
Block with 2 x 20-core CPU, 2.4 GHz, 24 x 32 GB DIMM, x 2 nodes

Operating environment Operating temperature: 10-35 C
Non-operating temperature: -40-70 C
Operating relative humidity: 20-95% (non-condensing)
Non-operating relative humidity: 5-95% (non-condensing)

Certifications

• Energy Star
• CSAus
• FCC
• CSA
• ICES
• CE
• KCC



• RCM
• VCCI-A
• BSMI
• EAC
• SABS
• INMETRO
• S-MARK
• UKRSEPRO
• BIS



3. COMPONENT SPECIFICATIONS
Controls and LEDs for Multinode Platforms

Figure 16: Front of chassis LEDs

Name Color Function

Power button Green Power On/Off

Network activity Green (flashing) 10 GbE LAN1, LAN2 activity

Alert indicator Red Solid - overheating condition; flashing every second - fan failure; flashing every four seconds - power failure

UID button Blue Blinking blue - node identified (on the UID button and at the rear of the node)

HDD activity (right LED) Blue or green HDD/SSD activity



Name Color Function

HDD failure (left LED) Red HDD/SSD failure reported by Nutanix software

Table 16: Drive LEDs

Top LED: Activity Blue or green: blinking = I/O activity, off = idle

Bottom LED: Status Solid red = failed drive; on for five seconds after boot = power on

Figure 17: Back panel LEDs of 2U4N platforms

Name Color Function

IPMI, left LED Green 100 Mbps

Amber 1 Gbps

IPMI, right LED Yellow Flashing - Activity

1 GbE, right LED Off No link or 10 Mbps

Green 100 Mbps

Amber 1 Gbps

1 GbE, left LED Yellow Flashing - Activity

10 GbE, top Link

10 GbE, bottom Activity



Name Color Function

Locator LED (UID) Blue Blinking - Node identified

Table 17: Power Supply LED Indicators

Single LED displays two colors

Power supply condition Green LED Amber LED

No AC power to all power supplies Off Off

Power supply critical events that cause a shutdown (failure, over-current protection, over-voltage protection, fan fail, over-temperature protection, under-voltage protection) Off Amber

Power supply warning events; the power supply continues to operate (high temperature, over voltage, under voltage, and other conditions) Off 1 Hz blink amber

When AC is present only: 12VSB on (PS off) or PS in sleep state 1 Hz blink green Off

Output on and OK Green Off

AC cord unplugged Off Amber

Power supply firmware updating mode 2 Hz blink green Off

For LED states for add-on NICs, see LED Meanings for Network Cards.

LED Meanings for Network Cards


Descriptions of LEDs for supported NICs.
Different NIC manufacturers use different LED colors and blink states. Not all NICs are supported for every
Nutanix platform. See the System Specifications for your platform to verify which NICs are supported.

Table 18: On-Board Ports

NIC Link (LNK) LED Activity (ACT) LED

1 GbE dedicated IPMI Green: 100 Mbps Blinking yellow: activity

Yellow: 1 Gbps

1 GbE shared IPMI Green: 1 Gbps Blinking yellow: activity

Yellow: 100 Mbps

Unit identification (UID) LED Blinking blue: UID has been activated.



Table 19: SuperMicro NICs

NIC Link (LNK) LED Activity (ACT) LED

Dual-port 1 GbE Green: 100 Mbps Blinking yellow: activity

Yellow: 1Gb/s
OFF: 10Mb/s or No Connection

Dual-port 10G SFP+ Green: 10 Gb Blinking green: activity

Yellow: 1 Gb

Table 20: Silicom NICs

NIC Link (LNK) LED Activity (ACT) LED

Dual-port 10G SFP+ Green: all speeds Solid green: idle


Blinking green: activity

Quad-port 10G SFP+ Blue: 10 Gb Solid green: idle


Yellow: 1 Gb Blinking green: activity

Dual-port 10G BaseT Yellow: 1Gb/s Blinking green: activity

Green: 10Gb/s

Table 21: Mellanox NICs

NIC Link (LNK) LED Activity (ACT) LED

Dual-port 10G SFP+ ConnectX-3 Pro
Link (LNK) LED - Green: 10 Gb speed with no traffic; blinking yellow: 10 Gb speed with traffic; not illuminated: no connection
Activity (ACT) LED - Blinking yellow and green: activity

Dual-port 40G SFP+ ConnectX-3 Pro
Link (LNK) LED - Solid green: good link; not illuminated: no activity
Activity (ACT) LED - Blinking yellow: activity

Dual-port 10G SFP28 ConnectX-4 Lx
Link (LNK) LED - Solid yellow: good link; blinking yellow: physical problem with link
Activity (ACT) LED - Solid green: valid link with no traffic; blinking green: valid link with active traffic

Dual-port 25G SFP28 ConnectX-4 Lx
Link (LNK) LED - Solid yellow: good link; blinking yellow: physical problem with link
Activity (ACT) LED - Solid green: valid link with no traffic; blinking green: valid link with active traffic



Power Supply Unit (PSU) Redundancy and Node Configuration
Note: Nutanix recommends that you carefully plan your AC power source needs, especially in cases where
the cluster consists of mixed models.
Nutanix recommends a 180-240 V AC power source to ensure PSU redundancy.
However, according to the table below and depending on the number of nodes in the chassis,
some NX series platforms can maintain PSU redundancy on a 100-210 V AC power source.

Table 22: PSU Redundancy and Node Configuration

Nutanix Model Number of Nodes Redundancy at 110V Redundancy at 208V

NX-1065-G6 1 to 2 YES YES


3 to 4 NO YES

NX-1175S-G6 1 YES YES

NX-3060-G6 1 to 2 YES YES

3 to 4 NO YES

NX-3155G-G6 1, with GPU NO YES

1, without GPU YES YES

NX-3170-G6 1 NO YES

NX-5155-G6 1 YES YES

NX-8035-G6 1 YES YES

2 NO YES

NX-8155-G6 1 YES YES

Nutanix DMI Information


Format for Nutanix DMI strings.
VMware reads model information from the Desktop Management Interface (DMI) table.
For platforms with Intel Skylake CPUs, Nutanix provides model information to the DMI table in the following format:
NX-<motherboard_id><NIC_id>-<HBA_id>-G6
motherboard_id has the following options:

Table 23: ID options

Argument Option

T X11 multi-node motherboard

U X11 single-node motherboard



Argument Option

W X11 single-socket single-node motherboard

NIC_id has the following options:

Table 24: NIC ID options

Argument Option

00 uses on-board NIC

D1 dual-port 1G NIC

Q1 quad-port 1G NIC
DT dual-port 10GBaseT NIC

QT quad-port 10GBaseT NIC

DS dual-port SFP+ NIC

QS quad-port SFP+ NIC

HBA_id specifies the number of nodes and type of HBA controller. For example:

Table 25: HBA ID options

Argument Option

1NL3 single-node LSI3008

2NL3 2-node LSI3008

4NL3 4-node LSI3008

Table 26: Examples

DMI string Explanation Nutanix model

NX-TDT-4NL3-G6 X11 motherboard with on-board NX-1065-G6, NX-3060-G6


NIC, 4 nodes with LSI3008 HBA
controllers

NX-TDT-2NL3-G6 X11 motherboard with on-board NX-8035-G6


NIC, 2 nodes with LSI3008 HBA
controllers

NX-UDT-1NL3-G6 X11 motherboard with on-board NX-3155G-G6, NX-3170-G6,


NIC, one node with LSI3008 HBA NX-5155-G6, NX-8155-G6
controller

NX-W00-1NL3-G6 X11 single-socket motherboard NX-1175S-G6


with on-board NIC, one node with
LSI3008 HBA controller
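
As an illustration only (the dictionaries and function name below are assumptions, not part of the Nutanix documentation), a short Python sketch can split a G6 DMI string into the fields defined in the tables above:

# Splits a DMI string of the form NX-<motherboard_id><NIC_id>-<HBA_id>-G6.
MOTHERBOARD_IDS = {"T": "X11 multi-node", "U": "X11 single-node", "W": "X11 single-socket single-node"}
NIC_IDS = {"00": "on-board NIC", "D1": "dual-port 1G", "Q1": "quad-port 1G",
           "DT": "dual-port 10GBaseT", "QT": "quad-port 10GBaseT",
           "DS": "dual-port SFP+", "QS": "quad-port SFP+"}

def parse_dmi(dmi: str) -> dict:
    prefix, body, hba, generation = dmi.split("-")   # e.g. ["NX", "TDT", "4NL3", "G6"]
    return {
        "motherboard": MOTHERBOARD_IDS[body[0]],     # first character is the motherboard ID
        "nic": NIC_IDS[body[1:]],                    # remaining characters are the NIC ID
        "hba": hba,                                  # e.g. 4NL3 = 4 nodes with LSI3008 HBAs
        "generation": generation,
    }

print(parse_dmi("NX-TDT-4NL3-G6"))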



Block Connection in a Customer Environment
After physically installing the Nutanix block in the datacenter, you can connect the network ports to the
customer's network.

• A switch that can auto-negotiate to 1Gbps is required for the IPMI ports on all G6 platforms.
• A 10 GbE switch that accepts SFP+ copper cables is required for most blocks.
• Nutanix recommends 10GbE connectivity for all nodes.
• The 10GbE NIC ports used on Nutanix nodes are passive. The maximum supported Twinax cable length is 5 meters, per SFP+ specifications. For longer runs, fiber cabling is required.
• Nutanix offers an SFP-10G-SR adapter to convert from SFP+ Twinax to optical. This allows a switch
with a 10GbE optical port to maintain the 10GbE link speed on an optical cable.
• Nutanix does not recommend the use of Fabric Extenders (FEX) or similar technologies for production
use cases. While initial, low-load implementations might run smoothly with such technologies, poor
performance, VM lockups, and other issues might occur as implementations scale upward (see
Knowledge Base article KB1612). Nutanix recommends the use of 10Gbps, line-rate, non-blocking
switches with larger buffers for production workloads.

Tip: If you are configuring a cluster with multiple blocks, perform the following procedure on all blocks before
moving on to cluster configuration.

Connecting the Nutanix Block

Before you begin


This procedure requires the following components:

• One Nutanix block (installed in a rack but not yet connected to a power source)
• Customer networking hardware, including 10 GbE ports (SFP+ copper) and 1 GbE ports
• One 10 GbE SFP+ cable for each node (provided).
• One to three RJ45 cables for each node (customer provided)
• Two power cables (provided)

Caution: Note the orientation of the ports on the Nutanix block when you are cabling the ports.

Procedure

1. Connect the 10/100 or 10/100/1000 IPMI port of each node to the customer switch with RJ45 cables.
The switch that the IPMI ports connect to must be capable of auto-negotiation to 1Gbps.

2. Connect one or more 10 GbE ports of each node to the customer switch with the SFP+ cables. If you
are using 10GBase-T, optimal resiliency and performance require CAT 6 cables.

3. (Optional) Connect one or more 1 GbE or 10 GBaseT ports of each node to the customer switch with
RJ45 cables. (For optimal 10GBaseT resiliency and performance use Cat 6 cables).

4. Connect both power supplies to a grounded, 208V power source (208V to 240V).

Tip: If you are configuring the block in a temporary location before installing it on a rack, the input can
be 120V. After moving the block into the datacenter, make sure that the block is connected to a 208V to
240V power source, which is required for power supply redundancy.



5. Confirm that the link indicator light next to each IPMI port is illuminated.

6. Turn on all nodes by pressing the power button on the front of each node (the top button).
The top power LED illuminates and the fans are noticeably louder for approximately 2 minutes.

Figure 18: Control panel with power button



4. MEMORY CONFIGURATIONS
Supported Memory Configurations
DIMM installation order for all Nutanix G6 platforms. When removing, replacing, or adding memory, use the
rules and guidelines in this topic.

DIMM Restrictions
DIMM type
Each G6 node must contain only DIMMs of the same type. For example, you cannot mix RDIMM
and LRDIMM in the same node.
DIMM capacity
Each G6 node must contain only DIMMs of the same capacity. For example, you cannot mix 32 GB
DIMMs and 64 GB DIMMs in the same node.
DIMM manufacturer
You can mix DIMMs from different manufacturers in the same node, but not in the same channel:

• DIMM slots are arranged on the motherboard in groups called channels. On G6 platforms, all
channels contain two DIMM slots (one blue and one black). Within a channel, all DIMMs must be
from the same manufacturer.
• When replacing a failed DIMM, ensure that you are replacing the old DIMM like-for-like.
• When adding new DIMMs to a node, if the new DIMMs and the original DIMMs are from different
manufacturers, you must arrange the DIMMs so that the original DIMMs and the new DIMMs are
not mixed in the same channel.

• EXAMPLE: You have an NX-3060-G6 node that has twelve 32GB DIMMs for a total of
384GB. You decide to upgrade to twenty-four 32 GB DIMMs for a total of 768 GB. When you
remove the node from the chassis and look at the motherboard, you see that each CPU has
six DIMMs. The DIMMs fill all blue DIMM slots, with all black DIMM slots empty. Remove
all DIMMs from one CPU and place them in the empty DIMM slots for the other CPU. Then
place all the new DIMMs in the DIMM slots for the first CPU, filling all slots. This way you can
ensure that the original DIMMs and the new DIMMs do not share channels.

Note: You do not need to balance numbers of DIMMs from different manufacturers within a node, so
long as you never mix them in the same channel.

DIMM speed
For G6 platforms, Nutanix supports higher-speed replacement DIMMs. You can mix DIMMs that
use different speeds in the same node, and in the same memory channel, under the following
conditions.



• You can only use higher-speed replacement DIMMs from one NX generation later than your
platform. For G6 platforms, Nutanix supports the following mixes:

• Samsung 2666 MHz B-die and Samsung 2666 MHz C-die DIMMs
• Samsung 2666 MHz C-die and Samsung 2933 MHz C-die DIMMs
• Samsung 2666 MHz B-die and Samsung 2933 MHz C-die DIMMs

Note: When you mix DIMMs of different speeds in the same node, your system operates at the
lowest common DIMM speed or CPU supported frequency.
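
A minimal Python sketch of the DIMM manufacturer rule above follows (illustrative only; the slot labels, slot-to-channel mapping, and function name are assumptions, not part of the Nutanix documentation). It checks that no channel, meaning no blue/black slot pair, contains DIMMs from more than one manufacturer:

def channels_ok(dimms: dict) -> bool:
    """dimms maps a slot label such as 'CPU1-A1' to a manufacturer name, or None if the slot is empty."""
    channels = {}
    for slot, manufacturer in dimms.items():
        if manufacturer is None:
            continue
        channel = slot[:-1]          # 'CPU1-A1' and 'CPU1-A2' share channel 'CPU1-A'
        channels.setdefault(channel, set()).add(manufacturer)
    return all(len(makers) == 1 for makers in channels.values())

# Example based on the upgrade scenario above: original DIMMs consolidated on CPU2,
# new DIMMs on CPU1, so no channel mixes manufacturers.
layout = {"CPU1-A1": "new", "CPU1-A2": "new", "CPU2-A1": "original", "CPU2-A2": "original"}
print(channels_ok(layout))   # True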

Balanced and Unbalanced Configurations


Memory performance is most efficient with a balanced configuration, where every memory channel
contains the same number of DIMMs. Nutanix supports unbalanced configurations, but be aware that these
configurations result in lower performance.

Memory Installation Order for G6 Platforms


A memory channel is a group of DIMM slots.
For multi-node and single-node G6 platforms, each CPU is associated with six memory channels. Each
memory channel contains two DIMM slots. Memory channels have one blue slot and one black slot each.
For the single-socket NX-1175S-G6, the CPU is associated with six memory channels. Each memory
channel contains one DIMM slot.

Figure 19: DIMM slots for a G6 multi-node motherboard



Figure 20: DIMM slots for a G6 single-node motherboard



Figure 21: DIMM slots for the NX-1175S-G6 single-socket motherboard

Note: DIMM slots on the motherboard are most commonly labeled as A1, A2, and so on. However, some
software tools report DIMM slot labels in a different format, such as 1A, 2A, or CPU1, CPU2, or DIMM1,
DIMM2.

Table 27: DIMM Installation Order for G6 Platforms

Number of Number of DIMMs Slots to use


CPUs

1 4 A1, B1, D1, E1

1 6 Fill all slots.

2 6 CPU1: A1, B1, C1 (blue slots)


CPU2: A1, B1, C1 (blue slots)



Number of Number of DIMMs Slots to use
CPUs
2 8 CPU1: A1, B1, D1, E1 (blue slots)
CPU2: A1, B1, D1, E1 (blue slots)

2 12 CPU1: A1, B1, C1, D1, E1, F1 (blue slots)


CPU2: A1, B1, C1, D1, E1, F1 (blue slots)

2 16 CPU1: A1, B1, D1, E1 (blue slots)


CPU1: A2, B2, D2, E2 (black slots)
CPU2: A1, B1, D1, E1 (blue slots)
CPU2: A2, B2, D2, E2 (black slots)

2 24 Fill all slots.

Note: The 24-DIMM configuration is not supported on the NX-1065-G6. The maximum supported configuration for the NX-1065-G6 is 16 DIMMs.
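
For quick validation, Table 27 can be encoded as a small lookup (illustrative sketch only; the dictionary and function names are assumptions, not part of the Nutanix documentation):

# Slots to populate for each supported (CPU count, DIMM count) combination, per Table 27.
SLOT_ORDER = {
    (1, 4):  ["A1", "B1", "D1", "E1"],
    (1, 6):  "fill all slots",
    (2, 6):  ["CPU1: A1, B1, C1", "CPU2: A1, B1, C1"],
    (2, 8):  ["CPU1: A1, B1, D1, E1", "CPU2: A1, B1, D1, E1"],
    (2, 12): ["CPU1: A1, B1, C1, D1, E1, F1", "CPU2: A1, B1, C1, D1, E1, F1"],
    (2, 16): ["CPU1: A1, B1, D1, E1, A2, B2, D2, E2", "CPU2: A1, B1, D1, E1, A2, B2, D2, E2"],
    (2, 24): "fill all slots",
}

def slots_for(cpus: int, dimms: int):
    # Raise a clear error for combinations that Table 27 does not list.
    if (cpus, dimms) not in SLOT_ORDER:
        raise ValueError(f"{dimms} DIMMs with {cpus} CPU(s) is not a listed G6 configuration")
    return SLOT_ORDER[(cpus, dimms)]

print(slots_for(2, 12))
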
COPYRIGHT
Copyright 2024 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or
other jurisdictions. All other brand and product names mentioned herein are for identification purposes only
and may be trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.

Conventions
Convention Description

variable_value The action depends on a value that is unique to your environment.

ncli> command The commands are executed in the Nutanix nCLI.

user@host$ command The commands are executed as a non-privileged user (such as nutanix)
in the system shell.

root@host# command The commands are executed as the root user in the vSphere or Acropolis
host shell.

> command The commands are executed in the Hyper-V host shell.

output The information is displayed as output from a command or in a log file.

Default Cluster Credentials


Interface Target Username Password

Nutanix web console Nutanix Controller VM admin Nutanix/4u

vSphere Web Client ESXi host root nutanix/4u

vSphere client ESXi host root nutanix/4u

SSH client or console ESXi host root nutanix/4u

SSH client or console AHV host root nutanix/4u

SSH client or console Hyper-V host Administrator nutanix/4u

SSH client Nutanix Controller VM nutanix nutanix/4u

SSH client Nutanix Controller VM admin Nutanix/4u


SSH client or console Acropolis OpenStack Services VM (Nutanix OVM) root admin

Version
Last modified: August 13, 2024 (2024-08-13T11:40:43-04:00)
