System Specs G6 Multinode
Multinode G6 Platforms
Platform ANY
NX-1065-G6/NX-3060-G6/NX-8035-G6
August 13, 2024
Contents
2. System Specifications ................................................. 8
   Node Naming (NX-1065-G6) .............................................. 8
   NX-1065-G6 System Specifications ..................................... 10
   Node Naming (NX-3060-G6) ............................................. 13
   NX-3060-G6 System Specifications ..................................... 16
   Node Naming (NX-8035-G6) ............................................. 19
   NX-8035-G6 System Specifications ..................................... 23
3. Component Specifications ............................................. 28
   Controls and LEDs for Multinode Platforms ............................ 28
   LED Meanings for Network Cards ....................................... 30
   Power Supply Unit (PSU) Redundancy and Node Configuration ............ 32
   Nutanix DMI Information .............................................. 32
   Block Connection in a Customer Environment ........................... 34
      Connecting the Nutanix Block ...................................... 34
4. Memory Configurations ................................................ 36
   Supported Memory Configurations ...................................... 36
Copyright ............................................................... 41
   License .............................................................. 41
   Conventions .......................................................... 41
   Default Cluster Credentials .......................................... 41
   Version .............................................................. 42
VISIO STENCILS
Visio stencils for Nutanix products are available on VisioCafe.
Where:
• G|C|S indicates that the platform uses GPUs, is an AHV-only node, or supports only one CPU, respectively
• G(4|5|6|7) is the CPU generation
Example: NX-3460-G7
The following tables provide a detailed description of the Nutanix hardware naming convention.
Table 1: Prefix
Prefix Description
Table 2: Body
Body Description
W Indicates the product series and has one of the following values:
Table 3: Suffix
Suffix Description
G7 Indicates that the platform uses the Intel Cascade Lake CPU
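As an illustration only, here is a minimal Python sketch that decodes the suffix pieces described above (the optional G|C|S feature letter and the CPU generation). The regular expression, field names, and output format are assumptions made for this example, not part of any Nutanix tool.

    import re

    # Illustrative sketch: parse the suffix pieces of a Nutanix model name,
    # for example "NX-3460-G7". Only the parts described above are decoded:
    # the optional G|C|S feature letter and the G(4|5|6|7) CPU generation.
    MODEL_RE = re.compile(r"^NX-(?P<body>\d+)(?P<feature>[GCS])?-G(?P<gen>[4-7])$")

    FEATURE_MEANINGS = {
        "G": "uses GPUs",
        "C": "AHV-only node",
        "S": "supports only one CPU",
    }

    def decode_model(model: str) -> dict:
        m = MODEL_RE.match(model)
        if m is None:
            raise ValueError(f"unrecognized model string: {model}")
        return {
            "body": m.group("body"),
            "feature": FEATURE_MEANINGS.get(m.group("feature")),
            "cpu_generation": "G" + m.group("gen"),
        }

    print(decode_model("NX-3460-G7"))
    # {'body': '3460', 'feature': None, 'cpu_generation': 'G7'}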
• Node A
• Node B
• Node C
• Node D
Physical drives are arranged in the chassis according to the node order shown below. The first drive in each
node contains the Controller VM and metadata.
Hybrid SSD + HDD: One SSD and two HDDs per node
One NIC is supported. The supported NIC options are shown below.
• Quad-port 10 GbE
• Dual-port 10 GbE
• Dual-port 10GBase-T
• Dual-port 25 GbE
CPU
• 2 x Intel Xeon Silver_4114, 10-core Skylake @ 2.2 GHz (20 cores per node)
• 2 x Intel Xeon Silver_4108, 8-core Skylake @ 1.8 GHz (16 cores per node)
Memory
• DDR4-2666, 1.2V, 32 GB, RDIMM
4 x 32 GB = 128 GB
6 x 32 GB = 192 GB
8 x 32 GB = 256 GB
12 x 32 GB = 384 GB
16 x 32 GB = 512 GB
• 960 GB
• 1.92 TB
• 3.84 TB
2 x HDD
• 2 TB
• 4 TB
• 6 TB
• 8 TB
• 12 TB
• 960 GB
• 1.92 TB
2 x HDD
• 2 TB
• 4 TB
• 6 TB
• 8 TB
• 12 TB
Network
Serverboard
Expansion slot: 2 x (x8) PCIe 3.0 (low-profile) per node (only the slot on the right is active)
Note: For dimensions greater than 2 mm in thickness, the tolerance is +/- 1.5 mm.
Certifications
• Energy Star
• CSAus
• FCC
• CSA
• ICES
• CE
• KCC
• RCM
• VCCI-A
• BSMI
• EAC
• SABS
• INMETRO
• S-MARK
• UKRSEPRO
• BIS
• Node A
• Node B
• Node C
• Node D
Physical drives are arranged in the chassis according to the node order shown below. The first drive in each
node contains the Controller VM and metadata.
For all versions of AOS, the NX-3060-G6 supports hard disk drives (HDD) and solid-state drives (SSD).
For versions of AOS 5.6 and later, the NX-3060-G6 also supports non-volatile memory express drives
(NVMe).
Hybrid SSD + HDD: Two SSDs and four HDDs per node
All-SSD: Four SSDs and two empty slots per node, or six SSDs per node
Mixed SSD + NVMe: Two NVMe drives and four SSDs per node
Note: When partially populating a node with SSDs, you must load the drive slots in order from left to right.
One or two identical NICs are supported. The supported NIC options are shown below.
• Quad-port 10 GbE
• Dual-port 10 GbE
• Dual-port 10GBase-T
• Dual-port 25 GbE
CPU
• 2 x Intel Xeon Gold_6138, 20-core Skylake @ 2.0 GHz (40 cores per node)
• 2 x Intel Xeon Gold_6130, 16-core Skylake @ 2.1 GHz (32 cores per node)
• 2 x Intel Xeon Gold_5120, 14-core Skylake @ 2.2 GHz (28 cores per node)
• 2 x Intel Xeon Gold_6126, 12-core Skylake @ 2.6 GHz (24 cores per node)
• 2 x Intel Xeon Silver_4116, 12-core Skylake @ 2.1 GHz (24 cores per node)
• 2 x Intel Xeon Silver_4114, 10-core Skylake @ 2.2 GHz (20 cores per node)
• 2 x Intel Xeon Silver_4108, 8-core Skylake @ 1.8 GHz (16 cores per node)
Memory
• DDR4-2666, 1.2V, 32 GB, RDIMM
6 x 32 GB = 192 GB
8 x 32 GB = 256 GB
12 x 32 GB = 384 GB
16 x 32 GB = 512 GB
24 x 32 GB = 768 GB
• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB
4 x HDD
• 2 TB
• 960 GB
• 1.92 TB
4 x HDD
• 2 TB
2, 4, or 6 x SSD
• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB
• 960 GB
• 1.92 TB
• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB
2 x NVMe
• 1.6 TB
Network
Serverboard
Expansion slot: 2 x (x8) PCIe 3.0 (low-profile) per node (both slots filled with NICs)
Note: For dimensions greater than 2 mm in thickness, the tolerance is +/- 1.5 mm.
Certifications
• Energy Star
• CSAus
• FCC
• CSA
• ICES
• CE
• KCC
• RCM
• VCCI-A
• BSMI
• EAC
• SABS
• INMETRO
• S-MARK
• UKRSEPRO
• BIS
Figure 10: Front panel of NX-8035-G6 block: Hybrid and all-SSD configurations
Hybrid SSD + HDD: Two SSDs and four HDDs per node
All-SSD: Four SSDs and two empty slots per node, or six SSDs per node
Mixed SSD + NVMe: Two NVMe drives and four SSDs per node
Note: When partially populating a node with SSDs, you must load the drive slots in order from left to right.
You can install one or two NICs. All installed NICs must be identical. Always populate the NIC slots in
order: NIC1, NIC2. Supported NICs include:
CPU
• 2 x Intel Xeon Gold_6152, 22-core Skylake @ 2.1 GHz (44 cores per node)
• 2 x Intel Xeon Gold_6148, 20-core Skylake @ 2.4 GHz (40 cores per node)
• 2 x Intel Xeon Gold_6140, 18-core Skylake @ 2.3 GHz (36 cores per node)
• 2 x Intel Xeon Gold_5120, 14-core Skylake @ 2.2 GHz (28 cores per node)
• 2 x Intel Xeon Silver_4116, 12-core Skylake @ 2.1 GHz (24 cores per node)
• 2 x Intel Xeon Silver_4114, 10-core Skylake @ 2.2 GHz (20 cores per node)
• 2 x Intel Xeon Gold_6134, 8-core Skylake @ 3.2 GHz (16 cores per node)
• 2 x Intel Xeon Silver_4108, 8-core Skylake @ 1.8 GHz (16 cores per node)
• 2 x Intel Xeon Bronze_3106, 8-core Skylake @ 1.7 GHz (16 cores per node)
• 2 x Intel Xeon Gold_6128, 6-core Skylake @ 3.4 GHz (12 cores per node)
2 x SSD
• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB
4 x HDD
• 6 TB
• 8 TB
• 12 TB
• 960 GB
• 1.92 TB
4 x HDD
• 6 TB
• 8 TB
• 12 TB
2, 4, or 6 x SSD
• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB
• 960 GB
• 1.92 TB
• 960 GB
• 1.92 TB
• 3.84 TB
• 7.68 TB
2 x NVMe
• 1.6 TB
Network
Serverboard
Expansion slot: 2 x (x8) PCIe 3.0 (low-profile) per node (both slots filled with NICs)
Note: For dimensions greater than 2 mm in thickness, the tolerance is +/- 1.5 mm.
Certifications
• Energy Star
• CSAus
• FCC
• CSA
• ICES
• CE
• KCC
Top LED (Activity): Blue or green; blinking = I/O activity, off = idle
Bottom LED (Status): Solid red = failed drive; on for five seconds after boot = power on
Amber: 1 Gbps
Power supply warning events, where the power supply continues to operate (high temperature, over voltage, under voltage, and other conditions): Off / 1 Hz blink amber
For LED states for add-on NICs, see LED Meanings for Network Cards on page 30.
Yellow: 1 Gbps
Yellow: 1 Gbps
Off: 10 Mbps or no connection
Yellow: 1 Gbps
Green: 10 Gbps
Dual-port 10G SFP+ ConnectX-3 Pro: Green = 10Gb speed with no traffic; blinking yellow = 10Gb speed with traffic; blinking yellow and green = activity; not illuminated = no connection
Dual-port 40G SFP+ ConnectX-3 Pro: Solid green = good link; blinking yellow = activity; not illuminated = no activity
Dual-port 10G SFP28 ConnectX-4 Lx: Solid yellow = good link; blinking yellow = physical problem with link; solid green = valid link with no traffic; blinking green = valid link with active traffic
Dual-port 25G SFP28 ConnectX-4 Lx: Solid yellow = good link; blinking yellow = physical problem with link; solid green = valid link with no traffic; blinking green = valid link with active traffic
3 to 4 NO YES
NX-3170-G6 1 NO YES
2 NO YES
Argument Option
D1: dual-port 1G NIC
Q1: quad-port 1G NIC
DT: dual-port 10GBase-T NIC
HBA_id specifies the number of nodes and the type of HBA controller. For example:
Argument Option
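As an illustration only, the following minimal Python sketch maps the NIC argument codes from the table above to their meanings. The dash-separated string format and the example value are hypothetical and do not represent an actual Nutanix DMI layout.

    # Illustrative sketch: decode NIC argument codes from a part-number-like
    # string. The codes come from the table above; the dash-separated layout
    # and the example value below are hypothetical, not an actual DMI string.
    NIC_ARGUMENTS = {
        "D1": "dual-port 1G NIC",
        "Q1": "quad-port 1G NIC",
        "DT": "dual-port 10GBase-T NIC",
    }

    def decode_nic_arguments(part_number: str) -> list[str]:
        tokens = part_number.split("-")
        return [f"{t}: {NIC_ARGUMENTS[t]}" for t in tokens if t in NIC_ARGUMENTS]

    print(decode_nic_arguments("NX-3060-G6-DT"))  # hypothetical string
    # ['DT: dual-port 10GBase-T NIC']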
• A switch that can auto-negotiate to 1Gbps is required for the IPMI ports on all G6 platforms.
• A 10 GbE switch that accepts SFP+ copper cables is required for most blocks.
• Nutanix recommends 10GbE connectivity for all nodes.
• The 10GbE NIC ports used on Nutanix nodes are passive. The maximum supported Twinax cable length is 5 meters, per SFP+ specifications. For longer runs, fiber cabling is required.
• Nutanix offers an SFP-10G-SR adapter to convert from SFP+ Twinax to optical. This allows a switch
with a 10GbE optical port to maintain the 10GbE link speed on an optical cable.
• Nutanix does not recommend the use of Fabric Extenders (FEX) or similar technologies for production
use cases. While initial, low-load implementations might run smoothly with such technologies, poor
performance, VM lockups, and other issues might occur as implementations scale upward (see
Knowledge Base article KB1612). Nutanix recommends the use of 10Gbps, line-rate, non-blocking
switches with larger buffers for production workloads.
Tip: If you are configuring a cluster with multiple blocks, perform the following procedure on all blocks before
moving on to cluster configuration.
• One Nutanix block (installed in a rack but not yet connected to a power source)
• Customer networking hardware, including 10 GbE ports (SFP+ copper) and 1 GbE ports
• One 10 GbE SFP+ cable for each node (provided).
• One to three RJ45 cables for each node (customer provided)
• Two power cables (provided)
Caution: Note the orientation of the ports on the Nutanix block when you are cabling the ports.
Procedure
1. Connect the 10/100 or 10/100/1000 IPMI port of each node to the customer switch with RJ45 cables.
The switch that the IPMI ports connect to must be capable of auto-negotiation to 1 Gbps.
2. Connect one or more 10 GbE ports of each node to the customer switch with the SFP+ cables. If you
are using 10GBase-T, optimal resiliency and performance require CAT 6 cables.
3. (Optional) Connect one or more 1 GbE or 10GBase-T ports of each node to the customer switch with
RJ45 cables. (For optimal 10GBase-T resiliency and performance, use CAT 6 cables.)
4. Connect both power supplies to a grounded, 208V power source (208V to 240V).
Tip: If you are configuring the block in a temporary location before installing it on a rack, the input can
be 120V. After moving the block into the datacenter, make sure that the block is connected to a 208V to
240V power source, which is required for power supply redundancy.
6. Turn on all nodes by pressing the power button on the front of each node (the top button).
The top power LED illuminates and the fans are noticeably louder for approximately 2 minutes.
DIMM Restrictions
DIMM type
Each G6 node must contain only DIMMs of the same type. For example, you cannot mix RDIMM
and LRDIMM in the same node.
DIMM capacity
Each G6 node must contain only DIMMs of the same capacity. For example, you cannot mix 32 GB
DIMMs and 64 GB DIMMs in the same node.
DIMM manufacturer
You can mix DIMMs from different manufacturers in the same node, but not in the same channel:
• DIMM slots are arranged on the motherboard in groups called channels. On G6 platforms, all
channels contain two DIMM slots (one blue and one black). Within a channel, all DIMMs must be
from the same manufacturer.
• When replacing a failed DIMM, ensure that you are replacing the old DIMM like-for-like.
• When adding new DIMMs to a node, if the new DIMMs and the original DIMMs are from different
manufacturers, you must arrange the DIMMs so that the original DIMMs and the new DIMMs are
not mixed in the same channel.
• EXAMPLE: You have an NX-3060-G6 node that has twelve 32 GB DIMMs for a total of
384 GB. You decide to upgrade to twenty-four 32 GB DIMMs for a total of 768 GB. When you
remove the node from the chassis and look at the motherboard, you see that each CPU has
six DIMMs. The DIMMs fill all blue DIMM slots, with all black DIMM slots empty. Remove
all DIMMs from one CPU and place them in the empty DIMM slots for the other CPU. Then
place all the new DIMMs in the DIMM slots for the first CPU, filling all slots. This way you can
ensure that the original DIMMs and the new DIMMs do not share channels.
Note: You do not need to balance numbers of DIMMs from different manufacturers within a node, so
long as you never mix them in the same channel.
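To summarize the restrictions above in a checkable form, here is a minimal Python sketch that flags mixed DIMM types or capacities within a node and mixed manufacturers within a channel. The data model is hypothetical, not a Nutanix tool, and slot labels are assumed to follow the A1/A2 convention described below, where the letter identifies the channel.

    from collections import namedtuple

    # Hypothetical data model for illustration. Slot labels are assumed to
    # follow the A1/A2 convention, where the letter is the memory channel.
    Dimm = namedtuple("Dimm", "slot type capacity_gb manufacturer")

    def check_node_dimms(dimms):
        """Return a list of rule violations for one G6 node."""
        problems = []
        if len({d.type for d in dimms}) > 1:
            problems.append("mixed DIMM types in the node (e.g. RDIMM and LRDIMM)")
        if len({d.capacity_gb for d in dimms}) > 1:
            problems.append("mixed DIMM capacities in the node")
        channels = {}
        for d in dimms:
            channels.setdefault(d.slot[:-1], []).append(d)  # "A1" -> channel "A"
        for channel, members in sorted(channels.items()):
            if len({d.manufacturer for d in members}) > 1:
                problems.append(f"mixed manufacturers in channel {channel}")
        return problems

    layout = [
        Dimm("A1", "RDIMM", 32, "Samsung"),
        Dimm("A2", "RDIMM", 32, "Micron"),   # same channel, different manufacturer
        Dimm("B1", "RDIMM", 32, "Samsung"),
    ]
    print(check_node_dimms(layout))
    # ['mixed manufacturers in channel A']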
DIMM speed
For G6 platforms, Nutanix supports higher-speed replacement DIMMs. You can mix DIMMs that
use different speeds in the same node, and in the same memory channel, under the following
conditions.
• Samsung 2666 MHz B-die and Samsung 2666 MHz C-die DIMMs
• Samsung 2666 MHz C-die and Samsung 2933 MHz C-die DIMMs
• Samsung 2666 MHz B-die and Samsung 2933 MHz C-die DIMMs
Note: When you mix DIMMs of different speeds in the same node, your system operates at the
lowest common DIMM speed or CPU supported frequency.
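In other words, the effective memory speed when speeds are mixed is the slowest installed DIMM speed, further capped by the frequency the CPU supports. A minimal sketch with example numbers only:

    def effective_memory_speed(dimm_speeds_mhz, cpu_max_speed_mhz):
        # The node runs at the lowest common denominator: the slowest installed
        # DIMM, further capped by the speed the CPU supports.
        return min(min(dimm_speeds_mhz), cpu_max_speed_mhz)

    # Example numbers only: mixing 2666 MHz and 2933 MHz DIMMs on a CPU that
    # supports up to 2666 MHz yields an effective speed of 2666 MHz.
    print(effective_memory_speed([2666, 2933], 2666))  # 2666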
Note: DIMM slots on the motherboard are most commonly labeled as A1, A2, and so on. However, some
software tools report DIMM slot labels in a different format, such as 1A, 2A, or CPU1, CPU2, or DIMM1,
DIMM2.
Note: A 24-DIMM configuration is not supported on the NX-1065-G6. The maximum supported configuration for
the NX-1065-G6 is 16 DIMMs.
COPYRIGHT
Copyright 2024 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or
other jurisdictions. All other brand and product names mentioned herein are for identification purposes only
and may be trademarks of their respective holders.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions
Convention Description
user@host$ command The commands are executed as a non-privileged user (such as nutanix)
in the system shell.
root@host# command The commands are executed as the root user in the vSphere or Acropolis
host shell.
> command The commands are executed in the Hyper-V host shell.
Default Cluster Credentials
Interface Target Username Password
Version
Last modified: August 13, 2024 (2024-08-13T11:40:43-04:00)