
NX-1065-G9 System Specifications

December 4, 2024
Contents

1. System Specifications
   Node Naming (NX-1065-G9)
   NX-1065-G9 System Specifications

2. Component Specifications
   Controls and LEDs for Multinode Platforms
   Network Card LED Description
   Power Supply Unit Redundancy and Node Configuration
   Nutanix DMI Information

3. Memory Configurations
   Supported Memory Configurations

4. Nutanix Hardware Naming Convention

Copyright
   License
   Conventions
   Default Cluster Credentials
   Version
1. SYSTEM SPECIFICATIONS
Node Naming (NX-1065-G9)
Nutanix assigns a name to each node in a block; the naming varies by product type.
The NX-1065-G9 supports up to three nodes (versus four in prior generations) due to thermal and power limitations:

• Node A
• Node B
• Node C
An empty node tray is present in the Node D slot and must remain installed during operation.
The following figure shows the arrangement of drives in the chassis. The first drive in each node contains the Controller VM and metadata. The three drive slots corresponding to Node D are populated with blank drive carriers. Always operate with the three blank drive carriers installed; they maintain the airflow essential for thermal control of the NX-1065-G9 chassis.

Figure 1: NX-1065-G9 Front Panel

Table 1: Supported Drive Configurations

Hybrid (SSD + HDD) | 1 x SSD and 2 x HDD per node



Figure 2: NX-1065-G9 Back Panel

Figure 3: NX-1065-G9 Exploded View

NX-1065-G9 System Specifications



Table 2: System Characteristics

Boot Device
• 2 x 512 GB M.2 boot device

Chassis
• 2U3N LFF chassis

CPU
Dual Intel Xeon® 5th Gen (Emerald Rapids):
• 2 x Intel Xeon® Silver 4509Y [8 cores / 2.60 GHz / 125 W], excluded from ENERGY STAR certification
• 2 x Intel Xeon® Silver 4510 [12 cores / 2.40 GHz / 150 W], excluded from ENERGY STAR certification
• 2 x Intel Xeon® Gold 5515+ [8 cores / 3.20 GHz / 165 W], excluded from ENERGY STAR certification
Dual Intel Xeon® 4th Gen (Sapphire Rapids):
• 2 x Intel Xeon® Silver 4410T [10 cores / 2.70 GHz / 150 W], excluded from ENERGY STAR certification
• 2 x Intel Xeon® Silver 4410Y [12 cores / 2.00 GHz / 150 W], excluded from ENERGY STAR certification
• 2 x Intel Xeon® Silver 4416+ [20 cores / 2.00 GHz / 165 W], excluded from ENERGY STAR certification
• 2 x Intel Xeon® Gold 5415+ [8 cores / 2.90 GHz / 150 W], excluded from ENERGY STAR certification
• 2 x Intel Xeon® Gold 5416S [16 cores / 2.00 GHz / 150 W], excluded from ENERGY STAR certification
• 2 x Intel Xeon® Gold 5418Y [24 cores / 2.00 GHz / 185 W]
• 2 x Intel Xeon® Gold 6416H [18 cores / 2.20 GHz / 165 W], excluded from ENERGY STAR certification
• 2 x Intel Xeon® Gold 6426Y [16 cores / 2.50 GHz / 185 W], excluded from ENERGY STAR certification


Memory
Note:
• The minimum memory configuration required for ENERGY STAR 4.0 certification is 512 GB. With less than 512 GB, the platform is not ENERGY STAR certified even if the CPU is certified.
• Each node must contain only DIMMs of the same type, speed, and capacity.

32 GB DIMMs:
• 4 x 32 GB = 128 GB
• 8 x 32 GB = 256 GB
• 12 x 32 GB = 384 GB
• 16 x 32 GB = 512 GB
64 GB DIMMs:
• 4 x 64 GB = 256 GB
• 8 x 64 GB = 512 GB
• 12 x 64 GB = 768 GB
• 16 x 64 GB = 1024 GB

Network
Ports on serverboard:
• 1 x 1GbE dedicated IPMI
AIOM:
• 1 x 2P 10GBase-T (Port 1 is shared IPMI)
• 1 x 2P 10GBase-T + 2P SFP+ (Port 1 is shared IPMI)
Add-on NICs in PCIe slots:
• 0, 1, or 2 x 10GbE 4P NIC
• 0, 1, or 2 x 10GBase-T 2P NIC
• 0, 1, or 2 x 10GBase-T 4P NIC
• 0, 1, or 2 x 25GbE 2P NIC
• 0, 1, or 2 x 25GbE 4P NIC

Network Cables
• 1M-SFP28
• 3M-SFP28
• 3M-SFP+
• 5M-SFP28
• 5M-SFP+



Power Cable
• 2 x C13/C14 4 ft power cable

Power Supply
• 2 x 2200 W Titanium power supply

Server
• 1, 2, or 3 x NX-1065-G9 server

Storage
SSD + HDD: 1 x SSD
• 1.92 TB
• 3.84 TB
• 7.68 TB
2 x HDD
• 8 TB
• 12 TB
• 18 TB
SSD + HDD (SED): 1 x SSD
• 3.84 TB
• 7.68 TB
2 x HDD
• 8 TB
• 12 TB

TPM
• 0 or 1 x unprovisioned Trusted Platform Module

Transceiver
• SR SFP+ transceiver

VGA
• 1 x VGA connector per node (15-pin female)

Chassis Fans
• 4 x 80 mm heavy-duty fans with PWM fan speed control



Table 3: Block, Power, and Electrical

Block
Note: Maximum measurements are shown. Width is measured ear to ear; depth is measured from the ears to the pull rings. The tolerance for dimensions more than 2 mm thick is +/- 1.5 mm.
• Width: 449 mm
• Depth: 774 mm
• Height: 88 mm
• Weight: 39.4 kg
• Rack units: 2U

Package (with rack rails and accessories)
• Weight: 51.4 kg

Shock (one shock on each side)
• Non-operating: 20 G, square wave, 10 ms
• Operating: 5 G, half-sine, 10 ms



Thermal Dissipation (calculated)
Block with 2 x 16/12/10-core 150 W CPUs, 16 x 64 GB, 3 nodes:
• Typical: 4938 BTU/hr
• Maximum: 6585 BTU/hr
Block with 2 x 16/12/10-core 150 W CPUs, 8 x 64 GB, 3 nodes:
• Typical: 4478 BTU/hr
• Maximum: 5971 BTU/hr
Block with 2 x 20/18-core 165 W CPUs, 16 x 64 GB, 3 nodes:
• Typical: 5322 BTU/hr
• Maximum: 7096 BTU/hr
Block with 2 x 20/18-core 165 W CPUs, 8 x 64 GB, 3 nodes:
• Typical: 4862 BTU/hr
• Maximum: 6483 BTU/hr
Block with 2 x 24/16-core 185 W CPUs, 16 x 64 GB, 3 nodes:
• Typical: 5629 BTU/hr
• Maximum: 7506 BTU/hr
Block with 2 x 8/12-core 150 W CPUs, 16 x 32 GB, 3 nodes:
• Typical: 4734 BTU/hr
• Maximum: 6312 BTU/hr
Block with 2 x 8/12-core 150 W CPUs, 8 x 32 GB, 3 nodes:
• Typical: 4401 BTU/hr
• Maximum: 5868 BTU/hr

Vibration (Random)
• Non-operating: 0.98 Grms, 5 to 200 Hz, approx. 30 min/axis
• Operating: 0.21 Grms, 5 to 500 Hz, approx. 15 min/axis

Vibration (Sinusoidal)
• Non-operating: 0.5 G, 5 to 200 Hz, approx. 15 min/axis
• Operating: 0.25 G, 5 to 200 Hz, approx. 15 min/axis



Power Consumption (calculated)
Note: The power consumption calculations assume the maximum NIC and disk configurations.
Block with 2 x 24/16-core 185 W CPUs, 16 x 64 GB, 3 nodes:
• Maximum: 2200 VA
• Typical: 1650 VA
Block with 2 x 20/18-core 165 W CPUs, 16 x 64 GB, 3 nodes:
• Maximum: 2080 VA
• Typical: 1560 VA
Block with 2 x 16/12/10-core 150 W CPUs, 16 x 64 GB, 3 nodes:
• Maximum: 1930 VA
• Typical: 1447 VA
Block with 2 x 20/18-core 165 W CPUs, 8 x 64 GB, 3 nodes:
• Maximum: 1900 VA
• Typical: 1425 VA
Block with 2 x 8/12-core 150 W CPUs, 16 x 32 GB, 3 nodes:
• Maximum: 1850 VA
• Typical: 1388 VA
Block with 2 x 16/12/10-core 150 W CPUs, 8 x 64 GB, 3 nodes:
• Maximum: 1750 VA
• Typical: 1312 VA
Block with 2 x 8/12-core 150 W CPUs, 8 x 32 GB, 3 nodes:
• Maximum: 1720 VA
• Typical: 1290 VA
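
The thermal table above and this power table are two views of the same load: heat output in BTU/hr is power in watts multiplied by 3.412, and the typical VA figures here line up with the typical BTU/hr figures when read as watts (a power factor near 1.0). A minimal cross-check sketch, assuming that reading; the function name is illustrative:

# Sketch: relate the power table (VA) to the thermal table (BTU/hr).
# Assumption: the "typical" VA values approximate real power in watts
# (power factor ~1.0), which is consistent with the published numbers.

BTU_PER_HR_PER_WATT = 3.412

def watts_to_btu_per_hr(watts: float) -> float:
    """Convert electrical power in watts to heat load in BTU/hr."""
    return watts * BTU_PER_HR_PER_WATT

# 150 W CPU / 16 x 64 GB / 3-node block: 1447 VA typical.
print(round(watts_to_btu_per_hr(1447)))  # 4937, matching the ~4938 BTU/hr listed above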

Operating Environment
• Operating temperature: 10°C to 30°C
• Non-operating temperature: -40°C to 70°C
• Operating relative humidity: 8% to 90%
• Non-operating relative humidity: 5% to 95%

Certifications

• BIS
• BSMI
• CE



• CSA
• CSAus
• EAC
• FCC
• ICES
• KCC
• RCM
• Reach
• RoHS
• S-MARK
• SABS
• SII
• UKCA
• UL
• VCCI-A
• WEEE
• cUL



2. COMPONENT SPECIFICATIONS
Controls and LEDs for Multinode Platforms

Figure 4: Front of Chassis LEDs

Note: The network activity LED indicator is only applicable to the AIOM (onboard NIC). It blinks only when
one of the AIOM ports is connected to the network.

Table 4: LEDs on the Front of the Chassis

Name | Color | Function
Power button | Green | Power on/off
Network activity | Flashing green | Network activity
Alert indicator | Solid red | Overheating condition
Alert indicator | Blinking red (1 Hz) | Fan failure
Alert indicator | Blinking red (0.25 to 0.5 Hz) | Power failure
Alert indicator | Solid blue | UID activated locally
Alert indicator | Blinking blue | UID activated using IPMI
UID button | Blue | Turns the unit identification (UID) function on and off: the blue state of the information LED and a blue LED on the rear of the chassis, used to locate the node in large racks

Table 5: Drive LEDs

Activity | Blue or green | Blinking = I/O activity; off = idle
Status | Red | Solid red = failed drive; on for 5 seconds after boot = power on

Back Panel LEDs

The back panel LED locations and behavior are identical for all three multinode platforms (NX-1065-G9, NX-3035-G9, and NX-3060-G9). The following image shows the LED locations for the NX-3060-G9 platform, but the same layout applies to the other multinode platforms.

Figure 5: Back Panel LEDs (NX-3060-G9 Shown)

Name | Color | Function
IPMI Link LED (left) | Solid green | 100 Mbps
IPMI Link LED (left) | Solid amber | 1 Gbps
IPMI Activity LED (right) | Blinking amber | Active
AIOM Link LED (left) | Off | No link
AIOM Link LED (left) | Green | Linked at 10 Gb/s
AIOM Link LED (left) | Amber | Linked at 1 Gb/s
AIOM Activity LED (right) | Off | No activity
AIOM Activity LED (right) | Blinking green | Link up (traffic flowing)
Locator LED (UID) | Blinking blue | Node identified

Table 6: Power Supply LED Indicators

Power supply condition | LED status
No AC power to any power supply | Off
Critical event causing a shutdown: failure, over-current protection, over-voltage protection, fan failure, over-temperature protection, or under-voltage protection | Steady amber
Warning event (power supply continues to operate): high temperature, over-voltage, under-voltage, and other conditions | Blinking amber (1 Hz)
AC present with only 12VSB on (PS off), or PS in sleep state | Blinking green (1 Hz)
Output on and OK | Solid green
AC cord unplugged | Solid amber
Power supply firmware updating mode | Blinking green (2 Hz)

For LED states for add-on NICs, see Network Card LED Description.

Network Card LED Description

Different NIC manufacturers use different LED colors and blink states. Not all NICs are supported on every Nutanix platform. See the system specifications for your platform to verify which NICs are supported.

Table 7: SuperMicro NICs

NIC | Link (LNK) LED | Activity (ACT) LED
Dual/Quad Port 25GbE | Green: 25 GbE; Amber: less than 25 GbE; Off: no link | Blinking: activity; Off: no activity
Dual Port 100GbE | Green: 100 GbE; Amber: less than 100 GbE | Blinking: activity; Off: no activity



Table 8: Intel NICs

NIC | Link (LNK) LED | Activity (ACT) LED
Quad Port 10GBase-T | Green: 10 Gbps; Yellow: 5/2.5/1 Gbps; Off: 100 Mbps | Blinking green: transmitting or receiving data; Off: no link
Quad Port 10G SFP+ | Green: 10 Gbps; Yellow: 1 Gbps | Blinking: activity; Off: no activity
Dual Port 25G | Green: 25 Gbps; Amber: 10 Gbps | Green: SFP LAN port active

Table 9: Broadcom NICs

NIC | Link (LNK) LED | Activity (ACT) LED
Dual Port 25G | Green: linked at 25 Gb/s; Yellow: linked at 10 Gb/s or 1 Gb/s; Off: no link | Blinking green: traffic flowing; Off: no activity
Dual Port 10G | Green: linked at 10 Gb/s; Amber: linked at 1 Gb/s; Off: no link | Blinking green: link up (traffic flowing); Off: no activity

Table 10: Mellanox NICs (Dual Port CX-6 25G and Dual Port CX-6 100G)

Bi-color LED (yellow/green) | Single-color LED (green) | Indicates
1 Hz blinking yellow | Off | Beacon command for locating the adapter card
4 Hz blinking yellow | On | Error with the link
Blinking green | Blinking | Physical activity
Solid green | On | Link up



Power Supply Unit Redundancy and Node Configuration
Note: Carefully plan your AC power source needs, especially when the cluster consists of mixed models. Nutanix recommends a 208-240 V AC power source to ensure power supply unit (PSU) redundancy.

Table 11: PSU Redundancy and Node Configuration

Nutanix model | Number of nodes | Redundancy at 110 V | Redundancy at 208-240 V
NX-1065-G9 | 1, 2, or 3 | No | Yes
NX-3035-G9 | 1 or 2 | No | Yes
NX-3060-G9 | 1, 2, 3, or 4 | No | Yes
NX-1150S-G9 | 1 | Yes | Yes
NX-1175S-G9 | 1 | Yes | Yes
NX-3155-G9 | 1 | No | Yes
NX-8150-G9 | 1 | No | Yes
NX-8155-G9 | 1 | No | Yes
NX-8155A-G9 | 1 | No | Yes
NX-8170-G9 | 1 | No | Yes
NX-9151-G9 | 1 | Not supported | Yes (2+1 PSU redundancy; two PSUs must remain functional at all times)

Caution:
For all G9 platforms except NX-1175S-G9 and NX-1150S-G9: when the input power source is 110 V, a single PSU failure causes all nodes in the block to power off.
For NX-1175S-G9 and NX-1150S-G9: the block can tolerate a single PSU failure when connected to a 110 V input power source.
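
For power-planning scripts, Table 11 reduces to a simple lookup. A minimal sketch transcribing the table above; the dictionary layout and function name are illustrative, not a Nutanix API:

# Lookup of Table 11: can the block tolerate a single PSU failure?
PSU_REDUNDANCY = {
    # model: (redundant at 110 V, redundant at 208-240 V)
    "NX-1065-G9":  (False, True),
    "NX-3035-G9":  (False, True),
    "NX-3060-G9":  (False, True),
    "NX-1150S-G9": (True, True),
    "NX-1175S-G9": (True, True),
    "NX-3155-G9":  (False, True),
    "NX-8150-G9":  (False, True),
    "NX-8155-G9":  (False, True),
    "NX-8155A-G9": (False, True),
    "NX-8170-G9":  (False, True),
    "NX-9151-G9":  (None, True),  # 110 V not supported; 2+1 PSU redundancy at 208-240 V
}

def psu_redundant(model: str, high_line: bool) -> bool:
    """Return True if a single PSU failure is tolerated at the given input voltage."""
    at_110, at_208 = PSU_REDUNDANCY[model]
    if not high_line and at_110 is None:
        raise ValueError(f"{model} does not support 110 V input")
    return at_208 if high_line else at_110

print(psu_redundant("NX-1065-G9", high_line=False))  # False: the block powers off on PSU failure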

Nutanix DMI Information

vSphere reads model information from the Desktop Management Interface (DMI) table.
For NX-G9 Series platforms, Nutanix provides model information to the DMI table in the following format:
NX-motherboard_idNIC_id-HBA_id-G9
motherboard_id has the following options:

Argument | Option
T | X13 multi-node motherboard
U | X13 single-node motherboard
W | X13 single-socket single-node motherboard

NIC_id has the following options:

Argument | Option
D1 | dual-port 1G NIC
Q1 | quad-port 1G NIC
DT | dual-port 10GBase-T NIC
QT | quad-port 10GBase-T NIC
DS | dual-port SFP+ NIC
QS | quad-port SFP+ NIC

HBA_id specifies the number of nodes and the type of HBA controller. For example:

Argument | Option
1NL3 | single-node LSI3808
2NL3 | 2-node LSI3808
4NL3 | 4-node LSI3808

Table 12: Examples

DMI string | Explanation | Nutanix model
NX-TDT-4NL3-G9 | X13 motherboard with dual-port 10GBase-T NIC, 4 nodes with LSI3808 HBA controllers | NX-3060-G9
NX-TDT-2NL3-G9 | X13 motherboard with dual-port 10GBase-T NIC, 2 nodes with LSI3808 HBA controllers | NX-3035-G9
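
Because the DMI string is positional, it can be decoded mechanically. A minimal parsing sketch, assuming the identifier tables above are exhaustive and that HBA_id always takes the <nodes>NL3 form shown in the examples; the regex and helper are illustrative, not a Nutanix API:

import re

MOTHERBOARDS = {"T": "X13 multi-node", "U": "X13 single-node",
                "W": "X13 single-socket single-node"}
NICS = {"D1": "dual-port 1G", "Q1": "quad-port 1G",
        "DT": "dual-port 10GBase-T", "QT": "quad-port 10GBase-T",
        "DS": "dual-port SFP+", "QS": "quad-port SFP+"}

# NX-<motherboard_id><NIC_id>-<nodes>N<HBA>-G9, e.g. "NX-TDT-4NL3-G9"
DMI_RE = re.compile(r"^NX-([TUW])(D1|Q1|DT|QT|DS|QS)-(\d)N(L3)-G9$")

def parse_dmi(dmi: str) -> dict:
    m = DMI_RE.match(dmi)
    if not m:
        raise ValueError(f"unrecognized DMI string: {dmi}")
    board, nic, nodes, hba = m.groups()
    return {"motherboard": MOTHERBOARDS[board], "nic": NICS[nic],
            "nodes": int(nodes), "hba": "LSI3808" if hba == "L3" else hba}

print(parse_dmi("NX-TDT-4NL3-G9"))
# {'motherboard': 'X13 multi-node', 'nic': 'dual-port 10GBase-T', 'nodes': 4, 'hba': 'LSI3808'}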



3. MEMORY CONFIGURATIONS
Supported Memory Configurations
DIMM installation information for all Nutanix G9 platforms.

DIMM Restrictions
DIMM capacity
Each G9 node must contain only DIMMs of the same capacity. For example, you cannot mix 32 GB DIMMs and 64 GB DIMMs in the same node.
DIMM speed
G9 platforms that use Intel Sapphire Rapids processors support 4800 MT/s DIMMs. The speed of the CPU/DIMM interface is 4000 MT/s or 4800 MT/s, depending on the CPU SKU.
G9 platforms that use Intel Emerald Rapids processors ship with 5600 MT/s DIMMs. The speed of the CPU/DIMM interface depends on the CPU class and on whether one DIMM per memory channel (1DPC) or two DIMMs per memory channel (2DPC) are installed (see the sketch after these restrictions):

• Platinum-8xxx: max 5600 MT/s at 1DPC; max 4400 MT/s at 2DPC.
• Gold-6xxx: max 5200 MT/s at 1DPC; max 4400 MT/s at 2DPC.
• Gold-5xxx: max 4800 MT/s at 1DPC; max 4400 MT/s at 2DPC.
• Silver-4xxx: max 4400 MT/s at both 1DPC and 2DPC.
• Bronze-3xxx (single-socket boards only): max 4400 MT/s at both 1DPC and 2DPC.
If you install a 5600 MT/s DIMM in a G9 platform that uses Intel Sapphire Rapids processors, it runs at a maximum of 4800 MT/s.
DIMM manufacturer
Each G9 node must contain only DIMMs from the same manufacturer.
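
The Emerald Rapids speed rules above amount to taking the lesser of the DIMM rating and a per-class cap. A minimal sketch of that lookup; the table is transcribed from this section, and the function name is illustrative:

# Max CPU/DIMM interface speed (MT/s) by Emerald Rapids CPU class.
MAX_MTS = {
    # cpu class: (max at 1DPC, max at 2DPC)
    "Platinum-8xxx": (5600, 4400),
    "Gold-6xxx":     (5200, 4400),
    "Gold-5xxx":     (4800, 4400),
    "Silver-4xxx":   (4400, 4400),
    "Bronze-3xxx":   (4400, 4400),  # single-socket boards only
}

def dimm_speed(cpu_class: str, dimms_per_channel: int, dimm_mts: int = 5600) -> int:
    """Effective memory speed: the lesser of the DIMM rating and the CPU limit."""
    one_dpc, two_dpc = MAX_MTS[cpu_class]
    limit = one_dpc if dimms_per_channel == 1 else two_dpc
    return min(dimm_mts, limit)

print(dimm_speed("Gold-5xxx", 1))  # 4800: a 5600 MT/s DIMM clocks down at 1DPC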

Memory Installation Order for G9 Multi-Node Platforms


A memory channel is a group of DIMM slots.
For G9 multi-node platforms, each CPU is associated with eight memory channels that contain one blue
slot each.



Figure 6: DIMM Slots for a G9 Multi-node Server Board

Table 13: DIMM Installation Order for G9 Multi-Node Platforms

Supported capacities are listed per platform as NX-1065-G9 / NX-3035-G9 / NX-3060-G9.

Number of DIMMs | Slots to use | Supported capacities
4 | CPU1: A1, G1; CPU2: A1, G1 | 32, 64 GB / 32, 64, 128 GB / 64, 128 GB
8 | CPU1: A1, C1, E1, G1; CPU2: A1, C1, E1, G1 | 32, 64 GB / 32, 64, 128 GB / 64, 128 GB
12 | CPU1: A1, C1, D1, E1, F1, G1; CPU2: A1, C1, D1, E1, F1, G1 | 32, 64 GB / 32, 64, 128 GB / 64, 128 GB
16 | Fill all slots | 32, 64 GB / 32, 64, 128 GB / 64 GB
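
Table 13's population order can also be captured as a lookup from total DIMM count to the blue slots filled on each CPU. A minimal sketch, transcribed from the table; the helper name is illustrative, and the 16-DIMM entry spells out "fill all slots" using the eight-channel layout described above:

# Blue-slot population order per CPU for G9 multi-node platforms (Table 13).
# Both CPUs are always populated identically.
SLOTS_PER_CPU = {
    4:  ["A1", "G1"],
    8:  ["A1", "C1", "E1", "G1"],
    12: ["A1", "C1", "D1", "E1", "F1", "G1"],
    # "Fill all slots": all eight channels, A1 through H1 (inferred from the
    # eight-channel, one-blue-slot-per-channel layout described above).
    16: ["A1", "B1", "C1", "D1", "E1", "F1", "G1", "H1"],
}

def slots_to_populate(total_dimms: int) -> dict:
    """Map each CPU to the blue slots to fill for a given total DIMM count."""
    per_cpu = SLOTS_PER_CPU[total_dimms]
    return {"CPU1": per_cpu, "CPU2": per_cpu}

print(slots_to_populate(8))
# {'CPU1': ['A1', 'C1', 'E1', 'G1'], 'CPU2': ['A1', 'C1', 'E1', 'G1']}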

Memory Installation Order for G9 Single-Node Platforms


A memory channel is a group of DIMM slots.
For G9 single-node platforms, each CPU is associated with eight memory channels. Each memory channel
contains two DIMM slots, one blue slot and one black slot, for a total of 32 DIMM slots.



Figure 7: DIMM Slots for a G9 Single-Node Server Board

Table 14: DIMM Installation Order for Single-Node Hyper G9 Platforms

Supported capacities are the same for NX-8155-G9 and NX-8170-G9: 32 GB, 64 GB, or 128 GB for every configuration.

Number of DIMMs | Slots to use
4 | CPU1: A1, G1 (blue slots); CPU2: A1, G1 (blue slots)
8 | CPU1: A1, C1, E1, G1 (blue slots); CPU2: A1, C1, E1, G1 (blue slots)
12 | CPU1: A1, C1, D1, E1, F1, G1 (blue slots); CPU2: A1, C1, D1, E1, F1, G1 (blue slots)
16 | CPU1: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots); CPU2: A1, B1, C1, D1, E1, F1, G1, H1 (blue slots)
24 | CPU1: all blue slots plus A2, C2, E2, G2 (black slots); CPU2: all blue slots plus A2, C2, E2, G2 (black slots)
32 | Fill all slots

Memory Installation Order for NX-8155A-G9 Platform

A memory channel is a group of DIMM slots.
For the G9 single-node AMD hyper platform, each CPU is associated with twelve memory channels. Each memory channel contains one DIMM slot, for a total of 24 DIMM slots.



Figure 8: DIMM Slots for NX-8155A-G9 Platform

Table 15: DIMM Installation Order for NX-8155A-G9 Platform

All configurations support 32 GB, 64 GB, or 128 GB DIMMs.

Number of DIMMs | Slots to use
4 | CPU1: A1, G1; CPU2: A1, G1
8 | CPU1: A1, C1, G1, I1; CPU2: A1, C1, G1, I1
12 | CPU1: A1, B1, C1, G1, H1, I1; CPU2: A1, B1, C1, G1, H1, I1
16 | CPU1: A1, B1, C1, E1, G1, H1, I1, K1; CPU2: A1, B1, C1, E1, G1, H1, I1, K1
20 | CPU1: A1, B1, C1, D1, E1, G1, H1, I1, J1, K1; CPU2: A1, B1, C1, D1, E1, G1, H1, I1, J1, K1
24 | Fill all slots



4. NUTANIX HARDWARE NAMING CONVENTION
Every Nutanix block has a unique name based on the standard Nutanix naming convention.
The Nutanix hardware model name uses the format prefix-body-suffix.
For all Nutanix platforms, the prefix is NX, indicating that the platform is sold directly by Nutanix and support calls are handled by Nutanix.
The body uses the format ABCD, optionally followed by Y. The following table describes the body values.

Figure 9: Nutanix Hardware Naming Convention

Table 16: Body

Body | Description

A | Indicates the product series and is one of the following values:
• 1 – Entry-level and ROBO
• 3 – Balanced compute and storage
• 8 – High performance
• 9 – Accelerated system

B | Indicates the number of nodes:
• For single-node platforms, B is always 1.
• For multi-node platforms, B can be 1, 2, 3, or 4.
Note: For multi-node platforms, the documentation always uses a generic zero for B.

C | Indicates the chassis form factor and is one of the following values:
• 1 – 1U1N (one rack unit high with one node)
• 3 – 2U2N (two rack units high with two nodes)
• 5 – 2U1N (two rack units high with one node)
• 6 – 2U4N (two rack units high with four nodes)
• 7 – 1U1N (one rack unit high with one node)

D | Indicates the drive form factor and is one of the following values:
• 0 – 2.5 inch drives
• 1 – E1.S drives
• 3 – E3.S drives
• 5 – 3.5 inch drives

Y | Indicates the platform type and takes one of the following values:
• S – Single socket
• G – GPU (not used in G9, since GPU is available on multiple models)

Table 17: Suffix

Suffix | Description
G5 | The platform uses the Intel Broadwell CPU.
G6 | The platform uses the Intel Skylake CPU.
G7 | The platform uses the Intel Cascade Lake CPU.
G8 or N-G8 | The platform uses the Intel Ice Lake CPU.
G9 | The platform uses the Intel Sapphire Rapids or Emerald Rapids CPU.
A-G9 | The platform uses the AMD Genoa CPU.
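
Putting the prefix, body, and suffix rules together, a model name can be decoded with a short regular expression. A hedged sketch; the regex, decode tables, and function are illustrative transcriptions of Tables 16 and 17, and a body digit of 0 for B is the generic multi-node placeholder noted above:

import re

SERIES = {"1": "Entry-level and ROBO", "3": "Balanced compute and storage",
          "8": "High performance", "9": "Accelerated system"}
CHASSIS = {"1": "1U1N", "3": "2U2N", "5": "2U1N", "6": "2U4N", "7": "1U1N"}
DRIVES = {"0": "2.5 inch", "1": "E1.S", "3": "E3.S", "5": "3.5 inch"}

# prefix-body-suffix: NX-ABCD[Y]-suffix, e.g. "NX-1065-G9" or "NX-1175S-G9"
MODEL_RE = re.compile(r"^NX-([1389])(\d)([13567])([0135])(S?)-((?:N-|A-)?G[5-9])$")

def parse_model(name: str) -> dict:
    m = MODEL_RE.match(name)
    if not m:
        raise ValueError(f"unrecognized model name: {name}")
    a, b, c, d, y, suffix = m.groups()
    return {"series": SERIES[a], "nodes": int(b), "chassis": CHASSIS[c],
            "drive_form_factor": DRIVES[d],
            "single_socket": y == "S", "generation": suffix}

print(parse_model("NX-1065-G9"))
# {'series': 'Entry-level and ROBO', 'nodes': 0, 'chassis': '2U4N',
#  'drive_form_factor': '3.5 inch', 'single_socket': False, 'generation': 'G9'}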



COPYRIGHT
Copyright 2024 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or
other jurisdictions. All other brand and product names mentioned herein are for identification purposes only
and may be trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.

Conventions

Convention | Description
variable_value | The action depends on a value that is unique to your environment.
ncli> command | The commands are executed in the Nutanix nCLI.
user@host$ command | The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command | The commands are executed as the root user in the vSphere or Acropolis host shell.
> command | The commands are executed in the Hyper-V host shell.
output | The information is displayed as output from a command or in a log file.

Default Cluster Credentials

Interface | Target | Username | Password
Nutanix web console | Nutanix Controller VM | admin | Nutanix/4u
vSphere Web Client | ESXi host | root | nutanix/4u
vSphere client | ESXi host | root | nutanix/4u
SSH client or console | ESXi host | root | nutanix/4u
SSH client or console | AHV host | root | nutanix/4u
SSH client | Nutanix Controller VM | nutanix | nutanix/4u
SSH client | Nutanix Controller VM | admin | Nutanix/4u
SSH client or console | Acropolis OpenStack Services VM (Nutanix OVM) | root | admin

Version
Last modified: December 4, 2024 (2024-12-04T22:22:50+05:30)
