PRODUCT BRIEF
ConnectX®-6 EN Card
200GbE Ethernet Adapter Card
Compatibility

PCI Express Interface
–– PCIe Gen 4.0, 3.0, 2.0, 1.1 compatible
–– 2.5, 5.0, 8, 16 GT/s link rate
–– 32 lanes as 2x 16-lanes of PCIe
–– Support for PCIe x1, x2, x4, x8, and x16 configurations
–– PCIe Atomic
–– TLP (Transaction Layer Packet) Processing Hints (TPH)
–– PCIe switch Downstream Port Containment (DPC) enablement for PCIe hot-plug
–– Advanced Error Reporting (AER)
–– Access Control Service (ACS) for peer-to-peer secure communication
–– Process Address Space ID (PASID) Address Translation Services (ATS)
–– IBM CAPIv2 (Coherent Accelerator Processor Interface)
–– Support for MSI/MSI-X mechanisms

Operating Systems/Distributions*
–– RHEL, SLES, Ubuntu and other major Linux distributions
–– Windows
–– FreeBSD
–– VMware
–– OpenFabrics Enterprise Distribution (OFED)
–– OpenFabrics Windows Distribution (WinOF-2)

Connectivity
–– Up to two network ports
–– Interoperability with Ethernet switches (up to 200GbE, as 4 lanes of 50GbE data rate)
–– Passive copper cable with ESD protection
–– Powered connectors for optical and active cable support
Features*

Ethernet
–– 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
–– IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
–– IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
–– IEEE 802.3ba 40 Gigabit Ethernet
–– IEEE 802.3ae 10 Gigabit Ethernet
–– IEEE 802.3az Energy Efficient Ethernet
–– IEEE 802.3ap based auto-negotiation and KR startup
–– IEEE 802.3ad, 802.1AX Link Aggregation
–– IEEE 802.1Q, 802.1P VLAN tags and priority
–– IEEE 802.1Qau (QCN) – Congestion Notification
–– IEEE 802.1Qaz (ETS)
–– IEEE 802.1Qbb (PFC)
–– IEEE 802.1Qbg
–– IEEE 1588v2
–– Jumbo frame support (9.6KB)

Enhanced Features
–– Hardware-based reliable transport
–– Collective operations offloads
–– Vector collective operations offloads
–– Mellanox PeerDirect® RDMA (aka GPUDirect®) communication acceleration (see the RDMA sketch after this list)
–– 64/66 encoding
–– Enhanced Atomic operations
–– Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
–– Extended Reliable Connected transport (XRC)
–– Dynamically Connected transport (DCT)
–– On demand paging (ODP)
–– MPI Tag Matching
–– Rendezvous protocol offload
–– Out-of-order RDMA supporting Adaptive Routing
–– Burst buffer offload
–– In-Network Memory registration-free RDMA memory access
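The RDMA features above (PeerDirect®, XRC, DCT, out-of-order RDMA) are reached from user space through the standard libibverbs API. The following is a minimal illustrative sketch, not Mellanox-specific code: it lists the RDMA devices a host exposes and queries basic attributes. Device names and counts depend entirely on the installed hardware and driver stack.

```c
/* Sketch: enumerate RDMA-capable devices (e.g., a ConnectX port running
 * RoCE) via the standard libibverbs API. Build: gcc rdma_list.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;
        struct ibv_device_attr attr;
        if (!ibv_query_device(ctx, &attr))   /* returns 0 on success */
            printf("%s: %d port(s), max_qp=%d\n",
                   ibv_get_device_name(list[i]),
                   attr.phys_port_cnt, attr.max_qp);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```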
CPU Offloads
–– RDMA over Converged Ethernet (RoCE)
–– TCP/UDP/IP stateless offload
–– LSO, LRO, checksum offload
–– RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, Receive flow steering
–– Data Plane Development Kit (DPDK) for kernel-bypass applications (see the sketch after this list)
–– Open vSwitch (OVS) offload using ASAP²
–– Flexible match-action flow tables
–– Tunneling encapsulation/decapsulation
–– Intelligent interrupt coalescence
–– Header rewrite supporting hardware offload of NAT router

Storage Offloads
–– Block-level encryption: XTS-AES 256/512-bit key
–– NVMe over Fabrics offloads for target machine
–– T10 DIF signature handover operation at wire speed, for ingress and egress traffic
–– Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF
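The DPDK item above refers to user-space poll-mode drivers that bypass the kernel network stack. Below is a hedged sketch of the core startup sequence using only standard EAL/ethdev calls; it assumes DPDK 18.05 or later is installed and that a NIC has already been bound to a DPDK-compatible driver.

```c
/* Sketch: DPDK kernel-bypass startup -- initialize the EAL and count the
 * usable Ethernet ports. Build with pkg-config:
 *   gcc dpdk_ports.c $(pkg-config --cflags --libs libdpdk) */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() consumes EAL arguments (cores, hugepages, PCI devices) */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }
    /* Ports probed by the EAL and available for kernel-bypass I/O */
    printf("%u DPDK port(s) available\n", rte_eth_dev_count_avail());
    rte_eal_cleanup();
    return 0;
}
```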
Overlay Networks
–– RoCE over overlay networks
–– Stateless offloads for overlay network tunneling protocols
–– Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

Hardware-Based I/O Virtualization – Mellanox ASAP²
–– Single Root IOV (SR-IOV) (see the sketch after this list)
  • SR-IOV: Up to 1K Virtual Functions
  • SR-IOV: Up to 8 Physical Functions per host
–– Address translation and protection
–– VMware NetQueue support
–– Virtualization hierarchies (e.g., NPAR)
–– Virtualizing Physical Functions on a physical port
–– SR-IOV on every Physical Function
–– Configurable and user-programmable QoS
–– Guaranteed QoS for VMs
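On Linux, SR-IOV Virtual Functions are typically enabled through the kernel's generic sysfs interface rather than a vendor tool. The sketch below shows that mechanism; the interface name eth0 and the VF count of 4 are placeholders, and the actual upper bound (up to 1K VFs on this adapter) is set by device and firmware configuration.

```c
/* Sketch: enable SR-IOV VFs from user space via the standard sysfs file.
 * Shell equivalent: echo 4 > /sys/class/net/eth0/device/sriov_numvfs */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return 1;
    }
    fprintf(f, "4");       /* request 4 VFs; writing 0 disables SR-IOV */
    if (fclose(f) != 0) {  /* the kernel reports errors on flush/close */
        perror("fclose");
        return 1;
    }
    return 0;
}
```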
Management and Control
–– NC-SI, MCTP over SMBus and MCTP over PCIe – Baseboard Management Controller interface
–– PLDM for Monitor and Control DSP0248
–– PLDM for Firmware Update DSP0267
–– SDN management interface for managing the eSwitch
–– I2C interface for device control and configuration
–– General Purpose I/O pins
–– SPI interface to Flash
–– JTAG IEEE 1149.1 and IEEE 1149.6

Remote Boot
–– Remote boot over Ethernet
–– Remote boot over iSCSI
–– Unified Extensible Firmware Interface (UEFI)
–– Preboot Execution Environment (PXE)

HPC Software Libraries
–– HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS and varied commercial packages (see the MPI tag-matching sketch below)
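The MPI Tag Matching offload listed under Enhanced Features accelerates matching incoming messages to posted receives by their (source, tag) pair. The sketch below is plain, portable MPI, not adapter-specific code: it exercises tag matching by receiving two messages in the reverse of their send order, and runs under any of the MPI libraries listed above.

```c
/* Sketch: MPI point-to-point messages distinguished only by tag.
 * Run: mpicc tags.c -o tags && mpirun -np 2 ./tags */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        int a = 10, b = 20;
        MPI_Request reqs[2];
        /* Two messages to the same peer, told apart only by tag */
        MPI_Isend(&a, 1, MPI_INT, 1, /*tag=*/7, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&b, 1, MPI_INT, 1, /*tag=*/8, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    } else if (rank == 1) {
        /* Receive in the opposite order: matching is by tag, not arrival */
        MPI_Recv(&value, 1, MPI_INT, 0, 8, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("tag 8 -> %d\n", value);
        MPI_Recv(&value, 1, MPI_INT, 0, 7, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("tag 7 -> %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
```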
(*) This section describes hardware features and capabilities. Please refer to the driver and firmware release notes for feature availability.
1. By default, the above products are shipped with a tall bracket mounted; a short bracket is included as an accessory.
2. 100GbE can be supported as either 4x25G NRZ or 2x50G PAM4 when using QSFP56.
3. Contact Mellanox for other supported options.
4. The above OPNs support a single host; contact Mellanox for OCP OPNs with Mellanox Multi-Host support.
5. The above OCP3.0 OPNs come with Internal Lock Brackets; contact Mellanox for additional bracket types, e.g., Pull Tab or Ejector Latch.