
ADAPTER CARD

PRODUCT BRIEF

ConnectX®-6 EN Card

200GbE Ethernet Adapter Card


World’s first 200GbE Ethernet network interface card, enabling industry-leading performance, smart offloads and in-network computing for Cloud, Web 2.0, Big Data, Storage and Machine Learning applications.

HIGHLIGHTS

FEATURES
–– Up to 200GbE connectivity per port
–– Maximum bandwidth of 200Gb/s
–– Up to 215 million messages/sec
–– Sub 0.8usec latency
–– Block-level XTS-AES mode hardware encryption
–– Optional FIPS-compliant adapter card
–– Support for both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports
–– Best-in-class packet pacing with sub-nanosecond accuracy
–– PCIe Gen4/Gen3 with up to x32 lanes
–– RoHS compliant
–– ODCC compatible

BENEFITS
–– Most intelligent, highest performance fabric for compute and storage infrastructures
–– Cutting-edge performance in virtualized HPC networks including Network Function Virtualization (NFV)
–– Advanced storage capabilities including block-level encryption and checksum offloads
–– Host Chaining technology for economical rack design
–– Smart interconnect for x86, Power, Arm, GPU and FPGA-based platforms
–– Flexible programmable pipeline for new network flows
–– Enabler for efficient service chaining
–– Efficient I/O consolidation, lowering data center costs and complexity

ConnectX-6 EN provides up to two ports of 200GbE connectivity, sub-0.8usec latency and 215 million messages per second, enabling the highest performance and most flexible solution for the most demanding data center applications.

ConnectX-6 is a groundbreaking addition to the Mellanox ConnectX series of industry-leading adapter cards. In addition to all the existing innovative features of past versions, ConnectX-6 offers a number of enhancements to further improve performance and scalability, such as support for 200/100/50/40/25/10/1 GbE Ethernet speeds and PCIe Gen 4.0. Moreover, ConnectX-6 Ethernet cards can connect up to 32 lanes of PCIe to achieve 200Gb/s of bandwidth, even on Gen 3.0 PCIe systems.
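To see why 32 Gen 3.0 lanes matter for a 200GbE port, the short Python sketch below gives a back-of-envelope bandwidth estimate. It is not taken from this brief: it uses the standard PCIe link rates and 128b/130b line encoding, and it deliberately ignores TLP and flow-control overhead.

```python
# Rough PCIe bandwidth estimate (illustrative only; protocol overhead is ignored).
ENCODING = 128 / 130  # 128b/130b line coding used by PCIe Gen 3.0 and Gen 4.0

def pcie_bandwidth_gbps(link_rate_gt_s: float, lanes: int) -> float:
    """Approximate usable one-direction bandwidth in Gb/s."""
    return link_rate_gt_s * ENCODING * lanes

for label, rate, lanes in [("Gen 3.0 x16", 8.0, 16),
                           ("Gen 3.0 2x16 (32 lanes)", 8.0, 32),
                           ("Gen 4.0 x16", 16.0, 16)]:
    print(f"{label:24s} ~{pcie_bandwidth_gbps(rate, lanes):6.1f} Gb/s")

# Gen 3.0 x16 tops out near ~126 Gb/s, short of a 200GbE port; either
# 32 Gen 3.0 lanes or 16 Gen 4.0 lanes provide roughly 252 Gb/s of headroom.
```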
Cloud and Web 2.0 Environments

Telco, Cloud and Web 2.0 customers developing their platforms on Software Defined Network (SDN) environments are leveraging the Virtual Switching capabilities of the Operating Systems on their servers to enable maximum flexibility in the management and routing protocols of their networks.

Open vSwitch (OVS) is an example of a virtual switch that allows Virtual Machines to communicate among themselves and with the outside world. Software-based virtual switches, traditionally residing in the hypervisor, are CPU intensive, affecting system performance and preventing full utilization of available CPU for compute functions.

To address this, ConnectX-6 offers ASAP2 - Mellanox Accelerated Switch and Packet Processing® technology to offload the vSwitch/vRouter by handling the data plane in the NIC hardware while maintaining the control plane unmodified. As a result, significantly higher vSwitch/vRouter performance is achieved without the associated CPU load.

The vSwitch/vRouter offload functions supported by ConnectX-5 and ConnectX-6 include encapsulation and de-capsulation of overlay network headers, as well as stateless offloads of inner packets, packet header re-write (enabling NAT functionality), hairpin, and more.

In addition, ConnectX-6 offers intelligent flexible pipeline capabilities, including a programmable flexible parser and flexible match-action tables, which enable hardware offloads for future protocols.
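Because ASAP2 keeps the standard OVS control plane, the offload is exercised through ordinary OVS and kernel knobs on the host. The Python sketch below is an illustration only and is not taken from this brief: the interface name is a made-up placeholder, the service name varies by distribution, and the exact provisioning steps for a given driver stack may differ.

```python
import subprocess

PF = "enp59s0f0"  # hypothetical uplink netdev name; substitute your own interface

def run(cmd):
    """Run a host command and echo it, so the sketch doubles as a checklist."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Enable TC flower hardware offload on the uplink (standard ethtool feature flag).
run(["ethtool", "-K", PF, "hw-tc-offload", "on"])

# Ask Open vSwitch to push datapath flows down to hardware where possible.
run(["ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true"])

# Restart the OVS service so the setting takes effect (service name varies by distro).
run(["systemctl", "restart", "openvswitch"])
```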

Storage Environments

NVMe storage devices are gaining momentum, offering very fast access to storage media. The evolving NVMe over Fabric (NVMe-oF) protocol leverages RDMA connectivity to remotely access NVMe storage devices efficiently, while keeping the end-to-end NVMe model at lowest latency. With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, enhancing CPU utilization and scalability.
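For context, the host-side flow that these offloads accelerate looks roughly like the sketch below. It is an illustration only and not part of this brief: the target address, service ID and subsystem NQN are made-up placeholders, and it assumes the standard nvme-cli tool with the RDMA transport.

```python
import subprocess

# Hypothetical NVMe-oF target details; replace with the values for your fabric.
TARGET_ADDR = "192.168.1.10"
TRSVCID = "4420"  # conventional NVMe-oF service ID
SUBSYS_NQN = "nqn.2020-01.io.example:nvme-subsys-1"

def nvme(*args):
    """Thin wrapper around nvme-cli so each step is visible when run."""
    cmd = ["nvme", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Discover subsystems exported by the target over the RDMA transport, then connect.
# The per-I/O RDMA data movement on this path is what the adapter offloads.
nvme("discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TRSVCID)
nvme("connect", "-t", "rdma", "-n", SUBSYS_NQN, "-a", TARGET_ADDR, "-s", TRSVCID)
```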
Security

ConnectX-6 block-level encryption offers a critical innovation to network security. As data in transit is stored or retrieved, it undergoes encryption and decryption. The ConnectX-6 hardware offloads the IEEE AES-XTS encryption/decryption from the CPU, saving latency and CPU utilization. It also guarantees protection for users sharing the same resources through the use of dedicated encryption keys.

By performing block-storage encryption in the adapter, ConnectX-6 excludes the need for self-encrypted disks. This gives customers the freedom to choose their preferred storage device, including byte-addressable and NVDIMM devices that traditionally do not provide encryption. Moreover, ConnectX-6 can support Federal Information Processing Standards (FIPS) compliance.
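To make the XTS-AES mode concrete, here is a minimal software sketch of per-block encryption with a sector-number tweak, using the widely available Python cryptography package. It illustrates the cipher mode only, not the adapter's offload path; the key handling and 4 KiB sector size are simplifying assumptions.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# XTS keys are double length: 64 bytes here yields two 256-bit AES keys.
key = os.urandom(64)

def xts_encrypt_sector(key: bytes, sector_index: int, plaintext: bytes) -> bytes:
    """Encrypt one logical block; the tweak binds the ciphertext to its sector number."""
    tweak = sector_index.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

sector = b"\x00" * 4096                    # one 4 KiB block of sample data
ciphertext = xts_encrypt_sector(key, 7, sector)
print(len(ciphertext), "bytes encrypted")  # XTS preserves the block length
```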
Mellanox Socket Direct®

Mellanox Socket Direct technology improves the performance of dual-socket servers, such as by enabling each of their CPUs to access the network through a dedicated PCIe interface. As the connection from each CPU to the network bypasses the QPI (UPI) and the second CPU, Socket Direct reduces latency and CPU utilization. Moreover, each CPU handles only its own traffic (and not that of the second CPU), thus optimizing CPU utilization even further.

Mellanox Socket Direct also enables GPUDirect® RDMA for all CPU/GPU pairs by ensuring that GPUs are linked to the CPUs closest to the adapter card. Mellanox Socket Direct enables Intel® DDIO optimization on both sockets by creating a direct connection between the sockets and the adapter card.

Mellanox Socket Direct technology is enabled by a main card that houses the ConnectX-6 adapter card and an auxiliary PCIe card bringing in the remaining PCIe lanes. The ConnectX-6 Socket Direct card is installed into two PCIe x16 slots and connected using a 350mm long harness. The two PCIe x16 slots may also be connected to the same CPU. In this case the main advantage of the technology lies in delivering 200GbE to servers with PCIe Gen3-only support.

Please note that when using Mellanox Socket Direct in virtualization or dual-port use cases, some restrictions may apply. For further details, contact Mellanox Customer Support.
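Because the benefit of Socket Direct comes from NUMA locality, a quick sanity check is to see which NUMA node each network interface hangs off of, so traffic and threads can be pinned to the nearest socket. The sketch below is an illustration, not a Mellanox tool; it relies only on the standard Linux sysfs attribute numa_node, and the interface names are made-up placeholders.

```python
from pathlib import Path

def nic_numa_node(ifname: str) -> int:
    """Return the NUMA node behind a netdev's PCIe device (-1 if unknown)."""
    node_file = Path(f"/sys/class/net/{ifname}/device/numa_node")
    return int(node_file.read_text().strip()) if node_file.exists() else -1

# Hypothetical interface names; list whatever netdevs your ConnectX ports expose.
for ifname in ("enp59s0f0", "enp59s0f1"):
    print(f"{ifname}: NUMA node {nic_numa_node(ifname)}")
```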
Machine Learning and Big Data Environments

Data analytics has become an essential function within many enterprise data centers, clouds and hyperscale platforms. Machine learning relies on especially high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. As the first adapter card to deliver 200GbE throughput, ConnectX-6 is the perfect solution to provide machine learning applications with the levels of performance and scalability that they require. ConnectX-6 utilizes RDMA technology to deliver low latency and high performance, and enhances RDMA network capabilities even further by delivering end-to-end packet-level flow control.

Host Management

Mellanox host management and control capabilities include NC-SI over MCTP over SMBus, and MCTP over PCIe - Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control DSP0248 and PLDM for Firmware Update DSP0267.

Compatibility
PCI Express Interface
–– PCIe Gen 4.0, 3.0, 2.0, 1.1 compatible
–– 2.5, 5.0, 8, 16 GT/s link rate
–– 32 lanes as 2x 16-lanes of PCIe
–– Support for PCIe x1, x2, x4, x8, and x16 configurations
–– PCIe Atomic
–– TLP (Transaction Layer Packet) Processing Hints (TPH)
–– PCIe switch Downstream Port Containment (DPC) enablement for PCIe hot-plug
–– Advanced Error Reporting (AER)
–– Access Control Service (ACS) for peer-to-peer secure communication
–– Process Address Space ID (PASID) Address Translation Services (ATS)
–– IBM CAPIv2 (Coherent Accelerator Processor Interface)
–– Support for MSI/MSI-X mechanisms

Operating Systems/Distributions*
–– RHEL, SLES, Ubuntu and other major Linux distributions
–– Windows
–– FreeBSD
–– VMware
–– OpenFabrics Enterprise Distribution (OFED)
–– OpenFabrics Windows Distribution (WinOF-2)

Connectivity
–– Up to two network ports
–– Interoperability with Ethernet switches (up to 200GbE, as 4 lanes of 50GbE data rate)
–– Passive copper cable with ESD protection
–– Powered connectors for optical and active cable support


Features*
Ethernet
–– 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
–– IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
–– IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
–– IEEE 802.3ba 40 Gigabit Ethernet
–– IEEE 802.3ae 10 Gigabit Ethernet
–– IEEE 802.3az Energy Efficient Ethernet
–– IEEE 802.3ap based auto-negotiation and KR startup
–– IEEE 802.3ad, 802.1AX Link Aggregation
–– IEEE 802.1Q, 802.1P VLAN tags and priority
–– IEEE 802.1Qau (QCN) – Congestion Notification
–– IEEE 802.1Qaz (ETS)
–– IEEE 802.1Qbb (PFC)
–– IEEE 802.1Qbg
–– IEEE 1588v2
–– Jumbo frame support (9.6KB)

Enhanced Features
–– Hardware-based reliable transport
–– Collective operations offloads
–– Vector collective operations offloads
–– Mellanox PeerDirect® RDMA (aka GPUDirect®) communication acceleration
–– 64/66 encoding
–– Enhanced Atomic operations
–– Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
–– Extended Reliable Connected transport (XRC)
–– Dynamically Connected transport (DCT)
–– On demand paging (ODP)
–– MPI Tag Matching
–– Rendezvous protocol offload
–– Out-of-order RDMA supporting Adaptive Routing
–– Burst buffer offload
–– In-Network Memory registration-free RDMA memory access

CPU Offloads
–– RDMA over Converged Ethernet (RoCE)
–– TCP/UDP/IP stateless offload
–– LSO, LRO, checksum offload
–– RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, Receive flow steering
–– Data Plane Development Kit (DPDK) for kernel bypass application
–– Open vSwitch (OVS) offload using ASAP2
–– Flexible match-action flow tables
–– Tunneling encapsulation / decapsulation
–– Intelligent interrupt coalescence
–– Header rewrite supporting hardware offload of NAT router

Storage Offloads
–– Block-level encryption: XTS-AES 256/512 bit key
–– NVMe over Fabric offloads for target machine
–– T10 DIF - signature handover operation at wire speed, for ingress and egress traffic
–– Storage Protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

Overlay Networks
–– RoCE over overlay networks
–– Stateless offloads for overlay network tunneling protocols
–– Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

Hardware-Based I/O Virtualization - Mellanox ASAP2
–– Single Root IOV
  • SR-IOV: Up to 1K Virtual Functions
  • SR-IOV: Up to 8 Physical Functions per host
–– Address translation and protection
–– VMware NetQueue support
–– Virtualization hierarchies (e.g., NPAR)
–– Virtualizing Physical Functions on a physical port
–– SR-IOV on every Physical Function
–– Configurable and user-programmable QoS
–– Guaranteed QoS for VMs

HPC Software Libraries
–– HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS and varied commercial packages

Management and Control
–– NC-SI, MCTP over SMBus and MCTP over PCIe - Baseboard Management Controller interface
–– PLDM for Monitor and Control DSP0248
–– PLDM for Firmware Update DSP0267
–– SDN management interface for managing the eSwitch
–– I2C interface for device control and configuration
–– General Purpose I/O pins
–– SPI interface to Flash
–– JTAG IEEE 1149.1 and IEEE 1149.6

Remote Boot
–– Remote boot over Ethernet
–– Remote boot over iSCSI
–– Unified Extensible Firmware Interface (UEFI)
–– Pre-execution Environment (PXE)

(*) This section describes hardware features and capabilities. Please refer to the driver and firmware release notes for feature availability.


Adapter Card Portfolio & Ordering Information


Table 1 - PCIe HHHL Form Factor (1)

Max. Network Speed | Interface Type | Supported Ethernet Speed [GbE] | Host Interface [PCIe]      | OPN
2x 100 GbE         | QSFP56         | 100(2)/50/40/25/10/1           | Gen 3.0 2x16 Socket Direct | MCX614106A-CCAT
2x 100 GbE         | QSFP56         | 100(2)/50/40/25/10/1           | Gen 4.0 x16                | Contact Mellanox
2x 100 GbE         | SFP-DD         | 100(2)/50/40/25/10/1           | Gen 4.0 x16                | Contact Mellanox
1x 200 GbE         | QSFP56         | 200/100(2)/50/40/25/10/1       | Gen 3.0 2x16 Socket Direct | MCX614105A-VCAT
1x 200 GbE         | QSFP56         | 200/100(2)/50/40/25/10/1       | Gen 4.0 x16                | Contact Mellanox
2x 200 GbE         | QSFP56         | 200/100(2)/50/40/25/10/1       | Gen 3.0 2x16 Socket Direct | MCX614106A-VCAT
2x 200 GbE         | QSFP56         | 200/100(2)/50/40/25/10/1       | Gen 4.0 x16                | MCX613106A-VDAT

1. By default, the above products are shipped with a tall bracket mounted; a short bracket is included as an accessory.
2. 100GbE can be supported as either 4x25G NRZ or 2x50G PAM4 when using QSFP56.
3. Contact Mellanox for other supported options.

Table 2 - OCP 3.0 Small Form Factor (1)

Max. Network Speed | Interface Type | Supported Ethernet Speed [GbE] | Host Interface [PCIe] | OPN
2x 200 GbE         | QSFP56         | 200/100(2)/50/40/25/10/1       | Gen 4.0 x16           | MCX613436A-VDAI
1x 200 GbE         | QSFP56         | 200/100(2)/50/40/25/10/1       | Gen 4.0 x16           | Contact Mellanox

1. Above OPNs support a single host; contact Mellanox for OCP OPNs with Mellanox Multi-Host support.
2. 100GbE can be supported as either 4x25G NRZ or 2x50G PAM4 when using QSFP56.
3. Above OCP 3.0 OPNs come with Internal Lock Brackets; contact Mellanox for additional bracket types, e.g., Pull Tab or Ejector latch.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA 94085


Tel: 408-970-3400 • Fax: 408-970-3403
www.mellanox.com
© Copyright 2020. Mellanox Technologies. All rights reserved.
Mellanox, Mellanox logo, ConnectX, GPUDirect, Mellanox PeerDirect, Mellanox Multi-Host, and ASAP2 - Accelerated Switch and Packet Processing are registered trademarks of Mellanox Technologies, Ltd. 53724PB
All other trademarks are property of their respective owners. Rev 2.1
