PowerEdge Server GPU Matrix
Brand | Model | GPU Memory | ECC | Memory Bandwidth | Max Power Consumption | Graphic Bus / Interconnect | System Interface Bandwidth | Slot Width | GPU Height/Length | Auxiliary Power Cable | Workload¹ | Spec Sheet

NOTE: Copy and paste links into a browser. Clicking directly from the XLS does not work consistently!

AMD | MI300X OAM | 192 GB HBM3 | Y | 5.3 TB/sec | 750W | AMD Infinity Fabric Links | 896 GB/sec | N/A | N/A | N/A | AI / HPC | https://www.amd.com/en/graphics/instinct-server-accelerators
AMD | MI210 | 64 GB HBM2e | Y | 1638 GB/sec | 300W | PCIe Gen4 x16 / Infinity Fabric Link bridge⁸ | 64 GB/sec (PCIe 4.0) | DW | FHFL | CPU 8-pin | HPC/Machine learning training | https://www.amd.com/en/products/server-accelerators/amd-instinct-mi210
Intel | Max 1550 OAM | 128 GB HBM2e | Y | 3276.8 GB/sec | 600W | Intel XeLink | - | N/A | N/A | N/A | AI / HPC | https://www.intel.com/content/www/us/en/products/sku/232873/intel-data-center-gpu-max-1550/support.html
Intel | Max 1100 | 48 GB HBM2e | Y | 1228.8 GB/sec | 300W | PCIe Gen5 x16 / XeLink bridge⁸ | 128 GB/sec⁵ (PCIe 5.0) | DW | FHFL | PCIe 16-pin | AI / HPC | https://www.intel.com/content/www/us/en/products/sku/232876/intel-data-center-gpu-max-1100/specifications.html
Intel | Flex 140 | 12 GB GDDR6 | Y | 336 GB/sec | 75W | PCIe Gen4 x8 | 32 GB/sec (PCIe 4.0) | SW | HHHL/FHHL | N/A | Inferencing/Edge | https://www.intel.com/content/www/us/en/products/sku/230020/intel-data-center-gpu-flex-140/specifications.html
Nvidia | H100 SXM5 (x8) | 80 GB HBM3 | Y | 3 TB/sec | 700W | NVIDIA NVLink | 900 GB/sec | N/A | N/A | N/A | AI / HPC | https://www.nvidia.com/en-us/data-center/h100/
Nvidia | H100 SXM5 (x4) | 80 GB HBM3 | Y | 3 TB/sec | 700W | NVIDIA NVLink | 900 GB/sec | N/A | N/A | N/A | AI / HPC | https://www.nvidia.com/en-us/data-center/h100/
Nvidia | A100 SXM4 (x8) | 80 GB HBM2 | Y | 2039 GB/sec | 500W | NVIDIA NVLink | 600 GB/sec (3rd Gen NVLink) | N/A | N/A | N/A | HPC/AI/Database Analytics | https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/a100-80gb-datasheet-update-nvidia-us-1521051-
Nvidia | A100 SXM4 (x4) | 80 GB HBM2 | Y | 2039 GB/sec | 500W | NVIDIA NVLink | 600 GB/sec (3rd Gen NVLink) | N/A | N/A | N/A | HPC/AI/Database Analytics | https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/a100-80gb-datasheet-update-nvidia-us-1521051-
Nvidia | A100 SXM4 (x4) | 40 GB HBM2 | Y | 1555 GB/sec | 400W | NVIDIA NVLink | 600 GB/sec (3rd Gen NVLink) | N/A | N/A | N/A | HPC/AI/Database Analytics | https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/a100-80gb-datasheet-update-nvidia-us-1521051-
Nvidia | H100 | 80 GB HBM2e | Y | 2000 GB/sec | 300-350W | PCIe Gen5 x16 / NVLink bridge⁸ | 128 GB/sec⁵ (PCIe 5.0) | DW | FHFL | PCIe 16-pin | HPC/AI/Database Analytics | https://www.nvidia.com/en-us/data-center/h100/
Nvidia | A100 | 80 GB HBM2e | Y | 1935 GB/sec | 300W | PCIe Gen4 x16 / NVLink bridge⁸ | 64 GB/sec⁵ (PCIe 4.0) | DW | FHFL | CPU 8-pin | HPC/AI/Database Analytics | https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a100/pdf/nvidia-a100-datasheet-us-nvidia-1758950-r4-
Nvidia | L40S | 48 GB GDDR6 | Y | 864 GB/sec | 350W | PCIe Gen4 x16 | 64 GB/sec⁵ (PCIe 4.0) | DW | FHFL | PCIe 16-pin | AI/Performance graphics/VDI | https://www.nvidia.com/en-us/data-center/l40s/
Nvidia | A30 | 24 GB HBM2 | Y | 933 GB/sec | 165W | PCIe Gen4 x16 / NVLink bridge⁸ | 64 GB/sec⁵ (PCIe 4.0) | DW | FHFL | CPU 8-pin | Mainstream AI | https://www.nvidia.com/content/dam/en-zz/Solutions/data-center/products/a30-gpu/pdf/a30-datasheet.pdf
Nvidia | L40 | 48 GB GDDR6 | Y | 864 GB/sec | 300W | PCIe Gen4 x16 | 64 GB/sec (PCIe 4.0) | DW | FHFL | PCIe 16-pin | Performance graphics/VDI | https://www.nvidia.com/en-us/data-center/l40/
Nvidia | A40 | 48 GB GDDR6 | Y | 696 GB/sec | 300W | PCIe Gen4 x16 / NVLink bridge⁸ | 64 GB/sec⁵ (PCIe 4.0) | DW | FHFL | CPU 8-pin | Performance graphics/VDI | https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a40/proviz-print-nvidia-a40-datasheet-us-nvidia-1469711-r8-web.pdf
Nvidia | A16 | 64 GB GDDR6 | Y | 800 GB/sec | 250W | PCIe Gen4 x16 | 64 GB/sec (PCIe 4.0) | DW | FHFL | CPU 8-pin | VDI | https://www.nvidia.com/en-us/data-center/products/a16-gpu/
Nvidia | L4 | 24 GB GDDR6 | Y | 300 GB/sec | 72W | PCIe Gen4 x16 | 64 GB/sec (PCIe 4.0) | SW | HHHL | N/A | Inferencing/Edge/VDI | https://www.nvidia.com/L4
Nvidia | L4 | 24 GB GDDR6 | Y | 300 GB/sec | 72W | PCIe Gen4 x16 | 64 GB/sec (PCIe 4.0) | SW | FHHL | N/A | Inferencing/Edge/VDI | https://www.nvidia.com/L4
Nvidia | A2 (v2) | 16 GB GDDR6 | Y | 200 GB/sec | 60W | PCIe Gen4 x8 | 32 GB/sec (PCIe 4.0) | SW | HHHL | N/A | Inferencing/Edge/VDI | https://www.nvidia.com/en-us/data-center/products/a2/
Nvidia | A2 (v2) | 16 GB GDDR6 | Y | 200 GB/sec | 60W | PCIe Gen4 x8 | 32 GB/sec (PCIe 4.0) | SW | FHHL | N/A | Inferencing/Edge/VDI | https://www.nvidia.com/en-us/data-center/products/a2/
Nvidia | A10 | 24 GB GDDR6 | Y | 600 GB/sec | 150W | PCIe Gen4 x16 | 64 GB/sec (PCIe 4.0) | SW | FHFL | PCIe 8-pin | Mainstream graphics/VDI | https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/a10/pdf/datasheet-new/nvidia-a10-datasheet.pdf
Nvidia | T4 | 16 GB GDDR6 | Y | 300 GB/sec | 70W | PCIe Gen3 x16 | 32 GB/sec (PCIe 3.0) | SW | HHHL | N/A | Inferencing/Edge/VDI | https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-t4/t4-tensor-core-datasheet-951643.pdf
Nvidia | T4 | 16 GB GDDR6 | Y | 300 GB/sec | 70W | PCIe Gen3 x16 | 32 GB/sec (PCIe 3.0) | SW | FHHL | N/A | Inferencing/Edge/VDI | https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/tesla-t4/t4-tensor-core-datasheet-951643.pdf
¹ Suggested ideal workloads; the GPUs can be used for other workloads.
² Different SKUs are listed because different platforms may support different SKUs. This sheet does not call out platform-SKU associations.
³ Up to 100 GB/sec when the RTX NVLink bridge is used; the RTX NVLink bridge is supported only on the T640.
⁴ Structural Sparsity enabled.
⁵ Up to 600 GB/sec for the A100 and H100, up to 200 GB/sec for the A30, up to 112.5 GB/sec for the A40, and up to 400 GB/sec for the A800 when an NVLink bridge is used.
⁶ Peak performance numbers shared by Nvidia or AMD for the MI100.
⁷ Refer to the "Max # GPUs on supported platforms" tab for detailed support on Rome vs. Milan processors.
⁸ The A100 with NVLink bridge is supported on the R750xa and DSS8440; the A40 with NVLink bridge on the R750xa, DSS8440, and T550; the A30 with NVLink bridge on the R750xa, DSS8440, and T550; the MI210 with Infinity Fabric Link bridge on the R750xa; the H100 and A800 with NVLink bridge will be supported on the R750xa; the Max 1100 with XeLink bridge is supported on the R760xa.
DW - Double Wide, SW - Single Wide, FH - Full Height, FL - Full Length, HH - Half Height, HL - Half Length
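The System Interface Bandwidth column follows directly from the PCIe generation and lane count. A minimal sketch of that arithmetic, assuming the table's convention of bidirectional totals and the usual rounded per-lane rates (roughly 1/2/4 GB/s per lane per direction for Gen3/Gen4/Gen5 after link encoding):

```python
# Approximate usable per-lane, per-direction PCIe throughput in GB/s,
# rounded as the matrix rounds them (not exact spec figures).
PER_LANE_GBPS = {3: 1, 4: 2, 5: 4}

def pcie_bandwidth(gen: int, lanes: int) -> int:
    """Bidirectional interface bandwidth in GB/s, matching the table's values."""
    return PER_LANE_GBPS[gen] * lanes * 2

# Gen4 x16 -> 64 GB/sec (A100 PCIe, L40S), Gen5 x16 -> 128 GB/sec (H100 PCIe),
# Gen4 x8  -> 32 GB/sec (A2, Flex 140),   Gen3 x16 -> 32 GB/sec (T4)
```

These base figures apply only to the PCIe link itself; the bridged peer-to-peer rates in footnote 5 replace them when an NVLink, Infinity Fabric Link, or XeLink bridge is installed.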
PLATFORM | H100 80GB PCIe | A100 80GB PCIe | H100 SXM5 (X8) | A100 SXM4 (X8) | H100 SXM5 (X4) | A100 40GB SXM4 (X4) | A100 80GB SXM4 (X4) | Max 1550 OAM (X4) | L40S | L40 | L4 | A40 | A30 | A16 | A10 | A2 | T4 | MI100 (EOML) | MI210 | MI300X OAM (X8) | Flex 140 | Max 1100
XE8640 Shipping
R760XA Shipping (4³) Shipping (4³) Shipping (4³) Shipping (4³) Shipping (8³) Shipping (4³) Shipping (4³) Shipping (4³) Shipping (12³) Shipping (4³) Shipping (10³)* Shipping (4³)
R760 Shipping (2) Shipping (2) Shipping (2) Shipping (2) Shipping (4) Shipping (2) Shipping (2) Shipping (2) Shipping (6) Shipping (6)* Shipping (2)
R7625 Shipping (2) Shipping (2) Shipping (2) Shipping (2) Shipping (4) Shipping (2) Shipping (2) Shipping (2) Shipping (6) Shipping (2)
R7615 Shipping (2) Shipping (3) Shipping (3) Shipping (3) Shipping (4) Shipping (3) Shipping (3) Shipping (3) Shipping (6) Shipping (3)
XR7620 Shipping (2) Shipping (5) Shipping (2) Shipping (5) Shipping (2)
R750xa Shipping (4³) Shipping (4³) Shipping (4³) Shipping (4³) Shipping (6³) Shipping (4³) Shipping (4³) Shipping (4³) Shipping (4³) Shipping (6³) Shipping (6³) Shipping (4³) Shipping (4³)
R750 Shipping (2) Shipping (2) Shipping (2) Shipping (6) Shipping (2) Shipping (2) Shipping (2) Shipping (3) Shipping (6) Shipping (6) Shipping (6)
R7525 - Rome & Milan Shipping (3) Shipping (3) Shipping (3) Shipping (6) Shipping (3) Shipping (3) Shipping (3) Shipping (3) Shipping (6) Shipping (6) Shipping (3) Shipping (3)
R7515 - Rome & Milan Shipping (4) Shipping (1) Shipping (1) Shipping (4) Shipping (4) Shipping (1)
XR12 Shipping (2) Shipping (3) Shipping (2) Shipping (2) Shipping (2) Shipping (2) Shipping (2)
R740/XD Shipping (3) Shipping (3) Shipping (3) Shipping (3) Shipping (3) Shipping (6) Shipping (6**)
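For anyone who wants to filter this matrix programmatically rather than by eye, a minimal sketch is below. The `Gpu` record type is hypothetical, and only a few sample rows are transcribed from the spec table above:

```python
from dataclasses import dataclass

@dataclass
class Gpu:
    """Hypothetical record type for one row of the spec table."""
    brand: str
    model: str
    memory_gb: int
    max_power_w: int
    slot_width: str   # "DW", "SW", or "N/A" for OAM/SXM modules
    workload: str

# Sample rows transcribed from the table above (not the full matrix).
GPUS = [
    Gpu("Nvidia", "L4", 24, 72, "SW", "Inferencing/Edge/VDI"),
    Gpu("Nvidia", "A2", 16, 60, "SW", "Inferencing/Edge/VDI"),
    Gpu("Nvidia", "T4", 16, 70, "SW", "Inferencing/Edge/VDI"),
    Gpu("Nvidia", "A100", 80, 300, "DW", "HPC/AI/Database Analytics"),
]

# Example query: single-wide cards at 75 W or below, i.e. the ones that
# need no auxiliary power cable and suit edge/inferencing platforms.
edge_cards = [g.model for g in GPUS if g.slot_width == "SW" and g.max_power_w <= 75]
```

Extending `GPUS` with the remaining rows (and a `platforms` field from the matrix above) would allow queries like "which double-wide cards ship on the R750xa" in the same style.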