CXL Introduction
As cloud computing becomes ubiquitous, we have to evolve the way data centers are
architected. To increase compute capacity and deliver faster data processing, we need to
integrate accelerators that excel at processing specific workloads: GPUs, FPGAs, AI processors,
and SmartNICs that perform computation on data in motion, and computational storage to process
data at rest. These devices already connect over PCI Express, but to better optimize how they work
together in heterogeneous system architectures, they need Compute Express Link (CXL). Developed
through a consortium of companies representing all major computer architectures, CXL is an open
interconnect standard that increases memory capacity and bandwidth and enables lower latency. It
leverages the PCIe 5.0 physical layer infrastructure to create a common memory space across the
host and all devices.
CXL Benefits
CXL is a cache-coherent standard that ensures the host processor and CXL devices see the same data
when they need to access it. The host CPU is primarily responsible for coherency management,
allowing the CPU and device to share resources for higher performance and decreased software stack
complexity. This reduces device cost and the overhead traditionally associated with coherency
across an I/O link. The CXL 1.1 specification defines three new protocols.
The first, CXL.io, is very similar to PCIe 5.0, with some enhancements; it is used for
initialization, link-up, device discovery, enumeration, and register access. The CXL.cache
protocol defines interactions between the host and device, allowing attached CXL devices to
efficiently cache host memory with extremely low latency using a request-and-response approach.
The CXL.mem protocol provides a host processor with access to device-attached memory using load
and store commands. Different combinations of these protocols result in three initial CXL usage
models.
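To make the load/store model concrete, here is a minimal user-space sketch in C. It assumes a
Linux host that has surfaced a CXL memory region as a device-DAX node; the path /dev/dax0.0 and
the 2 MiB mapping size are assumptions for illustration, not something the specification mandates.
Once the region is mapped, the host reaches it with ordinary loads and stores rather than a
driver-mediated I/O path, and CXL.mem carries the resulting reads and writes.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define MAP_LEN (2UL * 1024 * 1024)   /* one 2 MiB chunk of the region */

    int main(void)
    {
        /* Assumption: a CXL memory region is exposed as /dev/dax0.0. */
        int fd = open("/dev/dax0.0", O_RDWR);
        if (fd < 0) { perror("open"); return EXIT_FAILURE; }

        /* Map device-attached memory into the process address space. */
        uint64_t *mem = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) { perror("mmap"); close(fd); return EXIT_FAILURE; }

        /* CXL.mem in action: plain stores and loads, no read()/write(). */
        mem[0] = 0xC0FFEE;                                         /* store */
        printf("read back: %#llx\n", (unsigned long long)mem[0]);  /* load  */

        munmap(mem, MAP_LEN);
        close(fd);
        return EXIT_SUCCESS;
    }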
Type 1 devices coherently access host memory. This would be, for example, an accelerator with a
coherent cache that wants to share access to host memory. Usages for Type 1 devices include
partitioned global address space (PGAS) NICs and NIC atomics.
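As a loose analogy for what coherent caching buys a Type 1 device, the C11 sketch below treats the
accelerator as if it were just another CPU thread updating a counter in host memory. This is an
analogy rather than device code: the point is that CXL.cache keeps the device's cached copy
coherent, so a device-side atomic behaves like an atomic from a peer thread.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Counter in host memory; a Type 1 device could cache and update it
       coherently over CXL.cache, much like the second thread here. */
    static atomic_long counter = 0;

    static void *device_role(void *arg)     /* stand-in for the accelerator */
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            atomic_fetch_add(&counter, 1);  /* "NIC atomic" stand-in */
        return NULL;
    }

    int main(void)                          /* build: cc -pthread sketch.c */
    {
        pthread_t t;
        pthread_create(&t, NULL, device_role, NULL);

        for (int i = 0; i < 1000000; i++)
            atomic_fetch_add(&counter, 1);  /* host-side updates */

        pthread_join(t, NULL);
        /* Coherency means both sides observed one consistent value: 2000000. */
        printf("counter = %ld\n", atomic_load(&counter));
        return 0;
    }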
Type 2 devices are able to coherently access host memory and allow the host to access device
memory. For example, this would be an accelerator with attached memory and an optional coherent
cache. Usages for Type 2 devices include GPGPUs and dense computation.
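The sketch below shows why this matters for an accelerator; every offset and register name in it
is invented for illustration, since a real device defines its own layout. The host stages input
with plain stores, starts the device, and polls for completion, relying on coherency to make the
device's updates visible instead of explicit DMA staging and cache flushes. The dev_mem pointer
would come from mapping the device's attached memory, as in the earlier sketch.

    #include <stddef.h>
    #include <stdint.h>

    /* All offsets and register names are hypothetical. */
    #define INPUT_OFF  0x0000
    #define OUTPUT_OFF 0x1000
    #define DOORBELL   0x2000
    #define DONE_FLAG  0x2008

    static uint64_t run_kernel(volatile uint8_t *dev_mem,
                               const uint64_t *input, size_t n)
    {
        volatile uint64_t *in   = (volatile uint64_t *)(dev_mem + INPUT_OFF);
        volatile uint64_t *out  = (volatile uint64_t *)(dev_mem + OUTPUT_OFF);
        volatile uint64_t *bell = (volatile uint64_t *)(dev_mem + DOORBELL);
        volatile uint64_t *done = (volatile uint64_t *)(dev_mem + DONE_FLAG);

        for (size_t i = 0; i < n; i++)  /* stage input with plain stores */
            in[i] = input[i];

        *bell = 1;                      /* tell the accelerator to start */
        while (*done == 0)              /* poll; coherency makes the     */
            ;                           /* device's update visible       */

        return out[0];                  /* read the result with a load   */
    }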
And finally, Type 3 devices allow a host to access and manage attached device memory. These would
be memory buffers or expanders that give the host access to additional memory. Usages for Type 3
devices include increased memory bandwidth and capacity expansion, as well as persistent memory.
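On Linux, expander memory of this kind is typically onlined as a CPU-less NUMA node, so ordinary
NUMA tooling can place data on it. The sketch below uses libnuma, and the node number 1 is an
assumption about how the expander happened to enumerate on a given system.

    #include <numa.h>                  /* libnuma; link with -lnuma */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA is not available on this system\n");
            return EXIT_FAILURE;
        }

        int cxl_node = 1;              /* assumption: expander is node 1 */
        size_t len = 64UL * 1024 * 1024;

        void *buf = numa_alloc_onnode(len, cxl_node);
        if (buf == NULL) {
            fprintf(stderr, "numa_alloc_onnode failed\n");
            return EXIT_FAILURE;
        }

        memset(buf, 0, len);           /* touch it: it is capacity, not a cache */
        printf("allocated %zu bytes on node %d\n", len, cxl_node);

        numa_free(buf, len);
        return EXIT_SUCCESS;
    }

The same placement can also be done without touching the code, for example with
numactl --membind=1 ./app (again assuming the expander is node 1).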