#2863, which implements a Zstd codec that uses nvcomp to decode on NVIDIA GPUs, includes a TODO for exposing devices and streams to users. This issue is to discuss how best to do that.
For background: some systems include multiple GPUs (devices), which let users parallelize zarr operations (at a high level, similar to how threads might be used, though there are many differences).
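To make the thread analogy concrete, a minimal sketch of round-robin chunk dispatch across devices might look like the following. This is not a proposal for the zarr API, just an illustration of the pattern using CuPy; `decode` stands in for whatever decompression call the codec exposes and is an assumption here.

```python
import cupy as cp
from concurrent.futures import ThreadPoolExecutor

def decode_on_device(device_id, chunk, decode):
    # Pin this unit of work to one GPU: allocations and kernels issued
    # inside the Device context run on that device.
    with cp.cuda.Device(device_id):
        return decode(cp.asarray(chunk))  # host -> device copy, then decode

def decode_all(chunks, decode, n_devices):
    # One worker thread per device, chunks assigned round-robin --
    # structurally similar to a CPU thread pool, as noted above.
    with ThreadPoolExecutor(max_workers=n_devices) as pool:
        futures = [
            pool.submit(decode_on_device, i % n_devices, chunk, decode)
            for i, chunk in enumerate(chunks)
        ]
        return [f.result() for f in futures]
```

Requires an NVIDIA GPU and CuPy to run; the open design question is where `device_id` (or a `Device` object) would be configured in zarr, e.g. per-codec, per-array, or via a context manager.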
CUDA streams can be used to queue sequences of commands; depending on the exact sequence, the GPU may be able to process several of them concurrently. The classic example, which should apply to zarr, is overlapping data transfer (from host to device memory) with compute kernels (probably decompression in this case, though in principle the end user's computation might run concurrently too).
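The copy/compute overlap described above can be sketched with two CuPy streams. Again this is only an illustration, not the implementation: `decode_kernel` is a hypothetical stand-in for the nvcomp decode call, and true overlap additionally requires pinned host buffers (copies from pageable memory cannot overlap kernels).

```python
import cupy as cp

def decode_chunks(compressed_chunks, decode_kernel):
    # Two streams, used round-robin: while chunk i is being decoded on
    # one stream, the host->device copy for chunk i+1 can be queued on
    # the other, letting the hardware overlap transfer and compute.
    streams = [cp.cuda.Stream(non_blocking=True) for _ in range(2)]
    out = []
    for i, chunk in enumerate(compressed_chunks):
        with streams[i % 2]:
            d_chunk = cp.asarray(chunk)          # host -> device copy
            out.append(decode_kernel(d_chunk))   # decompression kernel
    for stream in streams:
        stream.synchronize()  # wait for all queued work to finish
    return out
```

For zarr, the design question is the same as for devices: whether the user supplies a stream (or stream pool) to the codec, or the codec manages its own streams internally.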
There's still a lot to figure out. I have some prototypes at https://github.com/TomAugspurger/cuda-streams-sample/ (https://github.com/TomAugspurger/cuda-streams-sample/blob/main/nvcomp-simple.py for nvcomp, discussed at TomAugspurger/cuda-streams-sample#4). We'll use this issue to solidify a proposal.