
Support CUDA streams, devices #3271

@TomAugspurger

Description


#2863, which implements a Zstd codec that uses nvcomp to decode on NVIDIA GPUs, includes a TODO for exposing devices and streams to users. This issue is to discuss how best to do that.

For background: some systems include multiple GPUs (devices), which lets users parallelize zarr operations (loosely analogous to using multiple host threads, though with many differences).
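As a rough sketch of the multi-device case (function and parameter names here are hypothetical, not part of #2863; it assumes CuPy as the device interface and falls back gracefully when no GPU is available):

```python
def decompress_round_robin(chunks, n_devices, decode):
    """Distribute compressed chunks across devices round-robin (sketch only).

    `decode` stands in for a per-chunk GPU decode call (hypothetical).
    Returns None when CuPy or enough GPUs are unavailable.
    """
    try:
        import cupy as cp
        if cp.cuda.runtime.getDeviceCount() < n_devices:
            return None
    except Exception:
        return None

    results = [None] * len(chunks)
    for i, chunk in enumerate(chunks):
        # Each device decodes its share of chunks independently,
        # loosely analogous to a host-side thread pool.
        with cp.cuda.Device(i % n_devices):
            results[i] = decode(chunk)
    return results
```

The open design question is where the device selection would live: per-codec configuration, a context manager, or an explicit argument threaded through the zarr API.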

CUDA streams (introduction) can be used to queue sequences of commands; depending on the exact sequence, the GPU may be able to process several of them at once. The classic example, which should apply to zarr, is overlapping data transfer (from host to device memory) with compute kernels (probably decompression in this case, though in principle the end user's computation might run concurrently too).
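The copy/compute overlap could be sketched roughly as below (again hypothetical names, assuming CuPy; true overlap also requires pinned host memory, which is elided here, and the sketch returns None without a GPU stack installed):

```python
def copy_and_decode_overlapped(host_chunks, decode):
    """Overlap host-to-device copies with decompression using two CUDA
    streams (sketch only; `decode` is a hypothetical per-chunk decoder)."""
    try:
        import cupy as cp
    except Exception:
        return None

    copy_stream = cp.cuda.Stream(non_blocking=True)
    compute_stream = cp.cuda.Stream(non_blocking=True)
    out = []
    for chunk in host_chunks:
        with copy_stream:
            # Host-to-device copy; asynchronous when the host buffer is pinned.
            dev = cp.asarray(chunk)
        # Make the compute stream wait for this copy before decoding,
        # while the copy stream moves on to the next chunk.
        compute_stream.wait_event(copy_stream.record())
        with compute_stream:
            out.append(decode(dev))
    compute_stream.synchronize()
    return out
```

This is the pattern the issue would need to expose knobs for: who creates the streams, whether users can pass their own, and how the codec synchronizes before handing data back.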

There's still a lot to figure out. I have some prototypes at https://github.com/TomAugspurger/cuda-streams-sample/ (https://github.com/TomAugspurger/cuda-streams-sample/blob/main/nvcomp-simple.py for nvcomp, discussed at TomAugspurger/cuda-streams-sample#4). We'll use this issue to solidify a proposal.
