[FEA]: Constraints between library versions #788

@jmulcahy

Description

Is this a duplicate?

Area

cuda.bindings

Is your feature request related to a problem? Please describe.

I would like some method to ensure that all CUDA libraries have matching major and minor versions, so that they remain compatible with one another.

For example, if I run pip install torch "numba_cuda[cu12]" nvidia-cuda-nvcc-cu12 on Linux, I get a mixture of CUDA 12.6 packages pulled in by torch and nvidia-cuda-nvcc-cu12 at 12.9. The version mismatch breaks numba_cuda's compilation.

Describe the solution you'd like

I would like a way to constrain the major and minor versions of all CUDA packages together.

This might look like pip install torch "numba_cuda[cu12]" "cuda-bindings[all]" resolving nvidia-cuda-nvcc-cu12 to 12.6.*.
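Until a resolver-level mechanism exists, one manual workaround is a pip constraints file that pins every CUDA component wheel to one minor series. The 12.6.* pins and the exact list of packages below are illustrative; a real environment may pull in more nvidia-*-cu12 wheels that would also need pinning:

```
# cuda-constraints.txt — illustrative pip constraints file
nvidia-cuda-nvcc-cu12==12.6.*
nvidia-cuda-nvrtc-cu12==12.6.*
nvidia-cuda-runtime-cu12==12.6.*
```

This would then be applied at install time with pip install torch "numba_cuda[cu12]" -c cuda-constraints.txt, forcing all listed wheels onto the same minor series. The drawback is that the file has to be maintained by hand and kept complete.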

Describe alternatives you've considered

I've tried a few different packages that I hoped would apply this constraint somehow, but none seemed to. Part of the complication is that PyTorch manages CUDA versions with an index flag, which can't be embedded in pyproject.toml without the use of other tools.
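For context on the "other tools" point: uv is one tool that can express the PyTorch index choice declaratively. A sketch of its pyproject.toml configuration (the index name is arbitrary, and the exact syntax may differ across uv versions):

```toml
# Route only torch through PyTorch's cu126 wheel index (uv-specific config)
[[tool.uv.index]]
name = "pytorch-cu126"
url = "https://download.pytorch.org/whl/cu126"
explicit = true

[tool.uv.sources]
torch = { index = "pytorch-cu126" }
```

Even with this, nothing constrains the nvidia-*-cu12 wheels that other packages depend on to the same 12.6 series, which is the gap this feature request is about.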

Additional context

This could possibly be a numba_cuda issue, but it's not clear what that package could do about the situation. For example, numba_cuda[cu12] tries to constrain to CUDA 12.9 by setting cuda-bindings==12.9.* in its dependencies. However, pip install torch "numba_cuda[cu12]" still installs nvidia-cuda-nvrtc-cu12 12.6 alongside nvidia-cuda-nvcc-cu12 12.9.
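Whatever the eventual fix, the mismatch can at least be detected after installation. A minimal sketch using only the standard library; note that discovering CUDA wheels by the nvidia-*-cu12 naming convention is an assumption of this sketch, not an official API:

```python
# Detect mismatched CUDA 12 wheel versions in the current environment.
from importlib.metadata import distributions

def cuda12_minor_versions():
    """Map each installed nvidia-*-cu12 wheel to its (major, minor) version."""
    found = {}
    for dist in distributions():
        name = dist.metadata["Name"] or ""
        # Assumption: CUDA 12 component wheels follow the nvidia-*-cu12 pattern.
        if name.startswith("nvidia-") and name.endswith("-cu12"):
            major, minor = dist.version.split(".")[:2]
            found[name] = (int(major), int(minor))
    return found

def check_consistent(versions):
    """Return True when every CUDA wheel shares one major.minor version."""
    return len(set(versions.values())) <= 1

if __name__ == "__main__":
    vers = cuda12_minor_versions()
    print(vers, "consistent:", check_consistent(vers))
```

Running this in the environment from the example above would report nvidia-cuda-nvrtc-cu12 at (12, 6) and nvidia-cuda-nvcc-cu12 at (12, 9), i.e. inconsistent. This only detects the problem; it cannot prevent the resolver from creating it.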
