Thanks for bringing this up. I agree that we should constrain the supported CUDA version range. For now, note that a built CUDA wheel could be larger than 500 MB.
Summary
Using the `lightgbm` Python package with CUDA support currently requires either installing it with `conda` or building it from source. LightGBM should provide pre-compiled wheels supporting at least the latest major version of CUDA (CUDA 12 as of this writing).
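Whatever form the wheels take, the project would need to decide which CUDA versions they claim to support. As a rough illustration only (this helper is hypothetical, not part of LightGBM), a check like the following could gate a detected CUDA runtime version against a supported minimum such as the "at least CUDA 12" target above:

```python
# Hedged sketch (not LightGBM code): gate a detected CUDA runtime version
# against a supported range, e.g. "at least CUDA 12" as discussed above.
def cuda_supported(version: str, min_major: int = 12) -> bool:
    # Compare only the major version; "12.4" -> 12.
    major = int(version.split(".")[0])
    return major >= min_major

print(cuda_supported("12.4"))  # True
print(cuda_supported("11.8"))  # False
```

A real implementation would also need an upper bound once the supported range is pinned down, per the comment above about constraining the CUDA version range.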
I am opening this as a tracking issue for this feature, and to have a public discussion about this topic.
Benefits of this work

- no need for `conda` or a build toolchain to use the CUDA version

Acceptance Criteria
Approach

Some design choices need to be made:

- publish to the existing `lightgbm` project (https://pypi.org/project/lightgbm/), or distribute under a separate name like `lightgbm-cu12`?

See some prior discussions from `xgboost` on implementation details:

- use `rapids-dependency-file-generator` to generate `pyproject.toml` (dmlc/xgboost#10803)

Notes
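As a note on the separate-name option raised above: projects that publish per-CUDA packages typically derive the suffix from the CUDA major version (`-cu11`, `-cu12`, and so on). A minimal hypothetical sketch, assuming such a naming scheme were adopted (none of these names are existing LightGBM APIs):

```python
# Hypothetical helper: derive a per-CUDA-major package name such as
# "lightgbm-cu12" from a base name and a CUDA version string.
def cuda_package_name(base: str, cuda_version: str) -> str:
    # Only the major version matters for the suffix; "12.4" -> "12".
    major = cuda_version.split(".")[0]
    return f"{base}-cu{major}"

print(cuda_package_name("lightgbm", "12.4"))  # lightgbm-cu12
print(cuda_package_name("lightgbm", "11.8"))  # lightgbm-cu11
```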
Issues people have had with building from source: