Bump torch minimum to mitigate CVE-2024-31580 & CVE-2024-31583 and enable numpy 2 compatibility (#8368)

This is a follow-up to the comments made in
#8296 (comment).

### Description

This bumps the minimum required `torch` version from 1.13.1 to 2.2.0 in
the first commit.

See GHSA-5pcm-hx3q-hm94 and
GHSA-pg7h-5qx3-wjr3 for more details
regarding the "High" severity scoring.

- https://nvd.nist.gov/vuln/detail/CVE-2024-31580
- https://nvd.nist.gov/vuln/detail/CVE-2024-31583
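
To confirm that an existing environment already meets the new floor, a minimal check might look like the following (illustrative only, not part of this PR; it only assumes the `packaging` package, which is already a build requirement here):

```python
# Minimal sketch (not part of this PR): confirm the installed torch meets the new 2.2.0 floor.
from packaging.version import Version

import torch

MINIMUM = Version("2.2.0")  # floor introduced to mitigate CVE-2024-31580 / CVE-2024-31583
installed = Version(torch.__version__.split("+")[0])  # drop local build tags such as "+cu121"

if installed < MINIMUM:
    raise RuntimeError(f"torch {installed} predates the patched minimum {MINIMUM}; please upgrade")
```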

Additionally, PyTorch added support for numpy 2 starting with PyTorch 2.3.0. The second commit in this PR allows either numpy 1 or numpy 2 to be used with torch>=2.3.0. That commit is included here because, once the minimum moves to torch 2.2, it makes sense to go straight to 2.3 and pick up the numpy 2 compatibility.

A special case is handled on Windows, as the PyTorch Windows binaries had compatibility issues with numpy 2 that were only fixed in torch 2.4.1 (see pytorch/pytorch#131668 (comment)).
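
The resulting rule is a torch floor that depends on the platform; a minimal sketch of that rule (illustrative only, mirroring the markers added to `requirements.txt` and `setup.cfg` below):

```python
# Minimal sketch (not part of this PR): the torch floor needed to run with numpy 2, per platform.
# Non-Windows platforms need torch>=2.3.0; Windows needs torch>=2.4.1 because of the wheel fix.
import sys

import numpy as np
import torch
from packaging.version import Version

numpy2_floor = Version("2.4.1") if sys.platform == "win32" else Version("2.3.0")
torch_version = Version(torch.__version__.split("+")[0])

if Version(np.__version__) >= Version("2.0") and torch_version < numpy2_floor:
    raise RuntimeError(f"numpy 2 requires torch>={numpy2_floor} on this platform, found torch {torch_version}")
```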

Maintainers will need to update the required status checks for the
[`dev`](https://github.com/Project-MONAI/MONAI/tree/dev) branch to:
- Remove min-dep-pytorch (2.0.1)

### Types of changes
<!--- Put an `x` in all the boxes that apply, and remove the not
applicable items -->
- [X] Breaking change (fix or new feature that would cause existing
functionality to change).
- [ ] Integration tests passed locally by running `./runtests.sh -f -u
--net --coverage`.
- [ ] Quick tests passed locally by running `./runtests.sh --quick
--unittests --disttests`.

---------

Signed-off-by: James Butler <james.butler@revvity.com>
jamesobutler authored Mar 4, 2025
1 parent a09c1f0 commit 2e391c8
Showing 17 changed files with 48 additions and 79 deletions.
10 changes: 3 additions & 7 deletions .github/workflows/cron.yml
@@ -13,17 +13,13 @@ jobs:
strategy:
matrix:
environment:
- "PT113+CUDA118"
- "PT210+CUDA121"
- "PT230+CUDA121"
- "PT240+CUDA126"
- "PTLATEST+CUDA126"
include:
# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes
- environment: PT113+CUDA118
pytorch: "torch==1.13.1 torchvision==0.14.1 --extra-index-url https://download.pytorch.org/whl/cu121"
base: "nvcr.io/nvidia/pytorch:22.10-py3" # CUDA 11.8
- environment: PT210+CUDA121
pytorch: "pytorch==2.1.0 torchvision==0.16.0 --extra-index-url https://download.pytorch.org/whl/cu121"
- environment: PT230+CUDA121
pytorch: "pytorch==2.3.0 torchvision==0.18.0 --extra-index-url https://download.pytorch.org/whl/cu121"
base: "nvcr.io/nvidia/pytorch:23.08-py3" # CUDA 12.1
- environment: PT240+CUDA126
pytorch: "pytorch==2.4.0 torchvision==0.19.0 --extra-index-url https://download.pytorch.org/whl/cu121"
26 changes: 14 additions & 12 deletions .github/workflows/pythonapp-gpu.yml
@@ -22,19 +22,21 @@ jobs:
strategy:
matrix:
environment:
- "PT113+CUDA116"
- "PT210+CUDA121DOCKER"
- "PT230+CUDA124DOCKER"
- "PT240+CUDA125DOCKER"
- "PT250+CUDA126DOCKER"
include:
# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes
- environment: PT113+CUDA116
pytorch: "torch==1.13.1 torchvision==0.14.1"
base: "nvcr.io/nvidia/cuda:11.6.1-devel-ubuntu18.04"
- environment: PT210+CUDA121DOCKER
# 23.08: 2.1.0a0+29c30b1
- environment: PT230+CUDA124DOCKER
# 24.04: 2.3.0a0+6ddf5cf85e
pytorch: "-h" # we explicitly set pytorch to -h to avoid pip install error
base: "nvcr.io/nvidia/pytorch:23.08-py3"
- environment: PT210+CUDA121DOCKER
# 24.08: 2.3.0a0+40ec155e58.nv24.3
base: "nvcr.io/nvidia/pytorch:24.04-py3"
- environment: PT240+CUDA125DOCKER
# 24.06: 2.4.0a0+f70bd71a48
pytorch: "-h" # we explicitly set pytorch to -h to avoid pip install error
base: "nvcr.io/nvidia/pytorch:24.06-py3"
- environment: PT250+CUDA126DOCKER
# 24.08: 2.5.0a0+872d972e41
pytorch: "-h" # we explicitly set pytorch to -h to avoid pip install error
base: "nvcr.io/nvidia/pytorch:24.08-py3"
container:
@@ -49,7 +51,7 @@ jobs:
apt-get update
apt-get install -y wget
if [ ${{ matrix.environment }} = "PT113+CUDA116" ]
if [ ${{ matrix.environment }} = "PT230+CUDA124" ]
then
PYVER=3.9 PYSFX=3 DISTUTILS=python3-distutils && \
apt-get update && apt-get install -y --no-install-recommends \
@@ -114,7 +116,7 @@ jobs:
# build for the current self-hosted CI Tesla V100
BUILD_MONAI=1 TORCH_CUDA_ARCH_LIST="7.0" ./runtests.sh --build --disttests
./runtests.sh --quick --unittests
if [ ${{ matrix.environment }} = "PT113+CUDA116" ]; then
if [ ${{ matrix.environment }} = "PT230+CUDA124" ]; then
# test the clang-format tool downloading once
coverage run -m tests.clang_format_utils
fi
2 changes: 1 addition & 1 deletion .github/workflows/pythonapp-min.yml
@@ -124,7 +124,7 @@ jobs:
strategy:
fail-fast: false
matrix:
pytorch-version: ['1.13.1', '2.0.1', '2.2.2', '2.3.1', '2.4.1', 'latest']
pytorch-version: ['2.3.1', '2.4.1', '2.5.1', 'latest']
timeout-minutes: 40
steps:
- uses: actions/checkout@v4
6 changes: 3 additions & 3 deletions .github/workflows/pythonapp.yml
@@ -94,7 +94,7 @@ jobs:
- if: runner.os == 'windows'
name: Install torch cpu from pytorch.org (Windows only)
run: |
python -m pip install torch==1.13.1+cpu torchvision==0.14.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
python -m pip install torch==2.4.1 torchvision==0.19.1+cpu --index-url https://download.pytorch.org/whl/cpu
- if: runner.os == 'Linux'
name: Install itk pre-release (Linux only)
run: |
@@ -103,7 +103,7 @@
- name: Install the dependencies
run: |
python -m pip install --user --upgrade pip wheel
python -m pip install torch==1.13.1 torchvision==0.14.1
python -m pip install torch==2.4.1 torchvision==0.19.1
cat "requirements-dev.txt"
python -m pip install -r requirements-dev.txt
python -m pip list
@@ -155,7 +155,7 @@ jobs:
# install the latest pytorch for testing
# however, "pip install monai*.tar.gz" will build cpp/cuda with an isolated
# fresh torch installation according to pyproject.toml
python -m pip install torch>=1.13.1 torchvision
python -m pip install torch>=2.3.0 torchvision
- name: Check packages
run: |
pip uninstall monai
4 changes: 2 additions & 2 deletions docs/requirements.txt
@@ -1,5 +1,5 @@
-f https://download.pytorch.org/whl/cpu/torch-1.13.1%2Bcpu-cp39-cp39-linux_x86_64.whl
torch>=1.13.1
-f https://download.pytorch.org/whl/cpu/torch-2.3.0%2Bcpu-cp39-cp39-linux_x86_64.whl
torch>=2.3.0
pytorch-ignite==0.4.11
numpy>=1.20
itk>=5.2
4 changes: 2 additions & 2 deletions environment-dev.yml
@@ -5,8 +5,8 @@ channels:
- nvidia
- conda-forge
dependencies:
- numpy>=1.24,<2.0
- pytorch>=1.13.1
- numpy>=1.24,<3.0
- pytorch>=2.3.0
- torchio
- torchvision
- pytorch-cuda>=11.6
11 changes: 3 additions & 8 deletions monai/engines/evaluator.py
@@ -28,7 +28,7 @@
from monai.utils import ForwardMode, IgniteInfo, ensure_tuple, min_version, optional_import
from monai.utils.enums import CommonKeys as Keys
from monai.utils.enums import EngineStatsKeys as ESKeys
from monai.utils.module import look_up_option, pytorch_after
from monai.utils.module import look_up_option

if TYPE_CHECKING:
from ignite.engine import Engine, EventEnum
@@ -269,13 +269,8 @@ def __init__(
amp_kwargs=amp_kwargs,
)
if compile:
if pytorch_after(2, 1):
compile_kwargs = {} if compile_kwargs is None else compile_kwargs
network = torch.compile(network, **compile_kwargs) # type: ignore[assignment]
else:
warnings.warn(
"Network compilation (compile=True) not supported for Pytorch versions before 2.1, no compilation done"
)
compile_kwargs = {} if compile_kwargs is None else compile_kwargs
network = torch.compile(network, **compile_kwargs) # type: ignore[assignment]
self.network = network
self.compile = compile
self.inferer = SimpleInferer() if inferer is None else inferer
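
With torch>=2.3.0 guaranteed, the `pytorch_after(2, 1)` guard around `torch.compile` is no longer needed in the evaluator (or in the trainer below); a minimal sketch of the resulting pattern, with names simplified for illustration:

```python
# Minimal sketch (illustrative only, condensed from the evaluator/trainer changes):
# with the minimum torch now >= 2.3, compilation is applied directly when requested.
from __future__ import annotations

import torch


def maybe_compile(network: torch.nn.Module, compile: bool = False, compile_kwargs: dict | None = None):
    if compile:
        compile_kwargs = {} if compile_kwargs is None else compile_kwargs
        network = torch.compile(network, **compile_kwargs)
    return network
```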
10 changes: 2 additions & 8 deletions monai/engines/trainer.py
@@ -27,7 +27,6 @@
from monai.utils import AdversarialIterationEvents, AdversarialKeys, GanKeys, IgniteInfo, min_version, optional_import
from monai.utils.enums import CommonKeys as Keys
from monai.utils.enums import EngineStatsKeys as ESKeys
from monai.utils.module import pytorch_after

if TYPE_CHECKING:
from ignite.engine import Engine, EventEnum
@@ -183,13 +182,8 @@ def __init__(
amp_kwargs=amp_kwargs,
)
if compile:
if pytorch_after(2, 1):
compile_kwargs = {} if compile_kwargs is None else compile_kwargs
network = torch.compile(network, **compile_kwargs) # type: ignore[assignment]
else:
warnings.warn(
"Network compilation (compile=True) not supported for Pytorch versions before 2.1, no compilation done"
)
compile_kwargs = {} if compile_kwargs is None else compile_kwargs
network = torch.compile(network, **compile_kwargs) # type: ignore[assignment]
self.network = network
self.compile = compile
self.optimizer = optimizer
7 changes: 1 addition & 6 deletions monai/networks/blocks/crossattention.py
@@ -17,7 +17,7 @@
import torch.nn as nn

from monai.networks.layers.utils import get_rel_pos_embedding_layer
from monai.utils import optional_import, pytorch_after
from monai.utils import optional_import

Rearrange, _ = optional_import("einops.layers.torch", name="Rearrange")

@@ -84,11 +84,6 @@ def __init__(
if causal and sequence_length is None:
raise ValueError("sequence_length is necessary for causal attention.")

if use_flash_attention and not pytorch_after(minor=13, major=1, patch=0):
raise ValueError(
"use_flash_attention is only supported for PyTorch versions >= 2.0."
"Upgrade your PyTorch or set the flag to False."
)
if use_flash_attention and save_attn:
raise ValueError(
"save_attn has been set to True, but use_flash_attention is also set"
7 changes: 1 addition & 6 deletions monai/networks/blocks/selfattention.py
@@ -18,7 +18,7 @@
import torch.nn.functional as F

from monai.networks.layers.utils import get_rel_pos_embedding_layer
from monai.utils import optional_import, pytorch_after
from monai.utils import optional_import

Rearrange, _ = optional_import("einops.layers.torch", name="Rearrange")

@@ -90,11 +90,6 @@ def __init__(
if causal and sequence_length is None:
raise ValueError("sequence_length is necessary for causal attention.")

if use_flash_attention and not pytorch_after(minor=13, major=1, patch=0):
raise ValueError(
"use_flash_attention is only supported for PyTorch versions >= 2.0."
"Upgrade your PyTorch or set the flag to False."
)
if use_flash_attention and save_attn:
raise ValueError(
"save_attn has been set to True, but use_flash_attention is also set"
14 changes: 3 additions & 11 deletions monai/networks/blocks/upsample.py
@@ -17,8 +17,8 @@
import torch.nn as nn

from monai.networks.layers.factories import Conv, Pad, Pool
from monai.networks.utils import CastTempType, icnr_init, pixelshuffle
from monai.utils import InterpolateMode, UpsampleMode, ensure_tuple_rep, look_up_option, pytorch_after
from monai.networks.utils import icnr_init, pixelshuffle
from monai.utils import InterpolateMode, UpsampleMode, ensure_tuple_rep, look_up_option

__all__ = ["Upsample", "UpSample", "SubpixelUpsample", "Subpixelupsample", "SubpixelUpSample"]

@@ -164,15 +164,7 @@ def __init__(
align_corners=align_corners,
)

# Cast to float32 as 'upsample_nearest2d_out_frame' op does not support bfloat16
# https://github.com/pytorch/pytorch/issues/86679. This issue is solved in PyTorch 2.1
if pytorch_after(major=2, minor=1):
self.add_module("upsample_non_trainable", upsample)
else:
self.add_module(
"upsample_non_trainable",
CastTempType(initial_type=torch.bfloat16, temporary_type=torch.float32, submodule=upsample),
)
self.add_module("upsample_non_trainable", upsample)
if post_conv:
self.add_module("postconv", post_conv)
elif up_mode == UpsampleMode.PIXELSHUFFLE:
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -2,7 +2,7 @@
requires = [
"wheel",
"setuptools",
"torch>=1.13.1",
"torch>=2.3.0",
"ninja",
"packaging"
]
5 changes: 3 additions & 2 deletions requirements.txt
@@ -1,2 +1,3 @@
torch>=1.13.1,<2.6
numpy>=1.24,<2.0
torch>=2.3.0,<2.6; sys_platform != 'win32'
torch>=2.4.1,<2.6; sys_platform == 'win32'
numpy>=1.24,<3.0
5 changes: 3 additions & 2 deletions setup.cfg
@@ -42,8 +42,9 @@ setup_requires =
ninja
packaging
install_requires =
torch>=1.13.1
numpy>=1.24,<2.0
torch>=2.3.0; sys_platform != 'win32'
torch>=2.4.1; sys_platform == 'win32'
numpy>=1.24,<3.0

[options.extras_require]
all =
6 changes: 2 additions & 4 deletions tests/integration/test_integration_bundle_run.py
@@ -76,8 +76,7 @@ def test_tiny(self):
)
with open(meta_file, "w") as f:
json.dump(
{"version": "0.1.0", "monai_version": "1.1.0", "pytorch_version": "1.13.1", "numpy_version": "1.22.2"},
f,
{"version": "0.1.0", "monai_version": "1.1.0", "pytorch_version": "2.3.0", "numpy_version": "1.22.2"}, f
)
cmd = ["coverage", "run", "-m", "monai.bundle"]
# test both CLI entry "run" and "run_workflow"
@@ -114,8 +113,7 @@ def test_scripts_fold(self):
)
with open(meta_file, "w") as f:
json.dump(
{"version": "0.1.0", "monai_version": "1.1.0", "pytorch_version": "1.13.1", "numpy_version": "1.22.2"},
f,
{"version": "0.1.0", "monai_version": "1.1.0", "pytorch_version": "2.3.0", "numpy_version": "1.22.2"}, f
)

os.mkdir(scripts_dir)
6 changes: 3 additions & 3 deletions tests/metrics/test_surface_dice.py
@@ -82,7 +82,7 @@ def test_tolerance_euclidean_distance_with_spacing(self):
expected_res0[1, 1] = np.nan
for b, c in np.ndindex(batch_size, n_class):
np.testing.assert_allclose(expected_res0[b, c], res0[b, c].cpu())
np.testing.assert_array_equal(agg0.cpu(), np.nanmean(np.nanmean(expected_res0, axis=1), axis=0))
np.testing.assert_allclose(agg0.cpu(), np.nanmean(np.nanmean(expected_res0, axis=1), axis=0))
np.testing.assert_equal(not_nans.cpu(), torch.tensor(2))

def test_tolerance_euclidean_distance(self):
@@ -126,7 +126,7 @@ def test_tolerance_euclidean_distance(self):
expected_res0[1, 1] = np.nan
for b, c in np.ndindex(batch_size, n_class):
np.testing.assert_allclose(expected_res0[b, c], res0[b, c].cpu())
np.testing.assert_array_equal(agg0.cpu(), np.nanmean(np.nanmean(expected_res0, axis=1), axis=0))
np.testing.assert_allclose(agg0.cpu(), np.nanmean(np.nanmean(expected_res0, axis=1), axis=0))
np.testing.assert_equal(not_nans.cpu(), torch.tensor(2))

def test_tolerance_euclidean_distance_3d(self):
@@ -173,7 +173,7 @@ def test_tolerance_euclidean_distance_3d(self):
expected_res0[1, 1] = np.nan
for b, c in np.ndindex(batch_size, n_class):
np.testing.assert_allclose(expected_res0[b, c], res0[b, c].cpu())
np.testing.assert_array_equal(agg0.cpu(), np.nanmean(np.nanmean(expected_res0, axis=1), axis=0))
np.testing.assert_allclose(agg0.cpu(), np.nanmean(np.nanmean(expected_res0, axis=1), axis=0))
np.testing.assert_equal(not_nans.cpu(), torch.tensor(2))

def test_tolerance_all_distances(self):
2 changes: 1 addition & 1 deletion tests/nonconfig_workflow.py
@@ -65,7 +65,7 @@ def initialize(self):
self._monai_version = "1.1.0"

if self._pytorch_version is None:
self._pytorch_version = "1.13.1"
self._pytorch_version = "2.3.0"

if self._numpy_version is None:
self._numpy_version = "1.22.2"
