Update anomaly transforms #4059 (Merged)
CHANGELOG.md (2 additions, 0 deletions)

@@ -68,6 +68,8 @@ All notable changes to this project will be documented in this file.
 
 ### Bug fixes
 
+- Update anomaly base transforms to use square resizing
+  (<https://github.com/openvinotoolkit/training_extensions/pull/4059>)
 - Fix Combined Dataloader & unlabeled warmup loss in Semi-SL
   (<https://github.com/openvinotoolkit/training_extensions/pull/3723>)
 - Revert #3579 to fix issues with replacing coco_instance with a different format in some dataset
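Editorial note (not part of the PR): the sketch below contrasts the two strategies the changelog entry refers to. The `resize_longest_edge_then_pad` helper is a hypothetical approximation of the removed OTX `ResizetoLongestEdge` + `PadtoSquare` pair; the last line shows the plain square `Resize` that replaces it.

```python
import torch
import torch.nn.functional as F
from torchvision.transforms import v2


def resize_longest_edge_then_pad(img: torch.Tensor, size: int) -> torch.Tensor:
    """Rough stand-in for the removed ResizetoLongestEdge + PadtoSquare pair."""
    _, h, w = img.shape
    scale = size / max(h, w)  # scale so the longest edge becomes `size`
    new_h, new_w = round(h * scale), round(w * scale)
    img = v2.functional.resize(img, [new_h, new_w], antialias=True)
    # Pad the short side (right/bottom) out to a size x size square.
    return F.pad(img, (0, size - new_w, 0, size - new_h))


img = torch.rand(3, 480, 640)  # C x H x W, deliberately non-square
print(resize_longest_edge_then_pad(img, 256).shape)      # [3, 256, 256], letterboxed
print(v2.Resize([256, 256], antialias=True)(img).shape)  # [3, 256, 256], stretched
```

Both paths yield a 256×256 tensor; the difference is whether the image keeps its aspect ratio (letterboxed with padding) or is stretched to fill the square.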
src/otx/__init__.py (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
 # Copyright (C) 2024 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
-__version__ = "2.2.0rc10"
+__version__ = "2.2.0rc11"
 
 import os
 from pathlib import Path
src/otx/recipe/_base_/data/anomaly.yaml (7 additions, 10 deletions)

@@ -1,5 +1,5 @@
 task: ANOMALY_CLASSIFICATION
-input_size: 256
+input_size: [256, 256]

Contributor suggested declaring the input size as a YAML block list:

    input_size:
    - 256
    - 256
 
 data_format: mvtec
 mem_cache_size: 1GB
 mem_cache_img_max_size: null
@@ -13,11 +13,10 @@ train_subset:
   batch_size: 32
   num_workers: 4
   transforms:
-    - class_path: otx.core.data.transform_libs.torchvision.ResizetoLongestEdge
+    - class_path: torchvision.transforms.v2.Resize
       init_args:
-        size: $(input_size)
+        size: [256, 256]

Contributor suggested (this conversation was marked as resolved by chuneuny-emily):

    size: $(input_size)

Author replied:

> When I use $(input_size), it resizes only along one dimension. And when I pass $(input_size), $(input_size), I get a "found str, expected int" error.

Contributor replied:

> If you want to use a list as the input size, you need to set input_size as a list in the yaml file, as in my comment above.

         antialias: true
-    - class_path: otx.core.data.transform_libs.torchvision.PadtoSquare
     - class_path: torchvision.transforms.v2.ToDtype
       init_args:
         dtype: ${as_torch_dtype:torch.float32}
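
The behavior described in the exchange above matches torchvision's documented `Resize` semantics: an int `size` matches the shorter edge and preserves aspect ratio, while a two-element sequence resizes to exactly that height and width. A minimal illustrative sketch (editorial, not from the PR):

```python
import torch
from torchvision.transforms import v2

img = torch.rand(3, 480, 640)  # C x H x W, deliberately non-square

# With an int, Resize matches the SHORTER edge to 256 and keeps the
# aspect ratio, so the output is not square.
print(v2.Resize(size=256, antialias=True)(img).shape)         # [3, 256, 341]

# With a [height, width] list, Resize produces exactly that shape.
print(v2.Resize(size=[256, 256], antialias=True)(img).shape)  # [3, 256, 256]
```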
@@ -36,11 +35,10 @@ val_subset:
   batch_size: 32
   num_workers: 4
   transforms:
-    - class_path: otx.core.data.transform_libs.torchvision.ResizetoLongestEdge
+    - class_path: torchvision.transforms.v2.Resize
       init_args:
-        size: $(input_size)
+        size: [256, 256]
         antialias: true
-    - class_path: otx.core.data.transform_libs.torchvision.PadtoSquare
     - class_path: torchvision.transforms.v2.ToDtype
       init_args:
         dtype: ${as_torch_dtype:torch.float32}
@@ -59,11 +57,10 @@ test_subset:
   batch_size: 32
   num_workers: 4
   transforms:
-    - class_path: otx.core.data.transform_libs.torchvision.ResizetoLongestEdge
+    - class_path: torchvision.transforms.v2.Resize
       init_args:
-        size: $(input_size)
+        size: [256, 256]
         antialias: true
-    - class_path: otx.core.data.transform_libs.torchvision.PadtoSquare
     - class_path: torchvision.transforms.v2.ToDtype
       init_args:
         dtype: ${as_torch_dtype:torch.float32}
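
As a quick sanity check of the recipe after this change, the subsets' transform stack can be composed directly (editorial sketch; it assumes `ToDtype` takes only the dtype shown here, since the hunks are truncated):

```python
import torch
from torchvision.transforms import v2

# The transform stack as it now appears in anomaly.yaml for every subset.
pipeline = v2.Compose([
    v2.Resize(size=[256, 256], antialias=True),
    v2.ToDtype(torch.float32),
])

img = torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8)
out = pipeline(img)
print(out.shape, out.dtype)  # torch.Size([3, 256, 256]) torch.float32
```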