Describe the bug
When inferring with STFPM on XPU, an error is raised at line 37 of anomalib/src/anomalib/data/utils/boxes.py (lines 34 to 37 at commit 34b3a90): 'TypeError: can't convert xpu:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.'
It can be fixed by executing the connected_components_gpu function instead of connected_components_cpu. Could you change the code to execute connected_components_cpu when masks is on the CPU, and connected_components_gpu otherwise?
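A minimal sketch of the requested dispatch, as it might look around the masks_to_boxes call site (function names and the import path are inferred from the traceback; the helper name batch_connected_components, the exact signatures of the two functions, and the assumption that connected_components_gpu runs on XPU are unverified):

```python
import torch

from anomalib.utils.cv import connected_components_cpu, connected_components_gpu


def batch_connected_components(masks: torch.Tensor) -> torch.Tensor:
    """Pick the connected-components implementation that matches the
    device of ``masks`` instead of hard-coding the CPU path."""
    if masks.device.type == "cpu":
        # CPU path: internally converts to numpy, which only works on host tensors.
        return connected_components_cpu(masks)
    # Non-CPU path: stays in torch, so it should also work on non-CUDA
    # accelerators such as XPU (assumption).
    return connected_components_gpu(masks)


# The call site in masks_to_boxes would then become:
# batch_comps = batch_connected_components(masks).squeeze(1)
```

Dispatching on masks.device.type rather than checking specifically for CUDA keeps the CPU implementation as the only special case, so any accelerator backend falls through to the torch-based path.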
Dataset
MVTec
Model
STFPM
Steps to reproduce the behavior
Infer STFPM model on XPU
OS information
OS: Ubuntu 22.04.3 LTS
Python version: 3.10
Anomalib version: 0.5.1
PyTorch version: 2.1
CUDA/cuDNN version: N/A
GPU models and configuration: Intel Data Center GPU Max 1100
Any other relevant information: N/A
Expected behavior
Infer a model without errors.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
Run on OTX
Logs
Traceback (most recent call last):
File "/home/sdp/miniconda3/envs/eunwoosh_otx/bin/otx", line 8, in<module>sys.exit(main())
File "/home/sdp/eunwoosh/otx/src/otx/cli/tools/cli.py", line 77, in main
results = globals()[f"otx_{name}"]()
File "/home/sdp/eunwoosh/otx/src/otx/cli/tools/eval.py", line 141, in main
predicted_validation_dataset = task.infer(
File "/home/sdp/eunwoosh/otx/src/otx/algorithms/anomaly/tasks/inference.py", line 227, in infer
self.trainer.predict(model=self.model, datamodule=datamodule)
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 892, in predict
return call._call_and_handle_interrupt(
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 938, in _predict_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1112, in _run
results = self._run_stage()
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1190, in _run_stage
return self._run_predict()
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1244, in _run_predict
return self.predict_loop.run()
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/loops/dataloader/prediction_loop.py", line 100, in advance
dl_predictions, dl_batch_indices = self.epoch_loop.run(
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/prediction_epoch_loop.py", line 100, in advance
self._predict_step(batch, batch_idx, dataloader_idx)
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/prediction_epoch_loop.py", line 129, in _predict_step
predictions = self.trainer._call_strategy_hook("predict_step", *step_kwargs.values())
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1494, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 408, in predict_step
return self.model.predict_step(*args, **kwargs)
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/anomalib/models/components/base/anomaly_module.py", line 96, in predict_step
outputs["pred_boxes"], outputs["box_scores"] = masks_to_boxes(
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/anomalib/data/utils/boxes.py", line 36, in masks_to_boxes
batch_comps = connected_components_cpu(masks).squeeze(1)
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/anomalib/utils/cv/connected_components.py", line 45, in connected_components_cpu
mask = mask.squeeze().numpy().astype(np.uint8)
File "/home/sdp/miniconda3/envs/eunwoosh_otx/lib/python3.10/site-packages/intel_extension_for_pytorch/nn/functional/_tensor_method.py", line 13, in _numpy
return torch._C._TensorBase.numpy(x)
TypeError: can't convert xpu:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
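The root cause visible in the last frames is the unconditional Tensor.numpy() call inside connected_components_cpu: NumPy can only view host memory, so any non-CPU tensor has to be copied first. A minimal illustration (assumes an XPU device is available through intel_extension_for_pytorch):

```python
import torch
import intel_extension_for_pytorch  # noqa: F401 -- registers the "xpu" device

mask = torch.zeros(8, 8, device="xpu")
# mask.numpy() would raise:
#   TypeError: can't convert xpu:0 device type tensor to numpy.
#   Use Tensor.cpu() to copy the tensor to host memory first.
mask_np = mask.cpu().numpy()  # explicit host copy first, then conversion succeeds
```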
Code of Conduct
I agree to follow this project's Code of Conduct