
[Fix] Device mismatch in SwinV2 #976

Merged
merged 1 commit into from
Sep 1, 2022

Conversation

@a-mos commented Aug 17, 2022

Motivation

Since logit_scale in the WindowMSAV2 module is a learnable parameter, it may reside on the GPU, while torch.tensor() creates the clamp bound on the CPU by default. This mismatch results in an error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument max in method wrapper_clamp_Tensor)

Modification

The clamp bound tensor is now created on the same device as logit_scale.
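The pattern of the fix can be sketched as follows. This is a minimal, CPU-runnable illustration, not the actual diff: the class name WindowMSAV2Sketch is hypothetical, and the real change lives in mmcls/models/utils/attention.py.

```python
import torch
import torch.nn as nn


class WindowMSAV2Sketch(nn.Module):
    """Hypothetical sketch of SwinV2's clamped learnable logit scale."""

    def __init__(self):
        super().__init__()
        # Learnable temperature, as used by SwinV2's cosine attention.
        self.logit_scale = nn.Parameter(torch.log(10 * torch.ones(1)))

    def scaled(self):
        # Before the fix: max=torch.log(torch.tensor(1. / 0.01)) is created
        # on the CPU even when logit_scale lives on the GPU, so torch.clamp
        # raises the device-mismatch RuntimeError.
        # After the fix: build the bound on the parameter's own device.
        max_val = torch.log(
            torch.tensor(1. / 0.01, device=self.logit_scale.device))
        return torch.clamp(self.logit_scale, max=max_val).exp()
```

Here log(10) is below the bound log(100), so scaled() simply returns 10; the clamp only bites when training pushes logit_scale above the bound.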

BC-breaking (Optional)

Currently SwinV2 is not implemented in downstream repositories, so no BC break is expected.

Use cases

Minimal example to reproduce the error:

import torch
from mmcls.models.backbones import SwinTransformerV2
model = SwinTransformerV2().to('cuda:0')
model(torch.rand((2, 3, 512, 512), device='cuda:0'))

Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix the potential lint issues.
  • Bug fixes are fully covered by unit tests, the case that causes the bug should be added in the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, like docstring or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
  • CLA has been signed and all committers have signed the CLA in this PR.

@CLAassistant

CLAassistant commented Aug 17, 2022

CLA assistant check
All committers have signed the CLA.

@Ezra-Yu Ezra-Yu requested a review from yingfhu August 18, 2022 02:35
@yingfhu
Collaborator

yingfhu commented Aug 18, 2022

In PyTorch 1.12.0, torch.clamp arguments must be on the same device. Thanks for your work.
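The reviewer's point suggests a general pattern: always build tensor bounds on the input's device. A minimal sketch (the helper name clamp_like is hypothetical, not part of mmcls):

```python
import torch


def clamp_like(x, max_val):
    # Build the bound on x's device and dtype so torch.clamp never sees
    # tensors from two different devices (enforced since PyTorch 1.12).
    bound = torch.tensor(max_val, device=x.device, dtype=x.dtype)
    return torch.clamp(x, max=bound)


x = torch.tensor([0.5, 2.0, 5.0])
print(clamp_like(x, 1.0))  # values above 1.0 are clamped down to 1.0
```

On a CPU-only tensor this behaves identically to torch.clamp(x, max=1.0); the device= argument only matters once x is moved to a GPU.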

Collaborator

@yingfhu yingfhu left a comment


LGTM

@codecov

codecov bot commented Aug 19, 2022

Codecov Report

Merging #976 (2072f7a) into dev (6474ea2) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##              dev     #976   +/-   ##
=======================================
  Coverage   86.13%   86.13%           
=======================================
  Files         140      140           
  Lines        9674     9674           
  Branches     1677     1677           
=======================================
  Hits         8333     8333           
  Misses       1090     1090           
  Partials      251      251           
Flag Coverage Δ
unittests 86.06% <ø> (ø)

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmcls/models/utils/attention.py 96.91% <ø> (ø)


@mzr1996 mzr1996 merged commit 517bd3d into open-mmlab:dev Sep 1, 2022
Ezra-Yu pushed a commit to Ezra-Yu/mmclassification that referenced this pull request Sep 6, 2022
@yaqi0510

Dear a-mos,

First of all, we want to express our gratitude for your significant PR in the MMClassification project. Your contribution is highly appreciated, and we are grateful for your efforts in helping improve this open-source project during your personal time. We believe that many developers will benefit from your PR.

If you are Chinese or have WeChat, welcome to join our community on WeChat. You can add our assistant: openmmlabwx. Please add "mmsig + GitHub ID" as a remark when adding friends :)

We would also like to invite you to join our Special Interest Group (SIG) private channel on Discord, where you can share your experiences, ideas, and build connections with like-minded peers. To join the SIG channel, simply message the moderator OpenMMLab on Discord, or briefly share your open-source contributions in the #introductions channel and we will assist you. We look forward to seeing you there! Join us: https://discord.gg/raweFPmdzG
Thank you again for your contribution ❤

Best regards! @a-mos

5 participants