Fix: Update torch.cuda.amp to torch.amp to Resolve Deprecation Warning #13483
base: master
Conversation
All Contributors have signed the CLA. ✅
👋 Hello @Bala-Vignesh-Reddy, thank you for submitting a PR!

🛠️ Notes and Feedback: Your PR replaces all instances of `torch.cuda.amp` with `torch.amp`. If possible, please provide a minimum reproducible example (MRE) for broader validation, including any specific configurations or edge cases you've tested this with. This will help other contributors and engineers verify the fixes more effectively. 🐛 For additional guidance, you can refer to our Contributing Guide and CI Documentation. 🔔 Next Steps:
I have read the CLA Document and I sign the CLA
May resolve #13226
@Bala-Vignesh-Reddy please review and resolve failing CI tests. Thank you!
Description
This PR addresses issue #13226, which raises a FutureWarning due to the deprecation of `torch.cuda.amp` as of PyTorch 2.4. It replaces all instances of `torch.cuda.amp` with `torch.amp` to resolve the warning.

Key Changes
- Replaced `torch.cuda.amp.autocast` with `torch.amp.autocast('cuda', ...)`.
- Replaced `torch.cuda.amp.GradScaler` with `torch.amp.GradScaler('cuda')` (see the sketch below).
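For context, a minimal before/after sketch of the API change (illustrative only; the `enabled` flag and surrounding code are placeholders, not the exact YOLOv5 call sites):

```python
import torch

# Old API, deprecated since PyTorch 2.4 (emits a FutureWarning):
#   with torch.cuda.amp.autocast():
#       ...
#   scaler = torch.cuda.amp.GradScaler()

# New API: the device type is passed explicitly as the first argument.
with torch.amp.autocast("cuda", enabled=torch.cuda.is_available()):
    pass  # forward pass would run here under mixed precision

scaler = torch.amp.GradScaler("cuda", enabled=torch.cuda.is_available())
```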
Steps to Reproduce and Testing
To verify the fix, I used a custom test script that previously showed the deprecation warning. The following tests were performed:
Test Script:
The test script used for verification:
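A minimal sketch of such a verification script on PyTorch >= 2.4 (the toy model, optimizer, and random data below are placeholders, not the actual script from the PR):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

model = nn.Linear(10, 2).to(device)  # toy stand-in for the YOLOv5 model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Updated AMP API; the old torch.cuda.amp.* calls emitted a FutureWarning here.
scaler = torch.amp.GradScaler("cuda", enabled=use_amp)

x = torch.randn(4, 10, device=device)
with torch.amp.autocast("cuda", enabled=use_amp):
    loss = model(x).sum()

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print("Completed one AMP step without a FutureWarning.")
```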
Screenshots
Warning Before the Change:
This screenshot shows the deprecation warning before the fix.
Warning After the Change:
This screenshot shows that the warning is no longer present after applying the fix.
Purpose & Impact
Additional Comments
Since the original PR #13244 has been inactive, I have submitted this new PR with the updated changes to resolve issue #13226.
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Updated the PyTorch mixed precision (AMP) method usage to align with the latest `torch.amp` standards for better compatibility and future-proofing.

📊 Key Changes
- Replaced `torch.cuda.amp` usages with `torch.amp` across various files: `val.py`, `common.py`, `train.py`, `segment/train.py`, and `utils/autobatch.py`.
- Updated `autocast` and `GradScaler` methods to specify `"cuda"` explicitly.

🎯 Purpose & Impact
- `torch.amp` is more generic and future-focused compared to `torch.cuda.amp`.
- `"cuda"` helps clarify intent and avoids potential confusion, particularly for non-CPU environments.