
Fix: Update torch.cuda.amp to torch.amp to Resolve Deprecation Warning #13483

Open · wants to merge 19 commits into master
Conversation

@Bala-Vignesh-Reddy commented Jan 5, 2025

Description

This PR addresses issue #13226, which reports a FutureWarning caused by the deprecation of torch.cuda.amp as of PyTorch 2.4. It replaces all instances of torch.cuda.amp with torch.amp to resolve the warning.


Key Changes

  • Replaced all instances of torch.cuda.amp.autocast with torch.amp.autocast('cuda', ...).
  • Replaced torch.cuda.amp.GradScaler with torch.amp.GradScaler('cuda') (a before/after sketch follows below).
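
For illustration, here is a minimal, hedged before/after sketch of the substitution (placeholder model and data, not the actual YOLOv5 training code):

import torch
import torch.nn as nn

use_amp = torch.cuda.is_available()  # enable AMP only when a CUDA device is present
device = 'cuda' if use_amp else 'cpu'
model = nn.Linear(8, 1).to(device)
imgs = torch.randn(4, 8, device=device)

# Before (deprecated spelling, emits a FutureWarning on PyTorch >= 2.4):
#   scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
#   with torch.cuda.amp.autocast(enabled=use_amp):
#       loss = model(imgs).sum()

# After (device type passed explicitly):
scaler = torch.amp.GradScaler('cuda', enabled=use_amp)
with torch.amp.autocast('cuda', enabled=use_amp):
    loss = model(imgs).sum()
scaler.scale(loss).backward()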

Steps to Reproduce and Testing

To verify the fix, I used a custom test script that was previously showing the deprecation warning. The following tests were performed:

  1. Test Script: I ran a test script using YOLOv5 and confirmed that the deprecation warning no longer appears after applying the changes (a minimal warning check is sketched below, followed by the full script).
  2. Warning Before Fix: The screenshot below shows the deprecation warning when loading the model directly from the repo via torch.hub.
  3. Warning After Fix: After applying the fix, the warning no longer appears, as shown in the second screenshot.
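
As a lighter-weight check (a minimal sketch, assuming PyTorch 2.4 or newer is installed; this is not the exact test that was run), the warning can be captured in isolation:

import warnings
import torch

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    with torch.cuda.amp.autocast(enabled=False):      # deprecated spelling
        pass
    with torch.amp.autocast('cuda', enabled=False):   # replacement spelling
        pass

# On PyTorch >= 2.4 the first context manager emits a FutureWarning; the second does not.
print([str(w.message) for w in caught if issubclass(w.category, FutureWarning)])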

Test Script:

The test script used for verification:

import cv2
import torch

# model = torch.hub.load('ultralytics/yolov5', 'yolov5n')  # loading the unpatched release from the hub shows the warning
model = torch.hub.load('./YOLOV5', 'yolov5n', source='local')  # local clone with this PR applied
model.eval()

cap = cv2.VideoCapture(0)  # default webcam

while True:
    ret, frame = cap.read()
    if not ret:
        break
    results = model(frame)
    for det in results.xyxy[0]:
        x1, y1, x2, y2, conf, cls = det.tolist()
        label = f'{results.names[int(cls)]} {conf:.2f}'
        print(label)
        cv2.putText(frame, label, (int(x1), int(y1) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
    render_frame = results.render()[0]
    cv2.imshow('Detection', render_frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Screenshots

Warning Before the Change:

Warning Before
This screenshot shows the deprecation warning before the fix.

Warning After the Change:

Warning After
This screenshot shows that the warning is no longer present after applying the fix.


Purpose & Impact

  • Improved Compatibility: Ensures that the code is compatible with PyTorch 2.4 and newer versions, reducing future issues.
  • Future-Proofing: Keeps automatic mixed precision (AMP) on the maintained torch.amp API for CUDA-enabled devices without changing runtime behavior.

Additional Comments

Since the original PR #13244 has been inactive, I have submitted this new PR with the updated changes to resolve issue #13226.


🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Updated the PyTorch mixed precision (AMP) method usage to align with the latest torch.amp standards for better compatibility and future-proofing.

📊 Key Changes

  • Replaced torch.cuda.amp usages with torch.amp across various files:
    • val.py, common.py, train.py, segment/train.py, and utils/autobatch.py.
  • Updated autocast and GradScaler calls to specify "cuda" explicitly (illustrated in the sketch below).
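
As an illustration of the explicit device type (a hedged sketch with placeholder names, not the literal diff from common.py or val.py):

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(8, 1).to(device).eval()
x = torch.randn(1, 8, device=device)

# autocast takes the device type as its first argument; enabling it only on CUDA
# keeps the context a no-op on CPU, so behavior is unchanged for non-GPU users.
with torch.amp.autocast('cuda', enabled=device.type == 'cuda'):
    with torch.no_grad():
        y = model(x)
print(y.shape)  # torch.Size([1, 1])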

🎯 Purpose & Impact

  • Consistency with PyTorch: Ensures compatibility with the latest PyTorch API, as torch.amp is more generic and future-focused compared to torch.cuda.amp.
  • Enhanced Stability: Reduces potential deprecation or compatibility issues in future PyTorch releases.
  • Improved Usability: Explicitly specifying "cuda" helps clarify intent and avoids potential confusion, particularly for non-CPU environments.
  • Seamless Transition: Existing functionality remains unchanged for users, ensuring continuity without disruptions. 🚀

github-actions bot (Contributor) commented Jan 5, 2025

All Contributors have signed the CLA. ✅
Posted by the CLA Assistant Lite bot.

@UltralyticsAssistant added the bug (Something isn't working), devops (GitHub Devops or MLops), and enhancement (New feature or request) labels on Jan 5, 2025
@UltralyticsAssistant (Member) commented:

👋 Hello @Bala-Vignesh-Reddy, thank you for submitting an ultralytics/yolov5 🚀 PR! To ensure a seamless integration of your work, please review the following checklist:

  • Define a Purpose: Clearly explain the purpose of your fix or feature in your PR description, and link to any relevant issues. Ensure your commit messages are clear, concise, and adhere to the project's conventions.
  • Synchronize with Source: Confirm your PR is synchronized with the ultralytics/yolov5 main branch. If it's behind, update it by clicking the 'Update branch' button or by running git pull and git merge main locally.
  • Ensure CI Checks Pass: Verify all Ultralytics Continuous Integration (CI) checks are passing. If any checks fail, please address the issues.
  • Update Documentation: Update the relevant documentation for any new or modified features.
  • Add Tests: If applicable, include or update tests to cover your changes, and confirm that all tests are passing.
  • Sign the CLA: Please ensure you have signed our Contributor License Agreement (CLA) if this is your first Ultralytics PR by writing "I have read the CLA Document and I sign the CLA" in a new message.
  • Minimize Changes: Limit your changes to the minimum necessary for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." — Bruce Lee

🛠️ Notes and Feedback:

Your PR for replacing all instances of torch.cuda.amp with torch.amp, as detailed in your description and diff, looks well-crafted and effectively aligns YOLOv5 with the most current torch.amp standards. Ensuring compatibility with PyTorch 2.4 is crucial for robustness, and we appreciate the inclusion of test scripts and comparison screenshots! 🎉

If possible, please provide a minimum reproducible example (MRE) for broader validation, including any specific configurations or edge cases you've tested this with. This will help other contributors and engineers verify the fixes more effectively. 🐛

For additional guidance, you can refer to our Contributing Guide and CI Documentation.


🔔 Next Steps:
An Ultralytics engineer will review your PR in detail soon. If there are any updates or changes on your end, don’t hesitate to update the PR. Thank you again for contributing to Ultralytics! 🚀

@Bala-Vignesh-Reddy (Author) commented:

I have read the CLA Document and I sign the CLA

@Bala-Vignesh-Reddy changed the title from "Fix: Update torch.cuda.amp to torch.amp to Resolve Deprecation Warning (#13226)" to "Fix: Update torch.cuda.amp to torch.amp to Resolve Deprecation Warning" on Jan 5, 2025
@glenn-jocher (Member) commented Jan 6, 2025

May resolve #13226

@glenn-jocher linked an issue on Jan 6, 2025 that may be closed by this pull request.
@glenn-jocher (Member) commented:

@Bala-Vignesh-Reddy please review and resolve failing CI tests. Thank you!

Labels
bug · devops · enhancement
Projects
None yet
Development
Successfully merging this pull request may close this issue: "Various problems with MPS acceleration when using YOLOv5 on Mac devices" (关于yolov5在mac设备上使用mps加速出现的各种问题)
3 participants