Fix: Managed the import of torch.amp to be compatible with all PyTorch versions #13487
+27
−17
I have read the CLA Document and I sign the CLA
Changes made
While training YOLOv5, I encountered a deprecation warning: recent PyTorch releases flag the `torch.cuda.amp` API as deprecated in favor of `torch.amp`.
To address this, I updated the import in train.py with a try-except block so it works with every PyTorch version allowed by requirements.txt.
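A minimal sketch of what such a guarded import can look like (the exact lines in the PR diff may differ):

```python
try:
    # PyTorch 2.3+: device-agnostic AMP utilities live in torch.amp
    from torch.amp import GradScaler, autocast
except ImportError:
    # Older releases only ship the CUDA-specific variants
    from torch.cuda.amp import GradScaler, autocast
```

Note that the two `autocast` variants differ slightly in signature (`torch.amp.autocast` takes an explicit `device_type` argument), which is why the call sites also needed updating alongside the import.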
Additionally, the boolean variable `amp` (indicating whether to use automatic mixed precision training) was renamed to `use_amp` for clarity, since `amp` is also the name of the module.
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Improves AMP (Automatic Mixed Precision) integration with enhanced compatibility and functionality.
📊 Key Changes
- Falls back to `torch.cuda.amp` if `torch.amp` is not available (ensures compatibility across PyTorch versions).
- Replaced the `amp` variable with `use_amp` for better clarity and consistency.
- Updated gradient scaling (`GradScaler`) and automatic casting (`autocast`) for seamless device type support (e.g., CPU, GPU); a sketch of this pattern follows below.
🎯 Purpose & Impact
- Keeps training working across PyTorch versions, with or without `torch.amp`.
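To illustrate the device-type support mentioned in the Key Changes, here is a minimal, hypothetical sketch (not the PR's exact training-loop code) of how a `use_amp` flag combines with the device-agnostic API, assuming PyTorch >= 2.3 where `torch.amp.GradScaler` exists:

```python
import torch

# Hypothetical illustration of the pattern; not the PR's exact code.
device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"  # enable mixed precision only where supported

model = torch.nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# The torch.amp variants take the device type explicitly,
# unlike the older torch.cuda.amp ones.
scaler = torch.amp.GradScaler(device, enabled=use_amp)

x = torch.randn(4, 10, device=device)
y = torch.randint(0, 2, (4,), device=device)

with torch.amp.autocast(device_type=device, enabled=use_amp):
    loss = torch.nn.functional.cross_entropy(model(x), y)

scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)         # unscales gradients, then runs optimizer.step()
scaler.update()                # adjusts the scale factor for the next iteration
```

With `enabled=False`, both `GradScaler` and `autocast` become no-ops, so the same code path runs unchanged on CPU.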