Update PyTorch compatibility documentation

commit 407a9d4cb0
parent bb692dfd84
Author: Shao
Date: 2025-11-19 09:52:47 -07:00


@@ -399,18 +399,19 @@ with ROCm.
 **Note:** Only official release exists.

-Key features and enhancements for PyTorch 2.8 with ROCm 7.1
+Key features and enhancements for PyTorch 2.9 with ROCm 7.1.1
 ================================================================================

-- MIOpen deep learning optimizations: Further optimized NHWC BatchNorm feature.
-- Added float8 support for the DeepSpeed extension, allowing for decreased
-  memory footprint and increased throughput in training and inference workloads.
-- ``torch.nn.functional.scaled_dot_product_attention`` now calling optimized
-  flash attention kernel automatically.
+- Added OCP Micro-scaling Format (mx-fp8/mx-fp4) support for advanced precision training.
+- ``torch.backends.miopen.immediate`` flag to toggle MIOpen Immediate Mode independently of
+  deterministic and benchmark settings, providing finer control over convolution execution.
+- rocSOLVER now used for Cholesky inversion operations, providing improved numerical stability
+  and performance for linear algebra workloads.
+- MI355X GPU testing enabled in CI.

-Key features and enhancements for PyTorch 2.7/2.8 with ROCm 7.0
+Key features and enhancements for PyTorch 2.7/2.8 with ROCm 7.1.1
 ================================================================================

 - Enhanced TunableOp framework: Introduces ``tensorfloat32`` support for
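
For context, a minimal sketch (not part of this commit) of how the two user-facing PyTorch 2.9 items above surface in code, assuming a ROCm build of PyTorch 2.9; the module, tensor shapes, and surrounding settings are illustrative only:

```python
import torch

# Toggle MIOpen Immediate Mode on its own, leaving the benchmark and
# deterministic settings untouched (the independence the changelog describes).
torch.backends.miopen.immediate = True
torch.backends.cudnn.benchmark = False         # separate knob, unaffected
torch.use_deterministic_algorithms(False)      # likewise independent

# Convolution dispatched through MIOpen on a ROCm device ("cuda" maps to HIP).
conv = torch.nn.Conv2d(3, 16, kernel_size=3).to("cuda")
x = torch.randn(8, 3, 64, 64, device="cuda")
y = conv(x)

# Cholesky-based inversion, which per the changelog now routes to rocSOLVER.
a = torch.randn(4, 4, device="cuda", dtype=torch.float64)
spd = a @ a.mT + 4.0 * torch.eye(4, device="cuda", dtype=torch.float64)
L = torch.linalg.cholesky(spd)
spd_inv = torch.cholesky_inverse(L)
```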