mirror of
https://github.com/ROCm/ROCm.git
synced 2026-04-05 03:01:17 -04:00
Fix PyTorch Compatibility link and remove incomplete rows (#4195)
* fix pytorch-compatibility filename fix links
* remove incomplete rows in pytorch-compatibility
* fix broken refs
@@ -22,7 +22,7 @@ ROCm Version,6.3.1,6.3.0,6.2.4,6.2.2,6.2.1,6.2.0, 6.1.2, 6.1.1, 6.1.0, 6.0.2, 6.
 ,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908
 ,,,,,,,,,,,
 FRAMEWORK SUPPORT,.. _framework-support-compatibility-matrix-past-60:,,,,,,,,,,
-:doc:`PyTorch <../compatibility/pytorch-compatiblity>`,"2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13"
+:doc:`PyTorch <../compatibility/pytorch-compatibility>`,"2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13"
 :doc:`TensorFlow <rocm-install-on-linux:install/3rd-party/tensorflow-install>`,"2.17.0, 2.16.2, 2.15.1","2.17.0, 2.16.2, 2.15.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.14.0, 2.13.1, 2.12.1","2.14.0, 2.13.1, 2.12.1"
 :doc:`JAX <rocm-install-on-linux:install/3rd-party/jax-install>`,0.4.35,0.4.35,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26
 `ONNX Runtime <https://onnxruntime.ai/docs/build/eps.html#amd-migraphx>`_,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.14.1,1.14.1
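The framework-support rows above amount to a small lookup table keyed by ROCm release. A minimal sketch of reading it programmatically, with a few rows transcribed from the matrix above (the dict and helper name are illustrative, not part of any ROCm tooling):

```python
# Illustrative sketch: a slice of the framework-support matrix above,
# transcribed as a lookup table. The data is copied from the table;
# SUPPORTED_PYTORCH and pytorch_supported are hypothetical names.
SUPPORTED_PYTORCH = {
    "6.3.1": ["2.4", "2.3", "2.2", "2.1", "2.0", "1.13"],
    "6.3.0": ["2.4", "2.3", "2.2", "2.1", "2.0", "1.13"],
    "6.2.4": ["2.3", "2.2", "2.1", "2.0", "1.13"],
}

def pytorch_supported(rocm_version: str, torch_version: str) -> bool:
    """Return True if the matrix lists torch_version under rocm_version."""
    return torch_version in SUPPORTED_PYTORCH.get(rocm_version, [])

print(pytorch_supported("6.3.1", "2.4"))  # True per the matrix above
print(pytorch_supported("6.2.4", "2.4"))  # False: 2.4 first appears with 6.3.0
```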
@@ -47,7 +47,7 @@ compatibility and system requirements.
 ,gfx908,gfx908,gfx908
 ,,,
 FRAMEWORK SUPPORT,.. _framework-support-compatibility-matrix:,,
-:doc:`PyTorch <../compatibility/pytorch-compatiblity>`,"2.4, 2.3, 2.2, 1.13","2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13"
+:doc:`PyTorch <../compatibility/pytorch-compatibility>`,"2.4, 2.3, 2.2, 1.13","2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13"
 :doc:`TensorFlow <rocm-install-on-linux:install/3rd-party/tensorflow-install>`,"2.17.0, 2.16.2, 2.15.1","2.17.0, 2.16.2, 2.15.1","2.16.1, 2.15.1, 2.14.1"
 :doc:`JAX <rocm-install-on-linux:install/3rd-party/jax-install>`,0.4.35,0.4.35,0.4.26
 `ONNX Runtime <https://onnxruntime.ai/docs/build/eps.html#amd-migraphx>`_,1.17.3,1.17.3,1.17.3
@@ -576,14 +576,6 @@ PyTorch interacts with the CUDA or ROCm environment.
      - Globally enables or disables the PyTorch C++ implementation within SDPA.
      - 2.1
      - ❌
-   * - ``allow_fp16_bf16_reduction_math_sdp``
-     - Globally enables FP16 and BF16 precision for reduction operations within
-       SDPA.
-     - 2.1
-     -
-..
-   FIXME:
-   - Partial?
 
 .. Need to validate and extend.
 
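The flags in the table above follow a common "global backend toggle" pattern: a process-wide switch plus a scoped guard that restores the previous value. A minimal pure-Python sketch of that pattern (the flag and function names here are stand-ins, not the real `torch.backends.cuda` API):

```python
# Illustrative sketch of the global-toggle pattern the SDPA flags above
# follow: a module-level flag, a setter, and a context manager that
# restores the old value on exit. All names here are hypothetical.
from contextlib import contextmanager

_math_sdp_enabled = True  # stand-in for a process-wide backend flag

def enable_math_sdp(enabled: bool) -> None:
    global _math_sdp_enabled
    _math_sdp_enabled = enabled

def math_sdp_enabled() -> bool:
    return _math_sdp_enabled

@contextmanager
def sdp_flag(enabled: bool):
    """Temporarily set the flag, then restore the previous value."""
    previous = math_sdp_enabled()
    enable_math_sdp(enabled)
    try:
        yield
    finally:
        enable_math_sdp(previous)

with sdp_flag(False):
    print(math_sdp_enabled())  # False inside the scoped block
print(math_sdp_enabled())      # True again: previous value restored
```

Scoping the change this way avoids leaking a global setting into unrelated code, which matters because these toggles affect every subsequent call in the process.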
@@ -671,15 +663,6 @@ of computational resources and scalability for large-scale tasks.
        those on separate machines.
      - 1.8
      - 5.4
-   * - RPC Device Map Passing
-     - RPC Device Map Passing in PyTorch refers to a feature of the Remote
-       Procedure Call (RPC) framework that enables developers to control and
-       specify how tensors are transferred between devices during remote
-       operations. It allows fine-grained management of device placement when
-       sending tensors across nodes in distributed training or execution
-       scenarios.
-     - 1.9
-     -
    * - Gloo
      - Gloo is designed for multi-machine and multi-GPU setups, enabling
        efficient communication and synchronization between processes. Gloo is
@@ -687,24 +670,6 @@ of computational resources and scalability for large-scale tasks.
        (DDP) and RPC frameworks, alongside other backends like NCCL and MPI.
      - 1.0
      - 2.0
-   * - MPI
-     - MPI (Message Passing Interface) in PyTorch refers to the use of the MPI
-       backend for distributed communication in the ``torch.distributed`` module.
-       It enables inter-process communication, primarily in distributed
-       training settings, using the widely adopted MPI standard.
-     - 1.9
-     -
-   * - TorchElastic
-     - TorchElastic is a PyTorch library that enables fault-tolerant and
-       elastic training in distributed environments. It is designed to handle
-       dynamically changing resources, such as adding or removing nodes during
-       training, which is especially useful in cloud-based or preemptible
-       environments.
-     - 1.9
-     -
-
-..
-   FIXME: RPC Device Map Passing "Since ROCm version"
 
 torch.compiler
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
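The table rows above describe a trade-off between the ``torch.distributed`` backends: Gloo as the portable default used by DDP and RPC, NCCL (RCCL on ROCm) for GPU collectives, and MPI when PyTorch is built against an MPI runtime. A hedged sketch of that decision logic in plain Python (the chooser function is hypothetical, not a torch API):

```python
# Illustrative sketch of the backend trade-off described in the table
# above. choose_backend is a hypothetical helper, not torch.distributed
# API; the returned strings are the standard backend names.
def choose_backend(gpu_available: bool, mpi_available: bool) -> str:
    if gpu_available:
        return "nccl"   # GPU collectives (provided by RCCL on ROCm)
    if mpi_available:
        return "mpi"    # requires a PyTorch build with MPI support
    return "gloo"       # portable CPU default, used by DDP and RPC

print(choose_backend(gpu_available=True, mpi_available=False))   # nccl
print(choose_backend(gpu_available=False, mpi_available=True))   # mpi
print(choose_backend(gpu_available=False, mpi_available=False))  # gloo
```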