diff --git a/docs/deploy/docker.md b/docs/deploy/docker.md
index 86a6d9b11..ade97332e 100644
--- a/docs/deploy/docker.md
+++ b/docs/deploy/docker.md
@@ -4,9 +4,9 @@
Docker containers share the kernel with the host operating system, so the
ROCm kernel-mode driver must be installed on the host. Refer to
-[](/deploy/linux/install) for details. The other user-space parts
-(like the HIP-runtime or math libraries) of the ROCm stack will be loaded from
-the container image and don't need to be installed to the host.
+{ref}`using-the-package-manager` for instructions on installing
+`amdgpu-dkms`. The other user-space parts (like the HIP runtime or math
+libraries) of the ROCm stack are loaded from the container image and don't need
+to be installed on the host.
(docker-access-gpus-in-container)=
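In practice, once `amdgpu-dkms` is installed on the host, a container is granted
GPU access by exposing the kernel driver's device nodes. A minimal sketch (the
image name and flag set are illustrative; check the ROCm Docker documentation
for the exact invocation for your release):

```shell
# Expose the ROCm kernel driver (/dev/kfd) and the GPUs (/dev/dri) to the
# container; adding the video group lets the container open the device nodes.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  rocm/rocm-terminal   # illustrative image name
```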
diff --git a/docs/deploy/linux/install.md b/docs/deploy/linux/install.md
index abcb41414..ae201e525 100644
--- a/docs/deploy/linux/install.md
+++ b/docs/deploy/linux/install.md
@@ -405,6 +405,8 @@ Users installing multiple versions of the ROCm stack must use the
release-specific base URL.
```
+(using-the-package-manager)=
+
#### Using the Package Manager
::::::{tab-set}
diff --git a/docs/deploy/linux/prerequisites.md b/docs/deploy/linux/prerequisites.md
index a53b416e3..353e2c13f 100644
--- a/docs/deploy/linux/prerequisites.md
+++ b/docs/deploy/linux/prerequisites.md
@@ -98,6 +98,8 @@ To verify that your system has a ROCm-capable GPU, use these steps:
2. Verify from the output that the listed product names match with the Product
Id given in the table above.
+(setting_group_permissions)=
+
### Setting Permissions for Groups
This section provides steps to add any current user to a video group to access
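The group setup described in that section typically amounts to commands like the
following (a sketch; the exact groups required can vary by distribution and
kernel driver version):

```shell
# Add the current user to the render and video groups so non-root
# processes can open the GPU device nodes.
sudo usermod -a -G render,video $LOGNAME
# Log out and back in for the new group membership to take effect.
```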
diff --git a/docs/examples/troubleshooting.md b/docs/examples/troubleshooting.md
index dcb37e5e0..e56919059 100644
--- a/docs/examples/troubleshooting.md
+++ b/docs/examples/troubleshooting.md
@@ -7,13 +7,15 @@
hipErrorNoBinaryForGPU: Unable to find code object for all current devices!
```
-Ans: The error denotes that the installation of PyTorch and/or other dependencies or libraries do not support the current GPU.
+Ans: The error denotes that the installed PyTorch and/or other dependencies
+or libraries do not support the current GPU.
**Workaround:**
To implement a workaround, follow these steps:
-1. Confirm that the hardware supports the ROCm stack. Refer to the Hardware and Software Support document at [https://docs.amd.com](https://docs.amd.com).
+1. Confirm that the hardware supports the ROCm stack. Refer to
+{ref}`supported_gpus`.
2. Determine the gfx target.
@@ -29,16 +31,21 @@ To implement a workaround, follow these steps:
```
:::{note}
- Recompile PyTorch with the right gfx target if compiling from the source if the hardware is not supported. For wheels or Docker installation, contact ROCm support [^ROCm_issues].
+ If the hardware is not supported, recompile PyTorch with the right gfx
+ target when compiling from source. For wheels or Docker installations,
+ contact ROCm support [^ROCm_issues].
:::
**Q: Why am I unable to access Docker or GPU in user accounts?**
-Ans: Ensure that the user is added to docker, video, and render Linux groups as described in the ROCm Installation Guide at [https://docs.amd.com](https://docs.amd.com).
+Ans: Ensure that the user is added to the `docker`, `video`, and `render`
+Linux groups, as described in {ref}`setting_group_permissions`.
**Q: Can I install PyTorch directly on bare metal?**
-Ans: Bare-metal installation of PyTorch is supported through wheels. Refer to Option 2: Install PyTorch Using Wheels Package in the section [Installing PyTorch](/ROCm/docs/how_to/pytorch_install/pytorch_install) of this guide for more information.
+Ans: Bare-metal installation of PyTorch is supported through wheels. Refer to
+Option 2: Install PyTorch Using Wheels Package in the section
+{ref}`install_pytorch_using_wheels` of this guide for more information.
**Q: How do I profile PyTorch workloads?**
diff --git a/docs/how_to/deep_learning_rocm.md b/docs/how_to/deep_learning_rocm.md
index e388d4f86..f65b07851 100644
--- a/docs/how_to/deep_learning_rocm.md
+++ b/docs/how_to/deep_learning_rocm.md
@@ -4,7 +4,7 @@ The following sections cover the different framework installations for ROCm and
Deep Learning applications. {numref}`Rocm-Compat-Frameworks-Flowchart` provides
the sequential flow for the use of each framework. Refer to the ROCm Compatible
Frameworks Release Notes for each framework's most current release notes at
-[Framework Release Notes](https://docs.amd.com/bundle/ROCm-Compatible-Frameworks-Release-Notes/page/Framework_Release_Notes.html).
+{ref}`ml_framework_compat_matrix`.
```{figure} ../data/how_to/magma_install/image.005.png
:name: Rocm-Compat-Frameworks-Flowchart
diff --git a/docs/how_to/pytorch_install/pytorch_install.md b/docs/how_to/pytorch_install/pytorch_install.md
index 990f09f14..dcc6156c0 100644
--- a/docs/how_to/pytorch_install/pytorch_install.md
+++ b/docs/how_to/pytorch_install/pytorch_install.md
@@ -14,10 +14,12 @@ automatic differentiation. Other advanced features include:
### Installing PyTorch
-To install ROCm on bare metal, refer to the section
-[ROCm Installation](https://docs.amd.com/bundle/ROCm-Deep-Learning-Guide-v5.4-/page/Prerequisites.html#d2999e60).
-The recommended option to get a PyTorch environment is through Docker. However,
-installing the PyTorch wheels package on bare metal is also supported.
+To install ROCm on bare metal, refer to the sections
+[GPU and OS Support (Linux)](../../release/gpu_os_support.md) and
+[Compatibility](../../release/compatibility.md) for hardware, software, and
+3rd-party framework compatibility between ROCm and PyTorch. The recommended
+option to get a PyTorch environment is through Docker. However, installing the
+PyTorch wheels package on bare metal is also supported.
#### Option 1 (Recommended): Use Docker Image with PyTorch Pre-Installed
@@ -51,6 +53,8 @@ Follow these steps:
onto the container.
:::
+(install_pytorch_using_wheels)=
+
#### Option 2: Install PyTorch Using Wheels Package
PyTorch supports the ROCm platform by providing tested wheels packages. To
@@ -77,9 +81,9 @@ To install PyTorch using the wheels package, follow these installation steps:
b. Download a base OS Docker image and install ROCm following the
installation directions in the section
- [Installation](https://docs.amd.com/bundle/ROCm-Deep-Learning-Guide-v5.4-/page/Prerequisites.html#d2999e60).
- ROCm 5.2 is installed in this example, as supported by the installation
- matrix from .
+ [Installation](../../deploy/linux/install.md). ROCm 5.2 is installed in
+ this example, as supported by the installation matrix.
or
diff --git a/docs/how_to/tensorflow_install/tensorflow_install.md b/docs/how_to/tensorflow_install/tensorflow_install.md
index c9220d690..ae8f55bff 100644
--- a/docs/how_to/tensorflow_install/tensorflow_install.md
+++ b/docs/how_to/tensorflow_install/tensorflow_install.md
@@ -16,8 +16,8 @@ The following sections contain options for installing TensorFlow.
#### Option 1: Install TensorFlow Using Docker Image
To install ROCm on bare metal, follow the section
-[ROCm Installation](https://docs.amd.com/bundle/ROCm-Deep-Learning-Guide-v5.4-/page/Prerequisites.html#d2999e60).
-The recommended option to get a TensorFlow environment is through Docker.
+[Installation (Linux)](../../deploy/linux/install.md). The recommended option to
+get a TensorFlow environment is through Docker.
Using Docker provides portability and access to a prebuilt Docker container that
has been rigorously tested within AMD. This might also save compilation time and
diff --git a/docs/reference/openmp/openmp.md b/docs/reference/openmp/openmp.md
index c1205dd31..7de1755c4 100644
--- a/docs/reference/openmp/openmp.md
+++ b/docs/reference/openmp/openmp.md
@@ -156,6 +156,8 @@ For more details on tracing, refer to the ROCm Profiling Tools document on
The OpenMP programming model is greatly enhanced with the following new features
implemented in the past releases.
+(openmp_usm)=
+
### Unified Shared Memory
Unified Shared Memory (USM) provides a pointer-based approach to memory
diff --git a/docs/reference/rocmcc/rocmcc.md b/docs/reference/rocmcc/rocmcc.md
index 3085bb54c..0747f416b 100644
--- a/docs/reference/rocmcc/rocmcc.md
+++ b/docs/reference/rocmcc/rocmcc.md
@@ -666,9 +666,8 @@ The following OpenMP pragma is available on MI200, and it must be executed with
omp requires unified_shared_memory
```
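A file-scope sketch of how the pragma above is typically used (assumes an
offload-capable compiler and target flags, e.g. `-fopenmp --offload-arch=...`;
the program itself is illustrative, not from the guide):

```c
#include <stdio.h>

/* Declare that every target region in this translation unit relies on
 * unified shared memory; the directive must precede any target construct. */
#pragma omp requires unified_shared_memory

int main(void) {
    int x = 41;
    int *p = &x;            /* plain host pointer */
    /* With USM, p is dereferenceable inside the target region
     * without an explicit map clause. */
    #pragma omp target
    { *p += 1; }
    printf("x = %d\n", x);
    return 0;
}
```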
-For more details on
-[USM](https://docs.amd.com/bundle/OpenMP-Support-Guide-v5.4/page/OpenMP_Features.html#d90e61),
-refer to the OpenMP Support Guide at [https://docs.amd.com](https://docs.amd.com).
+For more details on USM, refer to the {ref}`openmp_usm` section of the OpenMP
+Guide.
### Support Status of Other Clang Options
diff --git a/docs/release/3rd_party_support_matrix.md b/docs/release/3rd_party_support_matrix.md
index 7e9ba5d82..f241a2b6a 100644
--- a/docs/release/3rd_party_support_matrix.md
+++ b/docs/release/3rd_party_support_matrix.md
@@ -4,6 +4,8 @@ ROCm™ supports various 3rd party libraries and frameworks. Supported versions
are tested and known to work. Non-supported versions of 3rd parties may also
work, but aren't tested.
+(ml_framework_compat_matrix)=
+
## Deep Learning
ROCm releases support the most recent and two prior releases of PyTorch and
diff --git a/docs/release/gpu_os_support.md b/docs/release/gpu_os_support.md
index 4241f75ea..893917f8d 100644
--- a/docs/release/gpu_os_support.md
+++ b/docs/release/gpu_os_support.md
@@ -24,6 +24,8 @@ ROCm supports virtualization for select GPUs only as shown below.
| VMWare | ESXi 8 | MI210 | Ubuntu 20.04 (`5.15.0-56-generic`), SLES 15 SP4 (`5.14.21-150400.24.18-default`) |
| VMWare | ESXi 7 | MI210 | Ubuntu 20.04 (`5.15.0-56-generic`), SLES 15 SP4 (`5.14.21-150400.24.18-default`) |
+(supported_gpus)=
+
## GPU Support Table
::::{tab-set}
diff --git a/docs/understand/file_reorg.md b/docs/understand/file_reorg.md
index 79c16587b..605ab2896 100644
--- a/docs/understand/file_reorg.md
+++ b/docs/understand/file_reorg.md
@@ -162,6 +162,6 @@ correct header file and use correct search paths.
## References
-[ROCm deprecation warning](https://docs.amd.com/bundle/ROCm-Release-Notes-v5.4.3/page/Deprecations_and_Warnings.html)
+{ref}`ROCm deprecation warning <5_4_0_filesystem_reorg_deprecation_notice>`
[Linux File System Standard](https://refspecs.linuxfoundation.org/fhs.shtml)
diff --git a/tools/autotag/templates/rocm_changes/5.1.0.md b/tools/autotag/templates/rocm_changes/5.1.0.md
index 4f87972f4..5738816f9 100644
--- a/tools/autotag/templates/rocm_changes/5.1.0.md
+++ b/tools/autotag/templates/rocm_changes/5.1.0.md
@@ -60,7 +60,8 @@ ROCDebugger Machine Interface (MI) extends support to lanes. The following enhan
- MI varobjs are now lane-aware.
-For more information, refer to the ROC Debugger User Guide at .
+For more information, refer to the ROC Debugger User Guide at
+{doc}`ROCgdb `.
##### Enhanced - clone-inferior Command
@@ -82,7 +83,7 @@ This release includes support for AMD Radeon™ Pro W6800, in addition to other
- Various other bug fixes and performance improvements
-For more information, see
+For more information, see {doc}`Documentation `.
#### Checkpoint Restore Support With CRIU
diff --git a/tools/autotag/templates/rocm_changes/5.2.0.md b/tools/autotag/templates/rocm_changes/5.2.0.md
index c6fd77ee8..fd72bfb13 100644
--- a/tools/autotag/templates/rocm_changes/5.2.0.md
+++ b/tools/autotag/templates/rocm_changes/5.2.0.md
@@ -271,7 +271,8 @@ The new APIs for virtual memory management are as follows:
hipError_t hipMemUnmap(void* ptr, size_t size);
```
-For more information, refer to the HIP API documentation at
+For more information, refer to the HIP API documentation at
+{doc}`hip:.doxygen/docBin/html/modules`.
##### Planned HIP Changes in Future Releases
@@ -287,7 +288,8 @@ This release introduces a new ROCm C++ library for accelerating mixed precision
rocWMMA is released as a header library and includes test and sample projects to validate and illustrate example usages of the C++ API. GEMM matrix multiplication is used as primary validation given the heavy precedent for the library. However, the usage portfolio is growing significantly and demonstrates different ways rocWMMA may be consumed.
-For more information, refer to .
+For more information, refer to
+[Communication Libraries](../../../../docs/reference/gpu_libraries/communication.md).
#### OpenMP Enhancements in This Release
diff --git a/tools/autotag/templates/rocm_changes/5.4.0.md b/tools/autotag/templates/rocm_changes/5.4.0.md
index 05b83bf46..5ca2e3e29 100644
--- a/tools/autotag/templates/rocm_changes/5.4.0.md
+++ b/tools/autotag/templates/rocm_changes/5.4.0.md
@@ -95,6 +95,8 @@ The `hipcc` and `hipconfig` Perl scripts are deprecated. In a future release, co
>
> There will be a transition period where the Perl scripts and compiled binaries are available before the scripts are removed. There will be no functional difference between the Perl scripts and their compiled binary counterpart. No user action is required. Once these are available, users can optionally switch to `hipcc.bin` and `hipconfig.bin`. The `hipcc`/`hipconfig` soft link will be assimilated to point from `hipcc`/`hipconfig` to the respective compiled binaries as the default option.
+(5_4_0_filesystem_reorg_deprecation_notice)=
+
##### Linux Filesystem Hierarchy Standard for ROCm
ROCm packages have adopted the Linux foundation filesystem hierarchy standard in this release to ensure ROCm components follow open source conventions for Linux-based distributions. While moving to a new filesystem hierarchy, ROCm ensures backward compatibility with its 5.1 version or older filesystem hierarchy. See below for a detailed explanation of the new filesystem hierarchy and backward compatibility.
@@ -205,9 +207,8 @@ The test was incorrectly using the `hipDeviceAttributePageableMemoryAccess` devi
`hipHostMalloc()` allocates memory with fine-grained access by default when the environment variable `HIP_HOST_COHERENT=1` is used.
-For more information, refer to the HIP Programming Guide at
+For more information, refer to the HIP Programming Guide at
+{doc}`hip:.doxygen/docBin/html/index`.
-
#### SoftHang with `hipStreamWithCUMask` test on AMD Instinct™