Mirror of https://github.com/ROCm/ROCm.git, synced 2026-01-09 14:48:06 -05:00
Remove links to docs.amd.com (#2200)

* Remove links to docs.amd.com
* Fix linking to list item (not possible)
This commit is contained in:
committed by GitHub
parent 2829c088c2
commit 5752b5986c
@@ -4,9 +4,9 @@
 Docker containers share the kernel with the host operating system, therefore the
 ROCm kernel-mode driver must be installed on the host. Please refer to
-[](/deploy/linux/install) for details. The other user-space parts
-(like the HIP-runtime or math libraries) of the ROCm stack will be loaded from
-the container image and don't need to be installed to the host.
+{ref}`using-the-package-manager` on installing `amdgpu-dkms`. The other
+user-space parts (like the HIP-runtime or math libraries) of the ROCm stack will
+be loaded from the container image and don't need to be installed to the host.
 
 (docker-access-gpus-in-container)=
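The `(docker-access-gpus-in-container)` anchor above leads into the GPU-passthrough steps. As a hedged sketch, exposing AMD GPUs to a container is commonly done by passing the KFD and DRI device nodes; the flags below follow the usual ROCm Docker guidance and the image name is illustrative:

```shell
# Expose the AMD GPU device nodes to the container; the kernel-mode
# driver (amdgpu-dkms) must already be installed on the host.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --security-opt seccomp=unconfined \
  --group-add video \
  rocm/rocm-terminal
```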
@@ -405,6 +405,8 @@ Users installing multiple versions of the ROCm stack must use the
 release-specific base URL.
 ```
 
+(using-the-package-manager)=
+
 #### Using the Package Manager
 
 ::::::{tab-set}
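The `(using-the-package-manager)` anchor added above targets the package-manager install steps. A minimal sketch for a Debian/Ubuntu host, assuming the release-specific repository has already been configured:

```shell
# Install only the kernel-mode driver needed by containers (see the
# Docker section); a reboot loads the new module.
sudo apt update
sudo apt install amdgpu-dkms
sudo reboot
```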
@@ -98,6 +98,8 @@ To verify that your system has a ROCm-capable GPU, use these steps:
 2. Verify from the output that the listed product names match with the Product
    Id given in the table above.
 
+(setting_group_permissions)=
+
 ### Setting Permissions for Groups
 
 This section provides steps to add any current user to a video group to access
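The group-permission steps referenced by the new `(setting_group_permissions)` anchor usually come down to one `usermod` invocation; a sketch, assuming the standard `video` and `render` group names:

```shell
# Add the current user to the video (and, on newer systems, render)
# groups; membership takes effect at the next login.
sudo usermod -a -G video,render $LOGNAME
groups $LOGNAME   # verify the new membership
```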
@@ -7,13 +7,15 @@
 hipErrorNoBinaryForGPU: Unable to find code object for all current devices!
 ```
 
-Ans: The error denotes that the installation of PyTorch and/or other dependencies or libraries do not support the current GPU.
+Ans: The error denotes that the installation of PyTorch and/or other
+dependencies or libraries do not support the current GPU.
 
 **Workaround:**
 
 To implement a workaround, follow these steps:
 
-1. Confirm that the hardware supports the ROCm stack. Refer to the Hardware and Software Support document at [https://docs.amd.com](https://docs.amd.com).
+1. Confirm that the hardware supports the ROCm stack. Refer to
+   {ref}`supported_gpus`.
 
 2. Determine the gfx target.
 
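Step 2 above ("Determine the gfx target") is typically done with `rocminfo`; the extraction pattern can be checked against a captured sample line even without a GPU present:

```shell
# On a machine with ROCm installed (illustrative invocation):
#   rocminfo | grep -o -m 1 'gfx[0-9a-f]*'
# The same pattern applied to a sample rocminfo output line:
echo "  Name:                    gfx90a" | grep -o 'gfx[0-9a-f]*'
```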
@@ -29,16 +31,21 @@ To implement a workaround, follow these steps:
    ```
 
    :::{note}
-   Recompile PyTorch with the right gfx target if compiling from the source if the hardware is not supported. For wheels or Docker installation, contact ROCm support [^ROCm_issues].
+   Recompile PyTorch with the right gfx target if compiling from the source if
+   the hardware is not supported. For wheels or Docker installation, contact
+   ROCm support [^ROCm_issues].
    :::
 
 **Q: Why am I unable to access Docker or GPU in user accounts?**
 
-Ans: Ensure that the user is added to docker, video, and render Linux groups as described in the ROCm Installation Guide at [https://docs.amd.com](https://docs.amd.com).
+Ans: Ensure that the user is added to docker, video, and render Linux groups as
+described in the ROCm Installation Guide at {ref}`setting_group_permissions`.
 
 **Q: Can I install PyTorch directly on bare metal?**
 
-Ans: Bare-metal installation of PyTorch is supported through wheels. Refer to Option 2: Install PyTorch Using Wheels Package in the section [Installing PyTorch](/ROCm/docs/how_to/pytorch_install/pytorch_install) of this guide for more information.
+Ans: Bare-metal installation of PyTorch is supported through wheels. Refer to
+Option 2: Install PyTorch Using Wheels Package in the section
+{ref}`install_pytorch_using_wheels` of this guide for more information.
 
 **Q: How do I profile PyTorch workloads?**
@@ -4,7 +4,7 @@ The following sections cover the different framework installations for ROCm and
 Deep Learning applications. {numref}`Rocm-Compat-Frameworks-Flowchart` provides
 the sequential flow for the use of each framework. Refer to the ROCm Compatible
 Frameworks Release Notes for each framework's most current release notes at
-[Framework Release Notes](https://docs.amd.com/bundle/ROCm-Compatible-Frameworks-Release-Notes/page/Framework_Release_Notes.html).
+{ref}`ml_framework_compat_matrix`.
 
 ```{figure} ../data/how_to/magma_install/image.005.png
 :name: Rocm-Compat-Frameworks-Flowchart
@@ -14,10 +14,12 @@ automatic differentiation. Other advanced features include:
 
 ### Installing PyTorch
 
-To install ROCm on bare metal, refer to the section
-[ROCm Installation](https://docs.amd.com/bundle/ROCm-Deep-Learning-Guide-v5.4-/page/Prerequisites.html#d2999e60).
-The recommended option to get a PyTorch environment is through Docker. However,
-installing the PyTorch wheels package on bare metal is also supported.
+To install ROCm on bare metal, refer to the sections
+[GPU and OS Support (Linux)](../../release/gpu_os_support.md) and
+[Compatibility](../../release/compatibility.md) for hardware, software and
+3rd-party framework compatibility between ROCm and PyTorch. The recommended
+option to get a PyTorch environment is through Docker. However, installing the
+PyTorch wheels package on bare metal is also supported.
 
 #### Option 1 (Recommended): Use Docker Image with PyTorch Pre-Installed
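Option 1 above amounts to pulling a prebuilt image and running it with GPU access; the tag and flags below are illustrative rather than a fixed recommendation:

```shell
# Pull a prebuilt ROCm PyTorch image and start it with GPU access.
docker pull rocm/pytorch:latest
docker run -it --device=/dev/kfd --device=/dev/dri \
  --group-add video rocm/pytorch:latest
```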
@@ -51,6 +53,8 @@ Follow these steps:
    onto the container.
    :::
 
+(install_pytorch_using_wheels)=
+
 #### Option 2: Install PyTorch Using Wheels Package
 
 PyTorch supports the ROCm platform by providing tested wheels packages. To
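Option 2 (the `install_pytorch_using_wheels` target) typically reduces to a pip command against the ROCm wheel index from pytorch.org; the index URL below assumes the ROCm 5.2 wheels mentioned in the next hunk:

```shell
# Install PyTorch wheels built for ROCm 5.2 (index URL follows the
# pytorch.org install matrix; adjust for the ROCm release in use).
pip3 install --upgrade pip
pip3 install torch torchvision \
  --extra-index-url https://download.pytorch.org/whl/rocm5.2/
```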
@@ -77,9 +81,9 @@ To install PyTorch using the wheels package, follow these installation steps:
 
    b. Download a base OS Docker image and install ROCm following the
       installation directions in the section
-      [Installation](https://docs.amd.com/bundle/ROCm-Deep-Learning-Guide-v5.4-/page/Prerequisites.html#d2999e60).
-      ROCm 5.2 is installed in this example, as supported by the installation
-      matrix from <http://pytorch.org/>.
+      [Installation](../../deploy/linux/install.md). ROCm 5.2 is installed in
+      this example, as supported by the installation matrix from
+      <http://pytorch.org/>.
 
    or
 
@@ -16,8 +16,8 @@ The following sections contain options for installing TensorFlow.
 #### Option 1: Install TensorFlow Using Docker Image
 
 To install ROCm on bare metal, follow the section
-[ROCm Installation](https://docs.amd.com/bundle/ROCm-Deep-Learning-Guide-v5.4-/page/Prerequisites.html#d2999e60).
-The recommended option to get a TensorFlow environment is through Docker.
+[Installation (Linux)](../../deploy/linux/install.md). The recommended option to
+get a TensorFlow environment is through Docker.
 
 Using Docker provides portability and access to a prebuilt Docker container that
 has been rigorously tested within AMD. This might also save compilation time and
@@ -156,6 +156,8 @@ For more details on tracing, refer to the ROCm Profiling Tools document on
 The OpenMP programming model is greatly enhanced with the following new features
 implemented in the past releases.
 
+(openmp_usm)=
+
 ### Unified Shared Memory
 
 Unified Shared Memory (USM) provides a pointer-based approach to memory
@@ -666,9 +666,8 @@ The following OpenMP pragma is available on MI200, and it must be executed with
 omp requires unified_shared_memory
 ```
 
-For more details on
-[USM](https://docs.amd.com/bundle/OpenMP-Support-Guide-v5.4/page/OpenMP_Features.html#d90e61),
-refer to the OpenMP Support Guide at [https://docs.amd.com](https://docs.amd.com).
+For more details on USM refer to the {ref}`openmp_usm` section of the OpenMP
+Guide.
 
 ### Support Status of Other Clang Options
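The `omp requires unified_shared_memory` pragma above needs XNACK-capable execution on MI200. A hedged sketch: the compiler flags and the `HSA_XNACK` variable are the commonly documented mechanism, and the source file name is hypothetical:

```shell
# Build an OpenMP offload program that declares USM, targeting MI200
# (gfx90a) with XNACK enabled, then run with XNACK turned on.
clang -fopenmp --offload-arch=gfx90a:xnack+ usm_app.c -o usm_app
HSA_XNACK=1 ./usm_app
```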
@@ -4,6 +4,8 @@ ROCm™ supports various 3rd party libraries and frameworks. Supported versions
 are tested and known to work. Non-supported versions of 3rd parties may also
 work, but aren't tested.
 
+(ml_framework_compat_matrix)=
+
 ## Deep Learning
 
 ROCm releases support the most recent and two prior releases of PyTorch and
@@ -24,6 +24,8 @@ ROCm supports virtualization for select GPUs only as shown below.
 | VMWare | ESXi 8 | MI210 | Ubuntu 20.04 (`5.15.0-56-generic`), SLES 15 SP4 (`5.14.21-150400.24.18-default`) |
 | VMWare | ESXi 7 | MI210 | Ubuntu 20.04 (`5.15.0-56-generic`), SLES 15 SP4 (`5.14.21-150400.24.18-default`) |
 
+(supported_gpus)=
+
 ## GPU Support Table
 
 ::::{tab-set}
@@ -162,6 +162,6 @@ correct header file and use correct search paths.
 
 ## References
 
-[ROCm deprecation warning](https://docs.amd.com/bundle/ROCm-Release-Notes-v5.4.3/page/Deprecations_and_Warnings.html)
+{ref}`ROCm deprecation warning <5_4_0_filesystem_reorg_deprecation_notice>`
 
 [Linux File System Standard](https://refspecs.linuxfoundation.org/fhs.shtml)
@@ -60,7 +60,8 @@ ROCDebugger Machine Interface (MI) extends support to lanes. The following enhan
 
 - MI varobjs are now lane-aware.
 
-For more information, refer to the ROC Debugger User Guide at <https://docs.amd.com>.
+For more information, refer to the ROC Debugger User Guide at
+{doc}`ROCgdb <rocgdb:index>`.
 
 ##### Enhanced - clone-inferior Command
@@ -82,7 +83,7 @@ This release includes support for AMD Radeon™ Pro W6800, in addition to other
 
 - Various other bug fixes and performance improvements
 
-For more information, see <https://docs.amd.com/bundle/MIOpen_gh-pages/page/releasenotes.html>
+For more information, see {doc}`Documentation <miopen:index>`.
 
 #### Checkpoint Restore Support With CRIU
@@ -271,7 +271,8 @@ The new APIs for virtual memory management are as follows:
 hipError_t hipMemUnmap(void* ptr, size_t size);
 ```
 
-For more information, refer to the HIP API documentation at <https://docs.amd.com/bundle/HIP_API_Guide/page/modules.html>
+For more information, refer to the HIP API documentation at
+{doc}`hip:.doxygen/docBin/html/modules`.
 
 ##### Planned HIP Changes in Future Releases
@@ -287,7 +288,8 @@ This release introduces a new ROCm C++ library for accelerating mixed precision
 
 rocWMMA is released as a header library and includes test and sample projects to validate and illustrate example usages of the C++ API. GEMM matrix multiplication is used as primary validation given the heavy precedent for the library. However, the usage portfolio is growing significantly and demonstrates different ways rocWMMA may be consumed.
 
-For more information, refer to <https://docs.amd.com/category/libraries>.
+For more information, refer to
+[Communication Libraries](../../../../docs/reference/gpu_libraries/communication.md).
 
 #### OpenMP Enhancements in This Release
@@ -95,6 +95,8 @@ The `hipcc` and `hipconfig` Perl scripts are deprecated. In a future release, co
 >
 > There will be a transition period where the Perl scripts and compiled binaries are available before the scripts are removed. There will be no functional difference between the Perl scripts and their compiled binary counterpart. No user action is required. Once these are available, users can optionally switch to `hipcc.bin` and `hipconfig.bin`. The `hipcc`/`hipconfig` soft link will be assimilated to point from `hipcc`/`hipconfig` to the respective compiled binaries as the default option.
 
+(5_4_0_filesystem_reorg_deprecation_notice)=
+
 ##### Linux Filesystem Hierarchy Standard for ROCm
 
 ROCm packages have adopted the Linux foundation filesystem hierarchy standard in this release to ensure ROCm components follow open source conventions for Linux-based distributions. While moving to a new filesystem hierarchy, ROCm ensures backward compatibility with its 5.1 version or older filesystem hierarchy. See below for a detailed explanation of the new filesystem hierarchy and backward compatibility.
@@ -205,9 +207,8 @@ The test was incorrectly using the `hipDeviceAttributePageableMemoryAccess` devi
 
 `hipHostMalloc()` allocates memory with fine-grained access by default when the environment variable `HIP_HOST_COHERENT=1` is used.
 
-For more information, refer to the HIP Programming Guide at
-<https://docs.amd.com/bundle/HIP-Programming-Guide-v5.4/page/Introduction_to_HIP_Programming_Guide.html>
+For more information, refer to {doc}`hip:.doxygen/docBin/html/index`.
 
 #### SoftHang with `hipStreamWithCUMask` test on AMD Instinct™