Compare commits


1 Commit

Author: Aditya Lad
SHA1: b52e5eebbe
Message: Merge pull request #1282 from RadeonOpenCompute/master ("Update 3.9 branch.")
Date: 2020-11-11 14:21:53 -08:00
198 changed files with 741 additions and 17367 deletions

.github/CODEOWNERS vendored (1 line changed)

@@ -1 +0,0 @@
* @saadrahim @Rmalavally @amd-aakash @zhang2amd @jlgreathouse @samjwu @MathiasMagnus


@@ -1,12 +0,0 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
  - package-ecosystem: "pip" # See documentation for possible values
    directory: "/docs/sphinx" # Location of package manifests
    open-pull-requests-limit: 10
    schedule:
      interval: "daily"


@@ -1,56 +0,0 @@
name: Linting
on:
  push:
    branches:
      - develop
      - main
  pull_request:
    branches:
      - develop
      - main
concurrency:
  group: ${{ github.ref }}-${{ github.workflow }}
  cancel-in-progress: true
jobs:
  lint-rest:
    name: "RestructuredText"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Install rst-lint
        run: pip install restructuredtext-lint
      - name: Lint ResT files
        run: rst-lint ${{ join(github.workspace, '/docs') }}
  lint-md:
    name: "Markdown"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Use markdownlint-cli2
        uses: DavidAnson/markdownlint-cli2-action@v10.0.1
        with:
          globs: '**/*.md'
  spelling:
    name: "Spelling"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Fetch config
        shell: sh
        run: |
          curl --silent --show-error --fail --location https://raw.github.com/RadeonOpenCompute/rocm-docs-core/develop/.spellcheck.yaml -O
          curl --silent --show-error --fail --location https://raw.github.com/RadeonOpenCompute/rocm-docs-core/develop/.wordlist.txt >> .wordlist.txt
      - name: Run spellcheck
        uses: rojopolis/spellcheck-github-actions@0.30.0
      - name: On fail
        if: failure()
        run: |
          echo "Please check for spelling mistakes or add them to '.wordlist.txt' in either the root of this project or in rocm-docs-core."

.gitignore vendored (18 lines changed)

@@ -1,18 +0,0 @@
.venv
.vscode
build
# documentation artifacts
_build/
_images/
_static/
_templates/
_toc.yml
docBin/
_doxygen/
_readthedocs/
# avoid duplicating contributing.md due to conf.py
docs/contributing.md
docs/release.md
docs/CHANGELOG.md


@@ -1,14 +0,0 @@
config:
  default: true
  MD013: false
  MD026:
    punctuation: '.,;:!'
  MD029:
    style: ordered
  MD033: false
  MD034: false
  MD041: false
ignores:
  - CHANGELOG.md
  - "{,docs/}{RELEASE,release}.md"
  - tools/autotag/templates/**/*.md


@@ -1,21 +0,0 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.10"
  apt_packages:
    - "doxygen"
    - "graphviz" # For dot graphs in doxygen
python:
  install:
    - requirements: docs/sphinx/requirements.txt
sphinx:
  configuration: docs/conf.py
formats: []


@@ -1,29 +0,0 @@
# isv_deployment_win
ABI
# gpu_aware_mpi
DMA
GDR
HCA
MPI
MVAPICH
Mellanox's
NIC
OFED
OSU
OpenFabrics
PeerDirect
RDMA
UCX
ib_core
# linear algebra
LAPACK
MMA
backends
cuSOLVER
cuSPARSE
# tuning_guides
BMC
DGEMM
HPCG
HPL
IOPM

Binary file not shown.

File diff suppressed because it is too large.


@@ -1,246 +0,0 @@
# Contributing to ROCm Docs
AMD values and encourages the ROCm community to contribute to our code and
documentation. This repository is focused on ROCm documentation, and this
contribution guide describes the recommended method for creating and modifying our
documentation.
While interacting with ROCm documentation, we encourage you to be polite and
respectful in your contributions, content or otherwise. Authors and maintainers of
these docs act with good intentions and to the best of their knowledge.
Keep that in mind while you engage. Should you have issues with contributing
itself, refer to
[discussions](https://github.com/RadeonOpenCompute/ROCm/discussions) on the
GitHub repository.
## Supported Formats
Our documentation includes both markdown and rst files. Markdown is encouraged
over rst due to the lower barrier to participation. GitHub flavored markdown is preferred
for all submissions as it will render accurately on our GitHub repositories. For existing documentation,
[MyST](https://myst-parser.readthedocs.io/en/latest/intro.html) markdown
is used to implement certain features unsupported in GitHub markdown. This is
not encouraged for new documentation. AMD will transition
to stricter use of GitHub flavored markdown with a few caveats. ROCm documentation
also uses [sphinx-design](https://sphinx-design.readthedocs.io/en/latest/index.html)
in our markdown and rst files. We also use Breathe syntax for Doxygen documentation
in our markdown files. Other design elements for effective HTML rendering of the documents
may be added to our markdown files. Please see
[GitHub](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github)'s
guide on writing and formatting on GitHub as a starting point.
ROCm documentation adds additional requirements to markdown and rst based files
as follows:
- Level one headers are only used for page titles. There must be only one level
one header per file, for both Markdown and reStructuredText.
- Pass the [markdownlint](https://github.com/markdownlint/markdownlint) check via
our automated GitHub Action on a Pull Request (PR).
## Filenames and folder structure
Please use snake case for file names. Our documentation follows the Pitchfork
layout for folder structure. All documentation is in /docs, except for special
files like the contributing guide in the / folder. All images used in the
documentation are placed in the /docs/data folder.
## How to provide feedback for ROCm documentation
There are three standard ways to provide feedback for this repository.
### Pull Request
All contributions to ROCm documentation should arrive via the
[GitHub Flow](https://docs.github.com/en/get-started/quickstart/github-flow)
targeting the develop branch of the repository. If you are unable to contribute
via the GitHub Flow, feel free to email us. TODO, confirm email address.
### GitHub Issue
Issues on existing or absent docs can be filed as [GitHub issues
](https://github.com/RadeonOpenCompute/ROCm/issues).
### Email Feedback
## Language and Style
We adopt the Microsoft CPP-Docs guidelines for [Voice and Tone
](https://github.com/MicrosoftDocs/cpp-docs/blob/main/styleguide/voice-tone.md).
ROCm documentation templates will be made public shortly. ROCm templates dictate
the recommended structure and flow of the documentation. Guidelines on how to
integrate figures, equations, and tables are all based on
[MyST](https://myst-parser.readthedocs.io/en/latest/intro.html).
Font size and selection, page layout, white space control, and other formatting
details are controlled via the rocm-docs-core Sphinx extension. Please raise
issues in rocm-docs-core for any formatting concerns and requested changes.
## Building Documentation
While contributing, one may build the documentation locally on the command line
or rely on Continuous Integration for previewing the resulting HTML pages in a
browser.
### Command line documentation builds
Python versions known to build documentation:
- 3.8
To build the docs locally using Python Virtual Environment (`venv`), execute the
following commands from the project root:
```sh
python3 -m venv .venv
# Windows
.venv/Scripts/python -m pip install -r docs/sphinx/requirements.txt
.venv/Scripts/python -m sphinx -T -E -b html -d _build/doctrees -D language=en docs _build/html
# Linux
.venv/bin/python -m pip install -r docs/sphinx/requirements.txt
.venv/bin/python -m sphinx -T -E -b html -d _build/doctrees -D language=en docs _build/html
```
Then open up `_build/html/index.html` in your favorite browser.
### Pull Requests documentation builds
When opening a PR to the `develop` branch on GitHub, the page corresponding to
the PR (`https://github.com/RadeonOpenCompute/ROCm/pull/<pr_number>`) will have
a summary at the bottom. This requires the user be logged in to GitHub.
- There, click `Show all checks` and `Details` of the Read the Docs pipeline. It
will take you to
`https://readthedocs.com/projects/advanced-micro-devices-rocm/builds/<some_build_num>/`
- The list of commands shown are the exact ones used by CI to produce a render
of the documentation.
- There, click on the small blue link `View docs` (which is not the same as the
bigger button with the same text). It will take you to the built HTML site with
a URL of the form
`https://advanced-micro-devices-demo--<pr_number>.com.readthedocs.build/projects/alpha/en/<pr_number>/`.
### Build the docs using VS Code
One can put together a productive environment to author documentation and also
test it locally using VS Code with only a handful of extensions. Even though the
extension landscape of VS Code is ever changing, here is one example setup that
proved useful at the time of writing. In it, one can change or add content, build
a new version of the docs using a single VS Code Task (or hotkey), see all errors
and warnings emitted by Sphinx in the Problems pane, and immediately see the
resulting website show up on a locally served web server.
#### Configuring VS Code
1. Install the following extensions:
- Python (ms-python.python)
- Live Server (ritwickdey.LiveServer)
2. Add the following entries in `.vscode/settings.json`
```json
{
  "liveServer.settings.root": "/.vscode/build/html",
  "liveServer.settings.wait": 1000,
  "python.terminal.activateEnvInCurrentTerminal": true
}
```
The settings in order are set for the following reasons:
- Sets the root of the output website for live previews. Must be changed
alongside the `tasks.json` command.
- Tells live server to wait with the update to give time for Sphinx to
regenerate site contents and not refresh before all is done. (Empirical value)
- Automatic virtual env activation is a nice touch, should you want to build
the site from the integrated terminal.
3. Add the following tasks in `.vscode/tasks.json`
```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Build Docs",
      "type": "process",
      "windows": {
        "command": "${workspaceFolder}/.venv/Scripts/python.exe"
      },
      "command": "${workspaceFolder}/.venv/bin/python3",
      "args": [
        "-m",
        "sphinx",
        "-j",
        "auto",
        "-T",
        "-b",
        "html",
        "-d",
        "${workspaceFolder}/.vscode/build/doctrees",
        "-D",
        "language=en",
        "${workspaceFolder}/docs",
        "${workspaceFolder}/.vscode/build/html"
      ],
      "problemMatcher": [
        {
          "owner": "sphinx",
          "fileLocation": "absolute",
          "pattern": {
            "regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):(\\d+):\\s+(WARNING|ERROR):\\s+(.*)$",
            "file": 1,
            "line": 2,
            "severity": 3,
            "message": 4
          }
        },
        {
          "owner": "sphinx",
          "fileLocation": "absolute",
          "pattern": {
            "regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):{1,2}\\s+(WARNING|ERROR):\\s+(.*)$",
            "file": 1,
            "severity": 2,
            "message": 3
          }
        }
      ],
      "group": {
        "kind": "build",
        "isDefault": true
      }
    }
  ]
}
```
> (Implementation detail: two problem matchers needed to be defined,
> because VS Code doesn't tolerate some problem information being potentially
> absent. While a single regex could match all types of errors, if a capture
> group remains empty (the line number doesn't show up in all warning/error
> messages) but the `pattern` references said empty capture group, VS Code
> discards the message completely.)
4. Configure Python virtual environment (venv)
- From the Command Palette, run `Python: Create Environment`
- Select `venv` environment and the `docs/sphinx/requirements.txt` file.
_(Simply pressing enter while hovering over the file from the dropdown is
insufficient, one has to select the radio button with the 'Space' key if
using the keyboard.)_
5. Build the docs
- Launch the default build Task using either:
- a hotkey _(default is 'Ctrl+Shift+B')_ or
- by issuing the `Tasks: Run Build Task` from the Command Palette.
6. Open the live preview
- Navigate to the output of the site within VS Code, right-click on
`.vscode/build/html/index.html` and select `Open with Live Server`. The
contents should update on every rebuild without having to refresh the
browser.
<!-- markdownlint-restore -->

LICENSE (21 lines changed)

@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (759 lines changed)

@@ -1,54 +1,705 @@
# AMD ROCm™ Platform
ROCm™ is an open-source stack for GPU computation. ROCm is primarily Open-Source
Software (OSS) that allows developers the freedom to customize and tailor their
GPU software for their own needs while collaborating with a community of other
developers, and helping each other find solutions in an agile, flexible, rapid
and secure manner.
ROCm is a collection of drivers, development tools and APIs enabling GPU
programming from the low-level kernel to end-user applications. ROCm is powered
by AMD's Heterogeneous-computing Interface for Portability (HIP), an OSS C++ GPU
programming environment and its corresponding runtime. HIP allows ROCm
developers to create portable applications on different platforms by deploying
code on a range of platforms, from dedicated gaming GPUs to exascale HPC
clusters. ROCm supports programming models such as OpenMP and OpenCL, and
includes all the necessary OSS compilers, debuggers and libraries. ROCm is fully
integrated into ML frameworks such as PyTorch and TensorFlow. ROCm can be
deployed in many ways, including through the use of containers such as Docker,
Spack, and your own build from source.
ROCm's goal is to allow our users to maximize their GPU hardware investment.
ROCm is designed to help develop, test and deploy GPU accelerated HPC, AI,
scientific computing, CAD, and other applications in a free, open-source,
integrated and secure software ecosystem.
This repository contains the manifest file for ROCm™ releases, changelogs, and
release information. The file default.xml contains information for all
repositories and the associated commit used to build the current ROCm release.
The default.xml file uses the repo Manifest format.
The develop branch of this repository contains content for the next
ROCm release.
## ROCm Documentation
ROCm Documentation is available online at
[rocm.docs.amd.com](https://rocm.docs.amd.com). Source code for the documentation
is located in the docs folder of most repositories that are part of ROCm.
### How to build documentation via Sphinx
```bash
cd docs
pip3 install -r sphinx/requirements.txt
python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html
```
## Older ROCm™ Releases
For release information for older ROCm™ releases, refer to
[CHANGELOG](./CHANGELOG.md).
# AMD ROCm™ Patch Release Notes v3.9.1
AMD recommends that users of RPM-based operating systems (RHEL, CentOS, and others) use the ROCm v3.9.1 patch release due to a potential compatibility issue with certain drivers.
# AMD ROCm Release Notes v3.9.0
This page describes the features, fixed issues, and information about downloading and installing the ROCm software.
It also covers known issues in this release.
- [Supported Operating Systems and Documentation Updates](#Supported-Operating-Systems-and-Documentation-Updates)
* [Supported Operating Systems](#Supported-Operating-Systems)
* [ROCm Installation Updates](#ROCm-Installation-Updates)
* [AMD ROCm Documentation Updates](#AMD-ROCm-Documentation-Updates)
- [What's New in This Release](#Whats-New-in-This-Release)
* [ROCm Compiler Enhancements](#ROCm-Compiler-Enhancements)
* [ROCm System Management Information](#ROCm-System-Management-Information)
* [ROCm Math and Communication Libraries](#ROCm-Math-and-Communication-Libraries)
* [ROCM AOMP Enhancements](#ROCm-AOMP-Enhancements)
- [Fixed Defects](#Fixed-Defects)
- [Known Issues](#Known-Issues)
- [Deprecations](#Deprecations)
- [Deploying ROCm](#Deploying-ROCm)
- [Hardware and Software Support](#Hardware-and-Software-Support)
- [Machine Learning and High Performance Computing Software Stack for AMD GPU](#Machine-Learning-and-High-Performance-Computing-Software-Stack-for-AMD-GPU)
* [ROCm Binary Package Structure](#ROCm-Binary-Package-Structure)
* [ROCm Platform Packages](#ROCm-Platform-Packages)
# Supported Operating Systems
## Support for Ubuntu 20.04.1
In this release, AMD ROCm extends support to Ubuntu 20.04.1, including v5.4 and v5.6-oem.
**Note**: AMD ROCm only supports Long Term Support (LTS) versions of Ubuntu. Versions other than LTS may work with ROCm, however, they are not officially supported.
## Support for SLES 15 SP2
This release extends support to SLES 15 SP2.
## List of Supported Operating Systems
The AMD ROCm platform is designed to support the following operating systems:
* Ubuntu 20.04.1 (5.4 and 5.6-oem) and 18.04.5 (Kernel 5.4)
* CentOS 7.8 & RHEL 7.8 (Kernel 3.10.0-1127) (Using devtoolset-7 runtime support)
* CentOS 8.2 & RHEL 8.2 (Kernel 4.18.0 ) (devtoolset is not required)
* SLES 15 SP2
# ROCm Installation Updates
## Fresh Installation of AMD ROCm v3.9 Recommended
A fresh and clean installation of AMD ROCm v3.9 is recommended. An upgrade from previous releases to AMD ROCm v3.9 is not supported.
For more information, refer to the AMD ROCm Installation Guide at:
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
**Note**: AMD ROCm release v3.3 or prior releases are not fully compatible with AMD ROCm v3.5 and higher versions. You must perform a fresh ROCm installation if you want to upgrade from AMD ROCm v3.3 or older to 3.5 or higher versions and vice-versa.
**Note**: *render group* is required only for Ubuntu v20.04. For all other ROCm supported operating systems, continue to use *video group*.
* For ROCm v3.5 and releases thereafter, the *clinfo* path is changed to */opt/rocm/opencl/bin/clinfo*.
* For ROCm v3.3 and older releases, the *clinfo* path remains unchanged: */opt/rocm/opencl/bin/x86_64/clinfo*.
## ROCm MultiVersion Installation Update
With the AMD ROCm v3.9 release, the following ROCm multi-version installation changes apply:
The meta packages rocm-dkms<version> are now deprecated for multi-version ROCm installs. For example, rocm-dkms3.7.0, rocm-dkms3.8.0.
* Multi-version installation of ROCm should be performed by installing rocm-dev<version> using each of the desired ROCm versions. For example, rocm-dev3.7.0, rocm-dev3.8.0, rocm-dev3.9.0.
* Version files must be created for each multi-version rocm <= 3.9.0
* command: echo <version> | sudo tee /opt/rocm-<version>/.info/version
* example: echo 3.9.0 | sudo tee /opt/rocm-3.9.0/.info/version
* The rock-dkms loadable kernel modules should be installed using a single rock-dkms package.
* ROCm v3.9 and above will not set any *ldconfig* entries for ROCm libraries for multi-version installation. Users must set *LD_LIBRARY_PATH* to load the ROCm library version of choice.
**NOTE**: The single version installation of the ROCm stack remains the same. The rocm-dkms package can be used for single version installs and is not deprecated at this time.
# AMD ROCm Documentation Updates
## AMD ROCm Installation Guide
The AMD ROCm Installation Guide in this release includes:
* Updated Supported Environments
* Multi-version Installation Instructions
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
## ROCm Compiler Documentation Updates
The ROCm Compiler documentation updates include:
* OpenMP Extras v12.9-0
* OpenMP-Extras Installation
* OpenMP-Extras Source Build
* AOMP-v11.9-0
* AOMP Source Build
For more information, see
https://rocmdocs.amd.com/en/latest/Programming_Guides/openmp_support.html
For the updated ROCm SMI API Guide, see
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_SMI_Manual_v3.9.pdf
## ROCm System Management Information
ROCM-SMI version: 1.4.1 | Kernel version: 5.6.20
* ROCm SMI and Command Line Interface
* ROCm SMI APIs for Compute Unit Occupancy
* Usage
* Optional Arguments
* Display Options
* Topology
* Pages Information
* Hardware-related Information
* Software-related/controlled information
* Set Options
* Reset Options
* Auto-response Options
* Output Options
For more information, refer to
https://rocmdocs.amd.com/en/latest/ROCm_System_Managment/ROCm-System-Managment.html#rocm-command-line-interface
For ROCm SMI API Guide, see
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_SMI_Manual_v3.9.pdf
## AMD ROCm - HIP Documentation Updates
* HIP Porting Guide CU_Pointer_Attribute_Memory_Type
For more information, refer to
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-porting-guide.html#hip-porting-guide
## General AMD ROCm Documentation Links
Access the following links for more information:
* For AMD ROCm documentation, see
https://rocmdocs.amd.com/en/latest/
* For installation instructions on supported platforms, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
* For AMD ROCm binary structure, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#build-amd-rocm
* For AMD ROCm Release History, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#amd-rocm-version-history
# What's New in This Release
## ROCm Compiler Enhancements
The ROCm compiler support in the llvm-amdgpu-12.0.dev-amd64.deb package is enhanced to include support for OpenMP. To utilize this support, the additional package openmp-extras_12.9-0_amd64.deb is required.
Note, by default, both packages are installed during the ROCm v3.9 installation. For information about ROCm installation, refer to the ROCm Installation Guide.
AMD ROCm supports the following compilers:
* C++ compiler - Clang++
* C compiler - Clang
* Flang - FORTRAN compiler (FORTRAN 2003 standard)
**NOTE**: All of the above-mentioned compilers support:
* OpenMP standard 4.5 and an evolving subset of the OpenMP 5.0 standard
* OpenMP computational offloading to the AMD GPUs
For more information about AMD ROCm compilers, see the Compiler Documentation section at,
https://rocmdocs.amd.com/en/latest/index.html
### Auxiliary Package Supporting OpenMP
The openmp-extras_12.9-0_amd64.deb auxiliary package supports OpenMP within the ROCm compiler. It contains OpenMP-specific header files, which are installed in /opt/rocm/llvm/include, as well as runtime libraries, Fortran runtime libraries, and device bitcode files in /opt/rocm/llvm/lib. The auxiliary package also includes examples in the /opt/rocm/llvm/examples folder.
**NOTE**: The optional AOMP package resides in /opt/rocm/aomp/bin/clang, and the ROCm compiler, which supports OpenMP for AMDGPU, is located in /opt/rocm/llvm/bin/clang.
### AOMP Optional Package Deprecation
Before the AMD ROCm v3.9 release, the optional AOMP package provided support for OpenMP. While AOMP is available in this release, the optional package may be deprecated from ROCm in the future. It is recommended you transition to the ROCm compiler or AOMP standalone releases for OpenMP support.
### Understanding ROCm Compiler OpenMP Support and AOMP OpenMP Support
The AOMP OpenMP support in ROCm v3.9 is based on the standalone AOMP v11.9-0, with LLVM v11 as the underlying system. However, the ROCm compiler's OpenMP support is based on LLVM v12 (upstream).
**NOTE**: Do not combine the object files from the two LLVM implementations. You must rebuild the application in its entirety using either the AOMP OpenMP or the ROCm OpenMP implementation.
### Example OpenMP Using the ROCm Compiler
```
$ cat helloworld.c
#include <stdio.h>
#include <omp.h>

int main(void) {
  int isHost = 1;

#pragma omp target map(tofrom: isHost)
  {
    isHost = omp_is_initial_device();
    printf("Hello world. %d\n", 100);
    for (int i = 0; i < 5; i++) {
      printf("Hello world. iteration %d\n", i);
    }
  }

  printf("Target region executed on the %s\n", isHost ? "host" : "device");
  return isHost;
}
$ /opt/rocm/llvm/bin/clang -O3 -target x86_64-pc-linux-gnu -fopenmp -fopenmp-targets=amdgcn-amd-amdhsa -Xopenmp-target=amdgcn-amd-amdhsa -march=gfx900 helloworld.c -o helloworld
$ export LIBOMPTARGET_KERNEL_TRACE=1
$ ./helloworld
DEVID: 0 SGN:1 ConstWGSize:256 args: 1 teamsXthrds:( 1X 256) reqd:( 1X 0) n:__omp_offloading_34_af0aaa_main_l7
Hello world. 100
Hello world. iteration 0
Hello world. iteration 1
Hello world. iteration 2
Hello world. iteration 3
Hello world. iteration 4
Target region executed on the device
```
For more examples, see */opt/rocm/llvm/examples*.
## ROCm System Management Information
The AMD ROCm v3.9 release consists of the following ROCm System Management Information (SMI) enhancements:
* Shows the hardware topology
* The ROCm-SMI showpids option shows per-process Compute Unit (CU) Occupancy, VRAM usage, and SDMA usage
* Support for GPU Reset Event and Thermal Throttling Event in ROCm-SMI Library
### ROCm-SMI Hardware Topology
The ROCm-SMI Command Line Interface (CLI) is enhanced with new options that show the GPU interconnect topology of the system, the relative distance between GPUs, and the closest NUMA (CPU) node for each GPU.
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/ROCMCLI1.PNG)
### Compute Unit Occupancy
The AMD ROCm stack now supports a user process in querying Compute Unit (CU) occupancy at a particular moment. This service can be accessed to determine if a process P is using sufficient compute units.
A periodic collection is used to build a compute unit occupancy profile for a workload.
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/ROCMCLI2.PNG)
ROCm supports this capability only on GFX9 devices. Users can access the functionality in two ways:
* indirectly from the SMI library
* directly via Sysfs
**NOTE**: On systems that have both GFX9 and non-GFX9 devices, users should interpret the compute unit (CU) occupancy value carefully as the service does not support non-GFX9 devices.
### Accessing Compute Unit Occupancy Indirectly
The ROCm System Management Interface (SMI) library provides a convenient interface to determine the CU occupancy for a process. To get the CU occupancy of a process reported in percentage terms, invoke the SMI interface using rsmi_compute_process_info_by_pid_get(). The value is reported through the member field cu_occupancy of struct rsmi_process_info_t.
```
/**
 * @brief Encodes information about a process
 * @cu_occupancy Compute Unit usage in percent
 */
typedef struct {
    /* ... other fields ... */
    uint32_t cu_occupancy;
} rsmi_process_info_t;

/**
 * API to get information about a process
 */
rsmi_status_t
rsmi_compute_process_info_by_pid_get(uint32_t pid,
                                     rsmi_process_info_t *proc);
```
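For illustration, the following minimal sketch queries the CU occupancy of one process through this interface. It is a hedged example: it assumes the rocm_smi library and its rocm_smi/rocm_smi.h header are installed, takes the target pid on the command line, and keeps error handling to a minimum.
```
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include "rocm_smi/rocm_smi.h"

int main(int argc, char **argv) {
  if (argc < 2) return 1;
  uint32_t pid = (uint32_t)strtoul(argv[1], NULL, 10);
  rsmi_process_info_t proc;

  if (rsmi_init(0) != RSMI_STATUS_SUCCESS) return 1;  /* bring up the SMI library */
  if (rsmi_compute_process_info_by_pid_get(pid, &proc) == RSMI_STATUS_SUCCESS)
    printf("CU occupancy for pid %u: %u%%\n", pid, proc.cu_occupancy);
  rsmi_shut_down();
  return 0;
}
```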
### Accessing Compute Unit Occupancy Directly Using SYSFS
The information provided by the SMI library is built from sysfs. For every valid device, the ROCm stack surfaces a file named cu_occupancy in sysfs. Users can read this file to determine how that device is being used by a particular workload. The general structure of the file path is /proc/<pid>/stats_<gpuid>/cu_occupancy
```
/**
* CU occupancy files for processes P1 and P2 on two devices with
* ids: 1008 and 2326
*/
/sys/devices/virtual/kfd/kfd/proc/<Pid_1>/stats_1008/cu_occupancy
/sys/devices/virtual/kfd/kfd/proc/<Pid_1>/stats_2326/cu_occupancy
/sys/devices/virtual/kfd/kfd/proc/<Pid_2>/stats_1008/cu_occupancy
/sys/devices/virtual/kfd/kfd/proc/<Pid_2>/stats_2326/cu_occupancy
// To get CU occupancy for a process P<i>
for each valid-device from device-list {
path-1 = Build path for cu_occupancy file;
path-2 = Build path for file Gpu-Properties;
cu_in_use += Open and Read the file path-1;
cu_total_cnt += Open and Read the file path-2;
}
cu_percent = ((cu_in_use * 100) / cu_total_cnt);
```
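Reading one of these files directly is a plain file read. The sketch below follows the layout shown above; the pid (1234) and device id (1008) in the path are hypothetical placeholders, and only GFX9 devices report meaningful values.
```
#include <stdio.h>

/* Read a single cu_occupancy value from sysfs; returns -1 on failure. */
static int read_cu_occupancy(const char *path) {
  int value = -1;
  FILE *f = fopen(path, "r");
  if (f) {
    if (fscanf(f, "%d", &value) != 1) value = -1;
    fclose(f);
  }
  return value;
}

int main(void) {
  /* Hypothetical pid and device id; substitute values from your system. */
  const char *path =
      "/sys/devices/virtual/kfd/kfd/proc/1234/stats_1008/cu_occupancy";
  printf("cu_occupancy: %d\n", read_cu_occupancy(path));
  return 0;
}
```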
### GPU Reset Event and Thermal Throttling Event
The ROCm-SMI library clients can now register for the following events:
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/ROCMCLI3.PNG)
## ROCm Math and Communication Libraries
### rocfft_execution_info_set_stream API
rocFFT is a software library for computing Fast Fourier Transforms (FFTs). It is part of AMD's software ecosystem based on ROCm. In addition to AMD GPU devices, the library can be compiled with the CUDA compiler using HIP tools to run on Nvidia GPU devices.
The rocfft_execution_info_set_stream API is a function to specify optional, additional information to control execution. This API sets the compute stream and must be invoked before the call to rocfft_execute. The compute stream is the underlying device queue/stream where the library computations are inserted.
#### PREREQUISITES
Using the compute stream API makes the following assumptions:
* The stream already exists in the program, and the program assigns work to the stream
* The stream must be of type hipStream_t. Note, it is an error to pass the address of a hipStream_t object
#### PARAMETERS
Input
* info: execution info handle
* stream: underlying compute stream
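Putting the prerequisites and parameters together, here is a hedged sketch of the call sequence. The plan, buffers, and stream are assumed to exist already, and error checking is omitted:
```
#include <hip/hip_runtime.h>
#include <rocfft.h>

/* Execute an existing rocFFT plan on a caller-provided HIP stream. */
void run_on_stream(rocfft_plan plan, void *in[], void *out[], hipStream_t stream) {
  rocfft_execution_info info = NULL;
  rocfft_execution_info_create(&info);
  rocfft_execution_info_set_stream(info, stream);  /* must precede rocfft_execute */
  rocfft_execute(plan, in, out, info);
  rocfft_execution_info_destroy(info);
}
```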
### Improved GEMM Performance
Currently, rocblas_gemm_ext2() supports the matrix multiplication D = alpha * A * B + beta * C, where the A, B, C, and D matrices are single-precision float, column-major, and non-transposed, except that the row stride of C may equal 0. This means the first row of C is broadcast M times in C:
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/GEMM2.PNG)
If an optimized kernel solution for a particular problem is not available, a slow fallback algorithm is used, and the first time a fallback algorithm is used, the following message is printed to standard error:
*"Warning: Using slow on-host algorithm, because it is not implemented in Tensile yet."*
**NOTE**: ROCBLAS_LAYER controls the logging of the calls. It is recommended to use logging with the rocblas_gemm_ext2() feature, to identify the precise parameters which are passed to it.
* Setting the ROCBLAS_LAYER environment variable to 2 will print the problem parameters as they are being executed.
* Setting the ROCBLAS_LAYER environment variable to 4 will collect all of the sizes, and print them out at the end of program execution.
For more logging information, refer to https://rocblas.readthedocs.io/en/latest/logging.html.
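As a small hedged sketch, a process can also opt into this logging before its first rocBLAS call by setting the variable with POSIX setenv; the placeholder comment stands in for real rocBLAS work:
```
#include <stdlib.h>

int main(void) {
  /* Equivalent to `export ROCBLAS_LAYER=2` before launching the program:
     print problem parameters as they are executed. A value of 4 instead
     collects all sizes and prints them at program exit. */
  setenv("ROCBLAS_LAYER", "2", 1);
  /* ... create a rocblas handle and call rocblas_gemm_ext2() here ... */
  return 0;
}
```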
### New Matrix Pruning Functions
In this release, the following new Matrix Pruning functions are introduced.
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/matrix.png)
### rocSOLVER General Matrix Singular Value Decomposition API
The rocSOLVER General Matrix Singular Value Decomposition (GESVD) API is now available in the AMD ROCm v3.9 release.
GESVD computes the Singular Values and, optionally, the Singular Vectors of a general m-by-n matrix A (Singular Value Decomposition).
The SVD of matrix A is given by:
```
A = U * S * V'
```
For more information, refer to
https://rocsolver.readthedocs.io/en/latest/userguide_api.html
## ROCm AOMP Enhancements
### AOMP v11.9-0
The source code base for this release is the upstream LLVM 11 monorepo release/11.x sources as of August 18, 2020, with the hash value
*1e6907f09030b636054b1c7b01de36f281a61fa2*
The llvm-project branch used to build this release is aomp11. In addition to the complete source tarball, the artifacts of this release include the file llvm-project.patch. This file shows the delta from the llvm-project upstream release/11.x. The size of this patch is XXXX lines in XXX files. These changes include support for the flang driver, OMPD support, and the hsa libomptarget plugin. The goal is to reduce this with continued upstreaming activity.
The changes for this release of AOMP are:
* Fix compiler warnings for build_project.sh and build_openmp.sh.
* Fix: [flang] The AOMP 11.7-1 Fortran compiler claims to support the -isystem flag, but ignores it.
* Fix: [flang] producing internal compiler error when a character is used with KIND.
* Fix: [flang] openmp map clause on complex allocatable expressions !$omp target data map( chunk%tiles(1)%field%density0).
* DeviceRTL memory footprint has been reduced from ~2.3GB to ~770MB for AMDGCN target.
* Workaround for red_bug_51 failing on gfx908.
* Switch to python3 for ompd and rocgdb.
* Now require cmake 3.13.4 to compile from source.
* Fix aompcc to accept file type cxx.
### AOMP v11.08-0
The source code base for this release is the upstream LLVM 11 monorepo release/11.x sources as of August 18, 2020, with the hash value
*aabff0f7d564b22600b33731e0d78d2e70d060b4*
The amd-llvm-project branch used to build this release is amd-stg-openmp. In addition to the complete source tarball, the artifacts of this release include the file llvm-project.patch. This file shows the delta from the llvm-project upstream release/11.x, which is currently at 32715 lines in 240 files. These changes include support for the flang driver, OMPD support, and the hsa libomptarget plugin. Our goal is to reduce this with continued upstreaming activity.
These are the major changes for this release of AOMP:
* Switch to the LLVM 11.x stable code base.
* OMPD updates for flang.
* To support debugging OpenMP, selected OpenMP runtime sources are included in lib-debug/src/openmp. The ROCgdb debugger will find these automatically.
* Threadsafe hsa plugin for libomptarget.
* Updates to support device libraries.
* Openmpi configure issue with real16 resolved.
* DeviceRTL memory use is now independent of number of openmp binaries.
* Startup latency on first kernel launch reduced by order of magnitude.
### AOMP v11.07-1
The source code base for this release is the upstream LLVM 11 monorepo development sources as of July 10, 2020, with the hash value *979c5023d3f0656cf51bd645936f52acd62b0333*. The amd-llvm-project branch used to build this release is amd-stg-openmp. In addition to the complete source tarball, the artifacts of this release include the file llvm-project.patch. This file shows the delta from the llvm-project upstream trunk, which is currently at 34121 lines in 277 files. Our goal is to reduce this with continued upstreaming activity.
* Inclusion of OMPD support which is not yet upstream
* Build of ROCgdb
* Host runtime optimization. GPU image information is now mostly read on the host instead of from the GPU.
* Fixed the source build scripts so that building from the source tarball does not fail because of missing test directories. This fixes issue #116.
# Fixed Defects
The following defects are fixed in this release:
* Random Soft Hang Observed When Running ResNet-Based Models
* (AOMP) Undefined Hidden Symbol Linker Error Causes Compilation Failure in HIP
* MIGraphx -> test_gpu_ops_test FAILED
* Unable to install RDC on CentOS/RHEL 7.8/8.2 & SLES
# Known Issues
The following are the known issues in this release.
## (AOMP) HIP EXAMPLE DEVICE_LIB FAILS TO COMPILE
The HIP example device_lib fails to compile and displays the following error:
*lld: error: undefined hidden symbol: inc_arrayval*
The recommended workaround is to use */opt/rocm/hip/bin/hipcc* to compile HIP applications.
## HIPFORT INSTALLATION FAILURE
Hipfort fails to install during the ROCm installation.
As a workaround, you may force install hipfort using the following instructions:
### Ubuntu
```
sudo apt-get -o Dpkg::Options::="--force-overwrite" install hipfort
```
### SLES
Zypper gives you an option to continue with the overwrite during the installation.
### CentOS
Download hipfort to a temporary location and force install with rpm:
```
yum install --downloadonly --downloaddir=/tmp/hipfort hipfort
rpm -i --replacefiles hipfort<package-version>
```
## MEMORY FAULT ACCESS ERROR DURING MEMORY TEST OF ROCM VALIDATION SUITE
When the ROCm Validation Suite (RVS) is installed using the prebuilt Debian/rpm package and run for the first time, the memory module displays the following error message:
“Memory access fault by GPU node-<x> (Agent handle: 0xa55170) on address 0x7fc268c00000. Reason: Page not present or supervisor privilege.
Aborted (core dumped)”
As a workaround, run the test again. Subsequent runs appear to fix the error.
**NOTE**: The error may appear after a system reboot. Run the test again to fix the issue.
Note, reinstallation of ROCm Validation Suite is not required.
# Deprecations
This section describes deprecations and removals in AMD ROCm.
## WARNING: COMPILER-GENERATED CODE OBJECT VERSION 2 DEPRECATION
Compiler-generated code object version 2 is no longer supported and will be removed shortly. AMD ROCm users must plan for the code object version 2 deprecation immediately.
Support for loading code object version 2 is also being deprecated with no announced removal release.
# Deploying ROCm
AMD hosts both Debian and RPM repositories for the ROCm v3.9.x packages.
For more information on ROCM installation on all platforms, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
# Hardware and Software Support
ROCm is focused on using AMD GPUs to accelerate computational tasks such as machine learning, engineering workloads, and scientific computing.
In order to focus our development efforts on these domains of interest, ROCm supports a targeted set of hardware configurations which are detailed further in this section.
#### Supported GPUs
Because the ROCm Platform has a focus on particular computational domains, we offer official support for a selection of AMD GPUs that are designed to offer good performance and price in these domains.
ROCm officially supports AMD GPUs that use the following chips:
* GFX8 GPUs
* "Fiji" chips, such as on the AMD Radeon R9 Fury X and Radeon Instinct MI8
* "Polaris 10" chips, such as on the AMD Radeon RX 580 and Radeon Instinct MI6
* GFX9 GPUs
* "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
* "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII
ROCm is a collection of software ranging from drivers and runtimes to libraries and developer tools.
Some of this software may work with more GPUs than the "officially supported" list above, though AMD does not make any official claims of support for these devices on the ROCm software platform.
The following list of GPUs are enabled in the ROCm software, though full support is not guaranteed:
* GFX8 GPUs
* "Polaris 11" chips, such as on the AMD Radeon RX 570 and Radeon Pro WX 4100
* "Polaris 12" chips, such as on the AMD Radeon RX 550 and Radeon RX 540
* GFX7 GPUs
* "Hawaii" chips, such as the AMD Radeon R9 390X and FirePro W9100
As described in the next section, GFX8 GPUs require PCI Express 3.0 (PCIe 3.0) with support for PCIe atomics. This requires both CPU and motherboard support. GFX9 GPUs require PCIe 3.0 with support for PCIe atomics by default, but they can operate in most cases without this capability.
The integrated GPUs in AMD APUs are not officially supported targets for ROCm.
As described [below](#limited-support), "Carrizo", "Bristol Ridge", and "Raven Ridge" APUs are enabled in our upstream drivers and the ROCm OpenCL runtime.
However, they are not enabled in the HIP runtime, and may not work due to motherboard or OEM hardware limitations.
As such, they are not yet officially supported targets for ROCm.
For a more detailed list of hardware support, please see [the following documentation](https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units).
#### Supported CPUs
As described above, GFX8 GPUs require PCIe 3.0 with PCIe atomics in order to run ROCm.
In particular, the CPU and every active PCIe point between the CPU and GPU require support for PCIe 3.0 and PCIe atomics.
The CPU root must indicate PCIe AtomicOp Completion capabilities and any intermediate switch must indicate PCIe AtomicOp Routing capabilities.
Current CPUs which support PCIe Gen3 + PCIe Atomics are:
* AMD Ryzen CPUs
* The CPUs in AMD Ryzen APUs
* AMD Ryzen Threadripper CPUs
* AMD EPYC CPUs
* Intel Xeon E7 v3 or newer CPUs
* Intel Xeon E5 v3 or newer CPUs
* Intel Xeon E3 v3 or newer CPUs
* Intel Core i7 v4, Core i5 v4, Core i3 v4 or newer CPUs (i.e. Haswell family or newer)
* Some Ivy Bridge-E systems
Beginning with ROCm 1.8, GFX9 GPUs (such as Vega 10) no longer require PCIe atomics.
We have similarly opened up more options for number of PCIe lanes.
GFX9 GPUs can now be run on CPUs without PCIe atomics and on older PCIe generations, such as PCIe 2.0.
This is not supported on GPUs below GFX9, e.g. GFX8 cards in the Fiji and Polaris families.
If you are using any PCIe switches in your system, please note that PCIe Atomics are only supported on some switches, such as Broadcom PLX.
When you install your GPUs, make sure you install them in a PCIe 3.1.0 x16, x8, x4, or x1 slot attached either directly to the CPU's Root I/O controller or via a PCIe switch directly attached to the CPU's Root I/O controller.
In our experience, many issues stem from trying to use consumer motherboards which provide physical x16 connectors that are electrically connected as e.g. PCIe 2.0 x4, PCIe slots connected via the Southbridge PCIe I/O controller, or PCIe slots connected through a PCIe switch that does
not support PCIe atomics.
If you attempt to run ROCm on a system without proper PCIe atomic support, you may see an error in the kernel log (`dmesg`):
```
kfd: skipped device 1002:7300, PCI rejects atomics
```
Experimental support for our Hawaii (GFX7) GPUs (Radeon R9 290, R9 390, FirePro W9100, S9150, S9170)
does not require or take advantage of PCIe Atomics. However, we still recommend that you use a CPU
from the list provided above for compatibility purposes.
#### Not supported or limited support under ROCm
##### Limited support
* ROCm 2.9.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
* Thunderbolt 1, 2, and 3 enabled breakout boxes should now be able to work with ROCm. Thunderbolt 1 and 2 are PCIe 2.0 based, and thus are only supported with GPUs that do not require PCIe 3.1.0 atomics (e.g. Vega 10). However, we have done no testing on this configuration and would need community support due to limited access to this type of equipment.
* AMD "Carrizo" and "Bristol Ridge" APUs are enabled to run OpenCL, but do not yet support HIP or our libraries built on top of these compilers and runtimes.
* As of ROCm 2.1, "Carrizo" and "Bristol Ridge" require the use of upstream kernel drivers.
* In addition, various "Carrizo" and "Bristol Ridge" platforms may not work due to OEM and ODM choices when it comes to key configurations parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
* Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.
* AMD "Raven Ridge" APUs are enabled to run OpenCL, but do not yet support HIP or our libraries built on top of these compilers and runtimes.
* As of ROCm 2.1, "Raven Ridge" requires the use of upstream kernel drivers.
* In addition, various "Raven Ridge" platforms may not work due to OEM and ODM choices when it comes to key configurations parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
* Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.
##### Not supported
* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported in ROCm 2.9.x
* We do not support GFX8-class GPUs (Fiji, Polaris, etc.) on CPUs that do not have PCIe 3.0 with PCIe atomics.
* As such, we do not support AMD Carrizo and Kaveri APUs as hosts for such GPUs.
* Thunderbolt 1 and 2 enabled GPUs are not supported by GFX8 GPUs on ROCm. Thunderbolt 1 & 2 are based on PCIe 2.0.
#### ROCm support in upstream Linux kernels
As of ROCm 1.9.0, the ROCm user-level software is compatible with the AMD drivers in certain upstream Linux kernels.
As such, users have the option of either using the ROCK kernel driver that is part of AMD's ROCm repositories or using the upstream driver and only installing ROCm user-level utilities from AMD's ROCm repositories.
These releases of the upstream Linux kernel support the following GPUs in ROCm:
* 4.17: Fiji, Polaris 10, Polaris 11
* 4.18: Fiji, Polaris 10, Polaris 11, Vega10
* 4.20: Fiji, Polaris 10, Polaris 11, Vega10, Vega 7nm
The upstream driver may be useful for running ROCm software on systems that are not compatible with the kernel driver available in AMD's repositories.
For users that have the option of using either AMD's or the upstreamed driver, there are various tradeoffs to take into consideration:
| | Using AMD's `rock-dkms` package | Using the upstream kernel driver |
| ---- | ------------------------------------------------------------| ----- |
| Pros | More GPU features, and they are enabled earlier | Includes the latest Linux kernel features |
| | Tested by AMD on supported distributions | May work on other distributions and with custom kernels |
| | Supported GPUs enabled regardless of kernel version | |
| | Includes the latest GPU firmware | |
| Cons | May not work on all Linux distributions or versions | Features and hardware support varies depending on kernel version |
| | Not currently supported on kernels newer than 5.4 | Limits GPU's usage of system memory to 3/8 of system memory (before 5.6). For 5.6 and beyond, both DKMS and upstream kernels allow use of 15/16 of system memory. |
| | | IPC and RDMA capabilities are not yet enabled |
| | | Not tested by AMD to the same level as `rock-dkms` package |
| | | Does not include most up-to-date firmware |
## Machine Learning and High Performance Computing Software Stack for AMD GPU
For an updated version of the software stack for AMD GPU, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#software-stack-for-amd-gpu


@@ -1,512 +0,0 @@
# Release Notes
<!-- Do not edit this file! This file is autogenerated with -->
<!-- tools/autotag/tag_script.py -->
<!-- Disable lints since this is an auto-generated file. -->
<!-- markdownlint-disable blanks-around-headers -->
<!-- markdownlint-disable no-duplicate-header -->
<!-- markdownlint-disable no-blanks-blockquote -->
<!-- markdownlint-disable ul-indent -->
<!-- markdownlint-disable no-trailing-spaces -->
<!-- spellcheck-disable -->
The release notes for the ROCm platform.
-------------------
## ROCm 5.2.0
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable no-duplicate-header -->
### What's New in This Release
#### HIP Enhancements
The ROCm v5.2 release consists of the following HIP enhancements:
##### HIP Installation Guide Updates
The HIP Installation Guide is updated to include building HIP tests from source on the AMD and NVIDIA platforms.
For more details, refer to the HIP Installation Guide v5.2.
##### Support for device-side malloc on HIP-Clang
HIP-Clang now supports device-side malloc. This implementation does not require the use of `hipDeviceSetLimit(hipLimitMallocHeapSize,value)` nor respect any setting. The heap is fully dynamic and can grow until the available free memory on the device is consumed.
The test codes at the following link show how to implement applications using malloc and free functions in device kernels:
<https://github.com/ROCm-Developer-Tools/HIP/blob/develop/tests/src/deviceLib/hipDeviceMalloc.cpp>
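As a minimal hedged sketch (distinct from the linked test code), a kernel can allocate and free its own scratch memory directly on the device:
```h
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void scratch_kernel() {
  // Device-side heap allocation; the heap grows with available device memory.
  int *buf = (int *)malloc(4 * sizeof(int));
  if (buf != nullptr) {
    for (int i = 0; i < 4; ++i) buf[i] = i * i;
    printf("thread %d: buf[3] = %d\n", (int)threadIdx.x, buf[3]);
    free(buf);  // device-side free
  }
}

int main() {
  hipLaunchKernelGGL(scratch_kernel, dim3(1), dim3(1), 0, 0);
  hipDeviceSynchronize();
  return 0;
}
```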
##### New HIP APIs in This Release
The following new HIP APIs are available in the ROCm v5.2 release. Note that this is a pre-official version (beta) release of the new APIs:
###### Device management HIP APIs
The new device management HIP APIs are as follows:
- Gets a UUID for the device
```h
hipError_t hipDeviceGetUuid(hipUUID* uuid, hipDevice_t device);
```
> **Note**
>
> This new API corresponds to the following CUDA API:
>
> ```h
> CUresult cuDeviceGetUuid(CUuuid* uuid, CUdevice dev);
> ```
- Gets default memory pool of the specified device
```h
hipError_t hipDeviceGetDefaultMemPool(hipMemPool_t* mem_pool, int device);
```
- Sets the current memory pool of a device
```h
hipError_t hipDeviceSetMemPool(int device, hipMemPool_t mem_pool);
```
- Gets the current memory pool for the specified device
```h
hipError_t hipDeviceGetMemPool(hipMemPool_t* mem_pool, int device);
```
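A hedged sketch combining these device management calls for device 0; return codes are unchecked, and the uuid bytes field is assumed to mirror the corresponding CUDA type:
```h
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
  hipDevice_t dev;
  hipDeviceGet(&dev, 0);

  hipUUID uuid;
  hipDeviceGetUuid(&uuid, dev);  // 16-byte device UUID
  for (int i = 0; i < 16; ++i)
    printf("%02x", (unsigned char)uuid.bytes[i]);
  printf("\n");

  hipMemPool_t pool;
  hipDeviceGetDefaultMemPool(&pool, 0);  // default memory pool of device 0
  hipDeviceSetMemPool(0, pool);          // set it as the current pool
  return 0;
}
```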
###### New HIP Runtime APIs in Memory Management
The new Stream Ordered Memory Allocator functions of HIP runtime APIs in memory management are as follows:
- Allocates memory with stream ordered semantics
```h
hipError_t hipMallocAsync(void** dev_ptr, size_t size, hipStream_t stream);
```
- Frees memory with stream ordered semantics
```h
hipError_t hipFreeAsync(void* dev_ptr, hipStream_t stream);
```
- Releases freed memory back to the OS
```h
hipError_t hipMemPoolTrimTo(hipMemPool_t mem_pool, size_t min_bytes_to_hold);
```
- Sets attributes of a memory pool
```h
hipError_t hipMemPoolSetAttribute(hipMemPool_t mem_pool, hipMemPoolAttr attr, void* value);
```
- Gets attributes of a memory pool
```h
hipError_t hipMemPoolGetAttribute(hipMemPool_t mem_pool, hipMemPoolAttr attr, void* value);
```
- Controls visibility of the specified pool between devices
```h
hipError_t hipMemPoolSetAccess(hipMemPool_t mem_pool, const hipMemAccessDesc* desc_list, size_t count);
```
- Returns the accessibility of a pool from a device
```h
hipError_t hipMemPoolGetAccess(hipMemAccessFlags* flags, hipMemPool_t mem_pool, hipMemLocation* location);
```
- Creates a memory pool
```h
hipError_t hipMemPoolCreate(hipMemPool_t* mem_pool, const hipMemPoolProps* pool_props);
```
- Destroys the specified memory pool
```h
hipError_t hipMemPoolDestroy(hipMemPool_t mem_pool);
```
- Allocates memory from a specified pool with stream ordered semantics
```h
hipError_t hipMallocFromPoolAsync(void** dev_ptr, size_t size, hipMemPool_t mem_pool, hipStream_t stream);
```
- Exports a memory pool to the requested handle type
```h
hipError_t hipMemPoolExportToShareableHandle(
    void* shared_handle,
    hipMemPool_t mem_pool,
    hipMemAllocationHandleType handle_type,
    unsigned int flags);
```
- Imports a memory pool from a shared handle
```h
hipError_t hipMemPoolImportFromShareableHandle(
hipMemPool_t* mem_pool,
void* shared_handle,
hipMemAllocationHandleType handle_type,
unsigned int flags);
```
- Exports data to share a memory pool allocation between processes
```h
hipError_t hipMemPoolExportPointer(hipMemPoolPtrExportData* export_data, void* dev_ptr);
```
- Imports a memory pool allocation from another process
```h
hipError_t hipMemPoolImportPointer(
    void** dev_ptr,
    hipMemPool_t mem_pool,
    hipMemPoolPtrExportData* export_data);
```
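To show how the basic allocator functions compose, the following minimal hedged sketch allocates, touches, and frees a buffer entirely in stream order:
```h
#include <hip/hip_runtime.h>

int main() {
  hipStream_t stream;
  hipStreamCreate(&stream);

  void *buf = nullptr;
  hipMallocAsync(&buf, 1 << 20, stream);    // allocation ordered on the stream
  hipMemsetAsync(buf, 0, 1 << 20, stream);  // later stream work may use it
  hipFreeAsync(buf, stream);                // free, also in stream order

  hipStreamSynchronize(stream);
  hipStreamDestroy(stream);
  return 0;
}
```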
###### HIP Graph Management APIs
The new HIP Graph Management APIs are as follows:
- Enqueues a host function call in a stream
```h
hipError_t hipLaunchHostFunc(hipStream_t stream, hipHostFn_t fn, void* userData);
```
- Swaps the stream capture mode of a thread
```h
hipError_t hipThreadExchangeStreamCaptureMode(hipStreamCaptureMode* mode);
```
- Sets a node attribute
```h
hipError_t hipGraphKernelNodeSetAttribute(hipGraphNode_t hNode, hipKernelNodeAttrID attr, const hipKernelNodeAttrValue* value);
```
- Gets a node attribute
```h
hipError_t hipGraphKernelNodeGetAttribute(hipGraphNode_t hNode, hipKernelNodeAttrID attr, hipKernelNodeAttrValue* value);
```
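For example, hipLaunchHostFunc can order a host callback after earlier stream work; the sketch below is hedged and omits error checking:
```h
#include <hip/hip_runtime.h>
#include <cstdio>

static void host_fn(void *userData) {
  printf("host callback: %s\n", (const char *)userData);
}

int main() {
  hipStream_t stream;
  hipStreamCreate(&stream);

  static const char msg[] = "prior stream work finished";
  hipLaunchHostFunc(stream, host_fn, (void *)msg);  // runs after earlier work

  hipStreamSynchronize(stream);
  hipStreamDestroy(stream);
  return 0;
}
```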
###### Support for Virtual Memory Management APIs
The new APIs for virtual memory management are as follows:
- Frees an address range reservation made via hipMemAddressReserve
```h
hipError_t hipMemAddressFree(void* devPtr, size_t size);
```
- Reserves an address range
```h
hipError_t hipMemAddressReserve(void** ptr, size_t size, size_t alignment, void* addr, unsigned long long flags);
```
- Creates a memory allocation described by the properties and size
```h
hipError_t hipMemCreate(hipMemGenericAllocationHandle_t* handle, size_t size, const hipMemAllocationProp* prop, unsigned long long flags);
```
- Exports an allocation to a requested shareable handle type
```h
hipError_t hipMemExportToShareableHandle(void* shareableHandle, hipMemGenericAllocationHandle_t handle, hipMemAllocationHandleType handleType, unsigned long long flags);
```
- Gets the access flags set for the given location and ptr
```h
hipError_t hipMemGetAccess(unsigned long long* flags, const hipMemLocation* location, void* ptr);
```
- Calculates either the minimal or recommended granularity
```h
hipError_t hipMemGetAllocationGranularity(size_t* granularity, const hipMemAllocationProp* prop, hipMemAllocationGranularity_flags option);
```
- Retrieves the property structure of the given handle
```h
hipError_t hipMemGetAllocationPropertiesFromHandle(hipMemAllocationProp* prop, hipMemGenericAllocationHandle_t handle);
```
- Imports an allocation from a requested shareable handle type
```h
hipError_t hipMemImportFromShareableHandle(hipMemGenericAllocationHandle_t* handle, void* osHandle, hipMemAllocationHandleType shHandleType);
```
- Maps an allocation handle to a reserved virtual address range
```h
hipError_t hipMemMap(void* ptr, size_t size, size_t offset, hipMemGenericAllocationHandle_t handle, unsigned long long flags);
```
- Maps or unmaps subregions of sparse HIP arrays and sparse HIP mipmapped arrays
```h
hipError_t hipMemMapArrayAsync(hipArrayMapInfo* mapInfoList, unsigned int count, hipStream_t stream);
```
- Releases a memory handle representing a memory allocation that was previously allocated through hipMemCreate
```h
hipError_t hipMemRelease(hipMemGenericAllocationHandle_t handle);
```
- Returns the allocation handle of the backing memory allocation given the address
```h
hipError_t hipMemRetainAllocationHandle(hipMemGenericAllocationHandle_t* handle, void* addr);
```
- Sets the access flags for each location specified in desc for the given virtual address range
```h
hipError_t hipMemSetAccess(void* ptr, size_t size, const hipMemAccessDesc* desc, size_t count);
```
- Unmaps memory allocation of a given address range
```h
hipError_t hipMemUnmap(void* ptr, size_t size);
```
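The typical lifecycle strings several of these calls together: reserve a virtual range, create and map a physical allocation, set access, then tear everything down. The following hedged sketch omits error checking and assumes the enum and struct field names from the HIP headers:
```h
#include <hip/hip_runtime.h>

int main() {
  hipMemAllocationProp prop = {};
  prop.type = hipMemAllocationTypePinned;
  prop.location.type = hipMemLocationTypeDevice;
  prop.location.id = 0;  // device 0

  size_t granularity = 0;
  hipMemGetAllocationGranularity(&granularity, &prop,
                                 hipMemAllocationGranularityMinimum);
  size_t size = granularity;  // one granule, for illustration

  void *ptr = nullptr;
  hipMemAddressReserve(&ptr, size, 0, nullptr, 0);  // reserve a virtual range

  hipMemGenericAllocationHandle_t handle;
  hipMemCreate(&handle, size, &prop, 0);  // create the physical allocation
  hipMemMap(ptr, size, 0, handle, 0);     // back the range with it

  hipMemAccessDesc desc = {};
  desc.location = prop.location;
  desc.flags = hipMemAccessFlagsProtReadWrite;
  hipMemSetAccess(ptr, size, &desc, 1);   // enable read/write from device 0

  // ... use ptr in kernels or copies ...

  hipMemUnmap(ptr, size);
  hipMemRelease(handle);
  hipMemAddressFree(ptr, size);
  return 0;
}
```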
For more information, refer to the HIP API documentation at
{doc}`hip:.doxygen/docBin/html/modules`.
##### Planned HIP Changes in Future Releases
Changes to `hipDeviceProp_t`, `HIPMEMCPY_3D`, and `hipArray` structures (and related HIP APIs) are planned in the next major release. These changes may impact backward compatibility.
Refer to the Release Notes document in subsequent releases for more information.
#### ROCm Math and Communication Libraries
In this release, ROCm Math and Communication Libraries consist of the following enhancements and fixes:
##### New rocWMMA for Matrix Multiplication and Accumulation Operations Acceleration
This release introduces a new ROCm C++ library for accelerating mixed precision matrix multiplication and accumulation (MFMA) operations leveraging specialized GPU matrix cores. rocWMMA provides a C++ API to facilitate breaking down matrix multiply accumulate problems into fragments and using them in block-wise operations that are distributed in parallel across GPU wavefronts. The API is a header library of GPU device code, meaning matrix core acceleration may be compiled directly into your kernel device code. This can benefit from compiler optimization in the generation of kernel assembly and does not incur additional overhead costs of linking to external runtime libraries or having to launch separate kernels.
rocWMMA is released as a header library and includes test and sample projects to validate and illustrate example usages of the C++ API. GEMM matrix multiplication is used as primary validation given the heavy precedent for the library. However, the usage portfolio is growing significantly and demonstrates different ways rocWMMA may be consumed.
For more information, refer to
[Communication Libraries](../../../../docs/reference/gpu_libraries/communication.md).
#### OpenMP Enhancements in This Release
##### OMPT Target Support
The OpenMP runtime in ROCm implements a subset of the OMPT device APIs, as described in the OpenMP specification document. These APIs allow first-party tools to examine the profiles and traces of kernels that execute on a device. A tool may register callbacks for data transfer and kernel dispatch entry points. A tool may use APIs to start and stop tracing for device-related activities such as data transfer and kernel dispatch timings and associated metadata. If device tracing is enabled, trace records for device activities are collected during program execution and returned to the tool using the APIs described in the specification.
The following example demonstrates how a tool would use the supported OMPT target APIs. The README in `/opt/rocm/llvm/examples/tools/ompt` outlines the steps to follow, and you can run the provided example as shown below:
```sh
cd /opt/rocm/llvm/examples/tools/ompt/veccopy-ompt-target-tracing
make run
```
The file `veccopy-ompt-target-tracing.c` simulates how a tool would initiate device activity tracing. The file `callbacks.h` shows the callbacks that may be registered and implemented by the tool.
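For reference, a first-party tool announces itself by defining `ompt_start_tool` and registers callbacks during initialization. The following minimal sketch uses the standard OMPT interfaces declared in `omp-tools.h`; the callback body is illustrative only:
```c
#include <omp-tools.h>
#include <stdio.h>

/* Illustrative callback: report each target region (kernel) begin/end. */
static void on_target(ompt_target_t kind, ompt_scope_endpoint_t endpoint,
                      int device_num, ompt_data_t *task_data,
                      ompt_id_t target_id, const void *codeptr_ra) {
  printf("target region %s on device %d\n",
         endpoint == ompt_scope_begin ? "begin" : "end", device_num);
}

static int my_init(ompt_function_lookup_t lookup, int initial_device_num,
                   ompt_data_t *tool_data) {
  /* Look up the runtime entry point and register the callback. */
  ompt_set_callback_t set_callback =
      (ompt_set_callback_t)lookup("ompt_set_callback");
  set_callback(ompt_callback_target, (ompt_callback_t)on_target);
  return 1;  /* nonzero keeps the tool active */
}

static void my_fini(ompt_data_t *tool_data) {}

/* The OpenMP runtime looks this symbol up at startup. */
ompt_start_tool_result_t *ompt_start_tool(unsigned int omp_version,
                                          const char *runtime_version) {
  static ompt_start_tool_result_t result = {my_init, my_fini, {0}};
  return &result;
}
```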
### Deprecations and Warnings
#### Linux Filesystem Hierarchy Standard for ROCm
ROCm packages have adopted the Linux Foundation Filesystem Hierarchy Standard (FHS) in this release to ensure ROCm components follow open source conventions for Linux-based distributions. While moving to the new filesystem hierarchy, ROCm maintains backward compatibility with its v5.1 and older filesystem hierarchy. See below for a detailed explanation of the new filesystem hierarchy and backward compatibility.
##### New Filesystem Hierarchy
The following is the new filesystem hierarchy:
```text
/opt/rocm-<ver>
| --bin
| --All externally exposed Binaries
| --libexec
| --<component>
| -- Component specific private non-ISA executables (architecture independent)
| --include
| -- <component>
| --<header files>
| --lib
| --lib<soname>.so -> lib<soname>.so.major -> lib<soname>.so.major.minor.patch
(public libraries linked with application)
| --<component> (component specific private library, executable data)
| --<cmake>
| --components
| --<component>.config.cmake
| --share
| --html/<component>/*.html
| --info/<component>/*.[pdf, md, txt]
| --man
| --doc
| --<component>
| --<licenses>
| --<component>
| --<misc files> (arch independent non-executable)
| --samples
```
> **Note**
>
> ROCm will not support backward compatibility with the v5.1 (old) filesystem hierarchy in its next major release.
For more information, refer to <https://refspecs.linuxfoundation.org/fhs.shtml>.
##### Backward Compatibility with Older Filesystems
ROCm has moved header files and libraries to their new locations as indicated in the above structure and has included symbolic links and wrapper header files in the old locations for backward compatibility.
> **Note**
>
> ROCm will continue supporting backward compatibility until the next major release.
##### Wrapper header files
Wrapper header files are placed in the old location (`/opt/rocm-xxx/<component>/include`) with a warning message to include files from the new location (`/opt/rocm-xxx/include`) as shown in the example below:
```h
// Code snippet from hip_runtime.h
#pragma message "This file is deprecated. Use the file from include path /opt/rocm-ver/include/ and prefix with hip."
#include "hip/hip_runtime.h"
```
Backward compatibility for the wrapper header files follows this deprecation schedule:
- `#pragma` message announcing deprecation -- ROCm v5.2 release
- `#pragma` message changed to `#warning` -- Future release
- `#warning` changed to `#error` -- Future release
- Backward compatibility wrappers removed -- Future release
##### Library files
Library files are available in the `/opt/rocm-xxx/lib` folder. For backward compatibility, the old library location (`/opt/rocm-xxx/<component>/lib`) has a soft link to the library at the new location.
Example:
```log
$ ls -l /opt/rocm/hip/lib/
total 4
drwxr-xr-x 4 root root 4096 May 12 10:45 cmake
lrwxrwxrwx 1 root root 24 May 10 23:32 libamdhip64.so -> ../../lib/libamdhip64.so
```
##### CMake Config files
All CMake configuration files are available in the `/opt/rocm-xxx/lib/cmake/<component>` folder. For backward compatibility, the old CMake locations (`/opt/rocm-xxx/<component>/lib/cmake`) contain a soft link to the new CMake config files.
Example:
```log
$ ls -l /opt/rocm/hip/lib/cmake/hip/
total 0
lrwxrwxrwx 1 root root 42 May 10 23:32 hip-config.cmake -> ../../../../lib/cmake/hip/hip-config.cmake
```
#### Planned deprecation of hip-rocclr and hip-base packages
In the ROCm v5.2 release, the `hip-rocclr` and `hip-base` packages (Debian and RPM) are planned for deprecation and will be removed in a future release. The `hip-runtime-amd` and `hip-dev(el)` packages will replace them, respectively. Users of `hip-rocclr` must install the two packages `hip-runtime-amd` and `hip-dev` to get the same set of packages previously installed by `hip-rocclr`.
Currently, both the old package names, `hip-rocclr` and `hip-base`, and the new ones, `hip-runtime-amd` and `hip-dev(el)`, are supported.
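For example, on a Debian-based system the replacement packages can be installed directly (on RPM-based systems, `hip-devel` replaces `hip-dev`):
```sh
sudo apt install hip-runtime-amd hip-dev
```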
#### Deprecation of Integrated HIP Directed Tests
The integrated HIP directed tests, which are currently built by default, are deprecated in this release. The default building and execution support through CMake will be removed in a future release.
### Fixed Defects
| Fixed Defect | Fix |
|------------------------------------------------------------------------------|----------|
| ROCmInfo does not list GPUs                                                    | Code fix |
| Hang observed while restoring cooperative group samples | Code fix |
| ROCM-SMI over SRIOV: Unsupported commands do not return proper error message | Code fix |
### Known Issues
This section consists of known issues in this release.
#### Compiler Error on gfx1030 When Compiling at -O0
##### Issue
A compiler error occurs when using the `-O0` flag to compile code for gfx1030 that calls `atomicAddNoRet`, which is defined in `amd_hip_atomic.h`. The compiler generates an illegal instruction for gfx1030.
##### Workaround
The workaround is not to use the `-O0` flag for this case. At higher optimization levels, the compiler does not generate an invalid instruction.
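For instance, compiling the affected code at `-O1` or above avoids the issue (an illustrative command; the source file name is hypothetical):
```sh
hipcc --offload-arch=gfx1030 -O1 my_kernel.cpp -o my_app
```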
#### System Freeze Observed During CUDA Memtest Checkpoint
##### Issue
Checkpoint/Restore in Userspace (CRIU) requires approximately 20 MB of VRAM to checkpoint and restore. The CRIU process may freeze if the maximum amount of available VRAM is allocated to checkpoint applications.
##### Workaround
To use CRIU to checkpoint and restore your application, limit the amount of VRAM the application uses to ensure at least 20 MB is available.
#### HPC test fails with the “HSA_STATUS_ERROR_MEMORY_FAULT” error
##### Issue
The compiler may incorrectly compile a program that uses the `__shfl_sync(mask, value, srcLane)` function when the `value` parameter to the function is undefined along some path to the function. For most functions, uninitialized inputs cause undefined behavior, but the definition for `__shfl_sync` should allow for undefined values.
##### Workaround
The workaround is to initialize the parameters to `__shfl_sync`.
> **Note**
>
> When the `-Wall` compilation flag is used, the compiler generates a warning indicating that the variable may be uninitialized along some path.
Example:
```cpp
double res = 0.0; // Initialize the input to __shfl_sync.
if (lane == 0) {
    res = <some expression>;
}
res = __shfl_sync(mask, res, 0);
```
#### Kernel produces incorrect result
##### Issue
In recent changes to Clang, insertion of the `noundef` attribute on all function arguments has been enabled by default.
In the HIP kernel, the variable `var` in `shfl_sync` may not be initialized, so LLVM IR treats it as `undef`.
Thus, a function argument that is potentially `undef` (because it is not initialized) is nonetheless assumed to be `noundef` by LLVM IR (since Clang has inserted the `noundef` attribute). This leads to ambiguous kernel execution.
##### Workaround
- Skip adding the `noundef` attribute to functions tagged with the `convergent` attribute. Refer to <https://reviews.llvm.org/D124158> for more information.
- Introduce a shuffle attribute and add it to `__shfl`-like APIs in the HIP headers. Clang can then skip adding the `noundef` attribute if it finds the argument tagged with the shuffle attribute. Refer to <https://reviews.llvm.org/D125378> for more information.
- Introduce a Clang builtin for `__shfl` to identify it and skip adding the `noundef` attribute.
- Introduce `__builtin_freeze` to use on the relevant arguments in library wrappers. The library/header needs to insert freezes on the relevant inputs.
#### Issue with Applications Triggering Oversubscription
There is a known issue with applications that trigger oversubscription. A hardware hang occurs when ROCgdb is used on AMD Instinct™ MI50 and MI100 systems.
This issue is under investigation and will be fixed in a future release.
BIN ROCm_SMI_Manual_v3.9.pdf (new file; binary content not shown)
View File
@@ -1,7 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<remote name="roc-github"
fetch="https://github.com/RadeonOpenCompute/" />
fetch="http://github.com/RadeonOpenCompute/" />
<remote name="rocm-devtools"
fetch="https://github.com/ROCm-Developer-Tools/" />
<remote name="rocm-swplat"
@@ -12,16 +12,16 @@ fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
fetch="https://github.com/GPUOpen-Tools/" />
<remote name="KhronosGroup"
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-5.5.1"
remote="roc-github"
sync-c="true"
sync-j="4" />
<default revision="refs/tags/rocm-3.9.0"
remote="roc-github"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCM-->
<project name="ROCK-Kernel-Driver" />
<project name="ROCT-Thunk-Interface" />
<project name="ROCR-Runtime" />
<project name="rocm_smi_lib" />
<project name="rocm-core" />
<project name="ROC-smi" />
<project name="rocm_smi_lib" remote="roc-github" />
<project name="rocm-cmake" />
<project name="rocminfo" />
<project name="rocprofiler" remote="rocm-devtools" />
@@ -31,49 +31,54 @@ fetch="https://github.com/KhronosGroup/" />
<project name="clang-ocl" />
<!--HIP Projects-->
<project name="HIP" remote="rocm-devtools" />
<project name="hipamd" remote="rocm-devtools" />
<project name="HIP-Examples" remote="rocm-devtools" />
<project name="ROCclr" remote="rocm-devtools" />
<project name="HIPIFY" remote="rocm-devtools" />
<project name="HIPCC" remote="rocm-devtools" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm-project" />
<project name="llvm-project" path="llvm_amd-stg-open" />
<project name="ROCm-Device-Libs" />
<project name="atmi" />
<project name="ROCm-CompilerSupport" />
<project name="rocr_debug_agent" remote="rocm-devtools" />
<project name="rocm_bandwidth_test" />
<project name="half" remote="rocm-swplat" revision="37742ce15b76b44e4b271c1e66d13d2fa7bd003e" />
<project name="RCP" remote="gpuopen-tools" revision="3a49405a1500067c49d181844ec90aea606055bb" />
<!-- gdb projects -->
<project name="ROCgdb" remote="rocm-devtools" />
<project name="ROCdbgapi" remote="rocm-devtools" />
<!-- ROCm Libraries -->
<project name="rdc" />
<project groups="mathlibs" name="rocBLAS" remote="rocm-swplat" />
<project groups="mathlibs" name="Tensile" remote="rocm-swplat" />
<project groups="mathlibs" name="hipBLAS" remote="rocm-swplat" />
<project groups="mathlibs" name="rocFFT" remote="rocm-swplat" />
<project groups="mathlibs" name="hipFFT" remote="rocm-swplat" />
<project groups="mathlibs" name="rocRAND" remote="rocm-swplat" />
<project groups="mathlibs" name="rocSPARSE" remote="rocm-swplat" />
<project groups="mathlibs" name="rocSOLVER" remote="rocm-swplat" />
<project groups="mathlibs" name="hipSOLVER" remote="rocm-swplat" />
<project groups="mathlibs" name="hipSPARSE" remote="rocm-swplat" />
<project groups="mathlibs" name="rocALUTION" remote="rocm-swplat" />
<project name="rocBLAS" remote="rocm-swplat" />
<project name="hipBLAS" remote="rocm-swplat" />
<project name="rocFFT" remote="rocm-swplat" />
<project name="rocRAND" remote="rocm-swplat" />
<project name="rocSPARSE" remote="rocm-swplat" />
<project name="rocSOLVER" remote="rocm-swplat" />
<project name="hipSPARSE" remote="rocm-swplat" />
<project name="rocALUTION" remote="rocm-swplat" />
<project name="MIOpenGEMM" remote="rocm-swplat" />
<project name="MIOpen" remote="rocm-swplat" />
<project groups="mathlibs" name="rccl" remote="rocm-swplat" />
<project name="rccl" remote="rocm-swplat" />
<project name="MIVisionX" remote="gpuopen-libs" />
<project groups="mathlibs" name="rocThrust" remote="rocm-swplat" />
<project groups="mathlibs" name="hipCUB" remote="rocm-swplat" />
<project groups="mathlibs" name="rocPRIM" remote="rocm-swplat" />
<project groups="mathlibs" name="rocWMMA" remote="rocm-swplat" />
<project name="rocThrust" remote="rocm-swplat" />
<project name="hipCUB" remote="rocm-swplat" />
<project name="rocPRIM" remote="rocm-swplat" />
<project name="hipfort" remote="rocm-swplat" />
<project name="AMDMIGraphX" remote="rocm-swplat" />
<project name="ROCmValidationSuite" remote="rocm-devtools" />
<!-- Projects for AOMP -->
<project name="ROCT-Thunk-Interface" path="aomp/roct-thunk-interface" remote="roc-github" />
<project name="ROCR-Runtime" path="aomp/rocr-runtime" remote="roc-github" />
<project name="ROCm-Device-Libs" path="aomp/rocm-device-libs" remote="roc-github" />
<project name="ROCm-CompilerSupport" path="aomp/rocm-compilersupport" remote="roc-github" />
<project name="rocminfo" path="aomp/rocminfo" remote="roc-github" />
<project name="HIP" path="aomp/hip-on-vdi" remote="rocm-devtools" />
<project name="aomp" path="aomp/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="aomp/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="aomp/flang" remote="rocm-devtools" />
<project name="amd-llvm-project" path="aomp/amd-llvm-project" remote="rocm-devtools" />
<project name="ROCclr" path="aomp/vdi" remote="rocm-devtools" />
<project name="ROCm-OpenCL-Runtime" path="aomp/opencl-on-vdi" remote="roc-github" />
<!-- Projects for OpenMP-Extras -->
<project name="aomp" path="openmp-extras/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="openmp-extras/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="openmp-extras/flang" remote="rocm-devtools" />
<project name="aomp" path="openmp-extras/aomp" remote="rocm-devtools" revision="refs/tags/rocm-uc-3.9.0" />
<project name="aomp-extras" path="openmp-extras/aomp-extras" remote="rocm-devtools" revision="refs/tags/rocm-uc-3.9.0" />
<project name="flang" path="openmp-extras/flang" remote="rocm-devtools" revision="refs/tags/rocm-uc-3.9.0" />
</manifest>
View File
@@ -1,6 +0,0 @@
# 404 Page Not Found
Page could not be found.
Return to [home](./index) or use the links in the sidebar to find what you are looking for.
View File
@@ -1,74 +0,0 @@
# About ROCm Documentation
ROCm documentation is made available under open source [licenses](licensing.md).
Documentation is built using open source toolchains. Contributions to our documentation are encouraged and welcome. As a contributor, please familiarize yourself with our documentation toolchain.
## ReadTheDocs
[ReadTheDocs](https://docs.readthedocs.io/en/stable/) is the front end for our documentation, that is, the tool that serves our HTML-based documentation to end users.
## Doxygen
[Doxygen](https://www.doxygen.nl/) is the most common inline code documentation standard. ROCm projects use Doxygen for public API documentation (unless the upstream project uses a different tool).
## Sphinx
[Sphinx](https://www.sphinx-doc.org/en/master/) is a documentation generator originally used for Python. It is now widely used in the open source community. Sphinx originally supported RST-based documentation; Markdown support is now available. ROCm documentation plans to default to Markdown for new projects. Existing projects using RST are under no obligation to convert to Markdown. New projects that believe Markdown is not suitable should contact the documentation team prior to selecting RST.
### MyST
[Markedly Structured Text (MyST)](https://myst-tools.org/docs/spec) is an extended
flavor of Markdown ([CommonMark](https://commonmark.org/)) influenced by reStructuredText (RST) and Sphinx.
It is integrated via [`myst-parser`](https://myst-parser.readthedocs.io/en/latest/).
A cheat sheet that showcases how to use the MyST syntax is available at [the Jupyter reference](https://jupyterbook.org/en/stable/reference/cheatsheet.html).
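For example, with MyST's colon-fence extension enabled, a Sphinx admonition and a cross-reference role can be written directly in Markdown (a small illustration):
```md
:::{note}
This admonition is written in MyST Markdown, and roles such as
{doc}`index` cross-reference other documents.
:::
```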
### Sphinx Theme
ROCm is using the
[Sphinx Book Theme](https://sphinx-book-theme.readthedocs.io/en/latest/). This theme is used by Jupyter Books. ROCm documentation applies some customizations, including a header and footer, on top of the Sphinx Book Theme. A custom ROCm theme will be part of our future documentation goals.
### Sphinx Design
Sphinx Design is an extension for Sphinx-based websites that adds design functionality. Please see its documentation [here](https://sphinx-design.readthedocs.io/en/latest/index.html). ROCm documentation uses Sphinx Design for grids, cards, and synchronized tabs. Other features may be used in the future.
### Sphinx External TOC
ROCm uses the
[sphinx-external-toc](https://sphinx-external-toc.readthedocs.io/en/latest/intro.html)
for our navigation. This tool allows the left navigation menu to be defined in a YAML file. It was selected for its flexibility, which allows scripts to operate on the YAML file. Please transition to this file for your project's navigation. See the `_toc.yml.in` file in the docs/sphinx folder of this repository for an example.
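Below is a trimmed, illustrative sketch of the YAML schema that sphinx-external-toc consumes; the file entries are hypothetical:
```yaml
root: index
subtrees:
  - caption: Deploy
    entries:
      - file: deploy/linux/index
      - file: deploy/docker
```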
### Breathe
Sphinx uses [Breathe](https://www.breathe-doc.org/) to integrate Doxygen
content.
## `rocm-docs-core` pip package
[rocm-docs-core](https://github.com/RadeonOpenCompute/rocm-docs-core) is an AMD-maintained project that applies customizations for our documentation. It is the tool most ROCm repositories will use as part of the documentation build.
View File
@@ -1,76 +0,0 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
import shutil
from rocm_docs import ROCmDocs
shutil.copy2('../CONTRIBUTING.md','./contributing.md')
shutil.copy2('../RELEASE.md','./release.md')
# Keep capitalization due to similar linking on GitHub's markdown preview.
shutil.copy2('../CHANGELOG.md','./CHANGELOG.md')
# configurations for PDF output by Read the Docs
project = "ROCm Documentation"
author = "Advanced Micro Devices, Inc."
copyright = "Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved."
version = "5.2.0"
release = "5.2.0"
setting_all_article_info = True
all_article_info_os = ["linux"]
all_article_info_author = ""
# pages with specific settings
article_pages = [
{"file":"deploy/linux/index", "os":["linux"]},
{"file":"deploy/linux/install_overview", "os":["linux"]},
{"file":"deploy/linux/prerequisites", "os":["linux"]},
{"file":"deploy/linux/quick_start", "os":["linux"]},
{"file":"deploy/linux/install", "os":["linux"]},
{"file":"deploy/linux/upgrade", "os":["linux"]},
{"file":"deploy/linux/uninstall", "os":["linux"]},
{"file":"deploy/linux/package_manager_integration", "os":["linux"]},
{"file":"deploy/docker", "os":["linux"]},
{"file":"release/gpu_os_support", "os":["linux"]},
{"file":"release/docker_support_matrix", "os":["linux"]},
{"file":"reference/gpu_libraries/communication", "os":["linux"]},
{"file":"reference/ai_tools", "os":["linux"]},
{"file":"reference/management_tools", "os":["linux"]},
{"file":"reference/validation_tools", "os":["linux"]},
{"file":"reference/framework_compatibility/framework_compatibility", "os":["linux"]},
{"file":"reference/computer_vision", "os":["linux"]},
{"file":"how_to/deep_learning_rocm", "os":["linux"]},
{"file":"how_to/gpu_aware_mpi", "os":["linux"]},
{"file":"how_to/magma_install/magma_install", "os":["linux"]},
{"file":"how_to/pytorch_install/pytorch_install", "os":["linux"]},
{"file":"how_to/system_debugging", "os":["linux"]},
{"file":"how_to/tensorflow_install/tensorflow_install", "os":["linux"]},
{"file":"examples/machine_learning", "os":["linux"]},
{"file":"examples/inception_casestudy/inception_casestudy", "os":["linux"]},
{"file":"understand/file_reorg", "os":["linux"]},
{"file":"understand/isv_deployment_win", "os":["windows"]},
]
external_toc_path = "./sphinx/_toc.yml"
docs_core = ROCmDocs("ROCm 5.2.0 Documentation Home")
docs_core.setup()
external_projects_current_project = "rocm"
for sphinx_var in ROCmDocs.SPHINX_VARS:
globals()[sphinx_var] = getattr(docs_core, sphinx_var)
html_theme_options = {
"link_main_doc": False
}
View File
@@ -1,90 +0,0 @@
# Deploy ROCm Docker containers
## Prerequisites
Docker containers share the kernel with the host operating system; therefore, the ROCm kernel-mode driver must be installed on the host. Please refer to {ref}`using-the-package-manager` for installing `amdgpu-dkms`. The other user-space parts of the ROCm stack (such as the HIP runtime or math libraries) will be loaded from the container image and don't need to be installed on the host.
(docker-access-gpus-in-container)=
## Accessing GPUs in containers
To access GPUs in a container (to run applications using HIP, OpenCL, or OpenMP offloading), explicit access to the GPUs must be granted.
The ROCm runtimes make use of multiple device files:
- `/dev/kfd`: the main compute interface shared by all GPUs
- `/dev/dri/renderD<node>`: direct rendering interface (DRI) devices for each
GPU. **`<node>`** is a number for each card in the system starting from 128.
Exposing these devices to a container is done by using the [`--device`](https://docs.docker.com/engine/reference/commandline/run/#device) option. For example, to allow access to all GPUs, expose `/dev/kfd` and all `/dev/dri/renderD` devices:
```shell
docker run --device /dev/kfd --device /dev/dri/renderD128 --device /dev/dri/renderD129 ...
```
More conveniently, instead of listing all devices, the entire `/dev/dri` folder
can be exposed to the new container:
```shell
docker run --device /dev/kfd --device /dev/dri
```
Note that this gives more access than strictly required, as it also exposes the
other device files found in that folder to the container.
(docker-restrict-gpus)=
### Restricting a container to a subset of the GPUs
If a `/dev/dri/renderD` device is not exposed to a container, then the container cannot use the GPU associated with it; this allows restricting a container to any subset of devices.
For example, to allow the container to access the first and third GPUs, start it as follows:
```shell
docker run --device /dev/kfd --device /dev/dri/renderD128 --device /dev/dri/renderD130 <image>
```
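To confirm that a container sees only the intended devices, one option is to run `rocm-smi` from inside it; a sketch, assuming the `rocm/rocm-terminal` image from Docker Hub includes the utility:
```shell
docker run --rm --device /dev/kfd --device /dev/dri/renderD128 --device /dev/dri/renderD130 \
    rocm/rocm-terminal rocm-smi
```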
### Additional Options
The performance of an application can vary depending on the assignment of GPUs and CPUs to the task. Typically, `numactl` is installed as part of many HPC applications to provide GPU/CPU mappings. The following Docker runtime option supports memory mapping and can improve performance:
```shell
--security-opt seccomp=unconfined
```
This option is recommended for Docker containers running HPC applications.
```shell
docker run --device /dev/kfd --device /dev/dri --security-opt seccomp=unconfined ...
```
## Docker images in the ROCm ecosystem
### Base images
<https://github.com/RadeonOpenCompute/ROCm-docker> hosts images useful for users
wishing to build their own containers leveraging ROCm. The built images are
available from [Docker Hub](https://hub.docker.com/u/rocm). In particular
`rocm/rocm-terminal` is a small image with the prerequisites to build HIP
applications, but does not include any libraries.
### Applications
AMD provides pre-built images for various GPU-ready applications through its
Infinity Hub at <https://www.amd.com/en/technologies/infinity-hub>.
Examples for invoking each application and suggested parameters used for
benchmarking are also provided there.
View File
@@ -1,53 +0,0 @@
# Deploy ROCm on Linux
Start with {doc}`/deploy/linux/quick_start` or follow the detailed
instructions below.
## Prepare to Install
::::{grid} 1 1 2 2
:gutter: 1
:::{grid-item-card} Prerequisites
:link: prerequisites
:link-type: doc
The prerequisites page lists the required steps *before* installation.
:::
:::{grid-item-card} Install Choices
:link: install_overview
:link-type: doc
Package manager vs AMDGPU Installer
Standard Packages vs Multi-Version Packages
:::
::::
## Choose your install method
::::{grid} 1 1 2 2
:gutter: 1
:::{grid-item-card} Package Manager
:link: os-native/index
:link-type: doc
Directly use your distribution's package manager to install ROCm.
:::
:::{grid-item-card} AMDGPU Installer
:link: installer/index
:link-type: doc
Use an installer tool that orchestrates changes via the package
manager.
:::
::::
## See Also
- {doc}`/release/gpu_os_support`
View File
@@ -1,71 +0,0 @@
# ROCm Installation Options (Linux)
Users installing ROCm must choose between various installation options. A new
user should follow the [Quick Start guide](./quick_start).
## Package Manager versus AMDGPU Installer?
ROCm supports two methods for installation:
- Directly using the Linux distribution's package manager
- The `amdgpu-install` script
There is no difference in the final installation state when choosing either
option.
Using the distribution's package manager lets the user install, upgrade, and uninstall using familiar commands and workflows. Third-party ecosystem support is the same as with your OS package manager.
The `amdgpu-install` script is a wrapper around the package manager; it installs the same packages as the package manager method.
The installer automates the installation process for the AMDGPU
and ROCm stack. It handles the complete installation process
for ROCm, including setting up the repository, cleaning the system, updating,
and installing the desired drivers and meta-packages. Users who are
less familiar with the package manager can choose this method for ROCm
installation.
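As an illustration, the two commands below lead to comparable installed states on an Ubuntu system; the pairing of use case and meta-package is approximate, and both names are described later in this guide:
```shell
# Directly via the distribution's package manager
sudo apt install rocm-hip-sdk

# Via the amdgpu-install wrapper
sudo amdgpu-install --usecase=hiplibsdk
```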
(installation-types)=
## Single Version ROCm install versus Multi-Version
ROCm packages are versioned with both package-specific semantic versioning and a ROCm release version.
### Single-version Installation
The single-version ROCm installation refers to the following:
- Installation of a single instance of the ROCm release on a system
- Use of non-versioned ROCm meta-packages
### Multi-version Installation
The multi-version installation refers to the following:
- Installation of multiple instances of the ROCm stack on a system. Extending
the package name and its dependencies with the release version adds the
ability to support multiple versions of packages simultaneously.
- Use of versioned ROCm meta-packages.
```{attention}
ROCm packages that were previously installed from a single-version installation
must be removed before proceeding with the multi-version installation to avoid
conflicts.
```
```{note}
Multi-version install is not available for the kernel driver module, also referred to as AMDGPU.
```
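The difference shows up in the meta-package names; a sketch with illustrative release numbers:
```shell
# Single-version: non-versioned meta-package
sudo apt install rocm-hip-sdk

# Multi-version: meta-packages suffixed with the ROCm release
sudo apt install rocm-hip-sdk5.1.3 rocm-hip-sdk5.2.0
```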
The following image demonstrates the difference between single-version and
multi-version ROCm installation types:
```{figure-md} install-types
<img src="/data/deploy/linux/image.001.png" alt="">
ROCm Installation Types
```
View File
@@ -1,31 +0,0 @@
# AMDGPU Install Script
::::{grid} 2 3 3 3
:gutter: 1
:::{grid-item-card} Install
:link: install
:link-type: doc
How to install ROCm?
:::
:::{grid-item-card} Upgrade
:link: upgrade
:link-type: doc
Instructions for upgrading an existing ROCm installation.
:::
:::{grid-item-card} Uninstall
:link: uninstall
:link-type: doc
Steps for removing ROCm packages libraries and tools.
:::
::::
## See Also
- {doc}`/release/gpu_os_support`
View File
@@ -1,331 +0,0 @@
# Installation with install script
Prior to beginning, please ensure you have the [prerequisites](../prerequisites)
installed.
## Download the Installer Script
To download and install the `amdgpu-install` script on the system, use the
following commands based on your distribution.
::::::{tab-set}
:::::{tab-item} Ubuntu
:sync: ubuntu
::::{tab-set}
:::{tab-item} Ubuntu 18.04
:sync: ubuntu-18.04
```shell
sudo apt update
wget https://repo.radeon.com/amdgpu-install/22.20/ubuntu/bionic/amdgpu-install_22.20.50200-1_all.deb
sudo apt install ./amdgpu-install_22.20.50200-1_all.deb
```
:::
:::{tab-item} Ubuntu 20.04
:sync: ubuntu-20.04
```shell
sudo apt update
wget https://repo.radeon.com/amdgpu-install/22.20/ubuntu/focal/amdgpu-install_22.20.50200-1_all.deb
sudo apt install ./amdgpu-install_22.20.50200-1_all.deb
```
:::
::::
:::::
:::::{tab-item} Red Hat Enterprise Linux
:sync: RHEL
::::{tab-set}
:::{tab-item} RHEL 7.9
:sync: RHEL-7.9
:sync: RHEL-7
```shell
sudo yum install https://repo.radeon.com/amdgpu-install/22.20/rhel/7.9/amdgpu-install-22.20.50200-1.el7.noarch.rpm
```
:::
:::{tab-item} RHEL 8.5
:sync: RHEL-8.5
:sync: RHEL-8
```shell
sudo yum install https://repo.radeon.com/amdgpu-install/22.20/rhel/8.5/amdgpu-install-22.20.50200-1.el8.noarch.rpm
```
:::
:::{tab-item} RHEL 8.6
:sync: RHEL-8.6
:sync: RHEL-8
```shell
sudo yum install https://repo.radeon.com/amdgpu-install/22.20/rhel/8.6/amdgpu-install-22.20.50200-1.el8.noarch.rpm
```
:::
::::
:::::
:::::{tab-item} SUSE Linux Enterprise Server 15
:sync: SLES15
::::{tab-set}
:::{tab-item} Service Pack 4
:sync: SLES15-SP4
```shell
sudo zypper --no-gpg-checks install https://repo.radeon.com/amdgpu-install/22.20/sle/15.4/amdgpu-install-22.20.50200-1.noarch.rpm
```
:::
:::{tab-item} Service Pack 3
:sync: SLES15-SP3
```shell
sudo zypper --no-gpg-checks install https://repo.radeon.com/amdgpu-install/22.20/sle/15.3/amdgpu-install-22.20.50200-1.noarch.rpm
```
:::
::::
:::::
::::::
## Use cases
Instead of installing individual applications or libraries, the installer script groups packages into specific use cases matching typical workflows and runtimes.
To display a list of available use cases execute the command:
```shell
sudo amdgpu-install --list-usecase
```
The available use-cases will be printed in a format similar to the example
output below.
```none
If --usecase option is not present, the default selection is "graphics,opencl,hip"
Available use cases:
rocm(for users and developers requiring full ROCm stack)
- OpenCL (ROCr/KFD based) runtime
- HIP runtimes
- Machine learning framework
- All ROCm libraries and applications
- ROCm Compiler and device libraries
- ROCr runtime and thunk
lrt(for users of applications requiring ROCm runtime)
- ROCm Compiler and device libraries
- ROCr runtime and thunk
opencl(for users of applications requiring OpenCL on Vega or
later products)
- ROCr based OpenCL
- ROCm Language runtime
openclsdk (for application developers requiring ROCr based OpenCL)
- ROCr based OpenCL
- ROCm Language runtime
- development and SDK files for ROCr based OpenCL
hip(for users of HIP runtime on AMD products)
- HIP runtimes
hiplibsdk (for application developers requiring HIP on AMD products)
- HIP runtimes
- ROCm math libraries
- HIP development libraries
```
To install use cases specific to your requirements, use the installer
`amdgpu-install` as follows:
- To install a single use case add it with the `--usecase` option:
```shell
sudo amdgpu-install --usecase=rocm
```
- For multiple use cases separate them with commas:
```shell
sudo amdgpu-install --usecase=hiplibsdk,rocm
```
## Single-version ROCm Installation
By default (without the `--rocmrelease` option)
the installer script will install packages in the single-version layout.
## Multi-version ROCm Installation
For the multi-version ROCm installation you must use the installer script from
the latest release of ROCm that you wish to install.
**Example:** If you want to install ROCm releases 5.1.3 and 5.2
simultaneously, you are required to download the installer from the latest ROCm
release v5.2.
### Add Required Repositories
You must add the ROCm repositories manually for all ROCm releases
you want to install except the latest one. The `amdgpu-install` script
automatically adds the required repositories for the latest release.
Run the following commands based on your distribution to add the repositories:
::::::{tab-set}
:::::{tab-item} Ubuntu
:sync: ubuntu
::::{tab-set}
:::{tab-item} Ubuntu 18.04
:sync: ubuntu-18.04
```shell
for ver in 5.1.3; do
echo "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg.d/rocm-keyring.gpg] https://repo.radeon.com/rocm/apt/$ver bionic main" | sudo tee /etc/apt/sources.list.d/rocm.list
done
echo -e 'Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600' | sudo tee /etc/apt/preferences.d/rocm-pin-600
sudo apt update
```
:::
:::{tab-item} Ubuntu 20.04
:sync: ubuntu-20.04
```shell
for ver in 5.1.3; do
echo "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg.d/rocm-keyring.gpg] https://repo.radeon.com/rocm/apt/$ver focal main" | sudo tee /etc/apt/sources.list.d/rocm.list
done
echo -e 'Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600' | sudo tee /etc/apt/preferences.d/rocm-pin-600
sudo apt update
```
:::
::::
:::::
:::::{tab-item} Red Hat Enterprise Linux
:sync: RHEL
::::{tab-set}
:::{tab-item} RHEL 7
:sync: RHEL-7
```shell
for ver in 5.1.3; do
sudo tee --append /etc/yum.repos.d/rocm.repo <<EOF
[ROCm-$ver]
name=ROCm$ver
baseurl=https://repo.radeon.com/rocm/yum/$ver/main
enabled=1
priority=50
gpgcheck=1
gpgkey=https://repo.radeon.com/rocm/rocm.gpg.key
EOF
done
sudo yum clean all
```
:::
:::{tab-item} RHEL 8
:sync: RHEL-8
```shell
for ver in 5.1.3; do
sudo tee --append /etc/yum.repos.d/rocm.repo <<EOF
[ROCm-$ver]
name=ROCm$ver
baseurl=https://repo.radeon.com/rocm/rhel8/$ver/main
enabled=1
priority=50
gpgcheck=1
gpgkey=https://repo.radeon.com/rocm/rocm.gpg.key
EOF
done
sudo yum clean all
```
:::
::::
:::::
:::::{tab-item} SUSE Linux Enterprise Server 15
:sync: SLES15
::::{tab-set}
:::{tab-item} Service Pack 3
:sync: SLES15-SP3
```shell
for ver in 5.1.3; do
sudo tee --append /etc/zypp/repos.d/rocm.repo <<EOF
name=rocm
baseurl=https://repo.radeon.com/rocm/$ver/sle/15.3/main/x86_64
enabled=1
gpgcheck=1
gpgkey=https://repo.radeon.com/rocm/rocm.gpg.key
EOF
done
sudo zypper ref
```
:::
:::{tab-item} Service Pack 4
:sync: SLES15-SP4
```shell
for ver in 5.1.3; do
sudo tee --append /etc/zypp/repos.d/rocm.repo <<EOF
name=rocm
baseurl=https://repo.radeon.com/rocm/$ver/sle/15.4/main/x86_64
enabled=1
gpgcheck=1
gpgkey=https://repo.radeon.com/rocm/rocm.gpg.key
EOF
done
sudo zypper ref
```
:::
::::
:::::
::::::
### Install packages
Use the installer script as given below:
```none
sudo amdgpu-install --usecase=rocm --rocmrelease=<release-number-1>
sudo amdgpu-install --usecase=rocm --rocmrelease=<release-number-2>
sudo amdgpu-install --usecase=rocm --rocmrelease=<release-number-3>
```
The following are examples of ROCm multi-version installation. The kernel-mode driver, associated with the latest ROCm release in the list, will be installed.
```none
sudo amdgpu-install --usecase=rocm --rocmrelease=5.1.3
sudo amdgpu-install --usecase=rocm --rocmrelease=5.2.0
```
## Additional options
### Unattended installation
Adding `-y` as a parameter to `amdgpu-install` skips user prompts (for
automation). Example: `amdgpu-install -y --usecase=rocm`
### Skipping kernel mode driver installation
The installer script tries to install the kernel-mode driver along with the requested use cases. This might be unnecessary, as in the case of Docker containers, or you may wish to keep a specific version when using a multi-version installation rather than have the last installed version overwrite the kernel-mode driver.
To skip the installation of the kernel-mode driver, add the `--no-dkms` option when calling the installer script.
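For example, to install the `rocm` use case without the kernel-mode driver:
```shell
sudo amdgpu-install --usecase=rocm --no-dkms
```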
View File
@@ -1,25 +0,0 @@
# Installer Script Uninstallation (Linux)
To uninstall all ROCm packages and the kernel-mode driver the following commands
can be used.
::::{rubric} Uninstalling Single-Version Install
::::
```shell
sudo amdgpu-install --uninstall
```
::::{rubric} Uninstalling a Specific ROCm Release
::::
```shell
sudo amdgpu-install --uninstall --rocmrelease=<release-number>
```
::::{rubric} Uninstalling all ROCm Releases
::::
```shell
sudo amdgpu-install --uninstall --rocmrelease=all
```
Some files were not shown because too many files have changed in this diff.