Compare commits


1 Commit

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Roopa Malavally | 53c2f5acde | Update README.md | 2020-06-02 17:46:17 -07:00 |
197 changed files with 1208 additions and 18290 deletions

.github/CODEOWNERS (1 line changed)

@@ -1 +0,0 @@
* @saadrahim @Rmalavally @amd-aakash @zhang2amd @jlgreathouse @samjwu @MathiasMagnus


@@ -1,12 +0,0 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
  - package-ecosystem: "pip" # See documentation for possible values
    directory: "/docs/sphinx" # Location of package manifests
    open-pull-requests-limit: 10
    schedule:
      interval: "daily"


@@ -1,56 +0,0 @@
name: Linting

on:
  push:
    branches:
      - develop
      - main
  pull_request:
    branches:
      - develop
      - main

concurrency:
  group: ${{ github.ref }}-${{ github.workflow }}
  cancel-in-progress: true

jobs:
  lint-rest:
    name: "RestructuredText"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Install rst-lint
        run: pip install restructuredtext-lint
      - name: Lint ResT files
        run: rst-lint ${{ join(github.workspace, '/docs') }}
  lint-md:
    name: "Markdown"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Use markdownlint-cli2
        uses: DavidAnson/markdownlint-cli2-action@v10.0.1
        with:
          globs: '**/*.md'
  spelling:
    name: "Spelling"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Fetch config
        shell: sh
        run: |
          curl --silent --show-error --fail --location https://raw.github.com/RadeonOpenCompute/rocm-docs-core/develop/.spellcheck.yaml -O
          curl --silent --show-error --fail --location https://raw.github.com/RadeonOpenCompute/rocm-docs-core/develop/.wordlist.txt >> .wordlist.txt
      - name: Run spellcheck
        uses: rojopolis/spellcheck-github-actions@0.30.0
      - name: On fail
        if: failure()
        run: |
          echo "Please check for spelling mistakes or add them to '.wordlist.txt' in either the root of this project or in rocm-docs-core."

.gitignore (18 lines changed)

@@ -1,18 +0,0 @@
.venv
.vscode
build
# documentation artifacts
_build/
_images/
_static/
_templates/
_toc.yml
docBin/
_doxygen/
_readthedocs/
# avoid duplicating contributing.md due to conf.py
docs/contributing.md
docs/release.md
docs/CHANGELOG.md


@@ -1,14 +0,0 @@
config:
  default: true
  MD013: false
  MD026:
    punctuation: '.,;:!'
  MD029:
    style: ordered
  MD033: false
  MD034: false
  MD041: false
ignores:
  - CHANGELOG.md
  - "{,docs/}{RELEASE,release}.md"
  - tools/autotag/templates/**/*.md


@@ -1,21 +0,0 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.10"
  apt_packages:
    - "doxygen"
    - "graphviz" # For dot graphs in doxygen

python:
  install:
    - requirements: docs/sphinx/requirements.txt

sphinx:
  configuration: docs/conf.py

formats: []


@@ -1,29 +0,0 @@
# isv_deployment_win
ABI
# gpu_aware_mpi
DMA
GDR
HCA
MPI
MVAPICH
Mellanox's
NIC
OFED
OSU
OpenFabrics
PeerDirect
RDMA
UCX
ib_core
# linear algebra
LAPACK
MMA
backends
cuSOLVER
cuSPARSE
# tuning_guides
BMC
DGEMM
HPCG
HPL
IOPM

File diff suppressed because it is too large.


@@ -1,246 +0,0 @@
# Contributing to ROCm Docs
AMD values and encourages the ROCm community to contribute to our code and
documentation. This repository is focused on ROCm documentation and this
contribution guide describes the recommended method for creating and modifying our
documentation.
While interacting with ROCm Documentation, we encourage you to be polite and
respectful in your contributions, content or otherwise. Authors and maintainers of
these docs act with good intentions and to the best of their knowledge.
Keep that in mind while you engage. Should you have issues with contributing
itself, refer to
[discussions](https://github.com/RadeonOpenCompute/ROCm/discussions) on the
GitHub repository.
## Supported Formats
Our documentation includes both markdown and rst files. Markdown is encouraged
over rst due to the lower barrier to participation. GitHub flavored markdown is preferred
for all submissions as it will render accurately on our GitHub repositories. For existing documentation,
[MyST](https://myst-parser.readthedocs.io/en/latest/intro.html) markdown
is used to implement certain features unsupported in GitHub markdown. This is
not encouraged for new documentation. AMD will transition
to stricter use of GitHub flavored markdown with a few caveats. ROCm documentation
also uses [sphinx-design](https://sphinx-design.readthedocs.io/en/latest/index.html)
in our markdown and rst files. We will also use Breathe syntax for Doxygen documentation
in our markdown files. Other design elements for effective HTML rendering of the documents
may be added to our markdown files. Please see
[GitHub](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github)'s
guide on writing and formatting on GitHub as a starting point.
ROCm documentation adds additional requirements to markdown and rst based files
as follows:
- Level one headers are only used for page titles. There must be only one level
one header per file for both Markdown and reStructuredText.
- Pass [markdownlint](https://github.com/markdownlint/markdownlint) check via
our automated github action on a Pull Request (PR).
## Filenames and folder structure
Please use snake case for file names. Our documentation follows the Pitchfork
layout for folder structure. All documentation is in /docs except for special
files like the contributing guide in the / folder. All images used in the
documentation are placed in the /docs/data folder.
## How to provide feedback for ROCm documentation
There are three standard ways to provide feedback for this repository.
### Pull Request
All contributions to ROCm documentation should arrive via the
[GitHub Flow](https://docs.github.com/en/get-started/quickstart/github-flow)
targeting the develop branch of the repository. If you are unable to contribute
via the GitHub Flow, feel free to email us. TODO, confirm email address.
### GitHub Issue
Issues on existing or absent docs can be filed as [GitHub issues
](https://github.com/RadeonOpenCompute/ROCm/issues).
### Email Feedback
## Language and Style
We adopt the Microsoft CPP-Docs guidelines for [Voice and Tone
](https://github.com/MicrosoftDocs/cpp-docs/blob/main/styleguide/voice-tone.md).
ROCm documentation templates will be made public shortly. ROCm templates dictate
the recommended structure and flow of the documentation. Guidelines on how to
integrate figures, equations, and tables are all based on
[MyST](https://myst-parser.readthedocs.io/en/latest/intro.html).
Font size and selection, page layout, white space control, and other formatting
details are controlled via rocm-docs-core, a Sphinx extension. Please raise issues
in rocm-docs-core for any formatting concerns or requested changes.
## Building Documentation
While contributing, one may build the documentation locally on the command-line
or rely on Continuous Integration for previewing the resulting HTML pages in a
browser.
### Command line documentation builds
Python versions known to build documentation:
- 3.8
To build the docs locally using Python Virtual Environment (`venv`), execute the
following commands from the project root:
```sh
python3 -mvenv .venv
# Windows
.venv/Scripts/python -m pip install -r docs/sphinx/requirements.txt
.venv/Scripts/python -m sphinx -T -E -b html -d _build/doctrees -D language=en docs _build/html
# Linux
.venv/bin/python -m pip install -r docs/sphinx/requirements.txt
.venv/bin/python -m sphinx -T -E -b html -d _build/doctrees -D language=en docs _build/html
```
Then open up `_build/html/index.html` in your favorite browser.
### Pull Requests documentation builds
When opening a PR to the `develop` branch on GitHub, the page corresponding to
the PR (`https://github.com/RadeonOpenCompute/ROCm/pull/<pr_number>`) will have
a summary at the bottom. This requires the user be logged in to GitHub.
- There, click `Show all checks` and `Details` of the Read the Docs pipeline. It
will take you to `https://readthedocs.com/projects/advanced-micro-devices-rocm/
builds/<some_build_num>/`
- The list of commands shown are the exact ones used by CI to produce a render
of the documentation.
- There, click on the small blue link `View docs` (which is not the same as the
bigger button with the same text). It will take you to the built HTML site with
a URL of the form `https://
advanced-micro-devices-demo--<pr_number>.com.readthedocs.build/projects/alpha/en
/<pr_number>/`.
### Build the docs using VS Code
One can put together a productive environment to author documentation and also
test it locally using VS Code with only a handful of extensions. Even though the
extension landscape of VS Code is ever changing, here is one example setup that
proved useful at the time of writing. In it, one can change/add content, build a
new version of the docs using a single VS Code Task (or hotkey), see all errors/
warnings emitted by Sphinx in the Problems pane and immediately see the
resulting website show up on a locally serving web server.
#### Configuring VS Code
1. Install the following extensions:
- Python (ms-python.python)
- Live Server (ritwickdey.LiveServer)
2. Add the following entries in `.vscode/settings.json`
```json
{
  "liveServer.settings.root": "/.vscode/build/html",
  "liveServer.settings.wait": 1000,
  "python.terminal.activateEnvInCurrentTerminal": true
}
```
The settings in order are set for the following reasons:
- Sets the root of the output website for live previews. Must be changed
alongside the `tasks.json` command.
- Tells live server to wait with the update to give time for Sphinx to
regenerate site contents and not refresh before all is done. (Empirical value)
- Automatic virtual env activation is a nice touch, should you want to build
the site from the integrated terminal.
3. Add the following tasks in `.vscode/tasks.json`
```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Build Docs",
      "type": "process",
      "windows": {
        "command": "${workspaceFolder}/.venv/Scripts/python.exe"
      },
      "command": "${workspaceFolder}/.venv/bin/python3",
      "args": [
        "-m",
        "sphinx",
        "-j",
        "auto",
        "-T",
        "-b",
        "html",
        "-d",
        "${workspaceFolder}/.vscode/build/doctrees",
        "-D",
        "language=en",
        "${workspaceFolder}/docs",
        "${workspaceFolder}/.vscode/build/html"
      ],
      "problemMatcher": [
        {
          "owner": "sphinx",
          "fileLocation": "absolute",
          "pattern": {
            "regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):(\\d+):\\s+(WARNING|ERROR):\\s+(.*)$",
            "file": 1,
            "line": 2,
            "severity": 3,
            "message": 4
          },
        },
        {
          "owner": "sphinx",
          "fileLocation": "absolute",
          "pattern": {
            "regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):{1,2}\\s+(WARNING|ERROR):\\s+(.*)$",
            "file": 1,
            "severity": 2,
            "message": 3
          }
        }
      ],
      "group": {
        "kind": "build",
        "isDefault": true
      }
    },
  ],
}
```
> (Implementation detail: two problem matchers needed to be defined,
> because VS Code doesn't tolerate some problem information being potentially
> absent. While a single regex could match all types of errors, if a capture
> group remains empty (the line number doesn't show up in all warning/error
> messages) but the `pattern` references said empty capture group, VS Code
> discards the message completely.)
4. Configure Python virtual environment (venv)
- From the Command Palette, run `Python: Create Environment`
- Select `venv` environment and the `docs/sphinx/requirements.txt` file.
_(Simply pressing enter while hovering over the file from the dropdown is
insufficient, one has to select the radio button with the 'Space' key if
using the keyboard.)_
5. Build the docs
- Launch the default build Task using either:
- a hotkey _(default is 'Ctrl+Shift+B')_ or
- by issuing the `Tasks: Run Build Task` from the Command Palette.
6. Open the live preview
- Navigate to the output of the site within VS Code, right-click on
`.vscode/build/html/index.html` and select `Open with Live Server`. The
contents should update on every rebuild without having to refresh the
browser.
<!-- markdownlint-restore -->

HIPClang2.png (new binary file, 42 KiB)

LICENSE (21 lines changed)

@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (694 lines changed)

@@ -1,54 +1,670 @@
# AMD ROCm™ Platform
# AMD ROCm Release Notes v3.5.0
This page describes the features, fixed issues, and information about downloading and installing the ROCm software.
It also covers known issues and deprecated features in the ROCm v3.5.0 release.
ROCm™ is an open-source stack for GPU computation. ROCm is primarily Open-Source
Software (OSS) that allows developers the freedom to customize and tailor their
GPU software for their own needs while collaborating with a community of other
developers, and helping each other find solutions in an agile, flexible, rapid
and secure manner.
AMD ROCm Documentation Website - http://rocmdocs.amd.com
ROCm is a collection of drivers, development tools and APIs enabling GPU
programming from the low-level kernel to end-user applications. ROCm is powered
by AMD's Heterogeneous-computing Interface for Portability (HIP), an OSS C++ GPU
programming environment and its corresponding runtime. HIP allows ROCm
developers to create portable applications on different platforms by deploying
code on a range of platforms, from dedicated gaming GPUs to exascale HPC
clusters. ROCm supports programming models such as OpenMP and OpenCL, and
includes all the necessary OSS compilers, debuggers and libraries. ROCm is fully
integrated into ML frameworks such as PyTorch and TensorFlow. ROCm can be
deployed in many ways, including through the use of containers such as Docker,
Spack, and your own build from source.
- [Supported Operating Systems and Documentation Updates](#Supported-Operating-Systems-and-Documentation-Updates)
* [Supported Operating Systems](#Supported-Operating-Systems)
* [Documentation Updates](#Documentation-Updates)
- [What's New in This Release](#Whats-New-in-This-Release)
* [Upgrading to This Release](#Upgrading-to-This-Release)
* [Heterogeneous-Compute Interface for Portability](#Heterogeneous-Compute-Interface-for-Portability)
* [Radeon Open Compute Common Language Runtime](#Radeon-Open-Compute-Common-Language-Runtime)
* [OpenCL Runtime](#OpenCL-Runtime)
* [AMD ROCm GNU Debugger ROCgdb](#AMD-ROCm-GNU-Debugger-ROCgdb)
* [AMD ROCm Debugger API Library](#AMD-ROCm-Debugger-API-Library)
* [rocProfiler Dispatch Callbacks Start/Stop API](#rocProfiler-Dispatch-Callbacks-Start-Stop-API)
* [ROCm Communications Collective Library](#ROCm-Communications-Collective-Library)
* [NVIDIA Communications Collective Library Version Compatibility](#NVIDIA-Communications-Collective-Library-Version-Compatibility)
* [MIOpen Optional Kernel Package Installation](#MIOpen-Optional-Kernel-Package-Installation)
* [New SMI Event Interface and Library](#New-SMI-Event-Interface-and-Library)
* [API for CPU Affinity](#API-for-CPU-Affinity)
* [Radeon Performance Primitives Library](#Radeon-Performance-Primitives-Library)
- [Fixed Issues](#Fixed-Issues)
ROCm's goal is to allow our users to maximize their GPU hardware investment.
ROCm is designed to help develop, test and deploy GPU accelerated HPC, AI,
scientific computing, CAD, and other applications in a free, open-source,
integrated and secure software ecosystem.
- [Known Issues](#Known-Issues)
This repository contains the manifest file for ROCm™ releases, changelogs, and
release information. The file default.xml contains information for all
repositories and the associated commit used to build the current ROCm release.
- [Deprecations](#Deprecations)
* [Heterogeneous Compute Compiler](#Heterogeneous-Compute-Compiler)
The default.xml file uses the repo Manifest format.
- [Deploying ROCm](#Deploying-ROCm)
- [Hardware and Software Support](#Hardware-and-Software-Support)
The develop branch of this repository contains content for the next
ROCm release.
- [Machine Learning and High Performance Computing Software Stack for AMD GPU](#Machine-Learning-and-High-Performance-Computing-Software-Stack-for-AMD-GPU)
* [ROCm Binary Package Structure](#ROCm-Binary-Package-Structure)
* [ROCm Platform Packages](#ROCm-Platform-Packages)
## ROCm Documentation
# Supported Operating Systems and Documentation Updates
ROCm Documentation is available online at
[rocm.docs.amd.com](https://rocm.docs.amd.com). Source code for the documentation
is located in the docs folder of most repositories that are part of ROCm.
### How to build documentation via Sphinx

```bash
cd docs
pip3 install -r sphinx/requirements.txt
python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html
```

## Supported Operating Systems

The AMD ROCm v3.5.x platform is designed to support the following operating systems:

* Ubuntu 16.04.6 (Kernel 4.15) and 18.04.4 (Kernel 5.3)
* CentOS 7.7 (Kernel 3.10-1062) and RHEL 7.8 (Kernel 3.10.0-1127) (using devtoolset-7 runtime support)
* SLES 15 SP1
* CentOS and RHEL 8.1 (Kernel 4.18.0-147)

## Documentation Updates
### HIP-Clang Compile
* [HIP FAQ - Transition from HCC to HIP-Clang](https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-FAQ.html#hip-faq)
* [HIP-Clang Porting Guide](https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-porting-guide.html#hip-porting-guide)
* [HIP - Glossary of Terms](https://rocmdocs.amd.com/en/latest/ROCm_Glossary/ROCm-Glossary.html)
### AMD ROCDebugger (ROCgdb)
* [ROCgdb User Guide](https://github.com/RadeonOpenCompute/ROCm/blob/master/gdb.pdf)
* [ROCgdbapi Guide](https://github.com/RadeonOpenCompute/ROCm/blob/master/amd-dbgapi.pdf)
### AMD ROCm Systems Management Interface
* [System Management Interface Event API Guide](https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_SMI_Manual.pdf)
### AMD ROCm Deep Learning
* [MIOpen API](https://github.com/ROCmSoftwarePlatform/MIOpen)
### AMD ROCm Glossary of Terms
* [Updated Glossary of Terms and Definitions](https://rocmdocs.amd.com/en/latest/ROCm_Glossary/ROCm-Glossary.html)
### General AMD ROCm Documentation Links
Access the following links for more information on:
* For AMD ROCm documentation, see
https://rocmdocs.amd.com/en/latest/
* For installation instructions on supported platforms, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
* For AMD ROCm binary structure, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#machine-learning-and-high-performance-computing-software-stack-for-amd-gpu-v3-3-0
* For AMD ROCm Release History, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#amd-rocm-version-history
# What's New in This Release
## Upgrading to This Release
You must perform a fresh and clean AMD ROCm install to successfully upgrade from v3.3 to v3.5. The following changes apply in this release:
* HCC is deprecated and replaced with the HIP-Clang compiler
* HIP-HCC runtime is changed to Radeon Open Compute Common Language Runtime (HIP-ROCClr)
* In the v3.5 release, the firmware is separated from the kernel package. The difference is as follows:
* v3.5 release has two separate rock-dkms and rock-dkms-firmware packages
* v3.3 release had the firmware as part of the rock-dkms package
## rocProf Command Line Tool Python Requirement
SQLite3 is a required Python module for the rocprof command-line tool. You can install the SQLite3 Python module using the pip utility and set the ROCP_PYTHON_VERSION environment variable to a Python version that includes the SQLite3 module.
## Heterogeneous-Compute Interface for Portability
In this release, the Heterogeneous Compute Compiler (HCC) compiler is deprecated and the HIP-Clang compiler is introduced for compiling Heterogeneous-Compute Interface for Portability (HIP) programs.
NOTE: The HCC environment variables will be gradually deprecated in subsequent releases.
The majority of the codebase for the HIP-Clang compiler has been upstreamed to the Clang trunk. The HIP-Clang implementation has undergone a strict code review by the LLVM/Clang community and comprehensive tests consisting of LLVM/Clang build bots. These reviews and tests resulted in higher productivity, code quality, and lower cost of maintenance.
![ScreenShot](HIPClang2.png)
For most HIP applications, the transition from HCC to HIP-Clang is transparent and efficient as the HIPCC and HIP cmake files automatically choose compilation options for HIP-Clang and hide the difference between the HCC and HIP-Clang code. However, minor changes may be required as HIP-Clang has a stricter syntax and semantic checks compared to HCC.
NOTE: Native HCC language features are no longer supported.
## Radeon Open Compute Common Language Runtime
In this release, the HIP runtime API is implemented on top of Radeon Open Compute Common Language Runtime (ROCclr). ROCclr is an abstraction layer that provides the ability to interact with different runtime backends such as ROCr.
## OpenCL Runtime
The following OpenCL runtime changes are made in this release:
* AMD ROCm OpenCL Runtime extends support to OpenCL2.2
* The developer branch is changed from master to master-next
## AMD ROCm GNU Debugger (ROCgdb)
The AMD ROCm Debugger (ROCgdb) is the AMD ROCm source-level debugger for Linux based on the GNU Debugger (GDB). It enables heterogeneous debugging on the AMD ROCm platform of an x86-based host architecture along with AMD GPU architectures and supported by the AMD Debugger API Library (ROCdbgapi).
The AMD ROCm Debugger is installed by the rocm-gdb package. The rocm-gdb package is part of the rocm-dev meta-package, which is in the rocm-dkms package.
The current AMD ROCm Debugger (ROCgdb) is an initial prototype that focuses on source line debugging. Note, symbolic variable debugging capabilities are not currently supported.
You can use the standard GDB commands for both CPU and GPU code debugging. For more information about ROCgdb, refer to the ROCgdb User Guide, which is installed at:
* /opt/rocm/share/info/gdb.info as a texinfo file
* /opt/rocm/share/doc/gdb/gdb.pdf as a PDF file
The AMD ROCm Debugger User Guide is available as a PDF at:
https://github.com/RadeonOpenCompute/ROCm/blob/master/gdb.pdf
For more information about GNU Debugger (GDB), refer to the GNU Debugger (GDB) web site at: http://www.gnu.org/software/gdb
## AMD ROCm Debugger API Library
The AMD ROCm Debugger API Library (ROCdbgapi) implements an AMD GPU debugger application programming interface (API) that provides the support necessary for a client of the library to control the execution and inspect the state of AMD GPU devices.
The following AMD GPU architectures are supported:
* Vega 10
* Vega 7nm
The AMD ROCm Debugger API Library is installed by the rocm-dbgapi package. The rocm-gdb package is part of the rocm-dev meta-package, which is in the rocm-dkms package.
The AMD ROCm Debugger API Specification is available as a PDF at:
https://github.com/RadeonOpenCompute/ROCm/blob/master/amd-dbgapi.pdf
## rocProfiler Dispatch Callbacks Start Stop API
In this release, a new rocprofiler start/stop API is added to enable/disable GPU kernel HSA dispatch callbacks. The callback can be registered with the 'rocprofiler_set_hsa_callbacks' API. The API helps you eliminate some profiling performance impact by invoking the profiler only for kernel dispatches of interest. This optimization will result in significant performance gains.
The API provides the following functions:
* *hsa_status_t rocprofiler_start_queue_callbacks();* is used to start profiling
* *hsa_status_t rocprofiler_stop_queue_callbacks();* is used to stop profiling.
For more information on kernel dispatches, see the HSA Platform System Architecture Specification guide at http://www.hsafoundation.com/standards/.
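A minimal usage sketch of this start/stop API is shown below. It assumes the dispatch callbacks were already registered with the 'rocprofiler_set_hsa_callbacks' API, that rocprofiler.h is the header declaring the functions above, and that run_kernels_of_interest() stands in for application code; these names are illustrative, not part of this release note.

```cpp
#include <rocprofiler.h>  // assumed header; declares the start/stop API listed above

extern void run_kernels_of_interest();  // hypothetical application code

void profile_selected_dispatches() {
  // Enable the GPU kernel HSA dispatch callbacks only around the kernels of interest.
  hsa_status_t status = rocprofiler_start_queue_callbacks();
  if (status != HSA_STATUS_SUCCESS) return;

  run_kernels_of_interest();

  // Disable the callbacks again so later dispatches incur no profiling overhead.
  rocprofiler_stop_queue_callbacks();
}
```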
## ROCm Communications Collective Library
The ROCm Communications Collective Library (RCCL) consists of the following enhancements:
* Re-enable target 0x803
* Build time improvements for the HIP-Clang compiler
### NVIDIA Communications Collective Library Version Compatibility
AMD RCCL is now compatible with NVIDIA Communications Collective Library (NCCL) v2.6.4 and provides the following features:
* Network interface improvements with API v3
* Network topology detection
* Improved CPU type detection
* Infiniband adaptive routing support
## MIOpen Optional Kernel Package Installation
MIOpen provides an optional pre-compiled kernel package to reduce startup latency.
NOTE: The installation of this package is optional. MIOpen will continue to function as expected even if you choose to not install the pre-compiled kernel package. This is because MIOpen compiles the kernels on the target machine once the kernel is run. However, the compilation step may significantly increase the startup time for different operations.
To install the kernel package for your GPU architecture, use the following command:
*apt-get install miopen-kernels-<arch>-<num cu>*
* <arch> is the GPU architecture. For example, gfx900, gfx906
* <num cu> is the number of CUs available in the GPU. For example, 56 or 64
## New SMI Event Interface and Library
An SMI event interface is added to the kernel and ROCm SMI lib for system administrators to get notified when specific events occur. On the kernel side, AMDKFD_IOC_SMI_EVENTS input/output control is enhanced to allow notifications propagation to user mode through the event channel.
On the ROCm SMI lib side, APIs are added to set an event mask and receive event notifications with a timeout option. Further, ROCm SMI API details can be found in the PDF generated by Doxygen from source or by referring to the rocm_smi.h header file (see the rsmi_event_notification_* functions).
For the more details about ROCm SMI API, see
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_SMI_Manual.pdf
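A hedged sketch of the notification flow is shown below. The rsmi_event_notification_* names follow the rocm_smi.h header referenced above, but the exact signatures, the header install path, and the mask value are assumptions that should be checked against the installed header.

```cpp
#include <rocm_smi/rocm_smi.h>  // assumed install path for rocm_smi.h

void watch_device_events(uint32_t dv_ind) {
  rsmi_init(0);
  rsmi_event_notification_init(dv_ind);            // register this device for events (assumed signature)

  uint64_t mask = 0x1;                             // placeholder; real bits come from the
  rsmi_event_notification_mask_set(dv_ind, mask);  // notification-type enum in rocm_smi.h

  rsmi_evt_notification_data_t data[16];
  uint32_t num_elem = 16;
  // Wait up to 10 seconds (timeout in milliseconds) for events to arrive.
  if (rsmi_event_notification_get(10000, &num_elem, data) == RSMI_STATUS_SUCCESS) {
    // num_elem now holds the number of events written into data[].
  }

  rsmi_event_notification_stop(dv_ind);
  rsmi_shut_down();
}
```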
## API for CPU Affinity
A new API is introduced to aid applications in selecting the appropriate memory node for a given accelerator (GPU).
The API for CPU affinity has the following signature:
*rsmi_status_t rsmi_topo_numa_affinity_get(uint32_t dv_ind, uint32_t *numa_node);*
This API takes the device index (dv_ind) as input and returns the NUMA node (CPU affinity), stored at the location pointed to by the numa_node pointer, associated with the device.
Non-Uniform Memory Access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor.
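The following minimal sketch calls the new API using the signature quoted above; the rocm_smi/rocm_smi.h include path and the rsmi_init()/rsmi_shut_down() calls are assumptions about the surrounding ROCm SMI setup.

```cpp
#include <rocm_smi/rocm_smi.h>  // assumed install path for rocm_smi.h
#include <cstdio>

int main() {
  rsmi_init(0);
  uint32_t dv_ind = 0;      // device index of the accelerator of interest
  uint32_t numa_node = 0;
  // Query the NUMA node (CPU affinity) associated with device dv_ind.
  if (rsmi_topo_numa_affinity_get(dv_ind, &numa_node) == RSMI_STATUS_SUCCESS) {
    std::printf("GPU %u is closest to NUMA node %u\n", dv_ind, numa_node);
  }
  rsmi_shut_down();
  return 0;
}
```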
## Radeon Performance Primitives Library
The new Radeon Performance Primitives (RPP) library is a comprehensive high-performance computer vision library for AMD (CPU and GPU) with the HIP and OpenCL backend. The target operating system is Linux.
![ScreenShot](RPP.png)
For more information about prerequisites and library functions, see
https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX/tree/master/docs
# Fixed Issues
## Device printf Support for HIP-Clang
HIP now supports the use of printf in the device code. The parameters and return value for the device-side printf follow the POSIX.1 standard, with the exception that the "%n" specifier is not supported. A call to printf blocks the calling wavefront until the operation is completely processed by the host.
No host-side runtime calls by the application are needed to cause the output to appear. There is also no limit on the number of device-side calls to printf or the amount of data that is printed.
For more details, refer to the HIP Programming Guide at:
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-GUIDE.html#hip-guide
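As an illustration, a minimal HIP program using device-side printf might look like the sketch below; the kernel name and launch dimensions are illustrative only.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void hello_from_device() {
  // Each work-item prints its coordinates; no host-side runtime call is
  // needed for the output to appear.
  printf("block %d, thread %d\n", (int)blockIdx.x, (int)threadIdx.x);
}

int main() {
  hipLaunchKernelGGL(hello_from_device, dim3(2), dim3(4), 0, 0);
  hipDeviceSynchronize();  // wait for the kernel (and its printf output) to complete
  return 0;
}
```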
## Assertions in HIP Device Code
Previously, a failing assertion caused early termination of kernels and the application to exit with a line number, file, and failing condition printed to the screen.
This issue is now fixed and the assert() and abort() functions are implemented for HIP device code.
NOTE: There may be a performance impact in the use of device assertions in its current form.
You may choose to disable the assertion in the production code. For example, to disable an assertion of:
*assert(foo != 0);*
you may comment it out as:
*//assert(foo != 0);*
NOTE: Assertions are currently enabled by default.
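For illustration, a device-side assertion is written exactly like a host-side one; the kernel below is a hypothetical example, not part of this release.

```cpp
#include <hip/hip_runtime.h>
#include <cassert>

__global__ void check_input(const int* data) {
  // A failing assertion terminates the kernel early and reports the file,
  // line number, and failing condition, as described above.
  assert(data[threadIdx.x] != 0);
}
```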
# Known Issues
The following are the known issues in the v3.5 release.
## HIPify-Clang Installation Failure on CentOS/RHEL
HIPify-Clang fails to install on CentOS/RHEL with the following error:
*file from install of hipify-clang conflicts with file from package hip-base*
**Workaround**: This is a known issue and the following workaround is recommended for a successful installation of HIPify-Clang on CentOS/RHEL:
* Download HIPify-Clang RPM. For example, *hipify-clang-11.0.0.x86_64.rpm*
* Perform a force install using the following command:
*sudo rpm -ivh --force hipify-clang-11.0.0.x86_64.rpm*
## Failure to Process Breakpoint before Queue Destroy Results in ROCm Debugger Error
When ROCgdb is in non-stop mode with an application that rapidly creates and destroys queues, a breakpoint may be reported that is not processed by the debugger before the queue is deleted. In some cases, this can result in the following error that prevents further debugging:
*[amd-dbgapi]: fatal error: kfd_queue_id 2 should have been reported as a NEW_QUEUE before next_pending_event failed (rc=-2)*
There are no known workarounds at this time.
## Failure to Process Breakpoint before Queue Destroy Results in ROCm Debugger API Error
When the ROCdbgapi library is used with an application that rapidly creates and destroys queues, a breakpoint may be reported that is not processed by the client before the queue is deleted. In some cases, this can result in a fatal error and the following error log message is produced:
*[amd-dbgapi]: fatal error: kfd_queue_id 2 should have been reported as a NEW_QUEUE before next_pending_event failed (rc=-2)*
There are no known workarounds at this time.
## rocThrust and hipCUB Unit Test Failures
The following unit test failures have been observed due to known issues in the ROCclr runtime.
rocThrust
* sort
* sort_by_key
hipCUB
* BlockDiscontinuity
* BlockExchange
* BlockHistogram
* BlockRadixSort
* BlockReduce
* BlockScan
There are no known workarounds in the current release.
## Multiple GPU Configuration Freezes with Imagenet Training and tf_cnn_benchmark on TensorFlow
A random freeze has been observed with Imagenet training and tf_cnn_benchmark on TensorFlow when multiple GPU configurations are involved.
There is no freeze observed with single GPUs.
There are no known workarounds at this time.
## Issue with Running AMD ROCm v3.3 User Mode with AMD ROCm v3.5 DKMS Kernel Module
Running AMD ROCm v3.3 in the user mode with the AMD ROCm v3.5 DKMS kernel module will cause the following features to be broken:
* IPC import/export, cross memory copy (used by UCX and MPI)
* Experimental GDB support
**Resolution**: Install ROCm v3.5 Thunk (*Hsakmt*) when using ROCm 3.5 Kernel Fusion Driver (KFD).
## SQLite3 Library Not Found in ROCProfiler
The ROCProfiler tool appears to be broken when the SQLite3 library is not found.
**Resolution**: Install the SQLite3 Python module separately and ensure the ROCP_PYTHON_VERSION environment variable is set to a Python version that includes the SQLite3 module.
# Deprecations
Install ROCm v3.5 Thunk (Hsakmt) when using ROCm 3.5 Kernel Fusion Driver (KFD). You can access the Thunk package at:
https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface
## Heterogeneous Compute Compiler
In this release, the Heterogeneous Compute Compiler (HCC) compiler is deprecated and the HIP-Clang compiler is introduced for compiling Heterogeneous-Compute Interface for Portability (HIP) programs.
For more information, see HIP documentation at:
https://rocmdocs.amd.com/en/latest/Programming_Guides/Programming-Guides.html
## Deploying ROCm
AMD hosts both Debian and RPM repositories for the ROCm v3.5.x packages.
For more information on ROCM installation on all platforms, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
## Hardware and Software Support
ROCm is focused on using AMD GPUs to accelerate computational tasks such as machine learning, engineering workloads, and scientific computing.
In order to focus our development efforts on these domains of interest, ROCm supports a targeted set of hardware configurations which are detailed further in this section.
#### Supported GPUs
Because the ROCm Platform has a focus on particular computational domains, we offer official support for a selection of AMD GPUs that are designed to offer good performance and price in these domains.
ROCm officially supports AMD GPUs that use the following chips:
* GFX8 GPUs
* "Fiji" chips, such as on the AMD Radeon R9 Fury X and Radeon Instinct MI8
* "Polaris 10" chips, such as on the AMD Radeon RX 580 and Radeon Instinct MI6
* GFX9 GPUs
* "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
* "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII
ROCm is a collection of software ranging from drivers and runtimes to libraries and developer tools.
Some of this software may work with more GPUs than the "officially supported" list above, though AMD does not make any official claims of support for these devices on the ROCm software platform.
The following list of GPUs are enabled in the ROCm software, though full support is not guaranteed:
* GFX8 GPUs
* "Polaris 11" chips, such as on the AMD Radeon RX 570 and Radeon Pro WX 4100
* "Polaris 12" chips, such as on the AMD Radeon RX 550 and Radeon RX 540
* GFX7 GPUs
* "Hawaii" chips, such as the AMD Radeon R9 390X and FirePro W9100
As described in the next section, GFX8 GPUs require PCI Express 3.0 (PCIe 3.0) with support for PCIe atomics. This requires both CPU and motherboard support. GFX9 GPUs require PCIe 3.0 with support for PCIe atomics by default, but they can operate in most cases without this capability.
The integrated GPUs in AMD APUs are not officially supported targets for ROCm.
As described [below](#limited-support), "Carrizo", "Bristol Ridge", and "Raven Ridge" APUs are enabled in our upstream drivers and the ROCm OpenCL runtime.
However, they are not enabled in our HCC or HIP runtimes, and may not work due to motherboard or OEM hardware limitations.
As such, they are not yet officially supported targets for ROCm.
For a more detailed list of hardware support, please see [the following documentation](https://rocm.github.io/hardware.html).
#### Supported CPUs
As described above, GFX8 GPUs require PCIe 3.0 with PCIe atomics in order to run ROCm.
In particular, the CPU and every active PCIe point between the CPU and GPU require support for PCIe 3.0 and PCIe atomics.
The CPU root must indicate PCIe AtomicOp Completion capabilities and any intermediate switch must indicate PCIe AtomicOp Routing capabilities.
Current CPUs which support PCIe Gen3 + PCIe Atomics are:
* AMD Ryzen CPUs
* The CPUs in AMD Ryzen APUs
* AMD Ryzen Threadripper CPUs
* AMD EPYC CPUs
* Intel Xeon E7 v3 or newer CPUs
* Intel Xeon E5 v3 or newer CPUs
* Intel Xeon E3 v3 or newer CPUs
* Intel Core i7 v4, Core i5 v4, Core i3 v4 or newer CPUs (i.e. Haswell family or newer)
* Some Ivy Bridge-E systems
Beginning with ROCm 1.8, GFX9 GPUs (such as Vega 10) no longer require PCIe atomics.
We have similarly opened up more options for number of PCIe lanes.
GFX9 GPUs can now be run on CPUs without PCIe atomics and on older PCIe generations, such as PCIe 2.0.
This is not supported on GPUs below GFX9, e.g. GFX8 cards in the Fiji and Polaris families.
If you are using any PCIe switches in your system, please note that PCIe Atomics are only supported on some switches, such as Broadcom PLX.
When you install your GPUs, make sure you install them in a PCIe 3.1.0 x16, x8, x4, or x1 slot attached either directly to the CPU's Root I/O controller or via a PCIe switch directly attached to the CPU's Root I/O controller.
In our experience, many issues stem from trying to use consumer motherboards which provide physical x16 connectors that are electrically connected as e.g. PCIe 2.0 x4, PCIe slots connected via the Southbridge PCIe I/O controller, or PCIe slots connected through a PCIe switch that does
not support PCIe atomics.
If you attempt to run ROCm on a system without proper PCIe atomic support, you may see an error in the kernel log (`dmesg`):
```
kfd: skipped device 1002:7300, PCI rejects atomics
```
## Older ROCm™ Releases
Experimental support for our Hawaii (GFX7) GPUs (Radeon R9 290, R9 390, FirePro W9100, S9150, S9170)
does not require or take advantage of PCIe Atomics. However, we still recommend that you use a CPU
from the list provided above for compatibility purposes.
For release information for older ROCm™ releases, refer to
[CHANGELOG](./CHANGELOG.md).
#### Not supported or limited support under ROCm
##### Limited support
* ROCm 2.9.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
* Thunderbolt 1, 2, and 3 enabled breakout boxes should now be able to work with ROCm. Thunderbolt 1 and 2 are PCIe 2.0 based, and thus are only supported with GPUs that do not require PCIe 3.1.0 atomics (e.g. Vega 10). However, we have done no testing on this configuration and would need community support due to limited access to this type of equipment.
* AMD "Carrizo" and "Bristol Ridge" APUs are enabled to run OpenCL, but do not yet support HCC, HIP, or our libraries built on top of these compilers and runtimes.
* As of ROCm 2.1, "Carrizo" and "Bristol Ridge" require the use of upstream kernel drivers.
* In addition, various "Carrizo" and "Bristol Ridge" platforms may not work due to OEM and ODM choices when it comes to key configurations parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
* Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.
* AMD "Raven Ridge" APUs are enabled to run OpenCL, but do not yet support HCC, HIP, or our libraries built on top of these compilers and runtimes.
* As of ROCm 2.1, "Raven Ridge" requires the use of upstream kernel drivers.
* In addition, various "Raven Ridge" platforms may not work due to OEM and ODM choices when it comes to key configurations parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
* Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.
##### Not supported
* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported in ROCm 2.9.x
* We do not support GFX8-class GPUs (Fiji, Polaris, etc.) on CPUs that do not have PCIe 3.0 with PCIe atomics.
* As such, we do not support AMD Carrizo and Kaveri APUs as hosts for such GPUs.
* Thunderbolt 1 and 2 enabled GPUs are not supported by GFX8 GPUs on ROCm. Thunderbolt 1 & 2 are based on PCIe 2.0.
#### ROCm support in upstream Linux kernels
As of ROCm 1.9.0, the ROCm user-level software is compatible with the AMD drivers in certain upstream Linux kernels.
As such, users have the option of either using the ROCK kernel driver that is part of AMD's ROCm repositories or using the upstream driver and only installing ROCm user-level utilities from AMD's ROCm repositories.
These releases of the upstream Linux kernel support the following GPUs in ROCm:
* 4.17: Fiji, Polaris 10, Polaris 11
* 4.18: Fiji, Polaris 10, Polaris 11, Vega10
* 4.20: Fiji, Polaris 10, Polaris 11, Vega10, Vega 7nm
The upstream driver may be useful for running ROCm software on systems that are not compatible with the kernel driver available in AMD's repositories.
For users that have the option of using either AMD's or the upstreamed driver, there are various tradeoffs to take into consideration:
| | Using AMD's `rock-dkms` package | Using the upstream kernel driver |
| ---- | ------------------------------------------------------------| ----- |
| Pros | More GPU features, and they are enabled earlier | Includes the latest Linux kernel features |
| | Tested by AMD on supported distributions | May work on other distributions and with custom kernels |
| | Supported GPUs enabled regardless of kernel version | |
| | Includes the latest GPU firmware | |
| Cons | May not work on all Linux distributions or versions | Features and hardware support varies depending on kernel version |
| | Not currently supported on kernels newer than 5.4 | Limits GPU's usage of system memory to 3/8 of system memory (before 5.6). For 5.6 and beyond, both DKMS and upstream kernels allow use of 15/16 of system memory. |
| | | IPC and RDMA capabilities are not yet enabled |
| | | Not tested by AMD to the same level as `rock-dkms` package |
| | | Does not include most up-to-date firmware |
## Machine Learning and High Performance Computing Software Stack for AMD GPU
ROCm Version 3.3.0
### ROCm Binary Package Structure
ROCm is a collection of software ranging from drivers and runtimes to libraries and developer tools.
In AMD's package distributions, these software projects are provided as separate packages.
This allows users to install only the packages they need, if they do not wish to install all of ROCm.
These packages will install most of the ROCm software into `/opt/rocm/` by default.
The packages for each of the major ROCm components are:
* ROCm Core Components
- ROCk Kernel Driver: `rock-dkms`
- ROCr Runtime: `hsa-rocr-dev`, `hsa-ext-rocr-dev`
- ROCt Thunk Interface: `hsakmt-roct`, `hsakmt-roct-dev`
* ROCm Support Software
- ROCm SMI: `rocm-smi`
- ROCm cmake: `rocm-cmake`
- rocminfo: `rocminfo`
- ROCm Bandwidth Test: `rocm_bandwidth_test`
* ROCm Development Tools
- HCC compiler: `hcc`
- HIP: `hip_base`, `hip_doc`, `hip_hcc`, `hip_samples`
- ROCm Device Libraries: `rocm-device-libs`
- ROCm OpenCL: `rocm-opencl`, `rocm-opencl-devel` (on RHEL/CentOS), `rocm-opencl-dev` (on Ubuntu)
- ROCM Clang-OCL Kernel Compiler: `rocm-clang-ocl`
- Asynchronous Task and Memory Interface (ATMI): `atmi`
- ROCr Debug Agent: `rocr_debug_agent`
- ROCm Code Object Manager: `comgr`
- ROC Profiler: `rocprofiler-dev`
- ROC Tracer: `roctracer-dev`
- Radeon Compute Profiler: `rocm-profiler`
* ROCm Libraries
- rocALUTION: `rocalution`
- rocBLAS: `rocblas`
- hipBLAS: `hipblas`
- hipCUB: `hipCUB`
- rocFFT: `rocfft`
- rocRAND: `rocrand`
- rocSPARSE: `rocsparse`
- hipSPARSE: `hipsparse`
- ROCm SMI Lib: `rocm_smi_lib64`
- rocThrust: `rocThrust`
- MIOpen: `MIOpen-HIP` (for the HIP version), `MIOpen-OpenCL` (for the OpenCL version)
- MIOpenGEMM: `miopengemm`
- MIVisionX: `mivisionx`
- RCCL: `rccl`
To make it easier to install ROCm, the AMD binary repositories provide a number of meta-packages that will automatically install multiple other packages.
For example, `rocm-dkms` is the primary meta-package that is used to install most of the base technology needed for ROCm to operate.
It will install the `rock-dkms` kernel driver, and another meta-package (`rocm-dev`) which installs most of the user-land ROCm core components, support software, and development tools.
The `rocm-utils` meta-package will install useful utilities that, while not required for ROCm to operate, may still be beneficial to have.
Finally, the `rocm-libs` meta-package will install some (but not all) of the libraries that are part of ROCm.
The chain of software installed by these meta-packages is illustrated below
```
rocm-dkms
 |--rock-dkms
 \--rocm-dev
    |--comgr
    |--hcc
    |--hip_base
    |--hip_doc
    |--hip_hcc
    |--hip_samples
    |--hsakmt-roct
    |--hsakmt-roct-dev
    |--hsa-amd-aqlprofile
    |--hsa-ext-rocr-dev
    |--hsa-rocr-dev
    |--rocm-cmake
    |--rocm-device-libs
    |--rocm-smi
    |--rocprofiler-dev
    |--rocr_debug_agent
    \--rocm-utils
       |--rocminfo
       \--rocm-clang-ocl # This will cause OpenCL to be installed

rocm-libs
 |--hipblas
 |--hipcub
 |--hipsparse
 |--rocalution
 |--rocblas
 |--rocfft
 |--rocprim
 |--rocrand
 |--rocsparse
 \--rocthrust
```
These meta-packages are not required but may be useful to make it easier to install ROCm on most systems.
Note: Some users may want to skip certain packages. For instance, a user that wants to use the upstream kernel drivers (rather than those supplied by AMD) may want to skip the `rocm-dkms` and `rock-dkms` packages, and instead directly install `rocm-dev`.
Similarly, a user that only wants to install OpenCL support instead of HCC and HIP may want to skip the `rocm-dkms` and `rocm-dev` packages. Instead, they could directly install `rock-dkms`, `rocm-opencl`, and `rocm-opencl-dev` and their dependencies.
### ROCm Platform Packages
Drivers, ToolChains, Libraries, and Source Code
The latest supported version of the drivers, tools, libraries and source code for the ROCm platform have been released and are available from the following GitHub repositories:
#### ROCm Core Components
- [ROCk Kernel Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-3.3.0)
- [ROCr Runtime](https://github.com/RadeonOpenCompute/ROCR-Runtime/tree/rocm-3.3.0)
- [ROCT Thunk Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-3.3.0)
#### ROCm Support Software
- [ROCm SMI](https://github.com/RadeonOpenCompute/ROC-smi/tree/roc-3.3.0)
- [ROCm cmake](https://github.com/RadeonOpenCompute/rocm-cmake/tree/rocm-3.3.0)
- [rocminfo](https://github.com/RadeonOpenCompute/rocminfo/tree/rocm-3.3.0)
- [ROCm Bandwidth Test](https://github.com/RadeonOpenCompute/rocm_bandwidth_test/tree/rocm-3.3.0)
#### ROCm Development ToolChain
- [HCC compiler](https://github.com/RadeonOpenCompute/hcc/tree/rocm-3.3.0)
- [HIP](https://github.com/ROCm-Developer-Tools/HIP/tree/rocm-3.3.0)
- [ROCm Device Libraries HCC](https://github.com/RadeonOpenCompute/ROCm-Device-Libs/tree/roc-ocl-3.3.0)
- [ROCm OpenCL Runtime](http://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/tree/roc-3.3.0)
- [ROCm LLVM OCL](http://github.com/RadeonOpenCompute/llvm-project/tree/rocm-ocl-3.3.0)
- [ROCm Device Libraries OCL](https://github.com/RadeonOpenCompute/ROCm-Device-Libs/tree/rocm-ocl-3.3.0)
- [ROCM Clang-OCL Kernel Compiler](https://github.com/RadeonOpenCompute/clang-ocl/tree/rocm-3.3.0)
- [Asynchronous Task and Memory Interface (ATMI)](https://github.com/RadeonOpenCompute/atmi/tree/rocm-3.3.0)
- [ROCr Debug Agent](https://github.com/ROCm-Developer-Tools/rocr_debug_agent/tree/roc-3.3.0)
- [ROCm Code Object Manager](https://github.com/RadeonOpenCompute/ROCm-CompilerSupport/tree/rocm-3.3.0)
- [ROC Profiler](https://github.com/ROCm-Developer-Tools/rocprofiler/tree/roc-3.3.0)
- [ROC Tracer](https://github.com/ROCm-Developer-Tools/roctracer/tree/roc-3.3.0)
- [AOMP](https://github.com/ROCm-Developer-Tools/aomp/tree/roc-3.3.0)
- [Radeon Compute Profiler](https://github.com/GPUOpen-Tools/RCP/tree/3a49405)
- [ROCmValidationSuite](https://github.com/ROCm-Developer-Tools/ROCmValidationSuite/tree/roc-3.3.0)
- Example Applications:
- [HCC Examples](https://github.com/ROCm-Developer-Tools/HCC-Example-Application/tree/ffd65333)
- [HIP Examples](https://github.com/ROCm-Developer-Tools/HIP-Examples/tree/rocm-3.3.0)
#### ROCm Libraries
- [rocBLAS](https://github.com/ROCmSoftwarePlatform/rocBLAS/tree/rocm-3.3.0)
- [hipBLAS](https://github.com/ROCmSoftwarePlatform/hipBLAS/tree/rocm-3.3.0)
- [rocFFT](https://github.com/ROCmSoftwarePlatform/rocFFT/tree/rocm-3.3)
- [rocRAND](https://github.com/ROCmSoftwarePlatform/rocRAND/tree/rocm-3.3.0)
- [rocSPARSE](https://github.com/ROCmSoftwarePlatform/rocSPARSE/tree/rocm-3.3.0)
- [hipSPARSE](https://github.com/ROCmSoftwarePlatform/hipSPARSE/tree/rocm-3.3.0)
- [rocALUTION](https://github.com/ROCmSoftwarePlatform/rocALUTION/tree/rocm-3.3.0)
- [MIOpenGEMM](https://github.com/ROCmSoftwarePlatform/MIOpenGEMM/tree/b51a125)
- [MIOpen](https://github.com/ROCmSoftwarePlatform/MIOpen/tree/roc-3.3.0)
- [rocThrust](https://github.com/ROCmSoftwarePlatform/rocThrust/tree/rocm-3.3.0)
- [ROCm SMI Lib](https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/rocm-3.3.0)
- [RCCL](https://github.com/ROCmSoftwarePlatform/rccl/tree/rocm-3.3.0)
- [MIVisionX](https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX/commit/755e7a08d5299a95c42def092af7c736d5eda90c)
- [MIVisionX](https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX/tree/1.7)
- [hipCUB](https://github.com/ROCmSoftwarePlatform/hipCUB/tree/rocm-3.3.0)
- [AMDMIGraphX](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/commit/d1e945dabce0078d44c78de67b00232b856e18bc)
Features and enhancements introduced in previous versions of ROCm can be found in [version_history.md](version_history.md)


@@ -1,245 +0,0 @@
# Release Notes
<!-- Do not edit this file! This file is autogenerated with -->
<!-- tools/autotag/tag_script.py -->
<!-- Disable lints since this is an auto-generated file. -->
<!-- markdownlint-disable blanks-around-headers -->
<!-- markdownlint-disable no-duplicate-header -->
<!-- markdownlint-disable no-blanks-blockquote -->
<!-- markdownlint-disable ul-indent -->
<!-- markdownlint-disable no-trailing-spaces -->
<!-- spellcheck-disable -->
The release notes for the ROCm platform.
-------------------
## ROCm 5.4.0
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable no-duplicate-header -->
### What's New in This Release
#### HIP Enhancements
The ROCm v5.4 release consists of the following HIP enhancements:
##### Support for Wall Clock64
A new timer function wall_clock64() is supported, which returns wall clock count at a constant frequency on the device.
```h
long long int wall_clock64();
```
It returns wall clock count at a constant frequency on the device, which can be queried via HIP API with the hipDeviceAttributeWallClockRate attribute of the device in the HIP application code.
Example:
```h
int wallClkRate = 0; //in kilohertz
HIPCHECK(hipDeviceGetAttribute(&wallClkRate, hipDeviceAttributeWallClockRate, deviceId));
```
Where hipDeviceAttributeWallClockRate is a device attribute.
> **Note**
>
> The wall clock frequency is a per-device attribute.
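A hedged sketch of how the timer might be used inside a kernel follows; the kernel and variable names are illustrative, and the conversion to milliseconds relies on the wall clock rate queried above.

```cpp
#include <hip/hip_runtime.h>

__global__ void timed_section(long long int* elapsed) {
  long long int start = wall_clock64();   // constant-frequency device wall clock
  // ... work to be timed ...
  long long int stop = wall_clock64();
  if (threadIdx.x == 0) {
    // Divide by wallClkRate (in kHz, queried on the host) to convert to milliseconds.
    *elapsed = stop - start;
  }
}
```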
##### New Registry Added for GPU_MAX_HW_QUEUES
The GPU_MAX_HW_QUEUES registry defines the maximum number of independent hardware queues allocated per process per device.
The environment variable controls how many independent hardware queues HIP runtime can create per process, per device. If the application allocates more HIP streams than this number, then the HIP runtime reuses the same hardware queues for the new streams in a round-robin manner.
> **Note**
>
> This maximum number does not apply to hardware queues created for CU-masked HIP streams or cooperative queues for HIP Cooperative Groups (there is only one queue per device).
For more details, refer to the HIP Programming Guide.
#### New HIP APIs in This Release
The following new HIP APIs are available in the ROCm v5.4 release.
> **Note**
>
> This is a pre-official version (beta) release of the new APIs.
##### Error Handling
```h
hipError_t hipDrvGetErrorName(hipError_t hipError, const char** errorString);
```
This returns HIP errors in the text string format.
```h
hipError_t hipDrvGetErrorString(hipError_t hipError, const char** errorString);
```
This returns text string messages with more details about the error.
For more information, refer to the HIP API Guide.
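A minimal sketch combining the two calls is shown below; the helper function name is illustrative.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

void report_hip_error(hipError_t err) {
  const char* name = nullptr;
  const char* desc = nullptr;
  hipDrvGetErrorName(err, &name);    // short text form of the error
  hipDrvGetErrorString(err, &desc);  // longer message with more details
  std::printf("HIP error %d: %s (%s)\n", (int)err,
              name ? name : "unknown", desc ? desc : "unknown");
}
```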
##### HIP Tests Source Separation
With ROCm v5.4, a separate GitHub project is created at
<https://github.com/ROCm-Developer-Tools/hip-tests>
This project contains HIP catch2 tests and samples, and new tests will continue to be developed there.
In future ROCm releases, catch2 tests and samples will be removed from the HIP project.
### OpenMP Enhancements
This release consists of the following OpenMP enhancements:
- Enable new device RTL in libomptarget as default.
- New flag `-fopenmp-target-fast` to imply `-fopenmp-target-ignore-env-vars -fopenmp-assume-no-thread-state -fopenmp-assume-no-nested-parallelism`.
- Support for the collapse clause and non-unit stride in cases where the No-Loop specialized kernel is generated.
- Initial implementation of optimized cross-team sum reduction for float and double type scalars.
- Pool-based optimization in the OpenMP runtime to reduce locking during data transfer.
### Deprecations and Warnings
#### HIP Perl Scripts Deprecation
The `hipcc` and `hipconfig` Perl scripts are deprecated. In a future release, compiled binaries will be available as `hipcc.bin` and `hipconfig.bin` as replacements for the Perl scripts.
> **Note**
>
> There will be a transition period where the Perl scripts and compiled binaries are available before the scripts are removed. There will be no functional difference between the Perl scripts and their compiled binary counterpart. No user action is required. Once these are available, users can optionally switch to `hipcc.bin` and `hipconfig.bin`. The `hipcc`/`hipconfig` soft link will be assimilated to point from `hipcc`/`hipconfig` to the respective compiled binaries as the default option.
(5_4_0_filesystem_reorg_deprecation_notice)=
##### Linux Filesystem Hierarchy Standard for ROCm
ROCm packages have adopted the Linux foundation filesystem hierarchy standard in this release to ensure ROCm components follow open source conventions for Linux-based distributions. While moving to a new filesystem hierarchy, ROCm ensures backward compatibility with its 5.1 version or older filesystem hierarchy. See below for a detailed explanation of the new filesystem hierarchy and backward compatibility.
##### New Filesystem Hierarchy
The following is the new filesystem hierarchy:
```text
/opt/rocm-<ver>
    | --bin
      | --All externally exposed Binaries
    | --libexec
        | --<component>
            | -- Component specific private non-ISA executables (architecture independent)
    | --include
        | -- <component>
            | --<header files>
    | --lib
        | --lib<soname>.so -> lib<soname>.so.major -> lib<soname>.so.major.minor.patch
          (public libraries linked with application)
        | --<component> (component specific private library, executable data)
        | --<cmake>
            | --components
                | --<component>.config.cmake
    | --share
        | --html/<component>/*.html
        | --info/<component>/*.[pdf, md, txt]
        | --man
        | --doc
            | --<component>
                | --<licenses>
        | --<component>
            | --<misc files> (arch independent non-executable)
            | --samples
```
> **Note**
>
> ROCm will not support backward compatibility with the v5.1(old) file system hierarchy in its next major release.
For more information, refer to <https://refspecs.linuxfoundation.org/fhs.shtml>.
##### Backward Compatibility with Older Filesystems
ROCm has moved header files and libraries to its new location as indicated in the above structure and included symbolic-link and wrapper header files in its old location for backward compatibility.
> **Note**
>
> ROCm will continue supporting backward compatibility until the next major release.
##### Wrapper header files
Wrapper header files are placed in the old location (`/opt/rocm-xxx/<component>/include`) with a warning message to include files from the new location (`/opt/rocm-xxx/include`) as shown in the example below:
```h
// Code snippet from hip_runtime.h
#pragma message “This file is deprecated. Use file from include path /opt/rocm-ver/include/ and prefix with hip”.
#include "hip/hip_runtime.h"
```
Backward compatibility for the wrapper header files is deprecated in stages:
- `#pragma` message announcing deprecation -- ROCm v5.2 release
- `#pragma` message changed to `#warning` -- Future release
- `#warning` changed to `#error` -- Future release
- Backward compatibility wrappers removed -- Future release
##### Library files
Library files are available in the `/opt/rocm-xxx/lib` folder. For backward compatibility, the old library location (`/opt/rocm-xxx/<component>/lib`) has a soft link to the library at the new location.
Example:
```log
$ ls -l /opt/rocm/hip/lib/
total 4
drwxr-xr-x 4 root root 4096 May 12 10:45 cmake
lrwxrwxrwx 1 root root 24 May 10 23:32 libamdhip64.so -> ../../lib/libamdhip64.so
```
##### CMake Config files
All CMake configuration files are available in the `/opt/rocm-xxx/lib/cmake/<component>` folder. For backward compatibility, the old CMake location (`/opt/rocm-xxx/<component>/lib/cmake`) contains a soft link to the new CMake config.
Example:
```log
$ ls -l /opt/rocm/hip/lib/cmake/hip/
total 0
lrwxrwxrwx 1 root root 42 May 10 23:32 hip-config.cmake -> ../../../../lib/cmake/hip/hip-config.cmake
```
### Fixed Defects
The following defects are fixed in this release.
These defects were identified and documented as known issues in previous ROCm releases and are fixed in this release.
#### Memory Allocated Using hipHostMalloc() with Flags Did Not Exhibit Fine-Grain Behavior
##### Issue
The test was incorrectly using the `hipDeviceAttributePageableMemoryAccess` device attribute to determine coherent support.
##### Fix
`hipHostMalloc()` allocates memory with fine-grained access by default when the environment variable `HIP_HOST_COHERENT=1` is used.
For more information, refer to {doc}`hip:.doxygen/docBin/html/index`.
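The following minimal sketch shows the allocation path this fix affects; it assumes `HIP_HOST_COHERENT=1` is set in the environment, and the buffer size is a placeholder.

```cpp
// Hypothetical illustration: with HIP_HOST_COHERENT=1 set in the environment,
// the default hipHostMalloc() allocation is expected to be fine-grained (coherent).
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    float* host_ptr = nullptr;
    hipError_t err = hipHostMalloc(reinterpret_cast<void**>(&host_ptr),
                                   1024 * sizeof(float), hipHostMallocDefault);
    if (err != hipSuccess) {
        std::printf("hipHostMalloc failed: %s\n", hipGetErrorString(err));
        return 1;
    }
    // ... the buffer is now expected to be visible to host and device with fine-grained access ...
    hipHostFree(host_ptr);
    return 0;
}
```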
#### Soft Hang with `hipStreamWithCUMask` test on AMD Instinct™
##### Issue
On GFX10 GPUs, kernel execution hangs when it is launched on streams created using `hipStreamWithCUMask`.
##### Fix
On GFX10 GPUs, each workgroup processor encompasses two compute units, and the compute units must be enabled as a pair. The `hipStreamWithCUMask` API unit test cases were updated to set the compute unit mask (cuMask) in pairs for GFX10 GPUs.
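For illustration, a stream restricted to a CU mask can be created with `hipExtStreamCreateWithCUMask`; the mask value below is only an example that enables compute units in adjacent pairs, as required on gfx10.

```cpp
// Hypothetical sketch: enable CUs in adjacent pairs (bits 0-1, 4-5, 8-9, ...)
// because two CUs form one workgroup processor on gfx10.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    std::vector<uint32_t> cuMask = {0x33333333u};
    hipStream_t stream;
    hipError_t err = hipExtStreamCreateWithCUMask(
        &stream, static_cast<uint32_t>(cuMask.size()), cuMask.data());
    if (err != hipSuccess) {
        std::printf("stream creation failed: %s\n", hipGetErrorString(err));
        return 1;
    }
    // ... launch kernels on 'stream'; they run only on the masked CUs ...
    hipStreamDestroy(stream);
    return 0;
}
```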
#### ROCm Tools GPU IDs
The HIP language device IDs are not the same as the GPU IDs reported by the tools. GPU IDs are globally unique and guaranteed to be consistent across APIs and processes.
GPU IDs reported by ROCTracer, ROCProfiler, and other ROCm tools are the HSA driver node IDs of the GPUs, as these are unique for each device within a particular node.

BIN ROCm_SMI_Manual.pdf (new binary file, contents not shown)
BIN RPP.png (new binary image, 37 KiB, contents not shown)
BIN amd-dbgapi.pdf (new binary file, contents not shown)
View File

@@ -1,79 +1,79 @@
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<remote name="roc-github"
fetch="https://github.com/RadeonOpenCompute/" />
fetch="http://github.com/RadeonOpenCompute/" />
<remote name="rocm-devtools"
fetch="https://github.com/ROCm-Developer-Tools/" />
fetch="https://github.com/ROCm-Developer-Tools/" />
<remote name="rocm-swplat"
fetch="https://github.com/ROCmSoftwarePlatform/" />
fetch="https://github.com/ROCmSoftwarePlatform/" />
<remote name="gpuopen-libs"
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
<remote name="gpuopen-tools"
fetch="https://github.com/GPUOpen-Tools/" />
<remote name="KhronosGroup"
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-5.5.1"
remote="roc-github"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCM-->
fetch="https://github.com/GPUOpen-Tools/" />
<remote name="KhronosGroup"
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-3.5.0"
remote="roc-github"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCM-->
<project name="ROCK-Kernel-Driver" />
<project name="ROCT-Thunk-Interface" />
<project name="ROCR-Runtime" />
<project name="rocm_smi_lib" />
<project name="rocm-core" />
<project name="ROC-smi" />
<project name="rocm_smi_lib" remote="roc-github" />
<project name="rocm-cmake" />
<project name="rocminfo" />
<project name="rocprofiler" remote="rocm-devtools" />
<project name="roctracer" remote="rocm-devtools" />
<project name="ROCm-OpenCL-Runtime" />
<project name="ROCm-OpenCL-Runtime" revision="refs/tags/roc-3.5.0" />
<project path="ROCm-OpenCL-Runtime/api/opencl/khronos/icd" name="OpenCL-ICD-Loader" remote="KhronosGroup" revision="6c03f8b58fafd9dd693eaac826749a5cfad515f8" />
<project name="clang-ocl" />
<!--HIP Projects-->
<!--HIP Projects-->
<project name="HCC-Example-Application" remote="rocm-devtools" revision="ffd6533305e79eed667badd3c4cdb7879a1281b8" />
<project name="HIP" remote="rocm-devtools" />
<project name="hipamd" remote="rocm-devtools" />
<project name="HIP-Examples" remote="rocm-devtools" />
<project name="ROCclr" remote="rocm-devtools" />
<project name="HIPIFY" remote="rocm-devtools" />
<project name="HIPCC" remote="rocm-devtools" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm-project" />
<project name="ROCclr" remote="rocm-devtools" revision="refs/tags/roc-3.5.0" />
<project name="HIPIFY" remote="rocm-devtools" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm-project" path="llvm_amd-stg-open" />
<project name="ROCm-Device-Libs" />
<project name="atmi" />
<project name="ROCm-CompilerSupport" />
<project name="rocr_debug_agent" remote="rocm-devtools" />
<project name="rocr_debug_agent" remote="rocm-devtools" revision="refs/tags/roc-3.5.0" />
<project name="rocm_bandwidth_test" />
<project name="half" remote="rocm-swplat" revision="37742ce15b76b44e4b271c1e66d13d2fa7bd003e" />
<project name="RCP" remote="gpuopen-tools" revision="3a49405a1500067c49d181844ec90aea606055bb" />
<!-- gdb projects -->
<!-- gdb projects -->
<project name="ROCgdb" remote="rocm-devtools" />
<project name="ROCdbgapi" remote="rocm-devtools" />
<!-- ROCm Libraries -->
<project name="rdc" />
<project groups="mathlibs" name="rocBLAS" remote="rocm-swplat" />
<project groups="mathlibs" name="Tensile" remote="rocm-swplat" />
<project groups="mathlibs" name="hipBLAS" remote="rocm-swplat" />
<project groups="mathlibs" name="rocFFT" remote="rocm-swplat" />
<project groups="mathlibs" name="hipFFT" remote="rocm-swplat" />
<project groups="mathlibs" name="rocRAND" remote="rocm-swplat" />
<project groups="mathlibs" name="rocSPARSE" remote="rocm-swplat" />
<project groups="mathlibs" name="rocSOLVER" remote="rocm-swplat" />
<project groups="mathlibs" name="hipSOLVER" remote="rocm-swplat" />
<project groups="mathlibs" name="hipSPARSE" remote="rocm-swplat" />
<project groups="mathlibs" name="rocALUTION" remote="rocm-swplat" />
<project name="MIOpenGEMM" remote="rocm-swplat" />
<!-- ROCm Libraries -->
<project name="rocBLAS" remote="rocm-swplat" />
<project name="hipBLAS" remote="rocm-swplat" />
<project name="rocFFT" remote="rocm-swplat" />
<project name="rocRAND" remote="rocm-swplat" />
<project name="rocSPARSE" remote="rocm-swplat" />
<project name="hipSPARSE" remote="rocm-swplat" />
<project name="rocALUTION" remote="rocm-swplat" />
<project name="MIOpenGEMM" remote="rocm-swplat" revision="refs/tags/1.1.6" />
<project name="MIOpen" remote="rocm-swplat" />
<project groups="mathlibs" name="rccl" remote="rocm-swplat" />
<project name="MIVisionX" remote="gpuopen-libs" />
<project groups="mathlibs" name="rocThrust" remote="rocm-swplat" />
<project groups="mathlibs" name="hipCUB" remote="rocm-swplat" />
<project groups="mathlibs" name="rocPRIM" remote="rocm-swplat" />
<project groups="mathlibs" name="rocWMMA" remote="rocm-swplat" />
<project name="hipfort" remote="rocm-swplat" />
<project name="rccl" remote="rocm-swplat" />
<project name="MIVisionX" remote="gpuopen-libs" revision="refs/tags/1.7" />
<project name="rocThrust" remote="rocm-swplat" />
<project name="hipCUB" remote="rocm-swplat" />
<project name="rocPRIM" remote="rocm-swplat" />
<project name="AMDMIGraphX" remote="rocm-swplat" />
<project name="ROCmValidationSuite" remote="rocm-devtools" />
<!-- Projects for OpenMP-Extras -->
<project name="aomp" path="openmp-extras/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="openmp-extras/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="openmp-extras/flang" remote="rocm-devtools" />
<!-- Projects for AOMP -->
<project name="ROCT-Thunk-Interface" path="aomp/roct-thunk-interface" remote="roc-github" />
<project name="ROCR-Runtime" path="aomp/rocr-runtime" remote="roc-github" />
<project name="ROCm-Device-Libs" path="aomp/rocm-device-libs" remote="roc-github" />
<project name="ROCm-CompilerSupport" path="aomp/rocm-compilersupport" remote="roc-github" />
<project name="rocminfo" path="aomp/rocminfo" remote="roc-github" />
<project name="HIP" path="aomp/hip-on-vdi" remote="rocm-devtools" revision="ffcbd7e63395f8a4d3ccb7e4d5133f8d2dde793e" />
<project name="aomp" path="aomp/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="aomp/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="aomp/flang" remote="rocm-devtools" />
<project name="amd-llvm-project" path="aomp/amd-llvm-project" remote="rocm-devtools" />
<project name="ROCclr" path="aomp/vdi" remote="rocm-devtools" revision="72ce2c9783d514fc7da94db40f9f420320df098d" />
<project name="ROCm-OpenCL-Runtime" path="aomp/opencl-on-vdi" remote="roc-github" revision="12fb33212c99cb4b596b0f34691e7d044218e3e9" />
</manifest>

View File

@@ -1,6 +0,0 @@
# 404 Page Not Found
Page could not be found.
Return to [home](./index) or use the links in the sidebar to find what you are
looking for.

View File

@@ -1,74 +0,0 @@
# About ROCm Documentation
ROCm documentation is made available under open source [licenses](licensing.md).
Documentation is built using open source toolchains. Contributions to our
documentation is encouraged and welcome. As a contributor, please familiarize
yourself with our documentation toolchain.
## ReadTheDocs
[ReadTheDocs](https://docs.readthedocs.io/en/stable/) is the front end for our
documentation, that is, the tool that serves our HTML-based documentation to end
users.
## Doxygen
[Doxygen](https://www.doxygen.nl/) is the most common inline code documentation
standard. ROCm projects use Doxygen for public API documentation (unless the
upstream project uses a different tool).
## Sphinx
[Sphinx](https://www.sphinx-doc.org/en/master/) is a documentation generator
originally created for Python. It is now widely used in the open source community.
Sphinx originally supported RST-based documentation; Markdown support is now
available. ROCm documentation plans to default to Markdown for new projects.
Existing projects using RST are under no obligation to convert to Markdown. New
projects that believe Markdown is not suitable should contact the documentation
team prior to selecting RST.
### MyST
[Markedly Structured Text (MyST)](https://myst-tools.org/docs/spec) is an extended
flavor of Markdown ([CommonMark](https://commonmark.org/)) influenced by reStructuredText (RST) and Sphinx.
It is integrated via [`myst-parser`](https://myst-parser.readthedocs.io/en/latest/).
A cheat sheet that showcases how to use the MyST syntax is available over at [the Jupyter
reference](https://jupyterbook.org/en/stable/reference/cheatsheet.html).
### Sphinx Theme
ROCm is using the
[Sphinx Book Theme](https://sphinx-book-theme.readthedocs.io/en/latest/). This
theme is used by Jupyter Books. ROCm documentation applies some customizations,
including a header and footer, on top of the Sphinx Book Theme. A future custom
ROCm theme is part of our documentation goals.
### Sphinx Design
Sphinx Design is an extension for Sphinx-based websites that adds design
functionality. Please see the documentation
[here](https://sphinx-design.readthedocs.io/en/latest/index.html). ROCm
documentation uses sphinx design for grids, cards, and synchronized tabs.
Other features may be used in the future.
### Sphinx External TOC
ROCm uses the
[sphinx-external-toc](https://sphinx-external-toc.readthedocs.io/en/latest/intro.html)
for our navigation. This tool provides a YAML-based left navigation menu. It was
selected for its flexibility: scripts can operate on the YAML file. Please
transition to this file for the project's navigation. You can see the
`_toc.yml.in` file in the docs/sphinx folder of this repository for an example; a
minimal sketch is also shown below.
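The following is a small, hypothetical sketch of such a file; the entries are placeholders and do not reflect this repository's actual navigation.

```yaml
# Minimal sketch of a sphinx-external-toc navigation file (hypothetical entries).
root: index
subtrees:
  - caption: Deploy
    entries:
      - file: deploy/linux/index
      - file: deploy/docker
```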
### Breathe
Sphinx uses [Breathe](https://www.breathe-doc.org/) to integrate Doxygen
content.
## `rocm-docs-core` pip package
[rocm-docs-core](https://github.com/RadeonOpenCompute/rocm-docs-core) is an AMD
maintained project that applies customization for our documentation. This
project is the tool most ROCm repositories will use as part of the documentation
build.

View File

@@ -1,76 +0,0 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
import shutil
from rocm_docs import ROCmDocs
shutil.copy2('../CONTRIBUTING.md','./contributing.md')
shutil.copy2('../RELEASE.md','./release.md')
# Keep capitalization due to similar linking on GitHub's markdown preview.
shutil.copy2('../CHANGELOG.md','./CHANGELOG.md')
# configurations for PDF output by Read the Docs
project = "ROCm Documentation"
author = "Advanced Micro Devices, Inc."
copyright = "Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved."
version = "5.4.0"
release = "5.4.0"
setting_all_article_info = True
all_article_info_os = ["linux"]
all_article_info_author = ""
# pages with specific settings
article_pages = [
{"file":"deploy/linux/index", "os":["linux"]},
{"file":"deploy/linux/install_overview", "os":["linux"]},
{"file":"deploy/linux/prerequisites", "os":["linux"]},
{"file":"deploy/linux/quick_start", "os":["linux"]},
{"file":"deploy/linux/install", "os":["linux"]},
{"file":"deploy/linux/upgrade", "os":["linux"]},
{"file":"deploy/linux/uninstall", "os":["linux"]},
{"file":"deploy/linux/package_manager_integration", "os":["linux"]},
{"file":"deploy/docker", "os":["linux"]},
{"file":"release/gpu_os_support", "os":["linux"]},
{"file":"release/docker_support_matrix", "os":["linux"]},
{"file":"reference/gpu_libraries/communication", "os":["linux"]},
{"file":"reference/ai_tools", "os":["linux"]},
{"file":"reference/management_tools", "os":["linux"]},
{"file":"reference/validation_tools", "os":["linux"]},
{"file":"reference/framework_compatibility/framework_compatibility", "os":["linux"]},
{"file":"reference/computer_vision", "os":["linux"]},
{"file":"how_to/deep_learning_rocm", "os":["linux"]},
{"file":"how_to/gpu_aware_mpi", "os":["linux"]},
{"file":"how_to/magma_install/magma_install", "os":["linux"]},
{"file":"how_to/pytorch_install/pytorch_install", "os":["linux"]},
{"file":"how_to/system_debugging", "os":["linux"]},
{"file":"how_to/tensorflow_install/tensorflow_install", "os":["linux"]},
{"file":"examples/machine_learning", "os":["linux"]},
{"file":"examples/inception_casestudy/inception_casestudy", "os":["linux"]},
{"file":"understand/file_reorg", "os":["linux"]},
{"file":"understand/isv_deployment_win", "os":["windows"]},
]
external_toc_path = "./sphinx/_toc.yml"
docs_core = ROCmDocs("ROCm 5.4.0 Documentation Home")
docs_core.setup()
external_projects_current_project = "rocm"
for sphinx_var in ROCmDocs.SPHINX_VARS:
globals()[sphinx_var] = getattr(docs_core, sphinx_var)
html_theme_options = {
"link_main_doc": False
}

(Numerous binary image files were removed in this diff; contents not shown.)
View File

@@ -1,90 +0,0 @@
# Deploy ROCm Docker containers
## Prerequisites
Docker containers share the kernel with the host operating system; therefore, the
ROCm kernel-mode driver must be installed on the host. Please refer to
{ref}`using-the-package-manager` for installing `amdgpu-dkms`. The other
user-space parts of the ROCm stack (such as the HIP runtime or math libraries)
will be loaded from the container image and don't need to be installed on the host.
(docker-access-gpus-in-container)=
## Accessing GPUs in containers
In order to access GPUs in a container (to run applications using HIP, OpenCL, or
OpenMP offloading), explicit access to the GPUs must be granted.
The ROCm runtimes make use of multiple device files:
- `/dev/kfd`: the main compute interface shared by all GPUs
- `/dev/dri/renderD<node>`: direct rendering interface (DRI) devices for each
GPU. **`<node>`** is a number for each card in the system starting from 128.
Exposing these devices to a container is done by using the
[`--device`](https://docs.docker.com/engine/reference/commandline/run/#device)
option. For example, to allow access to all GPUs, expose `/dev/kfd` and all
`/dev/dri/renderD` devices:
```shell
docker run --device /dev/kfd --device /dev/dri/renderD128 --device /dev/dri/renderD129 ...
```
More conveniently, instead of listing all devices, the entire `/dev/dri` folder
can be exposed to the new container:
```shell
docker run --device /dev/kfd --device /dev/dri
```
Note that this gives more access than strictly required, as it also exposes the
other device files found in that folder to the container.
(docker-restrict-gpus)=
### Restricting a container to a subset of the GPUs
If a `/dev/dri/renderD` device is not exposed to a container, then the container
cannot use the GPU associated with it; this makes it possible to restrict a
container to any subset of devices.
For example, to allow the container to access the first and third GPUs, start it
as follows:
```shell
docker run --device /dev/kfd --device /dev/dri/renderD128 --device /dev/dri/renderD130 <image>
```
### Additional Options
The performance of an application can vary depending on the assignment of GPUs
and CPUs to the task. Typically, `numactl` is installed as part of many HPC
applications to provide GPU/CPU mappings. The following Docker runtime option
supports memory mapping and can improve performance:
```shell
--security-opt seccomp=unconfined
```
This option is recommended for Docker containers running HPC applications.
```shell
docker run --device /dev/kfd --device /dev/dri --security-opt seccomp=unconfined ...
```
## Docker images in the ROCm ecosystem
### Base images
<https://github.com/RadeonOpenCompute/ROCm-docker> hosts images useful for users
wishing to build their own containers leveraging ROCm. The built images are
available from [Docker Hub](https://hub.docker.com/u/rocm). In particular
`rocm/rocm-terminal` is a small image with the prerequisites to build HIP
applications, but does not include any libraries.
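For example, a typical way to start this image with GPU access might look like the following; the options shown are illustrative and mirror the access flags described above.

```shell
# Hypothetical example: start an interactive ROCm-ready shell with GPU access.
docker pull rocm/rocm-terminal
docker run -it --device /dev/kfd --device /dev/dri --security-opt seccomp=unconfined rocm/rocm-terminal
```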
### Applications
AMD provides pre-built images for various GPU-ready applications through its
Infinity Hub at <https://www.amd.com/en/technologies/infinity-hub>.
Examples for invoking each application and suggested parameters used for
benchmarking are also provided there.

View File

@@ -1,53 +0,0 @@
# Deploy ROCm on Linux
Start with {doc}`/deploy/linux/quick_start` or follow the detailed
instructions below.
## Prepare to Install
::::{grid} 1 1 2 2
:gutter: 1
:::{grid-item-card} Prerequisites
:link: prerequisites
:link-type: doc
The prerequisites page lists the required steps *before* installation.
:::
:::{grid-item-card} Install Choices
:link: install_overview
:link-type: doc
Package manager vs AMDGPU Installer
Standard Packages vs Multi-Version Packages
:::
::::
## Choose your install method
::::{grid} 1 1 2 2
:gutter: 1
:::{grid-item-card} Package Manager
:link: os-native/index
:link-type: doc
Directly use your distribution's package manager to install ROCm.
:::
:::{grid-item-card} AMDGPU Installer
:link: installer/index
:link-type: doc
Use an installer tool that orchestrates changes via the package
manager.
:::
::::
## See Also
- {doc}`/release/gpu_os_support`

View File

@@ -1,71 +0,0 @@
# ROCm Installation Options (Linux)
Users installing ROCm must choose between various installation options. A new
user should follow the [Quick Start guide](./quick_start).
## Package Manager versus AMDGPU Installer?
ROCm supports two methods for installation:
- Directly using the Linux distribution's package manager
- The `amdgpu-install` script
There is no difference in the final installation state when choosing either
option.
Using the distribution's package manager lets the user install, upgrade, and
uninstall ROCm using familiar commands and workflows. Third-party ecosystem
support is the same as that of your OS package manager.
The `amdgpu-install` script is a wrapper around the package manager. The script
installs the same packages as the package manager method.
The installer automates the installation process for the AMDGPU
and ROCm stack. It handles the complete installation process
for ROCm, including setting up the repository, cleaning the system, updating,
and installing the desired drivers and meta-packages. Users who are
less familiar with the package manager can choose this method for ROCm
installation.
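As a rough sketch of the two methods (the distribution, package name, and use case below are assumptions; exact names vary by OS and ROCm release):

```shell
# Method 1: distribution package manager (Ubuntu shown as an example).
sudo apt update
sudo apt install rocm-hip-sdk

# Method 2: the amdgpu-install wrapper, which installs the same packages.
sudo amdgpu-install --usecase=rocm
```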
(installation-types)=
## Single Version ROCm install versus Multi-Version
ROCm packages are versioned both with package-specific semantic versioning and
with a ROCm release version.
### Single-version Installation
The single-version ROCm installation refers to the following:
- Installation of a single instance of the ROCm release on a system
- Use of non-versioned ROCm meta-packages
### Multi-version Installation
The multi-version installation refers to the following:
- Installation of multiple instances of the ROCm stack on a system. Extending
the package name and its dependencies with the release version adds the
ability to support multiple versions of packages simultaneously.
- Use of versioned ROCm meta-packages.
```{attention}
ROCm packages that were previously installed from a single-version installation
must be removed before proceeding with the multi-version installation to avoid
conflicts.
```
```{note}
Multi-version install is not available for the kernel driver module, also referred to as AMDGPU.
```
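A hedged sketch of a multi-version setup on a Debian-based system follows; the release numbers, package names, and the `--rocmrelease` option are placeholders that depend on the ROCm release.

```shell
# Versioned meta-packages let several ROCm releases coexist on one system.
sudo apt install rocm-hip-sdk5.4.0 rocm-hip-sdk5.3.0

# With the installer script, the release can be selected explicitly.
sudo amdgpu-install --usecase=rocm --rocmrelease=5.4.0
```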
The following image demonstrates the difference between single-version and
multi-version ROCm installation types:
```{figure-md} install-types
<img src="/data/deploy/linux/image.001.png" alt="">
ROCm Installation Types
```

View File

@@ -1,31 +0,0 @@
# AMDGPU Install Script
::::{grid} 2 3 3 3
:gutter: 1
:::{grid-item-card} Install
:link: install
:link-type: doc
How to install ROCm?
:::
:::{grid-item-card} Upgrade
:link: upgrade
:link-type: doc
Instructions for upgrading an existing ROCm installation.
:::
:::{grid-item-card} Uninstall
:link: uninstall
:link-type: doc
Steps for removing ROCm packages, libraries, and tools.
:::
::::
## See Also
- {doc}`/release/gpu_os_support`

Some files were not shown because too many files have changed in this diff.