Compare commits


7 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Roopa Malavally | 010d529b5a | Update ROCm_Installation_Guide_v3.7.md | 2020-09-29 12:27:58 -07:00 |
| Roopa Malavally | 862dcd9d45 | Update ROCm_Installation_Guide_v3.7.md | 2020-09-29 12:27:32 -07:00 |
| Roopa Malavally | 1d4428ed08 | Update ROCm_Installation_Guide_v3.7.md | 2020-09-29 12:27:15 -07:00 |
| Roopa Malavally | f3d2c2e4c5 | Update README.md | 2020-09-29 12:24:59 -07:00 |
| Roopa Malavally | 8946653ba4 | Update README.md | 2020-09-29 12:24:32 -07:00 |
| Roopa Malavally | 5121abd825 | Add files via upload | 2020-09-29 12:23:43 -07:00 |
| Roopa Malavally | b54610ef82 | Add files via upload | 2020-09-29 12:23:08 -07:00 |
195 changed files with 1425 additions and 15423 deletions

.github/CODEOWNERS

@@ -1 +0,0 @@
* @saadrahim @Rmalavally @amd-aakash @zhang2amd @jlgreathouse @samjwu @MathiasMagnus


@@ -1,12 +0,0 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "pip" # See documentation for possible values
directory: "/docs/sphinx" # Location of package manifests
open-pull-requests-limit: 10
schedule:
interval: "daily"


@@ -1,56 +0,0 @@
name: Linting
on:
push:
branches:
- develop
- main
pull_request:
branches:
- develop
- main
concurrency:
group: ${{ github.ref }}-${{ github.workflow }}
cancel-in-progress: true
jobs:
lint-rest:
name: "RestructuredText"
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Install rst-lint
run: pip install restructuredtext-lint
- name: Lint ResT files
run: rst-lint ${{ join(github.workspace, '/docs') }}
lint-md:
name: "Markdown"
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Use markdownlint-cli2
uses: DavidAnson/markdownlint-cli2-action@v10.0.1
with:
globs: '**/*.md'
spelling:
name: "Spelling"
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Fetch config
shell: sh
run: |
curl --silent --show-error --fail --location https://raw.github.com/RadeonOpenCompute/rocm-docs-core/develop/.spellcheck.yaml -O
curl --silent --show-error --fail --location https://raw.github.com/RadeonOpenCompute/rocm-docs-core/develop/.wordlist.txt >> .wordlist.txt
- name: Run spellcheck
uses: rojopolis/spellcheck-github-actions@0.30.0
- name: On fail
if: failure()
run: |
echo "Please check for spelling mistakes or add them to '.wordlist.txt' in either the root of this project or in rocm-docs-core."

.gitignore

@@ -1,18 +0,0 @@
.venv
.vscode
build
# documentation artifacts
_build/
_images/
_static/
_templates/
_toc.yml
docBin/
_doxygen/
_readthedocs/
# avoid duplicating contributing.md due to conf.py
docs/contributing.md
docs/release.md
docs/CHANGELOG.md


@@ -1,14 +0,0 @@
config:
default: true
MD013: false
MD026:
punctuation: '.,;:!'
MD029:
style: ordered
MD033: false
MD034: false
MD041: false
ignores:
- CHANGELOG.md
- "{,docs/}{RELEASE,release}.md"
- tools/autotag/templates/**/*.md


@@ -1,21 +0,0 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
version: 2
build:
os: ubuntu-22.04
tools:
python: "3.10"
apt_packages:
- "doxygen"
- "graphviz" # For dot graphs in doxygen
python:
install:
- requirements: docs/sphinx/requirements.txt
sphinx:
configuration: docs/conf.py
formats: []


@@ -1,29 +0,0 @@
# isv_deployment_win
ABI
# gpu_aware_mpi
DMA
GDR
HCA
MPI
MVAPICH
Mellanox's
NIC
OFED
OSU
OpenFabrics
PeerDirect
RDMA
UCX
ib_core
# linear algebra
LAPACK
MMA
backends
cuSOLVER
cuSPARSE
# tuning_guides
BMC
DGEMM
HPCG
HPL
IOPM



@@ -1,813 +0,0 @@
# Release Notes
<!-- Do not edit this file! This file is autogenerated with -->
<!-- tools/autotag/tag_script.py -->
<!-- Disable lints since this is an auto-generated file. -->
<!-- markdownlint-disable blanks-around-headers -->
<!-- markdownlint-disable no-duplicate-header -->
<!-- markdownlint-disable no-blanks-blockquote -->
<!-- markdownlint-disable ul-indent -->
<!-- markdownlint-disable no-trailing-spaces -->
<!-- spellcheck-disable -->
The release notes for the ROCm platform.
-------------------
## ROCm 5.0.2
<!-- markdownlint-disable first-line-h1 -->
### Fixed Defects
The following defects are fixed in the ROCm v5.0.2 release.
#### Issue with hostcall Facility in HIP Runtime
In ROCm v5.0, when using the “assert()” call in a HIP kernel, the compiler may sometimes fail to emit kernel metadata related to the hostcall facility, which results in incomplete initialization of the hostcall facility in the HIP runtime. This can cause the HIP kernel to crash when it attempts to execute the “assert()” call.
The root cause was an incorrect check in the compiler to determine whether the hostcall facility is required by the kernel. This is fixed in the ROCm v5.0.2 release.
The resolution includes a compiler change, which emits the required metadata by default, unless the compiler can prove that the hostcall facility is not required by the kernel. This ensures that the “assert()” call never fails.
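For context, a minimal sketch of the kind of kernel affected (the kernel and variable names are hypothetical, not from the release notes); it simply exercises a device-side `assert()`, which depends on the hostcall facility:
```cpp
#include <hip/hip_runtime.h>
#include <cassert>
#include <vector>

// Hypothetical example: a device-side assert() relies on the hostcall
// facility, whose metadata the ROCm 5.0 compiler sometimes failed to emit.
__global__ void check_positive(const int* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        assert(data[i] > 0);  // could crash at runtime when the metadata was missing
    }
}

int main() {
    std::vector<int> host(256, 1);
    int* dev = nullptr;
    hipMalloc(&dev, host.size() * sizeof(int));
    hipMemcpy(dev, host.data(), host.size() * sizeof(int), hipMemcpyHostToDevice);
    hipLaunchKernelGGL(check_positive, dim3(1), dim3(256), 0, 0, dev, (int)host.size());
    hipDeviceSynchronize();
    hipFree(dev);
    return 0;
}
```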
> **Note**
>
> This fix may lead to breakage in some OpenMP offload use cases, which use print inside a target region and result in an abort in device code. The issue will be fixed in a future release.
#### Compatibility Matrix Updates to ROCm Deep Learning Guide
The compatibility matrix in the AMD Deep Learning Guide is updated for ROCm v5.0.2.
### Library Changes in ROCM 5.0.2
| Library | Version |
|---------|---------|
| hipBLAS | [0.49.0](https://github.com/ROCmSoftwarePlatform/hipBLAS/releases/tag/rocm-5.0.2) |
| hipCUB | [2.10.13](https://github.com/ROCmSoftwarePlatform/hipCUB/releases/tag/rocm-5.0.2) |
| hipFFT | [1.0.4](https://github.com/ROCmSoftwarePlatform/hipFFT/releases/tag/rocm-5.0.2) |
| hipSOLVER | [1.2.0](https://github.com/ROCmSoftwarePlatform/hipSOLVER/releases/tag/rocm-5.0.2) |
| hipSPARSE | [2.0.0](https://github.com/ROCmSoftwarePlatform/hipSPARSE/releases/tag/rocm-5.0.2) |
| rccl | [2.10.3](https://github.com/ROCmSoftwarePlatform/rccl/releases/tag/rocm-5.0.2) |
| rocALUTION | [2.0.1](https://github.com/ROCmSoftwarePlatform/rocALUTION/releases/tag/rocm-5.0.2) |
| rocBLAS | [2.42.0](https://github.com/ROCmSoftwarePlatform/rocBLAS/releases/tag/rocm-5.0.2) |
| rocFFT | [1.0.13](https://github.com/ROCmSoftwarePlatform/rocFFT/releases/tag/rocm-5.0.2) |
| rocPRIM | [2.10.12](https://github.com/ROCmSoftwarePlatform/rocPRIM/releases/tag/rocm-5.0.2) |
| rocRAND | [2.10.12](https://github.com/ROCmSoftwarePlatform/rocRAND/releases/tag/rocm-5.0.2) |
| rocSOLVER | [3.16.0](https://github.com/ROCmSoftwarePlatform/rocSOLVER/releases/tag/rocm-5.0.2) |
| rocSPARSE | [2.0.0](https://github.com/ROCmSoftwarePlatform/rocSPARSE/releases/tag/rocm-5.0.2) |
| rocThrust | [2.13.0](https://github.com/ROCmSoftwarePlatform/rocThrust/releases/tag/rocm-5.0.2) |
| Tensile | [4.31.0](https://github.com/ROCmSoftwarePlatform/Tensile/releases/tag/rocm-5.0.2) |
-------------------
## ROCm 5.0.1
<!-- markdownlint-disable first-line-h1 -->
### Deprecations and Warnings
#### Refactor of HIPCC/HIPCONFIG
In prior ROCm releases, by default, the hipcc/hipconfig Perl scripts were used to identify and set target compiler options, target platform, compiler, and runtime appropriately.
In ROCm v5.0.1, hipcc.bin and hipconfig.bin have been added as the compiled binary implementations of hipcc and hipconfig. These new binaries are currently a work in progress and are considered experimental. ROCm plans to fully transition to hipcc.bin and hipconfig.bin in a future ROCm release. The existing hipcc and hipconfig Perl scripts are renamed to hipcc.pl and hipconfig.pl respectively. New top-level hipcc and hipconfig Perl scripts are created, which can switch between the Perl script or the compiled binary based on the environment variable HIPCC_USE_PERL_SCRIPT.
In ROCm 5.0.1, by default, this environment variable is set to use hipcc and hipconfig through the Perl scripts.
The Perl scripts will no longer be available in ROCm in a future release.
### Library Changes in ROCM 5.0.1
| Library | Version |
|---------|---------|
| hipBLAS | [0.49.0](https://github.com/ROCmSoftwarePlatform/hipBLAS/releases/tag/rocm-5.0.1) |
| hipCUB | [2.10.13](https://github.com/ROCmSoftwarePlatform/hipCUB/releases/tag/rocm-5.0.1) |
| hipFFT | [1.0.4](https://github.com/ROCmSoftwarePlatform/hipFFT/releases/tag/rocm-5.0.1) |
| hipSOLVER | [1.2.0](https://github.com/ROCmSoftwarePlatform/hipSOLVER/releases/tag/rocm-5.0.1) |
| hipSPARSE | [2.0.0](https://github.com/ROCmSoftwarePlatform/hipSPARSE/releases/tag/rocm-5.0.1) |
| rccl | [2.10.3](https://github.com/ROCmSoftwarePlatform/rccl/releases/tag/rocm-5.0.1) |
| rocALUTION | [2.0.1](https://github.com/ROCmSoftwarePlatform/rocALUTION/releases/tag/rocm-5.0.1) |
| rocBLAS | [2.42.0](https://github.com/ROCmSoftwarePlatform/rocBLAS/releases/tag/rocm-5.0.1) |
| rocFFT | [1.0.13](https://github.com/ROCmSoftwarePlatform/rocFFT/releases/tag/rocm-5.0.1) |
| rocPRIM | [2.10.12](https://github.com/ROCmSoftwarePlatform/rocPRIM/releases/tag/rocm-5.0.1) |
| rocRAND | [2.10.12](https://github.com/ROCmSoftwarePlatform/rocRAND/releases/tag/rocm-5.0.1) |
| rocSOLVER | [3.16.0](https://github.com/ROCmSoftwarePlatform/rocSOLVER/releases/tag/rocm-5.0.1) |
| rocSPARSE | [2.0.0](https://github.com/ROCmSoftwarePlatform/rocSPARSE/releases/tag/rocm-5.0.1) |
| rocThrust | [2.13.0](https://github.com/ROCmSoftwarePlatform/rocThrust/releases/tag/rocm-5.0.1) |
| Tensile | [4.31.0](https://github.com/ROCmSoftwarePlatform/Tensile/releases/tag/rocm-5.0.1) |
-------------------
## ROCm 5.0.0
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable no-blanks-blockquote -->
### What's New in This Release
#### HIP Enhancements
The ROCm v5.0 release consists of the following HIP enhancements.
##### HIP Installation Guide Updates
The HIP Installation Guide is updated to include building HIP from source on the NVIDIA platform.
Refer to the HIP Installation Guide v5.0 for more details.
##### Managed Memory Allocation
Managed memory, including the `__managed__` keyword, is now supported in the HIP combined host/device compilation. Through unified memory allocation, managed memory allows data to be shared and accessible to both the CPU and GPU using a single pointer. The allocation is managed by the AMD GPU driver using the Linux Heterogeneous Memory Management (HMM) mechanism. The user can call managed memory API hipMallocManaged to allocate a large chunk of HMM memory, execute kernels on a device, and fetch data between the host and device as needed.
> **Note**
>
> In a HIP application, it is recommended to do a capability check before calling the managed memory APIs. For example,
>
> ```cpp
> int managed_memory = 0;
> HIPCHECK(hipDeviceGetAttribute(&managed_memory,
> hipDeviceAttributeManagedMemory,p_gpuDevice));
> if (!managed_memory ) {
> printf ("info: managed memory access not supported on the device %d\n Skipped\n", p_gpuDevice);
> }
> else {
> HIPCHECK(hipSetDevice(p_gpuDevice));
> HIPCHECK(hipMallocManaged(&Hmm, N * sizeof(T)));
> . . .
> }
> ```
> **Note**
>
> The managed memory capability check may not be necessary; however, if HMM is not supported, managed malloc will fall back to using system memory. Other managed memory API calls will then have undefined behavior.
Refer to the HIP API documentation for more details on managed memory APIs.
For a sample application, see
<https://github.com/ROCm-Developer-Tools/HIP/blob/rocm-4.5.x/tests/src/runtimeApi/memory/hipMallocManaged.cpp>
#### New Environment Variable
The following new environment variable is added in this release:
| Environment Variable | Value | Description |
|----------------------|-----------------------|-------------|
| HSA_COOP_CU_COUNT | 0 or 1 (default is 0) | Some processors support more CUs than can reliably be used in a cooperative dispatch. Setting the environment variable HSA_COOP_CU_COUNT to 1 will cause ROCr to return the correct CU count for cooperative groups through the HSA_AMD_AGENT_INFO_COOPERATIVE_COMPUTE_UNIT_COUNT attribute of hsa_agent_get_info(). Setting HSA_COOP_CU_COUNT to other values, or leaving it unset, will cause ROCr to return the same CU count for the attributes HSA_AMD_AGENT_INFO_COOPERATIVE_COMPUTE_UNIT_COUNT and HSA_AMD_AGENT_INFO_COMPUTE_UNIT_COUNT. Future ROCm releases will make HSA_COOP_CU_COUNT=1 the default. |
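As an illustration only (not from the release notes), a minimal ROCr sketch that queries the cooperative CU count described above using the `hsa_agent_get_info()` attribute named in the table; the header paths and the use of a simple agent-iteration callback are assumptions, and `HSA_COOP_CU_COUNT=1` is assumed to be exported before the runtime initializes:
```cpp
#include <hsa/hsa.h>
#include <hsa/hsa_ext_amd.h>
#include <cstdint>
#include <cstdio>

// Callback invoked for every HSA agent; prints the cooperative CU count
// for GPU agents using the attribute described in the table above.
static hsa_status_t print_coop_cus(hsa_agent_t agent, void*) {
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
    if (type != HSA_DEVICE_TYPE_GPU) return HSA_STATUS_SUCCESS;

    uint32_t coop_cus = 0;
    hsa_agent_get_info(agent,
        (hsa_agent_info_t)HSA_AMD_AGENT_INFO_COOPERATIVE_COMPUTE_UNIT_COUNT,
        &coop_cus);
    printf("cooperative CU count: %u\n", coop_cus);
    return HSA_STATUS_SUCCESS;
}

int main() {
    // HSA_COOP_CU_COUNT is read by ROCr at initialization time, so it must
    // already be set in the environment before hsa_init() runs.
    hsa_init();
    hsa_iterate_agents(print_coop_cus, nullptr);
    hsa_shut_down();
    return 0;
}
```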
### Breaking Changes
#### Runtime Breaking Change
The enumerated type `hipDeviceAttribute_t` in hip_runtime_api.h has been reordered to better match the NVIDIA equivalents. See below for the difference in enumerated types.
ROCm software will be affected if any of the defined enums listed below are used in the code. Applications built with ROCm v5.0 enumerated types will work with a ROCm 4.5.2 driver. However, an undefined behavior error will occur with a ROCm v4.5.2 application that uses these enumerated types with a ROCm 5.0 runtime.
```diff
typedef enum hipDeviceAttribute_t {
- hipDeviceAttributeMaxThreadsPerBlock, ///< Maximum number of threads per block.
- hipDeviceAttributeMaxBlockDimX, ///< Maximum x-dimension of a block.
- hipDeviceAttributeMaxBlockDimY, ///< Maximum y-dimension of a block.
- hipDeviceAttributeMaxBlockDimZ, ///< Maximum z-dimension of a block.
- hipDeviceAttributeMaxGridDimX, ///< Maximum x-dimension of a grid.
- hipDeviceAttributeMaxGridDimY, ///< Maximum y-dimension of a grid.
- hipDeviceAttributeMaxGridDimZ, ///< Maximum z-dimension of a grid.
- hipDeviceAttributeMaxSharedMemoryPerBlock, ///< Maximum shared memory available per block in
- ///< bytes.
- hipDeviceAttributeTotalConstantMemory, ///< Constant memory size in bytes.
- hipDeviceAttributeWarpSize, ///< Warp size in threads.
- hipDeviceAttributeMaxRegistersPerBlock, ///< Maximum number of 32-bit registers available to a
- ///< thread block. This number is shared by all thread
- ///< blocks simultaneously resident on a
- ///< multiprocessor.
- hipDeviceAttributeClockRate, ///< Peak clock frequency in kilohertz.
- hipDeviceAttributeMemoryClockRate, ///< Peak memory clock frequency in kilohertz.
- hipDeviceAttributeMemoryBusWidth, ///< Global memory bus width in bits.
- hipDeviceAttributeMultiprocessorCount, ///< Number of multiprocessors on the device.
- hipDeviceAttributeComputeMode, ///< Compute mode that device is currently in.
- hipDeviceAttributeL2CacheSize, ///< Size of L2 cache in bytes. 0 if the device doesn't have L2
- ///< cache.
- hipDeviceAttributeMaxThreadsPerMultiProcessor, ///< Maximum resident threads per
- ///< multiprocessor.
- hipDeviceAttributeComputeCapabilityMajor, ///< Major compute capability version number.
- hipDeviceAttributeComputeCapabilityMinor, ///< Minor compute capability version number.
- hipDeviceAttributeConcurrentKernels, ///< Device can possibly execute multiple kernels
- ///< concurrently.
- hipDeviceAttributePciBusId, ///< PCI Bus ID.
- hipDeviceAttributePciDeviceId, ///< PCI Device ID.
- hipDeviceAttributeMaxSharedMemoryPerMultiprocessor, ///< Maximum Shared Memory Per
- ///< Multiprocessor.
- hipDeviceAttributeIsMultiGpuBoard, ///< Multiple GPU devices.
- hipDeviceAttributeIntegrated, ///< iGPU
- hipDeviceAttributeCooperativeLaunch, ///< Support cooperative launch
- hipDeviceAttributeCooperativeMultiDeviceLaunch, ///< Support cooperative launch on multiple devices
- hipDeviceAttributeMaxTexture1DWidth, ///< Maximum number of elements in 1D images
- hipDeviceAttributeMaxTexture2DWidth, ///< Maximum dimension width of 2D images in image elements
- hipDeviceAttributeMaxTexture2DHeight, ///< Maximum dimension height of 2D images in image elements
- hipDeviceAttributeMaxTexture3DWidth, ///< Maximum dimension width of 3D images in image elements
- hipDeviceAttributeMaxTexture3DHeight, ///< Maximum dimensions height of 3D images in image elements
- hipDeviceAttributeMaxTexture3DDepth, ///< Maximum dimensions depth of 3D images in image elements
+ hipDeviceAttributeCudaCompatibleBegin = 0,
- hipDeviceAttributeHdpMemFlushCntl, ///< Address of the HDP_MEM_COHERENCY_FLUSH_CNTL register
- hipDeviceAttributeHdpRegFlushCntl, ///< Address of the HDP_REG_COHERENCY_FLUSH_CNTL register
+ hipDeviceAttributeEccEnabled = hipDeviceAttributeCudaCompatibleBegin, ///< Whether ECC support is enabled.
+ hipDeviceAttributeAccessPolicyMaxWindowSize, ///< Cuda only. The maximum size of the window policy in bytes.
+ hipDeviceAttributeAsyncEngineCount, ///< Cuda only. Asynchronous engines number.
+ hipDeviceAttributeCanMapHostMemory, ///< Whether host memory can be mapped into device address space
+ hipDeviceAttributeCanUseHostPointerForRegisteredMem,///< Cuda only. Device can access host registered memory
+ ///< at the same virtual address as the CPU
+ hipDeviceAttributeClockRate, ///< Peak clock frequency in kilohertz.
+ hipDeviceAttributeComputeMode, ///< Compute mode that device is currently in.
+ hipDeviceAttributeComputePreemptionSupported, ///< Cuda only. Device supports Compute Preemption.
+ hipDeviceAttributeConcurrentKernels, ///< Device can possibly execute multiple kernels concurrently.
+ hipDeviceAttributeConcurrentManagedAccess, ///< Device can coherently access managed memory concurrently with the CPU
+ hipDeviceAttributeCooperativeLaunch, ///< Support cooperative launch
+ hipDeviceAttributeCooperativeMultiDeviceLaunch, ///< Support cooperative launch on multiple devices
+ hipDeviceAttributeDeviceOverlap, ///< Cuda only. Device can concurrently copy memory and execute a kernel.
+ ///< Deprecated. Use instead asyncEngineCount.
+ hipDeviceAttributeDirectManagedMemAccessFromHost, ///< Host can directly access managed memory on
+ ///< the device without migration
+ hipDeviceAttributeGlobalL1CacheSupported, ///< Cuda only. Device supports caching globals in L1
+ hipDeviceAttributeHostNativeAtomicSupported, ///< Cuda only. Link between the device and the host supports native atomic operations
+ hipDeviceAttributeIntegrated, ///< Device is integrated GPU
+ hipDeviceAttributeIsMultiGpuBoard, ///< Multiple GPU devices.
+ hipDeviceAttributeKernelExecTimeout, ///< Run time limit for kernels executed on the device
+ hipDeviceAttributeL2CacheSize, ///< Size of L2 cache in bytes. 0 if the device doesn't have L2 cache.
+ hipDeviceAttributeLocalL1CacheSupported, ///< caching locals in L1 is supported
+ hipDeviceAttributeLuid, ///< Cuda only. 8-byte locally unique identifier in 8 bytes. Undefined on TCC and non-Windows platforms
+ hipDeviceAttributeLuidDeviceNodeMask, ///< Cuda only. Luid device node mask. Undefined on TCC and non-Windows platforms
+ hipDeviceAttributeComputeCapabilityMajor, ///< Major compute capability version number.
+ hipDeviceAttributeManagedMemory, ///< Device supports allocating managed memory on this system
+ hipDeviceAttributeMaxBlocksPerMultiProcessor, ///< Cuda only. Max block size per multiprocessor
+ hipDeviceAttributeMaxBlockDimX, ///< Max block size in width.
+ hipDeviceAttributeMaxBlockDimY, ///< Max block size in height.
+ hipDeviceAttributeMaxBlockDimZ, ///< Max block size in depth.
+ hipDeviceAttributeMaxGridDimX, ///< Max grid size in width.
+ hipDeviceAttributeMaxGridDimY, ///< Max grid size in height.
+ hipDeviceAttributeMaxGridDimZ, ///< Max grid size in depth.
+ hipDeviceAttributeMaxSurface1D, ///< Maximum size of 1D surface.
+ hipDeviceAttributeMaxSurface1DLayered, ///< Cuda only. Maximum dimensions of 1D layered surface.
+ hipDeviceAttributeMaxSurface2D, ///< Maximum dimension (width, height) of 2D surface.
+ hipDeviceAttributeMaxSurface2DLayered, ///< Cuda only. Maximum dimensions of 2D layered surface.
+ hipDeviceAttributeMaxSurface3D, ///< Maximum dimension (width, height, depth) of 3D surface.
+ hipDeviceAttributeMaxSurfaceCubemap, ///< Cuda only. Maximum dimensions of Cubemap surface.
+ hipDeviceAttributeMaxSurfaceCubemapLayered, ///< Cuda only. Maximum dimension of Cubemap layered surface.
+ hipDeviceAttributeMaxTexture1DWidth, ///< Maximum size of 1D texture.
+ hipDeviceAttributeMaxTexture1DLayered, ///< Cuda only. Maximum dimensions of 1D layered texture.
+ hipDeviceAttributeMaxTexture1DLinear, ///< Maximum number of elements allocatable in a 1D linear texture.
+ ///< Use cudaDeviceGetTexture1DLinearMaxWidth() instead on Cuda.
+ hipDeviceAttributeMaxTexture1DMipmap, ///< Cuda only. Maximum size of 1D mipmapped texture.
+ hipDeviceAttributeMaxTexture2DWidth, ///< Maximum dimension width of 2D texture.
+ hipDeviceAttributeMaxTexture2DHeight, ///< Maximum dimension hight of 2D texture.
+ hipDeviceAttributeMaxTexture2DGather, ///< Cuda only. Maximum dimensions of 2D texture if gather operations performed.
+ hipDeviceAttributeMaxTexture2DLayered, ///< Cuda only. Maximum dimensions of 2D layered texture.
+ hipDeviceAttributeMaxTexture2DLinear, ///< Cuda only. Maximum dimensions (width, height, pitch) of 2D textures bound to pitched memory.
+ hipDeviceAttributeMaxTexture2DMipmap, ///< Cuda only. Maximum dimensions of 2D mipmapped texture.
+ hipDeviceAttributeMaxTexture3DWidth, ///< Maximum dimension width of 3D texture.
+ hipDeviceAttributeMaxTexture3DHeight, ///< Maximum dimension height of 3D texture.
+ hipDeviceAttributeMaxTexture3DDepth, ///< Maximum dimension depth of 3D texture.
+ hipDeviceAttributeMaxTexture3DAlt, ///< Cuda only. Maximum dimensions of alternate 3D texture.
+ hipDeviceAttributeMaxTextureCubemap, ///< Cuda only. Maximum dimensions of Cubemap texture
+ hipDeviceAttributeMaxTextureCubemapLayered, ///< Cuda only. Maximum dimensions of Cubemap layered texture.
+ hipDeviceAttributeMaxThreadsDim, ///< Maximum dimension of a block
+ hipDeviceAttributeMaxThreadsPerBlock, ///< Maximum number of threads per block.
+ hipDeviceAttributeMaxThreadsPerMultiProcessor, ///< Maximum resident threads per multiprocessor.
+ hipDeviceAttributeMaxPitch, ///< Maximum pitch in bytes allowed by memory copies
+ hipDeviceAttributeMemoryBusWidth, ///< Global memory bus width in bits.
+ hipDeviceAttributeMemoryClockRate, ///< Peak memory clock frequency in kilohertz.
+ hipDeviceAttributeComputeCapabilityMinor, ///< Minor compute capability version number.
+ hipDeviceAttributeMultiGpuBoardGroupID, ///< Cuda only. Unique ID of device group on the same multi-GPU board
+ hipDeviceAttributeMultiprocessorCount, ///< Number of multiprocessors on the device.
+ hipDeviceAttributeName, ///< Device name.
+ hipDeviceAttributePageableMemoryAccess, ///< Device supports coherently accessing pageable memory
+ ///< without calling hipHostRegister on it
+ hipDeviceAttributePageableMemoryAccessUsesHostPageTables, ///< Device accesses pageable memory via the host's page tables
+ hipDeviceAttributePciBusId, ///< PCI Bus ID.
+ hipDeviceAttributePciDeviceId, ///< PCI Device ID.
+ hipDeviceAttributePciDomainID, ///< PCI Domain ID.
+ hipDeviceAttributePersistingL2CacheMaxSize, ///< Cuda11 only. Maximum l2 persisting lines capacity in bytes
+ hipDeviceAttributeMaxRegistersPerBlock, ///< 32-bit registers available to a thread block. This number is shared
+ ///< by all thread blocks simultaneously resident on a multiprocessor.
+ hipDeviceAttributeMaxRegistersPerMultiprocessor, ///< 32-bit registers available per block.
+ hipDeviceAttributeReservedSharedMemPerBlock, ///< Cuda11 only. Shared memory reserved by CUDA driver per block.
+ hipDeviceAttributeMaxSharedMemoryPerBlock, ///< Maximum shared memory available per block in bytes.
+ hipDeviceAttributeSharedMemPerBlockOptin, ///< Cuda only. Maximum shared memory per block usable by special opt in.
+ hipDeviceAttributeSharedMemPerMultiprocessor, ///< Cuda only. Shared memory available per multiprocessor.
+ hipDeviceAttributeSingleToDoublePrecisionPerfRatio, ///< Cuda only. Performance ratio of single precision to double precision.
+ hipDeviceAttributeStreamPrioritiesSupported, ///< Cuda only. Whether to support stream priorities.
+ hipDeviceAttributeSurfaceAlignment, ///< Cuda only. Alignment requirement for surfaces
+ hipDeviceAttributeTccDriver, ///< Cuda only. Whether device is a Tesla device using TCC driver
+ hipDeviceAttributeTextureAlignment, ///< Alignment requirement for textures
+ hipDeviceAttributeTexturePitchAlignment, ///< Pitch alignment requirement for 2D texture references bound to pitched memory;
+ hipDeviceAttributeTotalConstantMemory, ///< Constant memory size in bytes.
+ hipDeviceAttributeTotalGlobalMem, ///< Global memory available on devicice.
+ hipDeviceAttributeUnifiedAddressing, ///< Cuda only. An unified address space shared with the host.
+ hipDeviceAttributeUuid, ///< Cuda only. Unique ID in 16 byte.
+ hipDeviceAttributeWarpSize, ///< Warp size in threads.
- hipDeviceAttributeMaxPitch, ///< Maximum pitch in bytes allowed by memory copies
- hipDeviceAttributeTextureAlignment, ///<Alignment requirement for textures
- hipDeviceAttributeTexturePitchAlignment, ///<Pitch alignment requirement for 2D texture references bound to pitched memory;
- hipDeviceAttributeKernelExecTimeout, ///<Run time limit for kernels executed on the device
- hipDeviceAttributeCanMapHostMemory, ///<Device can map host memory into device address space
- hipDeviceAttributeEccEnabled, ///<Device has ECC support enabled
+ hipDeviceAttributeCudaCompatibleEnd = 9999,
+ hipDeviceAttributeAmdSpecificBegin = 10000,
- hipDeviceAttributeCooperativeMultiDeviceUnmatchedFunc, ///< Supports cooperative launch on multiple
- ///devices with unmatched functions
- hipDeviceAttributeCooperativeMultiDeviceUnmatchedGridDim, ///< Supports cooperative launch on multiple
- ///devices with unmatched grid dimensions
- hipDeviceAttributeCooperativeMultiDeviceUnmatchedBlockDim, ///< Supports cooperative launch on multiple
- ///devices with unmatched block dimensions
- hipDeviceAttributeCooperativeMultiDeviceUnmatchedSharedMem, ///< Supports cooperative launch on multiple
- ///devices with unmatched shared memories
- hipDeviceAttributeAsicRevision, ///< Revision of the GPU in this device
- hipDeviceAttributeManagedMemory, ///< Device supports allocating managed memory on this system
- hipDeviceAttributeDirectManagedMemAccessFromHost, ///< Host can directly access managed memory on
- /// the device without migration
- hipDeviceAttributeConcurrentManagedAccess, ///< Device can coherently access managed memory
- /// concurrently with the CPU
- hipDeviceAttributePageableMemoryAccess, ///< Device supports coherently accessing pageable memory
- /// without calling hipHostRegister on it
- hipDeviceAttributePageableMemoryAccessUsesHostPageTables, ///< Device accesses pageable memory via
- /// the host's page tables
- hipDeviceAttributeCanUseStreamWaitValue ///< '1' if Device supports hipStreamWaitValue32() and
- ///< hipStreamWaitValue64() , '0' otherwise.
+ hipDeviceAttributeClockInstructionRate = hipDeviceAttributeAmdSpecificBegin, ///< Frequency in khz of the timer used by the device-side "clock*"
+ hipDeviceAttributeArch, ///< Device architecture
+ hipDeviceAttributeMaxSharedMemoryPerMultiprocessor, ///< Maximum Shared Memory PerMultiprocessor.
+ hipDeviceAttributeGcnArch, ///< Device gcn architecture
+ hipDeviceAttributeGcnArchName, ///< Device gcnArch name in 256 bytes
+ hipDeviceAttributeHdpMemFlushCntl, ///< Address of the HDP_MEM_COHERENCY_FLUSH_CNTL register
+ hipDeviceAttributeHdpRegFlushCntl, ///< Address of the HDP_REG_COHERENCY_FLUSH_CNTL register
+ hipDeviceAttributeCooperativeMultiDeviceUnmatchedFunc, ///< Supports cooperative launch on multiple
+ ///< devices with unmatched functions
+ hipDeviceAttributeCooperativeMultiDeviceUnmatchedGridDim, ///< Supports cooperative launch on multiple
+ ///< devices with unmatched grid dimensions
+ hipDeviceAttributeCooperativeMultiDeviceUnmatchedBlockDim, ///< Supports cooperative launch on multiple
+ ///< devices with unmatched block dimensions
+ hipDeviceAttributeCooperativeMultiDeviceUnmatchedSharedMem, ///< Supports cooperative launch on multiple
+ ///< devices with unmatched shared memories
+ hipDeviceAttributeIsLargeBar, ///< Whether it is LargeBar
+ hipDeviceAttributeAsicRevision, ///< Revision of the GPU in this device
+ hipDeviceAttributeCanUseStreamWaitValue, ///< '1' if Device supports hipStreamWaitValue32() and
+ ///< hipStreamWaitValue64() , '0' otherwise.
+ hipDeviceAttributeAmdSpecificEnd = 19999,
+ hipDeviceAttributeVendorSpecificBegin = 20000,
+ // Extended attributes for vendors
} hipDeviceAttribute_t;
enum hipComputeMode {
```
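To illustrate the recompilation point (this snippet is not from the release notes): code that refers to the attributes by their enumerator names only needs to be rebuilt against the ROCm 5.0 headers, whereas anything that hard-coded the old integer values of the enumerators breaks:
```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int smCount = 0;
    // Using the enumerator name: unaffected by the renumbering once the
    // application is recompiled against the ROCm 5.0 headers.
    hipDeviceGetAttribute(&smCount, hipDeviceAttributeMultiprocessorCount, 0);

    // Hard-coding a pre-5.0 integer value of an attribute (illustrative only)
    // would query the wrong attribute after the reordering:
    // hipDeviceGetAttribute(&smCount, (hipDeviceAttribute_t)15, 0);

    printf("multiprocessor count on device 0: %d\n", smCount);
    return 0;
}
```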
### Known Issues
#### Incorrect dGPU Behavior When Using AMDVBFlash Tool
The AMDVBFlash tool, used for flashing the VBIOS image to dGPU, does not communicate with the ROM Controller specifically when the driver is present. This is because the driver, as part of its runtime power management feature, puts the dGPU to a sleep state.
As a workaround, users can set the kernel parameter amdgpu.runpm=0, which temporarily disables the runtime power management feature of the driver and dynamically changes some power control-related sysfs files.
#### Issue with START Timestamp in ROCProfiler
Users may encounter an issue with the enabled timestamp functionality for monitoring one or multiple counters. ROCProfiler outputs the following four timestamps for each kernel:
- Dispatch
- Start
- End
- Complete
##### Issue
This defect is related to the Start timestamp functionality, which incorrectly shows an earlier time than the Dispatch timestamp.
To reproduce the issue,
1. Enable timing using the --timestamp on flag.
2. Use the -i option with the input filename that contains the name of the counter(s) to monitor.
3. Run the program.
4. Check the output result file.
##### Current behavior
BeginNS is lower than DispatchNS, which is incorrect.
##### Expected behavior
The correct order is:
Dispatch < Start < End < Complete
Users cannot use ROCProfiler to measure the time spent on each kernel because of the incorrect timestamp with counter collection enabled.
##### Recommended Workaround
Users are recommended to collect kernel execution timestamps without monitoring counters, as follows:
1. Enable timing using the --timestamp on flag, and run the application.
2. Rerun the application using the -i option with the input filename that contains the name of the counter(s) to monitor, and save this to a different output file using the -o flag.
3. Check the output result file from step 1.
4. The order of timestamps correctly displays as:
DispatchNS < BeginNS < EndNS < CompleteNS
5. Users can find the values of the collected counters in the output file generated in step 2.
#### Radeon Pro V620 and W6800 Workstation GPUs
##### No Support for SMI and ROCDebugger on SRIOV
System Management Interface (SMI) and ROCDebugger are not supported in the SRIOV environment on any GPU. For more information, refer to the Systems Management Interface documentation.
### Deprecations and Warnings
#### ROCm Libraries Changes Deprecations and Deprecation Removal
- The hipFFT.h header is now provided only by the hipFFT package. Up to ROCm 5.0, users would get hipFFT.h in the rocFFT package too.
- The GlobalPairwiseAMG class is now entirely removed, users should use the PairwiseAMG class instead.
- The rocsparse_spmm signature in 5.0 was changed to match that of rocsparse_spmm_ex. In 5.0, rocsparse_spmm_ex is still present, but deprecated. Signature diff for rocsparse_spmm
rocsparse_spmm in 5.0
```h
rocsparse_status rocsparse_spmm(rocsparse_handle handle,
rocsparse_operation trans_A,
rocsparse_operation trans_B,
const void* alpha,
const rocsparse_spmat_descr mat_A,
const rocsparse_dnmat_descr mat_B,
const void* beta,
const rocsparse_dnmat_descr mat_C,
rocsparse_datatype compute_type,
rocsparse_spmm_alg alg,
rocsparse_spmm_stage stage,
size_t* buffer_size,
void* temp_buffer);
```
rocsparse_spmm in 4.0
```h
rocsparse_status rocsparse_spmm(rocsparse_handle handle,
rocsparse_operation trans_A,
rocsparse_operation trans_B,
const void* alpha,
const rocsparse_spmat_descr mat_A,
const rocsparse_dnmat_descr mat_B,
const void* beta,
const rocsparse_dnmat_descr mat_C,
rocsparse_datatype compute_type,
rocsparse_spmm_alg alg,
size_t* buffer_size,
void* temp_buffer);
```
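A hedged sketch of how the 5.0 signature is typically driven in two passes: first query the temporary buffer size, then run the multiplication. The stage and algorithm enumerators used here (`rocsparse_spmm_stage_buffer_size`, `rocsparse_spmm_stage_compute`, `rocsparse_spmm_alg_default`), the `f32` datatype enumerator, and the header path are assumptions based on rocSPARSE naming conventions; the handle, descriptors, and scalars are assumed to be created by the caller.
```cpp
#include <hip/hip_runtime.h>
#include <rocsparse/rocsparse.h>

// Hypothetical helper showing the two-stage pattern of the 5.0 API.
rocsparse_status spmm_staged(rocsparse_handle handle,
                             rocsparse_spmat_descr A,
                             rocsparse_dnmat_descr B,
                             rocsparse_dnmat_descr C,
                             const float* alpha,
                             const float* beta)
{
    // Stage 1: query the required temporary buffer size.
    size_t buffer_size = 0;
    rocsparse_status status = rocsparse_spmm(
        handle, rocsparse_operation_none, rocsparse_operation_none,
        alpha, A, B, beta, C,
        rocsparse_datatype_f32_r, rocsparse_spmm_alg_default,
        rocsparse_spmm_stage_buffer_size, &buffer_size, nullptr);
    if (status != rocsparse_status_success) return status;

    void* temp_buffer = nullptr;
    if (hipMalloc(&temp_buffer, buffer_size) != hipSuccess)
        return rocsparse_status_memory_error;

    // Stage 2: perform C = alpha * A * B + beta * C.
    status = rocsparse_spmm(
        handle, rocsparse_operation_none, rocsparse_operation_none,
        alpha, A, B, beta, C,
        rocsparse_datatype_f32_r, rocsparse_spmm_alg_default,
        rocsparse_spmm_stage_compute, &buffer_size, temp_buffer);

    hipFree(temp_buffer);
    return status;
}
```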
#### HIP API Deprecations and Warnings
##### Warning - Arithmetic Operators of HIP Complex and Vector Types
In this release, arithmetic operators of HIP complex and vector types are deprecated.
- As alternatives to arithmetic operators of HIP complex types, users can use arithmetic operators of `std::complex` types.
- As alternatives to arithmetic operators of HIP vector types, users can use the operators of the native clang vector type associated with the data member of HIP vector types.
During the deprecation, two macros `_HIP_ENABLE_COMPLEX_OPERATORS` and `_HIP_ENABLE_VECTOR_OPERATORS` are provided to allow users to conditionally enable arithmetic operators of HIP complex or vector types.
Note, the two macros are mutually exclusive and, by default, set to Off.
The arithmetic operators of HIP complex and vector types will be removed in a future release.
Refer to the HIP API Guide for more information.
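As a hedged host-side illustration (not from the guide), the `std::complex` alternative next to the deprecated operator path; the opt-in mechanism is sketched here as a simple `#ifdef` of the macro named above, and `make_hipFloatComplex` with the `.x`/`.y` members is used to build the HIP complex values:
```cpp
#include <hip/hip_complex.h>
#include <complex>

int main() {
    hipFloatComplex a = make_hipFloatComplex(1.0f, 2.0f);
    hipFloatComplex b = make_hipFloatComplex(3.0f, -1.0f);

#ifdef _HIP_ENABLE_COMPLEX_OPERATORS
    // Deprecated: arithmetic operators on HIP complex types, only available
    // while the opt-in macro is defined during the deprecation period.
    hipFloatComplex c = a + b;
    (void)c;
#endif

    // Suggested alternative: perform the arithmetic with std::complex,
    // using the real (.x) and imaginary (.y) components of the HIP type.
    std::complex<float> x(a.x, a.y);
    std::complex<float> y(b.x, b.y);
    std::complex<float> z = x + y;
    (void)z;
    return 0;
}
```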
#### Warning - Compiler-Generated Code Object Version 4 Deprecation
Support for loading compiler-generated code object version 4 will be deprecated in a future release with no release announcement and replaced with code object 5 as the default version.
The current default is code object version 4.
#### Warning - MIOpenTensile Deprecation
MIOpenTensile will be deprecated in a future release.
### Library Changes in ROCM 5.0.0
| Library | Version |
|---------|---------|
| hipBLAS | ⇒ [0.49.0](https://github.com/ROCmSoftwarePlatform/hipBLAS/releases/tag/rocm-5.0.0) |
| hipCUB | ⇒ [2.10.13](https://github.com/ROCmSoftwarePlatform/hipCUB/releases/tag/rocm-5.0.0) |
| hipFFT | ⇒ [1.0.4](https://github.com/ROCmSoftwarePlatform/hipFFT/releases/tag/rocm-5.0.0) |
| hipSOLVER | ⇒ [1.2.0](https://github.com/ROCmSoftwarePlatform/hipSOLVER/releases/tag/rocm-5.0.0) |
| hipSPARSE | ⇒ [2.0.0](https://github.com/ROCmSoftwarePlatform/hipSPARSE/releases/tag/rocm-5.0.0) |
| rccl | ⇒ [2.10.3](https://github.com/ROCmSoftwarePlatform/rccl/releases/tag/rocm-5.0.0) |
| rocALUTION | ⇒ [2.0.1](https://github.com/ROCmSoftwarePlatform/rocALUTION/releases/tag/rocm-5.0.0) |
| rocBLAS | ⇒ [2.42.0](https://github.com/ROCmSoftwarePlatform/rocBLAS/releases/tag/rocm-5.0.0) |
| rocFFT | ⇒ [1.0.13](https://github.com/ROCmSoftwarePlatform/rocFFT/releases/tag/rocm-5.0.0) |
| rocPRIM | ⇒ [2.10.12](https://github.com/ROCmSoftwarePlatform/rocPRIM/releases/tag/rocm-5.0.0) |
| rocRAND | ⇒ [2.10.12](https://github.com/ROCmSoftwarePlatform/rocRAND/releases/tag/rocm-5.0.0) |
| rocSOLVER | ⇒ [3.16.0](https://github.com/ROCmSoftwarePlatform/rocSOLVER/releases/tag/rocm-5.0.0) |
| rocSPARSE | ⇒ [2.0.0](https://github.com/ROCmSoftwarePlatform/rocSPARSE/releases/tag/rocm-5.0.0) |
| rocThrust | ⇒ [2.13.0](https://github.com/ROCmSoftwarePlatform/rocThrust/releases/tag/rocm-5.0.0) |
| Tensile | ⇒ [4.31.0](https://github.com/ROCmSoftwarePlatform/Tensile/releases/tag/rocm-5.0.0) |
#### hipBLAS 0.49.0
hipBLAS 0.49.0 for ROCm 5.0.0
##### Added
- Added rocSOLVER functions to hipblas-bench
- Added option ROCM_MATHLIBS_API_USE_HIP_COMPLEX to opt-in to use hipFloatComplex and hipDoubleComplex
- Added compilation warning for future trmm changes
- Added documentation to hipblas.h
- Added option to forgo pivoting for getrf and getri when ipiv is nullptr
- Added code coverage option
##### Fixed
- Fixed use of incorrect 'HIP_PATH' when building from source.
- Fixed windows packaging
- Allowing negative increments in hipblas-bench
- Removed boost dependency
#### hipCUB 2.10.13
hipCUB 2.10.13 for ROCm 5.0.0
##### Fixed
- Added missing includes to hipcub.hpp
##### Added
- Bfloat16 support to test cases (device_reduce & device_radix_sort)
- Device merge sort
- Block merge sort
- API update to CUB 1.14.0
##### Changed
- The SetupNVCC.cmake automatic target selector selects all of the capabilities of all available cards for the NVIDIA backend.
#### hipFFT 1.0.4
hipFFT 1.0.4 for ROCm 5.0.0
##### Fixed
- Add calls to rocFFT setup/cleanup.
- Cmake fixes for clients and backend support.
##### Added
- Added support for Windows 10 as a build target.
#### hipSOLVER 1.2.0
hipSOLVER 1.2.0 for ROCm 5.0.0
##### Added
- Added functions
- sytrf
- hipsolverSsytrf_bufferSize, hipsolverDsytrf_bufferSize, hipsolverCsytrf_bufferSize, hipsolverZsytrf_bufferSize
- hipsolverSsytrf, hipsolverDsytrf, hipsolverCsytrf, hipsolverZsytrf
##### Fixed
- Fixed use of incorrect `HIP_PATH` when building from source (#40).
Thanks [@jakub329homola](https://github.com/jakub329homola)!
#### hipSPARSE 2.0.0
hipSPARSE 2.0.0 for ROCm 5.0.0
##### Added
- Added (conjugate) transpose support for csrmv, hybmv and spmv routines
#### rccl 2.10.3
RCCL 2.10.3 for ROCm 5.0.0
##### Added
- Compatibility with NCCL 2.10.3
##### Known Issues
- Managed memory is not currently supported for clique-based kernels
#### rocALUTION 2.0.1
rocALUTION 2.0.1 for ROCm 5.0.0
##### Changed
- Removed deprecated GlobalPairwiseAMG class, please use PairwiseAMG instead.
- Changed to C++ 14 Standard
##### Improved
- Added sanitizer option
- Improved documentation
#### rocBLAS 2.42.0
rocBLAS 2.42.0 for ROCm 5.0.0
##### Added
- Added rocblas_get_version_string_size convenience function
- Added rocblas_xtrmm_outofplace, an out-of-place version of rocblas_xtrmm
- Added hpl and trig initialization for gemm_ex to rocblas-bench
- Added source code gemm. It can be used as an alternative to Tensile for debugging and development
- Added option ROCM_MATHLIBS_API_USE_HIP_COMPLEX to opt-in to use hipFloatComplex and hipDoubleComplex
##### Optimizations
- Improved performance of non-batched and batched single-precision GER for size m > 1024. Performance enhanced by 5-10% measured on a MI100 (gfx908) GPU.
- Improved performance of non-batched and batched HER for all sizes and data types. Performance enhanced by 2-17% measured on a MI100 (gfx908) GPU.
##### Changed
- Instantiate templated rocBLAS functions to reduce size of librocblas.so
- Removed static library dependency on msgpack
- Removed boost dependencies for clients
##### Fixed
- Option to install script to build only rocBLAS clients with a pre-built rocBLAS library
- Correctly set output of nrm2_batched_ex and nrm2_strided_batched_ex when given bad input
- Fix for dgmm with side == rocblas_side_left and a negative incx
- Fixed out-of-bounds read for small trsm
- Fixed numerical checking for tbmv_strided_batched
#### rocFFT 1.0.13
rocFFT 1.0.13 for ROCm 5.0.0
##### Optimizations
- Improved many plans by removing unnecessary transpose steps.
- Optimized scheme selection for 3D problems.
- Imposed fewer restrictions on 3D_BLOCK_RC selection. More problems can use 3D_BLOCK_RC and
have some performance gain.
- Enabled 3D_RC. Some 3D problems with SBCC-supported z-dim can use fewer kernels and benefit.
- Forced --length 336 336 56 (dp) to use the faster 3D_RC to avoid it being skipped by the conservative
threshold test.
- Optimized some even-length R2C/C2R cases by doing more operations
in-place and combining pre/post processing into Stockham kernels.
- Added radix-17.
##### Added
- Added new kernel generator for select fused-2D transforms.
##### Fixed
- Improved large 1D transform decompositions.
#### rocPRIM 2.10.12
rocPRIM 2.10.12 for ROCm 5.0.0
##### Fixed
- Enable bfloat16 tests and reduce threshold for bfloat16
- Fix device scan limit_size feature
- Non-optimized builds no longer trigger local memory limit errors
##### Added
- Added scan size limit feature
- Added reduce size limit feature
- Added transform size limit feature
- Add block_load_striped and block_store_striped
- Add gather_to_blocked to gather values from other threads into a blocked arrangement
- The block sizes for the device merge sort's initial block sort and its merge steps are now separate in its kernel config
- The block sort step supports multiple items per thread
##### Changed
- size_limit for scan, reduce and transform can now be set in the config struct instead of a parameter
- Device_scan and device_segmented_scan: `inclusive_scan` now uses the input-type as accumulator-type, `exclusive_scan` uses initial-value-type.
- This particularly changes behaviour of small-size input types with large-size output types (e.g. `short` input, `int` output).
- And low-res input with high-res output (e.g. `float` input, `double` output)
- Reverted the old Fiji workaround because the issue was solved on the compiler side
- Update README cmake minimum version number
- Block sort supports multiple items per thread
- Currently only power-of-two block sizes and items per thread are supported, and only for full blocks
- Bumped the minimum required version of CMake to 3.16
##### Known Issues
- Unit tests may soft hang on MI200 when running in hipMallocManaged mode.
- device_segmented_radix_sort, device_scan unit tests failing for HIP on Windows
- ReduceEmptyInput causes random failures with bfloat16
#### rocRAND 2.10.12
rocRAND 2.10.12 for ROCm 5.0.0
##### Changed
- No updates or changes for ROCm 5.0.0.
#### rocSOLVER 3.16.0
rocSOLVER 3.16.0 for ROCm 5.0.0
##### Added
- Symmetric matrix factorizations:
- LASYF
- SYTF2, SYTRF (with batched and strided\_batched versions)
- Added `rocsolver_get_version_string_size` to help with version string queries
- Added `rocblas_layer_mode_ex` and the ability to print kernel calls in the trace and profile logs
- Expanded batched and strided\_batched sample programs.
##### Optimized
- Improved general performance of LU factorization
- Increased parallelism of specialized kernels when compiling from source, reducing build times on multi-core systems.
##### Changed
- The rocsolver-test client now prints the rocSOLVER version used to run the tests,
rather than the version used to build them
- The rocsolver-bench client now prints the rocSOLVER version used in the benchmark
##### Fixed
- Added missing stdint.h include to rocsolver.h
#### rocSPARSE 2.0.0
rocSPARSE 2.0.0 for ROCm 5.0.0
##### Added
- csrmv, coomv, ellmv, hybmv for (conjugate) transposed matrices
- csrmv for symmetric matrices
##### Changed
- spmm\_ex is now deprecated and will be removed in the next major release
##### Improved
- Optimization for gtsv
#### rocThrust 2.13.0
rocThrust 2.13.0 for ROCm 5.0.0
##### Added
- Updated to match upstream Thrust 1.13.0
- Updated to match upstream Thrust 1.14.0
- Added async scan
##### Changed
- Scan algorithms: `inclusive_scan` now uses the input-type as accumulator-type, `exclusive_scan` uses initial-value-type.
- This particularly changes behaviour of small-size input types with large-size output types (e.g. `short` input, `int` output).
- And low-res input with high-res output (e.g. `float` input, `double` output)
#### Tensile 4.31.0
Tensile 4.31.0 for ROCm 5.0.0
##### Added
- DirectToLds support (x2/x4)
- DirectToVgpr support for DGEMM
- Parameter to control number of files kernels are merged into to better parallelize kernel compilation
- FP16 alternate implementation for HPA HGEMM on aldebaran
##### Optimized
- Add DGEMM NN custom kernel for HPL on aldebaran
##### Changed
- Update tensile_client executable to std=c++14
##### Removed
- Remove unused old Tensile client code
##### Fixed
- Fix hipErrorInvalidHandle during benchmarks
- Fix addrVgpr for atomic GSU
- Fix for Python 3.8: add case for Constant nodeType
- Fix architecture mapping for gfx1011 and gfx1012
- Fix PrintSolutionRejectionReason verbiage in KernelWriter.py
- Fix vgpr alignment problem when enabling flat buffer load


@@ -1,246 +0,0 @@
# Contributing to ROCm Docs
AMD values and encourages the ROCm community to contribute to our code and
documentation. This repository is focused on ROCm documentation and this
contribution guide describes the recommended method for creating and modifying our
documentation.
While interacting with ROCm Documentation, we encourage you to be polite and
respectful in your contributions, content or otherwise. Authors and maintainers of
these docs act with good intentions and to the best of their knowledge.
Keep that in mind while you engage. Should you have issues with contributing
itself, refer to
[discussions](https://github.com/RadeonOpenCompute/ROCm/discussions) on the
GitHub repository.
## Supported Formats
Our documentation includes both markdown and rst files. Markdown is encouraged
over rst due to the lower barrier to participation. GitHub flavored markdown is preferred
for all submissions as it will render accurately on our GitHub repositories. For existing documentation,
[MyST](https://myst-parser.readthedocs.io/en/latest/intro.html) markdown
is used to implement certain features unsupported in GitHub markdown. This is
not encouraged for new documentation. AMD will transition
to stricter use of GitHub flavored markdown with a few caveats. ROCm documentation
also uses [sphinx-design](https://sphinx-design.readthedocs.io/en/latest/index.html)
in our markdown and rst files. We also will use breathe syntax for doxygen documentation
in our markdown files. Other design elements for effective HTML rendering of the documents
may be added to our markdown files. Please see
[GitHub](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github)'s
guide on writing and formatting on GitHub as a starting point.
ROCm documentation adds additional requirements to markdown and rst based files
as follows:
- Level one headers are only used for page titles. There must be only one level
1 header per file for both Markdown and Restructured Text.
- Pass [markdownlint](https://github.com/markdownlint/markdownlint) check via
our automated github action on a Pull Request (PR).
## Filenames and folder structure
Please use snake case for file names. Our documentation follows pitchfork for
folder structure. All documentation is in /docs except for special files like
the contributing guide in the / folder. All images used in the documentation are
placed in the /docs/data folder.
## How to provide feedback for ROCm documentation
There are three standard ways to provide feedback for this repository.
### Pull Request
All contributions to ROCm documentation should arrive via the
[GitHub Flow](https://docs.github.com/en/get-started/quickstart/github-flow)
targeting the develop branch of the repository. If you are unable to contribute
via the GitHub Flow, feel free to email us. TODO, confirm email address.
### GitHub Issue
Issues on existing or absent docs can be filed as [GitHub issues
](https://github.com/RadeonOpenCompute/ROCm/issues).
### Email Feedback
## Language and Style
Adopting Microsoft CPP-Docs guidelines for [Voice and Tone
](https://github.com/MicrosoftDocs/cpp-docs/blob/main/styleguide/voice-tone.md).
ROCm documentation templates to be made public shortly. ROCm templates dictate
the recommended structure and flow of the documentation. Guidelines on how to
integrate figures, equations, and tables are all based off
[MyST](https://myst-parser.readthedocs.io/en/latest/intro.html).
Font size and selection, page layout, white space control, and other formatting
details are controlled via rocm-docs-core, a Sphinx extension. Please raise issues
in rocm-docs-core for any formatting concerns and changes requested.
## Building Documentation
While contributing, one may build the documentation locally on the command-line
or rely on Continuous Integration for previewing the resulting HTML pages in a
browser.
### Command line documentation builds
Python versions known to build documentation:
- 3.8
To build the docs locally using Python Virtual Environment (`venv`), execute the
following commands from the project root:
```sh
python3 -mvenv .venv
# Windows
.venv/Scripts/python -m pip install -r docs/sphinx/requirements.txt
.venv/Scripts/python -m sphinx -T -E -b html -d _build/doctrees -D language=en docs _build/html
# Linux
.venv/bin/python -m pip install -r docs/sphinx/requirements.txt
.venv/bin/python -m sphinx -T -E -b html -d _build/doctrees -D language=en docs _build/html
```
Then open up `_build/html/index.html` in your favorite browser.
### Pull Requests documentation builds
When opening a PR to the `develop` branch on GitHub, the page corresponding to
the PR (`https://github.com/RadeonOpenCompute/ROCm/pull/<pr_number>`) will have
a summary at the bottom. This requires the user be logged in to GitHub.
- There, click `Show all checks` and `Details` of the Read the Docs pipeline. It
will take you to `https://readthedocs.com/projects/advanced-micro-devices-rocm/
builds/<some_build_num>/`
- The list of commands shown are the exact ones used by CI to produce a render
of the documentation.
- There, click on the small blue link `View docs` (which is not the same as the
bigger button with the same text). It will take you to the built HTML site with
a URL of the form `https://
advanced-micro-devices-demo--<pr_number>.com.readthedocs.build/projects/alpha/en
/<pr_number>/`.
### Build the docs using VS Code
One can put together a productive environment to author documentation and also
test it locally using VS Code with only a handful of extensions. Even though the
extension landscape of VS Code is ever changing, here is one example setup that
proved useful at the time of writing. In it, one can change/add content, build a
new version of the docs using a single VS Code Task (or hotkey), see all errors/
warnings emitted by Sphinx in the Problems pane and immediately see the
resulting website show up on a locally serving web server.
#### Configuring VS Code
1. Install the following extensions:
- Python (ms-python.python)
- Live Server (ritwickdey.LiveServer)
2. Add the following entries in `.vscode/settings.json`
```json
{
"liveServer.settings.root": "/.vscode/build/html",
"liveServer.settings.wait": 1000,
"python.terminal.activateEnvInCurrentTerminal": true
}
```
The settings in order are set for the following reasons:
- Sets the root of the output website for live previews. Must be changed
alongside the `tasks.json` command.
- Tells live server to wait with the update to give time for Sphinx to
regenerate site contents and not refresh before all is done. (Empirical value)
- Automatic virtual env activation is a nice touch, should you want to build
the site from the integrated terminal.
3. Add the following tasks in `.vscode/tasks.json`
```json
{
"version": "2.0.0",
"tasks": [
{
"label": "Build Docs",
"type": "process",
"windows": {
"command": "${workspaceFolder}/.venv/Scripts/python.exe"
},
"command": "${workspaceFolder}/.venv/bin/python3",
"args": [
"-m",
"sphinx",
"-j",
"auto",
"-T",
"-b",
"html",
"-d",
"${workspaceFolder}/.vscode/build/doctrees",
"-D",
"language=en",
"${workspaceFolder}/docs",
"${workspaceFolder}/.vscode/build/html"
],
"problemMatcher": [
{
"owner": "sphinx",
"fileLocation": "absolute",
"pattern": {
"regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):(\\d+):\\s+(WARNING|ERROR):\\s+(.*)$",
"file": 1,
"line": 2,
"severity": 3,
"message": 4
},
},
{
"owner": "sphinx",
"fileLocation": "absolute",
"pattern": {
"regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):{1,2}\\s+(WARNING|ERROR):\\s+(.*)$",
"file": 1,
"severity": 2,
"message": 3
}
}
],
"group": {
"kind": "build",
"isDefault": true
}
},
],
}
```
> (Implementation detail: two problem matchers were needed to be defined,
> because VS Code doesn't tolerate some problem information being potentially
> absent. While a single regex could match all types of errors, if a capture
> group remains empty (the line number doesn't show up in all warning/error
> messages) but the `pattern` references said empty capture group, VS Code
> discards the message completely.)
4. Configure Python virtual environment (venv)
- From the Command Palette, run `Python: Create Environment`
- Select `venv` environment and the `docs/sphinx/requirements.txt` file.
_(Simply pressing enter while hovering over the file from the dropdown is
insufficient, one has to select the radio button with the 'Space' key if
using the keyboard.)_
5. Build the docs
- Launch the default build Task using either:
- a hotkey _(default is 'Ctrl+Shift+B')_ or
- by issuing the `Tasks: Run Build Task` from the Command Palette.
6. Open the live preview
- Navigate to the output of the site within VS Code, right-click on
`.vscode/build/html/index.html` and select `Open with Live Server`. The
contents should update on every rebuild without having to refresh the
browser.
<!-- markdownlint-restore -->

LICENSE

@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md

@@ -1,54 +1,448 @@
# AMD ROCm™ Platform
ROCm™ is an open-source stack for GPU computation. ROCm is primarily Open-Source
Software (OSS) that allows developers the freedom to customize and tailor their
GPU software for their own needs while collaborating with a community of other
developers, and helping each other find solutions in an agile, flexible, rapid
and secure manner.
# AMD ROCm Release Notes v3.7.0
ROCm is a collection of drivers, development tools and APIs enabling GPU
programming from the low-level kernel to end-user applications. ROCm is powered
by AMD's Heterogeneous-computing Interface for Portability (HIP), an OSS C++ GPU
programming environment and its corresponding runtime. HIP allows ROCm
developers to create portable applications on different platforms by deploying
code on a range of platforms, from dedicated gaming GPUs to exascale HPC
clusters. ROCm supports programming models such as OpenMP and OpenCL, and
includes all the necessary OSS compilers, debuggers and libraries. ROCm is fully
integrated into ML frameworks such as PyTorch and TensorFlow. ROCm can be
deployed in many ways, including through the use of containers such as Docker,
Spack, and your own build from source.
This page describes the features, fixed issues, and information about downloading and installing the ROCm software.
It also covers known issues and deprecated features in this release.
ROCm's goal is to allow our users to maximize their GPU hardware investment.
ROCm is designed to help develop, test and deploy GPU accelerated HPC, AI,
scientific computing, CAD, and other applications in a free, open-source,
integrated and secure software ecosystem.
- [Supported Operating Systems and Documentation Updates](#Supported-Operating-Systems-and-Documentation-Updates)
* [Supported Operating Systems](#Supported-Operating-Systems)
* [AMD ROCm Documentation Updates](#AMD-ROCm-Documentation-Updates)
- [What's New in This Release](#Whats-New-in-This-Release)
* [AOMP Enhancements](#AOMP-Enhancements)
* [Compatibility with NVIDIA Communications Collective Library v2\.7 API](#Compatibility-with-NVIDIA-Communications-Collective-Library-v27-API)
* [Singular Value Decomposition of Bi\-diagonal Matrices](#Singular-Value-Decomposition-of-Bi-diagonal-Matrices)
* [rocSPARSE_gemmi\() Operations for Sparse Matrices](#rocSPARSE_gemmi-Operations-for-Sparse-Matrices)
- [Known Issues](#Known-Issues)
This repository contains the manifest file for ROCm™ releases, changelogs, and
release information. The file default.xml contains information for all
repositories and the associated commit used to build the current ROCm release.
- [Deploying ROCm](#Deploying-ROCm)
- [Hardware and Software Support](#Hardware-and-Software-Support)
The default.xml file uses the repo Manifest format.
- [Machine Learning and High Performance Computing Software Stack for AMD GPU](#Machine-Learning-and-High-Performance-Computing-Software-Stack-for-AMD-GPU)
* [ROCm Binary Package Structure](#ROCm-Binary-Package-Structure)
* [ROCm Platform Packages](#ROCm-Platform-Packages)
The develop branch of this repository contains content for the next
ROCm release.
## ROCm Documentation
# Supported Operating Systems
ROCm Documentation is available online at
[rocm.docs.amd.com](https://rocm.docs.amd.com). Source code for the documentation
is located in the docs folder of most repositories that are part of ROCm.
## Support for Ubuntu 20.04
### How to build documentation via Sphinx
```bash
cd docs
In this release, AMD ROCm extends support to Ubuntu 20.04, including dual-kernel.
pip3 install -r sphinx/requirements.txt
## List of Supported Operating Systems
python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html
The AMD ROCm v3.7.x platform is designed to support the following operating systems:
* Ubuntu 20.04 and 18.04.4 (Kernel 5.3)
* CentOS 7.8 & RHEL 7.8 (Kernel 3.10.0-1127) (Using devtoolset-7 runtime support)
* CentOS 8.2 & RHEL 8.2 (Kernel 4.18.0 ) (devtoolset is not required)
* SLES 15 SP1
## Fresh Installation of AMD ROCm v3.7 Recommended
A fresh and clean installation of AMD ROCm v3.7 is recommended. An upgrade from previous releases to AMD ROCm v3.7 is not supported.
For more information, refer to the AMD ROCm Installation Guide at:
https://github.com/RadeonOpenCompute/ROCm/blob/roc-3.7.x/ROCm_Installation_Guide_v3.7.md
**Note**: AMD ROCm release v3.3 or prior releases are not fully compatible with AMD ROCm v3.5 and higher versions. You must perform a fresh ROCm installation if you want to upgrade from AMD ROCm v3.3 or older to 3.5 or higher versions and vice-versa.
# AMD ROCm Documentation Updates
## AMD ROCm Installation Guide
The AMD ROCm Installation Guide in this release includes:
* Updated Supported Environments
https://github.com/RadeonOpenCompute/ROCm/blob/roc-3.7.x/ROCm_Installation_Guide_v3.7.md
## AMD ROCm - HIP Documentation Updates
### Texture and Surface Functions
The documentation for Texture and Surface functions is updated and available at:
https://rocmdocs.amd.com/en/latest/Programming_Guides/Kernel_language.html
### Warp Shuffle Functions
The documentation for Warp Shuffle functions is updated and available at:
https://rocmdocs.amd.com/en/latest/Programming_Guides/Kernel_language.html
### Compiler Defines and Environment Variables
The documentation for the updated HIP Porting Guide is available at:
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-porting-guide.html#hip-porting-guide
## AMD ROCm Debug Agent
ROCm Debug Agent Library
https://rocmdocs.amd.com/en/latest/ROCm_Tools/rocm-debug-agent.html
## General AMD ROCm Documentation Links
Access the following links for more information:
* For AMD ROCm documentation, see
https://rocmdocs.amd.com/en/latest/
* For installation instructions on supported platforms, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
* For AMD ROCm binary structure, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#build-amd-rocm
* For AMD ROCm Release History, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#amd-rocm-version-history
# What's New in This Release
## AOMP ENHANCEMENTS
AOMP is a scripted build of LLVM. It supports OpenMP target offload on AMD GPUs. Since AOMP is a Clang/LLVM compiler, it also supports GPU offloading with HIP, CUDA, and OpenCL.
The following enhancements are made for AOMP in this release:
* OpenMP 5.0 is enabled by default. You can use -fopenmp-version=45 for OpenMP 4.5 compliance
* Restructured to include the ROCm compiler
* Bitcode search path using the hip policy HIP_DEVICE_LIB_PATH and the hip-device-lib command-line option to enable global_free for kmpc_impl_free
Restructured hostrpc, including:
* Replaced hostcall register functions with handlePayload(service, payload). Note, handlePayload has a simple switch to call the correct service handler function.
* Removed the WITH_HSA macro
* Moved the hostrpc stubs and host fallback functions into a single library and include file. This enables the stubs to be built as OpenMP C++ source instead of HIP and reorganizes the directory openmp/libomptarget/hostrpc.
* Moved hostrpc_invoke.cl to DeviceRTLs/amdgcn.
* Generalized the vargs processing in printf to work for any vargs function to execute on the host, including a vargs function that uses a function pointer.
* Reorganized files, added global_allocate and global_free.
* Fixed llvm TypeID enum to match the current upstream llvm TypeID.
* Moved strlen_max function inside the declare target #ifdef _DEVICE_GPU in hostrpc.cpp to resolve linker failure seen in pfspecifier_str smoke test.
* Fixed AOMP_GIT_CHECK_BRANCH in aomp_common_vars to not block builds in Red Hat if the repository is on a specific commit hash.
* Simplified and reduced the size of openmp host runtime
* Switched to default OpenMP 5.0
For more information, see https://github.com/ROCm-Developer-Tools/aomp
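To make the OpenMP target offload described above concrete, here is a minimal sketch of a C++ program that offloads a reduction loop to the GPU. The program itself is standard OpenMP; the exact AOMP compile flags (offload target and GPU architecture) depend on your release, so consult the AOMP repository linked above for the correct invocation.

```cpp
// Minimal OpenMP target-offload sketch (illustrative only).
// Build flags differ by AOMP/ROCm version; the AOMP README for your
// release is authoritative.
#include <cstdio>

int main() {
    const int n = 1 << 20;
    double *a = new double[n];
    double *b = new double[n];
    for (int i = 0; i < n; ++i) { a[i] = 1.0; b[i] = 2.0; }
    double sum = 0.0;

    // Offload the loop to the GPU; the reduction result is copied back.
    #pragma omp target teams distribute parallel for \
        map(to: a[0:n], b[0:n]) reduction(+: sum)
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];

    std::printf("dot product = %.1f\n", sum);  // expected: 2097152.0
    delete[] a;
    delete[] b;
    return 0;
}
```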
## ROCm COMMUNICATIONS COLLECTIVE LIBRARY
### Compatibility with NVIDIA Communications Collective Library v2\.7 API
ROCm Communications Collective Library (RCCL) is now compatible with the NVIDIA Communications Collective Library (NCCL) v2.7 API.
RCCL (pronounced "Rickle") is a stand-alone library of standard collective communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, gather, scatter, and all-to-all. There is also initial support for direct GPU-to-GPU send and receive operations. It has been optimized to achieve high bandwidth on platforms using PCIe, xGMI as well as networking using InfiniBand Verbs or TCP/IP sockets. RCCL supports an arbitrary number of GPUs installed in a single node or multiple nodes, and can be used in either single- or multi-process (e.g., MPI) applications.
The collective operations are implemented using ring and tree algorithms and have been optimized for throughput and latency. For best performance, small operations can be either batched into larger operations or aggregated through the API.
For more information about RCCL APIs and compatibility with NCCL v2.7, see
https://rccl.readthedocs.io/en/develop/index.html
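Because RCCL implements the NCCL-compatible API, collective calls use the familiar ncclCommInitAll/ncclAllReduce names. The following is a minimal single-process sketch, assuming the header is installed as `<rccl.h>`; error checking, buffer initialization, and cleanup of device memory are omitted for brevity.

```cpp
// Single-process, multi-GPU all-reduce sketch using RCCL's NCCL-compatible API.
#include <hip/hip_runtime.h>
#include <rccl.h>   // assumption: RCCL installs its public header as <rccl.h>
#include <vector>

int main() {
    int ndev = 0;
    hipGetDeviceCount(&ndev);

    std::vector<ncclComm_t> comms(ndev);
    std::vector<int> devs(ndev);
    for (int i = 0; i < ndev; ++i) devs[i] = i;
    ncclCommInitAll(comms.data(), ndev, devs.data());   // one communicator per GPU

    const size_t count = 1 << 20;
    std::vector<float*> send(ndev), recv(ndev);
    std::vector<hipStream_t> streams(ndev);
    for (int i = 0; i < ndev; ++i) {
        hipSetDevice(i);
        hipMalloc(&send[i], count * sizeof(float));
        hipMalloc(&recv[i], count * sizeof(float));
        hipStreamCreate(&streams[i]);
    }

    // Group the per-GPU calls so RCCL can aggregate them into one collective.
    ncclGroupStart();
    for (int i = 0; i < ndev; ++i)
        ncclAllReduce(send[i], recv[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < ndev; ++i) {
        hipSetDevice(i);
        hipStreamSynchronize(streams[i]);
    }
    for (int i = 0; i < ndev; ++i) ncclCommDestroy(comms[i]);
    return 0;
}
```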
## Singular Value Decomposition of Bi\-diagonal Matrices
Rocsolver_bdsqr now computes the Singular Value Decomposition (SVD) of bi-diagonal matrices. It is an auxiliary function for the SVD of general matrices (function rocsolver_gesvd).
BDSQR computes the singular value decomposition (SVD) of an n-by-n bidiagonal matrix B.
The SVD of B has the following form:
B = Ub * S * Vb'
where
* S is the n-by-n diagonal matrix of singular values of B
* the columns of Ub are the left singular vectors of B
* the columns of Vb are its right singular vectors
The computation of the singular vectors is optional; this function accepts input matrices U (of size nu-by-n) and V (of size n-by-nv) that are overwritten with U*Ub and Vb*V. If nu = 0 no left vectors are computed; if nv = 0 no right vectors are computed.
Optionally, this function can also compute Ub*C for a given n-by-nc input matrix C.
PARAMETERS
* [in] handle: rocblas_handle.
* [in] uplo: rocblas_fill.
  Specifies whether B is upper or lower bidiagonal.
* [in] n: rocblas_int. n >= 0.
  The number of rows and columns of matrix B.
* [in] nv: rocblas_int. nv >= 0.
  The number of columns of matrix V.
* [in] nu: rocblas_int. nu >= 0.
  The number of rows of matrix U.
* [in] nc: rocblas_int. nc >= 0.
  The number of columns of matrix C.
* [inout] D: pointer to real type. Array on the GPU of dimension n.
  On entry, the diagonal elements of B. On exit, if info = 0, the singular values of B in decreasing order; if info > 0, the diagonal elements of a bidiagonal matrix orthogonally equivalent to B.
* [inout] E: pointer to real type. Array on the GPU of dimension n-1.
  On entry, the off-diagonal elements of B. On exit, if info > 0, the off-diagonal elements of a bidiagonal matrix orthogonally equivalent to B (if info = 0 this matrix converges to zero).
* [inout] V: pointer to type. Array on the GPU of dimension ldv*nv.
  On entry, the matrix V. On exit, it is overwritten with Vb*V. (Not referenced if nv = 0).
* [in] ldv: rocblas_int. ldv >= n if nv > 0, or ldv >= 1 if nv = 0.
  Specifies the leading dimension of V.
* [inout] U: pointer to type. Array on the GPU of dimension ldu*n.
  On entry, the matrix U. On exit, it is overwritten with U*Ub. (Not referenced if nu = 0).
* [in] ldu: rocblas_int. ldu >= nu.
  Specifies the leading dimension of U.
* [inout] C: pointer to type. Array on the GPU of dimension ldc*nc.
  On entry, the matrix C. On exit, it is overwritten with Ub*C. (Not referenced if nc = 0).
* [in] ldc: rocblas_int. ldc >= n if nc > 0, or ldc >= 1 if nc = 0.
  Specifies the leading dimension of C.
* [out] info: pointer to a rocblas_int on the GPU.
  If info = 0, successful exit. If info = i > 0, i elements of E have not converged to zero.
For more information, see
https://rocsolver.readthedocs.io/en/latest/userguide_api.html#rocsolver-type-bdsqr
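A minimal sketch of computing only the singular values of a small upper-bidiagonal matrix follows. The call is assembled from the parameter list above; the double-precision name rocsolver_dbdsqr, the header name, and the exact argument order are assumptions to verify against the rocSOLVER API reference linked above.

```cpp
// Compute the singular values of an n-by-n upper-bidiagonal matrix B.
// Singular vectors are skipped by passing nv = nu = nc = 0.
#include <hip/hip_runtime.h>
#include <rocsolver.h>   // assumption: rocSOLVER's public header name
#include <cstdio>
#include <vector>

int main() {
    const rocblas_int n = 4;
    // Diagonal and off-diagonal of B, prepared on the host.
    std::vector<double> hD = {4.0, 3.0, 2.0, 1.0};
    std::vector<double> hE = {0.5, 0.5, 0.5};

    rocblas_handle handle;
    rocblas_create_handle(&handle);

    double *dD, *dE, *dV = nullptr, *dU = nullptr, *dC = nullptr;
    rocblas_int *dinfo;
    hipMalloc(&dD, n * sizeof(double));
    hipMalloc(&dE, (n - 1) * sizeof(double));
    hipMalloc(&dinfo, sizeof(rocblas_int));
    hipMemcpy(dD, hD.data(), n * sizeof(double), hipMemcpyHostToDevice);
    hipMemcpy(dE, hE.data(), (n - 1) * sizeof(double), hipMemcpyHostToDevice);

    // Argument order taken from the parameter list above:
    // handle, uplo, n, nv, nu, nc, D, E, V, ldv, U, ldu, C, ldc, info
    rocsolver_dbdsqr(handle, rocblas_fill_upper, n, 0, 0, 0,
                     dD, dE, dV, 1, dU, 1, dC, 1, dinfo);

    hipMemcpy(hD.data(), dD, n * sizeof(double), hipMemcpyDeviceToHost);
    rocblas_int info = 0;
    hipMemcpy(&info, dinfo, sizeof(rocblas_int), hipMemcpyDeviceToHost);
    if (info == 0)
        for (double s : hD) std::printf("singular value: %f\n", s);

    hipFree(dD); hipFree(dE); hipFree(dinfo);
    rocblas_destroy_handle(handle);
    return 0;
}
```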
### rocSPARSE_gemmi\() Operations for Sparse Matrices
This enhancement provides a dense matrix sparse matrix multiplication using the CSR storage format.
rocsparse_gemmi multiplies the scalar α with a dense m×k matrix A and the sparse k×n matrix B defined in the CSR storage format, and adds the result to the dense m×n matrix C that is multiplied by the scalar β, such that
C := α ⋅ op(A) ⋅ op(B) + β ⋅ C
with
op(A) = A if trans_A == rocsparse_operation_none, A^T if trans_A == rocsparse_operation_transpose, or A^H if trans_A == rocsparse_operation_conjugate_transpose
and
op(B) = B if trans_B == rocsparse_operation_none, B^T if trans_B == rocsparse_operation_transpose, or B^H if trans_B == rocsparse_operation_conjugate_transpose
Note: This function is non-blocking and executed asynchronously with the host. It may return before the actual computation has finished.
For more information and examples, see
https://rocsparse.readthedocs.io/en/master/usermanual.html#rocsparse-gemmi
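To make the CSR storage layout and the formula concrete, the following is a plain host-side reference computation of C := α⋅A⋅B + β⋅C (no transpose). This is not the rocSPARSE API itself; the function name, array layouts, and sizes are illustrative only.

```cpp
// Host-side reference for C := alpha * A * op(B) + beta * C with op(B) = B,
// where A is dense (m x k, row-major here for clarity) and B is k x n in CSR.
// This only illustrates the math and the CSR layout; it is not the rocSPARSE call.
#include <vector>

void gemmi_reference(int m, int n, int k, double alpha,
                     const std::vector<double>& A,          // m*k dense, row-major
                     const std::vector<int>& csr_row_ptr,   // size k+1
                     const std::vector<int>& csr_col_ind,   // size nnz
                     const std::vector<double>& csr_val,    // size nnz
                     double beta, std::vector<double>& C) { // m*n dense, row-major
    // Scale C by beta first.
    for (double& c : C) c *= beta;

    // For every nonzero B(row, col), add alpha * A(:, row) * B(row, col) to C(:, col).
    for (int row = 0; row < k; ++row) {
        for (int idx = csr_row_ptr[row]; idx < csr_row_ptr[row + 1]; ++idx) {
            const int col = csr_col_ind[idx];
            const double b = csr_val[idx];
            for (int i = 0; i < m; ++i)
                C[i * n + col] += alpha * A[i * k + row] * b;
        }
    }
}
```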
# Known Issues
The following are the known issues in this release.
## (AOMP) Undefined Hidden Symbol Linker Error Causes Compilation Failure in HIP
The HIP example device_lib fails to compile due to unreferenced symbols when Link Time Optimization is enabled, resulting in undefined hidden symbol errors.
This issue is under investigation and there is no known workaround at this time.
## MIGraphX Fails for fp16 Datatype
The MIGraphX functionality does not work for the fp16 datatype.
The following workaround is recommended:
* Use the AMD ROCm v3.3 version of MIGraphX, or
* Build MIGraphX v3.7 from source using AMD ROCm v3.3
## Missing Google Test Installation May Cause RCCL Unit Test Compilation Failure
Users of the RCCL install.sh script may encounter an RCCL unit test compilation error. It is recommended to use CMake directly instead of install.sh to compile RCCL. Ensure Google Test 1.10+ is available in the CMake search path.
As a workaround, use the latest RCCL from the GitHub development branch at:
https://github.com/ROCmSoftwarePlatform/rccl/pull/237
## Issue with Peer-to-Peer Transfers
Using peer-to-peer (P2P) transfers on systems without the hardware P2P assistance may produce incorrect results.
Ensure the hardware supports peer-to-peer transfers and enable the peer-to-peer setting in the hardware to resolve this issue.
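As a diagnostic (not an official workaround), the HIP runtime can report whether a direct peer-to-peer path exists between two devices before you rely on it. The sketch below uses the standard hipDeviceCanAccessPeer and hipDeviceEnablePeerAccess calls; device indices are illustrative.

```cpp
// Query whether device 0 can directly access device 1's memory before
// issuing peer-to-peer copies. Error handling is minimal in this sketch.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int canAccess = 0;
    hipDeviceCanAccessPeer(&canAccess, /*device=*/0, /*peerDevice=*/1);
    if (canAccess) {
        hipSetDevice(0);
        hipDeviceEnablePeerAccess(/*peerDevice=*/1, /*flags=*/0);
        std::printf("P2P from GPU 0 to GPU 1 is available and enabled.\n");
    } else {
        std::printf("No direct P2P path; transfers will be staged through host memory.\n");
    }
    return 0;
}
```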
## Partial Loss of Tracing Events for Large Applications
An internal tracing buffer allocation issue can cause a partial loss of some tracing events for large applications.
As a workaround, rebuild the roctracer/rocprofiler libraries from the GitHub roc-3.7 branch at:
* https://github.com/ROCm-Developer-Tools/rocprofiler
* https://github.com/ROCm-Developer-Tools/roctracer
## GPU Kernel C++ Names Not Demangled
GPU kernel C++ names in the profiling traces and stats produced by the --hsa-trace option are not demangled.
As a workaround, users may choose to demangle the GPU kernel C++ names as required.
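Demangling can be done with the binutils c++filt tool, or programmatically with the standard Itanium ABI helper from `<cxxabi.h>`, as in this small sketch; the sample symbol is a made-up placeholder.

```cpp
// Demangle a C++ kernel symbol name taken from a profiling trace.
#include <cxxabi.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    // Placeholder mangled name; pass a real symbol from the trace as argv[1].
    const char* mangled = (argc > 1) ? argv[1] : "_Z9my_kernelPfS_i";
    int status = 0;
    char* demangled = abi::__cxa_demangle(mangled, nullptr, nullptr, &status);
    std::printf("%s\n", status == 0 ? demangled : mangled);  // fall back if demangling fails
    std::free(demangled);
    return 0;
}
```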
## rocprof option --parallel-kernels Not Supported in This Release
rocprof option --parallel-kernels is available in the options list, however, it is not fully validated and supported in this release.
## Random Soft Hang Observed When Running ResNet-Based Models
A random soft hang is observed when running ResNet-based models for a loop run of more than 25 to 30 hours. The issue is observed on both PyTorch and TensorFlow frameworks.
You can terminate the unresponsive process to temporarily resolve the issue.
There is no known workaround at this time.
# Deploying ROCm
AMD hosts both Debian and RPM repositories for the ROCm v3.7.x packages.
For more information on ROCM installation on all platforms, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
# Hardware and Software Support
ROCm is focused on using AMD GPUs to accelerate computational tasks such as machine learning, engineering workloads, and scientific computing.
In order to focus our development efforts on these domains of interest, ROCm supports a targeted set of hardware configurations which are detailed further in this section.
#### Supported GPUs
Because the ROCm Platform has a focus on particular computational domains, we offer official support for a selection of AMD GPUs that are designed to offer good performance and price in these domains.
ROCm officially supports AMD GPUs that use following chips:
* GFX8 GPUs
* "Fiji" chips, such as on the AMD Radeon R9 Fury X and Radeon Instinct MI8
* "Polaris 10" chips, such as on the AMD Radeon RX 580 and Radeon Instinct MI6
* GFX9 GPUs
* "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
* "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII
ROCm is a collection of software ranging from drivers and runtimes to libraries and developer tools.
Some of this software may work with more GPUs than the "officially supported" list above, though AMD does not make any official claims of support for these devices on the ROCm software platform.
The following list of GPUs are enabled in the ROCm software, though full support is not guaranteed:
* GFX8 GPUs
* "Polaris 11" chips, such as on the AMD Radeon RX 570 and Radeon Pro WX 4100
* "Polaris 12" chips, such as on the AMD Radeon RX 550 and Radeon RX 540
* GFX7 GPUs
* "Hawaii" chips, such as the AMD Radeon R9 390X and FirePro W9100
As described in the next section, GFX8 GPUs require PCI Express 3.0 (PCIe 3.0) with support for PCIe atomics. This requires both CPU and motherboard support. GFX9 GPUs require PCIe 3.0 with support for PCIe atomics by default, but they can operate in most cases without this capability.
The integrated GPUs in AMD APUs are not officially supported targets for ROCm.
As described [below](#limited-support), "Carrizo", "Bristol Ridge", and "Raven Ridge" APUs are enabled in our upstream drivers and the ROCm OpenCL runtime.
However, they are not enabled in the HIP runtime, and may not work due to motherboard or OEM hardware limitations.
As such, they are not yet officially supported targets for ROCm.
For a more detailed list of hardware support, please see [the following documentation](https://rocm.github.io/hardware.html).
#### Supported CPUs
As described above, GFX8 GPUs require PCIe 3.0 with PCIe atomics in order to run ROCm.
In particular, the CPU and every active PCIe point between the CPU and GPU require support for PCIe 3.0 and PCIe atomics.
The CPU root must indicate PCIe AtomicOp Completion capabilities and any intermediate switch must indicate PCIe AtomicOp Routing capabilities.
Current CPUs which support PCIe Gen3 + PCIe Atomics are:
* AMD Ryzen CPUs
* The CPUs in AMD Ryzen APUs
* AMD Ryzen Threadripper CPUs
* AMD EPYC CPUs
* Intel Xeon E7 v3 or newer CPUs
* Intel Xeon E5 v3 or newer CPUs
* Intel Xeon E3 v3 or newer CPUs
* Intel Core i7 v4, Core i5 v4, Core i3 v4 or newer CPUs (i.e. Haswell family or newer)
* Some Ivy Bridge-E systems
Beginning with ROCm 1.8, GFX9 GPUs (such as Vega 10) no longer require PCIe atomics.
We have similarly opened up more options for number of PCIe lanes.
GFX9 GPUs can now be run on CPUs without PCIe atomics and on older PCIe generations, such as PCIe 2.0.
This is not supported on GPUs below GFX9, e.g. GFX8 cards in the Fiji and Polaris families.
If you are using any PCIe switches in your system, please note that PCIe Atomics are only supported on some switches, such as Broadcom PLX.
When you install your GPUs, make sure you install them in a PCIe 3.1.0 x16, x8, x4, or x1 slot attached either directly to the CPU's Root I/O controller or via a PCIe switch directly attached to the CPU's Root I/O controller.
In our experience, many issues stem from trying to use consumer motherboards which provide physical x16 connectors that are electrically connected as e.g. PCIe 2.0 x4, PCIe slots connected via the Southbridge PCIe I/O controller, or PCIe slots connected through a PCIe switch that does
not support PCIe atomics.
If you attempt to run ROCm on a system without proper PCIe atomic support, you may see an error in the kernel log (`dmesg`):
```
kfd: skipped device 1002:7300, PCI rejects atomics
```
Experimental support for our Hawaii (GFX7) GPUs (Radeon R9 290, R9 390, FirePro W9100, S9150, S9170)
does not require or take advantage of PCIe Atomics. However, we still recommend that you use a CPU
from the list provided above for compatibility purposes.
## Older ROCm™ Releases
For release information for older ROCm™ releases, refer to
[CHANGELOG](./CHANGELOG.md).
#### Not supported or limited support under ROCm
##### Limited support
* ROCm 2.9.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
* Thunderbolt 1, 2, and 3 enabled breakout boxes should now be able to work with ROCm. Thunderbolt 1 and 2 are PCIe 2.0 based, and thus are only supported with GPUs that do not require PCIe 3.1.0 atomics (e.g. Vega 10). However, we have done no testing on this configuration and would need community support due to limited access to this type of equipment.
* AMD "Carrizo" and "Bristol Ridge" APUs are enabled to run OpenCL, but do not yet support HIP or our libraries built on top of these compilers and runtimes.
* As of ROCm 2.1, "Carrizo" and "Bristol Ridge" require the use of upstream kernel drivers.
* In addition, various "Carrizo" and "Bristol Ridge" platforms may not work due to OEM and ODM choices when it comes to key configuration parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
* Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.
* AMD "Raven Ridge" APUs are enabled to run OpenCL, but do not yet support HIP or our libraries built on top of these compilers and runtimes.
* As of ROCm 2.1, "Raven Ridge" requires the use of upstream kernel drivers.
* In addition, various "Raven Ridge" platforms may not work due to OEM and ODM choices when it comes to key configuration parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
* Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.
##### Not supported
* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported in ROCm 2.9.x
* We do not support GFX8-class GPUs (Fiji, Polaris, etc.) on CPUs that do not have PCIe 3.0 with PCIe atomics.
* As such, we do not support AMD Carrizo and Kaveri APUs as hosts for such GPUs.
* GFX8 GPUs attached via Thunderbolt 1 and 2 are not supported on ROCm, since Thunderbolt 1 and 2 are based on PCIe 2.0.
#### ROCm support in upstream Linux kernels
As of ROCm 1.9.0, the ROCm user-level software is compatible with the AMD drivers in certain upstream Linux kernels.
As such, users have the option of either using the ROCK kernel driver that is part of AMD's ROCm repositories or using the upstream driver and only installing ROCm user-level utilities from AMD's ROCm repositories.
These releases of the upstream Linux kernel support the following GPUs in ROCm:
* 4.17: Fiji, Polaris 10, Polaris 11
* 4.18: Fiji, Polaris 10, Polaris 11, Vega10
* 4.20: Fiji, Polaris 10, Polaris 11, Vega10, Vega 7nm
The upstream driver may be useful for running ROCm software on systems that are not compatible with the kernel driver available in AMD's repositories.
For users that have the option of using either AMD's or the upstreamed driver, there are various tradeoffs to take into consideration:
| | Using AMD's `rock-dkms` package | Using the upstream kernel driver |
| ---- | ------------------------------------------------------------| ----- |
| Pros | More GPU features, and they are enabled earlier | Includes the latest Linux kernel features |
| | Tested by AMD on supported distributions | May work on other distributions and with custom kernels |
| | Supported GPUs enabled regardless of kernel version | |
| | Includes the latest GPU firmware | |
| Cons | May not work on all Linux distributions or versions | Features and hardware support varies depending on kernel version |
| | Not currently supported on kernels newer than 5.4 | Limits GPU's usage of system memory to 3/8 of system memory (before 5.6). For 5.6 and beyond, both DKMS and upstream kernels allow use of 15/16 of system memory. |
| | | IPC and RDMA capabilities are not yet enabled |
| | | Not tested by AMD to the same level as `rock-dkms` package |
| | | Does not include most up-to-date firmware |
## Machine Learning and High Performance Computing Software Stack for AMD GPU
For an updated version of the software stack for AMD GPU, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#machine-learning-and-high-performance-computing-software-stack-for-amd-gpu-v3-5-0

View File

@@ -1,37 +0,0 @@
# Release Notes
<!-- Do not edit this file! This file is autogenerated with -->
<!-- tools/autotag/tag_script.py -->
<!-- Disable lints since this is an auto-generated file. -->
<!-- markdownlint-disable blanks-around-headers -->
<!-- markdownlint-disable no-duplicate-header -->
<!-- markdownlint-disable no-blanks-blockquote -->
<!-- markdownlint-disable ul-indent -->
<!-- markdownlint-disable no-trailing-spaces -->
<!-- spellcheck-disable -->
The release notes for the ROCm platform.
-------------------
## ROCm 5.0.2
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable no-duplicate-header -->
### Fixed Defects
The following defects are fixed in the ROCm v5.0.2 release.
#### Issue with hostcall Facility in HIP Runtime
In ROCm v5.0, when using the “assert()” call in a HIP kernel, the compiler may sometimes fail to emit kernel metadata related to the hostcall facility, which results in incomplete initialization of the hostcall facility in the HIP runtime. This can cause the HIP kernel to crash when it attempts to execute the “assert()” call.
The root cause was an incorrect check in the compiler to determine whether the hostcall facility is required by the kernel. This is fixed in the ROCm v5.0.2 release.
The resolution includes a compiler change, which emits the required metadata by default, unless the compiler can prove that the hostcall facility is not required by the kernel. This ensures that the “assert()” call never fails.
Note:
This fix may lead to breakage in some OpenMP offload use cases, which use print inside a target region and result in an abort in device code. The issue will be fixed in a future release.
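For context, the scenario the fix addresses is an ordinary device-side assert. A minimal sketch of a HIP kernel that exercises the hostcall facility through assert() might look like the following; names and sizes are illustrative.

```cpp
// A device-side assert() relies on the hostcall facility described above.
// With the ROCm v5.0.2 fix, the required kernel metadata is emitted by default.
#include <hip/hip_runtime.h>
#include <cassert>

__global__ void checked_scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        assert(factor != 0.0f);   // device-side assert reports failures via hostcall
        data[i] *= factor;
    }
}

int main() {
    const int n = 256;
    float* d;
    hipMalloc(&d, n * sizeof(float));
    hipMemset(d, 0, n * sizeof(float));
    hipLaunchKernelGGL(checked_scale, dim3(1), dim3(n), 0, 0, d, n, 2.0f);
    hipDeviceSynchronize();
    hipFree(d);
    return 0;
}
```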
#### Compatibility Matrix Updates to ROCm Deep Learning Guide
The compatibility matrix in the AMD Deep Learning Guide is updated for ROCm v5.0.2.

View File

@@ -0,0 +1,444 @@
![AMD Logo](amdblack.jpg)
# AMD ROCm Installation Guide v3.7
## Install AMD ROCm
- [Deploying ROCm](#deploying-rocm)
- [Prerequisites](#prerequisites-1)
- [Install ROCm on Supported Operating Systems](#supported-operating-systems)
- [Ubuntu](#ubuntu)
- [CentOS RHEL](#centos-rhel)
- [SLES 15 Service Pack 1](#sles-15-service-pack-1)
- [ROCm Installation Known Issues and
Workarounds](#rocm-installation-known-issues-and-workarounds)
## Deploying ROCm
AMD hosts both Debian and RPM repositories for the ROCm v3.x packages.
The following directions show how to install ROCm on supported Debian-based systems such as Ubuntu 18.04.x
**Note**: These directions may not work as written on unsupported Debian-based distributions. For example, newer versions of Ubuntu may
not be compatible with the rock-dkms kernel driver. In this case, you can exclude the rocm-dkms and rock-dkms packages.
## Prerequisites
In this release, AMD ROCm extends support to Ubuntu 20.04, including dual kernel.
The AMD ROCm platform is designed to support the following operating systems:
- Ubuntu 20.04 (5.4 and 5.6-oem) and 18.04.4 (Kernel 5.3)
- CentOS 7.8 & RHEL 7.8 (Kernel 3.10.0-1127) (Using devtoolset-7
runtime support)
- CentOS 8.2 & RHEL 8.2 (Kernel 4.18.0 ) (devtoolset is not required)
- SLES 15 SP1
### FRESH INSTALLATION OF AMD ROCm V3.7 RECOMMENDED
A fresh and clean installation of AMD ROCm v3.7 is recommended. An upgrade from previous releases to AMD ROCm v3.7 is not supported.
**Note**: AMD ROCm release v3.3 or prior releases are not fully compatible with AMD ROCm v3.5 and higher versions. You must perform a
fresh ROCm installation if you want to upgrade from AMD ROCm v3.3 or older to 3.5 or higher versions and vice-versa.
**Note**: *render group* is required only for Ubuntu v20.04. For all other ROCm supported operating systems, continue to use *video group*.
- For ROCm v3.5 and releases thereafter, the *clinfo* path is changed
to - */opt/rocm/opencl/bin/clinfo*.
- For ROCm v3.3 and older releases, the *clinfo* path remains unchanged - */opt/rocm/opencl/bin/x86_64/clinfo*.
## Supported Operating Systems
### Ubuntu
**Installing a ROCm Package from a Debian Repository**
To install from a Debian Repository:
1. Run the following code to ensure that your system is up to date:
```
sudo apt update
sudo apt dist-upgrade
sudo apt install libnuma-dev
sudo reboot
```
2. Add the ROCm apt repository.
For Debian-based systems like Ubuntu, configure the Debian ROCm repository as follows:
**Note**: The public key has changed to reflect the new location. You must update to the new location as the old key will be removed in a
future release.
- Old Key: <http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key>
- New Key: <http://repo.radeon.com/rocm/rocm.gpg.key>
```
wget -q -O - http://repo.radeon.com/rocm/rocm.gpg.key | sudo apt-key add -
echo 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list
```
The gpg key may change; ensure it is updated when installing a new release. If the key signature verification fails while updating, re-add
the key from the ROCm apt repository.
The current rocm.gpg.key is not available in a standard key ring distribution, but has the following sha1sum hash:
e85a40d1a43453fe37d63aa6899bc96e08f2817a rocm.gpg.key
3. Install the ROCm meta-package. Update the appropriate repository list and install the rocm-dkms meta-package:
```
sudo apt update
sudo apt install rocm-dkms && sudo reboot
```
4. Set permissions. To access the GPU, you must be a user in the video and render groups. Ensure your user account is a member of the video
and render groups prior to using ROCm. To identify the groups you are a member of, use the following command:
```
groups
```
5. To add your user to the video and render groups, use the following command with the sudo password:
```
sudo usermod -a -G video $LOGNAME
sudo usermod -a -G render $LOGNAME
```
6. By default, you must add any future users to the video and render groups. To add future users to the video and render groups, run the
following command:
```
echo 'ADD_EXTRA_GROUPS=1' | sudo tee -a /etc/adduser.conf
echo 'EXTRA_GROUPS=video' | sudo tee -a /etc/adduser.conf
echo 'EXTRA_GROUPS=render' | sudo tee -a /etc/adduser.conf
```
7. Restart the system.
8. After restarting the system, run the following commands to verify that the ROCm installation is successful. If you see your GPUs
listed by both commands, the installation is considered successful.
```
/opt/rocm/bin/rocminfo
/opt/rocm/opencl/bin/clinfo
```
**Note**: To run the ROCm programs, add the ROCm binaries in your PATH.
```
echo 'export PATH=$PATH:/opt/rocm/bin:/opt/rocm/profiler/bin:/opt/rocm/opencl/bin' | sudo tee -a /etc/profile.d/rocm.sh
```
#### Uninstalling ROCm Packages from Ubuntu
To uninstall the ROCm packages from Ubuntu 16.04.6 or Ubuntu 18.04.4, run the following command:
sudo apt autoremove rocm-opencl rocm-dkms rocm-dev rocm-utils && sudo reboot
#### Installing Development Packages for Cross Compilation
It is recommended that you develop and test development packages on different systems. For example, some development or build systems may
not have an AMD GPU installed. In this scenario, you must avoid installing the ROCk kernel driver on the development system.
Instead, install the following development subset of packages:
sudo apt update
sudo apt install rocm-dev
**Note**: To execute ROCm enabled applications, you must install the full ROCm driver stack on your system.
#### Using Debian-based ROCm with Upstream Kernel Drivers
You can install the ROCm user-level software without installing AMD's custom ROCk kernel driver. To use the upstream kernels, run the
following commands instead of installing rocm-dkms:
sudo apt update
sudo apt install rocm-dev
echo 'SUBSYSTEM=="kfd", KERNEL=="kfd", TAG+="uaccess", GROUP="video"' | sudo tee /etc/udev/rules.d/70-kfd.rules
### CentOS RHEL
#### CentOS v7.7/RHEL v7.8 and CentOS/RHEL 8.1
This section describes how to install ROCm on supported RPM-based systems such as CentOS v7.7/RHEL v7.8 and CentOS/RHEL v8.1.
#### Preparing RHEL for Installation
RHEL is a subscription-based operating system. You must enable the external repositories to install on the devtoolset-7 environment and the
dkms support files.
**Note**: The following steps do not apply to the CentOS installation.
1. The subscription for RHEL must be enabled and attached to a pool ID. See the Obtaining an RHEL image and license page for instructions on
registering your system with the RHEL subscription server and attaching to a pool id.
2. Enable the following repositories for RHEL v7.x:
```
sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms
sudo subscription-manager repos --enable rhel-7-server-optional-rpms
sudo subscription-manager repos --enable rhel-7-server-extras-rpms
```
3. Enable additional repositories by downloading and installing the epel-release-latest-7/epel-release-latest-8 repository RPM:
```
sudo rpm -ivh <repo>
```
For more details,
- For RHEL v7.x, see
<https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm>
- For RHEL v8.x, see
<https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm>
4. Install and set up Devtoolset-7.
**Note**: Devtoolset is not required for CentOS/RHEL v8.x
To setup the Devtoolset-7 environment, follow the instructions on this page: <https://www.softwarecollections.org/en/scls/rhscl/devtoolset-7/>
**Note**: devtoolset-7 is a software collections package and is not supported by AMD.
#### Installing CentOS v7.7/v8.1 for DKMS
Use the dkms tool to install the kernel drivers on CentOS/RHEL:
sudo yum install -y epel-release
sudo yum install -y dkms kernel-headers-`uname -r` kernel-devel-`uname -r`
#### Installing ROCm
To install ROCm on your system, follow the instructions below:
1. Delete the previous versions of ROCm before installing the latest version.
2. Create a /etc/yum.repos.d/rocm.repo file with the following contents:
- CentOS/RHEL 7.x : <http://repo.radeon.com/rocm/yum/rpm>
- CentOS/RHEL 8.x : <http://repo.radeon.com/rocm/centos8/rpm>
```
[ROCm]
name=ROCm
baseurl=http://repo.radeon.com/rocm/yum/rpm
enabled=1
gpgcheck=1
gpgkey=http://repo.radeon.com/rocm/rocm.gpg.key
```
**Note**: The URL of the repository must point to the location of the repositories' repodata database.
3. Install ROCm components using the following command:
**Note**: This step is applicable only for CentOS/RHEL v8.1 and is not required for v7.8.
```
sudo yum install rocm-dkms && sudo reboot
```
4. Restart the system. The rock-dkms component is installed and the /dev/kfd device is now available.
5. Set permissions. To access the GPU, you must be a user in the video group. Ensure your user account is a member of the video group prior
to using ROCm. To identify the groups you are a member of, use the following command:
```
groups
```
6. To add your user to the video group, use the following command with the sudo password:
```
sudo usermod -a -G video $LOGNAME
```
7. By default, add any future users to the video group. Run the following command to add users to the video group:
```
echo 'ADD_EXTRA_GROUPS=1' | sudo tee -a /etc/adduser.conf
echo 'EXTRA_GROUPS=video' | sudo tee -a /etc/adduser.conf
```
**Note**: Before updating to the latest version of the operating system, delete the ROCm packages to avoid DKMS-related issues.
8. Restart the system.
9. Test the ROCm installation.
#### Testing the ROCm Installation
After restarting the system, run the following commands to verify that the ROCm installation is successful. If you see your GPUs listed, you
are good to go!
/opt/rocm/bin/rocminfo
/opt/rocm/opencl/bin/clinfo
**Note**: Add the ROCm binaries in your PATH for easy implementation of the ROCm programs.
echo 'export PATH=$PATH:/opt/rocm/bin:/opt/rocm/profiler/bin:/opt/rocm/opencl/bin' | sudo tee -a /etc/profile.d/rocm.sh
#### Compiling Applications Using HCC, HIP, and Other ROCm Software
To compile applications or samples, run the following command to use gcc-7.2 provided by the devtoolset-7 environment:
scl enable devtoolset-7 bash
#### Uninstalling ROCm from CentOS/RHEL
To uninstall the ROCm packages, run the following command:
sudo yum autoremove rocm-opencl rocm-dkms rock-dkms
#### Installing Development Packages for Cross Compilation
You can develop and test ROCm packages on different systems. For example, some development or build systems may not have an AMD GPU
installed. In this scenario, you can avoid installing the ROCm kernel driver on your development system. Instead, install the following
development subset of packages:
sudo yum install rocm-dev
**Note**: To execute ROCm-enabled applications, you will require a system installed with the full ROCm driver stack.
#### Using ROCm with Upstream Kernel Drivers
You can install ROCm user-level software without installing AMD\'s custom ROCk kernel driver. To use the upstream kernel drivers, run the
following commands
sudo yum install rocm-dev
echo 'SUBSYSTEM=="kfd", KERNEL=="kfd", TAG+="uaccess", GROUP="video"' | sudo tee /etc/udev/rules.d/70-kfd.rules
sudo reboot
**Note**: You can use this command instead of installing rocm-dkms.
**Note**: Ensure you restart the system after ROCm installation.
### SLES 15 Service Pack 1
The following section describes how to install and uninstall ROCm on SLES 15 Service Pack 1.
**Installation**
1. Install the "dkms" package.
```
sudo SUSEConnect --product PackageHub/15.1/x86_64
sudo zypper install dkms
```
2. Add the ROCm repo.
```
sudo zypper clean --all
sudo zypper addrepo http://repo.radeon.com/rocm/zyp/zypper/ rocm
sudo zypper ref
sudo rpm --import http://repo.radeon.com/rocm/rocm.gpg.key
sudo zypper --gpg-auto-import-keys install rocm-dkms
sudo reboot
```
3. Run the following command once
```
cat <<EOF | sudo tee /etc/modprobe.d/10-unsupported-modules.conf
allow_unsupported_modules 1
EOF
sudo modprobe amdgpu
```
4. Verify the ROCm installation.
5. Run /opt/rocm/bin/rocminfo and /opt/rocm/opencl/bin/clinfo commands to list the GPUs and verify that the ROCm installation is
successful.
6. Set permissions. To access the GPU, you must be a user in the video group. Ensure your user account is a member of the video group prior to using ROCm. To identify the groups you are a member of, use the following command:
groups
7. To add your user to the video group, use the following command with the sudo password:
```
sudo usermod -a -G video $LOGNAME
```
8. By default, add any future users to the video group. Run the following command to add users to the video group:
```
echo 'ADD_EXTRA_GROUPS=1' | sudo tee -a /etc/adduser.conf
echo 'EXTRA_GROUPS=video' | sudo tee -a /etc/adduser.conf
```
9. Restart the system.
10. Test the basic ROCm installation.
11. After restarting the system, run the following commands to verify that the ROCm installation is successful. If you see your GPUs
listed by both commands, the installation is considered successful.
```
/opt/rocm/bin/rocminfo
/opt/rocm/opencl/bin/clinfo
```
**Note**: To run the ROCm programs more efficiently, add the ROCm binaries in your PATH.
echo 'export PATH=$PATH:/opt/rocm/bin:/opt/rocm/profiler/bin:/opt/rocm/opencl/bin' | sudo tee -a /etc/profile.d/rocm.sh
#### Uninstallation
To uninstall, use the following command:
sudo zypper remove rocm-opencl rocm-dkms rock-dkms
**Note**: Ensure all other installed packages/components are removed.
**Note**: Ensure all the content in the /opt/rocm directory is completely removed. If the command does not remove all the ROCm components/packages, ensure
you remove them individually.
## ROCm Installation Known Issues and Workarounds
### Closed source components
The ROCm platform relies on some closed source components to provide functionalities like HSA image support. These components are only
available through the ROCm repositories, and they may be deprecated or become open source components in the future. These components are made
available in the following packages:
- hsa-ext-rocr-dev

BIN
amdblack.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 6.2 KiB

View File

@@ -1,27 +1,27 @@
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<remote name="roc-github"
fetch="https://github.com/RadeonOpenCompute/" />
fetch="http://github.com/RadeonOpenCompute/" />
<remote name="rocm-devtools"
fetch="https://github.com/ROCm-Developer-Tools/" />
fetch="https://github.com/ROCm-Developer-Tools/" />
<remote name="rocm-swplat"
fetch="https://github.com/ROCmSoftwarePlatform/" />
fetch="https://github.com/ROCmSoftwarePlatform/" />
<remote name="gpuopen-libs"
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
<remote name="gpuopen-tools"
fetch="https://github.com/GPUOpen-Tools/" />
fetch="https://github.com/GPUOpen-Tools/" />
<remote name="KhronosGroup"
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-5.5.1"
remote="roc-github"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCM-->
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-3.7.0"
remote="roc-github"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCM-->
<project name="ROCK-Kernel-Driver" />
<project name="ROCT-Thunk-Interface" />
<project name="ROCR-Runtime" />
<project name="rocm_smi_lib" />
<project name="rocm-core" />
<project name="ROC-smi" />
<project name="rocm_smi_lib" remote="roc-github" />
<project name="rocm-cmake" />
<project name="rocminfo" />
<project name="rocprofiler" remote="rocm-devtools" />
@@ -29,51 +29,51 @@ fetch="https://github.com/KhronosGroup/" />
<project name="ROCm-OpenCL-Runtime" />
<project path="ROCm-OpenCL-Runtime/api/opencl/khronos/icd" name="OpenCL-ICD-Loader" remote="KhronosGroup" revision="6c03f8b58fafd9dd693eaac826749a5cfad515f8" />
<project name="clang-ocl" />
<!--HIP Projects-->
<!--HIP Projects-->
<project name="HIP" remote="rocm-devtools" />
<project name="hipamd" remote="rocm-devtools" />
<project name="HIP-Examples" remote="rocm-devtools" />
<project name="ROCclr" remote="rocm-devtools" />
<project name="HIPIFY" remote="rocm-devtools" />
<project name="HIPCC" remote="rocm-devtools" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm-project" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm-project" path="llvm_amd-stg-open" />
<project name="ROCm-Device-Libs" />
<project name="atmi" />
<project name="ROCm-CompilerSupport" />
<project name="rocr_debug_agent" remote="rocm-devtools" />
<project name="rocr_debug_agent" remote="rocm-devtools" revision="refs/tags/roc-3.7.0" />
<project name="rocm_bandwidth_test" />
<project name="half" remote="rocm-swplat" revision="37742ce15b76b44e4b271c1e66d13d2fa7bd003e" />
<project name="RCP" remote="gpuopen-tools" revision="3a49405a1500067c49d181844ec90aea606055bb" />
<!-- gdb projects -->
<!-- gdb projects -->
<project name="ROCgdb" remote="rocm-devtools" />
<project name="ROCdbgapi" remote="rocm-devtools" />
<!-- ROCm Libraries -->
<project name="rdc" />
<project groups="mathlibs" name="rocBLAS" remote="rocm-swplat" />
<project groups="mathlibs" name="Tensile" remote="rocm-swplat" />
<project groups="mathlibs" name="hipBLAS" remote="rocm-swplat" />
<project groups="mathlibs" name="rocFFT" remote="rocm-swplat" />
<project groups="mathlibs" name="hipFFT" remote="rocm-swplat" />
<project groups="mathlibs" name="rocRAND" remote="rocm-swplat" />
<project groups="mathlibs" name="rocSPARSE" remote="rocm-swplat" />
<project groups="mathlibs" name="rocSOLVER" remote="rocm-swplat" />
<project groups="mathlibs" name="hipSOLVER" remote="rocm-swplat" />
<project groups="mathlibs" name="hipSPARSE" remote="rocm-swplat" />
<project groups="mathlibs" name="rocALUTION" remote="rocm-swplat" />
<!-- ROCm Libraries -->
<project name="rocBLAS" remote="rocm-swplat" />
<project name="hipBLAS" remote="rocm-swplat" />
<project name="rocFFT" remote="rocm-swplat" />
<project name="rocRAND" remote="rocm-swplat" />
<project name="rocSPARSE" remote="rocm-swplat" />
<project name="rocSOLVER" remote="rocm-swplat" />
<project name="hipSPARSE" remote="rocm-swplat" />
<project name="rocALUTION" remote="rocm-swplat" />
<project name="MIOpenGEMM" remote="rocm-swplat" />
<project name="MIOpen" remote="rocm-swplat" />
<project groups="mathlibs" name="rccl" remote="rocm-swplat" />
<project name="rccl" remote="rocm-swplat" />
<project name="MIVisionX" remote="gpuopen-libs" />
<project groups="mathlibs" name="rocThrust" remote="rocm-swplat" />
<project groups="mathlibs" name="hipCUB" remote="rocm-swplat" />
<project groups="mathlibs" name="rocPRIM" remote="rocm-swplat" />
<project groups="mathlibs" name="rocWMMA" remote="rocm-swplat" />
<project name="hipfort" remote="rocm-swplat" />
<project name="AMDMIGraphX" remote="rocm-swplat" />
<project name="rocThrust" remote="rocm-swplat" />
<project name="hipCUB" remote="rocm-swplat" />
<project name="rocPRIM" remote="rocm-swplat" />
<project name="AMDMIGraphX" remote="rocm-swplat" revision="e66968a25f9342a28af1157b06cbdbf8579c5519" />
<project name="ROCmValidationSuite" remote="rocm-devtools" />
<!-- Projects for OpenMP-Extras -->
<project name="aomp" path="openmp-extras/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="openmp-extras/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="openmp-extras/flang" remote="rocm-devtools" />
<!-- Projects for AOMP -->
<project name="ROCT-Thunk-Interface" path="aomp/roct-thunk-interface" remote="roc-github" />
<project name="ROCR-Runtime" path="aomp/rocr-runtime" remote="roc-github" />
<project name="ROCm-Device-Libs" path="aomp/rocm-device-libs" remote="roc-github" />
<project name="ROCm-CompilerSupport" path="aomp/rocm-compilersupport" remote="roc-github" />
<project name="rocminfo" path="aomp/rocminfo" remote="roc-github" />
<project name="HIP" path="aomp/hip-on-vdi" remote="rocm-devtools" />
<project name="aomp" path="aomp/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="aomp/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="aomp/flang" remote="rocm-devtools" />
<project name="amd-llvm-project" path="aomp/amd-llvm-project" remote="rocm-devtools" />
<project name="ROCclr" path="aomp/vdi" remote="rocm-devtools" />
<project name="ROCm-OpenCL-Runtime" path="aomp/opencl-on-vdi" remote="roc-github" />
</manifest>

View File

@@ -1,6 +0,0 @@
# 404 Page Not Found
Page could not be found.
Return to [home](./index) or use the links in the sidebar to find what you are
looking for.

View File

@@ -1,74 +0,0 @@
# About ROCm Documentation
ROCm documentation is made available under open source [licenses](licensing.md).
Documentation is built using open source toolchains. Contributions to our
documentation are encouraged and welcome. As a contributor, please familiarize
yourself with our documentation toolchain.
## ReadTheDocs
[ReadTheDocs](https://docs.readthedocs.io/en/stable/) is the front end for our
documentation; that is, it is the tool that serves our HTML-based documentation
to our end users.
## Doxygen
[Doxygen](https://www.doxygen.nl/) is the most common inline code documentation
standard. ROCm projects use Doxygen for public API documentation (unless the
upstream project is using a different tool).
## Sphinx
[Sphinx](https://www.sphinx-doc.org/en/master/) is a documentation generator
originally created for Python. It is now widely used in the Open Source community.
Originally, Sphinx supported RST-based documentation. Markdown support is now
available. ROCm documentation plans to default to markdown for new projects.
Existing projects using RST are under no obligation to convert to markdown. New
projects that believe markdown is not suitable should contact the documentation
team prior to selecting RST.
### MyST
[Markedly Structured Text (MyST)](https://myst-tools.org/docs/spec) is an extended
flavor of Markdown ([CommonMark](https://commonmark.org/)) influenced by reStructuredText (RST) and Sphinx.
It is integrated via [`myst-parser`](https://myst-parser.readthedocs.io/en/latest/).
A cheat sheet that showcases how to use the MyST syntax is available over at [the Jupyter
reference](https://jupyterbook.org/en/stable/reference/cheatsheet.html).
### Sphinx Theme
ROCm is using the
[Sphinx Book Theme](https://sphinx-book-theme.readthedocs.io/en/latest/). This
theme is used by Jupyter books. ROCm documentation applies some customization,
including a header and footer, on top of the Sphinx Book Theme. A future custom
ROCm theme will be part of our documentation goals.
### Sphinx Design
Sphinx Design is an extension for Sphinx-based websites that adds design
functionality. Please see the documentation
[here](https://sphinx-design.readthedocs.io/en/latest/index.html). ROCm
documentation uses Sphinx Design for grids, cards, and synchronized tabs.
Other features may be used in the future.
### Sphinx External TOC
ROCm uses the
[sphinx-external-toc](https://sphinx-external-toc.readthedocs.io/en/latest/intro.html)
for our navigation. This tool provides a YAML-file-based left navigation menu. This
tool was selected due to its flexibility, which allows scripts to operate on the
YAML file. Please transition to this file for the project's navigation. You can
see the `_toc.yml.in` file in this repository in the docs/sphinx folder for an
example.
### Breathe
Sphinx uses [Breathe](https://www.breathe-doc.org/) to integrate Doxygen
content.
## `rocm-docs-core` pip package
[rocm-docs-core](https://github.com/RadeonOpenCompute/rocm-docs-core) is an AMD
maintained project that applies customization for our documentation. This
project is the tool most ROCm repositories will use as part of the documentation
build.

View File

@@ -1,76 +0,0 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
import shutil
from rocm_docs import ROCmDocs
shutil.copy2('../CONTRIBUTING.md','./contributing.md')
shutil.copy2('../RELEASE.md','./release.md')
# Keep capitalization due to similar linking on GitHub's markdown preview.
shutil.copy2('../CHANGELOG.md','./CHANGELOG.md')
# configurations for PDF output by Read the Docs
project = "ROCm Documentation"
author = "Advanced Micro Devices, Inc."
copyright = "Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved."
version = "5.0.2"
release = "5.0.2"
setting_all_article_info = True
all_article_info_os = ["linux"]
all_article_info_author = ""
# pages with specific settings
article_pages = [
{"file":"deploy/linux/index", "os":["linux"]},
{"file":"deploy/linux/install_overview", "os":["linux"]},
{"file":"deploy/linux/prerequisites", "os":["linux"]},
{"file":"deploy/linux/quick_start", "os":["linux"]},
{"file":"deploy/linux/install", "os":["linux"]},
{"file":"deploy/linux/upgrade", "os":["linux"]},
{"file":"deploy/linux/uninstall", "os":["linux"]},
{"file":"deploy/linux/package_manager_integration", "os":["linux"]},
{"file":"deploy/docker", "os":["linux"]},
{"file":"release/gpu_os_support", "os":["linux"]},
{"file":"release/docker_support_matrix", "os":["linux"]},
{"file":"reference/gpu_libraries/communication", "os":["linux"]},
{"file":"reference/ai_tools", "os":["linux"]},
{"file":"reference/management_tools", "os":["linux"]},
{"file":"reference/validation_tools", "os":["linux"]},
{"file":"reference/framework_compatibility/framework_compatibility", "os":["linux"]},
{"file":"reference/computer_vision", "os":["linux"]},
{"file":"how_to/deep_learning_rocm", "os":["linux"]},
{"file":"how_to/gpu_aware_mpi", "os":["linux"]},
{"file":"how_to/magma_install/magma_install", "os":["linux"]},
{"file":"how_to/pytorch_install/pytorch_install", "os":["linux"]},
{"file":"how_to/system_debugging", "os":["linux"]},
{"file":"how_to/tensorflow_install/tensorflow_install", "os":["linux"]},
{"file":"examples/machine_learning", "os":["linux"]},
{"file":"examples/inception_casestudy/inception_casestudy", "os":["linux"]},
{"file":"understand/file_reorg", "os":["linux"]},
{"file":"understand/isv_deployment_win", "os":["windows"]},
]
external_toc_path = "./sphinx/_toc.yml"
docs_core = ROCmDocs("ROCm 5.0.2 Documentation Home")
docs_core.setup()
external_projects_current_project = "rocm"
for sphinx_var in ROCmDocs.SPHINX_VARS:
globals()[sphinx_var] = getattr(docs_core, sphinx_var)
html_theme_options = {
"link_main_doc": False
}

Numerous binary image files were also removed in this changeset; the diff viewer lists them only as "Binary file not shown" entries with their former sizes, so they are not itemized here.

View File

@@ -1,90 +0,0 @@
# Deploy ROCm Docker containers
## Prerequisites
Docker containers share the kernel with the host operating system, therefore the
ROCm kernel-mode driver must be installed on the host. Please refer to
{ref}`using-the-package-manager` on installing `amdgpu-dkms`. The other
user-space parts (like the HIP-runtime or math libraries) of the ROCm stack will
be loaded from the container image and don't need to be installed to the host.
(docker-access-gpus-in-container)=
## Accessing GPUs in containers
In order to access GPUs in a container (to run applications using HIP, OpenCL or
OpenMP offloading) explicit access to the GPUs must be granted.
The ROCm runtimes make use of multiple device files:
- `/dev/kfd`: the main compute interface shared by all GPUs
- `/dev/dri/renderD<node>`: direct rendering interface (DRI) devices for each
GPU. **`<node>`** is a number for each card in the system starting from 128.
Exposing these devices to a container is done by using the
[`--device`](https://docs.docker.com/engine/reference/commandline/run/#device)
option. For example, to allow access to all GPUs, expose `/dev/kfd` and all
`/dev/dri/renderD` devices:
```shell
docker run --device /dev/kfd --device /dev/dri/renderD128 --device /dev/dri/renderD129 ...
```
More conveniently, instead of listing all devices, the entire `/dev/dri` folder
can be exposed to the new container:
```shell
docker run --device /dev/kfd --device /dev/dri
```
Note that this gives more access than strictly required, as it also exposes the
other device files found in that folder to the container.
(docker-restrict-gpus)=
### Restricting a container to a subset of the GPUs
If a `/dev/dri/renderD` device is not exposed to a container, then the container cannot
use the GPU associated with it; this makes it possible to restrict a container to any
subset of devices.
For example, to allow the container to access the first and third GPUs, start it as
follows:
```shell
docker run --device /dev/kfd --device /dev/dri/renderD128 --device /dev/dri/renderD130 <image>
```
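Which `renderD` node belongs to which physical GPU can usually be determined from the `by-path` symlinks that udev maintains; this is a hedged example, and the PCI addresses in the output will differ per system:
```shell
# Show udev-managed symlinks mapping PCI bus addresses to render nodes
# (present on most distributions; output varies by system).
ls -l /dev/dri/by-path/
```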
### Additional Options
The performance of an application can vary depending on the assignment of GPUs
and CPUs to the task. Typically, `numactl` is installed as part of many HPC
applications to provide GPU/CPU mappings. The Docker runtime option below disables
Docker's default seccomp profile so that the memory-mapping system calls these tools
rely on are not blocked, which can improve performance.
```shell
--security-opt seccomp=unconfined
```
This option is recommended for Docker containers running HPC applications.
```shell
docker run --device /dev/kfd --device /dev/dri --security-opt seccomp=unconfined ...
```
## Docker images in the ROCm ecosystem
### Base images
<https://github.com/RadeonOpenCompute/ROCm-docker> hosts images useful for users
wishing to build their own containers leveraging ROCm. The built images are
available from [Docker Hub](https://hub.docker.com/u/rocm). In particular,
`rocm/rocm-terminal` is a small image with the prerequisites needed to build HIP
applications, but it does not include any libraries.
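For example, the device and security options described above can be combined with `rocm/rocm-terminal` to get an interactive shell with GPU access; this is a minimal sketch, and you may want to restrict the exposed devices as shown earlier:
```shell
# Start an interactive ROCm-enabled shell; expose all GPUs and relax
# the seccomp profile as recommended for HPC workloads.
docker run -it \
  --device /dev/kfd --device /dev/dri \
  --security-opt seccomp=unconfined \
  rocm/rocm-terminal
```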
### Applications
AMD provides pre-built images for various GPU-ready applications through its
Infinity Hub at <https://www.amd.com/en/technologies/infinity-hub>.
Examples for invoking each application and suggested parameters used for
benchmarking are also provided there.

View File

@@ -1,53 +0,0 @@
# Deploy ROCm on Linux
Start with {doc}`/deploy/linux/quick_start` or follow the detailed
instructions below.
## Prepare to Install
::::{grid} 1 1 2 2
:gutter: 1
:::{grid-item-card} Prerequisites
:link: prerequisites
:link-type: doc
The prerequisites page lists the steps required *before* installation.
:::
:::{grid-item-card} Install Choices
:link: install_overview
:link-type: doc
Package manager vs AMDGPU Installer
Standard Packages vs Multi-Version Packages
:::
::::
## Choose your install method
::::{grid} 1 1 2 2
:gutter: 1
:::{grid-item-card} Package Manager
:link: os-native/index
:link-type: doc
Directly use your distribution's package manager to install ROCm.
:::
:::{grid-item-card} AMDGPU Installer
:link: installer/index
:link-type: doc
Use an installer tool that orchestrates changes via the package
manager.
:::
::::
## See Also
- {doc}`/release/gpu_os_support`

View File

@@ -1,71 +0,0 @@
# ROCm Installation Options (Linux)
Users installing ROCm must choose among several installation options. New
users should follow the [Quick Start guide](./quick_start).
## Package Manager versus AMDGPU Installer?
ROCm supports two methods for installation:
- Directly using the Linux distribution's package manager
- The `amdgpu-install` script
There is no difference in the final installation state when choosing either
option.
Using the distribution's package manager lets the user install,
upgrade, and uninstall with familiar commands and workflows. Third-party
ecosystem support is the same as for your OS package manager.
The `amdgpu-install` script is a wrapper around the package manager. The script
installs the same packages as the package manager method.
The installer automates the installation process for the AMDGPU driver
and the ROCm stack. It handles the complete installation process
for ROCm, including setting up the repository, cleaning the system, updating,
and installing the desired drivers and meta-packages. Users who are
less familiar with the package manager can choose this method for ROCm
installation.
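As a rough illustration of this equivalence on Ubuntu, either of the following is expected to result in a comparable HIP development installation; the meta-package name `rocm-hip-sdk` is an assumption here and can differ between releases, and both commands presume the ROCm repositories are already configured:
```shell
# Option 1: install directly with the distribution's package manager
# (assumed meta-package name; adjust for your ROCm release).
sudo apt install rocm-hip-sdk

# Option 2: install the matching use case via the wrapper script.
sudo amdgpu-install --usecase=hiplibsdk
```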
(installation-types)=
## Single-version versus Multi-version ROCm Installation
ROCm packages are versioned with both package-specific semantic versioning and
a ROCm release version.
### Single-version Installation
The single-version ROCm installation refers to the following:
- Installation of a single instance of the ROCm release on a system
- Use of non-versioned ROCm meta-packages
### Multi-version Installation
The multi-version installation refers to the following:
- Installation of multiple instances of the ROCm stack on a system. Extending
the package name and its dependencies with the release version adds the
ability to support multiple versions of packages simultaneously.
- Use of versioned ROCm meta-packages.
```{attention}
ROCm packages that were previously installed from a single-version installation
must be removed before proceeding with the multi-version installation to avoid
conflicts.
```
```{note}
Multi-version installation is not available for the kernel-mode driver module, also referred to as AMDGPU.
```
The following image demonstrates the difference between single-version and
multi-version ROCm installation types:
```{figure-md} install-types
<img src="/data/deploy/linux/image.001.png" alt="Single-version and multi-version ROCm installation types">
ROCm Installation Types
```
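A quick way to see the difference on an installed system is the layout under `/opt`; the version numbers below are only illustrative and follow the default `/opt/rocm-<version>` prefix scheme:
```shell
# Single-version installation: a single ROCm tree, typically reachable
# through /opt/rocm (often a symlink to the versioned directory).
ls -ld /opt/rocm*

# Multi-version installation: one prefix per installed release,
# e.g. /opt/rocm-5.0.0 and /opt/rocm-5.0.2 side by side.
ls -d /opt/rocm-*
```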

View File

@@ -1,31 +0,0 @@
# AMDGPU Install Script
::::{grid} 2 3 3 3
:gutter: 1
:::{grid-item-card} Install
:link: install
:link-type: doc
How to install ROCm?
:::
:::{grid-item-card} Upgrade
:link: upgrade
:link-type: doc
Instructions for upgrading an existing ROCm installation.
:::
:::{grid-item-card} Uninstall
:link: uninstall
:link-type: doc
Steps for removing ROCm packages, libraries, and tools.
:::
::::
## See Also
- {doc}`/release/gpu_os_support`

View File

@@ -1,306 +0,0 @@
# Installation with install script
Prior to beginning, please ensure you have the [prerequisites](../prerequisites)
installed.
## Download the Installer Script
To download and install the `amdgpu-install` script on the system, use the
following commands based on your distribution.
::::::{tab-set}
:::::{tab-item} Ubuntu
:sync: ubuntu
::::{tab-set}
:::{tab-item} Ubuntu 18.04
:sync: ubuntu-18.04
```shell
sudo apt update
wget https://repo.radeon.com/amdgpu-install/21.50.2/ubuntu/bionic/amdgpu-install_21.50.2.50002-1_all.deb
sudo apt install ./amdgpu-install_21.50.2.50002-1_all.deb
```
:::
:::{tab-item} Ubuntu 20.04
:sync: ubuntu-20.04
```shell
sudo apt update
wget https://repo.radeon.com/amdgpu-install/21.50.2/ubuntu/focal/amdgpu-install_21.50.2.50002-1_all.deb
sudo apt install ./amdgpu-install_21.50.2.50002-1_all.deb
```
:::
::::
:::::
:::::{tab-item} Red Hat Enterprise Linux
:sync: RHEL
::::{tab-set}
:::{tab-item} RHEL 7.9
:sync: RHEL-7.9
:sync: RHEL-7
```shell
sudo yum install https://repo.radeon.com/amdgpu-install/21.50.2/rhel/7.9/amdgpu-install-21.50.2.50002-1.el7.noarch.rpm
```
:::
:::{tab-item} RHEL 8.4
:sync: RHEL-8.4
:sync: RHEL-8
```shell
sudo yum install https://repo.radeon.com/amdgpu-install/21.50.2/rhel/8.4/amdgpu-install-21.50.2.50002-1.el7.noarch.rpm
```
:::
:::{tab-item} RHEL 8.5
:sync: RHEL-8.5
:sync: RHEL-8
```shell
sudo yum install https://repo.radeon.com/amdgpu-install/21.50.2/rhel/8.5/amdgpu-install-21.50.2.50002-1.el7.noarch.rpm
```
:::
::::
:::::
:::::{tab-item} SUSE Linux Enterprise Server 15
:sync: SLES15
::::{tab-set}
:::{tab-item} Service Pack 3
:sync: SLES15-SP3
```shell
sudo zypper --no-gpg-checks install https://repo.radeon.com/amdgpu-install/21.50.2/sle/15/amdgpu-install-21.50.2.50002-1.noarch.rpm
```
:::
::::
:::::
::::::
## Use cases
Instead of installing individual applications or libraries, the installer script
groups packages into specific use cases that match typical workflows and runtimes.
To display a list of the available use cases, execute the command:
```shell
sudo amdgpu-install --list-usecase
```
The available use cases will be printed in a format similar to the example
output below.
```none
If --usecase option is not present, the default selection is "graphics,opencl,hip"
Available use cases:
rocm(for users and developers requiring full ROCm stack)
- OpenCL (ROCr/KFD based) runtime
- HIP runtimes
- Machine learning framework
- All ROCm libraries and applications
- ROCm Compiler and device libraries
- ROCr runtime and thunk
lrt(for users of applications requiring ROCm runtime)
- ROCm Compiler and device libraries
- ROCr runtime and thunk
opencl(for users of applications requiring OpenCL on Vega or
later products)
- ROCr based OpenCL
- ROCm Language runtime
openclsdk (for application developers requiring ROCr based OpenCL)
- ROCr based OpenCL
- ROCm Language runtime
- development and SDK files for ROCr based OpenCL
hip(for users of HIP runtime on AMD products)
- HIP runtimes
hiplibsdk (for application developers requiring HIP on AMD products)
- HIP runtimes
- ROCm math libraries
- HIP development libraries
```
To install use cases specific to your requirements, use the installer
`amdgpu-install` as follows:
- To install a single use case, add it with the `--usecase` option:
```shell
sudo amdgpu-install --usecase=rocm
```
- For multiple use cases, separate them with commas:
```shell
sudo amdgpu-install --usecase=hiplibsdk,rocm
```
## Single-version ROCm Installation
By default (without the `--rocmrelease` option),
the installer script installs packages in the single-version layout.
## Multi-version ROCm Installation
For a multi-version ROCm installation, you must use the installer script from
the latest release of ROCm that you wish to install.
**Example:** If you want to install ROCm releases 5.0.0 and 5.0.2
simultaneously, you must download the installer from the latest of those
releases, v5.0.2.
### Add Required Repositories
You must add the ROCm repositories manually for all ROCm releases
you want to install except the latest one. The `amdgpu-install` script
automatically adds the required repositories for the latest release.
Run the following commands based on your distribution to add the repositories:
::::::{tab-set}
:::::{tab-item} Ubuntu
:sync: ubuntu
::::{tab-set}
:::{tab-item} Ubuntu 18.04
:sync: ubuntu-18.04
```shell
for ver in 5.0; do
echo "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg.d/rocm-keyring.gpg] https://repo.radeon.com/rocm/apt/$ver bionic main" | sudo tee /etc/apt/sources.list.d/rocm.list
done
echo -e 'Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600' | sudo tee /etc/apt/preferences.d/rocm-pin-600
sudo apt update
```
:::
:::{tab-item} Ubuntu 20.04
:sync: ubuntu-20.04
```shell
for ver in 5.0; do
echo "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg.d/rocm-keyring.gpg] https://repo.radeon.com/rocm/apt/$ver focal main" | sudo tee /etc/apt/sources.list.d/rocm.list
done
echo -e 'Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600' | sudo tee /etc/apt/preferences.d/rocm-pin-600
sudo apt update
```
:::
::::
:::::
:::::{tab-item} Red Hat Enterprise Linux
:sync: RHEL
::::{tab-set}
:::{tab-item} RHEL 7
:sync: RHEL-7
```shell
for ver in 5.0; do
sudo tee --append /etc/yum.repos.d/rocm.repo <<EOF
[ROCm-$ver]
name=ROCm$ver
baseurl=https://repo.radeon.com/rocm/yum/$ver/main
enabled=1
priority=50
gpgcheck=1
gpgkey=https://repo.radeon.com/rocm/rocm.gpg.key
EOF
done
sudo yum clean all
```
:::
:::{tab-item} RHEL 8
:sync: RHEL-8
```shell
for ver in 5.0; do
sudo tee --append /etc/yum.repos.d/rocm.repo <<EOF
[ROCm-$ver]
name=ROCm$ver
baseurl=https://repo.radeon.com/rocm/rhel8/$ver/main
enabled=1
priority=50
gpgcheck=1
gpgkey=https://repo.radeon.com/rocm/rocm.gpg.key
EOF
done
sudo yum clean all
```
:::
::::
:::::
:::::{tab-item} SUSE Linux Enterprise Server 15
:sync: SLES15
::::{tab-set}
:::{tab-item} Service Pack 3
:sync: SLES15-SP3
```shell
for ver in 5.0; do
sudo tee --append /etc/zypp/repos.d/rocm.repo <<EOF
[ROCm-$ver]
name=rocm
baseurl=https://repo.radeon.com/rocm/$ver/sle/15/main/x86_64
enabled=1
gpgcheck=1
gpgkey=https://repo.radeon.com/rocm/rocm.gpg.key
EOF
done
sudo zypper ref
```
:::
::::
:::::
::::::
### Install packages
Use the installer script as given below:
```none
sudo amdgpu-install --usecase=rocm --rocmrelease=<release-number-1>
sudo amdgpu-install --usecase=rocm --rocmrelease=<release-number-2>
sudo amdgpu-install --usecase=rocm --rocmrelease=<release-number-3>
```
The following are examples of a ROCm multi-version installation. The kernel-mode
driver is installed from the latest ROCm release in the list (v5.0.2 in this
example).
```none
sudo amdgpu-install --usecase=rocm --rocmrelease=5.0.0
sudo amdgpu-install --usecase=rocm --rocmrelease=5.0.2
```
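After installation, you can sanity-check that both releases are present; this assumes the default `/opt/rocm-<version>` install prefixes and that the `rocm` use case (which includes ROCm applications such as `rocminfo`) was selected:
```shell
# Each release installs into its own versioned prefix.
ls -d /opt/rocm-5.0.0 /opt/rocm-5.0.2

# Query the available GPU agents using the tools from one release.
/opt/rocm-5.0.2/bin/rocminfo
```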
## Additional options
### Unattended installation
Adding `-y` as a parameter to `amdgpu-install` skips user prompts (for
automation). Example: `amdgpu-install -y --usecase=rocm`
### Skipping kernel-mode driver installation
The installer script tries to install the kernel-mode driver along with the
requested use cases. This might be unnecessary, as in the case of Docker
containers, or you may wish to keep a specific driver version when using a
multi-version installation rather than have the most recently installed release
overwrite the kernel-mode driver.
To skip the installation of the kernel-mode driver, add the `--no-dkms` option
when calling the installer script.
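For example, when building inside a Docker container where the host already provides the driver, the full ROCm use case can be installed without touching the kernel-mode driver:
```shell
sudo amdgpu-install --usecase=rocm --no-dkms
```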

Some files were not shown because too many files have changed in this diff.