Compare commits

...

51 Commits

Author SHA1 Message Date
Roopa Malavally
4c8787087a Update README.md 2021-08-27 15:37:37 -07:00
Roopa Malavally
7cd85779c4 Update README.md 2021-08-27 15:31:42 -07:00
Aakash Sudhanwa
c676ff480e Update default.xml (#1567) 2021-08-27 15:26:48 -07:00
Roopa Malavally
6d19f5b6c1 Add files via upload 2021-08-27 15:24:56 -07:00
Roopa Malavally
4679e8ac87 Update README.md 2021-08-27 15:24:20 -07:00
Roopa Malavally
8a3209f985 Update README.md 2021-08-27 15:23:58 -07:00
Roopa Malavally
79d0d00b2a Update README.md 2021-08-27 15:23:18 -07:00
Roopa Malavally
db5121cdfe Update README.md 2021-08-27 15:22:30 -07:00
Aakash Sudhanwa
035f4995bb Merge branch 'master' into master 2021-08-27 15:08:41 -07:00
Roopa Malavally
f63e3f9ce1 Add files via upload 2021-08-27 15:02:49 -07:00
Roopa Malavally
4e56ed7dc3 Update README.md 2021-08-13 11:49:38 -07:00
Roopa Malavally
2faf5b6ab7 Update README.md 2021-08-13 11:48:18 -07:00
Roopa Malavally
e69b7e6f71 Delete OSKernel.PNG 2021-08-13 11:48:00 -07:00
Roopa Malavally
d53ffd1c89 Add files via upload 2021-08-13 11:47:48 -07:00
Roopa Malavally
e177599de1 Add files via upload 2021-08-09 12:55:19 -07:00
Roopa Malavally
9fc1ba3970 Add files via upload 2021-08-09 12:47:17 -07:00
Nick Curtis
520764faa3 Fix missing links in rocprof docs (#1550) 2021-08-07 08:42:25 -07:00
Roopa Malavally
7d0b53c87f Add files via upload 2021-08-03 10:53:16 -07:00
Roopa Malavally
c3a8ecd0c5 Delete AMD_Compiler_Reference_Guide_v4.3.pdf 2021-08-03 10:49:28 -07:00
Roopa Malavally
21cf37b2df Add files via upload 2021-08-02 21:37:19 -07:00
Roopa Malavally
f4419a3d1c Delete AMD_HIP_Programming_Guide_v4.3.pdf 2021-08-02 21:37:00 -07:00
zhozha
5ffdcf84ab Update to ROCm 4.3 manifest 2021-08-02 17:33:25 -07:00
Roopa Malavally
085295daea Update README.md 2021-08-02 16:51:39 -07:00
Roopa Malavally
cf5cec2580 ROCm v4.3 Release Notes (#1540)
* Delete AMD HIP Programming Guide_v4.2.pdf

* Delete AMD_HIP_API_Guide_4.2.pdf

* Delete AMD_ROCm_DataCenter_Tool_User_Guide_v4.2.pdf

* Delete AMD_ROCm_Release_Notes_v4.2.pdf

* Delete HIP_Supported_CUDA_API_Reference_Guide_v4.2.pdf

* Delete ROCm_Data_Center_Tool_API_Guide_v4.2.pdf

* Delete ROCm_Debugger_API_Guide_v4.2.pdf

* Delete ROCm_Debugger_User_Guide_v4.2.pdf

* Delete ROCm_SMI_Manual_4.2.pdf

* Update README.md

* Update README.md

* Delete CG1.PNG

* Delete CG2.PNG

* Delete CG3.PNG

* Delete CGMain.PNG

* Delete CLI1.PNG

* Delete CLI2.PNG

* Delete SMI.PNG

* Delete keyfeatures.PNG

* Delete latestGPU.PNG

* Delete rocsolverAPI.PNG

* Create test.rst

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md
2021-08-02 16:39:54 -07:00
Roopa Malavally
e7a93ae3f5 Add files via upload 2021-08-01 18:53:14 -07:00
Roopa Malavally
e3b7d2f39d Delete AMD_ROCDebugger_API.pdf.pdf 2021-08-01 18:52:58 -07:00
Roopa Malavally
0c4565d913 Delete AMD_ROCDebugger_User_Guide.pdf.pdf 2021-08-01 18:52:30 -07:00
Roopa Malavally
313a589132 Add files via upload 2021-08-01 18:52:03 -07:00
Roopa Malavally
1caf5514e8 Add files via upload 2021-08-01 18:33:33 -07:00
Roopa Malavally
d029ad24cf Add files via upload 2021-08-01 18:09:17 -07:00
Roopa Malavally
ca6638d917 Add files via upload 2021-08-01 17:42:39 -07:00
Roopa Malavally
5cba920022 Add files via upload 2021-08-01 16:21:37 -07:00
Roopa Malavally
cefc8ef1d7 Add files via upload 2021-08-01 16:17:54 -07:00
Roopa Malavally
b71c5705a2 Delete ROCm_SMI_Manual_4.2.pdf 2021-08-01 16:13:32 -07:00
Roopa Malavally
977a1d14cd Delete ROCm_Debugger_User_Guide_v4.2.pdf 2021-08-01 16:13:17 -07:00
Roopa Malavally
3ab60d1326 Delete ROCm_Debugger_API_Guide_v4.2.pdf 2021-08-01 16:13:04 -07:00
Roopa Malavally
4b5b13294e Delete ROCm_Data_Center_Tool_API_Guide_v4.2.pdf 2021-08-01 16:12:50 -07:00
Roopa Malavally
ce66b14d9e Delete HIP_Supported_CUDA_API_Reference_Guide_v4.2.pdf 2021-08-01 16:12:32 -07:00
Roopa Malavally
01f63f546f Delete AMD_ROCm_Release_Notes_v4.2.pdf 2021-08-01 16:12:20 -07:00
Roopa Malavally
72eab2779e Delete AMD_ROCm_DataCenter_Tool_User_Guide_v4.2.pdf 2021-08-01 16:12:05 -07:00
Roopa Malavally
8a366db3d7 Delete AMD_HIP_API_Guide_4.2.pdf 2021-08-01 16:11:50 -07:00
Roopa Malavally
8267a84345 Delete AMD HIP Programming Guide_v4.2.pdf 2021-08-01 16:11:30 -07:00
zhang2amd
f7b3a38d49 Merge pull request #1470 from RadeonOpenCompute/roc-4.2.x
4.2 : Manifest Files
2021-05-11 14:58:43 -07:00
Aakash Sudhanwa
67bd7501c1 Update README.md 2019-12-18 14:10:38 -08:00
Aakash Sudhanwa
d62f1c4247 Merge pull request #12 from RadeonOpenCompute/master
Rebase
2019-12-18 14:09:40 -08:00
Aakash Sudhanwa
c3d5bc6406 Rename Release nodes pdf 2019-11-25 20:54:25 -08:00
Aakash Sudhanwa
db45731729 Merge pull request #11 from RadeonOpenCompute/master
ROCm Release 2.10 (#947)
2019-11-25 20:12:36 -08:00
Aakash Sudhanwa
34552e95e0 Release Notes 2019-11-25 19:23:24 -08:00
Aakash Sudhanwa
8d0c516c5c Merge pull request #10 from RadeonOpenCompute/master
Update to 2.10
2019-11-25 19:20:50 -08:00
Aakash Sudhanwa
5cba919767 default.xml: ROCm Rel 2.10 2019-11-25 14:38:06 -08:00
Aakash Sudhanwa
bb0022e972 Merge pull request #9 from RadeonOpenCompute/master
Updating to latest
2019-11-25 13:04:27 -08:00
34 changed files with 525 additions and 52942 deletions

Binary files changed (diffs not shown); one file diff suppressed because it is too large. New binary files:

- AMD_HIP_API_Guide_v4.3.pdf
- AMD_ROCDebugger_API.pdf
- AMD_ROCm_SMI_Guide_v4.3.pdf

README.md (773 changed lines)

@@ -1,4 +1,74 @@
# AMD ROCm™ v4.2 Release Notes
# AMD ROCm™ v4.3.1 Point Release Notes
This document describes the features, fixed issues, and information about downloading and installing the AMD ROCm™ software.
It also covers known issues in this release.
## List of Supported Operating Systems
The AMD ROCm platform supports the following operating systems:
![Screenshot](https://github.com/RadeonOpenCompute/ROCm/blob/master/images/SuppEnv.PNG)
## What's New in This Release
The ROCm v4.3.1 release consists of the following enhancements:
### Support for RHEL V8.4
This release extends support for RHEL v8.4.
### Support for SLES V15 Service Pack 3
This release extends support for SLES v15 SP3.
### Pass Manager Update
In the AMD ROCm 4.3.1 release, the ROCm compiler uses the legacy pass manager, by default, to provide a better performance experience with some workloads.
Previously, in ROCm v4.3, the default choice for the ROCm compiler was the new pass manager.
For more information about legacy and new pass managers, see http://llvm.org.
## Known Issues in This Release
### General Userspace and Application Freeze on MI25
For some workloads on MI25, general userspace and application freezes are observed, and the GPU resets intermittently. Note, the freeze may take hours to reproduce.
This issue is under active investigation, and no workarounds are available currently.
### hipRTC - File Not Found Error
hipRTC may fail, and users may encounter the following error:
```
<built-in>:1:10: fatal error: '__clang_hip_runtime_wrapper.h' file not found
#include "__clang_hip_runtime_wrapper.h"
```
#### Suggested Workarounds
* Set LLVM_PATH in the environment to <path to ROCm llvm>/llvm. Note, if ROCm is installed at the default location, then LLVM_PATH must be set to /opt/rocm/llvm.
* Add "-I <path to ROCm>/llvm/lib/clang/13.0.0/include/" to the compiler options in the call to hiprtcCompileProgram(). Note, this workaround requires the following changes in the code:
```
// Set NUM_OPTIONS to one more than the number of options previously required.
const char* options[NUM_OPTIONS];
// Fill the other options[] entries here.
std::string sarg = "-I/opt/rocm/llvm/lib/clang/13.0.0/include/";
options[NUM_OPTIONS - 1] = sarg.c_str();
hiprtcResult compileResult{hiprtcCompileProgram(prog, NUM_OPTIONS, options)};
```
# AMD ROCm™ v4.3 Release Notes
This document describes the features, fixed issues, and information about downloading and installing the AMD ROCm™ software. It also covers known issues and deprecations in this release.
@@ -11,19 +81,11 @@ This document describes the features, fixed issues, and information about downlo
- [What's New in This Release](#Whats-New-in-This-Release)
* [HIP Enhancements](#HIP-Enhancements)
* [ROCm Data Center Tool](#ROCm-Data-Center-Tool)
* [ROCm Math and Communication Libraries](#ROCm-Math-and-Communication-Libraries)
* [ROCProfiler Enhancements](#ROCProfiler-Enhancements)
- [Known Issues in This Release](#Known-Issues-in-This-Release)
- [Fixed Defects](#Fixed-Defects)
- [Known Issues](#Known-Issues)
- [Deprecations](#Deprecations)
* [Compiler Generated Code Object Version 2 Deprecation](#Compiler-Generated-Code-Object-Version-2-Deprecation)
- [Deploying ROCm](#Deploying-ROCm)
- [Hardware and Software Support](#Hardware-and-Software-Support)
- [Machine Learning and High Performance Computing Software Stack for AMD GPU](#Machine-Learning-and-High-Performance-Computing-Software-Stack-for-AMD-GPU)
@@ -39,16 +101,12 @@ This document describes the features, fixed issues, and information about downlo
The AMD ROCm platform is designed to support the following operating systems:
* Ubuntu 20.04.2 HWE (5.4 and 5.6-oem) and 18.04.5 (Kernel 5.4)
* CentOS 7.9 (3.10.0-1127) & RHEL 7.9 (3.10.0-1160.6.1.el7) (Using devtoolset-7 runtime support)
* CentOS 8.3 (4.18.0-193.el8) and RHEL 8.3 (4.18.0-193.1.1.el8) (devtoolset is not required)
* SLES 15 SP2
![Screenshot](https://github.com/RadeonOpenCompute/ROCm/blob/master/images/OSKernelupdated.PNG)
### Fresh Installation of AMD ROCM V4.3 Recommended
### Complete Installation of AMD ROCM V4.2 Recommended
Complete uninstallation of previous ROCm versions is required before installing a new version of ROCm. **An upgrade from previous releases to AMD ROCm v4.2 is not supported**. For more information, refer to the AMD ROCm Installation Guide at
Complete uninstallation of previous ROCm versions is required before installing a new version of ROCm. **An upgrade from previous releases to AMD ROCm v4.3 is not supported**. For more information, refer to the AMD ROCm Installation Guide at
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
@@ -62,7 +120,7 @@ https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
## ROCm Multi-Version Installation Update
With the AMD ROCm v4.2 release, the following ROCm multi-version installation changes apply:
With the AMD ROCm v4.3 release, the following ROCm multi-version installation changes apply:
The meta packages rocm-dkms<version> are now deprecated for multi-version ROCm installs. For example, rocm-dkms3.7.0, rocm-dkms3.8.0.
@@ -73,32 +131,15 @@ The meta packages rocm-dkms<version> are now deprecated for multi-version ROCm i
* ROCm v3.9 and above will not set any ldconfig entries for ROCm libraries for multi-version installation. Users must set LD_LIBRARY_PATH to load the ROCm library version of choice.
**NOTE**: The single version installation of the ROCm stack remains the same. The rocm-dkms package can be used for single version installs and is not deprecated at this time.
### Updated HIP Instructions for ROCm Installation
The hip-base package has a dependency on Perl modules that some operating systems may not have in their default package repositories. Use the following commands to add repositories that have the required Perl packages:
#### For SLES 15 SP2
sudo zypper addrepo https://download.opensuse.org/repositories/devel:languages:perl/SLE_15/devel:languages:perl.repo
#### For CentOS 8.3
sudo yum config-manager --set-enabled powertools
#### For RHEL 8.3
sudo subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
## Support for Environment Modules
Environment modules are now supported. This enhancement in the ROCm v4.3 release enables users to switch between ROCm v4.2 and ROCm v4.3 easily and efficiently.
For more information about installing environment modules, refer to
https://modules.readthedocs.io/en/latest/
@@ -121,24 +162,25 @@ https://rocmdocs.amd.com/en/latest/
## AMD ROCm - HIP Documentation Updates
* HIP Programming Guide v4.2
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD%20HIP%20Programming%20Guide_v4.2.pdf
* HIP API Guide v4.2
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_API_Guide_4.2.pdf
* HIP-Supported CUDA API Reference Guide v4.2
https://github.com/RadeonOpenCompute/ROCm/blob/master/HIP_Supported_CUDA_API_Reference_Guide_v4.2.pdf
* HIP FAQ
For more information, refer to
* HIP Programming Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.3.pdf
* HIP API Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_API_Guide_v4.3.pdf
* HIP-Supported CUDA API Reference Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Supported_CUDA_API_Reference_Guide_v4.3.pdf
* **NEW** - AMD ROCm Compiler Reference Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_Compiler_Reference_Guide_v4.3.pdf
* HIP FAQ
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-FAQ.html#hip-faq
@@ -146,38 +188,32 @@ https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-FAQ.html#hip-faq
* ROCm Data Center Tool User Guide
- Reliability, Accessibility, and Serviceability (RAS) Plugin Integration
For more information, refer to the ROCm Data Center User Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.2.pdf
- Prometheus (Grafana) Integration with Automatic Node Detection
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.3.pdf
* ROCm Data Center Tool API Guide
For more information, refer to the ROCm Data Center API Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_Data_Center_Tool_API_Guide_v4.2.pdf
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_RDC_API_Guide_v4.3.pdf
## ROCm SMI API Documentation Updates
* ROCm SMI API Guide
For more information, refer to the ROCm SMI API Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_SMI_Manual_4.2.pdf
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_SMI_Guide_v4.3.pdf
## ROC Debugger User and API Guide
* ROC Debugger User Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_Debugger_User_Guide_v4.2.pdf
* ROC Debugger User Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCDebugger_User_Guide.pdf
* Debugger API Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_Debugger_API_Guide_v4.2.pdf
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCDebugger_API.pdf
## General AMD ROCm Documentation Links
@@ -206,107 +242,230 @@ Access the following links for more information:
## HIP Enhancements
### HIP Target Platform Macro
The platform macros are updated to target either the AMD or NVIDIA platform in HIP projects. They now include the corresponding headers and libraries for compilation/linking.
* *__HIP_PLATFORM_AMD__* is defined if the HIP platform targets AMD. Note, __HIP_PLATFORM_HCC__ was used previously if the HIP platform targeted AMD. It is now deprecated.
* *__HIP_PLATFORM_NVIDIA__* is defined if the HIP platform targets NVIDIA. Note, __HIP_PLATFORM_NVCC__ was used previously if the HIP platform targeted NVIDIA. It is now deprecated.
For example,
```
#if (defined(__HIP_PLATFORM_AMD__)) && !(defined(__HIP_PLATFORM_NVIDIA__))
#include <hip/amd_detail/hip_complex.h>
#elif !(defined(__HIP_PLATFORM_AMD__)) && (defined(__HIP_PLATFORM_NVIDIA__))
#include <hip/nvidia_detail/hip_complex.h>
#endif
```
### Updated HIP 'Include' Directories
In the ROCm v4.2 release, HIP *include* header directories for platforms are updated as follows:
* *amd_detail/* - includes source header details for the AMD platform implementation. In previous releases, the "hcc_detail" directory was defined; it is now deprecated.
* *nvidia_detail/* - includes source header details for the NVIDIA platform implementation. In previous releases, the "nvcc_detail" directory was defined; it is now deprecated.
### HIP Versioning Update
The HIP version definition is updated from the ROCm v4.2 release as follows:
```
HIP_VERSION = HIP_VERSION_MAJOR * 10000000 + HIP_VERSION_MINOR * 100000 + HIP_VERSION_PATCH
```
The HIP version can be queried from a HIP API call:
```
hipRuntimeGetVersion(&runtimeVersion);
```
**Note**: The version returned will be greater than the version in previous ROCm releases.
### Support for Managed Memory Allocation
HIP now supports and automatically manages Heterogeneous Memory Management (HMM) allocation. The HIP application performs a capability check before making the managed memory API call hipMallocManaged.
**Note**: The _managed_ keyword is unsupported currently.
```
int managed_memory = 0;
HIPCHECK(hipDeviceGetAttribute(&managed_memory,
         hipDeviceAttributeManagedMemory, p_gpuDevice));
if (!managed_memory) {
    printf("info: managed memory access not supported on the device %d\n Skipped\n", p_gpuDevice);
}
else {
    HIPCHECK(hipSetDevice(p_gpuDevice));
    HIPCHECK(hipMallocManaged(&Hmm, N * sizeof(T)));
    . . .
}
```
### HIP Stream Memory Operations
The ROCm v4.2 release extends support for Stream Memory Operations to enable direct synchronization between network nodes and the GPU. The following new APIs are added:
* hipStreamWaitValue32
* hipStreamWaitValue64
* hipStreamWriteValue32
* hipStreamWriteValue64
For more details, see the HIP API guide at
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_API_Guide_4.2.pdf
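As a hedged illustration of the stream memory operations listed above (the flag name, mask argument, and exact signatures are assumptions based on the HIP headers of this era; verify against the linked API guide), a host thread can release a stream that is blocked on a 32-bit value:
```
// Minimal sketch, assuming hipStreamWaitValue32(stream, ptr, value, flags, mask)
// and hipStreamWriteValue32(stream, ptr, value, flags); error checks omitted.
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    hipStream_t stream;
    hipStreamCreate(&stream);

    uint32_t* signal = nullptr;  // device-visible pinned host memory
    hipHostMalloc(reinterpret_cast<void**>(&signal), sizeof(uint32_t), 0);
    *signal = 0;

    // Block the stream until *signal >= 1 (e.g., set by a NIC or a peer process).
    hipStreamWaitValue32(stream, signal, 1, hipStreamWaitValueGte, 0xFFFFFFFFu);
    // This write executes only after the wait above is satisfied.
    hipStreamWriteValue32(stream, signal, 2, 0);

    *signal = 1;  // host releases the stream
    hipStreamSynchronize(stream);
    printf("signal = %u\n", *signal);  // expected: 2

    hipHostFree(signal);
    hipStreamDestroy(stream);
    return 0;
}
```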
### Kernel Enqueue Serialization
Developers can control kernel command serialization from the host using the environment variable AMD_SERIALIZE_KERNEL:
* AMD_SERIALIZE_KERNEL = 1 - wait for completion before enqueue
* AMD_SERIALIZE_KERNEL = 2 - wait for completion after enqueue
* AMD_SERIALIZE_KERNEL = 3 - both
This environment variable setting enables the HIP runtime to wait for the GPU to be idle before/after any GPU command, as in the sketch below.
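A hedged sketch follows; it assumes the HIP runtime reads AMD_SERIALIZE_KERNEL during its lazy initialization, so exporting the variable in the shell before launching the application is the safer equivalent.
```
// Minimal sketch: request full kernel serialization for debugging.
#include <cstdlib>
#include <hip/hip_runtime.h>

int main() {
    // Assumption: must be set before the first HIP API call initializes the runtime.
    setenv("AMD_SERIALIZE_KERNEL", "3", 1);  // wait before and after each enqueue

    int deviceCount = 0;
    hipGetDeviceCount(&deviceCount);  // runtime initializes here
    // ... launch kernels as usual; the GPU is now idle around each command ...
    return 0;
}
```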
### HIP Events in Kernel Dispatch
HIP events passed in kernel dispatch using *hipExtLaunchKernelGGL/hipExtLaunchKernel* are not explicitly recorded and should only be used to get the elapsed time for that specific launch.
Events used across multiple dispatches, for example, start and stop events from different *hipExtLaunchKernelGGL/hipExtLaunchKernel* calls, are treated as invalid unrecorded events. In such scenarios, HIP will display the error *"hipErrorInvalidHandle"* from *hipEventElapsedTime*.
For more details, refer to the HIP API Guide at
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_API_Guide_4.2.pdf
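A minimal hedged sketch of the valid single-dispatch pattern (the kernel, sizes, and the hipExtLaunchKernelGGL argument order are assumptions based on hip_ext.h; verify against the linked guide):
```
#include <hip/hip_runtime.h>
#include <hip/hip_ext.h>
#include <cstdio>

__global__ void scaleKernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float* d_data = nullptr;
    hipMalloc(reinterpret_cast<void**>(&d_data), n * sizeof(float));

    hipEvent_t start, stop;
    hipEventCreate(&start);
    hipEventCreate(&stop);

    // start/stop belong to this single dispatch; reusing them across several
    // hipExtLaunchKernelGGL calls yields hipErrorInvalidHandle (see above).
    hipExtLaunchKernelGGL(scaleKernel, dim3(n / 256), dim3(256),
                          0 /*sharedMem*/, 0 /*stream*/, start, stop,
                          0 /*flags*/, d_data, n);

    hipEventSynchronize(stop);
    float ms = 0.0f;
    hipEventElapsedTime(&ms, start, stop);  // elapsed time for this launch only
    printf("kernel time: %.3f ms\n", ms);

    hipEventDestroy(start);
    hipEventDestroy(stop);
    hipFree(d_data);
    return 0;
}
```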
### NUMA-aware Host Memory Allocation
The Non-Uniform Memory Architecture (NUMA) policy determines how memory is allocated and selects a CPU closest to each GPU.
NUMA also measures the distance between the GPU and CPU devices. By default, each GPU selects the NUMA CPU node with the least NUMA distance between them; the host memory is automatically allocated closest to the memory pool of the NUMA node of the current GPU device.
Note, using the *hipSetDevice* API with a different GPU provides access to the host allocation. However, it may have a longer NUMA distance.
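A hedged sketch of what this means for applications (allocation size and device indices are illustrative):
```
// Minimal sketch: pinned host memory allocated after hipSetDevice is placed
// closest to the NUMA node of the selected GPU automatically.
#include <hip/hip_runtime.h>

int main() {
    hipSetDevice(1);  // subsequent host allocations favor GPU 1's NUMA node

    void* host_buf = nullptr;
    hipHostMalloc(&host_buf, 1 << 20);  // pinned buffer near GPU 1

    // After hipSetDevice(0), host_buf remains accessible but may sit at a
    // longer NUMA distance from GPU 0, as noted above.
    hipHostFree(host_buf);
    return 0;
}
```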
### Changed Environment Variables for HIP
In the ROCm v3.5 release, the Heterogeneous Compute Compiler (HCC) compiler was deprecated, and the HIP-Clang compiler was introduced for compiling Heterogeneous-Compute Interface for Portability (HIP) programs. In addition, the HIP runtime API was implemented on top of Radeon Open Compute Common Language Runtime (ROCclr). ROCclr is an abstraction layer that provides the ability to interact with different runtime backends such as ROCr.
While the HIP_PLATFORM=hcc environment variable was functional in subsequent releases, in the ROCm v4.1 release, the following environment variables were changed:
* *HIP_PLATFORM=hcc to HIP_PLATFORM=amd*
* *HIP_PLATFORM=nvcc to HIP_PLATFORM=nvidia*
Therefore, any applications continuing to use the HIP_PLATFORM=hcc variable will fail. You must update the environment variables to reflect the changes as mentioned above.
### New Atomic System Scope Atomic Operations
HIP now provides new APIs with _system as a suffix to support system scope atomic operations. For example, atomicAnd atomic is dedicated to the GPU device, and atomicAnd_system allows developers to extend the atomic operation to system scope from the GPU device to other CPUs and GPU devices in the system.
For more information, refer to the HIP Programming Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.3.pdf
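A minimal hedged sketch contrasting the two scopes (the kernel and buffer names are illustrative, not from the guide):
```
#include <hip/hip_runtime.h>

__global__ void clearFlags(unsigned int* device_flag, unsigned int* system_flag) {
    atomicAnd(device_flag, 0x0Fu);         // device scope: visible to this GPU only
    atomicAnd_system(system_flag, 0x0Fu);  // system scope: visible to CPUs and other GPUs
}

int main() {
    unsigned int *d_flag = nullptr, *s_flag = nullptr;
    hipMalloc(reinterpret_cast<void**>(&d_flag), sizeof(unsigned int));
    // System-scope atomics need memory visible to the whole system, e.g. managed memory.
    hipMallocManaged(reinterpret_cast<void**>(&s_flag), sizeof(unsigned int));

    hipLaunchKernelGGL(clearFlags, dim3(1), dim3(1), 0, 0, d_flag, s_flag);
    hipDeviceSynchronize();

    hipFree(d_flag);
    hipFree(s_flag);
    return 0;
}
```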
### Indirect Function Call and C++ Virtual Functions
While the new release of the ROCm compiler supports indirect function calls and C++ virtual functions on a device, there are some known limitations and issues.
**Limitations**
* An address to a function is device specific. A function address taken on the host cannot be used on a device, and a function address taken on a device cannot be used on the host. On a system with multiple devices, an address taken on one device cannot be used on a different device.
* C++ virtual functions only work on the device where the object was constructed.
* Indirect call to a device function with function-scope shared memory allocation (for example, LDS) is not supported.
* Indirect call to a device function defined in a different source file than the calling function/kernel is only supported when compiling the entire program with -fgpu-rdc. A valid single-file pattern is sketched below.
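A hedged single-file sketch of a pattern these limitations allow (the names are illustrative; the function address is taken on the device, and single-file compilation sidesteps the -fgpu-rdc requirement):
```
#include <hip/hip_runtime.h>

__device__ float square(float x) { return x * x; }

// Address taken on the device; it is only meaningful on that device.
__device__ float (*op)(float) = square;

__global__ void apply(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = op(data[i]);  // indirect call through a device pointer
}

int main() {
    float* d = nullptr;
    hipMalloc(reinterpret_cast<void**>(&d), 256 * sizeof(float));
    hipLaunchKernelGGL(apply, dim3(1), dim3(256), 0, 0, d, 256);
    hipDeviceSynchronize();
    hipFree(d);
    return 0;
}
```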
**Known Issues in This Release**
* Programs containing kernels with different launch bounds may crash when making an indirect function call. This is due to the compiler miscalculating the register budget for the callee function.
* Programs may not work correctly when making an indirect call to a function that uses more resources (for example, scratch memory, shared memory, or registers) than are made available by the caller.
* Compiling a program with objects with pure or deleted virtual functions on the device will result in a linker error. This is due to the missing implementation of some C++ runtime functions on the device.
* Constructing an object with virtual functions in private or shared memory may crash the program due to a compiler issue when generating code for the constructor.
## ROCm Data Center Tool
### RAS Integration
The ROCm Data Center (RDC) Tool is enhanced with the Reliability, Accessibility, and Serviceability (RAS) plugin.
For more information about RAS integration and installation, refer to the ROCm Data Center Tool User Guide at:
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.2.pdf
### Prometheus (Grafana) Integration with Automatic Node Detection
The ROCm Data Center (RDC) tool enables you to use Consul to discover the rdc_prometheus service automatically. Consul is "a service mesh solution providing a full-featured control plane with service discovery, configuration, and segmentation functionality." For more information, refer to their website at https://www.consul.io/docs/intro.
The ROCm Data Center Tool uses Consul for health checks of RDC's integration with the Prometheus plug-in (rdc_prometheus), and these checks provide information on its efficiency.
Previously, when a new compute node was added, users had to change prometheus_targets.json to use Consul manually. Now, with the Consul agent integration, a new compute node can be discovered automatically.
For more information, refer to the ROCm Data Center Tool User Guide at:
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.3.pdf
### Coarse Grain Utilization
This feature provides a counter that displays the coarse grain GPU usage information, as shown below.
Sample output
```
$ rocm_smi.py --showuse
============================== % time GPU is busy =============================
GPU[0] : GPU use (%): 0
GPU[0] : GFX Activity: 3401
```
### Add 64-bit Energy Accumulator In-band
This feature provides an average value of energy consumed over time in a free-flowing RAPL counter, a 64-bit Energy Accumulator.
Sample output
```
$ rocm_smi.py --showenergycounter
=============================== Consumed Energy ================================
GPU[0] : Energy counter: 2424868
GPU[0] : Accumulated Energy (uJ): 0.0
```
### Support for Continuous Clock Values
ROCm SMI supports continuous clock values instead of the previous discrete levels. Moving forward, the updated sysfs file will consist of only MIN and MAX values, and the user can set the clock value in the given range.
Sample output:
```
$ rocm_smi.py --setsrange 551 1270
Do you accept these terms? [y/N] y
============================= Set Valid sclk Range=======
GPU[0] : Successfully set sclk from 551(MHz) to 1270(MHz)
GPU[1] : Successfully set sclk from 551(MHz) to 1270(MHz)
=========================================================================
$ rocm_smi.py --showsclkrange
============================ Show Valid sclk Range======
GPU[0] : Valid sclk range: 551Mhz - 1270Mhz
GPU[1] : Valid sclk range: 551Mhz - 1270Mhz
```
### Memory Utilization Counters
This feature provides a counter that displays memory utilization information, as shown below.
Sample output
```
$ rocm_smi.py --showmemuse
========================== Current Memory Use ==============================
GPU[0] : GPU memory use (%): 0
GPU[0] : Memory Activity: 0
```
### Performance Determinism
ROCm SMI supports performance determinism as a unique mode of operation. Performance variations are minimal as this enhancement allows users to control the entry and exit to set a soft maximum (ceiling) for the GFX clock.
Sample output
```
$ rocm_smi.py --setperfdeterminism 650
cat pp_od_clk_voltage
GFXCLK:
0: 500Mhz
1: 650Mhz *
2: 1200Mhz
$ rocm_smi.py --resetperfdeterminism
```
**Note**: The idle clock will not take up higher clock values if no workload is running. After enabling determinism, users can run a GFX workload to set performance determinism to the desired clock value in the valid range.
* GFX clock could either be less than or equal to the max value set in this mode. GFX clock will be at the max clock set in this mode only when required by the running workload.
* VDDGFX will be higher by an offset (75mv or so based on PPTable) in the determinism mode.
### HBM Temperature Metric Per Stack
This feature will enable ROCm SMI to report all HBM temperature values as shown below.
Sample output
```
$ rocm_smi.py --showtemp
================================= Temperature =================================
GPU[0] : Temperature (Sensor edge) (C): 29.0
GPU[0] : Temperature (Sensor junction) (C): 36.0
GPU[0] : Temperature (Sensor memory) (C): 45.0
GPU[0] : Temperature (Sensor HBM 0) (C): 43.0
GPU[0] : Temperature (Sensor HBM 1) (C): 42.0
GPU[0] : Temperature (Sensor HBM 2) (C): 44.0
GPU[0] : Temperature (Sensor HBM 3) (C): 45.0
```
## ROCm Math and Communication Libraries
### rocBLAS
**Enhancements**
* Added option to install script to build only rocBLAS clients with a pre-built rocBLAS library
* Supported gemm ext for unpacked int8 input layout on gfx908 GPUs
* Added new flags rocblas_gemm_flags::rocblas_gemm_flags_pack_int8x4 to specify whether the packed layout is used
  * Set rocblas_gemm_flags_pack_int8x4 when using the packed int8x4 layout; this should always be set on GPUs before gfx908
  * For gfx908 GPUs, unpacked int8 is supported, so setting this flag is no longer required
  * Note, the default flag 0 uses unpacked int8 and changes the behaviour of int8 gemm from ROCm 4.1.0
* Added a query function rocblas_query_int8_layout_flag to get the preferable int8 layout for gemm by device (see the sketch below)
**Optimizations**
* Improved performance of non-batched and batched rocblas_Xgemv for gfx908 when m <= 15000 and n <= 15000
* Improved performance of non-batched and batched rocblas_sgemv and rocblas_dgemv for gfx906 when m <= 6000 and n <= 6000
* Improved the overall performance of non-batched and batched rocblas_cgemv for gfx906
* Improved the overall performance of rocblas_Xtrsv
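A hedged sketch of the layout query (the header name and exact signature are assumptions for the rocBLAS of this era; verify against the documentation linked below):
```
// Minimal sketch: pick the preferred int8 layout for this device before
// setting flags for rocblas_gemm_ex.
#include <rocblas.h>
#include <cstdio>

int main() {
    rocblas_handle handle;
    rocblas_create_handle(&handle);

    rocblas_gemm_flags flags = rocblas_gemm_flags_none;
    rocblas_query_int8_layout_flag(handle, &flags);

    if (flags == rocblas_gemm_flags_pack_int8x4)
        printf("device prefers the packed int8x4 layout\n");  // pre-gfx908
    else
        printf("device supports the unpacked int8 layout\n"); // gfx908 and later

    rocblas_destroy_handle(handle);
    return 0;
}
```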
For more information, refer to
@@ -315,7 +474,19 @@ https://rocblas.readthedocs.io/en/master/
### rocRAND
* Performance fixes
**Enhancements**
* gfx90a support added
* gfx1030 support added
* gfx803 support re-enabled
**Fixed**
* Memory leaks in Poisson tests have been fixed.
* Memory leaks that occurred when a generator was created but setting the seed/offset/dimensions raised an exception have been fixed.
For more information, refer to
@@ -324,26 +495,31 @@ https://rocrand.readthedocs.io/en/latest/
### rocSOLVER
**Enhancements**
Support for:
* Linear solvers for general non-square systems:
  * GELS now supports underdetermined and transposed cases
* Inverse of triangular matrices:
  * TRTRI (with batched and strided_batched versions)
* Out-of-place general matrix inversion:
  * GETRI_OUTOFPLACE (with batched and strided_batched versions)
* Multi-level logging functionality
* Implementation of the Thin-SVD algorithm
* Reductions of generalized symmetric- and hermitian-definite eigenproblems:
  * SYGS2, SYGST (with batched and strided_batched versions)
  * HEGS2, HEGST (with batched and strided_batched versions)
* Symmetric and hermitian matrix eigensolvers:
  * SYEV (with batched and strided_batched versions)
  * HEEV (with batched and strided_batched versions)
* Generalized symmetric- and hermitian-definite eigensolvers:
  * SYGV (with batched and strided_batched versions)
  * HEGV (with batched and strided_batched versions)
* Argument names for the benchmark client now match argument names from the public API
**Fixed Issues**
* Known issues with Thin-SVD. The problem was identified in the test specification, not in the thin-SVD implementation or the rocBLAS gemm_batched routines.
* The benchmark client no longer crashes when leading dimension or stride arguments are not provided on the command line.
**Optimizations**
* Improved general performance of matrix inversion (GETRI)
For more information, refer to
@@ -351,116 +527,213 @@ https://rocsolver.readthedocs.io/en/latest/
### rocSPARSE
**Enhancements**
* (batched) tridiagonal solver with and without pivoting
* dense matrix sparse vector multiplication (gemvi)
* support for gfx90a
* sampled dense-dense matrix multiplication (sddmm)
* SpMM (CSR, COO)
* Code coverage analysis
**Improvements**
* client matrix download mechanism
* boost dependency in clients removed
For more information, refer to
https://rocsparse.readthedocs.io/en/latest/usermanual.html#rocsparse-gebsrmv
### hipSPARSE
**Enhancements**
* Generic API support, including SpMM (CSR, COO)
* csru2csr, csr2csru
### hipBLAS
**Enhancements**
* Added *hipblasStatusToString* (see the sketch below)
**Fixed**
* Added catch() blocks around API calls to prevent the leak of C++ exceptions
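A hedged sketch using the new helper (the header name and the C-string return type are assumptions; check hipblas.h):
```
// Minimal sketch: convert a hipBLAS status code to readable text.
#include <hipblas.h>
#include <cstdio>

int main() {
    hipblasHandle_t handle = nullptr;
    hipblasStatus_t status = hipblasCreate(&handle);
    if (status != HIPBLAS_STATUS_SUCCESS) {
        fprintf(stderr, "hipBLAS error: %s\n", hipblasStatusToString(status));
        return 1;
    }
    hipblasDestroy(handle);
    return 0;
}
```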
### rocFFT
**Changes**
* Re-split device code into single-precision, double-precision, and miscellaneous kernels.
**Fixed Issues**
* double-precision planar->planar transpose.
* 3D transforms with unusual strides, for SBCC-optimized sizes.
* Improved buffer placement logic.
For more information, refer to
https://rocsparse.readthedocs.io/en/latest/usermanual.html#types
https://rocfft.readthedocs.io/en/rocm-4.3.0/
### hipFFT
**Fixed Issues**
* CMAKE updates
* Added callback API in hipfftXt.h header.
### rocALUTION
**Enhancements**
* Support for gfx90a target
* Support for gfx1030 target
**Improvements**
* Install script
For more information, refer to
### rocTHRUST
**Enhancements**
* Updated to match upstream Thrust 1.11
* gfx90a support added
* gfx803 support re-enabled
### hipCUB
**Enhancements**
* DiscardOutputIterator to backend header
# Fixed Defects
## Performance Impact for LDS-BOUND Kernels
The following issue is fixed in the ROCm v4.2 release.
The compiler in ROCm v4.1 generates LDS load and store instructions that incorrectly assume equal performance between aligned and misaligned accesses. While this does not impact code correctness, it may result in sub-optimal performance.
## ROCProfiler Enhancements
### Tracing Multiple MPI Ranks
When tracing multiple MPI ranks in ROCm v4.3, users must use the form:
```
mpirun ... <mpi args> ... rocprof ... <rocprof args> ... application ... <application args>
```
**NOTE**: This feature differs from ROCm v4.2 (and lower), which used "rocprof ... mpirun ... application".
This change was made to enable ROCProfiler to better handle process forking and launching via mpirun (and related) executables.
From a user perspective, this new execution mode requires:
1. Generation of trace data per MPI (or process) rank.
2. Use of a new ["merge_traces.sh" utility script](https://github.com/ROCm-Developer-Tools/rocprofiler/blob/rocm-4.3.x/bin/merge_traces.sh) to combine traces from multiple processes into a unified trace for profiling.
For example, to accomplish step #1, ROCm provides a simple bash wrapper that demonstrates how to generate a unique output directory per process:
```
$ cat wrapper.sh
#! /usr/bin/env bash
if [[ -n ${OMPI_COMM_WORLD_RANK+z} ]]; then
    # ompi
    export MPI_RANK=${OMPI_COMM_WORLD_RANK}
elif [[ -n ${MV2_COMM_WORLD_RANK+z} ]]; then
    # mpich
    export MPI_RANK=${MV2_COMM_WORLD_RANK}
fi
args="$*"
pid="$$"
outdir="rank_${pid}_${MPI_RANK}"
outfile="results_${pid}_${MPI_RANK}.csv"
eval "rocprof -d ${outdir} -o ${outdir}/${outfile} ${args}"
```
This script:
* Determines the global MPI rank (implemented here for OpenMPI and MPICH only)
* Determines the process id of the MPI rank
* Generates a unique output directory using the two
To invoke this wrapper, use the following command:
```
mpirun <mpi args> ./wrapper.sh --hip-trace <application> <args>
```
This generates an output directory for each used MPI rank. For example,
```
$ ls -ld rank_* | awk {'print $5" "$9'}
4096 rank_513555_0
4096 rank_513556_1
```
Finally, these traces may be combined using the [merge traces script](https://github.com/ROCm-Developer-Tools/rocprofiler/blob/rocm-4.3.x/bin/merge_traces.sh). For example,
```
$ ./merge_traces.sh -h
Script for aggregating results from multiple rocprofiler out directries.
Full path: /opt/rocm/bin/merge_traces.sh
Usage:
merge_traces.sh -o <outputdir> [<inputdir>...]
```
Use the following input arguments to the merge_traces.sh script to control which traces are merged and where the resulting merged trace is saved.
* -o <*outputdir*> - output directory where the results are aggregated.
* <*inputdir*>... - space-separated list of rocprofiler directories. If not specified, CWD is used.
For example, if an output directory named "unified" was supplied to the `merge_traces.sh` script, the file 'unified/results.json' will be generated, which contains trace data from both MPI ranks.
**Known issue for ROCProfiler**
Collecting several counter collection passes (multiple "pmc:" lines in a counter input file) is not supported in a single run.
The workaround is to break the multi-line counter input file into multiple single-line counter input files and execute separate runs.
# Known Issues in This Release
The following are the known issues in this release.
## Upgrade to AMD ROCm v4.2 Not Supported
## Upgrade to AMD ROCm v4.3 Not Supported
An upgrade from previous releases to AMD ROCm v4.2 is not supported. Complete uninstallation of previous ROCm versions is required before installing a new version of ROCm.
The hip-base package has a dependency on Perl modules that some operating systems may not have in their default package repositories. Use the following commands to add repositories that have the required Perl packages:
#### For SLES 15 SP2
sudo zypper addrepo https://download.opensuse.org/repositories/devel:languages:perl/SLE_15/devel:languages:perl.repo
#### For CentOS 8.3
sudo yum config-manager --set-enabled powertools
#### For RHEL 8.3
sudo subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
## _LAUNCH BOUNDS_ Ignored During Kernel Launch
The HIP runtime returns the hipErrorLaunchFailure error code when an application tries to launch a kernel with a block size larger than the launch bounds specified at compile time. If no launch bounds were specified at compile time, the default value of 1024 is assumed. Refer to the HIP trace for more information about the failing kernel. A sample error in the trace is shown below:
Snippet of the HIP trace:
```
:3:devprogram.cpp :2504: 2227377746776 us: Using Code Object V4.
:3:hip_module.cpp :361 : 2227377768546 us: 7670 : [7f7c6eddd180] ihipModuleLaunchKernel ( 0x0x16fe080, 2048, 1, 1, 1024, 1, 1, 0, stream:<null>, 0x7ffded8ad260, char array:<null>, event:0, event:0, 0, 0 )
:1:hip_module.cpp :254 : 2227377768572 us: Launch params (1024, 1, 1) are larger than launch bounds (64) for kernel _Z8MyKerneliPd
:3:hip_platform.cpp :667 : 2227377768577 us: 7670 : [7f7c6eddd180] ihipLaunchKernel: Returned hipErrorLaunchFailure :
:3:hip_module.cpp :493 : 2227377768581 us: 7670 : [7f7c6eddd180] hipLaunchKernel: Returned hipErrorLaunchFailure :
```
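A hedged sketch reproducing the condition in the trace above (the kernel signature matches the mangled name _Z8MyKerneliPd; sizes are illustrative):
```
#include <hip/hip_runtime.h>

// Compiled with launch bounds of 64, as in the trace above.
__global__ void __launch_bounds__(64) MyKernel(int n, double* x) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = 0.0;
}

int main() {
    const int n = 2048 * 1024;
    double* x = nullptr;
    hipMalloc(reinterpret_cast<void**>(&x), n * sizeof(double));

    // OK: block size 64 is within the launch bounds.
    hipLaunchKernelGGL(MyKernel, dim3(n / 64), dim3(64), 0, 0, n, x);

    // Fails with hipErrorLaunchFailure: block size 1024 exceeds the bounds of 64.
    hipLaunchKernelGGL(MyKernel, dim3(n / 1024), dim3(1024), 0, 0, n, x);

    hipDeviceSynchronize();
    hipFree(x);
    return 0;
}
```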
## Modulefile Fails to Install Automatically in ROCm Multi-Version Environment
The ROCm v4.2 release includes a preliminary implementation of environment modules to enable switching between multiple versions of a ROCm installation. The modulefile in */opt/rocm-4.2/lib/rocmmod* fails to install automatically in the ROCm multi-version environment.
This is a known limitation for environment modules in ROCm, and the issue is under investigation at this time.
**Workaround**
Ensure you install the modulefile in */opt/rocm-4.2/lib/rocmmod* manually in a multi-version installation environment.
For general information about modules, see
http://modules.sourceforge.net/
## Issue with Input/Output Types for Scan Algorithms in rocThrust
As rocThrust is updated to match CUDA Thrust 1.10, the different input/output types for scan algorithms in rocThrust/CUDA Thrust are no longer officially supported. In this situation, the current C++ standard does not specify the intermediate accumulator type leading to potentially incorrect results and ill-defined behavior.
As a workaround, users can:
* Use the same types for input and output
Or
* For exclusive_scan, explicitly specify an *InitialValueType* in the last argument (see the sketch after this list)
Or
* For inclusive_scan, which does not have an initial value argument, use a transform_iterator to explicitly cast the input iterators to match the output's value_type
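A hedged sketch of the exclusive_scan workaround mentioned above (types are illustrative):
```
// Minimal sketch: the explicit initial value pins the accumulator type to
// the output type (long long), avoiding the ill-defined mixed-type case.
#include <thrust/device_vector.h>
#include <thrust/scan.h>

int main() {
    thrust::device_vector<int> in(4, 1);
    thrust::device_vector<long long> out(4);

    // The initial value's type determines the intermediate accumulator type.
    thrust::exclusive_scan(in.begin(), in.end(), out.begin(), 0LL);
    return 0;
}
```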
## Precision Issue in AMD RADEON™ PRO VII and AMD RADEON™ VII
In AMD Radeon™ Pro VII and AMD Radeon™ VII, a precision issue can occur when using the TensorFlow XLA path.
This issue is currently under investigation.
# Deprecations
This section describes deprecations and removals in AMD ROCm.
## Compiler Generated Code Object Version 2 Deprecation
Compiler-generated code object version 2 is no longer supported and has been completely removed. Support for loading code object version 2 is also deprecated with no announced removal release.
There is no known workaround at this time.
## PYCACHE Folder Exists After ROCM SMI Library Uninstallation
Users may observe that the /opt/rocm-x/bin/__pycache__ folder continues to exist even after the rocm_smi_lib uninstallation.
Workaround: Delete the /opt/rocm-x/bin/__pycache__ folder manually before uninstalling rocm_smi_lib.
# Deploying ROCm

Binary files changed (diffs not shown).

default.xml

@@ -12,7 +12,7 @@ fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
fetch="https://github.com/GPUOpen-Tools/" />
<remote name="KhronosGroup"
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-4.2.0"
<default revision="refs/tags/rocm-4.3.1"
remote="roc-github"
sync-c="true"
sync-j="4" />
@@ -20,7 +20,6 @@ fetch="https://github.com/KhronosGroup/" />
<project name="ROCK-Kernel-Driver" />
<project name="ROCT-Thunk-Interface" />
<project name="ROCR-Runtime" />
<project name="ROC-smi" revision="refs/tags/rocm-4.1.0" />
<project name="rocm_smi_lib" />
<project name="rocm-cmake" />
<project name="rocminfo" />

Binary image files changed (diffs not shown): several images removed (3.8 KiB to 221 KiB). New image files:

- images/OSKernelupdated.PNG (13 KiB)
- images/SuppEnv.PNG (7.8 KiB)
images/test.rst (new file, 1 line)

@@ -0,0 +1 @@