Compare commits

...

125 Commits

Author SHA1 Message Date
Lad, Aditya
2b2bab5bf3 ROCm 4.1.1 default.xml 2021-04-08 09:59:11 -07:00
Roopa Malavally
5ec9b12f99 Update README.md 2021-04-08 09:27:07 -07:00
Roopa Malavally
803148affd Update README.md 2021-04-08 09:21:27 -07:00
Roopa Malavally
9275fb6298 Update README.md 2021-04-08 09:19:52 -07:00
Roopa Malavally
b6ae3f145e Update README.md 2021-04-07 11:06:04 -07:00
Roopa Malavally
f80eefc965 Update README.md 2021-04-07 11:04:51 -07:00
Roopa Malavally
c5d91843a7 Update README.md 2021-04-07 11:03:31 -07:00
Roopa Malavally
733a9c097c Update README.md 2021-04-07 07:15:49 -07:00
Roopa Malavally
ff2b3f8a23 Add files via upload 2021-03-26 12:14:59 -07:00
Roopa Malavally
5a4cf1cee1 Delete AMD_ROCm_Release_Notes_v4.1.docx 2021-03-26 12:14:46 -07:00
Roopa Malavally
dccf5ca356 Update README.md 2021-03-26 12:01:54 -07:00
Roopa Malavally
8b20bd56a6 Update README.md 2021-03-26 10:00:07 -07:00
zhang2amd
65cb10e5e8 Merge pull request #1427 from xuhuisheng/patch-1
add hipFFT to default.xml
2021-03-25 23:03:26 -07:00
Roopa Malavally
ac2625dd26 Delete AMD_ROCm_Release_Notes_v4.1.pdf 2021-03-25 15:55:22 -07:00
Roopa Malavally
3716310e93 Add files via upload 2021-03-25 15:55:04 -07:00
Roopa Malavally
2dee17f7d6 Add files via upload 2021-03-25 13:03:33 -07:00
Roopa Malavally
61e8b0d70e Delete AMD_ROCm_Release_Notes_v4.1.pdf 2021-03-25 13:03:20 -07:00
Roopa Malavally
8a3304a8d9 Update README.md 2021-03-25 11:45:08 -07:00
Roopa Malavally
55488a9424 Update README.md 2021-03-25 11:03:19 -07:00
Roopa Malavally
ff4a1d4059 Update README.md 2021-03-25 10:03:46 -07:00
Xu Huisheng
4b2d93fb7e add hipFFT to default.xml
There is hipFFT on <http://repo.radeon.com/rocm/apt/4.1/pool/main/h/hipfft/>.
Please add related repository in default.xml.
Thank you.
2021-03-25 19:41:05 +08:00
Roopa Malavally
061ccd21b8 Update README.md 2021-03-24 10:26:07 -07:00
Roopa Malavally
0ed1bd9f8e Add files via upload 2021-03-24 10:25:24 -07:00
Roopa Malavally
856c74de55 Update README.md 2021-03-24 07:59:03 -07:00
Roopa Malavally
12c6f60e45 Update README.md 2021-03-24 07:58:30 -07:00
Aditya Lad
897b1e8e2d Merge pull request #1422 from RadeonOpenCompute/roc-4.1.x
Roc 4.1.x
2021-03-23 17:59:19 -07:00
Lad, Aditya
382ea7553f Remove inaccessible repos 2021-03-23 17:56:10 -07:00
Aditya Lad
2014b47dcb Merge pull request #1420 from RadeonOpenCompute/master
Addition of ROCm release notes
2021-03-23 17:29:17 -07:00
zhang2amd
b9f9bafd9b Merge pull request #1419 from RadeonOpenCompute/roc-4.1.x
ROCm 4.1 Release
2021-03-23 17:17:00 -07:00
Lad, Aditya
ff15f420c6 ROCm 4.1 default.xml edit 2021-03-23 17:10:44 -07:00
Lad, Aditya
f51c9be952 Release ROCm 4.1 Readme.md and default.xml 2021-03-23 17:03:00 -07:00
Lad, Aditya
64e254dc99 Release ROCm 4.1 Readme.md and default.xml 2021-03-23 17:01:33 -07:00
Roopa Malavally
af7f921474 Add files via upload 2021-03-23 17:00:17 -07:00
Roopa Malavally
8b3377749f Add files via upload 2021-03-23 14:16:46 -07:00
Roopa Malavally
c3a3ce55d1 Delete gdb.pdf 2021-03-22 17:29:33 -07:00
Roopa Malavally
64c727449b Delete amd-dbgapi.pdf 2021-03-22 17:29:24 -07:00
Roopa Malavally
182dfc65cf Add files via upload 2021-03-22 16:43:36 -07:00
Roopa Malavally
d529d5c585 Delete AMD_ROCm_Release_Notes_v4.0.pdf 2021-03-22 16:29:11 -07:00
Roopa Malavally
cca6bc4921 Delete HIP_Programming_Guide_v4.0.pdf 2021-03-22 16:28:56 -07:00
Roopa Malavally
e3dbbb6bbf Add files via upload 2021-03-22 16:27:41 -07:00
Roopa Malavally
6e39c80762 Add files via upload 2021-03-22 16:17:38 -07:00
Roopa Malavally
f96f5df625 Add files via upload 2021-03-22 16:07:44 -07:00
Roopa Malavally
0639a312c8 Delete ROCm_Data_Center_Too_API_Manual_4.1.pdf 2021-03-22 16:07:03 -07:00
Roopa Malavally
a2878b1460 Add files via upload 2021-03-22 15:38:16 -07:00
Roopa Malavally
1daf261d25 Delete ROCm_SMI_API_Guide_v4.0.pdf 2021-03-22 15:37:54 -07:00
Roopa Malavally
5848bc3d7e Add files via upload 2021-03-22 15:37:15 -07:00
Roopa Malavally
d9692359ad Delete HIP-API_Guide_v4.0.pdf 2021-03-22 15:36:42 -07:00
Roopa Malavally
25110784cf Add files via upload 2021-03-22 14:53:33 -07:00
Roopa Malavally
9ff31d316f Update README.md 2021-03-10 07:53:11 -08:00
Roopa Malavally
b072119ad6 Update README.md 2021-03-09 09:03:05 -08:00
Roopa Malavally
095544032c Update README.md 2021-02-25 07:28:52 -08:00
Roopa Malavally
26a39a637a Update README.md 2021-02-25 07:24:46 -08:00
Roopa Malavally
6fb55e6f45 Update README.md 2021-02-24 13:16:33 -08:00
Lad, Aditya
290091946f ROCm 4.0.1 Manifest file 2021-01-25 15:11:55 -08:00
Roopa Malavally
2874a8ae6c Update README.md 2021-01-25 15:02:27 -08:00
Roopa Malavally
f62f2b24da Add files via upload 2021-01-20 18:10:40 -08:00
Roopa Malavally
790567e3bd Update README.md 2020-12-18 15:08:54 -08:00
Roopa Malavally
57d7a202d4 Update README.md 2020-12-18 15:08:24 -08:00
Aditya Lad
80d2aa739b Merge pull request #1343 from RadeonOpenCompute/roc-4.0.x
ROCm 4.0 Release
2020-12-18 14:30:27 -08:00
Roopa Malavally
b18851f804 Update README.md 2020-12-18 13:12:20 -08:00
Roopa Malavally
0f0dbf0c92 Update README.md 2020-12-18 13:11:59 -08:00
Lad, Aditya
224a45379f ROCm 4.0 Release 2020-12-18 12:53:33 -08:00
Roopa Malavally
f521943747 Update README.md 2020-12-18 12:52:04 -08:00
Roopa Malavally
2b7f806b10 AMD ROCm Release Notes v4.0 (#1342)
* Update README.md

* Update README.md

* Add files via upload

* Delete AMD_ROCm_Release_Notes_v3.10.pdf

* Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf

* Delete ROCm_Data_Center_API_Guide.pdf

* Delete ROCm_SMI_API_Guide_v3.10.pdf

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md
2020-12-18 12:46:40 -08:00
Roopa Malavally
cd55ef67c9 Add files via upload 2020-12-18 12:32:43 -08:00
Roopa Malavally
9320669eee Delete AMD_ROCm_Release_Notes_v3.10.pdf 2020-12-18 08:19:51 -08:00
Roopa Malavally
c1211c66e3 Delete ROCm_SMI_API_Guide_v3.10.pdf 2020-12-18 08:19:36 -08:00
Roopa Malavally
c8fcff6488 Delete ROCm_Data_Center_API_Guide.pdf 2020-12-18 08:19:18 -08:00
Roopa Malavally
7118076ab4 Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf 2020-12-18 08:18:58 -08:00
Roopa Malavally
ec5523395a Add files via upload 2020-12-17 21:00:59 -08:00
Roopa Malavally
41d8f6a235 Add files via upload 2020-12-17 14:00:59 -08:00
Roopa Malavally
c69eef858a Update README.md 2020-12-10 13:38:07 -08:00
Aditya Lad
5b902ca38c Merge pull request #1316 from RadeonOpenCompute/roc-3.10.x
add rdc and half
2020-12-02 16:11:11 -08:00
Aditya Lad
68c5c198df add rdc and half 2020-12-02 16:07:15 -08:00
Aditya Lad
761ed4e70f Merge pull request #1314 from RadeonOpenCompute/roc-3.10.x
3.10 : Manifest Files
2020-12-01 16:31:55 -08:00
Lad, Aditya
8d5a160f0a 3.10 : Manifest Files 2020-12-01 16:24:12 -08:00
Roopa Malavally
f61c2ad155 Add files via upload 2020-12-01 15:45:33 -08:00
Roopa Malavally
3e2e30cc9a Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf 2020-12-01 15:44:56 -08:00
Roopa Malavally
a1f3b4e6b8 Update README.md 2020-12-01 15:08:53 -08:00
Roopa Malavally
7a3a012e6a Update README.md 2020-11-30 15:45:42 -08:00
Roopa Malavally
5b6ab31db3 Update README.md 2020-11-30 14:12:01 -08:00
Roopa Malavally
acabe2c532 Update README.md 2020-11-30 14:10:06 -08:00
Roopa Malavally
39d8bcd504 Release notes for v3.10 (#1312)
* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Delete matrix.png

* Delete ROCMCLI3.PNG

* Delete ROCMCLI2.PNG

* Delete ROCMCLI1.PNG

* Delete GEMM2.PNG

* Add files via upload

* Delete ROCm_SMI_Manual_v3.9.pdf

* Delete AMD_ROCm_Release_Notes_v3.9.pdf

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md
2020-11-30 14:07:52 -08:00
Roopa Malavally
af6d1e9b26 Add files via upload 2020-11-30 14:01:36 -08:00
Roopa Malavally
1fa1d4a935 Add files via upload 2020-11-30 09:53:49 -08:00
Roopa Malavally
03d93c1948 Delete AMD_ROCm_Release_Notes_v3.9.pdf 2020-11-30 08:55:35 -08:00
Roopa Malavally
93984b0956 Add files via upload 2020-11-30 08:54:52 -08:00
Roopa Malavally
6ccb1cfc0f Add files via upload 2020-11-30 07:29:29 -08:00
Roopa Malavally
f054f82173 Delete ROCm_SMI_Manual_v3.9.pdf 2020-11-30 07:28:11 -08:00
Xu Huisheng
bb6756b58d remove dumplicated remote=roc-github (#1248) 2020-11-18 08:19:23 -08:00
Roopa Malavally
d957b8a17c Update README.md 2020-11-12 13:47:48 -08:00
Roopa Malavally
37ece61861 Update README.md 2020-11-11 14:16:48 -08:00
Roopa Malavally
434023f31b Update README.md 2020-11-03 07:45:53 -08:00
Aditya Lad
a555260687 Merge pull request #1268 from RadeonOpenCompute/roc-3.9.x
Roc 3.9.x
2020-10-28 17:39:17 -07:00
Lad, Aditya
bf89c6bbf1 3.9 documentation 2020-10-28 15:32:49 -07:00
Lad, Aditya
bd4b772255 ROCm 3.9 default.xml 2020-10-28 15:22:02 -07:00
Lad, Aditya
e99027c39c ROCm 3.9 : Manifest files 2020-10-28 15:14:41 -07:00
Roopa Malavally
93c69afb5b Add files via upload 2020-10-28 14:54:54 -07:00
Roopa Malavally
bc2ce5c35b Delete staticlinkinglib.PNG 2020-10-28 14:52:02 -07:00
Roopa Malavally
bf633aec6b Delete forweb.PNG 2020-10-28 14:51:49 -07:00
Roopa Malavally
8608a9a1c9 Delete RDCComponentsrevised.png 2020-10-28 14:51:33 -07:00
Roopa Malavally
76afb05b6c Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf 2020-10-28 14:51:19 -07:00
Roopa Malavally
8bc67a21ea Update README.md 2020-10-19 20:23:07 -07:00
Roopa Malavally
1ce148edb1 Update README.md 2020-10-19 20:21:08 -07:00
Roopa Malavally
cc6147c25b Update README.md 2020-10-19 20:20:20 -07:00
Roopa Malavally
aadd9e68e1 Update README.md 2020-10-19 20:17:34 -07:00
Roopa Malavally
dce5aee2dc Add files via upload 2020-10-19 19:34:27 -07:00
Aditya Lad
0bcae510a3 Merge pull request #1244 from RadeonOpenCompute/roc-3.8.x
Remove MiGraphX from 3.8
2020-09-25 10:06:57 -07:00
Lad, Aditya
86a09b146b Remove MiGraphX from 3.8 2020-09-25 10:05:32 -07:00
Roopa Malavally
506cdcf6db Update README.md 2020-09-25 08:06:49 -07:00
Roopa Malavally
a919ba64c9 Update README.md 2020-09-25 08:00:10 -07:00
Roopa Malavally
fae25ccf9b Update README.md 2020-09-22 16:52:31 -07:00
Lad, Aditya
d1f9aa98a3 hipfort addition to 3.8 2020-09-22 11:38:23 -07:00
Lad, Aditya
42fa0e0765 Remove version_history.md file. Since we are currently maintaining it on external documentation. 2020-09-21 16:04:25 -07:00
Lad, Aditya
e89903ed3a ROCm release 3.8 2020-09-21 15:58:09 -07:00
Roopa Malavally
ba2e1f0109 ROCm v3.8 Release Notes (#1226)
* Update README.md

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Delete staticlinkinglib.PNG

* Add files via upload

* Delete staticlinkinglib.PNG

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Delete AMD_ROCm_Release_Notes_v3.7.pdf

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Add files via upload

* Add files via upload

* Add files via upload

* Update README.md
2020-09-21 15:47:24 -07:00
Roopa Malavally
a1830b5330 Add files via upload 2020-09-17 12:23:13 -07:00
Roopa Malavally
0c596d155a Update README.md 2020-09-07 10:46:57 -07:00
Roopa Malavally
75c0d668d9 Update README.md 2020-09-02 06:13:56 -07:00
Roopa Malavally
49bd50c858 Update README.md 2020-09-02 06:13:23 -07:00
Roopa Malavally
a54214d05d Update README.md 2020-09-02 06:12:10 -07:00
Roopa Malavally
2524166765 Update README.md 2020-08-23 18:33:23 -07:00
Roopa Malavally
abc65687d4 Add files via upload 2020-08-23 09:44:46 -07:00
Roopa Malavally
0fddb14b8f Delete AMD_ROCm_Release_Notes_v3.7.pdf 2020-08-23 09:44:30 -07:00
Roopa Malavally
3909efb389 Update README.md 2020-08-23 09:34:53 -07:00
23 changed files with 56730 additions and 732 deletions


AMD_HIP_API_Guide_v4.1.pdf (new file)

File diff suppressed because it is too large.


README.md

@@ -1,22 +1,68 @@
# AMD ROCm™ v4.1.1 Patch Release Notes
# AMD ROCm Release Notes v3.7.0
The ROCm v4.1.1 release consists of the following updates:
This page describes the features, fixed issues, and information about downloading and installing the ROCm software.
It also covers known issues and deprecated features in this release.
* Changed Environment Variables for HIP
* Updated HIP Instructions for ROCm Installation
## Changed Environment Variables for HIP
In the ROCm v3.5 release, the Heterogeneous Compute Compiler (HCC) compiler was deprecated, and the HIP-Clang compiler was introduced for compiling Heterogeneous-Compute Interface for Portability (HIP) programs. In addition, the HIP runtime API was implemented on top of Radeon Open Compute Common Language Runtime (ROCclr). ROCclr is an abstraction layer that provides the ability to interact with different runtime backends such as ROCr.
While the *HIP_PLATFORM=hcc* environment variable was functional in subsequent releases, in the ROCm v4.1 release, the following environment variables were changed:
* *HIP_PLATFORM=hcc to HIP_PLATFORM=amd*
* *HIP_PLATFORM=nvcc to HIP_PLATFORM=nvidia*
Therefore, any applications continuing to use the *HIP_PLATFORM=hcc* variable will fail. You must update the environment variables to reflect the changes as mentioned above.
## Updated HIP Instructions for ROCm Installation
The hip-base package has a dependency on Perl modules that some operating systems may not have in their default package repositories. Use the following commands to add repositories that have the required Perl packages:
* For SLES 15 SP2
sudo zypper addrepo
https://download.opensuse.org/repositories/devel:languages:perl/SLE_15/devel:languages:perl.repo
* For CentOS8.3
sudo yum config-manager --set-enabled powertools
* For RHEL8.3
sudo subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
# AMD ROCm™ v4.1 Release Notes
This document describes the features, fixed issues, and information about downloading and installing the AMD ROCm™ software. It also covers known issues and deprecations in the AMD ROCm v4.1 release.
- [Supported Operating Systems and Documentation Updates](#Supported-Operating-Systems-and-Documentation-Updates)
* [Supported Operating Systems](#Supported-Operating-Systems)
* [ROCm Installation Updates](#ROCm-Installation-Updates)
* [AMD ROCm Documentation Updates](#AMD-ROCm-Documentation-Updates)
- [Driver Compatibility Issue in This Release](#Driver-Compatibility-Issue-in-This-Release)
- [What's New in This Release](#Whats-New-in-This-Release)
* [AOMP Enhancements](#AOMP-Enhancements)
* [Compatibility with NVIDIA Communications Collective Library v2\.7 API](#Compatibility-with-NVIDIA-Communications-Collective-Library-v27-API)
* [Singular Value Decomposition of Bi\-diagonal Matrices](#Singular-Value-Decomposition-of-Bi-diagonal-Matrices)
* [rocSPARSE_gemmi\() Operations for Sparse Matrices](#rocSPARSE_gemmi-Operations-for-Sparse-Matrices)
* [TargetID for Multiple Configurations](#TargetID-for-Multiple-Configurations)
* [ROCm Data Center Tool](#ROCm-Data-Center-Tool)
* [ROCm Math and Communication Libraries](#ROCm-Math-and-Communication-Libraries)
* [HIP Enhancements](#HIP-Enhancements)
* [OpenMP Enhancements and Fixes](#OpenMP-Enhancements-and-Fixes)
* [MIOpen Tensile Integration](#MIOpen-Tensile-Integration)
- [Known Issues](#Known-Issues)
- [Deprecations](#Deprecations)
* [Compiler Generated Code Object Version 2 Deprecation ](#Compiler-Generated-Code-Object-Version-2-Deprecation)
- [Deploying ROCm](#Deploying-ROCm)
- [Hardware and Software Support](#Hardware-and-Software-Support)
@@ -27,30 +73,119 @@ It also covers known issues and deprecated features in this release.
# Supported Operating Systems
## Support for Ubuntu 20.04
## ROCm Installation Updates
### List of Supported Operating Systems
In this release, AMD ROCm extends support to Ubuntu 20.04, including dual-kernel.
The AMD ROCm platform is designed to support the following operating systems:
## List of Supported Operating Systems
* Ubuntu 20.04.1 (5.4 and 5.6-oem) and 18.04.5 (Kernel 5.4)
* CentOS 7.9 (3.10.0-1127) & RHEL 7.9 (3.10.0-1160.6.1.el7) (Using devtoolset-7 runtime support)
* CentOS 8.3 (4.18.0-193.el8) and RHEL 8.3 (4.18.0-193.1.1.el8) (devtoolset is not required)
* SLES 15 SP2
The AMD ROCm v3.7.x platform is designed to support the following operating systems:
### Fresh Installation of AMD ROCm v4.1 Recommended
* Ubuntu 20.04 and 18.04.4 (Kernel 5.3)
* CentOS 7.8 & RHEL 7.8 (Kernel 3.10.0-1127) (Using devtoolset-7 runtime support)
* CentOS 8.2 & RHEL 8.2 (Kernel 4.18.0 ) (devtoolset is not required)
* SLES 15 SP1
A complete uninstallation of previous ROCm versions is required before installing a new version of ROCm. An upgrade from previous releases to AMD ROCm v4.1 is not supported. For more information, refer to the AMD ROCm Installation Guide at
## Fresh Installation of AMD ROCm v3.7 Recommended
A fresh and clean installation of AMD ROCm v3.7 is recommended. An upgrade from previous releases to AMD ROCm v3.7 is not supported.
For more information, refer to the AMD ROCm Installation Guide at:
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
**Note**: AMD ROCm release v3.3 or prior releases are not fully compatible with AMD ROCm v3.5 and higher versions. You must perform a fresh ROCm installation if you want to upgrade from AMD ROCm v3.3 or older to 3.5 or higher versions and vice-versa.
**Note**: The *render* group is required only for Ubuntu v20.04. For all other ROCm-supported operating systems, continue to use the *video* group.
* For ROCm v3.5 and releases thereafter, the clinfo path is changed to /opt/rocm/opencl/bin/clinfo.
* For ROCm v3.3 and older releases, the clinfo path remains /opt/rocm/opencl/bin/x86_64/clinfo.
### ROCm Multi-Version Installation Update
With the AMD ROCm v4.1 release, the following ROCm multi-version installation changes apply:
* The meta packages rocm-dkms<version> are now deprecated for multi-version ROCm installs. For example, rocm-dkms3.7.0, rocm-dkms3.8.0.
* Multi-version installation of ROCm should be performed by installing rocm-dev<version> using each of the desired ROCm versions. For example, rocm-dev3.7.0, rocm-dev3.8.0, rocm-dev3.9.0.
* Version files must be created for each multi-version rocm <= 4.1.0
* Command: echo <version> | sudo tee /opt/rocm-<version>/.info/version
* Example: echo 4.1.0 | sudo tee /opt/rocm-4.1.0/.info/version
* The rock-dkms loadable kernel modules should be installed using a single rock-dkms package.
* ROCm v3.9 and above will not set any ldconfig entries for ROCm libraries for multi-version installation. Users must set LD_LIBRARY_PATH to load the ROCm library version of choice.
**NOTE**: The single version installation of the ROCm stack remains the same. The rocm-dkms package can be used for single version installs and is not deprecated at this time.
### Updated HIP Instructions for ROCm Installation
The hip-base package has a dependency on Perl modules that some operating systems may not have in their default package repositories. Use the following commands to add repositories that have the required Perl packages:
#### For SLES 15 SP2
sudo zypper addrepo
For more information, see
https://download.opensuse.org/repositories/devel:languages:perl/SLE_15/devel:languages:perl.repo
#### For CentOS8.3
sudo yum config-manager --set-enabled powertools
#### For RHEL8.3
sudo subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
# Driver Compatibility Issue in This Release
In certain scenarios, the ROCm 4.1 run-time and userspace environment are not compatible with ROCm v4.0 and older driver implementations for 7nm-based (Vega 20) hardware (MI50 and MI60).
To mitigate issues, the ROCm v4.1 or newer userspace prevents running older drivers for these GPUs.
Users are notified in the following scenarios:
* Bare Metal
* Containers
## Bare Metal
In the bare-metal environment, the following error message displays in the console:
*“HSA Error: Incompatible kernel and userspace, Vega 20 disabled. Upgrade amdgpu.”*
To test the compatibility, run the ROCm v4.1 version of rocminfo using the following instruction:
*/opt/rocm-4.1.0/bin/rocminfo 2>&1 | less*
## Containers
When a container built with error detection for this issue and using a ROCm v4.1 or newer runtime is started on an older kernel, the container fails to start and the following warning appears:
*Error: Incompatible ROCm environment. The Docker container requires the latest kernel driver to operate correctly.
Upgrade the ROCm kernel to v4.1 or newer, or use a container tagged for v4.0.1 or older.*
To inspect the version of the installed kernel driver, run either:
* dpkg --status rock-dkms [Debian-based]
or
* rpm -ql rock-dkms [RHEL, SUSE, and others]
To install or update the driver, follow the installation instructions at:
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
# AMD ROCm Documentation Updates
@@ -58,38 +193,78 @@ https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
The AMD ROCm Installation Guide in this release includes:
* Updated Supported Environments
* HIP Installation Instructions
* Supported Environments
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
* Installation Instructions for v4.1
* HIP Installation Instructions
For more information, refer to the ROCm documentation website at:
https://rocmdocs.amd.com/en/latest/
## AMD ROCm - HIP Documentation Updates
### Texture and Surface Functions
The documentation for Texture and Surface functions is updated and available at:
* HIP Programming Guide v4.1
https://rocmdocs.amd.com/en/latest/Programming_Guides/Kernel_language.html
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.1.pdf
### Warp Shuffle Functions
The documentation for Warp Shuffle functions is updated and available at:
* HIP API Guide v4.1
https://rocmdocs.amd.com/en/latest/Programming_Guides/Kernel_language.html
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_API_Guide_v4.1.pdf
### Compiler Defines and Environment Variables
The documentation for the updated HIP Porting Guide is available at:
* HIP-Supported CUDA API Reference Guide v4.1
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-porting-guide.html#hip-porting-guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/HIP_Supported_CUDA_API_Reference_Guide_v4.1.pdf
* HIP FAQ
For more information, refer to
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-FAQ.html#hip-faq
## AMD ROCm Debug Agent
## ROCm Data Center User and API Guide
ROCm Debug Agent Library
* ROCm Data Center Tool User Guide
https://rocmdocs.amd.com/en/latest/ROCm_Tools/rocm-debug-agent.html
- Grafana Plugin Integration
For more information, refer to the ROCm Data Center User Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.1.pdf
* ROCm Data Center Tool API Guide
For more information, refer to the ROCm Data Center API Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_Data_Center_Tool_API_Manual_4.1.pdf
## General AMD ROCm Documentation Links
## ROCm SMI API Documentation Updates
* ROCm SMI API Guide
For more information, refer to the ROCm SMI API Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_SMI_API_GUIDE_v4.1.pdf
## ROC Debugger User and API Guide
* ROC Debugger User Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/Debugging%20with%20ROCGDB%20User%20Guide%20v4.1.pdf
* Debugger API Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD-Debugger%20API%20Guide%20v4.1.pdf
## General AMD ROCm Documentation Links
Access the following links for more information:
@@ -103,242 +278,358 @@ Access the following links for more information:
* For AMD ROCm binary structure, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#build-amd-rocm
https://rocmdocs.amd.com/en/latest/Installation_Guide/Software-Stack-for-AMD-GPU.html
* For AMD ROCm Release History, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#amd-rocm-version-history
https://rocmdocs.amd.com/en/latest/Current_Release_Notes/ROCm-Version-History.html
# What's New in This Release
## AOMP Enhancements
## TargetID for Multiple Configurations
AOMP is a scripted build of LLVM. It supports OpenMP target offload on AMD GPUs. Since AOMP is a Clang/LLVM compiler, it also supports GPU offloading with HIP, CUDA, and OpenCL.
The new TargetID functionality allows compilations to specify various configurations of the supported hardware.
The following enhancements are made for AOMP in this release:
* OpenMP 5.0 is enabled by default. You can use -fopenmp-version=45 for OpenMP 4.5 compliance
* Restructured to include the ROCm compiler
* Bitcode search path using the HIP policy HIP_DEVICE_LIB_PATH and the hip-device-lib command line option to enable global_free for kmpc_impl_free
Previously, ROCm supported only a single configuration per target.
Restructured hostrpc, including:
* Replaced hostcall register functions with handlePayload(service, payload). Note, handlePayload has a simple switch to call the correct service handler function.
* Removed the WITH_HSA macro
* Moved the hostrpc stubs and host fallback functions into a single library and include file. This enables building the stubs as OpenMP C++ source instead of HIP and reorganizes the directory openmp/libomptarget/hostrpc.
* Moved hostrpc_invoke.cl to DeviceRTLs/amdgcn.
* Generalized the vargs processing in printf to work for any vargs function to execute on the host, including a vargs function that uses a function pointer.
* Reorganized files, added global_allocate and global_free.
* Fixed llvm TypeID enum to match the current upstream llvm TypeID.
* Moved strlen_max function inside the declare target #ifdef _DEVICE_GPU in hostrpc.cpp to resolve linker failure seen in pfspecifier_str smoke test.
* Fixed AOMP_GIT_CHECK_BRANCH in aomp_common_vars to not block builds in Red Hat if the repository is on a specific commit hash.
* Simplified and reduced the size of openmp host runtime
* Switched to default OpenMP 5.0
With the TargetID enhancement, ROCm supports configurations for Linux, PAL and associated configurations such as XNACK. This feature addresses configurations for the same target in different modes and allows applications to build executables that specify the supported configurations, including the option to be agnostic for the desired setting.
For more information, see https://github.com/ROCm-Developer-Tools/aomp
### New Code Object Format Version for TargetID
* A new clang option -mcode-object-version can be used to request the legacy code object version 3 or code object version 2. For more information, refer to
https://llvm.org/docs/AMDGPUUsage.html#elf-code-object
* A new clang --offload-arch= option is introduced to specify the offload target architecture(s) for the HIP language.
* The clang --offload-arch= and -mcpu options accept a new Target ID syntax. This allows both the processor and target feature settings to be specified. For more details, refer to
https://llvm.org/docs/AMDGPUUsage.html#amdgpu-target-id
- If a target feature is not specified, it defaults to a new concept of "any". The compiler then produces code that executes on a target configured for either value of the setting, which can impact overall performance.
It is recommended to explicitly specify the setting for more efficient performance.
It is recommended to explicitly specify the setting for more efficient performance.
- In particular, the setting for XNACK now defaults to produce less performant code than previous ROCm releases.
- The legacy clang -mxnack, -mno-xnack, -msram-ecc, and -mno-sram-ecc options are deprecated. They are still supported; however, they will be removed in a future release.
- The new Target ID syntax renames the SRAM ECC feature from sram-ecc to sramecc.
* The clang offload bundler uses the new offload hipv4 for HIP code object version 4. For more information, see
https://clang.llvm.org/docs/ClangOffloadBundler.html
* ROCm v4.1 corrects code object loading to enforce target feature settings of the code object to match the setting of the agent. It also corrects the recording of target feature settings in the code object. As a consequence, the legacy code objects may no longer load due to mismatches.
* gfx802, gfx803, and gfx805 do not support the XNACK target feature in the ROCm v4.1 release.
### New Code Object Tools
AMD ROCm v4.1 provides new code object tools *roc-obj-ls* and *roc-obj-extract*. These tools allow for the listing and extraction of AMD GPU ROCm code objects that are embedded in HIP executables and shared objects. Each tool supports a --help option that provides more information.
Refer to the HIP Programming Guide v4.1 for additional information and examples.
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.1.pdf
**Note**
The extractkernel tool in previous AMD ROCm releases has been removed from the AMD ROCm v4.1 release
and will no longer be supported.
**Note**
The roc-obj-ls and roc-obj-extract tools may generate an error about the following missing Perl modules:
* File::Which
* File::BaseDir
* File::Copy
* URI::Encode
This error is due to the missing dependencies in the hip-base installer package. As a workaround, you may use the
following instructions to install the Perl modules:
*Ubuntu*
apt-get install libfile-which-perl libfile-basedir-perl libfile-copy-recursive-perl liburi-encode-perl
*CentOS*
sudo yum install perl-File-Which perl-File-BaseDir perl-File-Copy-Recursive perl-URI-Encode
## ROCm Communications Collective Library
Repo for CentOS8.3:
### Compatibility with NVIDIA Communications Collective Library v2\.7 API
ROCm Communications Collective Library (RCCL) is now compatible with the NVIDIA Communications Collective Library (NCCL) v2.7 API.
RCCL (pronounced "Rickle") is a stand-alone library of standard collective communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, gather, scatter, and all-to-all. There is also initial support for direct GPU-to-GPU send and receive operations. It has been optimized to achieve high bandwidth on platforms using PCIe and xGMI, as well as networking using InfiniBand Verbs or TCP/IP sockets. RCCL supports an arbitrary number of GPUs installed in a single node or multiple nodes and can be used in either single- or multi-process (e.g., MPI) applications.
The collective operations are implemented using ring and tree algorithms and have been optimized for throughput and latency. For best performance, small operations can be either batched into larger operations or aggregated through the API.
For more information about RCCL APIs and compatibility with NCCL v2.7, see
https://rccl.readthedocs.io/en/develop/index.html
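The following is a minimal single-process sketch of the NCCL-compatible API; the header name, buffer sizes, and initialization details are assumptions rather than part of this release note, so consult the RCCL documentation linked above for the authoritative interface:

```cpp
#include <hip/hip_runtime.h>
#include <rccl.h>   // header name assumed; may be <rccl/rccl.h> on some packages
#include <vector>

int main() {
    int ndev = 0;
    hipGetDeviceCount(&ndev);

    // One communicator per visible GPU, single process.
    std::vector<ncclComm_t> comms(ndev);
    std::vector<int> devs(ndev);
    for (int i = 0; i < ndev; ++i) devs[i] = i;
    ncclCommInitAll(comms.data(), ndev, devs.data());

    const size_t count = 1024;
    std::vector<float*> sendbuf(ndev), recvbuf(ndev);
    std::vector<hipStream_t> streams(ndev);
    for (int i = 0; i < ndev; ++i) {
        hipSetDevice(i);
        hipMalloc(&sendbuf[i], count * sizeof(float));
        hipMalloc(&recvbuf[i], count * sizeof(float));
        hipStreamCreate(&streams[i]);
    }

    // Sum-reduce the buffers across all GPUs using the NCCL-compatible call.
    ncclGroupStart();
    for (int i = 0; i < ndev; ++i)
        ncclAllReduce(sendbuf[i], recvbuf[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < ndev; ++i) {
        hipSetDevice(i);
        hipStreamSynchronize(streams[i]);
        hipFree(sendbuf[i]); hipFree(recvbuf[i]);
        hipStreamDestroy(streams[i]);
        ncclCommDestroy(comms[i]);
    }
    return 0;
}
```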
sudo yum config-manager --set-enabled powertools
## Singular Value Decomposition of Bi\-diagonal Matrices
*RHEL*
rocsolver_bdsqr now computes the Singular Value Decomposition (SVD) of bi-diagonal matrices. It is an auxiliary function for the SVD of general matrices (function rocsolver_gesvd).
sudo yum install perl-File-Which perl-File-BaseDir perl-File-Copy-Recursive perl-URI-Encode
BDSQR computes the singular value decomposition (SVD) of an n-by-n bidiagonal matrix B.
Repo for RHEL8.3:
The SVD of B has the following form:
sudo subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
B = Ub * S * Vb'
where
• S is the n-by-n diagonal matrix of singular values of B
• the columns of Ub are the left singular vectors of B
• the columns of Vb are its right singular vectors
The computation of the singular vectors is optional; this function accepts input matrices U (of size nu-by-n) and V (of size n-by-nv) that are overwritten with U*Ub and Vb*V. If nu = 0 no left vectors are computed; if nv = 0 no right vectors are computed.
*SLES 15 SP2*
Optionally, this function can also compute Ub*C for a given n-by-nc input matrix C.
sudo zypper addrepo https://download.opensuse.org/repositories/devel:languages:perl/SLE_15/devel:languages:perl.repo
PARAMETERS
• [in] handle: rocblas_handle.
## ROCm Data Center Tool
• [in] uplo: rocblas_fill.
### Grafana Integration
Specifies whether B is upper or lower bidiagonal.
The ROCm Data Center (RDC) Tool is enhanced with the Grafana plugin. Grafana is a common monitoring stack used for storing and visualizing time series data. Prometheus acts as the storage backend, and Grafana is used as the interface for analysis and visualization. Grafana has a plethora of visualization options and can be integrated with Prometheus for the ROCm Data Center (RDC) dashboard.
• [in] n: rocblas_int. n >= 0.
For more information about Grafana integration and installation, refer to the ROCm Data Center Tool User guide at:
The number of rows and columns of matrix B.
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.1.pdf
• [in] nv: rocblas_int. nv >= 0.
The number of columns of matrix V.
• [in] nu: rocblas_int. nu >= 0.
## ROCm Math and Communication Libraries
The number of rows of matrix U.
### rocSPARSE
• [in] nc: rocblas_int. nc >= 0.
rocSPARSE extends support for:
The number of columns of matrix C.
* gebsrmm
* gebsrmv
* gebsrsv
* coo2dense and dense2coo
* generic API including axpby, gather, scatter, rot, spvv, spmv, spgemm, sparsetodense, densetosparse
* mixed indexing types in matrix formats
• [inout] D: pointer to real type. Array on the GPU of dimension n.
For more information, see
On entry, the diagonal elements of B. On exit, if info = 0, the singular values of B in decreasing order; if info > 0, the diagonal elements of a bidiagonal matrix orthogonally equivalent to B.
https://rocsparse.readthedocs.io/en/latest/
• [inout] E: pointer to real type. Array on the GPU of dimension n-1.
On entry, the off-diagonal elements of B. On exit, if info > 0, the off-diagonal elements of a bidiagonal matrix orthogonally equivalent to B (if info = 0 this matrix converges to zero).
### rocSOLVER
• [inout] V: pointer to type. Array on the GPU of dimension ldv*nv.
rocSOLVER extends support for:
On entry, the matrix V. On exit, it is overwritten with Vb*V. (Not referenced if nv = 0).
* Eigensolver routines for symmetric/hermitian matrices:
- STERF, STEQR
* Linear solvers for general non-square systems:
- GELS (API added with batched and strided_batched versions. Only the overdetermined non-transpose case is implemented in
this release. Other cases will return rocblas_status_not_implemented status for now.)
* Extended test coverage for functions returning information
• [in] ldv: rocblas_int. ldv >= n if nv > 0, or ldv >=1 if nv = 0.
* Changelog file
Specifies the leading dimension of V.
• [inout] U: pointer to type. Array on the GPU of dimension ldu*n.
On entry, the matrix U. On exit, it is overwritten with U*Ub. (Not referenced if nu = 0).
• [in] ldu: rocblas_int. ldu >= nu.
Specifies the leading dimension of U.
• [inout] C: pointer to type. Array on the GPU of dimension ldc*nc.
On entry, the matrix C. On exit, it is overwritten with Ub*C. (Not referenced if nc = 0).
• [in] ldc: rocblas_int. ldc >= n if nc > 0, or ldc >=1 if nc = 0.
Specifies the leading dimension of C.
• [out] info: pointer to a rocblas_int on the GPU.
If info = 0, successful exit. If info = i > 0, i elements of E have not converged to zero.
* Tridiagonalization routines for symmetric and hermitian matrices:
- LATRD
- SYTD2, SYTRD (with batched and strided_batched versions)
- HETD2, HETRD (with batched and strided_batched versions)
* Sample code and unit test for unified memory model/Heterogeneous Memory Management (HMM)
For more information, see
https://rocsolver.readthedocs.io/en/latest/userguide_api.html#rocsolver-type-bdsqr
https://rocsolver.readthedocs.io/en/latest/
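As an illustrative, hedged sketch of a singular-values-only call (nv = nu = nc = 0), assuming a double-precision entry point named rocsolver_dbdsqr whose arguments follow the order of the parameter list above; refer to the rocSOLVER documentation linked above for the authoritative signature:

```cpp
#include <hip/hip_runtime.h>
#include <rocblas.h>
#include <rocsolver.h>   // header path assumed for this ROCm version
#include <vector>

int main() {
    rocblas_handle handle;
    rocblas_create_handle(&handle);

    const rocblas_int n = 4;
    std::vector<double> hD = {4.0, 3.0, 2.0, 1.0};   // diagonal of B
    std::vector<double> hE = {0.5, 0.5, 0.5};        // off-diagonal of B

    double *dD, *dE;
    rocblas_int *dInfo;
    hipMalloc(&dD, n * sizeof(double));
    hipMalloc(&dE, (n - 1) * sizeof(double));
    hipMalloc(&dInfo, sizeof(rocblas_int));
    hipMemcpy(dD, hD.data(), n * sizeof(double), hipMemcpyHostToDevice);
    hipMemcpy(dE, hE.data(), (n - 1) * sizeof(double), hipMemcpyHostToDevice);

    // Singular values only: nv = nu = nc = 0, so V, U, and C are not referenced.
    rocsolver_dbdsqr(handle, rocblas_fill_upper, n,
                     0, 0, 0,            // nv, nu, nc
                     dD, dE,
                     nullptr, 1,         // V, ldv
                     nullptr, 1,         // U, ldu
                     nullptr, 1,         // C, ldc
                     dInfo);

    hipMemcpy(hD.data(), dD, n * sizeof(double), hipMemcpyDeviceToHost);
    // hD now holds the singular values of B in decreasing order (if info == 0).

    hipFree(dD); hipFree(dE); hipFree(dInfo);
    rocblas_destroy_handle(handle);
    return 0;
}
```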
### hipCUB
The new iterator DiscardOutputIterator in hipCUB represents a special kind of pointer that ignores values written to it upon dereference. It is useful for ignoring the output of certain algorithms without wasting memory capacity or bandwidth. DiscardOutputIterator may also be used to count the size of an algorithm's output that is not known in advance.
For more information, see
https://hipcub.readthedocs.io/en/latest/
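A hedged sketch of the output-counting use case, assuming hipCUB mirrors the CUB DeviceSelect::Unique interface (the input values and sizes below are illustrative only):

```cpp
#include <hip/hip_runtime.h>
#include <hipcub/hipcub.hpp>
#include <vector>
#include <cstdio>

int main() {
    std::vector<int> h_in = {1, 1, 2, 2, 2, 3, 4, 4};
    const int num_items = static_cast<int>(h_in.size());

    int *d_in, *d_num_selected;
    hipMalloc(&d_in, num_items * sizeof(int));
    hipMalloc(&d_num_selected, sizeof(int));
    hipMemcpy(d_in, h_in.data(), num_items * sizeof(int), hipMemcpyHostToDevice);

    // Output values are discarded; only the count of unique items is kept.
    hipcub::DiscardOutputIterator<> d_out;

    void* d_temp = nullptr;
    size_t temp_bytes = 0;
    hipcub::DeviceSelect::Unique(d_temp, temp_bytes, d_in, d_out,
                                 d_num_selected, num_items);
    hipMalloc(&d_temp, temp_bytes);
    hipcub::DeviceSelect::Unique(d_temp, temp_bytes, d_in, d_out,
                                 d_num_selected, num_items);

    int h_count = 0;
    hipMemcpy(&h_count, d_num_selected, sizeof(int), hipMemcpyDeviceToHost);
    printf("unique items: %d\n", h_count);   // expected: 4

    hipFree(d_in); hipFree(d_num_selected); hipFree(d_temp);
    return 0;
}
```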
### rocSPARSE_gemmi\() Operations for Sparse Matrices
## HIP Enhancements
This enhancement provides a dense matrix sparse matrix multiplication using the CSR storage format.
rocsparse_gemmi multiplies the scalar α with a dense m×k matrix A and the sparse k×n matrix B, defined in the CSR storage format, and adds the result to the dense m×n matrix C that is multiplied by the scalar β, such that
C := α ⋅ op(A) ⋅ op(B) + β ⋅ C
with
### Support for hipEventDisableTiming Flag
op(A) = A    if trans_A == rocsparse_operation_none
op(A) = A^T  if trans_A == rocsparse_operation_transpose
op(A) = A^H  if trans_A == rocsparse_operation_conjugate_transpose
HIP now supports the hipEventDisableTiming flag for hipEventCreateWithFlags. Note, events created with this flag do not record profiling data and provide optimal performance when used for synchronization.
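A minimal hedged sketch of using such an event purely for synchronization (the stream contents are omitted and the names are illustrative):

```cpp
#include <hip/hip_runtime.h>

int main() {
    hipStream_t stream;
    hipStreamCreate(&stream);

    // Event created with timing disabled; intended only for synchronization.
    hipEvent_t done;
    hipEventCreateWithFlags(&done, hipEventDisableTiming);

    // ... enqueue kernels or copies on `stream` here ...

    hipEventRecord(done, stream);   // mark a point in the stream
    hipEventSynchronize(done);      // host waits until that point completes

    hipEventDestroy(done);
    hipStreamDestroy(stream);
    return 0;
}
```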
and
### Cooperative Group Functions
op(B) = B    if trans_B == rocsparse_operation_none
op(B) = B^T  if trans_B == rocsparse_operation_transpose
op(B) = B^H  if trans_B == rocsparse_operation_conjugate_transpose
Note: This function is non-blocking and executed asynchronously with the host. It may return before the actual computation has finished.
Cooperative Groups defines, synchronizes, and communicates between groups of threads and blocks for efficiency and ease of management. HIP now supports the following kernel language Cooperative Groups types and functions:
![Screenshot](images/CGMain.PNG)
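A minimal hedged sketch using the thread_block type and this_thread_block()/sync() from the HIP cooperative groups header (the kernel name and the 256-thread block size are illustrative assumptions):

```cpp
#include <hip/hip_runtime.h>
#include <hip/hip_cooperative_groups.h>

namespace cg = cooperative_groups;

// Hypothetical kernel: each block cooperatively sums its 256 elements.
__global__ void block_sum(const float* in, float* out) {
    __shared__ float scratch[256];
    cg::thread_block block = cg::this_thread_block();

    unsigned tid = block.thread_rank();
    scratch[tid] = in[blockIdx.x * blockDim.x + tid];
    block.sync();   // cooperative-groups barrier

    // Simple tree reduction within the block.
    for (unsigned s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) scratch[tid] += scratch[tid + s];
        block.sync();
    }
    if (tid == 0) out[blockIdx.x] = scratch[0];
}

int main() {
    const int blocks = 4, threads = 256;
    float *d_in, *d_out;
    hipMalloc(&d_in, blocks * threads * sizeof(float));
    hipMalloc(&d_out, blocks * sizeof(float));
    hipMemset(d_in, 0, blocks * threads * sizeof(float));
    hipLaunchKernelGGL(block_sum, dim3(blocks), dim3(threads), 0, 0, d_in, d_out);
    hipDeviceSynchronize();
    hipFree(d_in); hipFree(d_out);
    return 0;
}
```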
### Support for Extern Shared Declarations
Previously, it was required to declare dynamic shared memory using the HIP_DYNAMIC_SHARED macro for correctness, because using static shared memory in the same kernel could result in overlapping memory ranges and data races.
Now, the HIP-Clang compiler provides support for extern shared declarations, and the HIP_DYNAMIC_SHARED option is no longer required.
You may use the standard extern definition:
extern __shared__ type var[];
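For illustration, a hedged sketch of a kernel using the standard extern shared declaration, with the dynamic shared-memory size supplied at launch (the kernel and variable names are assumptions):

```cpp
#include <hip/hip_runtime.h>

// Hypothetical kernel: reverses a block of floats through dynamically
// allocated shared memory declared with the standard extern syntax.
__global__ void reverse_block(float* data, int n) {
    extern __shared__ float tile[];   // no HIP_DYNAMIC_SHARED macro needed
    int tid = threadIdx.x;
    if (tid < n) tile[tid] = data[tid];
    __syncthreads();
    if (tid < n) data[tid] = tile[n - 1 - tid];
}

int main() {
    const int n = 256;
    float* d_data = nullptr;
    hipMalloc(&d_data, n * sizeof(float));
    // The third launch argument is the dynamic shared-memory size in bytes.
    hipLaunchKernelGGL(reverse_block, dim3(1), dim3(n),
                       n * sizeof(float), 0, d_data, n);
    hipDeviceSynchronize();
    hipFree(d_data);
    return 0;
}
```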
## OpenMP Enhancements and Fixes
This release includes the following OpenMP changes:
* Usability Enhancements
* Fixes to Internal Clang Math Headers
* OpenMP Defect Fixes
### Usability Enhancements
* OMPD updates for flang
* To support OpenMP debugging, the selected OpenMP runtime sources are included in lib-debug/src/openmp. The ROCgdb debugger
will find these automatically.
* Threadsafe hsa plugin for libomptarget
* Support multiple devices with malloc and hostrpc
* Improve hostrpc version check
* Add max reduction offload feature to flang
* Integration of changes to support HPC Toolkit
* Support for fprintf
* Initial support for GPU malloc and free. The internal device RTL is required for GPU malloc and free with nested parallelism. GPU malloc and free are now replaced, which improves the device memory footprint.
* Increase detail of debug printing controlled by LIBOMPTARGET_KERNEL_TRACE environment variable
* Add support for -gpubnames in Flang Driver
### Fixes to Internal Clang Math Headers
This release includes a set of changes applied to Clang internal headers to support OpenMP C, C++, FORTRAN, and HIP C. This establishes consistency between NVPTX and AMDGCN offloading, and OpenMP, HIP, and CUDA. OpenMP uses function variants and header overlays to define device versions of functions. This causes Clang LLVM IR codegen to mangle names of variants in both the definition and callsites of functions defined in the internal Clang headers. The changes apply to headers found in the installation subdirectory lib/clang/11.0.0/include.
The changes also temporarily eliminate the use of the libm bitcode libraries for C and C++. Although math functions are now defined with internal clang headers, a bitcode library of the C functions defined in the headers is still built for the FORTRAN toolchain linking. This is because FORTRAN cannot use C math headers. This bitcode library is installed in lib/libdevice/libm-.bc. The source build of the bitcode library is implemented with the aomp-extras repository and the component-built script build_extras.sh.
### OpenMP Defect Fixes
The following OpenMP defects are fixed in this release:
* Openmpi configuration issue with real16.
* [flang] The AOMP 11.7-1 Fortran compiler claims to support the -isystem flag, but ignores it.
* [flang] producing internal compiler error when the character is used with KIND.
* [flang] openmp map clause on complex allocatable expressions !$omp target data map( chunk%tiles(1)%field%density0).
* Add a fatal error if missing -Xopenmp-target or -march options when -fopenmp-targets is specified. However, this requirement is not
applicable for offloading to the host when there is only a single target and that target is the host.
* Openmp error message output for no_rocm_device_lib was asserting.
* Changed linkage on constant per-kernel symbols from external to weaklinkageonly to prevent duplicate symbols when building Kokkos.
* Add environment variables ROCM_LLD_ARGS ROCM_LINK_ARGS ROCM_SELECT_ARGS to test driver options without compiler rebuild.
* Fix problems with device math functions being ambiguous, especially the pow function. Fix aompcc to accept file type cxx.
* Fix a latent race between host runtime and devicertl.
## MIOpen Tensile Integration
MIOpenTensile provides host-callable interfaces to the Tensile library and supports the HIP programming model. You may use the Tensile feature in the HIP backend by setting the following build environment variable to ON:
MIOPEN_USE_MIOPENTENSILE=ON
MIOpenTensile is an open-source collaboration tool where external entities can submit source pull requests (PRs) for updates. MIOpenTensile maintainers review and approve the PRs using standard open-source practices.
For more information about the sources and the build system, see
https://github.com/ROCmSoftwarePlatform/MIOpenTensile
For more information and examples, see
https://rocsparse.readthedocs.io/en/master/usermanual.html#rocsparse-gemmi
# Known Issues
The following are the known issues in this release.
## (AOMP) Undefined Hidden Symbol Linker Error Causes Compilation Failure in HIP
## Upgrade to AMD ROCm v4.1 Not Supported
The HIP example device_lib fails to compile due to unreferenced symbols with Link Time Optimization resulting in undefined hidden symbol errors.
An upgrade from previous releases to AMD ROCm v4.1 is not supported. A complete uninstallation of previous ROCm versions is required before installing a new version of ROCm.
This issue is under investigation and there is no known workaround at this time.
## Performance Impact for Kernel Launch Bound Attribute
Kernels without the *__launch_bounds__* attribute assume the default maximum threads-per-block value. In the previous ROCm release, this value was 256; in the ROCm v4.1 release, it is changed to 1024. This change ensures that the actual threads-per-block value used to launch a kernel is, by default, always within the launch bounds, establishing the correctness of HIP programs.
**NOTE**: Using the above-mentioned approach may incur performance degradation in certain cases. Users must add a minimum launch bound to each kernel, which covers all possible threads per block values used to launch that kernel for correctness and performance.
The recommended workaround to recover the performance is to add *--gpu-max-threads-per-block=256* to the compilation options for HIP programs.
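For reference, a per-kernel bound can also be expressed in source with the __launch_bounds__ attribute mentioned above; the following hedged sketch is illustrative only (the kernel name and the 256-thread bound are assumptions, not a prescribed fix):

```cpp
#include <hip/hip_runtime.h>

// Hypothetical kernel annotated with an explicit launch bound so the compiler
// optimizes for at most 256 threads per block instead of the default of 1024.
__global__ void __launch_bounds__(256) scale(float* x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20, block = 256;
    float* d_x = nullptr;
    hipMalloc(&d_x, n * sizeof(float));
    hipLaunchKernelGGL(scale, dim3((n + block - 1) / block), dim3(block),
                       0, 0, d_x, 2.0f, n);
    hipDeviceSynchronize();
    hipFree(d_x);
    return 0;
}
```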
## Issue with Passing a Subset of GPUs in a Multi-GPU System
ROCm support for passing individual GPUs via the docker --device flag in a Docker run command has a known issue when passing a subset of GPUs in a multi-GPU system. The command runs without any warning or error notification. However, all GPU executable run outputs are randomly corrupted.
Using GPU targeting via the Docker command is not recommended for users of ROCm 4.1. There is no workaround for this issue currently.
## Performance Impact for LDS-Bound Kernels
The compiler in ROCm v4.1 generates LDS load and stores instructions that incorrectly assume equal performance between aligned and misaligned accesses. While this does not impact code correctness, it may result in sub-optimal performance.
This issue is under investigation, and there is no known workaround at this time.
## Changed HIP Environment Variables in ROCm v4.1 Release
In the ROCm v3.5 release, the Heterogeneous Compute Compiler (HCC) compiler was deprecated, and the HIP-Clang compiler was introduced for compiling Heterogeneous-Compute Interface for Portability (HIP) programs. Also, the HIP runtime API was implemented on top of the Radeon Open Compute Common Language runtime (ROCclr). ROCclr is an abstraction layer that provides the ability to interact with different runtime backends such as ROCr.
While the *HIP_PLATFORM=hcc* environment variable was functional in subsequent releases after ROCm v3.5, in the ROCm v4.1 release, changes to the following environment variables were implemented:
* *HIP_PLATFORM=hcc was changed to HIP_PLATFORM=amd*
* *HIP_PLATFORM=nvcc was changed to HIP_PLATFORM=nvidia*
Therefore, any applications continuing to use the HIP_PLATFORM=hcc environment variable will fail.
**Workaround:** Update the environment variables to reflect the changes mentioned above.
## MIGraphX Fails for fp16 Datatype
The MIGraphX functionality does not work for the fp16 datatype.
# Deprecations
The following workaround is recommended:
This section describes deprecations and removals in AMD ROCm.
Use the AMD ROCm v3.3 version of MIGraphX
Or
Build MIGraphX v3.7 from the source using AMD ROCm v3.3
## Missing Google Test Installation May Cause RCCL Unit Test Compilation Failure
Users of the RCCL install.sh script may encounter an RCCL unit test compilation error. It is recommended to use CMAKE directly instead of install.sh to compile RCCL. Ensure Google Test 1.10+ is available in the CMAKE search path.
As a workaround, use the latest RCCL from the GitHub development branch at:
https://github.com/ROCmSoftwarePlatform/rccl/pull/237
## Issue with Peer-to-Peer Transfers
Using peer-to-peer (P2P) transfers on systems without the hardware P2P assistance may produce incorrect results.
Ensure the hardware supports peer-to-peer transfers and enable the peer-to-peer setting in the hardware to resolve this issue.
## Partial Loss of Tracing Events for Large Applications
An internal tracing buffer allocation issue can cause a partial loss of some tracing events for large applications.
As a workaround, rebuild the roctracer/rocprofiler libraries from the GitHub roc-3.7 branch at:
• https://github.com/ROCm-Developer-Tools/rocprofiler
• https://github.com/ROCm-Developer-Tools/roctracer
## GPU Kernel C++ Names Not Demangled
GPU kernel C++ names in the profiling traces and stats produced by the --hsa-trace option are not demangled.
As a workaround, users may choose to demangle the GPU kernel C++ names as required.
## rocprof option --parallel-kernels Not Supported in This Release
rocprof option --parallel-kernels is available in the options list, however, it is not fully validated and supported in this release.
## Random Soft Hang Observed When Running ResNet-Based Models
A random soft hang is observed when running ResNet-based models for a loop run of more than 25 to 30 hours. The issue is observed on both PyTorch and TensorFlow frameworks.
You can terminate the unresponsive process to temporarily resolve the issue.
There is no known workaround at this time.
## Compiler Generated Code Object Version 2 Deprecation
Compiler-generated code object version 2 is no longer supported and has been completely removed. Support for loading code object version 2 is also deprecated with no announced removal release.
# Deploying ROCm
AMD hosts both Debian and RPM repositories for the ROCm v3.7.x packages.
AMD hosts both Debian and RPM repositories for the ROCm packages.
For more information on ROCM installation on all platforms, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
# Machine Learning and High Performance Computing Software Stack for AMD GPU
For an updated version of the software stack for AMD GPU, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#software-stack-for-amd-gpu
# Hardware and Software Support
ROCm is focused on using AMD GPUs to accelerate computational tasks such as machine learning, engineering workloads, and scientific computing.
In order to focus our development efforts on these domains of interest, ROCm supports a targeted set of hardware configurations which are detailed further in this section.
**Note:** The AMD ROCm™ open software platform is a compute stack for headless system deployments. GUI-based software applications are currently not supported.
#### Supported GPUs
Because the ROCm Platform has a focus on particular computational domains, we offer official support for a selection of AMD GPUs that are designed to offer good performance and price in these domains.
**Note:** The integrated GPUs of Ryzen are not officially supported targets for ROCm.
ROCm officially supports AMD GPUs that use following chips:
* GFX8 GPUs
* "Fiji" chips, such as on the AMD Radeon R9 Fury X and Radeon Instinct MI8
* "Polaris 10" chips, such as on the AMD Radeon RX 580 and Radeon Instinct MI6
* GFX9 GPUs
* "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
* "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII
* GFX9 GPUs
- "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
- "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII, Radeon Pro VII
* CDNA GPUs
- MI100 chips such as on the AMD Instinct™ MI100
ROCm is a collection of software ranging from drivers and runtimes to libraries and developer tools.
Some of this software may work with more GPUs than the "officially supported" list above, though AMD does not make any official claims of support for these devices on the ROCm software platform.
The following list of GPUs are enabled in the ROCm software, though full support is not guaranteed:
* GFX8 GPUs
@@ -354,7 +645,7 @@ As described [below](#limited-support), "Carrizo", "Bristol Ridge", and "Raven R
However, they are not enabled in the HIP runtime, and may not work due to motherboard or OEM hardware limitations.
As such, they are not yet officially supported targets for ROCm.
For a more detailed list of hardware support, please see [the following documentation](https://rocm.github.io/hardware.html).
For a more detailed list of hardware support, please see [the following documentation](https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units).
#### Supported CPUs
As described above, GFX8 GPUs require PCIe 3.0 with PCIe atomics in order to run ROCm.
@@ -394,9 +685,10 @@ does not require or take advantage of PCIe Atomics. However, we still recommend
from the list provided above for compatibility purposes.
#### Not supported or limited support under ROCm
##### Limited support
* ROCm 2.9.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
* ROCm 4.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
* Thunderbolt 1, 2, and 3 enabled breakout boxes should now be able to work with ROCm. Thunderbolt 1 and 2 are PCIe 2.0 based, and thus are only supported with GPUs that do not require PCIe 3.1.0 atomics (e.g. Vega 10). However, we have done no testing on this configuration and would need community support due to limited access to this type of equipment.
* AMD "Carrizo" and "Bristol Ridge" APUs are enabled to run OpenCL, but do not yet support HIP or our libraries built on top of these compilers and runtimes.
* As of ROCm 2.1, "Carrizo" and "Bristol Ridge" require the use of upstream kernel drivers.
@@ -409,11 +701,13 @@ from the list provided above for compatibility purposes.
##### Not supported
* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported in ROCm 2.9.x
* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported.
* We do not support GFX8-class GPUs (Fiji, Polaris, etc.) on CPUs that do not have PCIe 3.0 with PCIe atomics.
* As such, we do not support AMD Carrizo and Kaveri APUs as hosts for such GPUs.
* Thunderbolt 1 and 2 enabled GPUs are not supported by GFX8 GPUs on ROCm. Thunderbolt 1 & 2 are based on PCIe 2.0.
In the default ROCm configuration, GFX8 and GFX9 GPUs require PCI Express 3.0 with PCIe atomics. The ROCm platform leverages these advanced capabilities to allow features such as user-level submission of work from the host to the GPU. This includes PCIe atomic Fetch and Add, Compare and Swap, Unconditional Swap, and AtomicOp Completion.
#### ROCm support in upstream Linux kernels
As of ROCm 1.9.0, the ROCm user-level software is compatible with the AMD drivers in certain upstream Linux kernels.
@@ -440,9 +734,17 @@ For users that have the option of using either AMD's or the upstreamed driver, t
| | | Does not include most up-to-date firmware |
# Disclaimer
## Machine Learning and High Performance Computing Software Stack for AMD GPU
AMD®, the AMD Arrow logo, AMD Instinct™, Radeon™, ROCm® and combinations thereof are trademarks of Advanced Micro Devices, Inc.
For an updated version of the software stack for AMD GPU, see
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
PCIe® is a registered trademark of PCI-SIG Corporation. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
Google® is a registered trademark of Google LLC.
Ubuntu and the Ubuntu logo are registered trademarks of Canonical Ltd.
Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#machine-learning-and-high-performance-computing-software-stack-for-amd-gpu-v3-5-0


ROCm_SMI_API_GUIDE_v4.1.pdf (new binary file; not shown)

default.xml

@@ -1,27 +1,27 @@
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<remote name="roc-github"
fetch="http://github.com/RadeonOpenCompute/" />
fetch="http://github.com/RadeonOpenCompute/" />
<remote name="rocm-devtools"
fetch="https://github.com/ROCm-Developer-Tools/" />
fetch="https://github.com/ROCm-Developer-Tools/" />
<remote name="rocm-swplat"
fetch="https://github.com/ROCmSoftwarePlatform/" />
fetch="https://github.com/ROCmSoftwarePlatform/" />
<remote name="gpuopen-libs"
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
<remote name="gpuopen-tools"
fetch="https://github.com/GPUOpen-Tools/" />
fetch="https://github.com/GPUOpen-Tools/" />
<remote name="KhronosGroup"
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-3.7.0"
remote="roc-github"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCM-->
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-4.1.0"
remote="roc-github"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCM-->
<project name="ROCK-Kernel-Driver" />
<project name="ROCT-Thunk-Interface" />
<project name="ROCR-Runtime" />
<project name="ROC-smi" />
<project name="rocm_smi_lib" remote="roc-github" />
<project name="rocm_smi_lib" />
<project name="rocm-cmake" />
<project name="rocminfo" />
<project name="rocprofiler" remote="rocm-devtools" />
@@ -29,26 +29,30 @@
<project name="ROCm-OpenCL-Runtime" />
<project path="ROCm-OpenCL-Runtime/api/opencl/khronos/icd" name="OpenCL-ICD-Loader" remote="KhronosGroup" revision="6c03f8b58fafd9dd693eaac826749a5cfad515f8" />
<project name="clang-ocl" />
<!--HIP Projects-->
<project name="HIP" remote="rocm-devtools" />
<!--HIP Projects-->
<project name="HIP" remote="rocm-devtools" revision="refs/tags/rocm-4.1.1" />
<project name="HIP-Examples" remote="rocm-devtools" />
<project name="ROCclr" remote="rocm-devtools" />
<project name="HIPIFY" remote="rocm-devtools" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm-project" path="llvm_amd-stg-open" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm-project" revision="refs/tags/rocm-4.1.1" />
<project name="ROCm-Device-Libs" />
<project name="atmi" />
<project name="ROCm-CompilerSupport" />
<project name="rocr_debug_agent" remote="rocm-devtools" revision="refs/tags/roc-3.7.0" />
<project name="rocr_debug_agent" remote="rocm-devtools" />
<project name="rocm_bandwidth_test" />
<project name="half" remote="rocm-swplat" revision="37742ce15b76b44e4b271c1e66d13d2fa7bd003e" />
<project name="RCP" remote="gpuopen-tools" revision="3a49405a1500067c49d181844ec90aea606055bb" />
<!-- gdb projects -->
<!-- gdb projects -->
<project name="ROCgdb" remote="rocm-devtools" />
<project name="ROCdbgapi" remote="rocm-devtools" />
<!-- ROCm Libraries -->
<!-- ROCm Libraries -->
<project name="rdc" remote="roc-github" />
<project name="rocBLAS" remote="rocm-swplat" />
<project name="Tensile" remote="rocm-swplat" />
<project name="hipBLAS" remote="rocm-swplat" />
<project name="rocFFT" remote="rocm-swplat" />
<project name="hipFFT" remote="rocm-swplat" />
<project name="rocRAND" remote="rocm-swplat" />
<project name="rocSPARSE" remote="rocm-swplat" />
<project name="rocSOLVER" remote="rocm-swplat" />
@@ -61,19 +65,11 @@
<project name="rocThrust" remote="rocm-swplat" />
<project name="hipCUB" remote="rocm-swplat" />
<project name="rocPRIM" remote="rocm-swplat" />
<project name="AMDMIGraphX" remote="rocm-swplat" revision="e66968a25f9342a28af1157b06cbdbf8579c5519" />
<project name="hipfort" remote="rocm-swplat" />
<project name="AMDMIGraphX" remote="rocm-swplat" />
<project name="ROCmValidationSuite" remote="rocm-devtools" />
<!-- Projects for AOMP -->
<project name="ROCT-Thunk-Interface" path="aomp/roct-thunk-interface" remote="roc-github" />
<project name="ROCR-Runtime" path="aomp/rocr-runtime" remote="roc-github" />
<project name="ROCm-Device-Libs" path="aomp/rocm-device-libs" remote="roc-github" />
<project name="ROCm-CompilerSupport" path="aomp/rocm-compilersupport" remote="roc-github" />
<project name="rocminfo" path="aomp/rocminfo" remote="roc-github" />
<project name="HIP" path="aomp/hip-on-vdi" remote="rocm-devtools" />
<project name="aomp" path="aomp/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="aomp/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="aomp/flang" remote="rocm-devtools" />
<project name="amd-llvm-project" path="aomp/amd-llvm-project" remote="rocm-devtools" />
<project name="ROCclr" path="aomp/vdi" remote="rocm-devtools" />
<project name="ROCm-OpenCL-Runtime" path="aomp/opencl-on-vdi" remote="roc-github" />
<!-- Projects for OpenMP-Extras -->
<project name="aomp" path="openmp-extras/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="openmp-extras/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="openmp-extras/flang" remote="rocm-devtools" />
</manifest>

BIN images/CG1.PNG Normal file (3.8 KiB, binary file not shown)
BIN images/CG2.PNG Normal file (31 KiB, binary file not shown)
BIN images/CG3.PNG Normal file (5.0 KiB, binary file not shown)
BIN images/CGMain.PNG Normal file (221 KiB, binary file not shown)
BIN images/CLI1.PNG Normal file (7.7 KiB, binary file not shown)
BIN images/CLI2.PNG Normal file (13 KiB, binary file not shown)
BIN images/SMI.PNG Normal file (13 KiB, binary file not shown)
BIN images/keyfeatures.PNG Normal file (51 KiB, binary file not shown)
BIN images/latestGPU.PNG Normal file (47 KiB, binary file not shown)
BIN images/rocsolverAPI.PNG Normal file (58 KiB, binary file not shown)

View File

@@ -1,503 +0,0 @@
## ROCm Version History
This file contains archived version history information for the [ROCm project](https://github.com/RadeonOpenCompute/ROCm)
### Current ROCm Version: 3.3
- [New features and enhancements in ROCm v3.1](#new-features-and-enhancements-in-rocm-v31)
- [New features and enhancements in ROCm v3.0](#new-features-and-enhancements-in-rocm-v30)
- [New features and enhancements in ROCm v2.10](#new-features-and-enhancements-in-rocm-v210)
- [New features and enhancements in ROCm 2.9](#new-features-and-enhancements-in-rocm-29)
- [New features and enhancements in ROCm 2.8](#new-features-and-enhancements-in-rocm-28)
- [New features and enhancements in ROCm 2.7.2](#new-features-and-enhancements-in-rocm-272)
- [New features and enhancements in ROCm 2.7](#new-features-and-enhancements-in-rocm-27)
- [New features and enhancements in ROCm 2.6](#new-features-and-enhancements-in-rocm-26)
- [New features and enhancements in ROCm 2.5](#new-features-and-enhancements-in-rocm-25)
- [New features and enhancements in ROCm 2.4](#new-features-and-enhancements-in-rocm-24)
- [New features and enhancements in ROCm 2.3](#new-features-and-enhancements-in-rocm-23)
- [New features and enhancements in ROCm 2.2](#new-features-and-enhancements-in-rocm-22)
- [New features and enhancements in ROCm 2.1](#new-features-and-enhancements-in-rocm-21)
- [New features and enhancements in ROCm 2.0](#new-features-and-enhancements-in-rocm-20)
- [New features and enhancements in ROCm 1.9.2](#new-features-and-enhancements-in-rocm-192)
- [New features and enhancements in ROCm 1.9.2](#new-features-and-enhancements-in-rocm-192-1)
- [New features and enhancements in ROCm 1.9.1](#new-features-and-enhancements-in-rocm-191)
- [New features and enhancements in ROCm 1.9.0](#new-features-and-enhancements-in-rocm-190)
- [New features as of ROCm 1.8.3](#new-features-as-of-rocm-183)
- [New features as of ROCm 1.8](#new-features-as-of-rocm-18)
- [New Features as of ROCm 1.7](#new-features-as-of-rocm-17)
- [New Features as of ROCm 1.5](#new-features-as-of-rocm-15)
## New features and enhancements in ROCm v3.2
The AMD ROCm v3.2 release was not productized.
## New features and enhancements in ROCm v3.1
### Change in ROCm Installation Directory Structure
A fresh installation of the ROCm toolkit installs the packages in the /opt/rocm-<version> folder. Previously, ROCm toolkit packages were installed in the /opt/rocm folder.
### Reliability, Availability, and Serviceability Support for Vega 7nm
The Reliability, Availability, and Serviceability (RAS) support for Vega 7nm is now available.
### SLURM Support for AMD GPU
SLURM (Simple Linux Utility for Resource Management) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.
## New features and enhancements in ROCm v3.0
### Support for CentOS/RHEL v7.7 <a id="centos-anchor"></a>
Support is extended for CentOS/RHEL v7.7 in the ROCm v3.0 release. For more information about the CentOS/RHEL v7.7 release, see:
[CentOS/RHEL](https://centos.org/forums/viewtopic.php?t=71657)
### Initial distribution of AOMP 0.7-5 in ROCm v3.0 <a id="aomp-anchor"></a>
The code base for this release of AOMP is the Clang/LLVM 9.0 sources as of October 8th, 2019. The LLVM-project branch used to build this release is AOMP-191008. It is now locked. With this release, an artifact tarball of the entire source tree is created. This tree includes a Makefile in the root directory used to build AOMP from the release tarball. You can use Spack to build AOMP from this source tarball or build manually without Spack.
For more information about AOMP 0.7-5, see: [AOMP](https://github.com/ROCm-Developer-Tools/aomp/tree/roc-3.0.0)
### Fast Fourier Transform Updates
The Fast Fourier Transform (FFT) is an efficient algorithm for computing the Discrete Fourier Transform. Fast Fourier transforms are used in signal processing, image processing, and many other areas. The following real FFT performance change is made in the ROCm v3.0 release:
* Implement efficient real/complex 2D transforms for even lengths.
Other improvements:
* More 2D test coverage sizes.
* Fix buffer allocation error for large 1D transforms.
* C++ compatibility improvements.
### MemCopy Enhancement for rocProf
In the v3.0 release, the rocProf tool is enhanced with an additional capability to dump asynchronous GPU memcopy information into a .csv file. You can use the '--hsa-trace' option to create the results_mcopy.csv file.
Future enhancements will include column labels.
### New features and enhancements in ROCm v2.10
#### rocBLAS Support for Complex GEMM
The rocBLAS library is a GPU-accelerated implementation of the standard Basic Linear Algebra Subprograms (BLAS). rocBLAS is designed to enable you to develop algorithms for high performance computing, image analysis, and machine learning.
In the AMD ROCm release v2.10, support is extended to the General Matrix Multiply (GEMM) routine for multiple small matrices processed simultaneously for rocBLAS on the AMD Radeon Instinct MI50. Both single-precision (CGEMM) and double-precision (ZGEMM) complex GEMM are now supported in rocBLAS.
#### Support for SLES 15 SP1
In the AMD ROCm v2.10 release, support is added for SUSE Linux® Enterprise Server (SLES) 15 SP1. SLES is a modular operating system for both multimodal and traditional IT.
#### Code Marker Support for rocProfiler and rocTracer Libraries
Code markers provide the external correlation ID for the calling thread. This function indicates that the calling thread is entering and leaving an external API region.
### New features and enhancements in ROCm 2.9
#### Initial release of the Radeon Augmentation Library (RALI)
The AMD Radeon Augmentation Library (RALI) is designed to efficiently decode and process images from a variety of storage formats and modify them through a processing graph programmable by the user. RALI currently provides a C API.
#### Quantization in MIGraphX v0.4
MIGraphX 0.4 introduces support for fp16 and int8 quantization. For additional details, as well as other new MIGraphX features, see [MIGraphX documentation](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/wiki/Getting-started:-using-the-new-features-of-MIGraphX-0.4).
#### rocSparse csrgemm
csrgemm enables the user to perform matrix-matrix multiplication with two sparse matrices in CSR format.
#### Singularity Support
ROCm 2.9 adds support for Singularity container version 2.5.2.
#### Initial release of rocTX
ROCm 2.9 introduces rocTX, which provides a C API for code markup for performance profiling. This initial release of rocTX supports annotation of code ranges and ASCII markers. For an example, see this [code](https://github.com/ROCm-Developer-Tools/roctracer/blob/amd-master/test/MatrixTranspose_test/MatrixTranspose.cpp).
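As an illustrative sketch only (not taken from the release notes), annotating a code range and dropping a marker with the rocTX C API might look like the following; it assumes the roctracer package is installed and the application is linked against the roctx library (for example, libroctx64 under /opt/rocm/roctracer):
```cpp
#include <roctx.h>   // rocTX code-markup API, shipped with roctracer

void annotated_step() {
    roctxMark("before compute");   // ASCII marker visible in the trace
    roctxRangePush("compute");     // open a named range
    // ... work to be profiled ...
    roctxRangePop();               // close the most recently opened range
}
```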
#### Added support for Ubuntu 18.04.3
Ubuntu 18.04.3 is now supported in ROCm 2.9.
### New features and enhancements in ROCm 2.8
#### Support for NCCL2.4.8 API
Implements ncclCommAbort() and ncclCommGetAsyncError() to match the NCCL 2.4.x API
### New features and enhancements in ROCm 2.7.2
This release is a hotfix for ROCm release 2.7.
#### Issues fixed in ROCm 2.7.2
##### A defect in upgrades from older ROCm releases has been fixed.
##### rocprofiler --hiptrace and --hsatrace fails to load roctracer library
In ROCm 2.7.2, the defect where rocprofiler --hiptrace and --hsatrace failed to load the roctracer library has been fixed.
To generate traces, also provide a directory path using the -d <$directoryPath> parameter, for example:
```shell
/opt/rocm/bin/rocprof --hsa-trace -d $PWD/traces /opt/rocm/hip/samples/0_Intro/bit_extract/bit_extract
```
All traces and results will be saved under the $PWD/traces path.
#### Upgrading from ROCm 2.7 to 2.7.2
To upgrade, please remove 2.7 completely as specified [for ubuntu](#how-to-uninstall-from-ubuntu-1604-or-Ubuntu-1804) or [for centos/rhel](#how-to-uninstall-rocm-from-centosrhel-76), and install 2.7.2 as per the [install instructions](#installing-from-amd-rocm-repositories).
#### Other notes
The following steps need to be completed before using rocprofiler features:
##### Step-1: Install roctracer
###### Ubuntu 16.04 or Ubuntu 18.04:
```shell
sudo apt install roctracer-dev
```
###### CentOS/RHEL 7.6:
```shell
sudo yum install roctracer-dev
```
##### Step-2: Add /opt/rocm/roctracer/lib to LD_LIBRARY_PATH
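For example, for the current shell session:
```shell
export LD_LIBRARY_PATH=/opt/rocm/roctracer/lib:$LD_LIBRARY_PATH
```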
### New features and enhancements in ROCm 2.7
#### [rocFFT] Real FFT Functional
Improved real/complex 1D even-length transforms of unit stride. Performance improvements of up to 4.5x are observed. Large problem sizes should see approximately 2x.
#### rocRand Enhancements and Optimizations
- Added support for new datatypes: uchar, ushort, half.
- Improved performance on "Vega 7nm" chips, such as the Radeon Instinct MI50.
- mtgp32 uniform double performance changed due to generation algorithm standardization. Better quality random numbers are now generated, with a 30% decrease in performance.
- Up to 5% performance improvements for other algorithms.
#### RAS
Added support for RAS on Radeon Instinct MI50, including:
- Memory error detection
- Memory error detection counter
#### ROCm-SMI enhancements
Added ROCm-SMI CLI and LIB support for FW version, compute running processes, utilization rates, utilization counter, link error counter, and unique ID.
### New features and enhancements in ROCm 2.6
#### ROCmInfo enhancements
ROCmInfo was extended to do the following:
For ROCr API call errors, including initialization errors, determine whether the error can be explained by:
- ROCk (driver) not being loaded / available
- The user not having membership in the appropriate group - "video"
If neither of the above applies:
- Print the error string that is mapped to the returned error code
- If no error string is available, print the error code in hex
#### Thrust - Functional Support on Vega20
ROCm2.6 contains the first official release of rocThrust and hipCUB. rocThrust is a port of thrust, a parallel algorithm library. hipCUB is a port of CUB, a reusable software component library. Thrust/CUB has been ported to the HIP/ROCm platform to use the rocPRIM library. The HIP ported library works on HIP/ROCm platforms.
Note: rocThrust and hipCUB library replaces https://github.com/ROCmSoftwarePlatform/thrust (hip-thrust), i.e. hip-thrust has been separated into two libraries, rocThrust and hipCUB. Existing hip-thrust users are encouraged to port their code to rocThrust and/or hipCUB. Hip-thrust will be removed from official distribution later this year.
#### MIGraphX v0.3
MIGraphX optimizer adds support to read models frozen from the TensorFlow framework. Further details and example usage are available at https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/wiki/Getting-started:-using-the-new-features-of-MIGraphX-0.3
#### MIOpen 2.0
- This release contains several new features including an immediate mode for selecting convolutions, bfloat16 support, new layers, modes, and algorithms.
- MIOpenDriver, a tool for benchmarking and developing kernels, is now shipped with MIOpen.
- BFloat16 is now supported in HIP and requires an updated rocBLAS as a GEMM backend.
- Immediate mode API now provides the ability to quickly obtain a convolution kernel.
- MIOpen now contains HIP source kernels and implements the ImplicitGEMM kernels. This is a new feature and is currently disabled by default. Use the environment variable "MIOPEN_DEBUG_CONV_IMPLICIT_GEMM=1" to activate this feature (see the example below). ImplicitGEMM requires a HIP version of at least 1.5.9211.
- A new "loss" category of layers has been added, of which CTC loss is the first. See the API reference for more details.
- 2.0 is the last release of active support for gfx803 architectures. In future releases, MIOpen will not actively debug and develop new features specifically for gfx803.
- System Find-Db in-memory cache is disabled by default. Please see the build instructions to enable this feature.
Additional documentation can be found here: https://rocmsoftwareplatform.github.io/MIOpen/doc/html/
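For example, to enable the ImplicitGEMM kernels mentioned in the list above for the current shell session:
```shell
export MIOPEN_DEBUG_CONV_IMPLICIT_GEMM=1
```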
#### BFloat16 software support in rocBLAS/Tensile
Added mixed precision bfloat16/IEEE f32 to gemm_ex. The input and output matrices are bfloat16. All arithmetic is in IEEE f32.
#### AMD Infinity Fabric™ Link enablement
The ability to connect four Radeon Instinct MI60 or Radeon Instinct MI50 boards in two hives or two Radeon Instinct MI60 or Radeon Instinct MI50 boards in four hives via AMD Infinity Fabric™ Link GPU interconnect technology has been added.
#### ROCm-smi features and bug fixes
- mGPU & Vendor check
- Fix clock printout if DPM is disabled
- Fix finding marketing info on CentOS
- Clarify some error messages
#### ROCm-smi-lib enhancements
- Documentation updates
- Improvements to *name_get functions
#### RCCL2 Enablement
RCCL2 supports intranode collective communication using PCIe, Infinity Fabric™, and pinned host memory, as well as internode communication using Ethernet (TCP/IP sockets) and InfiniBand/RoCE (InfiniBand Verbs). Note: For InfiniBand/RoCE, RDMA is not currently supported.
#### rocFFT enhancements
- Added: Debian package with FFT test, benchmark, and sample programs
- Improved: hipFFT interfaces
- Improved: rocFFT CPU reference code, plan generation code and logging code
### New features and enhancements in ROCm 2.5
#### UCX 1.6 support
Support for UCX version 1.6 has been added.
#### BFloat16 GEMM in rocBLAS/Tensile
Software support for BFloat16 on Radeon Instinct MI50, MI60 has been added. This includes:
- Mixed precision GEMM with BFloat16 input and output matrices, and all arithmetic in IEEE32 bit
- Input matrix values are converted from BFloat16 to IEEE32 bit, all arithmetic and accumulation is IEEE32 bit. Output values are rounded from IEEE32 bit to BFloat16
- Accuracy should be correct to 0.5 ULP
#### ROCm-SMI enhancements
CLI support for querying the memory size, driver version, and firmware version has been added to ROCm-smi.
#### [PyTorch] multi-GPU functional support (CPU aggregation/Data Parallel)
Multi-GPU support is enabled in PyTorch using the DataParallel path for versions of PyTorch built using the 06c8aa7a3bbd91cda2fd6255ec82aad21fa1c0d5 commit or later.
#### rocSparse optimization on Radeon Instinct MI50 and MI60
This release includes performance optimizations for csrsv routines in the rocSparse library.
#### [Thrust] Preview
Preview release for early adopters. rocThrust is a port of thrust, a parallel algorithm library. Thrust has been ported to the HIP/ROCm platform to use the rocPRIM library. The HIP ported library works on HIP/ROCm platforms.
Note: This library will replace https://github.com/ROCmSoftwarePlatform/thrust in a future release. The package for rocThrust (this library) currently conflicts with version 2.5 package of thrust. They should not be installed together.
#### Support overlapping kernel execution in same HIP stream
HIP API has been enhanced to allow independent kernels to run in parallel on the same stream.
#### AMD Infinity Fabric&#x2122; Link enablement
The ability to connect four Radeon Instinct MI60 or Radeon Instinct MI50 boards in one hive via AMD Infinity Fabric™ Link GPU interconnect technology has been added.
### New features and enhancements in ROCm 2.4
#### TensorFlow 2.0 support
ROCm 2.4 includes the enhanced compilation toolchain and a set of bug fixes to support TensorFlow 2.0 features natively
#### AMD Infinity Fabric&#x2122; Link enablement
ROCm 2.4 adds support to connect two Radeon Instinct MI60 or Radeon Instinct MI50 boards via AMD Infinity Fabric&#x2122; Link GPU interconnect technology.
### New features and enhancements in ROCm 2.3
#### Mem usage per GPU
Per-GPU memory usage is added to rocm-smi. Information regarding used/total bytes for VRAM, visible VRAM, and GTT can be displayed via the --showmeminfo flag.
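For example (a usage sketch; the set of memory types accepted by --showmeminfo may vary between releases):
```shell
/opt/rocm/bin/rocm-smi --showmeminfo vram vis_vram gtt
```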
#### MIVisionX, v1.1 - ONNX
ONNX parser changes to adjust to new file formats
#### MIGraphX, v0.2
MIGraphX 0.2 supports the following new features:
* New Python API
* Support for additional ONNX operators and fixes that now enable a large set of Imagenet models
* Support for RNN Operators
* Support for multi-stream Execution
* [Experimental] Support for Tensorflow frozen protobuf files
See: [Getting-started:-using-the-new-features-of-MIGraphX-0.2](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/wiki/Getting-started:-using-the-new-features-of-MIGraphX-0.2) for more details
#### MIOpen, v1.8 - 3d convolutions and int8
* This release contains full 3-D convolution support and int8 support for inference.
* Additionally, there are major updates in the performance database for major models including those found in Torchvision.
See: [MIOpen releases](https://github.com/ROCmSoftwarePlatform/MIOpen/releases)
#### Caffe2 - mGPU support
Multi-gpu support is enabled for Caffe2.
#### rocTracer library, ROCm tracing API for collecting runtime API and asynchronous GPU activity traces
HIP/HCC domains support is introduced in rocTracer library.
#### BLAS - Int8 GEMM performance, Int8 functional and performance
Introduces support and performance optimizations for Int8 GEMM, implements TRSV support, and includes improvements and optimizations with Tensile.
#### Prioritized L1/L2/L3 BLAS (functional)
Functional implementation of BLAS L1/L2/L3 functions
#### BLAS - tensile optimization
Improvements and optimizations with tensile
#### MIOpen Int8 support
Support for int8
### New features and enhancements in ROCm 2.2
#### rocSparse Optimization on Vega20
Cache usage optimizations for csrsv (sparse triangular solve), coomv
(SpMV in COO format) and ellmv (SpMV in ELL format) are available.
#### DGEMM and DTRSM Optimization
Improved DGEMM performance for reduced matrix sizes (k=384, k=256)
#### Caffe2
Added support for multi-GPU training
### New features and enhancements in ROCm 2.1
#### RocTracer v1.0 preview release 'rocprof' HSA runtime tracing and statistics support -
Supports HSA API tracing and HSA asynchronous GPU activity, including kernel execution and memory copy
#### Improvements to ROCM-SMI tool -
Added support to show real-time PCIe bandwidth usage via the -b/--showbw flag
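For example:
```shell
/opt/rocm/bin/rocm-smi --showbw
```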
#### DGEMM Optimizations -
Improved DGEMM performance for large square and reduced matrix sizes (k=384, k=256)
### New features and enhancements in ROCm 2.0
#### Adds support for RHEL 7.6 / CentOS 7.6 and Ubuntu 18.04.1
#### Adds support for Vega 7nm, Polaris 12 GPUs
#### Introduces MIVisionX
* A comprehensive set of computer vision and machine intelligence libraries, utilities, and applications bundled into a single toolkit.
#### Improvements to ROCm Libraries
* rocSPARSE & hipSPARSE
* rocBLAS with improved DGEMM efficiency on Vega 7nm
#### MIOpen
* This release contains general bug fixes and an updated performance database
* Group convolutions backwards weights performance has been improved
* RNNs now support fp16
#### Tensorflow multi-gpu and Tensorflow FP16 support for Vega 7nm
* TensorFlow v1.12 is enabled with fp16 support
#### PyTorch/Caffe2 with Vega 7nm Support
* fp16 support is enabled
* Several bug fixes and performance enhancements
* Known Issue: breaking changes are introduced in ROCm 2.0 which are not addressed upstream yet. Meanwhile, please continue to use ROCm fork at https://github.com/ROCmSoftwarePlatform/pytorch
#### Improvements to ROCProfiler tool
* Support for Vega 7nm
#### Support for hipStreamCreateWithPriority
* Creates a stream with the specified priority. Kernels enqueued on such a stream execute with a different priority than kernels enqueued on normal-priority streams; the priority may be higher or lower than that of normal-priority streams.
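A minimal sketch (for illustration; not from the release notes) of creating a prioritized stream with the HIP runtime API, assuming a standard ROCm installation and compilation with hipcc:
```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    // Query the priority range supported by the device
    // (numerically lower values mean higher priority).
    int leastPriority = 0, greatestPriority = 0;
    hipDeviceGetStreamPriorityRange(&leastPriority, &greatestPriority);
    printf("Stream priorities: least=%d greatest=%d\n", leastPriority, greatestPriority);

    // Create one high-priority stream and one default-priority stream.
    hipStream_t highPrio, normal;
    hipStreamCreateWithPriority(&highPrio, hipStreamDefault, greatestPriority);
    hipStreamCreate(&normal);

    // ... launch kernels on highPrio and normal as usual ...

    hipStreamDestroy(highPrio);
    hipStreamDestroy(normal);
    return 0;
}
```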
#### OpenCL 2.0 support
* ROCm 2.0 introduces full support for kernels written in the OpenCL 2.0 C language on certain devices and systems. Applications can detect this support by calling the "clGetDeviceInfo" query function with the "param_name" argument set to "CL_DEVICE_OPENCL_C_VERSION". In order to make use of OpenCL 2.0 C language features, the application must include the option "-cl-std=CL2.0" in the options passed to the runtime API calls responsible for compiling or building device programs. The complete specification for the OpenCL 2.0 C language can be obtained using the following link: https://www.khronos.org/registry/OpenCL/specs/opencl-2.0-openclc.pdf
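For illustration, a small host-side sketch of the detection and build steps described above (assuming an OpenCL ICD and headers are installed; error checking omitted):
```cpp
#include <CL/cl.h>
#include <cstdio>

int main() {
    // Pick the first GPU device on the first platform.
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    // Query the OpenCL C version supported by the device.
    char version[64] = {0};
    clGetDeviceInfo(device, CL_DEVICE_OPENCL_C_VERSION, sizeof(version), version, nullptr);
    printf("Device reports: %s\n", version);   // e.g. "OpenCL C 2.0"

    // When building kernels, request the 2.0 C language explicitly, e.g.:
    // clBuildProgram(program, 1, &device, "-cl-std=CL2.0", nullptr, nullptr);
    return 0;
}
```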
#### Improved Virtual Addressing (48 bit VA) management for Vega 10 and later GPUs
* Fixes Clang AddressSanitizer and potentially other 3rd-party memory debugging tools with ROCm
* Small performance improvement on workloads that do a lot of memory management
* Removes virtual address space limitations on systems with more VRAM than system memory
#### Kubernetes support
### New features and enhancements in ROCm 1.9.2
#### RDMA(MPI) support on Vega 7nm
* Support ROCnRDMA based on Mellanox InfiniBand
#### Improvements to HCC
* Improved link time optimization
#### Improvements to ROCProfiler tool
* General bug fixes and implemented versioning APIs
### New features and enhancements in ROCm 1.9.2
#### RDMA(MPI) support on Vega 7nm
* Support ROCnRDMA based on Mellanox InfiniBand
#### Improvements to HCC
* Improved link time optimization
#### Improvements to ROCProfiler tool
* General bug fixes and implemented versioning APIs
#### Critical bug fixes
### New features and enhancements in ROCm 1.9.1
#### Added DPM support to Vega 7nm
* Dynamic Power Management feature is enabled on Vega 7nm.
#### Fix for 'ROCm profiling' that used to fail with a “Version mismatch between HSA runtime and libhsa-runtime-tools64.so.1” error
### New features and enhancements in ROCm 1.9.0
#### Preview for Vega 7nm
* Enables developer preview support for Vega 7nm
#### System Management Interface
* Adds support for the ROCm SMI (System Management Interface) library, which provides monitoring and management capabilities for AMD GPUs.
#### Improvements to HIP/HCC
* Support for gfx906
* Added deprecation warning for C++AMP. This will be the last version of HCC supporting C++AMP.
* Improved optimization for global address space pointers passing into a GPU kernel
* Fixed several race conditions in the HCC runtime
* Performance tuning to the unpinned copy engine
* Several codegen enhancement fixes in the compiler backend
#### Preview for rocprof Profiling Tool
Developer preview (alpha) of profiling tool rocProfiler. It includes a command-line front-end, `rpl_run.sh`, which enables:
* Cmd-line tool for dumping public per kernel perf-counters/metrics and kernel timestamps
* Input file with counters list and kernels selecting parameters
* Multiple counters groups and app runs supported
* Output results in CSV format
The tool can be installed from the `rocprofiler-dev` package. It will be installed into: `/opt/rocm/bin/rpl_run.sh`
#### Preview for rocr Debug Agent rocr_debug_agent
The ROCr Debug Agent is a library that can be loaded by ROCm Platform Runtime to provide the following functionality:
* Print the state for wavefronts that report memory violation or upon executing a "s_trap 2" instruction.
* Allows SIGINT (`ctrl c`) or SIGTERM (`kill -15`) to print wavefront state of aborted GPU dispatches.
* It is enabled on Vega10 GPUs on ROCm1.9.
The ROCm1.9 release will install the ROCr Debug Agent library at `/opt/rocm/lib/librocr_debug_agent64.so`
#### New distribution support
* Binary package support for Ubuntu 18.04
#### ROCm 1.9 is ABI compatible with KFD in upstream Linux kernels.
Upstream Linux kernels support the following GPUs in these releases:
4.17: Fiji, Polaris 10, Polaris 11
4.18: Fiji, Polaris 10, Polaris 11, Vega10
Some ROCm features are not available in the upstream KFD:
* More system memory available to ROCm applications
* Interoperability between graphics and compute
* RDMA
* IPC
To try ROCm with an upstream kernel, install ROCm as normal, but do not install the rock-dkms package. Also add a udev rule to control `/dev/kfd` permissions:
```
echo 'SUBSYSTEM=="kfd", KERNEL=="kfd", TAG+="uaccess", GROUP="video"' | sudo tee /etc/udev/rules.d/70-kfd.rules
```
### New features as of ROCm 1.8.3
* ROCm 1.8.3 is a minor update meant to fix compatibility issues on Ubuntu releases running kernel 4.15.0-33
### New features as of ROCm 1.8
#### DKMS driver installation
* Debian packages are provided for DKMS on Ubuntu
* RPM packages are provided for CentOS/RHEL 7.4 and 7.5 support
* See the [ROCT-Thunk-Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-1.8.x) and [ROCK-Kernel-Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-1.8.x) for additional documentation on driver setup
#### New distribution support
* Binary package support for Ubuntu 16.04 and 18.04
* Binary package support for CentOS 7.4 and 7.5
* Binary package support for RHEL 7.4 and 7.5
#### Improved OpenMPI via UCX support
* UCX support for OpenMPI
* ROCm RDMA
### New Features as of ROCm 1.7
#### DKMS driver installation
* New driver installation uses Dynamic Kernel Module Support (DKMS)
* Only amdkfd and amdgpu kernel modules are installed to support AMD hardware
* Currently only Debian packages are provided for DKMS (no Fedora support available)
* See the [ROCT-Thunk-Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-1.7.x) and [ROCK-Kernel-Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-1.7.x) for additional documentation on driver setup
### New Features as of ROCm 1.5
#### Developer preview of the new OpenCL 1.2 compatible language runtime and compiler
* OpenCL 2.0 compatible kernel language support with OpenCL 1.2 compatible runtime
* Supports offline ahead of time compilation today; during the Beta phase we will add in-process/in-memory compilation.
#### Binary Package support for Ubuntu 16.04
#### Binary Package support for Fedora 24 is not currently available
#### Dropping binary package support for Ubuntu 14.04, Fedora 23
#### IPC support