Compare commits

...

85 Commits

Author SHA1 Message Date
Lad, Aditya
382ea7553f Remove inaccessible repos 2021-03-23 17:56:10 -07:00
Aditya Lad
2014b47dcb Merge pull request #1420 from RadeonOpenCompute/master
Addition of ROCm release notes
2021-03-23 17:29:17 -07:00
zhang2amd
b9f9bafd9b Merge pull request #1419 from RadeonOpenCompute/roc-4.1.x
ROCm 4.1 Release
2021-03-23 17:17:00 -07:00
Lad, Aditya
ff15f420c6 ROCm 4.1 default.xml edit 2021-03-23 17:10:44 -07:00
Lad, Aditya
f51c9be952 Release ROCm 4.1 Readme.md and default.xml 2021-03-23 17:03:00 -07:00
Lad, Aditya
64e254dc99 Release ROCm 4.1 Readme.md and default.xml 2021-03-23 17:01:33 -07:00
Roopa Malavally
af7f921474 Add files via upload 2021-03-23 17:00:17 -07:00
Roopa Malavally
8b3377749f Add files via upload 2021-03-23 14:16:46 -07:00
Roopa Malavally
c3a3ce55d1 Delete gdb.pdf 2021-03-22 17:29:33 -07:00
Roopa Malavally
64c727449b Delete amd-dbgapi.pdf 2021-03-22 17:29:24 -07:00
Roopa Malavally
182dfc65cf Add files via upload 2021-03-22 16:43:36 -07:00
Roopa Malavally
d529d5c585 Delete AMD_ROCm_Release_Notes_v4.0.pdf 2021-03-22 16:29:11 -07:00
Roopa Malavally
cca6bc4921 Delete HIP_Programming_Guide_v4.0.pdf 2021-03-22 16:28:56 -07:00
Roopa Malavally
e3dbbb6bbf Add files via upload 2021-03-22 16:27:41 -07:00
Roopa Malavally
6e39c80762 Add files via upload 2021-03-22 16:17:38 -07:00
Roopa Malavally
f96f5df625 Add files via upload 2021-03-22 16:07:44 -07:00
Roopa Malavally
0639a312c8 Delete ROCm_Data_Center_Too_API_Manual_4.1.pdf 2021-03-22 16:07:03 -07:00
Roopa Malavally
a2878b1460 Add files via upload 2021-03-22 15:38:16 -07:00
Roopa Malavally
1daf261d25 Delete ROCm_SMI_API_Guide_v4.0.pdf 2021-03-22 15:37:54 -07:00
Roopa Malavally
5848bc3d7e Add files via upload 2021-03-22 15:37:15 -07:00
Roopa Malavally
d9692359ad Delete HIP-API_Guide_v4.0.pdf 2021-03-22 15:36:42 -07:00
Roopa Malavally
25110784cf Add files via upload 2021-03-22 14:53:33 -07:00
Roopa Malavally
9ff31d316f Update README.md 2021-03-10 07:53:11 -08:00
Roopa Malavally
b072119ad6 Update README.md 2021-03-09 09:03:05 -08:00
Roopa Malavally
095544032c Update README.md 2021-02-25 07:28:52 -08:00
Roopa Malavally
26a39a637a Update README.md 2021-02-25 07:24:46 -08:00
Roopa Malavally
6fb55e6f45 Update README.md 2021-02-24 13:16:33 -08:00
Lad, Aditya
290091946f ROCm 4.0.1 Manifest file 2021-01-25 15:11:55 -08:00
Roopa Malavally
2874a8ae6c Update README.md 2021-01-25 15:02:27 -08:00
Roopa Malavally
f62f2b24da Add files via upload 2021-01-20 18:10:40 -08:00
Roopa Malavally
790567e3bd Update README.md 2020-12-18 15:08:54 -08:00
Roopa Malavally
57d7a202d4 Update README.md 2020-12-18 15:08:24 -08:00
Aditya Lad
80d2aa739b Merge pull request #1343 from RadeonOpenCompute/roc-4.0.x
ROCm 4.0 Release
2020-12-18 14:30:27 -08:00
Roopa Malavally
b18851f804 Update README.md 2020-12-18 13:12:20 -08:00
Roopa Malavally
0f0dbf0c92 Update README.md 2020-12-18 13:11:59 -08:00
Lad, Aditya
224a45379f ROCm 4.0 Release 2020-12-18 12:53:33 -08:00
Roopa Malavally
f521943747 Update README.md 2020-12-18 12:52:04 -08:00
Roopa Malavally
2b7f806b10 AMD ROCm Release Notes v4.0 (#1342)
* Update README.md

* Update README.md

* Add files via upload

* Delete AMD_ROCm_Release_Notes_v3.10.pdf

* Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf

* Delete ROCm_Data_Center_API_Guide.pdf

* Delete ROCm_SMI_API_Guide_v3.10.pdf

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md
2020-12-18 12:46:40 -08:00
Roopa Malavally
cd55ef67c9 Add files via upload 2020-12-18 12:32:43 -08:00
Roopa Malavally
9320669eee Delete AMD_ROCm_Release_Notes_v3.10.pdf 2020-12-18 08:19:51 -08:00
Roopa Malavally
c1211c66e3 Delete ROCm_SMI_API_Guide_v3.10.pdf 2020-12-18 08:19:36 -08:00
Roopa Malavally
c8fcff6488 Delete ROCm_Data_Center_API_Guide.pdf 2020-12-18 08:19:18 -08:00
Roopa Malavally
7118076ab4 Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf 2020-12-18 08:18:58 -08:00
Roopa Malavally
ec5523395a Add files via upload 2020-12-17 21:00:59 -08:00
Roopa Malavally
41d8f6a235 Add files via upload 2020-12-17 14:00:59 -08:00
Roopa Malavally
c69eef858a Update README.md 2020-12-10 13:38:07 -08:00
Aditya Lad
5b902ca38c Merge pull request #1316 from RadeonOpenCompute/roc-3.10.x
add rdc and half
2020-12-02 16:11:11 -08:00
Aditya Lad
68c5c198df add rdc and half 2020-12-02 16:07:15 -08:00
Aditya Lad
761ed4e70f Merge pull request #1314 from RadeonOpenCompute/roc-3.10.x
3.10 : Manifest Files
2020-12-01 16:31:55 -08:00
Lad, Aditya
8d5a160f0a 3.10 : Manifest Files 2020-12-01 16:24:12 -08:00
Roopa Malavally
f61c2ad155 Add files via upload 2020-12-01 15:45:33 -08:00
Roopa Malavally
3e2e30cc9a Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf 2020-12-01 15:44:56 -08:00
Roopa Malavally
a1f3b4e6b8 Update README.md 2020-12-01 15:08:53 -08:00
Roopa Malavally
7a3a012e6a Update README.md 2020-11-30 15:45:42 -08:00
Roopa Malavally
5b6ab31db3 Update README.md 2020-11-30 14:12:01 -08:00
Roopa Malavally
acabe2c532 Update README.md 2020-11-30 14:10:06 -08:00
Roopa Malavally
39d8bcd504 Release notes for v3.10 (#1312)
* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Delete matrix.png

* Delete ROCMCLI3.PNG

* Delete ROCMCLI2.PNG

* Delete ROCMCLI1.PNG

* Delete GEMM2.PNG

* Add files via upload

* Delete ROCm_SMI_Manual_v3.9.pdf

* Delete AMD_ROCm_Release_Notes_v3.9.pdf

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md
2020-11-30 14:07:52 -08:00
Roopa Malavally
af6d1e9b26 Add files via upload 2020-11-30 14:01:36 -08:00
Roopa Malavally
1fa1d4a935 Add files via upload 2020-11-30 09:53:49 -08:00
Roopa Malavally
03d93c1948 Delete AMD_ROCm_Release_Notes_v3.9.pdf 2020-11-30 08:55:35 -08:00
Roopa Malavally
93984b0956 Add files via upload 2020-11-30 08:54:52 -08:00
Roopa Malavally
6ccb1cfc0f Add files via upload 2020-11-30 07:29:29 -08:00
Roopa Malavally
f054f82173 Delete ROCm_SMI_Manual_v3.9.pdf 2020-11-30 07:28:11 -08:00
Xu Huisheng
bb6756b58d remove dumplicated remote=roc-github (#1248) 2020-11-18 08:19:23 -08:00
Roopa Malavally
d957b8a17c Update README.md 2020-11-12 13:47:48 -08:00
Roopa Malavally
37ece61861 Update README.md 2020-11-11 14:16:48 -08:00
Roopa Malavally
434023f31b Update README.md 2020-11-03 07:45:53 -08:00
Aditya Lad
a555260687 Merge pull request #1268 from RadeonOpenCompute/roc-3.9.x
Roc 3.9.x
2020-10-28 17:39:17 -07:00
Lad, Aditya
bf89c6bbf1 3.9 documentation 2020-10-28 15:32:49 -07:00
Lad, Aditya
bd4b772255 ROCm 3.9 default.xml 2020-10-28 15:22:02 -07:00
Lad, Aditya
e99027c39c ROCm 3.9 : Manifest files 2020-10-28 15:14:41 -07:00
Roopa Malavally
93c69afb5b Add files via upload 2020-10-28 14:54:54 -07:00
Roopa Malavally
bc2ce5c35b Delete staticlinkinglib.PNG 2020-10-28 14:52:02 -07:00
Roopa Malavally
bf633aec6b Delete forweb.PNG 2020-10-28 14:51:49 -07:00
Roopa Malavally
8608a9a1c9 Delete RDCComponentsrevised.png 2020-10-28 14:51:33 -07:00
Roopa Malavally
76afb05b6c Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf 2020-10-28 14:51:19 -07:00
Roopa Malavally
8bc67a21ea Update README.md 2020-10-19 20:23:07 -07:00
Roopa Malavally
1ce148edb1 Update README.md 2020-10-19 20:21:08 -07:00
Roopa Malavally
cc6147c25b Update README.md 2020-10-19 20:20:20 -07:00
Roopa Malavally
aadd9e68e1 Update README.md 2020-10-19 20:17:34 -07:00
Roopa Malavally
dce5aee2dc Add files via upload 2020-10-19 19:34:27 -07:00
Aditya Lad
0bcae510a3 Merge pull request #1244 from RadeonOpenCompute/roc-3.8.x
Remove MiGraphX from 3.8
2020-09-25 10:06:57 -07:00
Roopa Malavally
506cdcf6db Update README.md 2020-09-25 08:06:49 -07:00
Roopa Malavally
a919ba64c9 Update README.md 2020-09-25 08:00:10 -07:00
Roopa Malavally
fae25ccf9b Update README.md 2020-09-22 16:52:31 -07:00
25 changed files with 56646 additions and 128 deletions

Changed binary files (diffs not shown), including AMD_HIP_API_Guide_v4.1.pdf (new file, 56203 lines; diff suppressed because it is too large) and a removed image (65 KiB).

README.md (516 lines changed)

@@ -1,23 +1,28 @@
# AMD ROCm Release Notes v3.8.0
This page describes the features, fixed issues, and information about downloading and installing the ROCm software.
It also covers known issues in this release.
# AMD ROCm™ v4.1 Release Notes
This document describes the features, fixed issues, and information about downloading and installing the AMD ROCm™ software. It also covers known issues and deprecations in the AMD ROCm v4.1 release.
- [Supported Operating Systems and Documentation Updates](#Supported-Operating-Systems-and-Documentation-Updates)
* [Supported Operating Systems](#Supported-Operating-Systems)
* [ROCm Installation Updates](#ROCm-Installation-Updates)
* [AMD ROCm Documentation Updates](#AMD-ROCm-Documentation-Updates)
- [Driver Compatibility Issue in This Release](#Driver-Compatibility-Issue-in-This-Release)
- [What's New in This Release](#Whats-New-in-This-Release)
* [Hipfort-Interface for GPU Kernel Libraries](#Hipfort-Interface-for-GPU-Kernel-Libraries)
* [TargetID for Multiple Configurations](#TargetID-for-Multiple-Configurations)
* [ROCm Data Center Tool](#ROCm-Data-Center-Tool)
* [Error-Correcting Code Fields in ROCm Data Center Tool](#Error-Correcting-Code-Fields-in-ROCm-Data-Center-Tool)
* [Static Linking Libraries](#Static-Linking-Libraries)
- [Fixed Defects](#Fixed-Defects)
* [ROCm Math and Communication Libraries](#ROCm-Math-and-Communication-Libraries)
* [HIP Enhancements](#HIP-Enhancements)
* [OpenMP Enhancements and Fixes](#OpenMP-Enhancements-and-Fixes)
* [MIOpen Tensile Integration](#MIOpen-Tensile-Integration)
- [Known Issues](#Known-Issues)
- [Deprecations](#Deprecations)
* [Compiler Generated Code Object Version 2 Deprecation ](#Compiler-Generated-Code-Object-Version-2-Deprecation)
- [Deploying ROCm](#Deploying-ROCm)
- [Hardware and Software Support](#Hardware-and-Software-Support)
@@ -28,34 +33,93 @@ It also covers known issues in this release.
# Supported Operating Systems
## Support for Vega 7nm Workstation
## ROCm Installation Updates
This release extends support to the Vega 7nm Workstation (Vega20 GL-XE) version.
## List of Supported Operating Systems
### List of Supported Operating Systems
The AMD ROCm platform is designed to support the following operating systems:
* Ubuntu 20.04 (5.4 and 5.6-oem) and 18.04.5 (Kernel 5.4)
* CentOS 7.8 & RHEL 7.8 (Kernel 3.10.0-1127) (Using devtoolset-7 runtime support)
* CentOS 8.2 & RHEL 8.2 (Kernel 4.18.0 ) (devtoolset is not required)
* SLES 15 SP1
* Ubuntu 20.04.1 (5.4 and 5.6-oem) and 18.04.5 (Kernel 5.4)
* CentOS 7.9 (3.10.0-1127) & RHEL 7.9 (3.10.0-1160.6.1.el7) (Using devtoolset-7 runtime support)
* CentOS 8.3 (4.18.0-193.el8) and RHEL 8.3 (4.18.0-193.1.1.el8) (devtoolset is not required)
* SLES 15 SP2
## Fresh Installation of AMD ROCm v3.8 Recommended
A fresh and clean installation of AMD ROCm v3.8 is recommended. An upgrade from previous releases to AMD ROCm v3.8 is not supported.
### FRESH INSTALLATION OF AMD ROCM V4.1 RECOMMENDED
A complete uninstallation of previous ROCm versions is required before installing a new version of ROCm. An upgrade from previous releases to AMD ROCm v4.1 is not supported. For more information, refer to the AMD ROCm Installation Guide at
For more information, refer to the AMD ROCm Installation Guide at:
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
**Note**: AMD ROCm release v3.3 or prior releases are not fully compatible with AMD ROCm v3.5 and higher versions. You must perform a fresh ROCm installation if you want to upgrade from AMD ROCm v3.3 or older to 3.5 or higher versions and vice-versa.
**Note**: *render group* is required only for Ubuntu v20.04. For all other ROCm supported operating systems, continue to use *video group*.
**Note**: *render* group is required only for Ubuntu v20.04. For all other ROCm supported operating systems, continue to use video group.
* For ROCm v3.5 and releases thereafter,the *clinfo* path is changed to - */opt/rocm/opencl/bin/clinfo*.
* For ROCm v3.5 and releases thereafter, the clinfo path is changed to /opt/rocm/opencl/bin/clinfo.
* For ROCm v3.3 and older releases, the *clinfo* path remains unchanged - */opt/rocm/opencl/bin/x86_64/clinfo*.
* For ROCm v3.3 and older releases, the clinfo path remains /opt/rocm/opencl/bin/x86_64/clinfo.
### ROCM MULTI-VERSION INSTALLATION UPDATE
With the AMD ROCm v4.1 release, the following ROCm multi-version installation changes apply:
* The meta packages rocm-dkms<version> are now deprecated for multi-version ROCm installs. For example, rocm-dkms3.7.0, rocm-dkms3.8.0.
* Multi-version installation of ROCm should be performed by installing rocm-dev<version> using each of the desired ROCm versions. For example, rocm-dev3.7.0, rocm-dev3.8.0, rocm-dev3.9.0.
* Version files must be created for each multi-version rocm <= 4.1.0
  * Command: echo <version> | sudo tee /opt/rocm-<version>/.info/version
  * Example: echo 4.1.0 | sudo tee /opt/rocm-4.1.0/.info/version
* The rock-dkms loadable kernel modules should be installed using a single rock-dkms package.
* ROCm v3.9 and above will not set any ldconfig entries for ROCm libraries for multi-version installation. Users must set LD_LIBRARY_PATH to load the ROCm library version of choice.
**NOTE**: The single version installation of the ROCm stack remains the same. The rocm-dkms package can be used for single version installs and is not deprecated at this time.
# Driver Compatibility Issue in This Release
In certain scenarios, the ROCm 4.1 run-time and userspace environment are not compatible with ROCm v4.0 and older driver implementations for 7nm-based (Vega 20) hardware (MI50 and MI60).
To mitigate issues, the ROCm v4.1 or newer userspace prevents running older drivers for these GPUs.
Users are notified in the following scenarios:
* Bare Metal
* Containers
## Bare Metal
In the bare-metal environment, the following error message displays in the console:
*“HSA Error: Incompatible kernel and userspace, Vega 20 disabled. Upgrade amdgpu.”*
To test the compatibility, run the ROCm v4.1 version of rocminfo using the following instruction:
*/opt/rocm-4.1.0/bin/rocminfo 2>&1 | less*
## Containers
When a container built with error detection for this issue and using a ROCm v4.1 or newer run-time is started on an older kernel, the container fails to start and the following warning appears:
*Error: Incompatible ROCm environment. The Docker container requires the latest kernel driver to operate correctly.
Upgrade the ROCm kernel to v4.1 or newer, or use a container tagged for v4.0.1 or older.*
To inspect the version of the installed kernel driver, run either:
* dpkg --status rock-dkms [Debian-based]
or
* rpm -ql rock-dkms [RHEL, SUSE, and others]
To install or update the driver, follow the installation instructions at:
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
# AMD ROCm Documentation Updates
@@ -64,27 +128,76 @@ https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
The AMD ROCm Installation Guide in this release includes:
* Updated Supported Environments
* HIP Installation Instructions
* Tensorflow ROCm Port: Basic Installations on RHEL v8.2
* Supported Environments
* Installation Instructions for v4.1
* HIP Installation Instructions
For more information, refer to the ROCm documentation website at:
https://rocmdocs.amd.com/en/latest/
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
## AMD ROCm - HIP Documentation Updates
* HIP Repository Information
* HIP Programming Guide v4.1
For more information, see
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.1.pdf
https://rocmdocs.amd.com/en/latest/Programming_Guides/Programming-Guides.html#hip-repository-information
* HIP API Guide v4.1
## ROCm Data Center Tool User Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_API_Guide_v4.1.pdf
* Error-Correction Codes Field and Output Documentation
* HIP-Supported CUDA API Reference Guide v4.1
For more information, refer to the AMD ROCm Data Center User Guide at
https://github.com/RadeonOpenCompute/ROCm/blob/master/HIP_Supported_CUDA_API_Reference_Guide_v4.1.pdf
* HIP FAQ
For more information, refer to
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-FAQ.html#hip-faq
## ROCm Data Center User and API Guide
* ROCm Data Center Tool User Guide
- Grafana Plugin Integration
For more information, refer to the ROCm Data Center User Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.1.pdf
* ROCm Data Center Tool API Guide
For more information, refer to the ROCm Data Center API Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_Data_Center_Tool_API_Manual_4.1.pdf
## ROCm SMI API Documentation Updates
* ROCm SMI API Guide
For more information, refer to the ROCm SMI API Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_SMI_API_GUIDE_v4.1.pdf
## ROC Debugger User and API Guide
* ROC Debugger User Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/Debugging%20with%20ROCGDB%20User%20Guide%20v4.1.pdf
* Debugger API Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD-Debugger%20API%20Guide%20v4.1.pdf
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide.pdf
## General AMD ROCm Documentation Links
@@ -100,129 +213,326 @@ Access the following links for more information:
* For AMD ROCm binary structure, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#build-amd-rocm
https://rocmdocs.amd.com/en/latest/Installation_Guide/Software-Stack-for-AMD-GPU.html
* For AMD ROCm Release History, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#amd-rocm-version-history
https://rocmdocs.amd.com/en/latest/Current_Release_Notes/ROCm-Version-History.html
# What's New in This Release
## Hipfort-Interface for GPU Kernel Libraries
## TARGETID FOR MULTIPLE CONFIGURATIONS
Hipfort is an interface library for accessing GPU Kernels. It provides support to the AMD ROCm architecture from within the Fortran programming language. Currently, the gfortran and HIP-Clang compilers support hipfort. Note, the gfortran compiler belongs to the GNU Compiler Collection (GCC). While hipfc wrapper calls hipcc for the non-fortran kernel source, gfortran is used for FORTRAN applications that call GPU kernels.
The new TargetID functionality allows compilations to specify various configurations of the supported hardware.
Previously, ROCm supported only a single configuration per target.
With the TargetID enhancement, ROCm supports configurations for Linux, PAL, and associated configurations such as XNACK. This feature addresses configurations for the same target in different modes and allows applications to build executables that specify the supported configurations, including the option to be agnostic for the desired setting. A brief compile sketch follows the list below.
### New Code Object Format Version for TargetID
* A new clang option -mcode-object-version can be used to request the legacy code object version 3 or code object version 2. For more information, refer to
https://llvm.org/docs/AMDGPUUsage.html#elf-code-object
* A new clang --offload-arch= option is introduced to specify the offload target architecture(s) for the HIP language.
* The clang --offload-arch= and -mcpu options accept a new Target ID syntax. This allows both the processor and target feature settings to be specified. For more details, refer to
https://llvm.org/docs/AMDGPUUsage.html#amdgpu-target-id
- If a target feature is not specified, it defaults to a new concept of "any". The compiler then produces code that executes on a target configured for either value of the setting, which can impact overall performance. It is recommended to explicitly specify the setting for better performance.
- In particular, the setting for XNACK now defaults to producing less performant code than in previous ROCm releases.
- The legacy clang -mxnack, -mno-xnack, -msram-ecc, and -mno-sram-ecc options are deprecated. They are still supported; however, they will be removed in a future release.
- The new Target ID syntax renames the SRAM ECC feature from sram-ecc to sramecc.
* The clang offload bundler uses the new offload hipv4 for HIP code object version 4. For more information, see
https://clang.llvm.org/docs/ClangOffloadBundler.html
* ROCm v4.1 corrects code object loading to enforce target feature settings of the code object to match the setting of the agent. It also corrects the recording of target feature settings in the code object. As a consequence, the legacy code objects may no longer load due to mismatches.
* gfx802, gfx803, and gfx805 do not support the XNACK target feature in the ROCm v4.1 release.
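A brief compile sketch follows, as referenced above; the file name, kernel name, and the gfx906:xnack- target are illustrative assumptions and not taken from the release notes.

```cpp
// Hypothetical compile line using the new Target ID syntax (gfx906:xnack- is only an example):
//   hipcc --offload-arch=gfx906:xnack- saxpy.hip -o saxpy
#include <hip/hip_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
  const int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = a * x[i] + y[i];
}
```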
### New Code Object Tools
AMD ROCm v4.1 provides new code object tools *roc-obj-ls* and *roc-obj-extract*. These tools allow for the listing and extraction of AMD GPU ROCm code objects that are embedded in HIP executables and shared objects. Each tool supports a --help option that provides more information.
Refer to the HIP Programming Guide v4.1 for additional information and examples.
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.1.pdf
**Note**
The extractkernel tool in previous AMD ROCm releases has been removed from the AMD ROCm v4.1 release
and will no longer be supported.
**Note**
The roc-obj-ls and roc-obj-extract tools may generate an error about the following missing Perl modules:
* File::Which
* File::BaseDir
* File::Copy
* URI::Encode
This error is due to missing dependencies in the hip-base installer package. As a workaround, you may use the following instructions to install the Perl modules:
*Ubuntu*
apt-get install libfile-which-perl libfile-basedir-perl libfile-copy-recursive-perl liburi-encode-perl
*CentOS*
yum install "perl(File::Which)" "perl(File::BaseDir)" "perl(File::Copy)" "perl(URI::Encode)"
The hipfort interface library is meant for Fortran developers with a focus on gfortran users.
For information on HIPFort installation and examples, see
https://github.com/ROCmSoftwarePlatform/hipfort
## ROCm Data Center Tool
The ROCm™ Data Center Tool™ simplifies the administration and addresses key infrastructure challenges in AMD GPUs in cluster and datacenter environments. The important features of this tool are:
### Grafana Integration
* GPU telemetry
The ROCm Data Center (RDC) Tool is enhanced with the Grafana plugin. Grafana is a common monitoring stack used for storing and visualizing time series data. Prometheus acts as the storage backend, and Grafana is used as the interface for analysis and visualization. Grafana has a plethora of visualization options and can be integrated with Prometheus for the ROCm Data Center (RDC) dashboard.
* GPU statistics for jobs
For more information about Grafana integration and installation, refer to the ROCm Data Center Tool User guide at:
* Integration with third-party tools
* Open source
The ROCm Data Center Tool can be used in the standalone mode if all components are installed. The same set of features is also available in a library format that can be used by existing management tools.
![ScreenShot](https://github.com/Rmalavally/ROCm/blob/master/RDCComponentsrevised.png)
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.1.pdf
Refer to the ROCm Data Center Tool™ User Guide for more details on the different modes of operation.
NOTE: The ROCm Data Center User Guide is intended to provide an overview of ROCm Data Center Tool features and how system administrators and Data Center (or HPC) users can administer and configure AMD GPUs. The guide also provides an overview of its components and open source developer handbook.
## ROCm Math and Communication Libraries
For installation information on different distributions, refer to the ROCm Data Center User Guide at
### rocSPARSE
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide.pdf
rocSPARSE extends support for:
* gebsrmm
* gebsrmv
* gebsrsv
* coo2dense and dense2coo
* generic API including axpby, gather, scatter, rot, spvv, spmv, spgemm, sparsetodense, densetosparse
* mixed indexing types in matrix formats
For more information, see
https://rocsparse.readthedocs.io/en/latest/
### Error Correcting Code Fields in ROCm Data Center Tool
### rocSOLVER
The ROCm Data Center (RDC) tool is enhanced to provide counters that track correctable and uncorrectable errors. While single-bit-per-word errors can be corrected, double-bit-per-word errors cannot.
rocSOLVER extends support for:
The RDC tool now helps monitor and protect against undetected memory data corruption. If the system uses ECC-enabled memory, the ROCm Data Center tool can report the error counters to monitor the status of the memory.
* Eigensolver routines for symmetric/hermitian matrices:
- STERF, STEQR
* Linear solvers for general non-square systems:
- GELS (API added with batched and strided_batched versions. Only the overdetermined non-transpose case is implemented in
this release. Other cases will return rocblas_status_not_implemented status for now.)
* Extended test coverage for functions returning information
![ScreenShot](https://github.com/Rmalavally/ROCm/blob/master/forweb.PNG)
* Changelog file
## Static Linking Libraries
* Tridiagonalization routines for symmetric and hermitian matrices:
- LATRD
- SYTD2, SYTRD (with batched and strided_batched versions)
- HETD2, HETRD (with batched and strided_batched versions)
* Sample code and unit test for unified memory model/Heterogeneous Memory Management (HMM)
The underlying libraries of AMD ROCm are dynamic and are called shared objects (.so) in Linux.
The AMD ROCm v3.8 release includes the capability to build static ROCm libraries and link applications to them statically. CMake target files enable linking an application statically to ROCm libraries, and each component exports the required dependencies for linking. The static libraries are called archives (.a) in Linux.
For more information, see
This release also includes the changes required for all the components to work in a static environment. The components have been successfully tested for basic functionality, such as *rocminfo* / *rocm_bandwidth_test*, and archives.
https://rocsolver.readthedocs.io/en/latest/
In the AMD ROCm v3.8 release, the following libraries support static linking:
### hipCUB
![ScreenShot](https://github.com/Rmalavally/ROCm/blob/master/staticlinkinglib.PNG)
The new iterator DiscardOutputIterator in hipCUB represents a special kind of pointer that ignores values written to it upon dereference. It is useful for ignoring the output of certain algorithms without wasting memory capacity or bandwidth. DiscardOutputIterator may also be used to determine the size of an algorithm's output when that size is not known in advance.
# Fixed Defects
The following defects are fixed in this release:
For more information, see
https://hipcub.readthedocs.io/en/latest/
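The following is a minimal sketch (not from the release notes) of using DiscardOutputIterator to obtain only the count of an algorithm's output. It assumes hipCUB mirrors CUB's DeviceSelect::Unique interface; buffer names are illustrative and error checking is omitted.

```cpp
// Counts the compacted output of DeviceSelect::Unique without allocating an output
// array, by writing through a DiscardOutputIterator.
#include <hip/hip_runtime.h>
#include <hipcub/hipcub.hpp>
#include <cstdio>

int main() {
  const int n = 8;
  int h_in[n] = {0, 0, 1, 1, 1, 2, 3, 3};
  int *d_in = nullptr, *d_num_unique = nullptr;
  hipMalloc(&d_in, n * sizeof(int));
  hipMalloc(&d_num_unique, sizeof(int));
  hipMemcpy(d_in, h_in, n * sizeof(int), hipMemcpyHostToDevice);

  hipcub::DiscardOutputIterator<> d_out;  // values written here are ignored

  // Standard two-pass pattern: query temporary storage size, then run the algorithm.
  void* d_temp = nullptr;
  size_t temp_bytes = 0;
  hipcub::DeviceSelect::Unique(d_temp, temp_bytes, d_in, d_out, d_num_unique, n);
  hipMalloc(&d_temp, temp_bytes);
  hipcub::DeviceSelect::Unique(d_temp, temp_bytes, d_in, d_out, d_num_unique, n);

  int h_num_unique = 0;
  hipMemcpy(&h_num_unique, d_num_unique, sizeof(int), hipMemcpyDeviceToHost);
  printf("unique runs: %d\n", h_num_unique);  // expected: 4

  hipFree(d_temp);
  hipFree(d_num_unique);
  hipFree(d_in);
  return 0;
}
```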
## HIP Enhancements
### Support for hipEventDisableTiming Flag
HIP now supports the hipEventDisableTiming flag for hipEventCreateWithFlags. Note, events created with this flag do not record profiling data and provide optimal performance when used for synchronization.
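The following is a minimal sketch (stream and event names are illustrative, error checking omitted) of creating a timing-disabled event and using it purely for cross-stream synchronization.

```cpp
// A timing-disabled event used only to order work between two streams.
#include <hip/hip_runtime.h>

int main() {
  hipStream_t producer, consumer;
  hipStreamCreate(&producer);
  hipStreamCreate(&consumer);

  hipEvent_t ready;
  hipEventCreateWithFlags(&ready, hipEventDisableTiming);  // no profiling data is recorded

  // ... enqueue work on `producer` here ...
  hipEventRecord(ready, producer);          // mark the completion point in the producer stream
  hipStreamWaitEvent(consumer, ready, 0);   // consumer waits on the event without blocking the host
  // ... enqueue dependent work on `consumer` here ...

  hipStreamSynchronize(consumer);
  hipEventDestroy(ready);
  hipStreamDestroy(producer);
  hipStreamDestroy(consumer);
  return 0;
}
```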
### Cooperative Group Functions
Cooperative Groups defines, synchronizes, and communicates between groups of threads and blocks for efficiency and ease of management. HIP now supports the following kernel language Cooperative Groups types and functions:
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/CG1.PNG)
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/CG2.PNG)
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/CG3.PNG)
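As a minimal sketch of the kernel-language types listed above, the following block-level sum uses this_thread_block(); the kernel name and the assumption of a 256-thread block are illustrative.

```cpp
// Cooperative Groups kernel-language API in HIP (assumes blockDim.x == 256).
#include <hip/hip_runtime.h>
#include <hip/hip_cooperative_groups.h>

namespace cg = cooperative_groups;

__global__ void block_sum(const int* in, int* out) {
  cg::thread_block block = cg::this_thread_block();  // group covering the current block
  __shared__ int partial[256];
  const unsigned tid = block.thread_rank();
  partial[tid] = in[blockIdx.x * block.size() + tid];
  block.sync();                                      // equivalent to __syncthreads()
  if (tid == 0) {
    int sum = 0;
    for (unsigned i = 0; i < block.size(); ++i) sum += partial[i];
    out[blockIdx.x] = sum;
  }
}
```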
### Support for Extern Shared Declarations
Previously, dynamic shared memory had to be declared using the HIP_DYNAMIC_SHARED macro for accuracy, as using static shared memory in the same kernel could result in overlapping memory ranges and data races.
Now, the HIP-Clang compiler provides support for extern shared declarations, and the HIP_DYNAMIC_SHARED option is no longer required.
You may use the standard extern definition:
extern __shared__ type var[];
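A minimal sketch of the standard declaration follows; the kernel name and sizes are illustrative, and the dynamic shared-memory size is supplied at launch as shown in the trailing comment.

```cpp
// Standard extern __shared__ declaration; no HIP_DYNAMIC_SHARED macro needed.
#include <hip/hip_runtime.h>

__global__ void reverse_block(float* data) {
  extern __shared__ float tile[];                       // sized at launch time
  const int tid = threadIdx.x;
  tile[tid] = data[blockIdx.x * blockDim.x + tid];
  __syncthreads();
  data[blockIdx.x * blockDim.x + tid] = tile[blockDim.x - 1 - tid];
}

// Launch with the dynamic shared-memory size in bytes as the third launch parameter:
//   hipLaunchKernelGGL(reverse_block, dim3(numBlocks), dim3(blockSize),
//                      blockSize * sizeof(float), /*stream=*/0, d_data);
```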
## OpenMP Enhancements and Fixes
This release includes the following OpenMP changes:
* Usability Enhancements
* Fixes to Internal Clang Math Headers
* OpenMP Defect Fixes
### Usability Enhancements
* OMPD updates for flang
* To support OpenMP debugging, the selected OpenMP runtime sources are included in lib-debug/src/openmp. The ROCgdb debugger
will find these automatically.
* Threadsafe hsa plugin for libomptarget
* Support multiple devices with malloc and hostrpc
* Improve hostrpc version check
* Add max reduction offload feature to flang
* Integration of changes to support HPC Toolkit
* Support for fprintf
* Initial support for GPU malloc and Free. The internal (device rtl) is required for GPU malloc and Free for nested parallelism.
GPU malloc and Free are now replaced, which improves the device memory footprint.
* Increase detail of debug printing controlled by LIBOMPTARGET_KERNEL_TRACE environment variable
* Add support for -gpubnames in Flang Driver
### Fixes to Internal Clang Math Headers
This release includes a set of changes applied to Clang internal headers to support OpenMP C, C++, FORTRAN, and HIP C. This establishes consistency between NVPTX and AMDGCN offloading, and OpenMP, HIP, and CUDA. OpenMP uses function variants and header overlays to define device versions of functions. This causes Clang LLVM IR codegen to mangle names of variants in both the definition and callsites of functions defined in the internal Clang headers. The changes apply to headers found in the installation subdirectory lib/clang/11.0.0/include.
The changes also temporarily eliminate the use of the libm bitcode libraries for C and C++. Although math functions are now defined with internal clang headers, a bitcode library of the C functions defined in the headers is still built for the FORTRAN toolchain linking. This is because FORTRAN cannot use C math headers. This bitcode library is installed in lib/libdevice/libm-.bc. The source build of the bitcode library is implemented with the aomp-extras repository and the component-built script build_extras.sh.
### OpenMP Defect Fixes
The following OpenMP defects are fixed in this release:
* Openmpi configuration issue with real16.
* [flang] The AOMP 11.7-1 Fortran compiler claims to support the -isystem flag, but ignores it.
* [flang] Producing an internal compiler error when the CHARACTER type is used with KIND.
* [flang] openmp map clause on complex allocatable expressions !$omp target data map( chunk%tiles(1)%field%density0).
* Add a fatal error if missing -Xopenmp-target or -march options when -fopenmp-targets is specified. However, this requirement is not
applicable for offloading to the host when there is only a single target and that target is the host.
* Openmp error message output for no_rocm_device_lib was asserting.
* Changed linkage on constant per-kernel symbols from external to weaklinkageonly to prevent duplicate symbols when building kokkos.
* Add environment variables ROCM_LLD_ARGS, ROCM_LINK_ARGS, and ROCM_SELECT_ARGS to test driver options without a compiler rebuild.
* Fix problems with device math functions being ambiguous, especially the pow function (see the sketch following this list).
* Fix aompcc to accept file type cxx.
* Fix a latent race between host runtime and devicertl.
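As a minimal sketch related to the device math fix above, the following OpenMP offload example calls pow on the device; the compile line in the comment assumes an AMD OpenMP offload toolchain and uses gfx906 only as an example target.

```cpp
// OpenMP target offload calling a device math function (pow).
// Assumed compile line (gfx906 is only an example target):
//   clang++ -O2 -fopenmp -fopenmp-targets=amdgcn-amd-amdhsa \
//           -Xopenmp-target=amdgcn-amd-amdhsa -march=gfx906 pow_offload.cpp -o pow_offload
#include <cmath>
#include <cstdio>

int main() {
  constexpr int n = 1024;
  double out[n];
  #pragma omp target teams distribute parallel for map(from: out[0:n])
  for (int i = 0; i < n; ++i)
    out[i] = pow(static_cast<double>(i), 2.0);  // device-side math call
  printf("out[3] = %f\n", out[3]);              // expected 9.0
  return 0;
}
```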
## MIOPEN TENSILE INTEGRATION
MIOpenTensile provides host-callable interfaces to the Tensile library and supports the HIP programming model. You may use the Tensile feature in the HIP backend by setting the following build environment variable to ON:
MIOPEN_USE_MIOPENTENSILE=ON
MIOpenTensile is an open-source collaboration tool where external entities can submit source pull requests (PRs) for updates. MIOpenTensile maintainers review and approve the PRs using standard open-source practices.
For more information about the sources and the build system, see
https://github.com/ROCmSoftwarePlatform/MIOpenTensile
* GPU Kernel C++ Names Not Demangled
* MIGraphX Fails for fp16 Datatype
* Issue with Peer-to-Peer Transfers
* rocprof option --parallel-kernels Not Supported in this Release
# Known Issues
## Undefined Reference Issue in Statically Linked Libraries
The following are the known issues in this release.
Libraries and applications statically linked using flags -rtlib=compiler-rt, such as rocBLAS, have an implicit dependency on gcc_s not captured in their CMAKE configuration.
## Upgrade to AMD ROCm v4.1 Not Supported
Client applications may require linking with an additional library -lgcc_s to resolve the undefined reference to symbol '_Unwind_Resume@@GCC_3.0'.
An upgrade from previous releases to AMD ROCm v4.1 is not supported. A complete uninstallation of previous ROCm versions is required before installing a new version of ROCm.
## MIGraphX Pooling Operation Fails for Some Models
## Performance Impact for Kernel Launch Bound Attribute
MIGraphX does not work for some models with pooling operations and the following error appears:
Kernels without the *__launch_bounds__* attribute assume the default maximum threads-per-block value. In the previous ROCm release, this value was 256. In the ROCm v4.1 release, it is changed to 1024. The objective of this change is to ensure that the actual threads-per-block value used to launch a kernel is, by default, always within the launch bounds, thus establishing the correctness of HIP programs.
*test_gpu_ops_test FAILED*
**NOTE**: Using the above-mentioned approach may incur performance degradation in certain cases. Users must add a minimum launch bound to each kernel that covers all possible threads-per-block values used to launch that kernel, for both correctness and performance.
This issue is currently under investigation and there is no known workaround currently.
The recommended workaround to recover the performance is to add *--gpu-max-threads-per-block=256* to the compilation options for HIP programs.
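A minimal sketch of the per-kernel alternative follows; the kernel name is illustrative, and the commented compile line repeats the workaround flag mentioned above.

```cpp
// Pinning a kernel's launch bounds explicitly instead of relying on the new 1024-thread default.
#include <hip/hip_runtime.h>

__global__ void __launch_bounds__(256) scale256(float* data, float factor, int n) {
  const int i = blockIdx.x * blockDim.x + threadIdx.x;  // launch with blockDim.x <= 256
  if (i < n) data[i] *= factor;
}

// Alternatively, the compile-time workaround mentioned above restores the previous
// default for every kernel in a translation unit:
//   hipcc --gpu-max-threads-per-block=256 app.cpp -o app
```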
## MIVisionX Installation Error on CentOS/RHEL8.2 and SLES 15
## Issue with Passing a Subset of GPUs in a Multi-GPU System
Installing ROCm on MIVisionX results in the following error on CentOS/RHEL8.2 and SLES 15:
ROCm support for passing individual GPUs via the docker --device flag in a Docker run command has a known issue when passing a subset of GPUs in a multi-GPU system. The command runs without any warning or error notification; however, the outputs of GPU executables may be randomly corrupted.
*"Problem: nothing provides opencv needed"*
Using GPU targeting via the Docker command is not recommended for users of ROCm 4.1. There is no workaround for this issue currently.
As a workaround, install opencv before installing MIVisionX.
## Performance Impact for LDS-Bound Kernels
The compiler in ROCm v4.1 generates LDS load and store instructions that incorrectly assume equal performance between aligned and misaligned accesses. While this does not impact code correctness, it may result in sub-optimal performance.
This issue is under investigation, and there is no known workaround at this time.
# Deprecations
This section describes deprecations and removals in AMD ROCm.
## Compiler Generated Code Object Version 2 Deprecation
Compiler-generated code object version 2 is no longer supported and has been completely removed. Support for loading code object version 2 is also deprecated with no announced removal release.
# Deploying ROCm
AMD hosts both Debian and RPM repositories for the ROCm v3.8.x packages.
AMD hosts both Debian and RPM repositories for the ROCm packages.
For more information on ROCM installation on all platforms, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
# Machine Learning and High Performance Computing Software Stack for AMD GPU
For an updated version of the software stack for AMD GPU, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#software-stack-for-amd-gpu
# Hardware and Software Support
ROCm is focused on using AMD GPUs to accelerate computational tasks such as machine learning, engineering workloads, and scientific computing.
In order to focus our development efforts on these domains of interest, ROCm supports a targeted set of hardware configurations which are detailed further in this section.
**Note:** The AMD ROCm™ open software platform is a compute stack for headless system deployments. GUI-based software applications are currently not supported.
#### Supported GPUs
Because the ROCm Platform has a focus on particular computational domains, we offer official support for a selection of AMD GPUs that are designed to offer good performance and price in these domains.
**Note:** The integrated GPUs of Ryzen are not officially supported targets for ROCm.
ROCm officially supports AMD GPUs that use following chips:
* GFX8 GPUs
* "Fiji" chips, such as on the AMD Radeon R9 Fury X and Radeon Instinct MI8
* "Polaris 10" chips, such as on the AMD Radeon RX 580 and Radeon Instinct MI6
* GFX9 GPUs
* "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
* "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII
* GFX9 GPUs
- "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
- "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII, Radeon Pro VII
* CDNA GPUs
- MI100 chips such as on the AMD Instinct™ MI100
ROCm is a collection of software ranging from drivers and runtimes to libraries and developer tools.
Some of this software may work with more GPUs than the "officially supported" list above, though AMD does not make any official claims of support for these devices on the ROCm software platform.
The following list of GPUs are enabled in the ROCm software, though full support is not guaranteed:
* GFX8 GPUs
@@ -238,7 +548,7 @@ As described [below](#limited-support), "Carrizo", "Bristol Ridge", and "Raven R
However, they are not enabled in the HIP runtime, and may not work due to motherboard or OEM hardware limitations.
As such, they are not yet officially supported targets for ROCm.
For a more detailed list of hardware support, please see [the following documentation](https://rocm.github.io/hardware.html).
For a more detailed list of hardware support, please see [the following documentation](https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units).
#### Supported CPUs
As described above, GFX8 GPUs require PCIe 3.0 with PCIe atomics in order to run ROCm.
@@ -281,7 +591,7 @@ from the list provided above for compatibility purposes.
##### Limited support
* ROCm 2.9.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
* ROCm 4.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
* Thunderbolt 1, 2, and 3 enabled breakout boxes should now be able to work with ROCm. Thunderbolt 1 and 2 are PCIe 2.0 based, and thus are only supported with GPUs that do not require PCIe 3.1.0 atomics (e.g. Vega 10). However, we have done no testing on this configuration and would need community support due to limited access to this type of equipment.
* AMD "Carrizo" and "Bristol Ridge" APUs are enabled to run OpenCL, but do not yet support HIP or our libraries built on top of these compilers and runtimes.
* As of ROCm 2.1, "Carrizo" and "Bristol Ridge" require the use of upstream kernel drivers.
@@ -294,11 +604,13 @@ from the list provided above for compatibility purposes.
##### Not supported
* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported in ROCm 2.9.x
* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported.
* We do not support GFX8-class GPUs (Fiji, Polaris, etc.) on CPUs that do not have PCIe 3.0 with PCIe atomics.
* As such, we do not support AMD Carrizo and Kaveri APUs as hosts for such GPUs.
* Thunderbolt 1 and 2 enabled GPUs are not supported by GFX8 GPUs on ROCm. Thunderbolt 1 & 2 are based on PCIe 2.0.
In the default ROCm configuration, GFX8 and GFX9 GPUs require PCI Express 3.0 with PCIe atomics. The ROCm platform leverages these advanced capabilities to allow features such as user-level submission of work from the host to the GPU. This includes PCIe atomic Fetch and Add, Compare and Swap, Unconditional Swap, and AtomicOp Completion.
#### ROCm support in upstream Linux kernels
As of ROCm 1.9.0, the ROCm user-level software is compatible with the AMD drivers in certain upstream Linux kernels.
@@ -325,9 +637,17 @@ For users that have the option of using either AMD's or the upstreamed driver, t
| | | Does not include most up-to-date firmware |
# Disclaimer
## Machine Learning and High Performance Computing Software Stack for AMD GPU
AMD®, the AMD Arrow logo, AMD Instinct™, Radeon™, ROCm® and combinations thereof are trademarks of Advanced Micro Devices, Inc.
For an updated version of the software stack for AMD GPU, see
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
PCIe® is a registered trademark of PCI-SIG Corporation. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
Google® is a registered trademark of Google LLC.
Ubuntu and the Ubuntu logo are registered trademarks of Canonical Ltd.
Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#software-stack-for-amd-gpu

ROCm_SMI_API_GUIDE_v4.1.pdf: new binary file (diff not shown). Other changed binary files are not shown.

default.xml

@@ -1,27 +1,27 @@
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<remote name="roc-github"
fetch="http://github.com/RadeonOpenCompute/" />
fetch="http://github.com/RadeonOpenCompute/" />
<remote name="rocm-devtools"
fetch="https://github.com/ROCm-Developer-Tools/" />
fetch="https://github.com/ROCm-Developer-Tools/" />
<remote name="rocm-swplat"
fetch="https://github.com/ROCmSoftwarePlatform/" />
fetch="https://github.com/ROCmSoftwarePlatform/" />
<remote name="gpuopen-libs"
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
<remote name="gpuopen-tools"
fetch="https://github.com/GPUOpen-Tools/" />
fetch="https://github.com/GPUOpen-Tools/" />
<remote name="KhronosGroup"
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-3.8.0"
remote="roc-github"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCM-->
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-4.1.0"
remote="roc-github"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCM-->
<project name="ROCK-Kernel-Driver" />
<project name="ROCT-Thunk-Interface" />
<project name="ROCR-Runtime" />
<project name="ROC-smi" />
<project name="rocm_smi_lib" remote="roc-github" />
<project name="rocm_smi_lib" />
<project name="rocm-cmake" />
<project name="rocminfo" />
<project name="rocprofiler" remote="rocm-devtools" />
@@ -29,24 +29,27 @@
<project name="ROCm-OpenCL-Runtime" />
<project path="ROCm-OpenCL-Runtime/api/opencl/khronos/icd" name="OpenCL-ICD-Loader" remote="KhronosGroup" revision="6c03f8b58fafd9dd693eaac826749a5cfad515f8" />
<project name="clang-ocl" />
<!--HIP Projects-->
<!--HIP Projects-->
<project name="HIP" remote="rocm-devtools" />
<project name="HIP-Examples" remote="rocm-devtools" />
<project name="ROCclr" remote="rocm-devtools" />
<project name="HIPIFY" remote="rocm-devtools" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm-project" path="llvm_amd-stg-open" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm-project" />
<project name="ROCm-Device-Libs" />
<project name="atmi" />
<project name="ROCm-CompilerSupport" />
<project name="rocr_debug_agent" remote="rocm-devtools" />
<project name="rocm_bandwidth_test" />
<project name="half" remote="rocm-swplat" revision="37742ce15b76b44e4b271c1e66d13d2fa7bd003e" />
<project name="RCP" remote="gpuopen-tools" revision="3a49405a1500067c49d181844ec90aea606055bb" />
<!-- gdb projects -->
<!-- gdb projects -->
<project name="ROCgdb" remote="rocm-devtools" />
<project name="ROCdbgapi" remote="rocm-devtools" />
<!-- ROCm Libraries -->
<!-- ROCm Libraries -->
<project name="rdc" remote="roc-github" />
<project name="rocBLAS" remote="rocm-swplat" />
<project name="Tensile" remote="rocm-swplat" />
<project name="hipBLAS" remote="rocm-swplat" />
<project name="rocFFT" remote="rocm-swplat" />
<project name="rocRAND" remote="rocm-swplat" />
@@ -62,18 +65,10 @@
<project name="hipCUB" remote="rocm-swplat" />
<project name="rocPRIM" remote="rocm-swplat" />
<project name="hipfort" remote="rocm-swplat" />
<project name="AMDMIGraphX" remote="rocm-swplat" />
<project name="ROCmValidationSuite" remote="rocm-devtools" />
<!-- Projects for AOMP -->
<project name="ROCT-Thunk-Interface" path="aomp/roct-thunk-interface" remote="roc-github" />
<project name="ROCR-Runtime" path="aomp/rocr-runtime" remote="roc-github" />
<project name="ROCm-Device-Libs" path="aomp/rocm-device-libs" remote="roc-github" />
<project name="ROCm-CompilerSupport" path="aomp/rocm-compilersupport" remote="roc-github" />
<project name="rocminfo" path="aomp/rocminfo" remote="roc-github" />
<project name="HIP" path="aomp/hip-on-vdi" remote="rocm-devtools" />
<project name="aomp" path="aomp/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="aomp/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="aomp/flang" remote="rocm-devtools" />
<project name="amd-llvm-project" path="aomp/amd-llvm-project" remote="rocm-devtools" />
<project name="ROCclr" path="aomp/vdi" remote="rocm-devtools" />
<project name="ROCm-OpenCL-Runtime" path="aomp/opencl-on-vdi" remote="roc-github" />
<!-- Projects for OpenMP-Extras -->
<project name="aomp" path="openmp-extras/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="openmp-extras/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="openmp-extras/flang" remote="rocm-devtools" />
</manifest>

Removed image (94 KiB; diff not shown).

New image files (diffs not shown): images/CG1.PNG (3.8 KiB), images/CG2.PNG (31 KiB), images/CG3.PNG (5.0 KiB), images/CLI1.PNG (7.7 KiB), images/CLI2.PNG (13 KiB), images/SMI.PNG (13 KiB), images/keyfeatures.PNG (51 KiB), images/latestGPU.PNG (47 KiB), images/rocsolverAPI.PNG (58 KiB).

Removed image (22 KiB; diff not shown).