Compare commits

...

73 Commits

Author SHA1 Message Date
zhozha
5ffdcf84ab Update to ROCm 4.3 manifest 2021-08-02 17:33:25 -07:00
Roopa Malavally
085295daea Update README.md 2021-08-02 16:51:39 -07:00
Roopa Malavally
cf5cec2580 ROCm v4.3 Release Notes (#1540)
* Delete AMD HIP Programming Guide_v4.2.pdf

* Delete AMD_HIP_API_Guide_4.2.pdf

* Delete AMD_ROCm_DataCenter_Tool_User_Guide_v4.2.pdf

* Delete AMD_ROCm_Release_Notes_v4.2.pdf

* Delete HIP_Supported_CUDA_API_Reference_Guide_v4.2.pdf

* Delete ROCm_Data_Center_Tool_API_Guide_v4.2.pdf

* Delete ROCm_Debugger_API_Guide_v4.2.pdf

* Delete ROCm_Debugger_User_Guide_v4.2.pdf

* Delete ROCm_SMI_Manual_4.2.pdf

* Update README.md

* Update README.md

* Delete CG1.PNG

* Delete CG2.PNG

* Delete CG3.PNG

* Delete CGMain.PNG

* Delete CLI1.PNG

* Delete CLI2.PNG

* Delete SMI.PNG

* Delete keyfeatures.PNG

* Delete latestGPU.PNG

* Delete rocsolverAPI.PNG

* Create test.rst

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md
2021-08-02 16:39:54 -07:00
Roopa Malavally
e7a93ae3f5 Add files via upload 2021-08-01 18:53:14 -07:00
Roopa Malavally
e3b7d2f39d Delete AMD_ROCDebugger_API.pdf.pdf 2021-08-01 18:52:58 -07:00
Roopa Malavally
0c4565d913 Delete AMD_ROCDebugger_User_Guide.pdf.pdf 2021-08-01 18:52:30 -07:00
Roopa Malavally
313a589132 Add files via upload 2021-08-01 18:52:03 -07:00
Roopa Malavally
1caf5514e8 Add files via upload 2021-08-01 18:33:33 -07:00
Roopa Malavally
d029ad24cf Add files via upload 2021-08-01 18:09:17 -07:00
Roopa Malavally
ca6638d917 Add files via upload 2021-08-01 17:42:39 -07:00
Roopa Malavally
5cba920022 Add files via upload 2021-08-01 16:21:37 -07:00
Roopa Malavally
cefc8ef1d7 Add files via upload 2021-08-01 16:17:54 -07:00
Roopa Malavally
b71c5705a2 Delete ROCm_SMI_Manual_4.2.pdf 2021-08-01 16:13:32 -07:00
Roopa Malavally
977a1d14cd Delete ROCm_Debugger_User_Guide_v4.2.pdf 2021-08-01 16:13:17 -07:00
Roopa Malavally
3ab60d1326 Delete ROCm_Debugger_API_Guide_v4.2.pdf 2021-08-01 16:13:04 -07:00
Roopa Malavally
4b5b13294e Delete ROCm_Data_Center_Tool_API_Guide_v4.2.pdf 2021-08-01 16:12:50 -07:00
Roopa Malavally
ce66b14d9e Delete HIP_Supported_CUDA_API_Reference_Guide_v4.2.pdf 2021-08-01 16:12:32 -07:00
Roopa Malavally
01f63f546f Delete AMD_ROCm_Release_Notes_v4.2.pdf 2021-08-01 16:12:20 -07:00
Roopa Malavally
72eab2779e Delete AMD_ROCm_DataCenter_Tool_User_Guide_v4.2.pdf 2021-08-01 16:12:05 -07:00
Roopa Malavally
8a366db3d7 Delete AMD_HIP_API_Guide_4.2.pdf 2021-08-01 16:11:50 -07:00
Roopa Malavally
8267a84345 Delete AMD HIP Programming Guide_v4.2.pdf 2021-08-01 16:11:30 -07:00
zhang2amd
f7b3a38d49 Merge pull request #1470 from RadeonOpenCompute/roc-4.2.x
4.2 : Manifest Files
2021-05-11 14:58:43 -07:00
Lad, Aditya
12e3bb376b 4.2 : Manifest Files 2021-05-11 14:37:52 -07:00
Roopa Malavally
a44e82f263 ROCm v4.2 Release Notes (#1469)
* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md
2021-05-11 14:15:34 -07:00
Roopa Malavally
9af988ffc8 Add files via upload 2021-05-11 14:09:17 -07:00
Roopa Malavally
5fed386cf1 Delete AMD_ROCm_Release_Notes_v4.1.pdf 2021-05-11 14:08:41 -07:00
Roopa Malavally
d729428302 Add files via upload 2021-05-10 17:47:35 -07:00
Roopa Malavally
8611c5f450 Delete ROCm_SMI_API_GUIDE_v4.1.pdf 2021-05-10 17:47:20 -07:00
Roopa Malavally
ae0b56d029 Delete AMD_HIP_Programming_Guide_v4.1.pdf 2021-05-10 17:28:58 -07:00
Roopa Malavally
3862c69b09 Add files via upload 2021-05-10 16:30:37 -07:00
Roopa Malavally
be34f32307 Add files via upload 2021-05-10 15:18:46 -07:00
Roopa Malavally
08c9cce749 Add files via upload 2021-05-10 15:18:03 -07:00
Roopa Malavally
a83a7c9206 Delete Debugging with ROCGDB User Guide v4.1.pdf 2021-05-10 15:16:45 -07:00
Roopa Malavally
71faa9c81f Delete AMD-Debugger API Guide v4.1.pdf 2021-05-10 15:16:35 -07:00
Roopa Malavally
6b021edb23 Add files via upload 2021-05-10 13:37:23 -07:00
Roopa Malavally
3936d236e6 Delete AMD_ROCm_DataCenter_Tool_User_Guide_v4.1.pdf 2021-05-10 13:37:12 -07:00
Roopa Malavally
dbcb26756d Add files via upload 2021-05-10 13:13:55 -07:00
Roopa Malavally
96de448de6 Delete HIP_Supported_CUDA_API_Reference_Guide_v4.1.pdf 2021-05-10 13:13:39 -07:00
Roopa Malavally
ee0bc562e6 Add files via upload 2021-05-10 12:01:17 -07:00
Roopa Malavally
376b8673b7 Delete ROCm_Data_Center_Tool_API_Manual_4.1.pdf 2021-05-10 12:00:11 -07:00
Roopa Malavally
e9147a9103 Add files via upload 2021-05-10 11:58:50 -07:00
Roopa Malavally
fab1a697f0 Delete AMD_HIP_API_Guide_4.2.pdf.pdf 2021-05-10 11:58:28 -07:00
Roopa Malavally
a369e642b8 Delete AMD_HIP_API_Guide_v4.1.pdf 2021-05-10 11:58:16 -07:00
Roopa Malavally
9101972654 Add files via upload 2021-05-10 11:57:52 -07:00
Roopa Malavally
f3ba8df53d Update README.md 2021-04-21 08:28:44 -07:00
Roopa Malavally
ba7a87a2dc Update README.md 2021-04-19 13:43:39 -07:00
zhang2amd
df6d746d50 Merge pull request #1443 from RadeonOpenCompute/roc-4.1.1
ROCm 4.1.1 default.xml
2021-04-08 10:06:17 -07:00
Lad, Aditya
2b2bab5bf3 ROCm 4.1.1 default.xml 2021-04-08 09:59:11 -07:00
Roopa Malavally
5ec9b12f99 Update README.md 2021-04-08 09:27:07 -07:00
Roopa Malavally
803148affd Update README.md 2021-04-08 09:21:27 -07:00
Roopa Malavally
9275fb6298 Update README.md 2021-04-08 09:19:52 -07:00
Roopa Malavally
b6ae3f145e Update README.md 2021-04-07 11:06:04 -07:00
Roopa Malavally
f80eefc965 Update README.md 2021-04-07 11:04:51 -07:00
Roopa Malavally
c5d91843a7 Update README.md 2021-04-07 11:03:31 -07:00
Roopa Malavally
733a9c097c Update README.md 2021-04-07 07:15:49 -07:00
Roopa Malavally
ff2b3f8a23 Add files via upload 2021-03-26 12:14:59 -07:00
Roopa Malavally
5a4cf1cee1 Delete AMD_ROCm_Release_Notes_v4.1.docx 2021-03-26 12:14:46 -07:00
Roopa Malavally
dccf5ca356 Update README.md 2021-03-26 12:01:54 -07:00
Roopa Malavally
8b20bd56a6 Update README.md 2021-03-26 10:00:07 -07:00
zhang2amd
65cb10e5e8 Merge pull request #1427 from xuhuisheng/patch-1
add hipFFT to default.xml
2021-03-25 23:03:26 -07:00
Roopa Malavally
ac2625dd26 Delete AMD_ROCm_Release_Notes_v4.1.pdf 2021-03-25 15:55:22 -07:00
Roopa Malavally
3716310e93 Add files via upload 2021-03-25 15:55:04 -07:00
Roopa Malavally
2dee17f7d6 Add files via upload 2021-03-25 13:03:33 -07:00
Roopa Malavally
61e8b0d70e Delete AMD_ROCm_Release_Notes_v4.1.pdf 2021-03-25 13:03:20 -07:00
Roopa Malavally
8a3304a8d9 Update README.md 2021-03-25 11:45:08 -07:00
Roopa Malavally
55488a9424 Update README.md 2021-03-25 11:03:19 -07:00
Roopa Malavally
ff4a1d4059 Update README.md 2021-03-25 10:03:46 -07:00
Xu Huisheng
4b2d93fb7e add hipFFT to default.xml
There is hipFFT on <http://repo.radeon.com/rocm/apt/4.1/pool/main/h/hipfft/>.
Please add related repository in default.xml.
Thank you.
2021-03-25 19:41:05 +08:00
Roopa Malavally
061ccd21b8 Update README.md 2021-03-24 10:26:07 -07:00
Roopa Malavally
0ed1bd9f8e Add files via upload 2021-03-24 10:25:24 -07:00
Roopa Malavally
856c74de55 Update README.md 2021-03-24 07:59:03 -07:00
Roopa Malavally
12c6f60e45 Update README.md 2021-03-24 07:58:30 -07:00
Aditya Lad
897b1e8e2d Merge pull request #1422 from RadeonOpenCompute/roc-4.1.x
Roc 4.1.x
2021-03-23 17:59:19 -07:00
29 changed files with 471 additions and 56496 deletions


AMD_HIP_API_Guide_v4.3.pdf (new binary file)


AMD_ROCDebugger_API.pdf (new binary file)


AMD_ROCm_SMI_Guide_v4.3.pdf (new binary file)


README.md

@@ -1,30 +1,21 @@
# AMD ROCm™ v4.1 Release Notes
This document describes the features, fixed issues, and information about downloading and installing the AMD ROCm™ software. It also covers known issues and deprecations in the AMD ROCm v4.1 release.
# AMD ROCm™ v4.3 Release Notes
This document describes the features, fixed issues, and information about downloading and installing the AMD ROCm™ software. It also covers known issues and deprecations in this release.
- [Supported Operating Systems and Documentation Updates](#Supported-Operating-Systems-and-Documentation-Updates)
* [Supported Operating Systems](#Supported-Operating-Systems)
* [ROCm Installation Updates](#ROCm-Installation-Updates)
* [AMD ROCm Documentation Updates](#AMD-ROCm-Documentation-Updates)
- [Driver Compatibility Issue in This Release](#Driver-Compatibility-Issue-in-This-Release)
- [What's New in This Release](#Whats-New-in-This-Release)
* [TargetID for Multiple Configurations](#TargetID-for-Multiple-Configurations)
* [HIP Enhancements](#HIP-Enhancements)
* [ROCm Data Center Tool](#ROCm-Data-Center-Tool)
* [ROCm Math and Communication Libraries](#ROCm-Math-and-Communication-Libraries)
* [HIP Enhancements](#HIP-Enhancements)
* [OpenMP Enhancements and Fixes](#OpenMP-Enhancements-and-Fixes)
* [MIOpen Tensile Integration](#MIOpen-Tensile-Integration)
* [ROCProfiler Enhancements](#ROCProfiler-Enhancements)
- [Known Issues](#Known-Issues)
- [Known Issues in This Release](#Known-Issues-in-This-Release)
- [Deprecations](#Deprecations)
* [Compiler Generated Code Object Version 2 Deprecation ](#Compiler-Generated-Code-Object-Version-2-Deprecation)
- [Deploying ROCm](#Deploying-ROCm)
- [Hardware and Software Support](#Hardware-and-Software-Support)
- [Machine Learning and High Performance Computing Software Stack for AMD GPU](#Machine-Learning-and-High-Performance-Computing-Software-Stack-for-AMD-GPU)
@@ -36,18 +27,16 @@ This document describes the features, fixed issues, and information about downlo
## ROCm Installation Updates
### List of Supported Operating Systems
### Supported Operating Systems
The AMD ROCm platform is designed to support the following operating systems:
* Ubuntu 20.04.1 (5.4 and 5.6-oem) and 18.04.5 (Kernel 5.4)
* CentOS 7.9 (3.10.0-1127) & RHEL 7.9 (3.10.0-1160.6.1.el7) (Using devtoolset-7 runtime support)
* CentOS 8.3 (4.18.0-193.el8) and RHEL 8.3 (4.18.0-193.1.1.el8) (devtoolset is not required)
* SLES 15 SP2
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/OSKernel.PNG)
### FRESH INSTALLATION OF AMD ROCM V4.1 RECOMMENDED
A complete uninstallation of previous ROCm versions is required before installing a new version of ROCm. An upgrade from previous releases to AMD ROCm v4.1 is not supported. For more information, refer to the AMD ROCm Installation Guide at
### Fresh Installation of AMD ROCm v4.3 Recommended
Complete uninstallation of previous ROCm versions is required before installing a new version of ROCm. **An upgrade from previous releases to AMD ROCm v4.3 is not supported**. For more information, refer to the AMD ROCm Installation Guide at
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
@@ -59,67 +48,29 @@ https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
* For ROCm v3.3 and older releases, the clinfo path remains /opt/rocm/opencl/bin/x86_64/clinfo.
### ROCM MULTI-VERSION INSTALLATION UPDATE
## ROCm Multi-Version Installation Update
With the AMD ROCm v4.1 release, the following ROCm multi-version installation changes apply:
With the AMD ROCm v4.3 release, the following ROCm multi-version installation changes apply:
The meta packages rocm-dkms<version> are now deprecated for multi-version ROCm installs. For example, rocm-dkms3.7.0, rocm-dkms3.8.0.
* Multi-version installation of ROCm should be performed by installing rocm-dev<version> using each of the desired ROCm versions. For example, rocm-dev3.7.0, rocm-dev3.8.0, rocm-dev3.9.0.
* Version files must be created for each multi-version rocm <= 4.1.0
* Command: echo <version> | sudo tee /opt/rocm-<version>/.info/version
* Example: echo 4.1.0 | sudo tee /opt/rocm-4.1.0/.info/version
* The rock-dkms loadable kernel modules should be installed using a single rock-dkms package.
* ROCm v3.9 and above will not set any ldconfig entries for ROCm libraries for multi-version installation. Users must set LD_LIBRARY_PATH to load the ROCm library version of choice.
**NOTE**: The single version installation of the ROCm stack remains the same. The rocm-dkms package can be used for single version installs and is not deprecated at this time.
# Driver Compatibility Issue in This Release
In certain scenarios, the ROCm 4.1 run-time and userspace environment are not compatible with ROCm v4.0 and older driver implementations for 7nm-based (Vega 20) hardware (MI50 and MI60).
To mitigate issues, the ROCm v4.1 or newer userspace prevents running older drivers for these GPUs.
Users are notified in the following scenarios:
* Bare Metal
* Containers
## Bare Metal
In the bare-metal environment, the following error message displays in the console:
*“HSA Error: Incompatible kernel and userspace, Vega 20 disabled. Upgrade amdgpu.”*
To test the compatibility, run the ROCm v4.1 version of rocminfo using the following instruction:
*/opt/rocm-4.1.0/bin/rocminfo 2>&1 | less*
## Containers
A container (built with error detection for this issue) using a ROCm v4.1 or newer run-time is initiated to execute on an older kernel. The container fails to start and the following warning appears:
*Error: Incompatible ROCm environment. The Docker container requires the latest kernel driver to operate correctly.
Upgrade the ROCm kernel to v4.1 or newer, or use a container tagged for v4.0.1 or older.*
To inspect the version of the installed kernel driver, run either:
* dpkg --status rock-dkms [Debian-based]
or
* rpm -ql rock-dkms [RHEL, SUSE, and others]
To install or update the driver, follow the installation instructions at:
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
## Support for Environment Modules
Environment modules are now supported. This enhancement in the ROCm v4.3 release enables users to switch between ROCm v4.2 and ROCm v4.3 easily and efficiently.
For more information about installing environment modules, refer to
https://modules.readthedocs.io/en/latest/
# AMD ROCm Documentation Updates
@@ -130,7 +81,7 @@ The AMD ROCm Installation Guide in this release includes:
* Supported Environments
* Installation Instructions for v4.1
* Installation Instructions
* HIP Installation Instructions
@@ -141,63 +92,58 @@ https://rocmdocs.amd.com/en/latest/
## AMD ROCm - HIP Documentation Updates
* HIP Programming Guide v4.1
* HIP Programming Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.3.pdf
* HIP API Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_API_Guide_v4.3.pdf
* HIP-Supported CUDA API Reference Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Supported_CUDA_API_Reference_Guide_v4.3.pdf
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.1.pdf
* HIP API Guide v4.1
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_API_Guide_v4.1.pdf
* HIP-Supported CUDA API Reference Guide v4.1
https://github.com/RadeonOpenCompute/ROCm/blob/master/HIP_Supported_CUDA_API_Reference_Guide_v4.1.pdf
* HIP FAQ
For more information, refer to
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-FAQ.html#hip-faq
* **NEW** - AMD ROCm Compiler Reference Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_Compiler_Reference_Guide_v4.3.pdf
* HIP FAQ
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-FAQ.html#hip-faq
## ROCm Data Center User and API Guide
* ROCm Data Center Tool User Guide
- Grafana Plugin Integration
For more information, refer to the ROCm Data Center User Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.1.pdf
- Prometheus (Grafana) Integration with Automatic Node Detection
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.3.pdf
* ROCm Data Center Tool API Guide
For more information, refer to the ROCm Data Center API Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_Data_Center_Tool_API_Manual_4.1.pdf
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_RDC_API_Guide_v4.3.pdf
## ROCm SMI API Documentation Updates
* ROCm SMI API Guide
For more information, refer to the ROCm SMI API Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_SMI_API_GUIDE_v4.1.pdf
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_SMI_Guide_v4.3.pdf
## ROC Debugger User and API Guide
* ROC Debugger User Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/Debugging%20with%20ROCGDB%20User%20Guide%20v4.1.pdf
* ROC Debugger User Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCDebugger_User_Guide.pdf
* Debugger API Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD-Debugger%20API%20Guide%20v4.1.pdf
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCDebugger_API.pdf
## General AMD ROCm Documentation Links
@@ -224,269 +170,500 @@ Access the following links for more information:
# What's New in This Release
## TARGETID FOR MULTIPLE CONFIGURATIONS
## HIP Enhancements
The new TargetID functionality allows compilations to specify various configurations of the supported hardware.
### HIP Versioning Update
Previously, ROCm supported only a single configuration per target.
The HIP version definition is updated from the ROCm v4.2 release as follows:
With the TargetID enhancement, ROCm supports configurations for Linux, PAL and associated configurations such as XNACK. This feature addresses configurations for the same target in different modes and allows applications to build executables that specify the supported configurations, including the option to be agnostic for the desired setting.
```
HIP_VERSION = HIP_VERSION_MAJOR * 10000000 + HIP_VERSION_MINOR * 100000 +
HIP_VERSION_PATCH
```
The HIP version can be queried with a HIP API call:
### New Code Object Format Version for TargetID
```
hipRuntimeGetVersion(&runtimeVersion);
```
**Note**: The version returned will be greater than the version in previous ROCm releases.
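As a minimal sketch (a hypothetical program, using only the formula documented above), the raw value returned by hipRuntimeGetVersion can be decomposed back into its major/minor/patch components:
```
#include <cstdio>
#include <hip/hip_runtime.h>

int main() {
    int v = 0;
    // v = HIP_VERSION_MAJOR * 10000000 + HIP_VERSION_MINOR * 100000 + HIP_VERSION_PATCH
    if (hipRuntimeGetVersion(&v) == hipSuccess) {
        printf("HIP runtime %d.%d.%d (raw %d)\n",
               v / 10000000, (v / 100000) % 100, v % 100000, v);
    }
    return 0;
}
```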
* A new clang option -mcode-object-version can be used to request the legacy code object version 3 or code object version 2. For more information, refer to
### Support for Managed Memory Allocation
https://llvm.org/docs/AMDGPUUsage.html#elf-code-object
HIP now supports Heterogeneous Memory Management (HMM) and automatically manages the allocation. The HIP application performs a capability check before making the managed memory API call hipMallocManaged.
* A new clang --offload-arch= option is introduced to specify the offload target architecture(s) for the HIP language.
**Note**: The `__managed__` keyword is currently unsupported.
* The clang --offload-arch= and -mcpu options accept a new Target ID syntax. This allows both the processor and target feature settings to be specified. For more details, refer to
```
int managed_memory = 0;
HIPCHECK(hipDeviceGetAttribute(&managed_memory,
         hipDeviceAttributeManagedMemory, p_gpuDevice));
if (!managed_memory) {
    printf("info: managed memory access not supported on the device %d\n Skipped\n", p_gpuDevice);
} else {
    HIPCHECK(hipSetDevice(p_gpuDevice));
    HIPCHECK(hipMallocManaged(&Hmm, N * sizeof(T)));
    . . .
}
```
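A hedged sketch of what the elided portion above might contain (myKernel is hypothetical; Hmm, N, and T are the names from the snippet): with HMM, the same pointer is valid on both host and device, so no explicit hipMemcpy is required.
```
// Continuing the snippet above: initialize on the host, then launch.
for (size_t i = 0; i < N; ++i) Hmm[i] = static_cast<T>(i);   // host write, no copy needed
// Hypothetical kernel; assumes N is a multiple of 256.
hipLaunchKernelGGL(myKernel, dim3(N / 256), dim3(256), 0, 0, Hmm);
HIPCHECK(hipDeviceSynchronize()); // wait before reading results on the host
```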
https://llvm.org/docs/AMDGPUUsage.html#amdgpu-target-id
### Kernel Enqueue Serialization
- If a target feature is not specified, it defaults to a new concept of "any". The compiler then produces code that executes on a target configured for either value of the setting, which impacts overall performance. It is recommended to explicitly specify the setting for more efficient performance.
Developers can control kernel command serialization from the host using the following environment variable,
AMD_SERIALIZE_KERNEL
* AMD_SERIALIZE_KERNEL = 1, Wait for completion before enqueue,
- In particular, the setting for XNACK now defaults to produce less performant code than previous ROCm releases.
* AMD_SERIALIZE_KERNEL = 2, Wait for completion after enqueue,
- The legacy clang -mxnack, -mno-xnack, -msram-ecc, and -mno-sram-ecc options are deprecated. They are still
supported; however, they will be removed in a future release.
* AMD_SERIALIZE_KERNEL = 3, Both.
- The new Target ID syntax renames the SRAM ECC feature from sram-ecc to sramecc.
* The clang offload bundler uses the new offload hipv4 for HIP code object version 4. For more information, see
https://clang.llvm.org/docs/ClangOffloadBundler.html
* ROCm v4.1 corrects code object loading to enforce target feature settings of the code object to match the setting of the agent. It also corrects the recording of target feature settings in the code object. As a consequence, the legacy code objects may no longer load due to mismatches.
* gfx802, gfx803, and gfx805 do not support the XNACK target feature in the ROCm v4.1 release.
This environment variable setting enables the HIP runtime to wait for the GPU to be idle before/after any GPU command.
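The variable is normally exported in the shell before running the application; the following minimal sketch (a hypothetical program; setenv is POSIX) shows the same effect from C++ by setting it before the first HIP call initializes the runtime:
```
#include <cstdlib>
#include <hip/hip_runtime.h>

int main() {
    // Must happen before the HIP runtime initializes; 3 = wait for
    // completion both before and after each kernel enqueue.
    setenv("AMD_SERIALIZE_KERNEL", "3", /*overwrite=*/1);
    int count = 0;
    hipGetDeviceCount(&count); // first HIP call; the runtime now honors the setting
    return 0;
}
```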
### New Code Object Tools
### NUMA-aware Host Memory Allocation
The Non-Uniform Memory Architecture (NUMA) policy determines how memory is allocated and selects a CPU closest to each GPU.
NUMA also measures the distance between the GPU and CPU devices. By default, each GPU selects a NUMA CPU node that has the least NUMA distance between them; the host memory is automatically allocated closest to the memory pool of the NUMA node of the current GPU device.
Note, using the *hipSetDevice* API with a different GPU provides access to the host allocation. However, it may have a longer NUMA distance.
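A minimal sketch of the behavior described above (device 0 and the buffer size are arbitrary): pinned host memory allocated after selecting a GPU is placed on the NUMA node closest to that GPU by default.
```
#include <hip/hip_runtime.h>

int main() {
    void* buf = nullptr;
    hipSetDevice(0); // select the GPU first
    // The host allocation now lands on the NUMA node nearest to device 0.
    hipHostMalloc(&buf, 1 << 20, hipHostMallocDefault);
    // Accessing buf after hipSetDevice() to another GPU still works,
    // but may cross a longer NUMA distance.
    hipHostFree(buf);
    return 0;
}
```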
AMD ROCm v4.1 provides new code object tools *roc-obj-ls* and *roc-obj-extract*. These tools allow for the listing and extraction of AMD GPU ROCm code objects that are embedded in HIP executables and shared objects. Each tool supports a --help option that provides more information.
Refer to the HIP Programming Guide v4.1 for additional information and examples.
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.1.pdf
**Note**
The extractkernel tool in previous AMD ROCm releases has been removed from the AMD ROCm v4.1 release
and will no longer be supported.
**Note**
The roc-obj-ls and roc-obj-extract tools may generate an error about the following missing Perl modules:
* File::Which
* File::BaseDir
* File::Copy
* URI::Encode
This error is due to the missing dependencies in the hip-base installer package. As a workaround, you may use the
following instructions to install the Perl modules:
*Ubuntu*
apt-get install libfile-which-perl libfile-basedir-perl libfile-copy-recursive-perl liburi-encode-perl
*CentOS*
yum install "perl(File::Which)" "perl(File::BaseDir)" "perl(File::Copy)" "perl(URI::Encode)"
### New Atomic System Scope Atomic Operations
HIP now provides new APIs with _system as a suffix to support system scope atomic operations. For example, the atomicAnd operation is dedicated to the GPU device, while atomicAnd_system extends the atomic operation from the GPU device to other CPUs and GPU devices in the system.
For more information, refer to the HIP Programming Guide at,
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.3.pdf
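A minimal sketch contrasting the two scopes (the kernel names are hypothetical; atomicAnd and atomicAnd_system are the APIs named above):
```
#include <hip/hip_runtime.h>

// Device scope: the atomic is coherent only among threads on this GPU.
__global__ void and_device(unsigned int* flags, unsigned int mask) {
    atomicAnd(flags, mask);
}

// System scope: also coherent with the host CPUs and other GPUs,
// e.g. when flags points to memory obtained from hipMallocManaged.
__global__ void and_system(unsigned int* flags, unsigned int mask) {
    atomicAnd_system(flags, mask);
}
```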
### Indirect Function Call and C++ Virtual Functions
While the new release of the ROCm compiler supports indirect function calls and C++ virtual functions on a device, there are some known limitations and issues.
**Limitations**
* An address to a function is device specific. Note, a function address taken on the host cannot be used on a device, and a function address taken on a device cannot be used on the host. On a system with multiple devices, an address taken on one device cannot be used on a different device.
* C++ virtual functions only work on the device where the object was constructed.
* Indirect call to a device function with function-scope shared memory allocation (for example, LDS) is not supported.
* Indirect call to a device function defined in a source file different from the calling function/kernel is only supported when compiling the entire program with -fgpu-rdc (see the sketch after these lists).
**Known Issues in This Release**
* Programs containing kernels with different launch bounds may crash when making an indirect function call. This is due to the compiler miscalculating the register budget for the callee function.
* Programs may not work correctly when making an indirect call to a function that uses more resources (for example, scratch memory, shared memory, or registers) than the caller makes available.
* Compiling a program with objects with pure or deleted virtual functions on the device will result in a linker error. This is due to the missing implementation of some C++ runtime functions on the device.
* Constructing an object with virtual functions in private or shared memory may crash the program due to a compiler issue when generating code for the constructor.
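A minimal sketch of an indirect device function call, illustrating the single-translation-unit case (calls across source files additionally require -fgpu-rdc, as noted above); the names are hypothetical:
```
#include <hip/hip_runtime.h>

__device__ int add1(int x) { return x + 1; }

// The function address is taken on the device; per the limitations above,
// a device function address cannot be taken on the host.
__device__ int (*op)(int) = add1;

__global__ void apply(int* data) {
    data[threadIdx.x] = op(data[threadIdx.x]); // indirect call
}
```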
## ROCm Data Center Tool
### Grafana Integration
### Prometheus (Grafana) Integration with Automatic Node Detection
The ROCm Data Center (RDC) Tool is enhanced with the Grafana plugin. Grafana is a common monitoring stack used for storing and visualizing time series data. Prometheus acts as the storage backend, and Grafana is used as the interface for analysis and visualization. Grafana has a plethora of visualization options and can be integrated with Prometheus for the ROCm Data Center (RDC) dashboard.
The ROCm Data Center (RDC) tool enables you to use Consul to discover the rdc_prometheus service automatically. Consul is “a service mesh solution providing a full-featured control plane with service discovery, configuration, and segmentation functionality.” For more information, refer to their website at https://www.consul.io/docs/intro.
The ROCm Data Center Tool uses Consul for health checks of RDC's integration with the Prometheus plug-in (rdc_prometheus); these checks provide information on its efficiency.
Previously, when a new compute node was added, users had to change prometheus_targets.json to use Consul manually. Now, with the Consul agent integration, a new compute node can be discovered automatically.
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.3.pdf
For more information about Grafana integration and installation, refer to the ROCm Data Center Tool User guide at:
### Coarse Grain Utilization
This feature provides a counter that displays the coarse grain GPU usage information, as shown below.
Sample output
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.1.pdf
```
$ rocm_smi.py --showuse
============================== % time GPU is busy =============================
GPU[0] : GPU use (%): 0
GPU[0] : GFX Activity: 3401
```
### Add 64-bit Energy Accumulator In-band
This feature provides an average value of energy consumed over time in a free-flowing RAPL counter, a 64-bit Energy Accumulator.
Sample output
```
$ rocm_smi.py --showenergycounter
=============================== Consumed Energy ================================
GPU[0] : Energy counter: 2424868
GPU[0] : Accumulated Energy (uJ): 0.0
```
### Support for Continuous Clock Values
ROCm SMI will support continuous clock values instead of the previous discrete levels. Moving forward, the updated sysfs file will consist of only MIN and MAX values, and the user can set the clock value within the given range.
Sample output:
```
$ rocm_smi.py --setsrange 551 1270
Do you accept these terms? [y/N] y
============================= Set Valid sclk Range=======
GPU[0] : Successfully set sclk from 551(MHz) to 1270(MHz)
GPU[1] : Successfully set sclk from 551(MHz) to 1270(MHz)
=========================================================================
$ rocm_smi.py --showsclkrange
============================ Show Valid sclk Range======
GPU[0] : Valid sclk range: 551Mhz - 1270Mhz
GPU[1] : Valid sclk range: 551Mhz - 1270Mhz
```
### Memory Utilization Counters
This feature provides a counter that displays memory utilization information, as shown below.
Sample output
```
$ rocm_smi.py --showmemuse
========================== Current Memory Use ==============================
GPU[0] : GPU memory use (%): 0
GPU[0] : Memory Activity: 0
```
### Performance Determinism
ROCm SMI supports performance determinism as a unique mode of operation. This enhancement allows users to control entry to and exit from a mode that sets a soft maximum (ceiling) for the GFX clock, keeping performance variations minimal.
Sample output
```
$ rocm_smi.py --setperfdeterminism 650
cat pp_od_clk_voltage
GFXCLK:
0: 500Mhz
1: 650Mhz *
2: 1200Mhz
$ rocm_smi.py --resetperfdeterminism
```
**Note**: The idle clock will not take up higher clock values if no workload is running. After enabling determinism, users can run a GFX workload to set performance determinism to the desired clock value in the valid range.
* GFX clock could either be less than or equal to the max value set in this mode. GFX clock will be at the max clock set in this mode only when required by the running workload.
* VDDGFX will be higher by an offset (75mv or so based on PPTable) in the determinism mode.
### HBM Temperature Metric Per Stack
This feature will enable ROCm SMI to report all HBM temperature values as shown below.
Sample output
```
$ rocm_smi.py --showtemp
================================= Temperature =================================
GPU[0] : Temperature (Sensor edge) (C): 29.0
GPU[0] : Temperature (Sensor junction) (C): 36.0
GPU[0] : Temperature (Sensor memory) (C): 45.0
GPU[0] : Temperature (Sensor HBM 0) (C): 43.0
GPU[0] : Temperature (Sensor HBM 1) (C): 42.0
GPU[0] : Temperature (Sensor HBM 2) (C): 44.0
GPU[0] : Temperature (Sensor HBM 3) (C): 45.0
```
## ROCm Math and Communication Libraries
### rocSPARSE
### rocBLAS
rocSPARSE extends support for:
**Optimizations**
* gebsrmm
* gebsrmv
* gebsrsv
* coo2dense and dense2coo
* generic API including axpby, gather, scatter, rot, spvv, spmv, spgemm, sparsetodense, densetosparse
* mixed indexing types in matrix formats
* Improved performance of non-batched and batched rocblas_Xgemv for gfx908 when m <= 15000 and n <= 15000
* Improved performance of non-batched and batched rocblas_sgemv and rocblas_dgemv for gfx906 when m <= 6000 and n <= 6000
* Improved the overall performance of non-batched and batched rocblas_cgemv for gfx906
* Improved the overall performance of rocblas_Xtrsv
For more information, see
For more information, refer to
https://rocsparse.readthedocs.io/en/latest/
https://rocblas.readthedocs.io/en/master/
### rocSOLVER
### rocRAND
rocSOLVER extends support for:
**Enhancements**
* gfx90a support added
* gfx1030 support added
* Eigensolver routines for symmetric/hermitian matrices:
- STERF, STEQR
* Linear solvers for general non-square systems:
- GELS (API added with batched and strided_batched versions. Only the overdetermined non-transpose case is implemented in
this release. Other cases will return rocblas_status_not_implemented status for now.)
* Extended test coverage for functions returning information
* gfx803 support re-enabled
* Changelog file
**Fixed**
* Memory leaks in Poisson tests have been fixed.
* Memory leaks that occurred when a generator was created but setting the seed/offset/dimensions raised an exception have been fixed.
* Tridiagonalization routines for symmetric and hermitian matrices:
- LATRD
- SYTD2, SYTRD (with batched and strided_batched versions)
- HETD2, HETRD (with batched and strided_batched versions)
* Sample code and unit test for unified memory model/Heterogeneous Memory Management (HMM)
For more information, refer to
For more information, see
https://rocrand.readthedocs.io/en/latest/
### rocSOLVER
**Enhancements**
Linear solvers for general non-square systems:
* GELS now supports underdetermined and transposed cases
* Inverse of triangular matrices
* TRTRI (with batched and strided_batched versions)
* Out-of-place general matrix inversion
* GETRI_OUTOFPLACE (with batched and strided_batched versions)
* Argument names for the benchmark client now match argument names from the public API
**Fixed Issues**
* Known issues with Thin-SVD. The problem was identified in the test specification, not in the thin-SVD implementation or the rocBLAS gemm_batched routines.
* Benchmark client no longer crashes as a result of leading dimension or stride arguments not being provided on the command line.
**Optimizations**
* Improved general performance of matrix inversion (GETRI)
For more information, refer to
https://rocsolver.readthedocs.io/en/latest/
### hipCUB
The new iterator DiscardOutputIterator in hipCUB represents a special kind of pointer that ignores values written to it upon dereference. It is useful for ignoring the output of certain algorithms without wasting memory capacity or bandwidth. DiscardOutputIterator may also be used to count the size of an algorithm's output when that size is not known in advance.
For more information, see
https://hipcub.readthedocs.io/en/latest/
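A minimal sketch, assuming hipCUB mirrors the CUB interface for DiscardOutputIterator and DeviceSelect (the function and buffer names are hypothetical): counting unique items without materializing the compacted output.
```
#include <hipcub/hipcub.hpp>

// Counts the unique items in d_in; writes through the discard iterator
// are ignored, so no output buffer is allocated.
void count_unique(const int* d_in, int num_items, int* d_num_unique) {
    hipcub::DiscardOutputIterator<> discard;
    void*  d_temp     = nullptr;
    size_t temp_bytes = 0;
    // First call only sizes the temporary storage; the second call runs.
    hipcub::DeviceSelect::Unique(d_temp, temp_bytes, d_in, discard,
                                 d_num_unique, num_items);
    hipMalloc(&d_temp, temp_bytes);
    hipcub::DeviceSelect::Unique(d_temp, temp_bytes, d_in, discard,
                                 d_num_unique, num_items);
    hipFree(d_temp);
}
```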
### rocSPARSE
**Enhancements**
* (batched) tridiagonal solver with and without pivoting
* dense matrix sparse vector multiplication (gemvi)
* support for gfx90a
* sampled dense-dense matrix multiplication (sddmm)
**Improvements**
* client matrix download mechanism
* boost dependency in clients removed
## HIP Enhancements
For more information, refer to
### Support for hipEventDisableTiming Flag
HIP now supports the hipEventDisableTiming flag for hipEventCreateWithFlags. Note, events created with this flag do not record profiling data and provide optimal performance when used for synchronization.
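A minimal sketch of a synchronization-only event (the surrounding program is hypothetical):
```
#include <hip/hip_runtime.h>

int main() {
    hipEvent_t ev;
    // No timestamps are recorded, making record/wait cheaper than
    // with a default (timing) event.
    hipEventCreateWithFlags(&ev, hipEventDisableTiming);
    hipEventRecord(ev, /*stream=*/0);
    hipEventSynchronize(ev); // used purely as a synchronization point
    hipEventDestroy(ev);
    return 0;
}
```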
### Cooperative Group Functions
Cooperative Groups defines, synchronizes, and communicates between groups of threads and blocks for efficiency and ease of management. HIP now supports the following kernel language Cooperative Groups types and functions:
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/CG1.PNG)
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/CG2.PNG)
![Screenshot](https://github.com/Rmalavally/ROCm/blob/master/images/CG3.PNG)
https://rocsparse.readthedocs.io/en/latest/usermanual.html#rocsparse-gebsrmv
### Support for Extern Shared Declarations
### hipBLAS
Previously, dynamic shared memory had to be declared with the HIP_DYNAMIC_SHARED macro for accuracy, as using static shared memory in the same kernel could otherwise result in overlapping memory ranges and data races.
Now, the HIP-Clang compiler provides support for extern shared declarations, and the HIP_DYNAMIC_SHARED option is no longer required.
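A minimal sketch of a kernel using the plain extern declaration (the kernel and its launch are hypothetical); the dynamic shared-memory size is supplied at launch time:
```
#include <hip/hip_runtime.h>

__global__ void scale(float* out, const float* in, float factor) {
    extern __shared__ float tile[]; // sized by the launch below
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = in[i];
    __syncthreads();
    out[i] = tile[threadIdx.x] * factor;
}

// Launch, passing the dynamic shared-memory size in bytes:
//   hipLaunchKernelGGL(scale, grid, block, block.x * sizeof(float), 0, out, in, 2.0f);
```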
**Enhancements**
* Added *hipblasStatusToString*
**Fixed**
* Added catch() blocks around API calls to prevent the leak of C++ exceptions
You may use the standard extern definition:
### rocFFT
```
extern __shared__ type var[];
```
**Changes**
* Re-split device code into single-precision, double-precision, and miscellaneous kernels.
**Fixed Issues**
* double-precision planar->planar transpose.
* 3D transforms with unusual strides, for SBCC-optimized sizes.
* Improved buffer placement logic.
For more information, refer to
https://rocfft.readthedocs.io/en/rocm-4.3.0/
### hipFFT
**Fixed Issues**
* CMAKE updates
* Added callback API in hipfftXt.h header.
## OpenMP Enhancements and Fixes
### rocALUTION
**Enhancements**
* Support for gfx90a target
* Support for gfx1030 target
**Improvements**
* Install script
For more information, refer to
### rocTHRUST
This release includes the following OpenMP changes:
**Enhancements**
* Updated to match upstream Thrust 1.11
* gfx90a support added
* gfx803 support re-enabled
* Usability Enhancements
* Fixes to Internal Clang Math Headers
* OpenMP Defect Fixes
hipCUB
### Usability Enhancements
Enhancements
* OMPD updates for flang
* To support OpenMP debugging, the selected OpenMP runtime sources are included in lib-debug/src/openmp. The ROCgdb debugger
will find these automatically.
* Thread-safe HSA plugin for libomptarget
* Support multiple devices with malloc and hostrpc
* Improve hostrpc version check
* Add max reduction offload feature to flang
* Integration of changes to support HPC Toolkit
* Support for fprintf
* Initial support for GPU malloc and Free. The internal (device rtl) is required for GPU malloc and Free for nested parallelism.
GPU malloc and Free are now replaced, which improves the device memory footprint.
* Increase detail of debug printing controlled by LIBOMPTARGET_KERNEL_TRACE environment variable
* Add support for -gpubnames in Flang Driver
* DiscardOutputIterator to backend header
### Fixes to Internal Clang Math Headers
## ROCProfiler Enhancements
This release includes a set of changes applied to Clang internal headers to support OpenMP C, C++, FORTRAN, and HIP C. This establishes consistency between NVPTX and AMDGCN offloading, and OpenMP, HIP, and CUDA. OpenMP uses function variants and header overlays to define device versions of functions. This causes Clang LLVM IR codegen to mangle names of variants in both the definition and callsites of functions defined in the internal Clang headers. The changes apply to headers found in the installation subdirectory lib/clang/11.0.0/include.
### Tracing Multiple MPI Ranks
The changes also temporarily eliminate the use of the libm bitcode libraries for C and C++. Although math functions are now defined with internal clang headers, a bitcode library of the C functions defined in the headers is still built for the FORTRAN toolchain linking. This is because FORTRAN cannot use C math headers. This bitcode library is installed in lib/libdevice/libm-.bc. The source build of the bitcode library is implemented with the aomp-extras repository and the component-built script build_extras.sh.
When tracing multiple MPI ranks in ROCm v4.3, users must use the form:
### OpenMP Defect Fixes
```
mpirun ... <mpi args> ... rocprof ... <rocprof args> ... application ... <application args>
```
The following OpenMP defects are fixed in this release:
**NOTE**: This feature differs from ROCm v4.2 (and lower), which used "rocprof ... mpirun ... application".
This change was made to enable ROCProfiler to handle process forking better and launching via mpirun (and related) executables.
* OpenMPI configuration issue with real16.
* [flang] The AOMP 11.7-1 Fortran compiler claims to support the -isystem flag, but ignores it.
* [flang] Internal compiler error when CHARACTER is used with KIND.
* [flang] OpenMP map clause on complex allocatable expressions !$omp target data map( chunk%tiles(1)%field%density0).
* Add a fatal error if the -Xopenmp-target or -march options are missing when -fopenmp-targets is specified. However, this requirement is not
applicable for offloading to the host when there is only a single target and that target is the host.
* OpenMP error message output for no_rocm_device_lib was asserting.
* Changed linkage on constant per-kernel symbols from external to weak-linkage-only to prevent duplicate symbols when building Kokkos.
* Added environment variables ROCM_LLD_ARGS, ROCM_LINK_ARGS, and ROCM_SELECT_ARGS to test driver options without a compiler rebuild.
* Fixed problems with device math functions being ambiguous, especially the pow function.
* Fixed aompcc to accept file type cxx.
* Fixed a latent race between the host runtime and devicertl.
From a user perspective, this new execution mode requires:
1. Generation of trace data per MPI (or process) rank.
2. Use of a new "merge_traces.sh" utility script (see: <insert link here>) to combine traces from multiple processes into a unified trace for profiling.
For example, to accomplish step #1, ROCm provides a simple bash wrapper that demonstrates how to generate a unique output directory per process:
```
$ cat wrapper.sh
#!/usr/bin/env bash
if [[ -n ${OMPI_COMM_WORLD_RANK+z} ]]; then
    # OpenMPI
    export MPI_RANK=${OMPI_COMM_WORLD_RANK}
elif [[ -n ${MV2_COMM_WORLD_RANK+z} ]]; then
    # MVAPICH2
    export MPI_RANK=${MV2_COMM_WORLD_RANK}
fi
pid="$$"
outdir="rank_${pid}_${MPI_RANK}"
outfile="results_${pid}_${MPI_RANK}.csv"
eval "rocprof -d ${outdir} -o ${outdir}/${outfile} $*"
```
## MIOpen Tensile Integration
This script:
* Determines the global MPI rank (implemented here for OpenMPI and MVAPICH2 only)
* Determines the process id of the MPI rank
* Generates a unique output directory using the two values
MIOpenTensile provides host-callable interfaces to the Tensile library and supports the HIP programming model. You may use the Tensile feature in the HIP backend by setting the following build environment variable to ON:
To invoke this wrapper, use the following command:
MIOPEN_USE_MIOPENTENSILE=ON
```
mpirun <mpi args> ./wrapper.sh --hip-trace <application> <args>
```
MIOpenTensile is an open-source collaboration tool where external entities can submit source pull requests (PRs) for updates. MIOpenTensile maintainers review and approve the PRs using standard open-source practices.
This generates an output directory for each used MPI rank. For example,
For more information about the sources and the build system, see
```
$ ls -ld rank_* | awk {'print $5" "$9'}
4096 rank_513555_0
4096 rank_513556_1
```
Finally, these traces may be combined using the merge traces script (<insert link here>). For example,
https://github.com/ROCmSoftwarePlatform/MIOpenTensile
```
$ ./merge_traces.sh -h
Script for aggregating results from multiple rocprofiler out directories.
Full path: /opt/rocm/bin/merge_traces.sh
Usage:
merge_traces.sh -o <outputdir> [<inputdir>...]
```
Use the following input arguments to the merge_traces.sh script to control which traces are merged and where the resulting merged trace is saved.
* -o <*outputdir*> - output directory where the results are aggregated.
* <*inputdir*>... - space-separated list of rocprofiler directories. If not specified, CWD is used.
# Known Issues
The file unified/results.json is generated, containing trace data from both MPI ranks.
Known issue for ROCProfiler
Collecting several counter collection passes (multiple "pmc:" lines in a counter input file) is not supported in a single run.
The workaround is to break the multi-line counter input file into multiple single-line counter input files and execute multiple runs.
# Known Issues in This Release
The following are the known issues in this release.
## Upgrade to AMD ROCm v4.1 Not Supported
## Upgrade to AMD ROCm v4.3 Not Supported
An upgrade from previous releases to AMD ROCm v4.1 is not supported. A complete uninstallation of previous ROCm versions is required before installing a new version of ROCm.
An upgrade from previous releases to AMD ROCm v4.3 is not supported. Complete uninstallation of previous ROCm versions is required before installing a new version of ROCm.
## _LAUNCH BOUNDS_ Ignored During Kernel Launch
## Performance Impact for Kernel Launch Bound Attribute
The HIP runtime returns the hipErrorLaunchFailure error code when an application tries to launch a kernel with a block size larger than the launch bounds specified at compile time. If no launch bounds were specified at compile time, the default value of 1024 is assumed. Refer to the HIP trace for more information about the failing kernel. A sample error in the trace is shown below:
Snippet of the HIP trace
```
:3:devprogram.cpp :2504: 2227377746776 us: Using Code Object V4.
:3:hip_module.cpp :361 : 2227377768546 us: 7670 : [7f7c6eddd180] ihipModuleLaunchKernel ( 0x0x16fe080, 2048, 1, 1, 1024, 1, 1, 0, stream:<null>, 0x7ffded8ad260, char array:<null>, event:0, event:0, 0, 0 )
:1:hip_module.cpp :254 : 2227377768572 us: Launch params (1024, 1, 1) are larger than launch bounds (64) for kernel _Z8MyKerneliPd
:3:hip_platform.cpp :667 : 2227377768577 us: 7670 : [7f7c6eddd180] ihipLaunchKernel: Returned hipErrorLaunchFailure :
:3:hip_module.cpp :493 : 2227377768581 us: 7670 : [7f7c6eddd180] hipLaunchKernel: Returned hipErrorLaunchFailure :
```
Kernels without the *__launch_bounds__* attribute assume the default maximum threads-per-block value. In the previous ROCm release, this value was 256; in the ROCm v4.1 release, it is changed to 1024. This change ensures that the actual threads-per-block value used to launch a kernel is, by default, always within the launch bounds, thus establishing the correctness of HIP programs.
**NOTE**: This approach may incur performance degradation in certain cases. For both correctness and performance, users should add a launch bound to each kernel that covers all possible threads-per-block values used to launch that kernel.
The recommended workaround to recover the performance is to add *--gpu-max-threads-per-block=256* to the compilation options for HIP programs.
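A minimal sketch of the recommendation above (the kernel name is borrowed from the trace; the body and launch comment are illustrative): declaring launch bounds that cover every block size the kernel is launched with avoids both the hipErrorLaunchFailure and the cost of the 1024-thread default.
```
#include <hip/hip_runtime.h>

// Bounds cover all block sizes this kernel is launched with.
__global__ void __launch_bounds__(256) MyKernel(int n, double* x) {
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0;
}

// Launching with blockDim.x <= 256 stays within the declared bounds:
//   hipLaunchKernelGGL(MyKernel, dim3((n + 255) / 256), dim3(256), 0, 0, n, x);
```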
## Issue with Passing a Subset of GPUs in a Multi-GPU System
ROCm support for passing individual GPUs via the docker --device flag in a Docker run command has a known issue when passing a subset of GPUs in a multi-GPU system. The command runs without any warning or error notification. However, all GPU executable run outputs are randomly corrupted.
Using GPU targeting via the Docker command is not recommended for users of ROCm 4.1. There is no workaround for this issue currently.
## Performance Impact for LDS-Bound Kernels
The compiler in ROCm v4.1 generates LDS load and store instructions that incorrectly assume equal performance between aligned and misaligned accesses. While this does not impact code correctness, it may result in sub-optimal performance.
This issue is under investigation, and there is no known workaround at this time.
# Deprecations
This section describes deprecations and removals in AMD ROCm.
## Compiler Generated Code Object Version 2 Deprecation
Compiler-generated code object version 2 is no longer supported and has been completely removed. Support for loading code object version 2 is also deprecated with no announced removal release.
There is no known workaround at this time.
## PYCACHE Folder Exists After ROCM SMI Library Uninstallation
Users may observe that the /opt/rocm-x/bin/__pycache__ folder continues to exist even after the rocm_smi_lib uninstallation.
Workaround: Delete the /opt/rocm-x/bin/__pycache__ folder manually before uninstalling rocm_smi_lib.
# Deploying ROCm

default.xml

@@ -12,7 +12,7 @@ fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
fetch="https://github.com/GPUOpen-Tools/" />
<remote name="KhronosGroup"
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-4.1.0"
<default revision="refs/tags/rocm-4.3.0"
remote="roc-github"
sync-c="true"
sync-j="4" />
@@ -20,7 +20,6 @@ fetch="https://github.com/KhronosGroup/" />
<project name="ROCK-Kernel-Driver" />
<project name="ROCT-Thunk-Interface" />
<project name="ROCR-Runtime" />
<project name="ROC-smi" />
<project name="rocm_smi_lib" />
<project name="rocm-cmake" />
<project name="rocminfo" />
@@ -52,6 +51,7 @@ fetch="https://github.com/KhronosGroup/" />
<project name="Tensile" remote="rocm-swplat" />
<project name="hipBLAS" remote="rocm-swplat" />
<project name="rocFFT" remote="rocm-swplat" />
<project name="hipFFT" remote="rocm-swplat" />
<project name="rocRAND" remote="rocm-swplat" />
<project name="rocSPARSE" remote="rocm-swplat" />
<project name="rocSOLVER" remote="rocm-swplat" />


images/OSKernel.PNG (new binary file)


images/test.rst (new file)

@@ -0,0 +1 @@