Mirror of https://github.com/ROCm/ROCm.git (synced 2026-01-09 22:58:17 -05:00)

Comparing commits: roc-3.7.x...rocm-5.4.1 (305 commits)
.github/CODEOWNERS (new file)

* @saadrahim @Rmalavally @amd-aakash @zhang2amd @jlgreathouse

CHANGELOG.md (new file)
Changelog
------------------

# AMD ROCm™ Releases

## AMD ROCm™ V5.2 Release

AMD ROCm v5.2 is now released. The release documentation is available at https://docs.amd.com.

## AMD ROCm™ V5.1.3 Release

AMD ROCm v5.1.3 is now released. The release documentation is available at https://docs.amd.com.

## AMD ROCm™ V5.1.1 Release

AMD ROCm v5.1.1 is now released. The release documentation is available at https://docs.amd.com.

## AMD ROCm™ V5.1 Release

AMD ROCm v5.1 is now released. The release documentation is available at https://docs.amd.com.
## AMD ROCm™ v5.0.2 Release Notes

### Fixed Defects in This Release

The following defects are fixed in the ROCm v5.0.2 release.

#### Issue with hostcall Facility in HIP Runtime

In ROCm v5.0, when using the `assert()` call in a HIP kernel, the compiler may sometimes fail to emit kernel metadata related to the hostcall facility, which results in incomplete initialization of the hostcall facility in the HIP runtime. This can cause the HIP kernel to crash when it attempts to execute the `assert()` call.

The root cause was an incorrect check in the compiler to determine whether the hostcall facility is required by the kernel. This is fixed in the ROCm v5.0.2 release.

The resolution includes a compiler change, which emits the required metadata by default, unless the compiler can prove that the hostcall facility is not required by the kernel. This ensures that the `assert()` call never fails.
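As a rough illustration, a kernel of the kind affected by this defect could look like the following sketch (the kernel and its argument are hypothetical and not taken from the release notes):

```c
#include <hip/hip_runtime.h>
#include <cassert>

// Hypothetical kernel: the device-side assert() relies on the hostcall facility,
// so code like this could crash under the ROCm v5.0 defect described above.
__global__ void checkNotNull(const int* data) {
    assert(data != nullptr);
}
```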
**Note**: This fix may lead to breakage in some OpenMP offload use cases, which use print inside a target region and result in an abort in device code. The issue will be fixed in a future release.

#### Compatibility Matrix Updates to ROCm Deep Learning Guide

The compatibility matrix in the AMD Deep Learning Guide is updated for ROCm v5.0.2.

For more information and documentation updates, refer to https://docs.amd.com.
## AMD ROCm™ v5.0.1 Release Notes

### Deprecations and Warnings

#### Refactor of HIPCC/HIPCONFIG

In prior ROCm releases, by default, the hipcc/hipconfig Perl scripts were used to identify and set target compiler options, target platform, compiler, and runtime appropriately.

In ROCm v5.0.1, hipcc.bin and hipconfig.bin have been added as compiled binary implementations of hipcc and hipconfig. These new binaries are currently a work in progress and are considered experimental. ROCm plans to fully transition to hipcc.bin and hipconfig.bin in a future ROCm release. The existing hipcc and hipconfig Perl scripts are renamed to hipcc.pl and hipconfig.pl, respectively. New top-level hipcc and hipconfig Perl scripts are created, which can switch between the Perl script and the compiled binary based on the environment variable HIPCC_USE_PERL_SCRIPT.

In ROCm 5.0.1, this environment variable is set by default so that hipcc and hipconfig use the Perl scripts.

The Perl scripts will be removed from ROCm in a future release.
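A rough sketch of how the switch might be exercised is shown below. The variable name comes from the text above, but the specific value semantics are an assumption, so consult the hipcc documentation for your release:

```
# Assumption: setting HIPCC_USE_PERL_SCRIPT=0 selects the compiled hipcc.bin/hipconfig.bin,
# while leaving it at the ROCm 5.0.1 default selects the Perl front ends.
export HIPCC_USE_PERL_SCRIPT=0
hipcc --version
hipconfig --full
```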
### ROCm Documentation Updates for ROCm 5.0.1

* ROCm Downloads Guide
* ROCm Installation Guide
* ROCm Release Notes

For more information, see https://docs.amd.com.
## AMD ROCm™ v5.0 Release Notes

# ROCm Installation Updates

This document describes the features, fixed issues, and information about downloading and installing the AMD ROCm™ software.

It also covers known issues and deprecations in this release.

## Notice for Open-source and Closed-source ROCm Repositories in Future Releases

To make a distinction between open-source and closed-source components, all ROCm repositories will consist of sub-folders in future releases, as sketched in the layout below.

- All open-source components will be placed in the `base-url/<rocm-ver>/main` sub-folder
- All closed-source components will reside in the `base-url/<rocm-ver>/proprietary` sub-folder
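For illustration only, the resulting layout would look something like this (`base-url` and `<rocm-ver>` are the placeholders used above, not real URLs):

```
base-url/<rocm-ver>/main/           # open-source components
base-url/<rocm-ver>/proprietary/    # closed-source components
```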
## List of Supported Operating Systems

The AMD ROCm platform supports the following operating systems:

| **OS-Version (64-bit)** | **Kernel Versions** |
| --- | --- |
| CentOS 8.3 | 4.18.0-193.el8 |
| CentOS 7.9 | 3.10.0-1127 |
| RHEL 8.5 | 4.18.0-348.7.1.el8\_5.x86\_64 |
| RHEL 8.4 | 4.18.0-305.el8.x86\_64 |
| RHEL 7.9 | 3.10.0-1160.6.1.el7 |
| SLES 15 SP3 | 5.3.18-59.16-default |
| Ubuntu 20.04.3 | 5.8.0 LTS / 5.11 HWE |
| Ubuntu 18.04.5 [5.4 HWE kernel] | 5.4.0-71-generic |
### Support for RHEL v8.5

This release extends support for RHEL v8.5.

### Supported GPUs

#### Radeon Pro V620 and W6800 Workstation GPUs

This release extends ROCm support for Radeon Pro V620 and W6800 Workstation GPUs.

- SRIOV virtualization support for Radeon Pro V620
- KVM Hypervisor (1VF support only) on Ubuntu Host OS with Ubuntu, CentOS, and RHEL Guest
- Support for ROCm-SMI in an SRIOV environment. For more details, refer to the ROCm SMI API documentation.

**Note:** Radeon Pro V620 is not supported on SLES.
## ROCm Installation Updates for ROCm v5.0

This release has the following ROCm installation enhancements.

### Support for Kernel Mode Driver

In this release, users can install the kernel-mode driver using the Installer method. Some of the ROCm-specific use cases that the installer currently supports are:

- OpenCL (ROCr/KFD based) runtime
- HIP runtimes
- ROCm libraries and applications
- ROCm Compiler and device libraries
- ROCr runtime and thunk
- Kernel-mode driver

### Support for Multi-version ROCm Installation and Uninstallation

Users can now install multiple ROCm releases simultaneously on a system using the newly introduced installer script and package manager install mechanism.

Users can also uninstall multi-version ROCm releases using the `amdgpu-uninstall` script and package manager.
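A hedged sketch of what this can look like on the command line follows; the `--rocmrelease` option and the release number shown are assumptions based on the multi-version workflow described here, so verify the exact flags against the AMD ROCm Installation Guide v5.0:

```
# Install a specific ROCm release alongside an existing one (flags assumed, not verified):
sudo amdgpu-install --usecase=rocm --rocmrelease=5.0.0

# Remove that release again with the uninstall script named above:
sudo amdgpu-uninstall --rocmrelease=5.0.0
```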
### Support for Updating Information on Local Repositories

In this release, the `amdgpu-install` script automates the process of updating local repository information before proceeding to ROCm installation.

### Support for Release Upgrades

Users can now upgrade an existing ROCm installation to a specific ROCm release or to the latest release.

For more details, refer to the AMD ROCm Installation Guide v5.0.
# AMD ROCm V5.0 Documentation Updates

## New AMD ROCm Information Portal – ROCm v4.5 and Above

Beginning with ROCm release v5.0, AMD ROCm documentation has a new portal at https://docs.amd.com. This portal consists of ROCm documentation v4.5 and above.

For documentation prior to ROCm v4.5, you may continue to access https://rocmdocs.amd.com.

## Documentation Updates for ROCm 5.0

### Deployment Tools

#### ROCm Data Center Tool Documentation Updates

- ROCm Data Center Tool User Guide
- ROCm Data Center Tool API Guide

#### ROCm System Management Interface Updates

- System Management Interface Guide
- System Management Interface API Guide

#### ROCm Command Line Interface Updates

- Command Line Interface Guide

### Machine Learning/AI Documentation Updates

- Deep Learning Guide
- MIGraphX API Guide
- MIOpen API Guide
- MIVisionX API Guide

### ROCm Libraries Documentation Updates

- hipSOLVER User Guide
- RCCL User Guide
- rocALUTION User Guide
- rocBLAS User Guide
- rocFFT User Guide
- rocRAND User Guide
- rocSOLVER User Guide
- rocSPARSE User Guide
- rocThrust User Guide

### Compilers and Tools

#### ROCDebugger Documentation Updates

- ROCDebugger User Guide
- ROCDebugger API Guide

#### ROCTracer

- ROCTracer User Guide
- ROCTracer API Guide

#### Compilers

- AMD Instinct High Performance Computing and Tuning Guide
- AMD Compiler Reference Guide

#### HIPify Documentation

- HIPify User Guide
- HIP Supported CUDA API Reference Guide

#### ROCm Debug Agent

- ROCm Debug Agent Guide
- System Level Debug Guide
- ROCm Validation Suite

### Programming Models Documentation

#### HIP Documentation

- HIP Programming Guide
- HIP API Guide
- HIP FAQ Guide

#### OpenMP Documentation

- OpenMP Support Guide

### ROCm Glossary

- ROCm Glossary – Terms and Definitions

## AMD ROCm Legacy Documentation Links – ROCm v4.3 and Prior

- For AMD ROCm documentation, see https://rocmdocs.amd.com/en/latest/
- For installation instructions on supported platforms, see https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
- For AMD ROCm binary structure, see https://rocmdocs.amd.com/en/latest/Installation_Guide/Software-Stack-for-AMD-GPU.html
- For AMD ROCm release history, see https://rocmdocs.amd.com/en/latest/Current_Release_Notes/ROCm-Version-History.html
# What's New in This Release

## HIP Enhancements

The ROCm v5.0 release consists of the following HIP enhancements.

### HIP Installation Guide Updates

The HIP Installation Guide is updated to include building HIP from source on the NVIDIA platform.

Refer to the HIP Installation Guide v5.0 for more details.

### Managed Memory Allocation

Managed memory, including the `__managed__` keyword, is now supported in the HIP combined host/device compilation. Through unified memory allocation, managed memory allows data to be shared and accessible to both the CPU and GPU using a single pointer. The allocation is managed by the AMD GPU driver using the Linux Heterogeneous Memory Management (HMM) mechanism. The user can call the managed memory API hipMallocManaged to allocate a large chunk of HMM memory, execute kernels on a device, and fetch data between the host and device as needed.

**Note:** In a HIP application, it is recommended to do a capability check before calling the managed memory APIs. For example:

```c
int managed_memory = 0;
HIPCHECK(hipDeviceGetAttribute(&managed_memory, hipDeviceAttributeManagedMemory, p_gpuDevice));

if (!managed_memory) {
  printf("info: managed memory access not supported on the device %d\n Skipped\n", p_gpuDevice);
} else {
  HIPCHECK(hipSetDevice(p_gpuDevice));
  HIPCHECK(hipMallocManaged(&Hmm, N * sizeof(T)));
  . . .
}
```

**Note:** The managed memory capability check may not be necessary; however, if HMM is not supported, managed malloc will fall back to using system memory, and other managed memory API calls will then have undefined behavior.

Refer to the HIP API documentation for more details on managed memory APIs.

For the application, see [hipMallocManaged.cpp](https://github.com/ROCm-Developer-Tools/HIP/blob/rocm-4.5.x/tests/src/runtimeApi/memory/hipMallocManaged.cpp)
## New Environment Variable

The following new environment variable is added in this release:

| **Environment Variable** | **Value** | **Description** |
| --- | --- | --- |
| **HSA\_COOP\_CU\_COUNT** | 0 or 1 (default is 0) | Some processors support more compute units than can reliably be used in a cooperative dispatch. Setting the environment variable HSA\_COOP\_CU\_COUNT to 1 will cause ROCr to return the correct CU count for cooperative groups through the HSA\_AMD\_AGENT\_INFO\_COOPERATIVE\_COMPUTE\_UNIT\_COUNT attribute of hsa\_agent\_get\_info(). Setting HSA\_COOP\_CU\_COUNT to other values, or leaving it unset, will cause ROCr to return the same CU count for the attributes HSA\_AMD\_AGENT\_INFO\_COOPERATIVE\_COMPUTE\_UNIT\_COUNT and HSA\_AMD\_AGENT\_INFO\_COMPUTE\_UNIT\_COUNT. Future ROCm releases will make HSA\_COOP\_CU\_COUNT=1 the default. |
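As an illustration of how the two attributes from the table can be read, here is a minimal sketch against the HSA runtime (assumptions: the ROCm HSA headers are installed and the first GPU agent is used; the cooperative-CU attribute is the AMD-specific one named above):

```c
#include <hsa/hsa.h>
#include <hsa/hsa_ext_amd.h>
#include <stdint.h>
#include <stdio.h>

// Pick the first GPU agent reported by the HSA runtime.
static hsa_status_t find_gpu(hsa_agent_t agent, void* data) {
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
    if (type == HSA_DEVICE_TYPE_GPU) {
        *(hsa_agent_t*)data = agent;
        return HSA_STATUS_INFO_BREAK;
    }
    return HSA_STATUS_SUCCESS;
}

int main(void) {
    hsa_init();
    hsa_agent_t gpu = {0};
    hsa_iterate_agents(find_gpu, &gpu);

    uint32_t coop_cu = 0, all_cu = 0;
    hsa_agent_get_info(gpu, (hsa_agent_info_t)HSA_AMD_AGENT_INFO_COOPERATIVE_COMPUTE_UNIT_COUNT, &coop_cu);
    hsa_agent_get_info(gpu, (hsa_agent_info_t)HSA_AMD_AGENT_INFO_COMPUTE_UNIT_COUNT, &all_cu);

    // With HSA_COOP_CU_COUNT=1 in the environment, coop_cu may report fewer CUs than all_cu.
    printf("cooperative CUs: %u, total CUs: %u\n", coop_cu, all_cu);

    hsa_shutdown();
    return 0;
}
```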
## ROCm Math and Communication Libraries

| **Library** | **Changes** |
| --- | --- |
| **rocBLAS** | **Added** <ul><li>Added rocblas\_get\_version\_string\_size convenience function</li><li>Added rocblas\_xtrmm\_outofplace, an out-of-place version of rocblas\_xtrmm</li><li>Added hpl and trig initialization for gemm\_ex to rocblas-bench</li><li>Added source code gemm. It can be used as an alternative to Tensile for debugging and development</li><li>Added option `ROCM_MATHLIBS_API_USE_HIP_COMPLEX` to opt-in to use hipFloatComplex and hipDoubleComplex</li></ul> **Optimizations** <ul><li>Improved performance of non-batched and batched single-precision GER for size m > 1024. Performance enhanced by 5-10% measured on a MI100 (gfx908) GPU.</li><li>Improved performance of non-batched and batched HER for all sizes and data types. Performance enhanced by 2-17% measured on a MI100 (gfx908) GPU.</li></ul> **Changed** <ul><li>Instantiate templated rocBLAS functions to reduce size of librocblas.so</li><li>Removed static library dependency on msgpack</li><li>Removed boost dependencies for clients</li></ul> **Fixed** <ul><li>Option to install script to build only rocBLAS clients with a pre-built rocBLAS library</li><li>Correctly set output of nrm2\_batched\_ex and nrm2\_strided\_batched\_ex when given bad input</li><li>Fix for dgmm with side == rocblas\_side\_left and a negative incx</li><li>Fixed out-of-bounds read for small trsm</li><li>Fixed numerical checking for tbmv\_strided\_batched</li></ul> |
| **hipBLAS** | **Added** <ul><li>Added rocSOLVER functions to hipblas-bench</li><li>Added option `ROCM_MATHLIBS_API_USE_HIP_COMPLEX` to opt-in to use hipFloatComplex and hipDoubleComplex</li><li>Added compilation warning for future trmm changes</li><li>Added documentation to hipblas.h</li><li>Added option to forgo pivoting for getrf and getri when ipiv is nullptr</li><li>Added code coverage option</li></ul> **Fixed** <ul><li>Fixed use of incorrect `HIP_PATH` when building from source.</li><li>Fixed Windows packaging</li><li>Allowing negative increments in hipblas-bench</li><li>Removed boost dependency</li></ul> |
| **rocFFT** | **Changed** <ul><li>Enabled runtime compilation of single FFT kernels > length 1024.</li><li>Re-aligned split device library into 4 roughly equal libraries.</li><li>Implemented the FuseShim framework to replace the original OptimizePlan</li><li>Implemented the generic buffer-assignment framework. The buffer assignment is no longer performed by each node. A generic algorithm is designed to test and pick the best assignment path. With the help of FuseShim, more kernel-fusions are achieved.</li><li>Do not read the imaginary part of the DC and Nyquist modes for even-length complex-to-real transforms.</li></ul> **Optimizations** <ul><li>Optimized twiddle-conjugation; complex-to-complex inverse transforms now have similar performance to forward transforms.</li><li>Improved performance of single-kernel small 2D transforms.</li></ul> |
| **hipFFT** | **Fixed** <ul><li>Fixed incorrect reporting of rocFFT version.</li></ul> **Changed** <ul><li>Unconditionally enabled callback functionality. On the CUDA backend, callbacks only run correctly when hipFFT is built as a static library, and is linked against the static cuFFT library.</li></ul> |
| **rocSPARSE** | **Added** <ul><li>csrmv, coomv, ellmv, hybmv for (conjugate) transposed matrices</li><li>csrmv for symmetric matrices</li></ul> **Changed** <ul><li>spmm\_ex is now deprecated and will be removed in the next major release</li></ul> **Improved** <ul><li>Optimization for gtsv</li></ul> |
| **hipSPARSE** | **Added** <ul><li>Added (conjugate) transpose support for csrmv, hybmv and spmv routines</li></ul> |
| **rocALUTION** | **Changed** <ul><li>Removed deprecated GlobalPairwiseAMG class, please use PairwiseAMG instead.</li></ul> **Improved** <ul><li>Improved documentation</li></ul> |
| **rocTHRUST** | **Updates** <ul><li>Updated to match upstream Thrust 1.13.0</li><li>Updated to match upstream Thrust 1.14.0</li><li>Added async scan</li></ul> **Changed** <ul><li>Scan algorithms: inclusive\_scan now uses the input-type as accumulator-type, exclusive\_scan uses initial-value-type. This particularly changes behaviour of small-size input types with large-size output types (e.g. short input, int output) and low-res input with high-res output (e.g. float input, double output)</li></ul> |
| **rocSOLVER** | **Added** <ul><li>Symmetric matrix factorizations: <ul><li>LASYF</li><li>SYTF2, SYTRF (with batched and strided\_batched versions)</li></ul></li><li>Added rocsolver\_get\_version\_string\_size to help with version string queries</li><li>Added rocblas\_layer\_mode\_ex and the ability to print kernel calls in the trace and profile logs</li><li>Expanded batched and strided\_batched sample programs.</li></ul> **Optimizations** <ul><li>Improved general performance of LU factorization</li><li>Increased parallelism of specialized kernels when compiling from source, reducing build times on multi-core systems.</li></ul> **Changed** <ul><li>The rocsolver-test client now prints the rocSOLVER version used to run the tests, rather than the version used to build them</li><li>The rocsolver-bench client now prints the rocSOLVER version used in the benchmark</li></ul> **Fixed** <ul><li>Added missing stdint.h include to rocsolver.h</li></ul> |
| **hipSOLVER** | **Added** <ul><li>Added SYTRF functions: hipsolverSsytrf\_bufferSize, hipsolverDsytrf\_bufferSize, hipsolverCsytrf\_bufferSize, hipsolverZsytrf\_bufferSize, hipsolverSsytrf, hipsolverDsytrf, hipsolverCsytrf, hipsolverZsytrf</li></ul> **Fixed** <ul><li>Fixed use of incorrect `HIP_PATH` when building from source</li></ul> |
| **RCCL** | **Added** <ul><li>Compatibility with NCCL 2.10.3</li></ul> **Known issues** <ul><li>Managed memory is not currently supported for clique-based kernels</li></ul> |
| **hipCUB** | **Fixed** <ul><li>Added missing includes to hipcub.hpp</li></ul> **Added** <ul><li>Bfloat16 support to test cases (device\_reduce & device\_radix\_sort)</li><li>Device merge sort</li><li>Block merge sort</li><li>API update to CUB 1.14.0</li></ul> **Changed** <ul><li>The SetupNVCC.cmake automatic target selector selects all of the capabilities of all available cards for the NVIDIA backend.</li></ul> |
| **rocPRIM** | **Fixed** <ul><li>Enable bfloat16 tests and reduce threshold for bfloat16</li><li>Fix device scan limit\_size feature</li><li>Non-optimized builds no longer trigger local memory limit errors</li></ul> **Added** <ul><li>Scan size limit feature</li><li>Reduce size limit feature</li><li>Transform size limit feature</li><li>Add block\_load\_striped and block\_store\_striped</li><li>Add gather\_to\_blocked to gather values from other threads into a blocked arrangement</li><li>The block sizes for device merge sort's initial block sort and its merge steps are now separate in its kernel config (the block sort step supports multiple items per thread)</li></ul> **Changed** <ul><li>size\_limit for scan, reduce and transform can now be set in the config struct instead of a parameter</li><li>device\_scan and device\_segmented\_scan: inclusive\_scan now uses the input-type as accumulator-type, exclusive\_scan uses initial-value-type. This particularly changes behaviour of small-size input types with large-size output types (e.g. short input, int output) and low-res input with high-res output (e.g. float input, double output)</li><li>Revert old Fiji workaround, because the issue was solved at compiler side</li><li>Update README cmake minimum version number</li><li>Block sort supports multiple items per thread. Currently only power-of-two block sizes and items per thread are supported, and only for full blocks</li><li>Bumped the minimum required version of CMake to 3.16</li></ul> **Known issues** <ul><li>Unit tests may soft hang on MI200 when running in hipMallocManaged mode.</li><li>device\_segmented\_radix\_sort and device\_scan unit tests failing for HIP on Windows</li><li>ReduceEmptyInput causes random failures with bfloat16</li><li>Managed memory is not currently supported for clique-based kernels</li></ul> |
## System Management Interface

### Clock Throttling for GPU Events

This feature lists GPU events as they occur in real time and can be used with _kfdtest_ to produce _vm\_fault_ events for testing.

The command can be called with either `-e` or `--showevents`, like this:

    -e [EVENT [EVENT ...]], --showevents [EVENT [EVENT ...]]   Show event list

Where `EVENT` is any list combination of `VM_FAULT`, `THERMAL_THROTTLE`, or `GPU_RESET` and is NOT case sensitive.

**Note:** If no event arguments are passed, all events will be watched by default.

#### CLI Commands

```
$ rocm-smi --showevents vm_fault thermal_throttle gpu_reset

======================= ROCm System Management Interface =======================
================================= Show Events ==================================
press 'q' or 'ctrl + c' to quit
DEVICE TIME TYPE DESCRIPTION

============================= End of ROCm SMI Log ==============================
```

(Run kfdtest in another window to test for vm\_fault events.)

**Note:** Unlike other rocm-smi CLI commands, this command does not quit unless specified by the user. Users may press either `q` or `ctrl + c` to quit.

### Display XGMI Bandwidth Between Nodes

The _rsmi\_minmax\_bandwidth\_get_ API reads the HW Topology file and displays bandwidth (min-max) between any two NUMA nodes in a matrix format.

The Command Line Interface (CLI) command can be called as follows:

```
$ rocm-smi --shownodesbw

======================= ROCm System Management Interface =======================
================================== Bandwidth ===================================
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7
GPU0 N/A 50000-200000 50000-50000 0-0 0-0 0-0 50000-100000 0-0
GPU1 50000-200000 N/A 0-0 50000-50000 0-0 50000-50000 0-0 0-0
GPU2 50000-50000 0-0 N/A 50000-200000 50000-100000 0-0 0-0 0-0
GPU3 0-0 50000-50000 50000-200000 N/A 0-0 0-0 0-0 50000-50000
GPU4 0-0 0-0 50000-100000 0-0 N/A 50000-200000 50000-50000 0-0
GPU5 0-0 50000-50000 0-0 0-0 50000-200000 N/A 0-0 50000-50000
GPU6 50000-100000 0-0 0-0 0-0 50000-50000 0-0 N/A 50000-200000
GPU7 0-0 0-0 0-0 50000-50000 0-0 50000-50000 50000-200000 N/A
Format: min-max; Units: mps
============================= End of ROCm SMI Log ==============================
```

The sample output above shows the maximum theoretical XGMI bandwidth between two NUMA nodes.

**Note:** A "0-0" min-max bandwidth indicates that the devices are not connected directly.
### P2P Connection Status

The _rsmi\_is\_p2p\_accessible_ API returns "True" if P2P can be implemented between two nodes, and returns "False" if P2P cannot be implemented between the two nodes.

The Command Line Interface command can be called as follows:

    rocm-smi --showtopoaccess

Sample output:

```
$ rocm-smi --showtopoaccess
======================= ROCm System Management Interface =======================
===================== Link accessibility between two GPUs ======================
GPU0 GPU1
GPU0 True True
GPU1 True True
============================= End of ROCm SMI Log ==============================
```

# Breaking Changes

## Runtime Breaking Change

The enumerated type in hip\_runtime\_api.h has been re-ordered to better match the NVIDIA equivalents. See below for the difference in enumerated types.

ROCm software will be affected if any of the defined enums listed below are used in the code. Applications built with ROCm v5.0 enumerated types will work with a ROCm 4.5.2 driver. However, undefined behavior will occur when a ROCm v4.5.2 application uses these enumerated types with a ROCm 5.0 runtime.
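For example, a hedged sketch of the kind of code that is sensitive to this re-numbering (the attribute chosen here is just an illustration):

```c
#include <hip/hip_runtime.h>
#include <stdio.h>

int main(void) {
    // Because hipDeviceAttribute_t values were re-numbered in ROCm 5.0, a binary built
    // against ROCm 4.5.2 headers passes a different integer for the same symbolic name,
    // which is why mixing a 4.5.2-built application with a 5.0 runtime is undefined.
    int managed = 0;
    hipDeviceGetAttribute(&managed, hipDeviceAttributeManagedMemory, 0 /* device 0 */);
    printf("managed memory attribute: %d\n", managed);
    return 0;
}
```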
```c
typedef enum hipDeviceAttribute_t {
    hipDeviceAttributeMaxThreadsPerBlock, // Maximum number of threads per block.
    hipDeviceAttributeMaxBlockDimX, // Maximum x-dimension of a block.
    hipDeviceAttributeMaxBlockDimY, // Maximum y-dimension of a block.
    hipDeviceAttributeMaxBlockDimZ, // Maximum z-dimension of a block.
    hipDeviceAttributeMaxGridDimX, // Maximum x-dimension of a grid.
    hipDeviceAttributeMaxGridDimY, // Maximum y-dimension of a grid.
    hipDeviceAttributeMaxGridDimZ, // Maximum z-dimension of a grid.
    hipDeviceAttributeMaxSharedMemoryPerBlock, // Maximum shared memory available per block in bytes.
    hipDeviceAttributeTotalConstantMemory, // Constant memory size in bytes.
    hipDeviceAttributeWarpSize, // Warp size in threads.
    hipDeviceAttributeMaxRegistersPerBlock, // Maximum number of 32-bit registers available to a thread block. This number is shared by all thread blocks simultaneously resident on a multiprocessor.
    hipDeviceAttributeClockRate, // Peak clock frequency in kilohertz.
    hipDeviceAttributeMemoryClockRate, // Peak memory clock frequency in kilohertz.
    hipDeviceAttributeMemoryBusWidth, // Global memory bus width in bits.
    hipDeviceAttributeMultiprocessorCount, // Number of multiprocessors on the device.
    hipDeviceAttributeComputeMode, // Compute mode that device is currently in.
    hipDeviceAttributeL2CacheSize, // Size of L2 cache in bytes. 0 if the device doesn't have L2 cache.
    hipDeviceAttributeMaxThreadsPerMultiProcessor, // Maximum resident threads per multiprocessor.
    hipDeviceAttributeComputeCapabilityMajor, // Major compute capability version number.
    hipDeviceAttributeComputeCapabilityMinor, // Minor compute capability version number.
    hipDeviceAttributeConcurrentKernels, // Device can possibly execute multiple kernels concurrently.
    hipDeviceAttributePciBusId, // PCI Bus ID.
    hipDeviceAttributePciDeviceId, // PCI Device ID.
    hipDeviceAttributeMaxSharedMemoryPerMultiprocessor, // Maximum Shared Memory Per Multiprocessor.
    hipDeviceAttributeIsMultiGpuBoard, // Multiple GPU devices.
    hipDeviceAttributeIntegrated, // iGPU
    hipDeviceAttributeCooperativeLaunch, // Support cooperative launch
    hipDeviceAttributeCooperativeMultiDeviceLaunch, // Support cooperative launch on multiple devices
    hipDeviceAttributeMaxTexture1DWidth, // Maximum number of elements in 1D images
    hipDeviceAttributeMaxTexture2DWidth, // Maximum dimension width of 2D images in image elements
    hipDeviceAttributeMaxTexture2DHeight, // Maximum dimension height of 2D images in image elements
    hipDeviceAttributeMaxTexture3DWidth, // Maximum dimension width of 3D images in image elements
    hipDeviceAttributeMaxTexture3DHeight, // Maximum dimensions height of 3D images in image elements
    hipDeviceAttributeMaxTexture3DDepth, // Maximum dimensions depth of 3D images in image elements
    hipDeviceAttributeCudaCompatibleBegin = 0,
    hipDeviceAttributeHdpMemFlushCntl, // Address of the HDP_MEM_COHERENCY_FLUSH_CNTL register
    hipDeviceAttributeHdpRegFlushCntl, // Address of the HDP_REG_COHERENCY_FLUSH_CNTL register
    hipDeviceAttributeEccEnabled = hipDeviceAttributeCudaCompatibleBegin, // Whether ECC support is enabled.
    hipDeviceAttributeAccessPolicyMaxWindowSize, // Cuda only. The maximum size of the window policy in bytes.
    hipDeviceAttributeAsyncEngineCount, // Cuda only. Asynchronous engines number.
    hipDeviceAttributeCanMapHostMemory, // Whether host memory can be mapped into device address space
    hipDeviceAttributeCanUseHostPointerForRegisteredMem, // Cuda only. Device can access host registered memory at the same virtual address as the CPU
    hipDeviceAttributeClockRate, // Peak clock frequency in kilohertz.
    hipDeviceAttributeComputeMode, // Compute mode that device is currently in.
    hipDeviceAttributeComputePreemptionSupported, // Cuda only. Device supports Compute Preemption.
    hipDeviceAttributeConcurrentKernels, // Device can possibly execute multiple kernels concurrently.
    hipDeviceAttributeConcurrentManagedAccess, // Device can coherently access managed memory concurrently with the CPU
    hipDeviceAttributeCooperativeLaunch, // Support cooperative launch
    hipDeviceAttributeCooperativeMultiDeviceLaunch, // Support cooperative launch on multiple devices
    hipDeviceAttributeDeviceOverlap, // Cuda only. Device can concurrently copy memory and execute a kernel. Deprecated. Use instead asyncEngineCount.
    hipDeviceAttributeDirectManagedMemAccessFromHost, // Host can directly access managed memory on the device without migration
    hipDeviceAttributeGlobalL1CacheSupported, // Cuda only. Device supports caching globals in L1
    hipDeviceAttributeHostNativeAtomicSupported, // Cuda only. Link between the device and the host supports native atomic operations
    hipDeviceAttributeIntegrated, // Device is integrated GPU
    hipDeviceAttributeIsMultiGpuBoard, // Multiple GPU devices.
    hipDeviceAttributeKernelExecTimeout, // Run time limit for kernels executed on the device
    hipDeviceAttributeL2CacheSize, // Size of L2 cache in bytes. 0 if the device doesn't have L2 cache.
    hipDeviceAttributeLocalL1CacheSupported, // caching locals in L1 is supported
    hipDeviceAttributeLuid, // Cuda only. 8-byte locally unique identifier in 8 bytes. Undefined on TCC and non-Windows platforms
    hipDeviceAttributeLuidDeviceNodeMask, // Cuda only. Luid device node mask. Undefined on TCC and non-Windows platforms
    hipDeviceAttributeComputeCapabilityMajor, // Major compute capability version number.
    hipDeviceAttributeManagedMemory, // Device supports allocating managed memory on this system
    hipDeviceAttributeMaxBlocksPerMultiProcessor, // Cuda only. Max block size per multiprocessor
    hipDeviceAttributeMaxBlockDimX, // Max block size in width.
    hipDeviceAttributeMaxBlockDimY, // Max block size in height.
    hipDeviceAttributeMaxBlockDimZ, // Max block size in depth.
    hipDeviceAttributeMaxGridDimX, // Max grid size in width.
    hipDeviceAttributeMaxGridDimY, // Max grid size in height.
    hipDeviceAttributeMaxGridDimZ, // Max grid size in depth.
    hipDeviceAttributeMaxSurface1D, // Maximum size of 1D surface.
    hipDeviceAttributeMaxSurface1DLayered, // Cuda only. Maximum dimensions of 1D layered surface.
    hipDeviceAttributeMaxSurface2D, // Maximum dimension (width, height) of 2D surface.
    hipDeviceAttributeMaxSurface2DLayered, // Cuda only. Maximum dimensions of 2D layered surface.
    hipDeviceAttributeMaxSurface3D, // Maximum dimension (width, height, depth) of 3D surface.
    hipDeviceAttributeMaxSurfaceCubemap, // Cuda only. Maximum dimensions of Cubemap surface.
    hipDeviceAttributeMaxSurfaceCubemapLayered, // Cuda only. Maximum dimension of Cubemap layered surface.
    hipDeviceAttributeMaxTexture1DWidth, // Maximum size of 1D texture.
    hipDeviceAttributeMaxTexture1DLayered, // Cuda only. Maximum dimensions of 1D layered texture.
    hipDeviceAttributeMaxTexture1DLinear, // Maximum number of elements allocatable in a 1D linear texture. Use cudaDeviceGetTexture1DLinearMaxWidth() instead on Cuda.
    hipDeviceAttributeMaxTexture1DMipmap, // Cuda only. Maximum size of 1D mipmapped texture.
    hipDeviceAttributeMaxTexture2DWidth, // Maximum dimension width of 2D texture.
    hipDeviceAttributeMaxTexture2DHeight, // Maximum dimension height of 2D texture.
    hipDeviceAttributeMaxTexture2DGather, // Cuda only. Maximum dimensions of 2D texture if gather operations performed.
    hipDeviceAttributeMaxTexture2DLayered, // Cuda only. Maximum dimensions of 2D layered texture.
    hipDeviceAttributeMaxTexture2DLinear, // Cuda only. Maximum dimensions (width, height, pitch) of 2D textures bound to pitched memory.
    hipDeviceAttributeMaxTexture2DMipmap, // Cuda only. Maximum dimensions of 2D mipmapped texture.
    hipDeviceAttributeMaxTexture3DWidth, // Maximum dimension width of 3D texture.
    hipDeviceAttributeMaxTexture3DHeight, // Maximum dimension height of 3D texture.
    hipDeviceAttributeMaxTexture3DDepth, // Maximum dimension depth of 3D texture.
    hipDeviceAttributeMaxTexture3DAlt, // Cuda only. Maximum dimensions of alternate 3D texture.
    hipDeviceAttributeMaxTextureCubemap, // Cuda only. Maximum dimensions of Cubemap texture
    hipDeviceAttributeMaxTextureCubemapLayered, // Cuda only. Maximum dimensions of Cubemap layered texture.
    hipDeviceAttributeMaxThreadsDim, // Maximum dimension of a block
    hipDeviceAttributeMaxThreadsPerBlock, // Maximum number of threads per block.
    hipDeviceAttributeMaxThreadsPerMultiProcessor, // Maximum resident threads per multiprocessor.
    hipDeviceAttributeMaxPitch, // Maximum pitch in bytes allowed by memory copies
    hipDeviceAttributeMemoryBusWidth, // Global memory bus width in bits.
    hipDeviceAttributeMemoryClockRate, // Peak memory clock frequency in kilohertz.
    hipDeviceAttributeComputeCapabilityMinor, // Minor compute capability version number.
    hipDeviceAttributeMultiGpuBoardGroupID, // Cuda only. Unique ID of device group on the same multi-GPU board
    hipDeviceAttributeMultiprocessorCount, // Number of multiprocessors on the device.
    hipDeviceAttributeName, // Device name.
    hipDeviceAttributePageableMemoryAccess, // Device supports coherently accessing pageable memory without calling hipHostRegister on it
    hipDeviceAttributePageableMemoryAccessUsesHostPageTables, // Device accesses pageable memory via the host's page tables
    hipDeviceAttributePciBusId, // PCI Bus ID.
    hipDeviceAttributePciDeviceId, // PCI Device ID.
    hipDeviceAttributePciDomainID, // PCI Domain ID.
    hipDeviceAttributePersistingL2CacheMaxSize, // Cuda11 only. Maximum l2 persisting lines capacity in bytes
    hipDeviceAttributeMaxRegistersPerBlock, // 32-bit registers available to a thread block. This number is shared by all thread blocks simultaneously resident on a multiprocessor.
    hipDeviceAttributeMaxRegistersPerMultiprocessor, // 32-bit registers available per block.
    hipDeviceAttributeReservedSharedMemPerBlock, // Cuda11 only. Shared memory reserved by CUDA driver per block.
    hipDeviceAttributeMaxSharedMemoryPerBlock, // Maximum shared memory available per block in bytes.
    hipDeviceAttributeSharedMemPerBlockOptin, // Cuda only. Maximum shared memory per block usable by special opt in.
    hipDeviceAttributeSharedMemPerMultiprocessor, // Cuda only. Shared memory available per multiprocessor.
    hipDeviceAttributeSingleToDoublePrecisionPerfRatio, // Cuda only. Performance ratio of single precision to double precision.
    hipDeviceAttributeStreamPrioritiesSupported, // Cuda only. Whether to support stream priorities.
    hipDeviceAttributeSurfaceAlignment, // Cuda only. Alignment requirement for surfaces
    hipDeviceAttributeTccDriver, // Cuda only. Whether device is a Tesla device using TCC driver
    hipDeviceAttributeTextureAlignment, // Alignment requirement for textures
    hipDeviceAttributeTexturePitchAlignment, // Pitch alignment requirement for 2D texture references bound to pitched memory;
    hipDeviceAttributeTotalConstantMemory, // Constant memory size in bytes.
    hipDeviceAttributeTotalGlobalMem, // Global memory available on device.
    hipDeviceAttributeUnifiedAddressing, // Cuda only. A unified address space shared with the host.
    hipDeviceAttributeUuid, // Cuda only. Unique ID in 16 byte.
    hipDeviceAttributeWarpSize, // Warp size in threads.
    hipDeviceAttributeMaxPitch, // Maximum pitch in bytes allowed by memory copies
    hipDeviceAttributeTextureAlignment, // Alignment requirement for textures
    hipDeviceAttributeTexturePitchAlignment, // Pitch alignment requirement for 2D texture references bound to pitched memory;
    hipDeviceAttributeKernelExecTimeout, // Run time limit for kernels executed on the device
    hipDeviceAttributeCanMapHostMemory, // Device can map host memory into device address space
    hipDeviceAttributeEccEnabled, // Device has ECC support enabled
    hipDeviceAttributeCudaCompatibleEnd = 9999,
    hipDeviceAttributeAmdSpecificBegin = 10000,
    hipDeviceAttributeCooperativeMultiDeviceUnmatchedFunc, // Supports cooperative launch on multiple devices with unmatched functions
    hipDeviceAttributeCooperativeMultiDeviceUnmatchedGridDim, // Supports cooperative launch on multiple devices with unmatched grid dimensions
    hipDeviceAttributeCooperativeMultiDeviceUnmatchedBlockDim, // Supports cooperative launch on multiple devices with unmatched block dimensions
    hipDeviceAttributeCooperativeMultiDeviceUnmatchedSharedMem, // Supports cooperative launch on multiple devices with unmatched shared memories
    hipDeviceAttributeAsicRevision, // Revision of the GPU in this device
    hipDeviceAttributeManagedMemory, // Device supports allocating managed memory on this system
    hipDeviceAttributeDirectManagedMemAccessFromHost, // Host can directly access managed memory on the device without migration
    hipDeviceAttributeConcurrentManagedAccess, // Device can coherently access managed memory concurrently with the CPU
    hipDeviceAttributePageableMemoryAccess, // Device supports coherently accessing pageable memory without calling hipHostRegister on it
    hipDeviceAttributePageableMemoryAccessUsesHostPageTables, // Device accesses pageable memory via the host's page tables
    hipDeviceAttributeCanUseStreamWaitValue, // '1' if Device supports hipStreamWaitValue32() and hipStreamWaitValue64(), '0' otherwise.
    hipDeviceAttributeClockInstructionRate = hipDeviceAttributeAmdSpecificBegin, // Frequency in khz of the timer used by the device-side "clock"
    hipDeviceAttributeArch, // Device architecture
    hipDeviceAttributeMaxSharedMemoryPerMultiprocessor, // Maximum Shared Memory Per Multiprocessor.
    hipDeviceAttributeGcnArch, // Device gcn architecture
    hipDeviceAttributeGcnArchName, // Device gcnArch name in 256 bytes
    hipDeviceAttributeHdpMemFlushCntl, // Address of the HDP_MEM_COHERENCY_FLUSH_CNTL register
    hipDeviceAttributeHdpRegFlushCntl, // Address of the HDP_REG_COHERENCY_FLUSH_CNTL register
    hipDeviceAttributeCooperativeMultiDeviceUnmatchedFunc, // Supports cooperative launch on multiple devices with unmatched functions
    hipDeviceAttributeCooperativeMultiDeviceUnmatchedGridDim, // Supports cooperative launch on multiple devices with unmatched grid dimensions
    hipDeviceAttributeCooperativeMultiDeviceUnmatchedBlockDim, // Supports cooperative launch on multiple devices with unmatched block dimensions
    hipDeviceAttributeCooperativeMultiDeviceUnmatchedSharedMem, // Supports cooperative launch on multiple devices with unmatched shared memories
    hipDeviceAttributeIsLargeBar, // Whether it is LargeBar
    hipDeviceAttributeAsicRevision, // Revision of the GPU in this device
    hipDeviceAttributeCanUseStreamWaitValue, // '1' if Device supports hipStreamWaitValue32() and hipStreamWaitValue64(), '0' otherwise.
    hipDeviceAttributeAmdSpecificEnd = 19999,
    hipDeviceAttributeVendorSpecificBegin = 20000, // Extended attributes for vendors
} hipDeviceAttribute_t;
```
# Known Issues in This Release

## Incorrect dGPU Behavior When Using AMDVBFlash Tool

The AMDVBFlash tool, used for flashing the VBIOS image to a dGPU, does not communicate with the ROM Controller specifically when the driver is present. This is because the driver, as part of its runtime power management feature, puts the dGPU into a sleep state.

As a workaround, users can set the kernel module parameter `amdgpu.runpm=0`, which temporarily disables the runtime power management feature of the driver and dynamically changes some power control-related sysfs files.
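One possible way to apply the parameter on a GRUB-based system is sketched below; this is an assumption about the user's boot setup rather than an AMD-documented procedure, so adapt it to your distribution:

```
# Add amdgpu.runpm=0 to the kernel command line in /etc/default/grub
# (GRUB_CMDLINE_LINUX_DEFAULT="... amdgpu.runpm=0"), then regenerate the config and reboot:
sudo update-grub        # Ubuntu/Debian
# sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # RHEL/CentOS/SLES
sudo reboot
```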
## Issue with START Timestamp in ROCProfiler

Users may encounter an issue with the enabled timestamp functionality for monitoring one or multiple counters. ROCProfiler outputs the following four timestamps for each kernel:

- Dispatch
- Start
- End
- Complete

**Issue**

This defect is related to the Start timestamp functionality, which incorrectly shows an earlier time than the Dispatch timestamp.

To reproduce the issue:

1. Enable timing using the `--timestamp on` flag.
2. Use the `-i` option with the input filename that contains the name of the counter(s) to monitor.
3. Run the program.
4. Check the output result file.

**Current behavior**

BeginNS is lower than DispatchNS, which is incorrect.

**Expected behavior**

The correct order is:

_Dispatch < Start < End < Complete_

Users cannot use ROCProfiler to measure the time spent on each kernel because of the incorrect timestamp with counter collection enabled.

**Recommended Workaround**

Users are recommended to collect kernel execution timestamps without monitoring counters, as follows:

1. Enable timing using the `--timestamp on` flag, and run the application.
2. Rerun the application using the `-i` option with the input filename that contains the name of the counter(s) to monitor, and save this to a different output file using the `-o` flag.
3. Check the output result file from step 1.
4. The order of timestamps correctly displays as:

   _DispatchNS < BeginNS < EndNS < CompleteNS_

5. Users can find the values of the collected counters in the output file generated in step 2.
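A compact sketch of this two-pass workaround on the command line (the application name and file names are placeholders; the flags are the ones referenced in the steps above):

```
# Pass 1: timestamps only, used for timing analysis
rocprof --timestamp on -o timestamps.csv ./my_app

# Pass 2: counter collection, used only for the counter values
rocprof --timestamp on -i counters.txt -o counters.csv ./my_app
```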
## Radeon Pro V620 and W6800 Workstation GPUs

### No Support for SMI and ROCDebugger on SRIOV

System Management Interface (SMI) and ROCDebugger are not supported in the SRIOV environment on any GPU. For more information, refer to the Systems Management Interface documentation.

# Deprecations and Warnings in This Release

## ROCm Libraries Changes – Deprecations and Deprecation Removal

- The hipfft.h header is now provided only by the hipfft package. Up to ROCm 5.0, users would get hipfft.h from the rocfft package too.
- The GlobalPairwiseAMG class is now entirely removed; users should use the PairwiseAMG class instead.
- The rocsparse\_spmm signature in 5.0 was changed to match that of rocsparse\_spmm\_ex. In 5.0, rocsparse\_spmm\_ex is still present, but deprecated. The signature diff for rocsparse\_spmm is shown below.

### _rocsparse\_spmm_ in 5.0
```c
rocsparse_status rocsparse_spmm(rocsparse_handle            handle,
                                rocsparse_operation         trans_A,
                                rocsparse_operation         trans_B,
                                const void*                 alpha,
                                const rocsparse_spmat_descr mat_A,
                                const rocsparse_dnmat_descr mat_B,
                                const void*                 beta,
                                const rocsparse_dnmat_descr mat_C,
                                rocsparse_datatype          compute_type,
                                rocsparse_spmm_alg          alg,
                                rocsparse_spmm_stage        stage,
                                size_t*                     buffer_size,
                                void*                       temp_buffer);
```
### _rocsparse\_spmm_ in 4.0

```c
rocsparse_status rocsparse_spmm(rocsparse_handle            handle,
                                rocsparse_operation         trans_A,
                                rocsparse_operation         trans_B,
                                const void*                 alpha,
                                const rocsparse_spmat_descr mat_A,
                                const rocsparse_dnmat_descr mat_B,
                                const void*                 beta,
                                const rocsparse_dnmat_descr mat_C,
                                rocsparse_datatype          compute_type,
                                rocsparse_spmm_alg          alg,
                                size_t*                     buffer_size,
                                void*                       temp_buffer);
```
## HIP API Deprecations and Warnings

### Warning - Arithmetic Operators of HIP Complex and Vector Types

In this release, arithmetic operators of HIP complex and vector types are deprecated.

- As alternatives to arithmetic operators of HIP complex types, users can use arithmetic operators of std::complex types.
- As alternatives to arithmetic operators of HIP vector types, users can use the operators of the native clang vector type associated with the data member of HIP vector types.

During the deprecation period, two macros, `__HIP_ENABLE_COMPLEX_OPERATORS` and `__HIP_ENABLE_VECTOR_OPERATORS`, are provided to allow users to conditionally enable arithmetic operators of HIP complex or vector types.

Note that the two macros are mutually exclusive and, by default, are set to off.
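For instance, a hedged sketch of opting back in at compile time (the macro name comes from the text above; the source file name is a placeholder):

```
hipcc -D__HIP_ENABLE_COMPLEX_OPERATORS my_complex_kernels.cpp -o my_complex_kernels
```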
The arithmetic operators of HIP complex and vector types will be removed in a future release.

Refer to the HIP API Guide for more information.

### Refactor of HIPCC/HIPCONFIG

In prior ROCm releases, by default, the hipcc/hipconfig Perl scripts were used to identify and set target compiler options, target platform, compiler, and runtime appropriately.

In ROCm v5.0, hipcc.bin and hipconfig.bin have been added as compiled binary implementations of hipcc and hipconfig. These new binaries are currently a work in progress and are considered experimental. ROCm plans to fully transition to hipcc.bin and hipconfig.bin in a future ROCm release. The existing hipcc and hipconfig Perl scripts are renamed to hipcc.pl and hipconfig.pl, respectively. New top-level hipcc and hipconfig Perl scripts are created, which can switch between the Perl script and the compiled binary based on the environment variable `HIPCC_USE_PERL_SCRIPT`.

In ROCm 5.0, this environment variable is set by default so that hipcc and hipconfig use the Perl scripts.

The Perl scripts will be removed from ROCm in a future release.
## Warning - Compiler-Generated Code Object Version 4 Deprecation

Support for loading compiler-generated code object version 4 will be deprecated in a future release with no release announcement and replaced with code object 5 as the default version.

The current default is code object version 4.
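As a hedged illustration, selecting a code object version explicitly at compile time might look like the following; the `-mcode-object-version` flag is an assumption based on the ROCm LLVM toolchain, so confirm it for your compiler version:

```
hipcc -mcode-object-version=5 my_kernel.cpp -o my_kernel
```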
## Warning - MIOpenTensile Deprecation

MIOpenTensile will be deprecated in a future release.

Archived Documentation
----------------------

Older ROCm documentation is archived at https://rocmdocs.amd.com.
# Disclaimer
|
||||
|
||||
The information presented in this document is for informational purposes only and may contain technical inaccuracies, omissions, and typographical errors. The information contained herein is subject to change and may be rendered inaccurate for many reasons, including but not limited to product and roadmap changes, component and motherboard versionchanges, new model and/or product releases, product differences between differing manufacturers, software changes, BIOS flashes, firmware upgrades, or the like. Any computer system has risks of security vulnerabilities that cannot be completely prevented or mitigated.AMD assumes no obligation to update or otherwise correct or revise this information. However, AMD reserves the right to revise this information and to make changes from time to time to the content hereof without obligation of AMD to notify any person of such revisions or changes.THIS INFORMATION IS PROVIDED ‘AS IS.” AMD MAKES NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE CONTENTS HEREOF AND ASSUMES NO RESPONSIBILITY FOR ANY INACCURACIES, ERRORS, OR OMISSIONS THAT MAY APPEAR IN THIS INFORMATION. AMD SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR ANY PARTICULAR PURPOSE. IN NO EVENT WILL AMD BE LIABLE TO ANY PERSON FOR ANY RELIANCE, DIRECT, INDIRECT, SPECIAL, OR OTHER CONSEQUENTIAL DAMAGES ARISING FROM THE USE OF ANY INFORMATION CONTAINED HEREIN, EVEN IF AMD IS EXPRESSLY ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.AMD, the AMD Arrow logo, and combinations thereof are trademarks of Advanced Micro Devices, Inc.Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
|
||||
© 2021 Advanced Micro Devices, Inc. All rights reserved.
|
||||
|
||||
## Third-party Disclaimer
|
||||
Third-party content is licensed to you directly by the third party that owns the content and is not licensed to you by AMD. ALL LINKED THIRD-PARTY CONTENT IS PROVIDED “AS IS” WITHOUT A WARRANTY OF ANY KIND. USE OF SUCH THIRD-PARTY CONTENT IS DONE AT YOUR SOLE DISCRETION AND UNDER NO CIRCUMSTANCES WILL AMD BE LIABLE TO YOU FOR ANY THIRD-PARTY CONTENT. YOU ASSUME ALL RISK AND ARE SOLELY RESPONSIBLE FOR ANY DAMAGES THAT MAY ARISE FROM YOUR USE OF THIRD-PARTY CONTENT.
|
||||
21
LICENSE
Normal file
@@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2022 Advanced Micro Devices, Inc. All rights reserved.
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
469
README.md
@@ -1,448 +1,93 @@
|
||||
# ROCm™ Repository Updates
|
||||
This repository contains the manifest file for ROCm™ releases, changelogs, and release information. The file default.xml contains information for all repositories and the associated commit used to build the current ROCm release.
|
||||
|
||||
# AMD ROCm Release Notes v3.7.0
|
||||
The default.xml file uses the repo Manifest format.
|
||||
|
||||
This page describes the features, fixed issues, and information about downloading and installing the ROCm software.
|
||||
It also covers known issues and deprecated features in this release.
|
||||
# ROCm v5.4.1 Release Notes
|
||||
ROCm v5.4.1 is now released. For ROCm v5.4.1 documentation, refer to https://docs.amd.com.
|
||||
|
||||
- [Supported Operating Systems and Documentation Updates](#Supported-Operating-Systems-and-Documentation-Updates)
|
||||
* [Supported Operating Systems](#Supported-Operating-Systems)
|
||||
* [AMD ROCm Documentation Updates](#AMD-ROCm-Documentation-Updates)
|
||||
|
||||
- [What\'s New in This Release](#Whats-New-in-This-Release)
|
||||
* [AOMP Enhancements](#AOMP-Enhancements)
|
||||
* [Compatibility with NVIDIA Communications Collective Library v2\.7 API](#Compatibility-with-NVIDIA-Communications-Collective-Library-v27-API)
|
||||
* [Singular Value Decomposition of Bi\-diagonal Matrices](#Singular-Value-Decomposition-of-Bi-diagonal-Matrices)
|
||||
* [rocSPARSE_gemmi\() Operations for Sparse Matrices](#rocSPARSE_gemmi-Operations-for-Sparse-Matrices)
|
||||
|
||||
|
||||
- [Known Issues](#Known-Issues)
|
||||
# ROCm v5.4 Release Notes
|
||||
ROCm v5.4 is now released. For ROCm v5.4 documentation, refer to https://docs.amd.com.
|
||||
|
||||
- [Deploying ROCm](#Deploying-ROCm)
|
||||
|
||||
- [Hardware and Software Support](#Hardware-and-Software-Support)
|
||||
# ROCm v5.3.3 Release Notes
|
||||
ROCm v5.3.3 is now released. For ROCm v5.3.3 documentation, refer to https://docs.amd.com.
|
||||
|
||||
- [Machine Learning and High Performance Computing Software Stack for AMD GPU](#Machine-Learning-and-High-Performance-Computing-Software-Stack-for-AMD-GPU)
|
||||
* [ROCm Binary Package Structure](#ROCm-Binary-Package-Structure)
|
||||
* [ROCm Platform Packages](#ROCm-Platform-Packages)
|
||||
|
||||
# ROCm v5.3.2 Release Notes
|
||||
ROCm v5.3.2 is now released. For ROCm v5.3.2 documentation, refer to https://docs.amd.com.
|
||||
|
||||
# ROCm v5.3 Release Notes
|
||||
ROCm v5.3 is now released. For ROCm v5.3 documentation, refer to https://docs.amd.com.
|
||||
|
||||
# Supported Operating Systems
|
||||
# ROCm v5.2.3 Release Notes
|
||||
The ROCm v5.2.3 patch release is now available. The details are listed below. Highlights of this release include enhancements in RCCL version compatibility and minor bug fixes in the HIP Runtime.
|
||||
|
||||
## Support for Ubuntu 20.04
|
||||
Additionally, ROCm releases will return to use of the [ROCm](https://github.com/RadeonOpenCompute/ROCm) repository for version-controlled release notes henceforth.
|
||||
|
||||
**NOTE**: This release of ROCm is validated with the AMDGPU release v22.20.1.
|
||||
|
||||
In this release, AMD ROCm extends support to Ubuntu 20.04, including dual-kernel.
|
||||
All users of the ROCm v5.2.1 release and below are encouraged to upgrade. Refer to https://docs.amd.com for documentation associated with this release.
|
||||
|
||||
## List of Supported Operating Systems
|
||||
|
||||
The AMD ROCm v3.7.x platform is designed to support the following operating systems:
|
||||
## Introducing Preview Support for Ubuntu 20.04.5 HWE
|
||||
|
||||
* Ubuntu 20.04 and 18.04.4 (Kernel 5.3)
|
||||
* CentOS 7.8 & RHEL 7.8 (Kernel 3.10.0-1127) (Using devtoolset-7 runtime support)
|
||||
* CentOS 8.2 & RHEL 8.2 (Kernel 4.18.0 ) (devtoolset is not required)
|
||||
* SLES 15 SP1
|
||||
Refer to the following article for information on the preview support for Ubuntu 20.04.5 HWE.
|
||||
|
||||
## Fresh Installation of AMD ROCm v3.7 Recommended
|
||||
A fresh and clean installation of AMD ROCm v3.7 is recommended. An upgrade from previous releases to AMD ROCm v3.7 is not supported.
|
||||
https://www.amd.com/en/support/kb/release-notes/rn-amdgpu-unified-linux-22-20
|
||||
|
||||
For more information, refer to the AMD ROCm Installation Guide at:
|
||||
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
|
||||
|
||||
**Note**: AMD ROCm release v3.3 or prior releases are not fully compatible with AMD ROCm v3.5 and higher versions. You must perform a fresh ROCm installation if you want to upgrade from AMD ROCm v3.3 or older to 3.5 or higher versions and vice-versa.
|
||||
## Changes in This Release
|
||||
|
||||
### Ubuntu 18.04 End of Life
|
||||
|
||||
# AMD ROCm Documentation Updates
|
||||
Support for Ubuntu 18.04 ends in this release. Future releases of ROCm will not provide prebuilt packages for Ubuntu 18.04.
|
||||
|
||||
## AMD ROCm Installation Guide
|
||||
|
||||
The AMD ROCm Installation Guide in this release includes:
|
||||
### HIP and Other Runtimes
|
||||
|
||||
* Updated Supported Environments
|
||||
* HIP Installation Instructions
|
||||
#### HIP Runtime
|
||||
|
||||
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
|
||||
##### Fixes
|
||||
|
||||
- A bug was discovered in the HIP graph capture implementation in the ROCm v5.2.0 release. If the same kernel is called twice (with different argument values) in a graph capture, the implementation only kept the argument values for the second kernel call.
|
||||
|
||||
## AMD ROCm - HIP Documentation Updates
|
||||
- A bug was introduced in the hiprtc implementation in the ROCm v5.2.0 release. This bug caused the *hiprtcGetLoweredName* call to fail for named expressions with whitespace in it.
|
||||
|
||||
### Texture and Surface Functions
|
||||
The documentation for Texture and Surface functions is updated and available at:
|
||||
**Example:** The named expression ```my_sqrt<complex<double>>``` passed, but ```my_sqrt<complex<double >>``` (with embedded whitespace) failed.
|
||||
|
||||
https://rocmdocs.amd.com/en/latest/Programming_Guides/Kernel_language.html
|
||||
|
||||
### Warp Shuffle Functions
|
||||
The documentation for Warp Shuffle functions is updated and available at:
|
||||
### ROCm Libraries
|
||||
|
||||
https://rocmdocs.amd.com/en/latest/Programming_Guides/Kernel_language.html
|
||||
#### RCCL
|
||||
|
||||
### Compiler Defines and Environment Variables
|
||||
The documentation for the updated HIP Porting Guide is available at:
|
||||
##### Added
|
||||
- Compatibility with NCCL 2.12.10
|
||||
- Packages for test and benchmark executables on all supported OSes using CPack
|
||||
- Adding custom signal handler - opt-in with RCCL_ENABLE_SIGNALHANDLER=1
|
||||
- Additional details provided if Binary File Descriptor library (BFD) is pre-installed.
|
||||
- Adding experimental support for using multiple ranks per device
|
||||
- Requires using a new interface to create communicator (ncclCommInitRankMulti),
|
||||
refer to the interface documentation for details.
|
||||
- To avoid potential deadlocks, users might have to set an environment variable to increase the number of hardware queues. For example, `export GPU_MAX_HW_QUEUES=16`.
|
||||
|
||||
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-porting-guide.html#hip-porting-guide
|
||||
|
||||
|
||||
## AMD ROCm Debug Agent
|
||||
|
||||
ROCm Debug Agent Library
|
||||
|
||||
https://rocmdocs.amd.com/en/latest/ROCm_Tools/rocm-debug-agent.html
|
||||
|
||||
|
||||
## General AMD ROCm Documentation Links
|
||||
|
||||
Access the following links for more information:
|
||||
|
||||
* For AMD ROCm documentation, see
|
||||
|
||||
https://rocmdocs.amd.com/en/latest/
|
||||
|
||||
* For installation instructions on supported platforms, see
|
||||
|
||||
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
|
||||
|
||||
* For AMD ROCm binary structure, see
|
||||
|
||||
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#build-amd-rocm
|
||||
|
||||
* For AMD ROCm Release History, see
|
||||
|
||||
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#amd-rocm-version-history
|
||||
|
||||
# What\'s New in This Release
|
||||
|
||||
## AOMP ENHANCEMENTS
|
||||
|
||||
AOMP is a scripted build of LLVM. It supports OpenMP target offload on AMD GPUs. Since AOMP is a Clang/LLVM compiler, it also supports GPU offloading with HIP, CUDA, and OpenCL.
|
||||
|
||||
The following enhancements are made for AOMP in this release:
|
||||
* OpenMP 5.0 is enabled by default. You can use -fopenmp-version=45 for OpenMP 4.5 compliance (see the compile example at the end of this section)
|
||||
* Restructured to include the ROCm compiler
|
||||
* Bitcode search path using the HIP policy HIP_DEVICE_LIB_PATH and the hip-device-lib command-line option to enable global_free for kmpc_impl_free
|
||||
|
||||
Restructured hostrpc, including:
|
||||
* Replaced hostcall register functions with handlePayload(service, payload). Note: handlePayload has a simple switch to call the correct service handler function.
|
||||
* Removed the WITH_HSA macro
|
||||
* Moved the hostrpc stubs and host fallback functions into a single library and include file. This enables building the stubs as OpenMP C++ source instead of HIP and reorganizes the openmp/libomptarget/hostrpc directory.
|
||||
* Moved hostrpc_invoke.cl to DeviceRTLs/amdgcn.
|
||||
* Generalized the vargs processing in printf to work for any vargs function to execute on the host, including a vargs function that uses a function pointer.
|
||||
* Reorganized files, added global_allocate and global_free.
|
||||
* Fixed llvm TypeID enum to match the current upstream llvm TypeID.
|
||||
* Moved strlen_max function inside the declare target #ifdef _DEVICE_GPU in hostrpc.cpp to resolve linker failure seen in pfspecifier_str smoke test.
|
||||
* Fixed AOMP_GIT_CHECK_BRANCH in aomp_common_vars to not block builds in Red Hat if the repository is on a specific commit hash.
|
||||
* Simplified and reduced the size of openmp host runtime
|
||||
* Switched to default OpenMP 5.0
|
||||
|
||||
For more information, see https://github.com/ROCm-Developer-Tools/aomp
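
As referenced in the enhancements list above, a minimal compile sketch for OpenMP target offload with AOMP; the install path, offload flags, and gfx906 target are illustrative assumptions:

```shell
AOMP=${AOMP:-/usr/lib/aomp}

# Default behavior: OpenMP 5.0
$AOMP/bin/clang -O2 -fopenmp -fopenmp-targets=amdgcn-amd-amdhsa \
  -Xopenmp-target=amdgcn-amd-amdhsa -march=gfx906 myapp.c -o myapp

# Request OpenMP 4.5 compliance instead
$AOMP/bin/clang -O2 -fopenmp -fopenmp-version=45 \
  -fopenmp-targets=amdgcn-amd-amdhsa \
  -Xopenmp-target=amdgcn-amd-amdhsa -march=gfx906 myapp.c -o myapp
```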
|
||||
|
||||
|
||||
## ROCm COMMUNICATIONS COLLECTIVE LIBRARY
|
||||
|
||||
### Compatibility with NVIDIA Communications Collective Library v2\.7 API
|
||||
|
||||
ROCm Communications Collective Library (RCCL) is now compatible with the NVIDIA Communications Collective Library (NCCL) v2.7 API.
|
||||
|
||||
RCCL (pronounced "Rickle") is a stand-alone library of standard collective communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, gather, scatter, and all-to-all. There is also initial support for direct GPU-to-GPU send and receive operations. It has been optimized to achieve high bandwidth on platforms using PCIe, xGMI as well as networking using InfiniBand Verbs or TCP/IP sockets. RCCL supports an arbitrary number of GPUs installed in a single node or multiple nodes, and can be used in either single- or multi-process (e.g., MPI) applications.
|
||||
|
||||
The collective operations are implemented using ring and tree algorithms and have been optimized for throughput and latency. For best performance, small operations can be either batched into larger operations or aggregated through the API.
|
||||
|
||||
For more information about RCCL APIs and compatibility with NCCL v2.7, see
|
||||
https://rccl.readthedocs.io/en/develop/index.html
|
||||
|
||||
|
||||
## Singular Value Decomposition of Bi\-diagonal Matrices
|
||||
|
||||
Rocsolver_bdsqr now computes the Singular Value Decomposition (SVD) of bi-diagonal matrices. It is an auxiliary function for the SVD of general matrices (function rocsolver_gesvd).
|
||||
|
||||
BDSQR computes the singular value decomposition (SVD) of an n-by-n bidiagonal matrix B.
|
||||
|
||||
The SVD of B has the following form:
|
||||
|
||||
B = Ub * S * Vb'
|
||||
where
|
||||
• S is the n-by-n diagonal matrix of singular values of B
|
||||
• the columns of Ub are the left singular vectors of B
|
||||
• the columns of Vb are its right singular vectors
|
||||
|
||||
The computation of the singular vectors is optional; this function accepts input matrices U (of size nu-by-n) and V (of size n-by-nv) that are overwritten with U*Ub and Vb’*V. If nu = 0 no left vectors are computed; if nv = 0 no right vectors are computed.
|
||||
|
||||
Optionally, this function can also compute Ub’*C for a given n-by-nc input matrix C.
|
||||
|
||||
PARAMETERS
|
||||
|
||||
• [in] handle: rocblas_handle.
|
||||
|
||||
• [in] uplo: rocblas_fill.
|
||||
|
||||
Specifies whether B is upper or lower bidiagonal.
|
||||
|
||||
• [in] n: rocblas_int. n >= 0.
|
||||
|
||||
The number of rows and columns of matrix B.
|
||||
|
||||
• [in] nv: rocblas_int. nv >= 0.
|
||||
|
||||
The number of columns of matrix V.
|
||||
|
||||
• [in] nu: rocblas_int. nu >= 0.
|
||||
|
||||
The number of rows of matrix U.
|
||||
|
||||
• [in] nc: rocblas_int. nc >= 0.
|
||||
|
||||
The number of columns of matrix C.
|
||||
|
||||
• [inout] D: pointer to real type. Array on the GPU of dimension n.
|
||||
|
||||
On entry, the diagonal elements of B. On exit, if info = 0, the singular values of B in decreasing order; if info > 0, the diagonal elements of a bidiagonal matrix orthogonally equivalent to B.
|
||||
|
||||
• [inout] E: pointer to real type. Array on the GPU of dimension n-1.
|
||||
|
||||
On entry, the off-diagonal elements of B. On exit, if info > 0, the off-diagonal elements of a bidiagonal matrix orthogonally equivalent to B (if info = 0 this matrix converges to zero).
|
||||
|
||||
• [inout] V: pointer to type. Array on the GPU of dimension ldv*nv.
|
||||
|
||||
On entry, the matrix V. On exit, it is overwritten with Vb’*V. (Not referenced if nv = 0).
|
||||
|
||||
• [in] ldv: rocblas_int. ldv >= n if nv > 0, or ldv >=1 if nv = 0.
|
||||
|
||||
Specifies the leading dimension of V.
|
||||
|
||||
• [inout] U: pointer to type. Array on the GPU of dimension ldu*n.
|
||||
|
||||
On entry, the matrix U. On exit, it is overwritten with U*Ub. (Not referenced if nu = 0).
|
||||
|
||||
• [in] ldu: rocblas_int. ldu >= nu.
|
||||
|
||||
Specifies the leading dimension of U.
|
||||
|
||||
• [inout] C: pointer to type. Array on the GPU of dimension ldc*nc.
|
||||
|
||||
On entry, the matrix C. On exit, it is overwritten with Ub’*C. (Not referenced if nc = 0).
|
||||
|
||||
• [in] ldc: rocblas_int. ldc >= n if nc > 0, or ldc >=1 if nc = 0.
|
||||
|
||||
Specifies the leading dimension of C.
|
||||
|
||||
• [out] info: pointer to a rocblas_int on the GPU.
|
||||
|
||||
If info = 0, successful exit. If info = i > 0, i elements of E have not converged to zero.
|
||||
|
||||
For more information, see
|
||||
https://rocsolver.readthedocs.io/en/latest/userguide_api.html#rocsolver-type-bdsqr
|
||||
|
||||
|
||||
### rocSPARSE_gemmi\() Operations for Sparse Matrices
|
||||
|
||||
This enhancement provides dense matrix–sparse matrix multiplication using the CSR storage format.

rocsparse_gemmi multiplies the scalar α with a dense m×k matrix A and the sparse k×n matrix B, defined in the CSR storage format, and adds the result to the dense m×n matrix C that is multiplied by the scalar β, such that

C := α ⋅ op(A) ⋅ op(B) + β ⋅ C

with

op(A) = A, if trans_A == rocsparse_operation_none
op(A) = A^T, if trans_A == rocsparse_operation_transpose
op(A) = A^H, if trans_A == rocsparse_operation_conjugate_transpose

and

op(B) = B, if trans_B == rocsparse_operation_none
op(B) = B^T, if trans_B == rocsparse_operation_transpose
op(B) = B^H, if trans_B == rocsparse_operation_conjugate_transpose
|
||||
Note: This function is non-blocking and executed asynchronously with the host. It may return before the actual computation has finished.
|
||||
|
||||
For more information and examples, see
|
||||
https://rocsparse.readthedocs.io/en/master/usermanual.html#rocsparse-gemmi
|
||||
|
||||
|
||||
# Known Issues
|
||||
The following are the known issues in this release.
|
||||
|
||||
## (AOMP) ‘Undefined Hidden Symbol’ Linker Error Causes Compilation Failure in HIP
|
||||
|
||||
The HIP example device_lib fails to compile due to unreferenced symbols with Link Time Optimization resulting in ‘undefined hidden symbol’ errors.
|
||||
|
||||
This issue is under investigation and there is no known workaround at this time.
|
||||
|
||||
|
||||
## MIGraphX Fails for fp16 Datatype
|
||||
The MIGraphX functionality does not work for the fp16 datatype.
|
||||
|
||||
The following workaround is recommended:
|
||||
|
||||
Use the MIGraphX version shipped with AMD ROCm v3.3
|
||||
|
||||
Or
|
||||
|
||||
Build MIGraphX v3.7 from the source using AMD ROCm v3.3
|
||||
|
||||
## Missing Google Test Installation May Cause RCCL Unit Test Compilation Failure
|
||||
Users of the RCCL install.sh script may encounter an RCCL unit test compilation error. It is recommended to use CMake directly instead of install.sh to compile RCCL. Ensure Google Test 1.10+ is available in the CMake search path.
|
||||
|
||||
|
||||
As a workaround, use the latest RCCL from the GitHub development branch at:
|
||||
https://github.com/ROCmSoftwarePlatform/rccl/pull/237
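
A hedged sketch of the CMake-based build recommended above; the clone URL matches the repository linked here, while the branch, build directory, and -DCMAKE_PREFIX_PATH value are assumptions to adapt to your system:

```shell
git clone https://github.com/ROCmSoftwarePlatform/rccl.git
cd rccl && mkdir build && cd build
# Google Test 1.10+ and the ROCm install must be discoverable by CMake
cmake -DCMAKE_PREFIX_PATH=/opt/rocm ..
make -j"$(nproc)"
```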
|
||||
|
||||
|
||||
## Issue with Peer-to-Peer Transfers
|
||||
Using peer-to-peer (P2P) transfers on systems without the hardware P2P assistance may produce incorrect results.
|
||||
|
||||
Ensure the hardware supports peer-to-peer transfers and enable the peer-to-peer setting in the hardware to resolve this issue.
|
||||
|
||||
|
||||
## Partial Loss of Tracing Events for Large Applications
|
||||
An internal tracing buffer allocation issue can cause a partial loss of some tracing events for large applications.
|
||||
|
||||
As a workaround, rebuild the roctracer/rocprofiler libraries from the GitHub ‘roc-3.7’ branch at:
|
||||
• https://github.com/ROCm-Developer-Tools/rocprofiler
|
||||
• https://github.com/ROCm-Developer-Tools/roctracer
|
||||
|
||||
|
||||
## GPU Kernel C++ Names Not Demangled
|
||||
GPU kernel C++ names in the profiling traces and stats produced by the '--hsa-trace' option are not demangled.
|
||||
|
||||
As a workaround, users may choose to demangle the GPU kernel C++ names as required.
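
One way to do this is to pipe the generated stats or trace files through a demangler; the sketch below assumes binutils' c++filt is installed and uses a hypothetical results.stats.csv file name:

```shell
# Write a copy of the stats file with kernel names demangled
c++filt < results.stats.csv > results.stats.demangled.csv
```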
|
||||
|
||||
|
||||
## ‘rocprof’ option ‘--parallel-kernels’ Not Supported in This Release
|
||||
|
||||
The 'rocprof' option '--parallel-kernels' is available in the options list; however, it is not fully validated or supported in this release.
|
||||
|
||||
## Random Soft Hang Observed When Running ResNet-Based Models
|
||||
|
||||
A random soft hang is observed when running ResNet-based models for a loop run of more than 25 to 30 hours. The issue is observed on both PyTorch and TensorFlow frameworks.
|
||||
You can terminate the unresponsive process to temporarily resolve the issue.
|
||||
|
||||
There is no known workaround at this time.
|
||||
|
||||
# Deploying ROCm
|
||||
AMD hosts both Debian and RPM repositories for the ROCm v3.7.x packages.
|
||||
|
||||
For more information on ROCM installation on all platforms, see
|
||||
|
||||
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
|
||||
|
||||
# Hardware and Software Support
|
||||
ROCm is focused on using AMD GPUs to accelerate computational tasks such as machine learning, engineering workloads, and scientific computing.
|
||||
In order to focus our development efforts on these domains of interest, ROCm supports a targeted set of hardware configurations which are detailed further in this section.
|
||||
|
||||
#### Supported GPUs
|
||||
Because the ROCm Platform has a focus on particular computational domains, we offer official support for a selection of AMD GPUs that are designed to offer good performance and price in these domains.
|
||||
|
||||
ROCm officially supports AMD GPUs that use the following chips:
|
||||
|
||||
* GFX8 GPUs
|
||||
* "Fiji" chips, such as on the AMD Radeon R9 Fury X and Radeon Instinct MI8
|
||||
* "Polaris 10" chips, such as on the AMD Radeon RX 580 and Radeon Instinct MI6
|
||||
* GFX9 GPUs
|
||||
* "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
|
||||
* "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII
|
||||
|
||||
ROCm is a collection of software ranging from drivers and runtimes to libraries and developer tools.
|
||||
Some of this software may work with more GPUs than the "officially supported" list above, though AMD does not make any official claims of support for these devices on the ROCm software platform.
|
||||
The following list of GPUs are enabled in the ROCm software, though full support is not guaranteed:
|
||||
|
||||
* GFX8 GPUs
|
||||
* "Polaris 11" chips, such as on the AMD Radeon RX 570 and Radeon Pro WX 4100
|
||||
* "Polaris 12" chips, such as on the AMD Radeon RX 550 and Radeon RX 540
|
||||
* GFX7 GPUs
|
||||
* "Hawaii" chips, such as the AMD Radeon R9 390X and FirePro W9100
|
||||
|
||||
As described in the next section, GFX8 GPUs require PCI Express 3.0 (PCIe 3.0) with support for PCIe atomics. This requires both CPU and motherboard support. GFX9 GPUs require PCIe 3.0 with support for PCIe atomics by default, but they can operate in most cases without this capability.
|
||||
|
||||
The integrated GPUs in AMD APUs are not officially supported targets for ROCm.
|
||||
As described [below](#limited-support), "Carrizo", "Bristol Ridge", and "Raven Ridge" APUs are enabled in our upstream drivers and the ROCm OpenCL runtime.
|
||||
However, they are not enabled in the HIP runtime, and may not work due to motherboard or OEM hardware limitations.
|
||||
As such, they are not yet officially supported targets for ROCm.
|
||||
|
||||
For a more detailed list of hardware support, please see [the following documentation](https://rocm.github.io/hardware.html).
|
||||
|
||||
#### Supported CPUs
|
||||
As described above, GFX8 GPUs require PCIe 3.0 with PCIe atomics in order to run ROCm.
|
||||
In particular, the CPU and every active PCIe point between the CPU and GPU require support for PCIe 3.0 and PCIe atomics.
|
||||
The CPU root must indicate PCIe AtomicOp Completion capabilities and any intermediate switch must indicate PCIe AtomicOp Routing capabilities.
|
||||
|
||||
Current CPUs which support PCIe Gen3 + PCIe Atomics are:
|
||||
|
||||
* AMD Ryzen CPUs
|
||||
* The CPUs in AMD Ryzen APUs
|
||||
* AMD Ryzen Threadripper CPUs
|
||||
* AMD EPYC CPUs
|
||||
* Intel Xeon E7 v3 or newer CPUs
|
||||
* Intel Xeon E5 v3 or newer CPUs
|
||||
* Intel Xeon E3 v3 or newer CPUs
|
||||
* Intel Core i7 v4, Core i5 v4, Core i3 v4 or newer CPUs (i.e. Haswell family or newer)
|
||||
* Some Ivy Bridge-E systems
|
||||
|
||||
Beginning with ROCm 1.8, GFX9 GPUs (such as Vega 10) no longer require PCIe atomics.
|
||||
We have similarly opened up more options for number of PCIe lanes.
|
||||
GFX9 GPUs can now be run on CPUs without PCIe atomics and on older PCIe generations, such as PCIe 2.0.
|
||||
This is not supported on GPUs below GFX9, e.g. GFX8 cards in the Fiji and Polaris families.
|
||||
|
||||
If you are using any PCIe switches in your system, please note that PCIe Atomics are only supported on some switches, such as Broadcom PLX.
|
||||
When you install your GPUs, make sure you install them in a PCIe 3.1.0 x16, x8, x4, or x1 slot attached either directly to the CPU's Root I/O controller or via a PCIe switch directly attached to the CPU's Root I/O controller.
|
||||
|
||||
In our experience, many issues stem from trying to use consumer motherboards which provide physical x16 connectors that are electrically connected as e.g. PCIe 2.0 x4, PCIe slots connected via the Southbridge PCIe I/O controller, or PCIe slots connected through a PCIe switch that does not support PCIe atomics.
|
||||
|
||||
If you attempt to run ROCm on a system without proper PCIe atomic support, you may see an error in the kernel log (`dmesg`):
|
||||
```
kfd: skipped device 1002:7300, PCI rejects atomics
```
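
To check whether the driver logged the "PCI rejects atomics" message above for one of your devices, grepping the kernel log is a quick test (a minimal sketch; the exact message text can vary between kernel versions):

```shell
dmesg | grep -i "rejects atomics"
```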
|
||||
- Adding support for reusing ports in NET/IB channels
|
||||
- Opt-in with NCCL_IB_SOCK_CLIENT_PORT_REUSE=1 and NCCL_IB_SOCK_SERVER_PORT_REUSE=1
|
||||
- When "Call to bind failed: Address already in use" error happens in large-scale AlltoAll
|
||||
(for example, >=64 MI200 nodes), users are suggested to opt in to either one or both of the options to resolve the massive port usage issue (see the example after this list)
|
||||
- Avoid using NCCL_IB_SOCK_SERVER_PORT_REUSE when NCCL_NCHANNELS_PER_NET_PEER is tuned >1
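
As noted in the list above, a minimal sketch of opting in to port reuse before launching the job; the environment variable names come from the list, while the launcher command and process count are placeholders:

```shell
export NCCL_IB_SOCK_CLIENT_PORT_REUSE=1
export NCCL_IB_SOCK_SERVER_PORT_REUSE=1
mpirun -np 512 ./alltoall_benchmark   # placeholder launcher and binary
```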
|
||||
|
||||
Experimental support for our Hawaii (GFX7) GPUs (Radeon R9 290, R9 390, FirePro W9100, S9150, S9170)
|
||||
does not require or take advantage of PCIe Atomics. However, we still recommend that you use a CPU
|
||||
from the list provided above for compatibility purposes.
|
||||
##### Removed
|
||||
- Removed experimental clique-based kernels
|
||||
|
||||
#### Not supported or limited support under ROCm
|
||||
##### Limited support
|
||||
### Development Tools
|
||||
No notable changes in this release for development tools, including the compiler, profiler, and debugger.
|
||||
|
||||
* ROCm 2.9.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
|
||||
* Thunderbolt 1, 2, and 3 enabled breakout boxes should now be able to work with ROCm. Thunderbolt 1 and 2 are PCIe 2.0 based, and thus are only supported with GPUs that do not require PCIe 3.1.0 atomics (e.g. Vega 10). However, we have done no testing on this configuration and would need community support due to limited access to this type of equipment.
|
||||
* AMD "Carrizo" and "Bristol Ridge" APUs are enabled to run OpenCL, but do not yet support HIP or our libraries built on top of these compilers and runtimes.
|
||||
* As of ROCm 2.1, "Carrizo" and "Bristol Ridge" require the use of upstream kernel drivers.
|
||||
* In addition, various "Carrizo" and "Bristol Ridge" platforms may not work due to OEM and ODM choices when it comes to key configuration parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
|
||||
* Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.
|
||||
* AMD "Raven Ridge" APUs are enabled to run OpenCL, but do not yet support HIP or our libraries built on top of these compilers and runtimes.
|
||||
* As of ROCm 2.1, "Raven Ridge" requires the use of upstream kernel drivers.
|
||||
* In addition, various "Raven Ridge" platforms may not work due to OEM and ODM choices when it comes to key configuration parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
|
||||
* Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.
|
||||
### Deployment and Management Tools
|
||||
No notable changes in this release for deployment and management tools.
|
||||
|
||||
##### Not supported
|
||||
|
||||
* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported in ROCm 2.9.x
|
||||
* We do not support GFX8-class GPUs (Fiji, Polaris, etc.) on CPUs that do not have PCIe 3.0 with PCIe atomics.
|
||||
* As such, we do not support AMD Carrizo and Kaveri APUs as hosts for such GPUs.
|
||||
* Thunderbolt 1 and 2 enabled GPUs are not supported by GFX8 GPUs on ROCm. Thunderbolt 1 & 2 are based on PCIe 2.0.
|
||||
|
||||
#### ROCm support in upstream Linux kernels
|
||||
|
||||
As of ROCm 1.9.0, the ROCm user-level software is compatible with the AMD drivers in certain upstream Linux kernels.
|
||||
As such, users have the option of either using the ROCK kernel driver that is part of AMD's ROCm repositories or using the upstream driver and only installing ROCm user-level utilities from AMD's ROCm repositories.
|
||||
|
||||
These releases of the upstream Linux kernel support the following GPUs in ROCm:
|
||||
* 4.17: Fiji, Polaris 10, Polaris 11
|
||||
* 4.18: Fiji, Polaris 10, Polaris 11, Vega10
|
||||
* 4.20: Fiji, Polaris 10, Polaris 11, Vega10, Vega 7nm
|
||||
|
||||
The upstream driver may be useful for running ROCm software on systems that are not compatible with the kernel driver available in AMD's repositories.
|
||||
For users that have the option of using either AMD's or the upstreamed driver, there are various tradeoffs to take into consideration:
|
||||
|
||||
| | Using AMD's `rock-dkms` package | Using the upstream kernel driver |
|
||||
| ---- | ------------------------------------------------------------| ----- |
|
||||
| Pros | More GPU features, and they are enabled earlier | Includes the latest Linux kernel features |
|
||||
| | Tested by AMD on supported distributions | May work on other distributions and with custom kernels |
|
||||
| | Supported GPUs enabled regardless of kernel version | |
|
||||
| | Includes the latest GPU firmware | |
|
||||
| Cons | May not work on all Linux distributions or versions | Features and hardware support varies depending on kernel version |
|
||||
| | Not currently supported on kernels newer than 5.4 | Limits GPU's usage of system memory to 3/8 of system memory (before 5.6). For 5.6 and beyond, both DKMS and upstream kernels allow use of 15/16 of system memory. |
|
||||
| | | IPC and RDMA capabilities are not yet enabled |
|
||||
| | | Not tested by AMD to the same level as `rock-dkms` package |
|
||||
| | | Does not include most up-to-date firmware |
|
||||
|
||||
## Machine Learning and High Performance Computing Software Stack for AMD GPU
|
||||
|
||||
For an updated version of the software stack for AMD GPU, see
|
||||
|
||||
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#machine-learning-and-high-performance-computing-software-stack-for-amd-gpu-v3-5-0
|
||||
## Older ROCm™ Releases
|
||||
For release information for older ROCm™ releases, refer to [CHANGELOG](CHANGELOG.md).
|
||||
|
||||
65
default.xml
@@ -1,27 +1,26 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<manifest>
|
||||
<remote name="roc-github"
|
||||
fetch="http://github.com/RadeonOpenCompute/" />
|
||||
fetch="https://github.com/RadeonOpenCompute/" />
|
||||
<remote name="rocm-devtools"
|
||||
fetch="https://github.com/ROCm-Developer-Tools/" />
|
||||
fetch="https://github.com/ROCm-Developer-Tools/" />
|
||||
<remote name="rocm-swplat"
|
||||
fetch="https://github.com/ROCmSoftwarePlatform/" />
|
||||
fetch="https://github.com/ROCmSoftwarePlatform/" />
|
||||
<remote name="gpuopen-libs"
|
||||
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
|
||||
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
|
||||
<remote name="gpuopen-tools"
|
||||
fetch="https://github.com/GPUOpen-Tools/" />
|
||||
fetch="https://github.com/GPUOpen-Tools/" />
|
||||
<remote name="KhronosGroup"
|
||||
fetch="https://github.com/KhronosGroup/" />
|
||||
<default revision="refs/tags/rocm-3.7.0"
|
||||
remote="roc-github"
|
||||
sync-c="true"
|
||||
sync-j="4" />
|
||||
<!--list of projects for ROCM-->
|
||||
fetch="https://github.com/KhronosGroup/" />
|
||||
<default revision="refs/tags/rocm-5.4.1"
|
||||
remote="roc-github"
|
||||
sync-c="true"
|
||||
sync-j="4" />
|
||||
<!--list of projects for ROCM-->
|
||||
<project name="ROCK-Kernel-Driver" />
|
||||
<project name="ROCT-Thunk-Interface" />
|
||||
<project name="ROCR-Runtime" />
|
||||
<project name="ROC-smi" />
|
||||
<project name="rocm_smi_lib" remote="roc-github" />
|
||||
<project name="rocm_smi_lib" />
|
||||
<project name="rocm-cmake" />
|
||||
<project name="rocminfo" />
|
||||
<project name="rocprofiler" remote="rocm-devtools" />
|
||||
@@ -29,29 +28,36 @@
|
||||
<project name="ROCm-OpenCL-Runtime" />
|
||||
<project path="ROCm-OpenCL-Runtime/api/opencl/khronos/icd" name="OpenCL-ICD-Loader" remote="KhronosGroup" revision="6c03f8b58fafd9dd693eaac826749a5cfad515f8" />
|
||||
<project name="clang-ocl" />
|
||||
<!--HIP Projects-->
|
||||
<!--HIP Projects-->
|
||||
<project name="HIP" remote="rocm-devtools" />
|
||||
<project name="hipamd" remote="rocm-devtools" />
|
||||
<project name="HIP-Examples" remote="rocm-devtools" />
|
||||
<project name="ROCclr" remote="rocm-devtools" />
|
||||
<project name="HIPIFY" remote="rocm-devtools" />
|
||||
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
|
||||
<project name="llvm-project" path="llvm_amd-stg-open" />
|
||||
<project name="HIPCC" remote="rocm-devtools" />
|
||||
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
|
||||
<project name="llvm-project" />
|
||||
<project name="ROCm-Device-Libs" />
|
||||
<project name="atmi" />
|
||||
<project name="ROCm-CompilerSupport" />
|
||||
<project name="rocr_debug_agent" remote="rocm-devtools" revision="refs/tags/roc-3.7.0" />
|
||||
<project name="rocr_debug_agent" remote="rocm-devtools" />
|
||||
<project name="rocm_bandwidth_test" />
|
||||
<project name="half" remote="rocm-swplat" revision="37742ce15b76b44e4b271c1e66d13d2fa7bd003e" />
|
||||
<project name="RCP" remote="gpuopen-tools" revision="3a49405a1500067c49d181844ec90aea606055bb" />
|
||||
<!-- gdb projects -->
|
||||
<!-- gdb projects -->
|
||||
<project name="ROCgdb" remote="rocm-devtools" />
|
||||
<project name="ROCdbgapi" remote="rocm-devtools" />
|
||||
<!-- ROCm Libraries -->
|
||||
<!-- ROCm Libraries -->
|
||||
<project name="rdc" />
|
||||
<project name="rocBLAS" remote="rocm-swplat" />
|
||||
<project name="Tensile" remote="rocm-swplat" />
|
||||
<project name="hipBLAS" remote="rocm-swplat" />
|
||||
<project name="rocFFT" remote="rocm-swplat" />
|
||||
<project name="hipFFT" remote="rocm-swplat" />
|
||||
<project name="rocRAND" remote="rocm-swplat" />
|
||||
<project name="rocSPARSE" remote="rocm-swplat" />
|
||||
<project name="rocSOLVER" remote="rocm-swplat" />
|
||||
<project name="hipSOLVER" remote="rocm-swplat" />
|
||||
<project name="hipSPARSE" remote="rocm-swplat" />
|
||||
<project name="rocALUTION" remote="rocm-swplat" />
|
||||
<project name="MIOpenGEMM" remote="rocm-swplat" />
|
||||
@@ -61,19 +67,12 @@
|
||||
<project name="rocThrust" remote="rocm-swplat" />
|
||||
<project name="hipCUB" remote="rocm-swplat" />
|
||||
<project name="rocPRIM" remote="rocm-swplat" />
|
||||
<project name="AMDMIGraphX" remote="rocm-swplat" revision="e66968a25f9342a28af1157b06cbdbf8579c5519" />
|
||||
<project name="rocWMMA" remote="rocm-swplat" />
|
||||
<project name="hipfort" remote="rocm-swplat" />
|
||||
<project name="AMDMIGraphX" remote="rocm-swplat" />
|
||||
<project name="ROCmValidationSuite" remote="rocm-devtools" />
|
||||
<!-- Projects for AOMP -->
|
||||
<project name="ROCT-Thunk-Interface" path="aomp/roct-thunk-interface" remote="roc-github" />
|
||||
<project name="ROCR-Runtime" path="aomp/rocr-runtime" remote="roc-github" />
|
||||
<project name="ROCm-Device-Libs" path="aomp/rocm-device-libs" remote="roc-github" />
|
||||
<project name="ROCm-CompilerSupport" path="aomp/rocm-compilersupport" remote="roc-github" />
|
||||
<project name="rocminfo" path="aomp/rocminfo" remote="roc-github" />
|
||||
<project name="HIP" path="aomp/hip-on-vdi" remote="rocm-devtools" />
|
||||
<project name="aomp" path="aomp/aomp" remote="rocm-devtools" />
|
||||
<project name="aomp-extras" path="aomp/aomp-extras" remote="rocm-devtools" />
|
||||
<project name="flang" path="aomp/flang" remote="rocm-devtools" />
|
||||
<project name="amd-llvm-project" path="aomp/amd-llvm-project" remote="rocm-devtools" />
|
||||
<project name="ROCclr" path="aomp/vdi" remote="rocm-devtools" />
|
||||
<project name="ROCm-OpenCL-Runtime" path="aomp/opencl-on-vdi" remote="roc-github" />
|
||||
<!-- Projects for OpenMP-Extras -->
|
||||
<project name="aomp" path="openmp-extras/aomp" remote="rocm-devtools" />
|
||||
<project name="aomp-extras" path="openmp-extras/aomp-extras" remote="rocm-devtools" />
|
||||
<project name="flang" path="openmp-extras/flang" remote="rocm-devtools" />
|
||||
</manifest>
|
||||
|
||||
@@ -1,503 +0,0 @@
|
||||
## ROCm Version History
|
||||
This file contains archived version history information for the [ROCm project](https://github.com/RadeonOpenCompute/ROCm)
|
||||
|
||||
### Current ROCm Version: 3.3
|
||||
- [New features and enhancements in ROCm v3.1](#new-features-and-enhancements-in-rocm-v31)
|
||||
- [New features and enhancements in ROCm v3.0](#new-features-and-enhancements-in-rocm-v30)
|
||||
- [New features and enhancements in ROCm v2.10](#new-features-and-enhancements-in-rocm-v210)
|
||||
- [New features and enhancements in ROCm 2.9](#new-features-and-enhancements-in-rocm-29)
|
||||
- [New features and enhancements in ROCm 2.8](#new-features-and-enhancements-in-rocm-28)
|
||||
- [New features and enhancements in ROCm 2.7.2](#new-features-and-enhancements-in-rocm-272)
|
||||
- [New features and enhancements in ROCm 2.7](#new-features-and-enhancements-in-rocm-27)
|
||||
- [New features and enhancements in ROCm 2.6](#new-features-and-enhancements-in-rocm-26)
|
||||
- [New features and enhancements in ROCm 2.5](#new-features-and-enhancements-in-rocm-25)
|
||||
- [New features and enhancements in ROCm 2.4](#new-features-and-enhancements-in-rocm-24)
|
||||
- [New features and enhancements in ROCm 2.3](#new-features-and-enhancements-in-rocm-23)
|
||||
- [New features and enhancements in ROCm 2.2](#new-features-and-enhancements-in-rocm-22)
|
||||
- [New features and enhancements in ROCm 2.1](#new-features-and-enhancements-in-rocm-21)
|
||||
- [New features and enhancements in ROCm 2.0](#new-features-and-enhancements-in-rocm-20)
|
||||
- [New features and enhancements in ROCm 1.9.2](#new-features-and-enhancements-in-rocm-192)
|
||||
- [New features and enhancements in ROCm 1.9.2](#new-features-and-enhancements-in-rocm-192-1)
|
||||
- [New features and enhancements in ROCm 1.9.1](#new-features-and-enhancements-in-rocm-191)
|
||||
- [New features and enhancements in ROCm 1.9.0](#new-features-and-enhancements-in-rocm-190)
|
||||
- [New features as of ROCm 1.8.3](#new-features-as-of-rocm-183)
|
||||
- [New features as of ROCm 1.8](#new-features-as-of-rocm-18)
|
||||
- [New Features as of ROCm 1.7](#new-features-as-of-rocm-17)
|
||||
- [New Features as of ROCm 1.5](#new-features-as-of-rocm-15)
|
||||
|
||||
|
||||
## New features and enhancements in ROCm v3.2
|
||||
The AMD ROCm v3.2 release was not productized.
|
||||
|
||||
## New features and enhancements in ROCm v3.1
|
||||
### Change in ROCm Installation Directory Structure
|
||||
|
||||
A fresh installation of the ROCm toolkit installs the packages in the /opt/rocm-<version> folder. Previously, ROCm toolkit packages were installed in the /opt/rocm folder.
|
||||
|
||||
### Reliability, Accessibility, and Serviceability Support for Vega 7nm
|
||||
|
||||
The Reliability, Accessibility, and Serviceability (RAS) support for Vega7nm is now available.
|
||||
|
||||
### SLURM Support for AMD GPU
|
||||
|
||||
SLURM (Simple Linux Utility for Resource Management) is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.
|
||||
|
||||
|
||||
## New features and enhancements in ROCm v3.0
|
||||
### Support for CentOS/RHEL v7.7 <a id="centos-anchor"></a>
|
||||
Support is extended for CentOS/RHEL v7.7 in the ROCm v3.0 release. For more information about the CentOS/RHEL v7.7 release, see:
|
||||
|
||||
[CentOS/RHEL](https://centos.org/forums/viewtopic.php?t=71657)
|
||||
|
||||
|
||||
### Initial distribution of AOMP 0.7-5 in ROCm v3.0 <a id="aomp-anchor"></a>
|
||||
The code base for this release of AOMP is the Clang/LLVM 9.0 sources as of October 8th, 2019. The LLVM-project branch used to build this release is AOMP-191008. It is now locked. With this release, an artifact tarball of the entire source tree is created. This tree includes a Makefile in the root directory used to build AOMP from the release tarball. You can use Spack to build AOMP from this source tarball or build manually without Spack.
|
||||
|
||||
For more information about AOMP 0.7-5, see: [AOMP](https://github.com/ROCm-Developer-Tools/aomp/tree/roc-3.0.0)
|
||||
|
||||
|
||||
### Fast Fourier Transform Updates
|
||||
The Fast Fourier Transform (FFT) is an efficient algorithm for computing the Discrete Fourier Transform. Fast Fourier transforms are used in signal processing, image processing, and many other areas. The following real FFT performance change is made in the ROCm v3.0 release:
|
||||
|
||||
• Implement efficient real/complex 2D transforms for even lengths.
|
||||
|
||||
Other improvements:
|
||||
|
||||
• More 2D test coverage sizes.
|
||||
|
||||
• Fix buffer allocation error for large 1D transforms.
|
||||
|
||||
• C++ compatibility improvements.
|
||||
|
||||
### MemCopy Enhancement for rocProf
|
||||
In the v3.0 release, the rocProf tool is enhanced with an additional capability to dump asynchronous GPU memcopy information into a .csv file. You can use the '-hsa-trace' option to create the results_mcopy.csv file.
|
||||
Future enhancements will include column labels.
|
||||
|
||||
|
||||
### New features and enhancements in ROCm v2.10
|
||||
#### rocBLAS Support for Complex GEMM
|
||||
The rocBLAS library is a gpu-accelerated implementation of the standard Basic Linear Algebra Subroutines (BLAS). rocBLAS is designed to enable you to develop algorithms, including high performance computing, image analysis, and machine learning.
|
||||
|
||||
In the AMD ROCm release v2.10, support is extended to the General Matrix Multiply (GEMM) routine for multiple small matrices processed simultaneously for rocBLAS in AMD Radeon Instinct MI50. Both single and double precision, CGEMM and ZGEMM, are now supported in rocBLAS.
|
||||
|
||||
#### Support for SLES 15 SP1
|
||||
In the AMD ROCm v2.10 release, support is added for SUSE Linux® Enterprise Server (SLES) 15 SP1. SLES is a modular operating system for both multimodal and traditional IT.
|
||||
|
||||
#### Code Marker Support for rocProfiler and rocTracer Libraries
|
||||
Code markers provide the external correlation ID for the calling thread. This function indicates that the calling thread is entering and leaving an external API region.
|
||||
|
||||
### New features and enhancements in ROCm 2.9
|
||||
|
||||
#### Initial release for Radeon Augmentation Library(RALI)
|
||||
The AMD Radeon Augmentation Library (RALI) is designed to efficiently decode and process images from a variety of storage formats and modify them through a processing graph programmable by the user. RALI currently provides C API.
|
||||
|
||||
#### Quantization in MIGraphX v0.4
|
||||
MIGraphX 0.4 introduces support for fp16 and int8 quantization. For additional details, as well as other new MIGraphX features, see [MIGraphX documentation](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/wiki/Getting-started:-using-the-new-features-of-MIGraphX-0.4).
|
||||
|
||||
#### rocSparse csrgemm
|
||||
csrgemm enables the user to perform matrix-matrix multiplication with two sparse matrices in CSR format.
|
||||
|
||||
#### Singularity Support
|
||||
ROCm 2.9 adds support for Singularity container version 2.5.2.
|
||||
|
||||
#### Initial release of rocTX
|
||||
ROCm 2.9 introduces rocTX, which provides a C API for code markup for performance profiling. This initial release of rocTX supports annotation of code ranges and ASCII markers. For an example, see this [code](https://github.com/ROCm-Developer-Tools/roctracer/blob/amd-master/test/MatrixTranspose_test/MatrixTranspose.cpp).
|
||||
|
||||
#### Added support for Ubuntu 18.04.3
|
||||
Ubuntu 18.04.3 is now supported in ROCm 2.9.
### New features and enhancements in ROCm 2.8
|
||||
|
||||
#### Support for NCCL2.4.8 API
|
||||
Implements ncclCommAbort() and ncclCommGetAsyncError() to match the NCCL 2.4.x API
|
||||
|
||||
### New features and enhancements in ROCm 2.7.2
|
||||
|
||||
This release is a hotfix for ROCm release 2.7.
|
||||
|
||||
#### Issues fixed in ROCm 2.7.2
|
||||
|
||||
##### A defect in upgrades from older ROCm releases has been fixed.
|
||||
|
||||
##### rocprofiler --hiptrace and --hsatrace fail to load the roctracer library
|
||||
In ROCm 2.7.2, the defect where rocprofiler --hiptrace and --hsatrace failed to load the roctracer library has been fixed.
|
||||
To generate traces, please also provide the directory path using the -d <$directoryPath> parameter, for example:
|
||||
```shell
|
||||
/opt/rocm/bin/rocprof --hsa-trace -d $PWD/traces /opt/rocm/hip/samples/0_Intro/bit_extract/bit_extract
|
||||
```
|
||||
All traces and results will be saved under the $PWD/traces path.
|
||||
|
||||
#### Upgrading from ROCm 2.7 to 2.7.2
|
||||
|
||||
To upgrade, please remove 2.7 completely as specified [for ubuntu](#how-to-uninstall-from-ubuntu-1604-or-Ubuntu-1804) or [for centos/rhel](#how-to-uninstall-rocm-from-centosrhel-76), and install 2.7.2 as per instructions [install instructions](#installing-from-amd-rocm-repositories)
|
||||
|
||||
#### Other notes
|
||||
|
||||
To use rocprofiler features, the following steps need to be completed before using rocprofiler:
|
||||
|
||||
##### Step-1: Install roctracer
|
||||
|
||||
###### Ubuntu 16.04 or Ubuntu 18.04:
|
||||
|
||||
```shell
|
||||
sudo apt install roctracer-dev
|
||||
```
|
||||
|
||||
###### CentOS/RHEL 7.6:
|
||||
|
||||
```shell
|
||||
sudo yum install roctracer-dev
|
||||
```
|
||||
##### Step-2: Add /opt/rocm/roctracer/lib to LD_LIBRARY_PATH
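
For example:

```shell
export LD_LIBRARY_PATH=/opt/rocm/roctracer/lib:$LD_LIBRARY_PATH
```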
|
||||
|
||||
### New features and enhancements in ROCm 2.7
|
||||
|
||||
#### [rocFFT] Real FFT Functional
|
||||
Improved real/complex 1D even-length transforms of unit stride. Performance improvements of up to 4.5x are observed. Large problem sizes should see approximately 2x.
|
||||
|
||||
#### rocRand Enhancements and Optimizations
|
||||
- Added support for new datatypes: uchar, ushort, half.
|
||||
- Improved performance on "Vega 7nm" chips, such as on the Radeon Instinct MI50
|
||||
- mtgp32 uniform double performance changes due to generation algorithm standardization. Better quality random numbers are now generated with a 30% decrease in performance
|
||||
- Up to 5% performance improvements for other algorithms
|
||||
|
||||
#### RAS
|
||||
Added support for RAS on Radeon Instinct MI50, including:
|
||||
- Memory error detection
|
||||
- Memory error detection counter
|
||||
|
||||
#### ROCm-SMI enhancements
|
||||
Added ROCm-SMI CLI and LIB support for FW version, compute running processes, utilization rates, utilization counter, link error counter, and unique ID.
|
||||
|
||||
### New features and enhancements in ROCm 2.6
|
||||
|
||||
#### ROCmInfo enhancements
|
||||
ROCmInfo was extended to do the following:
|
||||
For ROCr API call errors, including initialization errors, determine whether the error can be explained by one of the following:
- ROCk (driver) is not loaded / available
- User does not have membership in the appropriate group - "video"
- If neither of the above applies, print the error string that is mapped to the returned error code
- If no error string is available, print the error code in hex
|
||||
|
||||
#### Thrust - Functional Support on Vega20
|
||||
ROCm2.6 contains the first official release of rocThrust and hipCUB. rocThrust is a port of thrust, a parallel algorithm library. hipCUB is a port of CUB, a reusable software component library. Thrust/CUB has been ported to the HIP/ROCm platform to use the rocPRIM library. The HIP ported library works on HIP/ROCm platforms.
|
||||
|
||||
Note: The rocThrust and hipCUB libraries replace https://github.com/ROCmSoftwarePlatform/thrust (hip-thrust); that is, hip-thrust has been separated into two libraries, rocThrust and hipCUB. Existing hip-thrust users are encouraged to port their code to rocThrust and/or hipCUB. hip-thrust will be removed from official distribution later this year.
|
||||
|
||||
#### MIGraphX v0.3
|
||||
MIGraphX optimizer adds support to read models frozen from Tensorflow framework. Further details and an example usage at https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/wiki/Getting-started:-using-the-new-features-of-MIGraphX-0.3
|
||||
|
||||
#### MIOpen 2.0
|
||||
- This release contains several new features including an immediate mode for selecting convolutions, bfloat16 support, new layers, modes, and algorithms.
|
||||
- MIOpenDriver, a tool for benchmarking and developing kernels is now shipped with MIOpen.
|
||||
- BFloat16 is now supported in HIP; it requires an updated rocBLAS as a GEMM backend.
|
||||
- Immediate mode API now provides the ability to quickly obtain a convolution kernel.
|
||||
- MIOpen now contains HIP source kernels and implements the ImplicitGEMM kernels. This is a new feature and is currently disabled by default. Use the environment variable "MIOPEN_DEBUG_CONV_IMPLICIT_GEMM=1" to activate this feature (see the example after this section). ImplicitGEMM requires an up-to-date HIP version of at least 1.5.9211.
|
||||
- A new "loss" category of layers has been added, of which CTC loss is the first. See the API reference for more details.
|
||||
- 2.0 is the last release of active support for gfx803 architectures. In future releases, MIOpen will not actively debug and develop new features specifically for gfx803.
|
||||
- System Find-Db in memory cache is disabled by default. Please see build instructions to enable this feature.
|
||||
Additional documentation can be found here: https://rocmsoftwareplatform.github.io/MIOpen/doc/html/
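
As referenced in the list above, enabling the ImplicitGEMM kernels is an environment-variable opt-in, for example:

```shell
export MIOPEN_DEBUG_CONV_IMPLICIT_GEMM=1
```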
|
||||
|
||||
#### BFloat16 software support in rocBLAS/Tensile
|
||||
Added mixed precision bfloat16/IEEE f32 to gemm_ex. The input and output matrices are bfloat16. All arithmetic is in IEEE f32.
|
||||
|
||||
#### AMD Infinity Fabric™ Link enablement
|
||||
The ability to connect four Radeon Instinct MI60 or Radeon Instinct MI50 boards in two hives or two Radeon Instinct MI60 or Radeon Instinct MI50 boards in four hives via AMD Infinity Fabric™ Link GPU interconnect technology has been added.
|
||||
|
||||
#### ROCm-smi features and bug fixes
|
||||
- mGPU & Vendor check
|
||||
- Fix clock printout if DPM is disabled
|
||||
- Fix finding marketing info on CentOS
|
||||
- Clarify some error messages
|
||||
|
||||
#### ROCm-smi-lib enhancements
|
||||
- Documentation updates
|
||||
- Improvements to *name_get functions
|
||||
|
||||
#### RCCL2 Enablement
|
||||
RCCL2 supports collective intranode communication using PCIe, Infinity Fabric™, and pinned host memory, as well as internode communication using Ethernet (TCP/IP sockets) and InfiniBand/RoCE (InfiniBand Verbs). Note: For InfiniBand/RoCE, RDMA is not currently supported.
|
||||
|
||||
#### rocFFT enhancements
|
||||
- Added: Debian package with FFT test, benchmark, and sample programs
|
||||
- Improved: hipFFT interfaces
|
||||
- Improved: rocFFT CPU reference code, plan generation code and logging code
|
||||
|
||||
### New features and enhancements in ROCm 2.5
|
||||
|
||||
#### UCX 1.6 support
|
||||
Support for UCX version 1.6 has been added.
|
||||
|
||||
#### BFloat16 GEMM in rocBLAS/Tensile
|
||||
Software support for BFloat16 on Radeon Instinct MI50, MI60 has been added. This includes:
|
||||
- Mixed precision GEMM with BFloat16 input and output matrices, and all arithmetic in IEEE32 bit
|
||||
- Input matrix values are converted from BFloat16 to IEEE32 bit, all arithmetic and accumulation is IEEE32 bit. Output values are rounded from IEEE32 bit to BFloat16
|
||||
- Accuracy should be correct to 0.5 ULP
|
||||
|
||||
#### ROCm-SMI enhancements
|
||||
CLI support for querying the memory size, driver version, and firmware version has been added to ROCm-smi.
|
||||
|
||||
#### [PyTorch] multi-GPU functional support (CPU aggregation/Data Parallel)
|
||||
Multi-GPU support is enabled in PyTorch using Dataparallel path for versions of PyTorch built using the 06c8aa7a3bbd91cda2fd6255ec82aad21fa1c0d5 commit or later.
|
||||
|
||||
#### rocSparse optimization on Radeon Instinct MI50 and MI60
|
||||
This release includes performance optimizations for csrsv routines in the rocSparse library.
|
||||
|
||||
#### [Thrust] Preview
|
||||
Preview release for early adopters. rocThrust is a port of thrust, a parallel algorithm library. Thrust has been ported to the HIP/ROCm platform to use the rocPRIM library. The HIP ported library works on HIP/ROCm platforms.
|
||||
|
||||
Note: This library will replace https://github.com/ROCmSoftwarePlatform/thrust in a future release. The package for rocThrust (this library) currently conflicts with version 2.5 package of thrust. They should not be installed together.
|
||||
|
||||
#### Support overlapping kernel execution in same HIP stream
|
||||
HIP API has been enhanced to allow independent kernels to run in parallel on the same stream.
|
||||
|
||||
#### AMD Infinity Fabric™ Link enablement
|
||||
The ability to connect four Radeon Instinct MI60 or Radeon Instinct MI50 boards in one hive via AMD Infinity Fabric™ Link GPU interconnect technology has been added.
|
||||
### New features and enhancements in ROCm 2.4
|
||||
|
||||
#### TensorFlow 2.0 support
|
||||
ROCm 2.4 includes the enhanced compilation toolchain and a set of bug fixes to support TensorFlow 2.0 features natively
|
||||
|
||||
#### AMD Infinity Fabric™ Link enablement
|
||||
ROCm 2.4 adds support to connect two Radeon Instinct MI60 or Radeon Instinct MI50 boards via AMD Infinity Fabric™ Link GPU interconnect technology.
|
||||
|
||||
### New features and enhancements in ROCm 2.3
|
||||
|
||||
#### Mem usage per GPU
|
||||
Per GPU memory usage is added to rocm-smi.
|
||||
Display information regarding used/total bytes for VRAM, visible VRAM, and GTT via the --showmeminfo flag.
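
For example; the memory-type arguments shown are an assumption, so check `rocm-smi --help` for the accepted values:

```shell
/opt/rocm/bin/rocm-smi --showmeminfo vram vis_vram gtt
```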
|
||||
|
||||
#### MIVisionX, v1.1 - ONNX
|
||||
ONNX parser changes to adjust to new file formats
|
||||
|
||||
#### MIGraphX, v0.2
|
||||
MIGraphX 0.2 supports the following new features:
|
||||
* New Python API
|
||||
* Support for additional ONNX operators and fixes that now enable a large set of Imagenet models
|
||||
* Support for RNN Operators
|
||||
* Support for multi-stream Execution
|
||||
* [Experimental] Support for Tensorflow frozen protobuf files
|
||||
|
||||
See: [Getting-started:-using-the-new-features-of-MIGraphX-0.2](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/wiki/Getting-started:-using-the-new-features-of-MIGraphX-0.2) for more details
|
||||
|
||||
#### MIOpen, v1.8 - 3d convolutions and int8

* This release contains full 3-D convolution support and int8 support for inference.
* Additionally, there are major updates in the performance database for major models, including those found in Torchvision.

See [MIOpen releases](https://github.com/ROCmSoftwarePlatform/MIOpen/releases) for details.

#### Caffe2 - mGPU support

Multi-GPU support is enabled for Caffe2.

#### rocTracer library, ROCm tracing API for collecting runtime API and asynchronous GPU activity traces

HIP/HCC domain support is introduced in the rocTracer library.

#### BLAS - Int8 GEMM performance, Int8 functional and performance

Introduces support and performance optimizations for Int8 GEMM, implements TRSV support, and includes improvements and optimizations with Tensile.

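As an example of the newly added TRSV interface, here is a minimal sketch that solves a small upper-triangular system with `rocblas_strsv` (error checking omitted; the header path and enum spellings follow the public rocBLAS API and may vary slightly across rocBLAS versions):

```cpp
#include <hip/hip_runtime.h>
#include <rocblas.h>
#include <cstdio>

int main() {
    // Solve the 2x2 upper-triangular system A * x = b in place on the GPU.
    // rocBLAS uses column-major storage, like BLAS.
    const int n = 2;
    float hA[n * n] = {2.0f, 0.0f,   // column 0
                       1.0f, 4.0f};  // column 1
    float hx[n]     = {4.0f, 8.0f};  // right-hand side b, overwritten with x

    float *dA = nullptr, *dx = nullptr;
    hipMalloc(&dA, sizeof(hA));
    hipMalloc(&dx, sizeof(hx));
    hipMemcpy(dA, hA, sizeof(hA), hipMemcpyHostToDevice);
    hipMemcpy(dx, hx, sizeof(hx), hipMemcpyHostToDevice);

    rocblas_handle handle;
    rocblas_create_handle(&handle);
    rocblas_strsv(handle, rocblas_fill_upper, rocblas_operation_none,
                  rocblas_diagonal_non_unit, n, dA, n, dx, 1);

    hipMemcpy(hx, dx, sizeof(hx), hipMemcpyDeviceToHost);
    std::printf("x = [%g, %g]\n", hx[0], hx[1]);  // expect [1, 2]

    rocblas_destroy_handle(handle);
    hipFree(dA);
    hipFree(dx);
    return 0;
}
```
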
#### Prioritized L1/L2/L3 BLAS (functional)

Functional implementation of BLAS L1/L2/L3 functions.

#### BLAS - Tensile optimization

Improvements and optimizations with Tensile.

#### MIOpen Int8 support

Support for int8 has been added to MIOpen.

### New features and enhancements in ROCm 2.2

#### rocSparse Optimization on Vega20

Cache usage optimizations for csrsv (sparse triangular solve), coomv (SpMV in COO format), and ellmv (SpMV in ELL format) are available.

#### DGEMM and DTRSM Optimization

Improved DGEMM performance for reduced matrix sizes (k=384, k=256).

#### Caffe2

Added support for multi-GPU training.

### New features and enhancements in ROCm 2.1

#### rocTracer v1.0 preview release – 'rocprof' HSA runtime tracing and statistics support

Supports HSA API tracing and HSA asynchronous GPU activity, including kernel execution and memory copies.

#### Improvements to ROCm-SMI tool

Added support to show real-time PCIe bandwidth usage via the -b/--showbw flag.

#### DGEMM Optimizations

Improved DGEMM performance for large square and reduced matrix sizes (k=384, k=256).

### New features and enhancements in ROCm 2.0

#### Adds support for RHEL 7.6 / CentOS 7.6 and Ubuntu 18.04.1

#### Adds support for Vega 7nm, Polaris 12 GPUs

#### Introduces MIVisionX

* A comprehensive set of computer vision and machine intelligence libraries, utilities, and applications bundled into a single toolkit.

#### Improvements to ROCm Libraries

* rocSPARSE & hipSPARSE
* rocBLAS with improved DGEMM efficiency on Vega 7nm

#### MIOpen

* This release contains general bug fixes and an updated performance database
* Backward-weights performance for group convolutions has been improved
* RNNs now support fp16

#### TensorFlow multi-GPU and TensorFlow FP16 support for Vega 7nm

* TensorFlow v1.12 is enabled with fp16 support

#### PyTorch/Caffe2 with Vega 7nm Support

* fp16 support is enabled
* Several bug fixes and performance enhancements
* Known issue: breaking changes introduced in ROCm 2.0 are not yet addressed upstream. In the meantime, please continue to use the ROCm fork at https://github.com/ROCmSoftwarePlatform/pytorch

#### Improvements to ROCProfiler tool

* Support for Vega 7nm

#### Support for hipStreamCreateWithPriority

* Creates a stream with the specified priority. Kernels enqueued on such a stream execute at a different priority than kernels enqueued on normal-priority streams; the priority can be higher or lower than that of normal-priority streams.

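A short sketch of how an application might use this API, querying the valid priority range and creating high- and low-priority streams (error handling omitted):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    // Query the valid priority range; numerically lower values mean higher priority.
    int leastPriority = 0, greatestPriority = 0;
    hipDeviceGetStreamPriorityRange(&leastPriority, &greatestPriority);
    std::printf("priority range: least=%d greatest=%d\n", leastPriority, greatestPriority);

    hipStream_t highPrio, lowPrio;
    hipStreamCreateWithPriority(&highPrio, hipStreamDefault, greatestPriority);
    hipStreamCreateWithPriority(&lowPrio, hipStreamDefault, leastPriority);

    // ... enqueue latency-sensitive work on highPrio, background work on lowPrio ...

    hipStreamDestroy(highPrio);
    hipStreamDestroy(lowPrio);
    return 0;
}
```
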
#### OpenCL 2.0 support

* ROCm 2.0 introduces full support for kernels written in the OpenCL 2.0 C language on certain devices and systems. Applications can detect this support by calling the `clGetDeviceInfo` query function with the `param_name` argument set to `CL_DEVICE_OPENCL_C_VERSION`. To use OpenCL 2.0 C language features, the application must include the option `-cl-std=CL2.0` in the options passed to the runtime API calls responsible for compiling or building device programs. The complete specification for the OpenCL 2.0 C language is available at https://www.khronos.org/registry/OpenCL/specs/opencl-2.0-openclc.pdf

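A minimal sketch of the detection step described above; the program-build step is indicated only in a comment, since creating a program from source is omitted for brevity:

```cpp
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    // Query the OpenCL C version supported by the device.
    char version[128] = {0};
    clGetDeviceInfo(device, CL_DEVICE_OPENCL_C_VERSION, sizeof(version), version, nullptr);
    std::printf("%s\n", version);

    // When compiling kernels that use OpenCL 2.0 C features, pass -cl-std=CL2.0, e.g.:
    // clBuildProgram(program, 1, &device, "-cl-std=CL2.0", nullptr, nullptr);
    return 0;
}
```
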
#### Improved Virtual Addressing (48 bit VA) management for Vega 10 and later GPUs

* Fixes Clang AddressSanitizer and potentially other 3rd-party memory debugging tools with ROCm
* Small performance improvement on workloads that do a lot of memory management
* Removes virtual address space limitations on systems with more VRAM than system memory

#### Kubernetes support

### New features and enhancements in ROCm 1.9.2

#### RDMA (MPI) support on Vega 7nm

* Support for ROCnRDMA based on Mellanox InfiniBand

#### Improvements to HCC

* Improved link-time optimization

#### Improvements to ROCProfiler tool

* General bug fixes and implemented versioning APIs

#### Critical bug fixes

### New features and enhancements in ROCm 1.9.1

#### Added DPM support to Vega 7nm

* The Dynamic Power Management feature is enabled on Vega 7nm.

#### Fix for 'ROCm profiling' that used to fail with a “Version mismatch between HSA runtime and libhsa-runtime-tools64.so.1” error

### New features and enhancements in ROCm 1.9.0

#### Preview for Vega 7nm

* Enables developer preview support for Vega 7nm

#### System Management Interface

* Adds support for the ROCm SMI (System Management Interface) library, which provides monitoring and management capabilities for AMD GPUs.

#### Improvements to HIP/HCC

* Support for gfx906
* Added deprecation warning for C++AMP. This will be the last version of HCC supporting C++AMP.
* Improved optimization for global address space pointers passed into a GPU kernel
* Fixed several race conditions in the HCC runtime
* Performance tuning for the unpinned copy engine
* Several codegen enhancement fixes in the compiler backend

#### Preview for rocprof Profiling Tool

Developer preview (alpha) of the profiling tool rocProfiler. It includes a command-line front-end, `rpl_run.sh`, which enables:

* Dumping public per-kernel performance counters/metrics and kernel timestamps
* An input file with a counters list and kernel-selection parameters
* Multiple counter groups and application runs
* Output of results in CSV format

The tool can be installed from the `rocprofiler-dev` package and is installed to `/opt/rocm/bin/rpl_run.sh`.

#### Preview for ROCr Debug Agent (rocr_debug_agent)

The ROCr Debug Agent is a library that can be loaded by the ROCm Platform Runtime to provide the following functionality:

* Prints the state of wavefronts that report a memory violation or execute an `s_trap 2` instruction.
* Allows SIGINT (`ctrl c`) or SIGTERM (`kill -15`) to print the wavefront state of aborted GPU dispatches.
* Enabled on Vega10 GPUs in ROCm 1.9.

The ROCm 1.9 release installs the ROCr Debug Agent library at `/opt/rocm/lib/librocr_debug_agent64.so`.

#### New distribution support

* Binary package support for Ubuntu 18.04

#### ROCm 1.9 is ABI compatible with KFD in upstream Linux kernels.

Upstream Linux kernels support the following GPUs in these releases:

* 4.17: Fiji, Polaris 10, Polaris 11
* 4.18: Fiji, Polaris 10, Polaris 11, Vega10

Some ROCm features are not available in the upstream KFD:

* More system memory available to ROCm applications
* Interoperability between graphics and compute
* RDMA
* IPC

To try ROCm with an upstream kernel, install ROCm as normal, but do not install the rock-dkms package. Also add a udev rule to control `/dev/kfd` permissions:

```
echo 'SUBSYSTEM=="kfd", KERNEL=="kfd", TAG+="uaccess", GROUP="video"' | sudo tee /etc/udev/rules.d/70-kfd.rules
```

### New features as of ROCm 1.8.3

* ROCm 1.8.3 is a minor update meant to fix compatibility issues on Ubuntu releases running kernel 4.15.0-33

### New features as of ROCm 1.8

#### DKMS driver installation

* Debian packages are provided for DKMS on Ubuntu
* RPM packages are provided for CentOS/RHEL 7.4 and 7.5
* See the [ROCT-Thunk-Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-1.8.x) and [ROCK-Kernel-Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-1.8.x) for additional documentation on driver setup

#### New distribution support

* Binary package support for Ubuntu 16.04 and 18.04
* Binary package support for CentOS 7.4 and 7.5
* Binary package support for RHEL 7.4 and 7.5

#### Improved OpenMPI via UCX support

* UCX support for OpenMPI
* ROCm RDMA

### New Features as of ROCm 1.7

#### DKMS driver installation

* New driver installation uses Dynamic Kernel Module Support (DKMS)
* Only the amdkfd and amdgpu kernel modules are installed to support AMD hardware
* Currently only Debian packages are provided for DKMS (no Fedora support available)
* See the [ROCT-Thunk-Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-1.7.x) and [ROCK-Kernel-Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-1.7.x) for additional documentation on driver setup

### New Features as of ROCm 1.5

#### Developer preview of the new OpenCL 1.2 compatible language runtime and compiler

* OpenCL 2.0 compatible kernel language support with an OpenCL 1.2 compatible runtime
* Supports offline ahead-of-time compilation today; during the beta phase, in-process/in-memory compilation will be added.

#### Binary Package support for Ubuntu 16.04

#### Binary Package support for Fedora 24 is not currently available

#### Dropping binary package support for Ubuntu 14.04, Fedora 23

#### IPC support