Compare commits

...

94 Commits

Author SHA1 Message Date
nelsonc-amd
5a0d73e84f Update for rel 2.5 (#814)
* Updates for release 2.4 README.md

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Updates to version_history.md for release 2.4

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Organize README.md

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* README.md & version_history.md for rocm release 2.5

Fix numerous links, some syntax.
Add links for rocThrust project.

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Update README.md

* Update default.xml
2019-06-08 08:24:11 -07:00
zhang2amd
1ea6f22864 Update default.xml to release 2.5 (#813) 2019-06-08 03:15:31 -07:00
nelsonc-amd
f26b2d6af3 release 2.5 README.md and version_history.md (#812)
* Updates for release 2.4 README.md

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Updates to version_history.md for release 2.4

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Organize README.md

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* README.md & version_history.md for rocm release 2.5

Fix numerous links, some syntax.
Add links for rocThrust project.

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Update README.md
2019-06-07 14:33:12 -07:00
Aakash Sudhanwa
ce5e75fb6a Update default.xml (#789)
ROCm 2.4 tag update
2019-05-07 15:47:04 -07:00
nelsonc-amd
a84e61094a Release 2.4 text files (#788)
* Updates for release 2.4 README.md

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Updates to version_history.md for release 2.4

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Organize README.md

Signed-off-by: Cole Nelson <cole.nelson@amd.com>
2019-05-07 15:46:38 -07:00
Konstantin Zhuravlyov
1fc51c91ab Fix llvm and lld 2.3 tags for opencl and hcc (#770) 2019-04-15 12:47:30 -07:00
nelsonc-amd
0ad89931cc Add top level install link for 2.3 (#767)
* README.md: Update links to ROCm 2.3 repos

Change-Id: I49c6ca76deb61afeaa90fa7e4af6f94bf3914768
Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* README.md: Update more links for release 2.3

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Text changes and URL updates for 2.3 release

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Text changes for 2.3 release

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Test updates for release 2.3

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Update links for release 2.3

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Top level install link for 2.3 release

Signed-off-by: Cole Nelson <cole.nelson@amd.com>
2019-04-14 07:53:45 -07:00
nelsonc-amd
f73d5b629c Update kernel URL to 2.3.0 (#765)
* README.md: Update links to ROCm 2.3 repos

Change-Id: I49c6ca76deb61afeaa90fa7e4af6f94bf3914768
Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* README.md: Update more links for release 2.3

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Text changes and URL updates for 2.3 release

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Text changes for 2.3 release

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Test updates for release 2.3

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Update links for release 2.3

Signed-off-by: Cole Nelson <cole.nelson@amd.com>
2019-04-13 12:22:57 -07:00
Aakash Sudhanwa
a13f8c94a4 ROCm 2.3: Updated relevant tags and git hashes
Signed-off-by: Aakash Sudhanwa <Aakash.Sudhanwa@amd.com>
2019-04-12 17:37:39 -07:00
nelsonc-amd
38c8ed8136 Updates for release 2.3 README.md (#762)
* README.md: Update links to ROCm 2.3 repos

Change-Id: I49c6ca76deb61afeaa90fa7e4af6f94bf3914768
Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* README.md: Update more links for release 2.3

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Text changes and URL updates for 2.3 release

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Text changes for 2.3 release

Signed-off-by: Cole Nelson <cole.nelson@amd.com>

* Test updates for release 2.3

Signed-off-by: Cole Nelson <cole.nelson@amd.com>
2019-04-12 16:34:46 -07:00
Joseph Greathouse
beebbf0c1d Update links to ROCm 2.2 code repos 2019-03-15 18:47:08 -05:00
Joseph Greathouse
7063676c93 Update clang-ocl tag 2019-03-15 18:45:37 -05:00
Joseph Greathouse
ce3a851464 Merge pull request #736 from RadeonOpenCompute/kentrussell-patch-1
Update README.md
2019-03-14 09:32:07 -05:00
Kent Russell
edcdd2a947 Update README.md
Clarify kernel/OS support for 2.2
2019-03-14 05:59:19 -04:00
Icarus Sparry
444ec79edb Fix 773 - duplicated part of manifest (#734)
Signed-off-by: Icarus Sparry <icarus.sparry@amd.com>
2019-03-13 12:47:20 -07:00
Icarus Sparry
e7fd4042f4 Update for 2.2 (#730)
Signed-off-by: Icarus Sparry <icarus.sparry@amd.com>
2019-03-12 18:01:54 -07:00
Joseph Greathouse
48f21b22e6 Add roctracer to manifest 2019-02-06 12:45:35 -06:00
zhang2amd
a8989c7ed0 Update tag for release 2.1 (#698) 2019-02-06 09:01:25 +05:30
ChristinaElder
159a69a8ab Update README.md (#697)
* Update README.md

* Update version_history.md
2019-02-06 08:57:50 +05:30
Joseph Greathouse
7cf79c8dc4 Fix repo manifest for MIVisionX 2019-01-29 12:09:35 -06:00
Joseph Greathouse
a3340581a7 Update rocm-cmake version to correct 2.0 release 2019-01-02 11:51:33 -06:00
Joseph Greathouse
d07fdb05d7 Adding missing packages to list and manifest 2018-12-31 10:56:00 -06:00
Joseph Greathouse
c86686a2e7 Spacing updates to make RST generation easier 2018-12-24 15:31:20 -06:00
Joseph Greathouse
04e2bba9ed Update README.md 2018-12-24 15:19:06 -06:00
Joseph Greathouse
83f9bd1272 Update README.md 2018-12-24 15:11:10 -06:00
Joseph Greathouse
4a5104c882 Update README.md 2018-12-24 12:23:39 -06:00
Joseph Greathouse
40b4be64e6 Merge pull request #643 from jlgreathouse/master
README and repo overhaul for the ROCm 2.0 release
2018-12-24 12:07:05 -06:00
Joseph Greathouse
bd790cb4d2 README and repo overhaul for the ROCm 2.0 release
Community feedback has pointed out a number of confusing,
outdated, or missing sections in our ROCm README file. For example,
we do not describe what our ROCm package structure is, or how the
packages and meta-packages fit together. This can make it confusing
for users who do not want to just install rocm-dkms and move on.

Our repo manifest (default.xml) is severely out of date. It is
missing almost all of the current ROCm projects, and it always
pulls from the main development branch. This means we do not have
a pinned manifest that allows you to pull the code from a
particular ROCm release. Manifest updated, and the section of the
README discussing it is majorly overhauled (including links for
information/scripts about building the code after downloading it).

Rather than continually grow our version history in the main
README page, this splits off old version information into its own
file.
2018-12-21 09:26:31 -06:00
Icarus Sparry
35a5c80b55 ROCm 2.0 (#639)
* Update README.md

* Update README.md

Signed-off-by: Icarus Sparry <icarus.sparry@amd.com>
2018-12-19 15:43:23 -08:00
Icarus Sparry
3d7812a48c Update README.md (#620) 2018-11-19 16:42:34 -08:00
Joseph Greathouse
23beff10b8 Update README.md 2018-10-23 23:57:30 -05:00
Peng
ddc0e1f2b4 Updated doc on OS support (#569)
* Updated doc on OS support

This commit specifies the ROCm recommended Ubuntu kernel versions.
It also advises users to remove ROCm packages before upgrading the CentOS version, since known DKMS limitations can cause the upgrade to fail if the rock-dkms modules are installed.
2018-10-05 14:33:39 -07:00
ChristinaElder
dd00206633 Update README.md
Update README.md

    Update the version #
2018-10-05 13:58:34 -07:00
Joseph Greathouse
575c4c9a63 Remove outdated README information for ROCm 1.9
No longer need to set HSA_ENABLE_SDMA=0 for non-PCIe-atomic operation. No longer include the HSAIL finalizer in the closed source tools repo.
2018-09-19 11:33:43 -05:00
Joseph Greathouse
e4efd1c9f6 Update information about rocProfiler 2018-09-19 10:37:34 -05:00
Icarus Sparry
0b95356d45 Fix Typo 2018-09-16 16:17:05 -07:00
Joseph Greathouse
d4ccc7729e Updating README file to better describe hardware support (#533)
* ROCm 1.9 changes

* Update README.md

Update ROCr Debug Agent description

Revised wording for upstream KFD per request

* Update installation instruction

Added instructions to uninstall the previous version of ROCm before installing the new version. Added Ubuntu 18.04 as a supported distribution.

* Update README to better list supported hardware.

* Add a table of contents to the README
2018-09-15 07:48:08 -07:00
Icarus Sparry
239b9ee77e Update to Roc 1.9.0, this time at the right time. (#532)
* ROCm 1.9 changes

Update ROCr Debug Agent description

* Update README.md

Added instructions to uninstall the previous version of ROCm before installing the new version. Added Ubuntu 18.04 as a supported distribution.
2018-09-14 15:20:09 -07:00
Icarus Sparry
c6763c13c4 Revert "Update to Roc 1.9.0 (#530)" (#531)
This reverts commit dd33dc7742.
2018-09-14 15:13:45 -07:00
Icarus Sparry
dd33dc7742 Update to Roc 1.9.0 (#530)
* ROCm 1.9 changes

* Update installation instruction

Added instructions to uninstall the previous version of ROCm before installing the new version. Added Ubuntu 18.04 as a supported distribution.

* Update README.md
2018-09-14 15:08:44 -07:00
Joseph Greathouse
030455ef47 Commands for installing OpenCL-only ROCm on CentOS 2018-08-30 15:11:02 -05:00
Joseph Greathouse
f02621bc8a Update README for ROCm 1.8.3
Update README for ROCm 1.8.3
2018-08-30 08:59:55 -05:00
Joseph Greathouse
2a4c16ee51 Minor update to README to differentiate 1.8.3 and 1.8.2 2018-08-30 08:58:56 -05:00
Joseph Greathouse
4dec756b2d Update README for 1.8.3 release 2018-08-30 08:57:22 -05:00
JC Baratault
77cc24a773 Update README.md 2018-08-28 15:13:30 +02:00
Joseph Greathouse
7bfed202a0 Update to the "OpenCL-only install" directions. 2018-08-23 19:42:56 -05:00
Joseph Greathouse
980738d46e Merge pull request #367 from settle/master
Update README.md, delaying use of sudo until necessary
2018-08-18 23:48:42 -05:00
Joseph Greathouse
d7c97882e1 Update README based on some outstanding issues
Two user issues pointed out some confusing text in our current README. In particular: updated the text describing when to disable SDMA engines on Vega 10 (on any system that does not have PCIe atomic support), and show some directions for how to do an OpenCL-focused installation on Ubuntu.
2018-08-18 23:47:31 -05:00
Peng
c4eb6cd4be Merge pull request #489 from RadeonOpenCompute/zhang2amd-key_update-1
Update README.md
2018-08-03 09:19:38 -05:00
zhang2amd
3e4bda88e7 Update README.md
Added a comment to update the key if signature verification failed. Also updated key file hash since the key has extended expiration date.
2018-08-02 15:36:03 -07:00
James Edwards
8512273309 Merge pull request #460 from RadeonOpenCompute/roc-1.8.2
ROCm 1.8.2 Updates.
2018-07-19 11:59:12 -05:00
James Edwards
a1bb81003b ROCm 1.8.2 Updates. 2018-07-19 11:44:43 -05:00
James Edwards
fbcc9809de Update README.md 2018-07-06 14:42:37 -05:00
James Edwards
b834187cae Merge pull request #434 from RadeonOpenCompute/roc-1.8.1
Update README.md
2018-06-14 11:51:06 -05:00
James Edwards
260cb81efd Update README.md 2018-06-14 11:49:58 -05:00
James Edwards
62c11b68f7 Merge pull request #432 from RadeonOpenCompute/roc-1.8.1
Change extraction protocol to http.
2018-06-13 10:45:48 -05:00
James Edwards
c7ea2df946 Change extraction protocol to http. 2018-06-13 10:42:26 -05:00
Gregory Stoner
3b442534f8 Update README.md 2018-06-06 07:53:40 -05:00
Gregory Stoner
84a097d55d Update README.md 2018-06-06 07:53:06 -05:00
Gregory Stoner
7cc5548ea3 Update README.md 2018-06-06 07:48:28 -05:00
James Edwards
cf7c039199 Merge pull request #428 from RadeonOpenCompute/roc-1.8.1
Roc 1.8.1
2018-06-05 09:38:55 -05:00
James Edwards
648af6f3f8 ROCm 1.8.1 updates 2018-06-04 15:00:29 -05:00
James Edwards
4f8d605b12 ROCm 1.8.1 updates 2018-06-04 14:47:22 -05:00
Gregory Stoner
783eec4643 Update README.md 2018-05-17 10:59:51 -05:00
JC Baratault
dfdb135954 Update README.md 2018-05-15 09:58:41 +02:00
JC Baratault
2b19ff91a6 Update README.md 2018-05-15 09:49:32 +02:00
JC Baratault
ac4bd217aa Update README.md 2018-05-15 08:02:12 +02:00
JC Baratault
7c07ce6e89 Update README.md 2018-05-15 08:00:53 +02:00
Gregory Stoner
17dce4c250 Update README.md 2018-05-12 10:15:57 -05:00
Gregory Stoner
5a113b7799 Merge pull request #411 from RadeonOpenCompute/roc-1.8.0
Update README.md
2018-05-12 08:00:19 -07:00
Peng
36d82f83f1 Merge branch 'master' into roc-1.8.0 2018-05-12 09:03:17 -05:00
Peng
2d09dfa9ca Update README.md
Update CentOS instructions
2018-05-12 09:01:45 -05:00
Gregory Stoner
ae280c5745 Update README.md 2018-05-12 08:57:48 -05:00
Gregory Stoner
af228d3b64 Update README.md 2018-05-11 14:14:40 -07:00
Gregory Stoner
620a4af0b3 Merge pull request #410 from RadeonOpenCompute/roc-1.8.0
Roc 1.8.0
2018-05-11 16:10:56 -05:00
Peng
549042b40e Update README.md
Update install instructions for CentOS/RHEL 7.4, remove the instructions for "yum update".
2018-05-11 13:54:58 -05:00
Peng
a6e1b016fa Update README.md
Add recommendation to guard against updating to CentOS7.5 kernel.
2018-05-11 11:55:49 -05:00
Gregory Stoner
ca40c6ff09 Update README.md
Add kernel update instructions for CentOS/RHEL 7.4
2018-05-11 08:42:30 -07:00
James Edwards
9959f915b3 Merge pull request #409 from RadeonOpenCompute/roc-1.8.0
Roc 1.8.0
2018-05-10 10:50:36 -05:00
James Edwards
94ef8cd402 ROCm 1.8.0 updates 2018-05-10 10:44:37 -05:00
James Edwards
f8af328270 ROCm 1.8.0 updates 2018-05-10 10:35:57 -05:00
James Edwards
d8e77a4181 ROCm 1.8.0 updates 2018-05-09 12:46:51 -05:00
James Edwards
8b91b9c980 ROCm 1.8.0 updates 2018-05-09 12:44:46 -05:00
James Edwards
378cf1eb7d ROCm 1.8.0 updates 2018-05-09 12:43:17 -05:00
James Edwards
73bb1da071 ROCm 1.8.0 updates 2018-05-09 12:39:51 -05:00
James Edwards
cd4ea291e2 ROCm 1.8.0 updates 2018-05-09 12:35:53 -05:00
James Edwards
eeae755296 ROCm 1.8.0 updates 2018-05-09 12:26:58 -05:00
Gregory Stoner
9f8d733da1 Update README.md 2018-05-05 10:05:27 -05:00
James Edwards
389750df8c Merge pull request #396 from RadeonOpenCompute/roc-1.7.2
Update README for 1.7.2 release.
2018-04-26 09:55:24 -05:00
James Adrian Edwards
93301e03e2 Update README for 1.7.2 release. 2018-04-26 09:29:43 -05:00
Gregory Stoner
7f15331a67 Update README.md 2018-03-21 19:42:19 -05:00
James Edwards
3f4e60c4d0 Merge pull request #370 from RadeonOpenCompute/roc-1.7.1
Roc 1.7.1
2018-03-21 14:23:21 -05:00
Sean Settle
cf622281f4 Delay use of sudo until necessary 2018-03-18 15:38:01 -07:00
James Edwards
08257cbca7 Merge pull request #358 from RadeonOpenCompute/roc-1.7.1
Roc 1.7.1
2018-03-11 20:00:17 -05:00
3 changed files with 943 additions and 196 deletions

README.md (794 changes)

@@ -1,112 +1,346 @@
## Are You Ready to ROCK?
The ROCm Platform brings a rich foundation to advanced computing by seamlessly integrating the CPU and GPU with the goal of solving real-world problems.
This software enables the high-performance operation of AMD GPUs for computation oriented tasks in the Linux operating system.
### Current ROCm Version: 2.5
- [New features and enhancements in ROCm 2.5](#new-features-and-enhancements-in-rocm-25)
- [The latest ROCm platform - ROCm 2.5](#the-latest-rocm-platform-rocm-25)
- [Hardware Support](#hardware-support)
* [Supported GPUs](#supported-gpus)
* [Supported CPUs](#supported-cpus)
* [Not supported or limited support under ROCm](#not-supported-or-limited-support-under-rocm)
- [Supported Operating Systems](#supported-operating-systems-new-operating-systems-available)
* [ROCm support in upstream Linux kernels](#rocm-support-in-upstream-linux-kernels)
- [Installing from AMD ROCm repositories](#installing-from-amd-rocm-repositories)
* [ROCm Binary Package Structure](#rocm-binary-package-structure)
* [Ubuntu Support - installing from a Debian repository](#ubuntu-support-installing-from-a-debian-repository)
* [CentOS/RHEL 7 (7.4, 7.5, 7.6) Support](#centosrhel-7-74-75-76-support)
- [Known issues / workarounds](#known-issues-workarounds)
- [Closed source components](#closed-source-components)
- [Getting ROCm source code](#getting-rocm-source-code)
* [Installing repo](#installing-repo)
* [Downloading the ROCm source code](#downloading-the-rocm-source-code)
* [Building the ROCm source code](#building-the-rocm-source-code)
- [Deprecation Notice](#deprecation-notice-hcc)
- [Final notes](#final-notes)
### New features and enhancements in ROCm 2.5
#### UCX 1.6 support
Support for UCX version 1.6 has been added.
#### BFloat16 GEMM in rocBLAS/Tensile
Software support for BFloat16 on Radeon Instinct MI50, MI60 has been added. This includes:
- Mixed precision GEMM with BFloat16 input and output matrices, and all arithmetic in IEEE32 bit
- Input matrix values are converted from BFloat16 to IEEE32 bit, all arithmetic and accumulation is IEEE32 bit. Output values are rounded from IEEE32 bit to BFloat16
- Accuracy should be correct to 0.5 ULP
#### ROCm-SMI enhancements
CLI support for querying the memory size, driver version, and firmware version has been added to ROCm-smi.
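As a rough usage sketch, these queries might look like the lines below; the exact flag names are assumptions based on later rocm-smi releases, so check `rocm-smi --help` for the options shipped with your installation.
```shell
# Flag names are assumptions; confirm with `/opt/rocm/bin/rocm-smi --help`
/opt/rocm/bin/rocm-smi --showmeminfo vram    # memory size
/opt/rocm/bin/rocm-smi --showdriverversion   # driver version
/opt/rocm/bin/rocm-smi --showfwinfo          # firmware version
```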
#### [PyTorch] multi-GPU functional support (CPU aggregation/Data Parallel)
Multi-GPU support is enabled in PyTorch using the DataParallel path for versions of PyTorch built using the 06c8aa7a3bbd91cda2fd6255ec82aad21fa1c0d5 commit or later.
#### rocSparse optimization on Radeon Instinct MI50 and MI60
This release includes performance optimizations for csrsv routines in the rocSparse library.
#### [Thrust] Preview
Preview release for early adopters. rocThrust is a port of thrust, a parallel algorithm library. Thrust has been ported to the HIP/ROCm platform to use the rocPRIM library. The HIP ported library works on HIP/ROCm platforms.
Note: This library will replace https://github.com/ROCmSoftwarePlatform/thrust in a future release. The package for rocThrust (this library) currently conflicts with version 2.5 package of thrust. They should not be installed together.
#### Support overlapping kernel execution in same HIP stream
HIP API has been enhanced to allow independent kernels to run in parallel on the same stream.
#### AMD Infinity Fabric&#x2122; Link enablement
The ability to connect four Radeon Instinct MI60 or Radeon Instinct MI50 boards in one hive via AMD Infinity Fabric™ Link GPU interconnect technology has been added.
Features and enhancements introduced in previous versions of ROCm can be found in [version_history.md](version_history.md)
### The latest ROCm platform - ROCm 2.5
The latest supported version of the drivers, tools, libraries and source code for the ROCm platform have been released and are available from the following GitHub repositories:
* ROCm Core Components
- [ROCk Kernel Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-2.5.0)
- [ROCr Runtime](https://github.com/RadeonOpenCompute/ROCR-Runtime/tree/roc-2.5.0)
- [ROCt Thunk Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-2.5.0)
* ROCm Support Software
- [ROCm SMI](https://github.com/RadeonOpenCompute/ROC-smi/tree/roc-2.5.0)
- [ROCm cmake](https://github.com/RadeonOpenCompute/rocm-cmake/tree/ac45c6e2)
- [rocminfo](https://github.com/RadeonOpenCompute/rocminfo/tree/d34b716a)
- [ROCm Bandwidth Test](https://github.com/RadeonOpenCompute/rocm_bandwidth_test/tree/roc-2.5.0)
* ROCm Development Tools
- [HCC compiler](https://github.com/RadeonOpenCompute/hcc/tree/roc-2.5.0)
- [HIP](https://github.com/ROCm-Developer-Tools/HIP/tree/roc-2.5.0)
- [ROCm Device Libraries](https://github.com/RadeonOpenCompute/ROCm-Device-Libs/tree/roc-2.5.0)
- ROCm OpenCL, which is created from the following components:
- [ROCm OpenCL Runtime](http://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/tree/roc-2.5.0)
- [ROCm OpenCL Driver](http://github.com/RadeonOpenCompute/ROCm-OpenCL-Driver/tree/roc-2.5.0)
- The ROCm OpenCL compiler, which is created from the following components:
- [ROCm LLVM OCL](http://github.com/RadeonOpenCompute/llvm/tree/roc-ocl-2.5.0)
- [ROCm LLVM HCC](http://github.com/RadeonOpenCompute/llvm/tree/roc-hcc-2.5.0)
- [ROCm Clang](http://github.com/RadeonOpenCompute/clang/tree/roc-2.5.0)
- [ROCm lld OCL](http://github.com/RadeonOpenCompute/lld/tree/roc-ocl-2.5.0)
- [ROCm lld HCC](http://github.com/RadeonOpenCompute/lld/tree/roc-hcc-2.5.0)
- [ROCm Device Libraries](https://github.com/RadeonOpenCompute/ROCm-Device-Libs/tree/roc-2.5.0)
- [ROCM Clang-OCL Kernel Compiler](https://github.com/RadeonOpenCompute/clang-ocl/tree/roc-2.5.0)
- [Asynchronous Task and Memory Interface (ATMI)](https://github.com/RadeonOpenCompute/atmi/tree/4dd14ad8)
- [ROCr Debug Agent](https://github.com/ROCm-Developer-Tools/rocr_debug_agent/tree/roc-2.5.0)
- [ROCm Code Object Manager](https://github.com/RadeonOpenCompute/ROCm-CompilerSupport/tree/roc-2.5.0)
- [ROC Profiler](https://github.com/ROCm-Developer-Tools/rocprofiler/tree/roc-2.5.x)
- [ROC Tracer](https://github.com/ROCm-Developer-Tools/roctracer/tree/roc-2.5.x)
- [Radeon Compute Profiler](https://github.com/GPUOpen-Tools/RCP/tree/3a49405)
- Example Applications:
- [HCC Examples](https://github.com/ROCm-Developer-Tools/HCC-Example-Application/tree/ffd65333)
- [HIP Examples](https://github.com/ROCm-Developer-Tools/HIP-Examples/tree/roc-2.5.0)
* ROCm Libraries
- [rocBLAS](https://github.com/ROCmSoftwarePlatform/rocBLAS/tree/master-rocm-2.5)
- [hipBLAS](https://github.com/ROCmSoftwarePlatform/hipBLAS/tree/master-rocm-2.5)
- [rocFFT](https://github.com/ROCmSoftwarePlatform/rocFFT/tree/master-rocm-2.5)
- [rocRAND](https://github.com/ROCmSoftwarePlatform/rocRAND/tree/master-rocm-2.5)
- [rocSPARSE](https://github.com/ROCmSoftwarePlatform/rocSPARSE/tree/master-rocm-2.5)
- [hipSPARSE](https://github.com/ROCmSoftwarePlatform/hipSPARSE/tree/master-rocm-2.5)
- [rocALUTION](https://github.com/ROCmSoftwarePlatform/rocALUTION/tree/master-rocm-2.5)
- [MIOpenGEMM](https://github.com/ROCmSoftwarePlatform/MIOpenGEMM/tree/9547fb9e)
- [MIOpen](https://github.com/ROCmSoftwarePlatform/MIOpen/tree/9fb1826d)
- [HIP Thrust](https://github.com/ROCmSoftwarePlatform/Thrust/tree/master-rocm.2.5)
- [rocThrust](https://github.com/ROCmSoftwarePlatform/rocThrust/tree/master-rocm-2.5)
- [ROCm SMI Lib](https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/roc-2.5.0)
- [RCCL](https://github.com/ROCmSoftwarePlatform/rccl/tree/master-rocm-2.5)
- [MIVisionX](https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX/tree/f05574dc)
- [CUB HIP](https://github.com/ROCmSoftwarePlatform/cub-hip/tree/hip_port_1.7.4)
### Hardware Support
ROCm is focused on using AMD GPUs to accelerate computational tasks such as machine learning, engineering workloads, and scientific computing.
In order to focus our development efforts on these domains of interest, ROCm supports a targeted set of hardware configurations which are detailed further in this section.
#### Supported GPUs
Because the ROCm Platform has a focus on particular computational domains, we offer official support for a selection of AMD GPUs that are designed to offer good performance and price in these domains.
ROCm officially supports AMD GPUs that use the following chips:
* GFX8 GPUs
* "Fiji" chips, such as on the AMD Radeon R9 Fury X and Radeon Instinct MI8
* "Polaris 10" chips, such as on the AMD Radeon RX 580 and Radeon Instinct MI6
* "Polaris 11" chips, such as on the AMD Radeon RX 570 and Radeon Pro WX 4100
* "Polaris 12" chips, such as on the AMD Radeon RX 550 and Radeon RX 540
* GFX9 GPUs
* "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
* "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII
ROCm is a collection of software ranging from drivers and runtimes to libraries and developer tools.
Some of this software may work with more GPUs than the "officially supported" list above, though AMD does not make any official claims of support for these devices on the ROCm software platform.
The following list of GPUs are enabled in the ROCm software, though full support is not guaranteed:
* GFX7 GPUs
* "Hawaii" chips, such as the AMD Radeon R9 390X and FirePro W9100
As described in the next section, GFX8 GPUs require PCI Express 3.0 (PCIe 3.0) with support for PCIe atomics. This requires both CPU and motherboard support. GFX9 GPUs, by default, also require PCIe 3.0 with support for PCIe atomics, but they can operate in most cases without this capability.
At this time, the integrated GPUs in AMD APUs are not officially supported targets for ROCm.
As described [below](#limited-support), "Carrizo", "Bristol Ridge", and "Raven Ridge" APUs are enabled in our upstream drivers and the ROCm OpenCL runtime.
However, they are not enabled in our HCC or HIP runtimes, and may not work due to motherboard or OEM hardware limitations.
As such, they are not yet officially supported targets for ROCm.
For a more detailed list of hardware support, please see [the following documentation](https://rocm.github.io/hardware.html).
#### Supported CPUs
As described above, GFX8 GPUs require PCIe 3.0 with PCIe atomics in order to run ROCm.
In particular, the CPU and every active PCIe point between the CPU and GPU require support for PCIe 3.0 and PCIe atomics.
The CPU root must indicate PCIe AtomicOp Completion capabilities and any intermediate switch must indicate PCIe AtomicOp Routing capabilities.
Current CPUs which support PCIe Gen3 + PCIe Atomics are:
* AMD Ryzen CPUs;
* The CPUs in AMD Ryzen APUs;
* AMD Ryzen Threadripper CPUs
* AMD EPYC CPUs;
* Intel Xeon E7 v3 or newer CPUs;
* Intel Xeon E5 v3 or newer CPUs;
* Intel Xeon E3 v3 or newer CPUs;
* Intel Core i7 v4, Core i5 v4, Core i3 v4 or newer CPUs (i.e. Haswell family or newer).
* Some Ivy Bridge-E systems
Beginning with ROCm 1.8, GFX9 GPUs (such as Vega 10) no longer require PCIe atomics.
We have similarly opened up more options for number of PCIe lanes.
GFX9 GPUs can now be run on CPUs without PCIe atomics and on older PCIe generations, such as PCIe 2.0.
This is not supported on GPUs below GFX9, e.g. GFX8 cards in the Fiji and Polaris families.
If you are using any PCIe switches in your system, please note that PCIe Atomics are only supported on some switches, such as Broadcom PLX.
When you install your GPUs, make sure you install them in a PCIe 3.0 x16, x8, x4, or x1 slot attached either directly to the CPU's Root I/O controller or via a PCIe switch directly attached to the CPU's Root I/O controller.
In our experience, many issues stem from trying to use consumer motherboards which provide physical x16 connectors that are electrically connected as e.g. PCIe 2.0 x4, PCIe slots connected via the Southbridge PCIe I/O controller, or PCIe slots connected through a PCIe switch that does
not support PCIe atomics.
If you attempt to run ROCm on a system without proper PCIe atomic support, you may see an error in the kernel log (`dmesg`):
```
kfd: skipped device 1002:7300, PCI rejects atomics
```
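To check whether this has happened on your system, you can filter the kernel log for kfd messages after boot:
```shell
dmesg | grep -i kfd
```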
Experimental support for our Hawaii (GFX7) GPUs (Radeon R9 290, R9 390, FirePro W9100, S9150, S9170)
does not require or take advantage of PCIe Atomics. However, we still recommend that you use a CPU
from the list provided above for compatibility purposes.
#### Not supported or limited support under ROCm
##### Limited support
* ROCm 2.5.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
* Thunderbolt 1, 2, and 3 enabled breakout boxes should now be able to work with ROCm. Thunderbolt 1 and 2 are PCIe 2.0 based, and thus are only supported with GPUs that do not require PCIe 3.0 atomics (e.g. Vega 10). However, we have done no testing on this configuration and would need community support due to limited access to this type of equipment.
* AMD "Carrizo" and "Bristol Ridge" APUs are enabled to run OpenCL, but do not yet support HCC, HIP, or our libraries built on top of these compilers and runtimes.
* As of ROCm 2.1, "Carrizo" and "Bristol Ridge" require the use of upstream kernel drivers.
* In addition, various "Carrizo" and "Bristol Ridge" platforms may not work due to OEM and ODM choices when it comes to key configuration parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
* Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.
* AMD "Raven Ridge" APUs are enabled to run OpenCL, but do not yet support HCC, HIP, or our libraries built on top of these compilers and runtimes.
* As of ROCm 2.1, "Raven Ridge" requires the use of upstream kernel drivers.
* In addition, various "Raven Ridge" platforms may not work due to OEM and ODM choices when it comes to key configuration parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
* Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.
##### Not supported
* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported in ROCm 2.5.x
* We do not support GFX8-class GPUs (Fiji, Polaris, etc.) on CPUs that do not have PCIe 3.0 with PCIe atomics.
* As such, we do not support AMD Carrizo and Kaveri APUs as hosts for such GPUs.
* GFX8 GPUs attached via Thunderbolt 1 or 2 are not supported on ROCm, since Thunderbolt 1 and 2 are based on PCIe 2.0.
### Supported Operating Systems - New operating systems available
The ROCm 2.5.x platform supports the following operating systems:
* Ubuntu 16.04.x, 18.04.1 and 18.04.2 (Version 16.04.3 and newer or kernels 4.13-4.15)
* CentOS 7.4, 7.5, and 7.6 (Using devtoolset-7 runtime support)
* RHEL 7.4, 7.5, and 7.6 (Using devtoolset-7 runtime support)
#### ROCm support in upstream Linux kernels
As of ROCm 1.9.0, the ROCm user-level software is compatible with the AMD drivers in certain upstream Linux kernels.
As such, users have the option of either using the ROCK kernel driver that is part of AMD's ROCm repositories or using the upstream driver and only installing ROCm user-level utilities from AMD's ROCm repositories.
These releases of the upstream Linux kernel support the following GPUs in ROCm:
* 4.17: Fiji, Polaris 10, Polaris 11
* 4.18: Fiji, Polaris 10, Polaris 11, Vega10
* 4.20: Fiji, Polaris 10, Polaris 11, Vega10, Vega 7nm
The upstream driver may be useful for running ROCm software on systems that are not compatible with the kernel driver available in AMD's repositories.
For users that have the option of using either AMD's or the upstreamed driver, there are various tradeoffs to take into consideration:
| | Using AMD's `rock-dkms` package | Using the upstream kernel driver |
| ---- | ------------------------------------------------------------| ----- |
| Pros | More GPU features, and they are enabled earlier | Includes the latest Linux kernel features |
| | Tested by AMD on supported distributions | May work on other distributions and with custom kernels |
| | Supported GPUs enabled regardless of kernel version | |
| | Includes the latest GPU firmware | |
| Cons | May not work on all Linux distributions or versions | Features and hardware support varies depending on kernel version |
| | Not currently supported on kernels newer than 4.18 | Limits GPU's usage of system memory to 3/8 of system memory |
| | | IPC and RDMA capabilities are not yet enabled |
| | | Not tested by AMD to the same level as `rock-dkms` package |
| | | Does not include most up-to-date firmware |
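If you choose the upstream driver, a quick sanity check (a sketch assuming your distribution ships kernel 4.17 or newer with amdgpu enabled) is to confirm the running kernel version and that the KFD device node is present:
```shell
# Upstream ROCm support begins with kernel 4.17 (see the list above)
uname -r
# The amdgpu module should be loaded and the KFD device node available
lsmod | grep amdgpu
ls -l /dev/kfd
```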
### Installing from AMD ROCm repositories
AMD hosts both [Debian](http://repo.radeon.com/rocm/apt/debian/) and [RPM](http://repo.radeon.com/rocm/yum/rpm/) repositories for the ROCm 2.5.x packages at this time.
The packages in the Debian repository have been signed to ensure package integrity.
Directions for each repository are given below:
#### ROCm Binary Package Structure
ROCm is a collection of software ranging from drivers and runtimes to libraries and developer tools.
In AMD's package distributions, these software projects are provided as separate packages.
This allows users to install only the packages they need, if they do not wish to install all of ROCm.
These packages will install most of the ROCm software into `/opt/rocm/` by default.
The packages for each of the major ROCm components are:
* ROCm Core Components
- ROCk Kernel Driver: `rock-dkms`
- ROCr Runtime: `hsa-rocr-dev`, `hsa-ext-rocr-dev`
- ROCt Thunk Interface: `hsakmt-roct`, `hsakmt-roct-dev`
* ROCm Support Software
- ROCm SMI: `rocm-smi`
- ROCm cmake: `rocm-cmake`
- rocminfo: `rocminfo`
- ROCm Bandwidth Test: `rocm_bandwidth_test`
* ROCm Development Tools
- HCC compiler: `hcc`
- HIP: `hip_base`, `hip_doc`, `hip_hcc`, `hip_samples`
- ROCm Device Libraries: `rocm-device-libs`
- ROCm OpenCL: `rocm-opencl`, `rocm-opencl-devel` (on RHEL/CentOS), `rocm-opencl-dev` (on Ubuntu)
- ROCM Clang-OCL Kernel Compiler: `rocm-clang-ocl`
- Asynchronous Task and Memory Interface (ATMI): `atmi`
- ROCr Debug Agent: `rocr_debug_agent`
- ROCm Code Object Manager: `comgr`
- ROC Profiler: `rocprofiler-dev`
- ROC Tracer: `roctracer-dev`
- Radeon Compute Profiler: `rocm-profiler`
* ROCm Libraries
- rocBLAS: `rocblas`
- hipBLAS: `hipblas`
- rocFFT: `rocfft`
- rocRAND: `rocrand`
- rocSPARSE: `rocsparse`
- hipSPARSE: `hipsparse`
- rocALUTION: `rocalution`
- MIOpenGEMM: `miopengemm`
- MIOpen: `MIOpen-HIP` (for the HIP version), `MIOpen-OpenCL` (for the OpenCL version)
- HIP Thrust: `thrust` (on RHEL/CentOS), `hip-thrust` (on Ubuntu)
- ROCm SMI Lib: `rocm_smi_lib64`
- RCCL: `rccl`
- MIVisionX: `mivisionx`
- CUB HIP: `cub-hip`
To make it easier to install ROCm, the AMD binary repos provide a number of meta-packages that will automatically install multiple other packages.
For example, `rocm-dkms` is the primary meta-package that is used to install most of the base technology needed for ROCm to operate.
It will install the `rock-dkms` kernel driver, and another meta-package (`rocm-dev`) which installs most of the user-land ROCm core components, support software, and development tools.
The `rocm-utils` meta-package will install useful utilities that, while not required for ROCm to operate, may still be beneficial to have.
Finally, the `rocm-libs` meta-package will install some (but not all) of the libraries that are part of ROCm.
The chain of software installed by these meta-packages is illustrated below
```
rocm-dkms
|-- rock-dkms
\-- rocm-dev
|--hsa-rocr-dev
|--hsa-ext-rocr-dev
|--rocm-device-libs
|--rocm-utils
|-- rocminfo
|-- rocm-cmake
\-- rocm-clang-ocl # This will cause OpenCL to be installed
|--hcc
|--hip_base
|--hip_doc
|--hip_hcc
|--hip_samples
|--rocm-smi
|--hsakmt-roct
|--hsakmt-roct-dev
|--hsa-amd-aqlprofile
|--comgr
\--rocr_debug_agent
rocm-libs
|-- rocblas
|-- rocfft
|-- rocrand
\-- hipblas
```
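If you want to inspect this dependency chain on your own system (assuming a Debian-based distribution with the ROCm repository configured), `apt-cache` can print it directly:
```shell
# Show the packages pulled in by the rocm-dkms and rocm-dev meta-packages
apt-cache depends rocm-dkms
apt-cache depends rocm-dev
```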
These meta-packages are not required but may be useful to make it easier to install ROCm on most systems.
Some users may want to skip certain packages. For instance, a user that wants to use the upstream kernel drivers (rather than those supplied by AMD) may want to skip the `rocm-dkms` and `rock-dkms` packages, and instead directly install `rocm-dev`.
Similarly, a user that only wants to install OpenCL support instead of HCC and HIP may want to skip the `rocm-dkms` and `rocm-dev` packages.
Instead, they could directly install `rock-dkms`, `rocm-opencl`, and `rocm-opencl-dev` and their dependencies.
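On an Ubuntu system, for example, that OpenCL-only selection could be installed directly; the same approach is shown in the OpenCL-only installation section below:
```shell
sudo apt update
sudo apt install rock-dkms rocm-opencl rocm-opencl-dev
```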
#### Ubuntu Support - installing from a Debian repository
The following directions show how to install ROCm on supported Debian-based systems such as Ubuntu 18.04.
These directions may not work as written on unsupported Debian-based distributions.
For example, newer versions of Ubuntu may not be compatible with the `rock-dkms` kernel driver.
As such, users may want to skip the `rocm-dkms` and `rock-dkms` packages, as described [above](#rocm-binary-package-structure), and instead [use the upstream kernel driver](#using-debian-based-rocm-with-upstream-kernel-drivers).
##### First make sure your system is up to date
@@ -116,49 +350,37 @@ sudo apt dist-upgrade
sudo apt install libnuma-dev
sudo reboot
```
##### Add the ROCm apt repository
For Debian-based systems like Ubuntu, configure the Debian ROCm repository as
follows:
```shell
wget -qO - http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
echo 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list
```
The gpg key might change, so it may need to be updated when installing a new release.
If key signature verification fails during an update, re-add the key from the
ROCm apt repository. The current rocm.gpg.key is not available in a standard key ring
distribution, but has the following sha1sum hash:
`f7f8147431c75e505c58a6f3a3548510869357a6 rocm.gpg.key`
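To confirm that the key you downloaded matches this hash before adding it, you can check it with `sha1sum`:
```shell
wget -q http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key
sha1sum rocm.gpg.key
# Expected output: f7f8147431c75e505c58a6f3a3548510869357a6  rocm.gpg.key
```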
##### Install
>**Warning**: Before proceeding, make sure to completely
>[uninstall any previous ROCm package](https://github.com/RadeonOpenCompute/ROCm#removing-pre-release-packages):
Next, update the apt repository list and install the `rocm-dkms` meta-package:
```shell
sudo apt update
sudo apt install rocm-dkms
```
##### Next set your permissions
Users will need to be in the `video` group in order to have access to the GPU.
As such, you should ensure that your user account is a member of the `video` group prior to using ROCm.
You can find which groups you are a member of with the following command:
```shell
groups
@@ -168,49 +390,57 @@ To add yourself to the video group you will need the sudo password and can use t
```shell
sudo usermod -a -G video $LOGNAME
```
You may want to ensure that any future users you add to your system are put into the "video" group by default. To do that, you can run the following commands:
```shell
echo 'ADD_EXTRA_GROUPS=1' | sudo tee -a /etc/adduser.conf
echo 'EXTRA_GROUPS=video' | sudo tee -a /etc/adduser.conf
```
Once complete, reboot your system.
We recommend you [verify your installation](https://github.com/RadeonOpenCompute/ROCm#verify-installation) to make sure everything completed successfully.
##### Test basic ROCm installation
#### To install ROCm with Developer Preview of OpenCL
##### Start by following the instructions above for installing ROCm from the Debian repository:
No additional steps are required. The rocm-opencl package is now installed with rocm-dkms as a dependency. This includes the development package, rocm-opencl-dev.
###### Upon restart, to test your OpenCL instance
Build and run the HelloWorld OpenCL sample app.
Download the HelloWorld sample:
```
wget https://raw.githubusercontent.com/bgaster/opencl-book-samples/master/src/Chapter_2/HelloWorld/HelloWorld.cpp
wget https://raw.githubusercontent.com/bgaster/opencl-book-samples/master/src/Chapter_2/HelloWorld/HelloWorld.cl
```
Build it using the default ROCm OpenCL include and library locations:
```
g++ -I /opt/rocm/opencl/include/ ./HelloWorld.cpp -o HelloWorld -L/opt/rocm/opencl/lib/x86_64 -lOpenCL
```
Run it:
```
./HelloWorld
```
After rebooting the system run the following commands to verify that the ROCm installation was successful. If you see your GPUs listed by both of these commands, you should be ready to go!
```shell
/opt/rocm/bin/rocminfo
/opt/rocm/opencl/bin/x86_64/clinfo
```
Note that, to make running ROCm programs easier, you may wish to put the ROCm binaries in your PATH.
```shell
echo 'export PATH=$PATH:/opt/rocm/bin:/opt/rocm/profiler/bin:/opt/rocm/opencl/bin/x86_64' | sudo tee -a /etc/profile.d/rocm.sh
```
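The file written to `/etc/profile.d/` is picked up by new login shells; to apply it to your current shell you can source it directly:
```shell
source /etc/profile.d/rocm.sh
```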
If you have an [install issue](https://rocm.github.io/install_issues.html) please read this FAQ.
##### Performing an OpenCL-only Installation of ROCm
Some users may want to install a subset of the full ROCm installation.
In particular, if you are trying to install on a system with a limited amount of storage space, or which will only run a small collection of known applications, you may want to install only the packages that are required to run OpenCL applications.
To do that, you can run the following installation command **instead** of the command to install `rocm-dkms`.
```shell
sudo apt-get install dkms rock-dkms rocm-opencl-dev
```
##### How to uninstall from Ubuntu 16.04 or Ubuntu 18.04
To uninstall the ROCm packages installed in the above directions, you can execute:
```shell
sudo apt autoremove rocm-dkms rocm-dev rocm-utils
```
##### Installing development packages for cross compilation
It is often useful to develop and test on different systems.
For example, some development or build systems may not have an AMD GPU installed.
In this scenario, you may prefer to avoid installing the ROCK kernel driver to your development system.
In this case, install the development subset of packages:
@@ -218,60 +448,266 @@ In this case, install the development subset of packages:
sudo apt update
sudo apt install rocm-dev
```
>**Note:** To execute ROCm enabled apps you will require a system with the full
>ROCm driver stack installed
##### Using Debian-based ROCm with upstream kernel drivers
As described in [the above section about upstream Linux kernel support](#rocm-support-in-upstream-linux-kernels), users may want to try installing ROCm user-level software without installing AMD's custom ROCK kernel driver.
Users who do want to use upstream kernels can run the following commands instead of installing `rocm-dkms`
```shell
sudo apt update
sudo apt install rocm-dev
echo 'SUBSYSTEM=="kfd", KERNEL=="kfd", TAG+="uaccess", GROUP="video"' | sudo tee /etc/udev/rules.d/70-kfd.rules
```
#### CentOS/RHEL 7 (7.4, 7.5, 7.6) Support
The following directions show how to install ROCm on supported RPM-based systems such as CentOS 7.6.
These directions may not work as written on unsupported RPM-based distributions.
For example, Fedora may work but may not be compatible with the `rock-dkms` kernel driver.
As such, users may want to skip the `rocm-dkms` and `rock-dkms` packages, as described [above](#rocm-binary-package-structure), and instead [use the upstream kernel driver](#using-rpm-based-rocm-with-upstream-kernel-drivers).
Support for CentOS/RHEL 7 was added in ROCm 1.8, but ROCm requires a special
runtime environment provided by the RHEL Software Collections and additional
dkms support packages to properly install and run.
##### Preparing RHEL 7 (7.4, 7.5, 7.6) for installation
RHEL is a subscription-based operating system, and you must enable several external
repositories to enable installation of the devtoolset-7 environment and the DKMS
support files. These steps are not required for CentOS.
First, the subscription for RHEL must be enabled and attached to a pool id. Please
see Obtaining an RHEL image and license page for instructions on registering your
system with the RHEL subscription server and attaching to a pool id.
Second, enable the following repositories:
```shell
sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms
sudo subscription-manager repos --enable rhel-7-server-optional-rpms
sudo subscription-manager repos --enable rhel-7-server-extras-rpms
```
Third, enable additional repositories by downloading and installing the epel-release-latest-7 repository RPM:
```shell
sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
```
##### Install and setup Devtoolset-7
To setup the Devtoolset-7 environment, follow the instructions on this page:
https://www.softwarecollections.org/en/scls/rhscl/devtoolset-7/
Note that devtoolset-7 is a Software Collections package, and it is not supported by AMD.
##### Prepare CentOS/RHEL (7.4, 7.5, 7.6) for DKMS Install
Installing kernel drivers on CentOS/RHEL 7.4/7.5/7.6 requires the dkms tool to be installed:
```shell
sudo yum install -y epel-release
sudo yum install -y dkms kernel-headers-`uname -r` kernel-devel-`uname -r`
```
##### Installing ROCm on the system
It is recommended to [remove previous ROCm installations](https://github.com/RadeonOpenCompute/ROCm#how-to-uninstall-rocm-from-centosrhel-74-75-and-76) before installing the latest version to ensure a smooth installation.
At this point ROCm can be installed on the target system. Create a /etc/yum.repos.d/rocm.repo file with the following contents:
```shell
[ROCm]
name=ROCm
baseurl=http://repo.radeon.com/rocm/yum/rpm
enabled=1
gpgcheck=0
```
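If you prefer to create this file from the command line rather than with an editor, a heredoc with `tee` writes the same contents:
```shell
sudo tee /etc/yum.repos.d/rocm.repo <<'EOF'
[ROCm]
name=ROCm
baseurl=http://repo.radeon.com/rocm/yum/rpm
enabled=1
gpgcheck=0
EOF
```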
The repository's URL should point to the location of the repository's repodata database. Install ROCm components using these commands:
```shell
sudo yum install rocm-dkms
```
The rock-dkms component should be installed and the `/dev/kfd` device should be available on reboot.
##### Set up permissions
Ensure that your user account is a member of the "video" or "wheel" group prior to using the ROCm driver.
You can find which groups you are a member of with the following command:
```shell
groups
```
To add yourself to the video (or wheel) group you will need the sudo password and can use the
following command:
```shell
sudo usermod -a -G video $LOGNAME
```
You may want to ensure that any future users you add to your system are put into the "video" group by default. To do that, you can run the following commands:
```shell
echo 'ADD_EXTRA_GROUPS=1' | sudo tee -a /etc/adduser.conf
echo 'EXTRA_GROUPS=video' | sudo tee -a /etc/adduser.conf
```
Current release supports CentOS/RHEL 7.4, 7.5, 7.6. If users want to update the OS version, they should completely remove ROCm packages before updating to the latest version of the OS, to avoid DKMS related issues.
Once complete, reboot your system.
###### Test basic ROCm installation
After rebooting the system run the following commands to verify that the ROCm installation was successful. If you see your GPUs listed by both of these commands, you should be ready to go!
```shell
/opt/rocm/bin/rocminfo
/opt/rocm/opencl/bin/x86_64/clinfo
```
Note that, to make running ROCm programs easier, you may wish to put the ROCm binaries in your PATH.
```shell
echo 'export PATH=$PATH:/opt/rocm/bin:/opt/rocm/profiler/bin:/opt/rocm/opencl/bin/x86_64' | sudo tee -a /etc/profile.d/rocm.sh
```
If you have an [install issue](https://rocm.github.io/install_issues.html) please read this FAQ.
###### Performing an OpenCL-only Installation of ROCm
Some users may want to install a subset of the full ROCm installation.
In particular, if you are trying to install on a system with a limited amount of storage space, or which will only run a small collection of known applications, you may want to install only the packages that are required to run OpenCL applications.
To do that, you can run the following installation command **instead** of the command to install `rocm-dkms`.
```shell
sudo yum install rock-dkms rocm-opencl-devel
```
##### Compiling applications using HCC, HIP, and other ROCm software
To compile applications or samples, please use gcc-7.2 provided by the devtoolset-7 environment.
To do this, compile all applications after running this command:
```shell
scl enable devtoolset-7 bash
```
##### How to uninstall ROCm from CentOS/RHEL 7.4, 7.5 and 7.6
To uninstall the ROCm packages installed by the above directions, you can execute:
```shell
sudo yum autoremove rocm-dkms rock-dkms
```
##### Installing development packages for cross compilation
It is often useful to develop and test on different systems.
For example, some development or build systems may not have an AMD GPU installed.
In this scenario, you may prefer to avoid installing the ROCK kernel driver to your development system.
In this case, install the development subset of packages:
```shell
sudo yum install rocm-dev
```
>**Note:** To execute ROCm enabled apps you will require a system with the full
>ROCm driver stack installed
##### Using ROCm with upstream kernel drivers
As described in [the above section about upstream Linux kernel support](#rocm-support-in-upstream-linux-kernels), users may want to try installing ROCm user-level software without installing AMD's custom ROCK kernel driver.
Users who do want to use upstream kernels can run the following commands instead of installing `rocm-dkms`
```shell
sudo yum install rocm-dev
echo 'SUBSYSTEM=="kfd", KERNEL=="kfd", TAG+="uaccess", GROUP="video"' | sudo tee /etc/udev/rules.d/70-kfd.rules
```
### Known issues / workarounds
#### TensorFlow
A memory access fault has been observed while running the SAGAN TensorFlow model on Polaris-based ASICs.
#### Radeon Instinct MI50, MI60
GPU reset is not currently supported on Radeon Instinct MI50 and MI60, either in single-card configurations or with boards connected via AMD Infinity Fabric&#x2122; Link GPU interconnect technology. The workaround is to reboot the system.
#### Gromacs
There are known failures in a few Gromacs tests on CentOS.
#### HIP sample
The HIP sample test fails at module_api_global with a segmentation fault.
### Closed source components
The ROCm platform relies on a few closed source components to provide functionality
such as HSA image support. These components are only available through the ROCm
repositories, and they will either be deprecated or become open source components in the
future. These components are made available in the following packages:
* hsa-ext-rocr-dev
### Getting ROCm source code
ROCm is built from open source software.
As such, it is possible to make modifications to the various components of ROCm by downloading the source code, making modifications to it, and rebuilding the components.
The source code for ROCm components can be cloned from each of the GitHub repositories using git.
In order to make it easier to download the correct versions of each of these tools, this ROCm repository contains a [repo](https://gerrit.googlesource.com/git-repo/) manifest file, [default.xml](default.xml).
Interested users can thus use this manifest file to download the source code for all of the ROCm software.
#### Installing repo
Google's repo tool allows you to manage multiple git repositories simultaneously.
You can install it by executing the following example commands:
```shell
mkdir -p ~/bin/
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
```
Note: make sure `~/bin/` exists and is part of your PATH. You can choose a different folder to install repo into if you desire; `~/bin/` is simply used as an example.
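If `~/bin/` is not already on your PATH, one way to add it for future shells is shown below; appending to `~/.bashrc` is only an example, so use whichever shell startup file you prefer:
```shell
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
source ~/.bashrc
```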
#### Downloading the ROCm source code
The following example shows how to use the `repo` binary downloaded above to download all of the ROCm source code.
If you chose a directory other than `~/bin/` to install `repo`, you should use that directory below.
```shell
mkdir -p ~/ROCm/
cd ~/ROCm/
~/bin/repo init -u https://github.com/RadeonOpenCompute/ROCm.git -b roc-2.5.0
repo sync
```
This will cause repo to download all of the open source code associated with this ROCm release.
You may want to ensure that you have ssh-keys configured on your machine for your GitHub ID.
#### Building the ROCm source code
Each ROCm component repository contains directions for building that component.
As such, you should go to the repository you are interested in building to find how to build it.
That said, AMD also offers [a project](https://github.com/RadeonOpenCompute/Experimental_ROC) that demonstrates how to download, build, package, and install ROCm software on various distributions.
The scripts here may be useful for anyone looking to build ROCm components.
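As a rough sketch only, many of the individual components follow a conventional CMake flow similar to the one below; the component chosen and the install prefix are illustrative assumptions, so always follow the build instructions in each component's own README:
```shell
# Example only: rocm_bandwidth_test is one of the CMake-based components in the manifest
cd rocm_bandwidth_test
mkdir -p build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/opt/rocm ..
make -j"$(nproc)"
sudo make install
```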
### Deprecation Notice - HCC
AMD is deprecating HCC to put more focus on HIP development and on other languages supporting heterogeneous compute. We will no longer develop any new features in HCC, and we will stop maintaining HCC after its final release, which is planned for June 2019. If your application was developed with the hc C++ API, we encourage you to transition it to other languages supported by AMD, such as HIP or OpenCL. HIP and the hc language share the same compiler technology, so many hc kernel language features (including inline assembly) are also available through the HIP compilation path.
### Final notes
* OpenCL Runtime and Compiler will be submitted to the Khronos Group for conformance testing prior to its final release.

default.xml
<manifest>
<remote name="roc-github"
fetch="ssh://git@github.com/RadeonOpenCompute/" />
<remote name="pctools-github"
fetch="ssh://git@github.com/GPUOpen-ProfessionalCompute-Tools/" />
fetch="http://github.com/RadeonOpenCompute/" />
<remote name="rocm-devtools"
fetch="https://github.com/ROCm-Developer-Tools/" />
<remote name="rocm-swplat"
fetch="https://github.com/ROCmSoftwarePlatform/" />
<remote name="gpuopen-libs"
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
<remote name="gpuopen-tools"
fetch="https://github.com/GPUOpen-Tools/" />
<default revision="roc-1.7.x"
<default revision="refs/tags/roc-2.5.0"
remote="roc-github"
sync-c="true"
sync-j="4" />
<project path="ROCK-Kernel-Driver" name="ROCK-Kernel-Driver" />
<project path="ROCT-Thunk-Interface" name="ROCT-Thunk-Interface" />
<project path="ROC-smi" name="ROC-smi" />
<project path="ROCR-Runtime" name="ROCR-Runtime" />
<project path="hcc" name="hcc" />
<project path="compiler-rt" name="compiler-rt" />
<project path="HIP" remote="pctools-github" name="HIP" />
<project path="HIP-Examples" remote="pctools-github" name="HIP-Examples" />
<project path="atmi" name="atmi" revision="master" />
<project path="llvm" name="llvm" />
<project path="lld" name="lld" />
<project path="hcc-clang-upgrade" name="hcc-clang-upgrade" />
<project path="ROCm-Device-Libs" name="ROCm-Device-Libs" />
<project name="ROCK-Kernel-Driver" />
<project name="ROCT-Thunk-Interface" />
<project name="ROCR-Runtime" />
<project name="ROC-smi" />
<project name="rocm-cmake" revision="ac45c6e269d1fd1dbd5dfc81cfe47a7452c96daf" />
<project name="rocminfo" revision="1bb0ccc731f772bb1a553e37b41d06eb0a684926" />
<project name="rocprofiler" remote="rocm-devtools" />
<project name="roctracer" remote="rocm-devtools" />
<!-- If you want to get the full OpenCL runtime, there is a separate repo
manifest that is more authoritative than the copy in this file. It can
be found at the following URL:
https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/blob/roc-2.0.0/opencl.xml -->
<remote name="KhronosGroup" fetch="https://github.com/KhronosGroup/" />
<project name="ROCm-OpenCL-Runtime" />
<project path="ROCm-OpenCL-Runtime/compiler/driver" name="ROCm-OpenCL-Driver"/>
<project path="ROCm-OpenCL-Runtime/compiler/llvm" name="llvm" revision="refs/tags/roc-hcc-2.5.0" />
<project path="ROCm-OpenCL-Runtime/compiler/llvm/tools/clang" name="clang" />
<project path="ROCm-OpenCL-Runtime/compiler/llvm/tools/lld" name="lld" revision="refs/tags/roc-hcc-2.5.0" />
<project path="ROCm-OpenCL-Runtime/library/amdgcn" name="ROCm-Device-Libs"/>
<project path="ROCm-OpenCL-Runtime/api/opencl/khronos/icd" name="OpenCL-ICD-Loader" remote="KhronosGroup" revision="261c1288aadd9dcc4637aca08332f603e6c13715" />
<project name="clang-ocl" />
<!-- HCC needs to be recursively synced to get it submodules -->
<project name="hcc" sync-s="true" />
<project name="HCC-Example-Application" remote="rocm-devtools" revision="ffd6533305e79eed667badd3c4cdb7879a1281b8" />
<project name="HIP" remote="rocm-devtools" />
<project name="HIP-Examples" remote="rocm-devtools" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm" path="llvm_amd-common" revision="refs/tags/roc-hcc-2.5.0" />
<project name="lld" path="llvm_amd-common/lld" revision="refs/tags/roc-hcc-2.5.0" />
<project name="clang" path="llvm_amd-common/clang" />
<project name="ROCm-Device-Libs" />
<project name="atmi" revision="4dd14ad8fafc64dc8f35b0646cfe84e3e36a3c64" />
<project name="ROCm-CompilerSupport" />
<project name="rocr_debug_agent" remote="rocm-devtools" />
<project name="rocm_bandwidth_test" />
<project name="RCP" remote="gpuopen-tools" revision="refs/tags/v5.6" />
<!-- ROCm Libraries -->
<project name="rocBLAS" remote="rocm-swplat" revision="refs/tags/rocm-2.5" />
<project name="hipBLAS" remote="rocm-swplat" revision="refs/tags/rocm-2.5" />
<project name="rocFFT" remote="rocm-swplat" revision="refs/tags/v0.9.3" />
<project name="rocRAND" remote="rocm-swplat" revision="refs/tags/2.5.0" />
<project name="rocSPARSE" remote="rocm-swplat" revision="refs/tags/rocm-2.5" />
<project name="hipSPARSE" remote="rocm-swplat" revision="refs/tags/rocm-2.5" />
<project name="rocALUTION" remote="rocm-swplat" revision="refs/tags/rocm-2.5" />
<project name="MIOpenGEMM" remote="rocm-swplat" revision="9547fb9e8499a5a9f16da83b1e6b749de82dd9fb" />
<project name="MIOpen" remote="rocm-swplat" revision="refs/tags/roc-2.5.0" />
<project name="Thrust" remote="rocm-swplat" revision="refs/tags/2.5" sync-s="true" />
<project name="rocm_smi_lib" />
<project name="rccl" remote="rocm-swplat" revision="refs/tags/2.5.0" />
<project name="MIVisionX" remote="gpuopen-libs" revision="refs/tags/1.2.0" />
</manifest>

version_history.md
## ROCm Version History
This file contains archived version history information for the [ROCm project](https://github.com/RadeonOpenCompute/ROCm)
### Current ROCm Version: 2.5
- [New features and enhancements in ROCm 2.4](#new-features-and-enhancements-in-rocm-24)
- [New features and enhancements in ROCm 2.3](#new-features-and-enhancements-in-rocm-23)
- [New features and enhancements in ROCm 2.2](#new-features-and-enhancements-in-rocm-22)
- [New features and enhancements in ROCm 2.1](#new-features-and-enhancements-in-rocm-21)
- [New features and enhancements in ROCm 2.0](#new-features-and-enhancements-in-rocm-20)
- [New features and enhancements in ROCm 1.9.2](#new-features-and-enhancements-in-rocm-192)
- [New features and enhancements in ROCm 1.9.1](#new-features-and-enhancements-in-rocm-191)
- [New features and enhancements in ROCm 1.9.0](#new-features-and-enhancements-in-rocm-190)
- [New features as of ROCm 1.8.3](#new-features-as-of-rocm-183)
- [New features as of ROCm 1.8](#new-features-as-of-rocm-18)
- [New Features as of ROCm 1.7](#new-features-as-of-rocm-17)
- [New Features as of ROCm 1.5](#new-features-as-of-rocm-15)
### New features and enhancements in ROCm 2.4
#### TensorFlow 2.0 support
ROCm 2.4 includes an enhanced compilation toolchain and a set of bug fixes to support TensorFlow 2.0 features natively.
#### AMD Infinity Fabric&#x2122; Link enablement
ROCm 2.4 adds support to connect two Radeon Instinct MI60 or Radeon Instinct MI50 boards via AMD Infinity Fabric&#x2122; Link GPU interconnect technology.
### New features and enhancements in ROCm 2.3
#### Mem usage per GPU
Per GPU memory usage is added to rocm-smi.
Display information regarding used/total bytes for VRAM, visible VRAM, and GTT via the `--showmeminfo` flag.
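For example, a minimal invocation might look like the following; the memory-type argument shown is an assumption and may vary by rocm-smi version:
```shell
# Show used/total VRAM bytes for each GPU
/opt/rocm/bin/rocm-smi --showmeminfo vram
```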
#### MIVisionX, v1.1 - ONNX
ONNX parser changes to adjust to new file formats
#### MIGraphX, v0.2
MIGraphX 0.2 supports the following new features:
* New Python API
* Support for additional ONNX operators and fixes that now enable a large set of Imagenet models
* Support for RNN Operators
* Support for multi-stream Execution
* [Experimental] Support for Tensorflow frozen protobuf files
See: [Getting-started:-using-the-new-features-of-MIGraphX-0.2](https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/wiki/Getting-started:-using-the-new-features-of-MIGraphX-0.2) for more details
#### MIOpen, v1.8 - 3d convolutions and int8
* This release contains full 3-D convolution support and int8 support for inference.
* Additionally, there are major updates in the performance database for major models including those found in Torchvision.
See: [MIOpen releases](https://github.com/ROCmSoftwarePlatform/MIOpen/releases)
#### Caffe2 - mGPU support
Multi-gpu support is enabled for Caffe2.
#### rocTracer library: ROCm tracing API for collecting runtime API and asynchronous GPU activity traces
HIP/HCC domains support is introduced in rocTracer library.
#### BLAS - Int8 GEMM performance, Int8 functional and performance
Introduces support and performance optimizations for Int8 GEMM, implements TRSV support, and includes improvements and optimizations with Tensile.
#### Prioritized L1/L2/L3 BLAS (functional)
Functional implementation of BLAS L1/L2/L3 functions
#### BLAS - Tensile optimization
Improvements and optimizations with Tensile
#### MIOpen Int8 support
Support for int8
### New features and enhancements in ROCm 2.2
#### rocSparse Optimization on Vega20
Cache usage optimizations for csrsv (sparse triangular solve), coomv
(SpMV in COO format) and ellmv (SpMV in ELL format) are available.
#### DGEMM and DTRSM Optimization
Improved DGEMM performance for reduced matrix sizes (k=384, k=256)
#### Caffe2
Added support for multi-GPU training
### New features and enhancements in ROCm 2.1
#### rocTracer v1.0 preview release: 'rocprof' HSA runtime tracing and statistics support
Supports HSA API tracing and HSA asynchronous GPU activity including kernels execution and memory copy
#### Improvements to the ROCm SMI tool
Added support to show real-time PCIe bandwidth usage via the -b/--showbw flag.
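For example:
```shell
# Display estimated PCIe bandwidth usage per GPU
/opt/rocm/bin/rocm-smi -b
```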
#### DGEMM optimizations
Improved DGEMM performance for large square and reduced matrix sizes (k=384, k=256)
### New features and enhancements in ROCm 2.0
#### Adds support for RHEL 7.6 / CentOS 7.6 and Ubuntu 18.04.1
#### Adds support for Vega 7nm, Polaris 12 GPUs
#### Introduces MIVisionX
* A comprehensive set of computer vision and machine intelligence libraries, utilities, and applications bundled into a single toolkit.
#### Improvements to ROCm Libraries
* rocSPARSE & hipSPARSE
* rocBLAS with improved DGEMM efficiency on Vega 7nm
#### MIOpen
* This release contains general bug fixes and an updated performance database
* Group convolutions backwards weights performance has been improved
* RNNs now support fp16
#### Tensorflow multi-gpu and Tensorflow FP16 support for Vega 7nm
* TensorFlow v1.12 is enabled with fp16 support
#### PyTorch/Caffe2 with Vega 7nm Support
* fp16 support is enabled
* Several bug fixes and performance enhancements
* Known issue: breaking changes are introduced in ROCm 2.0 which are not yet addressed upstream. Meanwhile, please continue to use the ROCm fork at https://github.com/ROCmSoftwarePlatform/pytorch
#### Improvements to ROCProfiler tool
* Support for Vega 7nm
#### Support for hipStreamCreateWithPriority
* Creates a stream with the specified priority. Kernels enqueued on it have a different execution priority compared to kernels enqueued on normal priority streams; the priority can be higher or lower than that of normal priority streams.
#### OpenCL 2.0 support
* ROCm 2.0 introduces full support for kernels written in the OpenCL 2.0 C language on certain devices and systems. Applications can detect this support by calling the “clGetDeviceInfo” query function with the “param_name” argument set to “CL_DEVICE_OPENCL_C_VERSION”. In order to make use of OpenCL 2.0 C language features, the application must include the option “-cl-std=CL2.0” in the options passed to the runtime API calls responsible for compiling or building device programs. The complete specification for the OpenCL 2.0 C language can be obtained using the following link: https://www.khronos.org/registry/OpenCL/specs/opencl-2.0-openclc.pdf
#### Improved Virtual Addressing (48 bit VA) management for Vega 10 and later GPUs
* Fixes Clang AddressSanitizer and potentially other 3rd-party memory debugging tools with ROCm
* Small performance improvement on workloads that do a lot of memory management
* Removes virtual address space limitations on systems with more VRAM than system memory
#### Kubernetes support
### New features and enhancements in ROCm 1.9.2
#### RDMA(MPI) support on Vega 7nm
* Support ROCnRDMA based on Mellanox InfiniBand
#### Improvements to HCC
* Improved link time optimization
#### Improvements to ROCProfiler tool
* General bug fixes and implemented versioning APIs
#### Critical bug fixes
### New features and enhancements in ROCm 1.9.1
#### Added DPM support to Vega 7nm
* Dynamic Power Management feature is enabled on Vega 7nm.
#### Fix for 'ROCm profiling' that used to fail with a “Version mismatch between HSA runtime and libhsa-runtime-tools64.so.1” error
### New features and enhancements in ROCm 1.9.0
#### Preview for Vega 7nm
* Enables developer preview support for Vega 7nm
#### System Management Interface
* Adds support for the ROCm SMI (System Management Interface) library, which provides monitoring and management capabilities for AMD GPUs.
#### Improvements to HIP/HCC
* Support for gfx906
* Added deprecation warning for C++AMP. This will be the last version of HCC supporting C++AMP.
* Improved optimization for global address space pointers passing into a GPU kernel
* Fixed several race conditions in the HCC runtime
* Performance tuning to the unpinned copy engine
* Several codegen enhancement fixes in the compiler backend
#### Preview for rocprof Profiling Tool
Developer preview (alpha) of profiling tool rocProfiler. It includes a command-line front-end, `rpl_run.sh`, which enables:
* Cmd-line tool for dumping public per kernel perf-counters/metrics and kernel timestamps
* Input file with counters list and kernels selecting parameters
* Multiple counters groups and app runs supported
* Output results in CSV format
The tool can be installed from the `rocprofiler-dev` package. It will be installed into: `/opt/rocm/bin/rpl_run.sh`
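A hedged example of wrapping an application with the script is shown below; the flag names (`-i` for the counter input file, `-o` for the CSV output) and file names are assumptions, so consult the rocprofiler documentation for the exact options:
```shell
# Hypothetical invocation: profile ./my_app using the counters listed in input.txt,
# writing per-kernel results to results.csv
/opt/rocm/bin/rpl_run.sh -i input.txt -o results.csv ./my_app
```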
#### Preview for rocr Debug Agent rocr_debug_agent
The ROCr Debug Agent is a library that can be loaded by ROCm Platform Runtime to provide the following functionality:
* Print the state of wavefronts that report a memory violation or execute a "s_trap 2" instruction.
* Allows SIGINT (`ctrl c`) or SIGTERM (`kill -15`) to print wavefront state of aborted GPU dispatches.
* It is enabled on Vega10 GPUs in ROCm 1.9.
The ROCm 1.9 release will install the ROCr Debug Agent library at `/opt/rocm/lib/librocr_debug_agent64.so`.
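One common way to load a tool library such as this into the ROCm runtime is via the HSA_TOOLS_LIB environment variable; treating that mechanism as an assumption, a sketch would look like:
```shell
# Assumed mechanism: ask the HSA runtime to load the debug agent before running the application
export HSA_TOOLS_LIB=/opt/rocm/lib/librocr_debug_agent64.so
./my_gpu_app    # placeholder application name
```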
#### New distribution support
* Binary package support for Ubuntu 18.04
#### ROCm 1.9 is ABI compatible with KFD in upstream Linux kernels.
Upstream Linux kernels support the following GPUs in these releases:
4.17: Fiji, Polaris 10, Polaris 11
4.18: Fiji, Polaris 10, Polaris 11, Vega10
Some ROCm features are not available in the upstream KFD:
* More system memory available to ROCm applications
* Interoperability between graphics and compute
* RDMA
* IPC
To try ROCm with an upstream kernel, install ROCm as normal, but do not install the rock-dkms package. Also add a udev rule to control `/dev/kfd` permissions:
```shell
echo 'SUBSYSTEM=="kfd", KERNEL=="kfd", TAG+="uaccess", GROUP="video"' | sudo tee /etc/udev/rules.d/70-kfd.rules
```
### New features as of ROCm 1.8.3
* ROCm 1.8.3 is a minor update meant to fix compatibility issues on Ubuntu releases running kernel 4.15.0-33
### New features as of ROCm 1.8
#### DKMS driver installation
* Debian packages are provided for DKMS on Ubuntu
* RPM packages are provided for CentOS/RHEL 7.4 and 7.5 support
* See the [ROCT-Thunk-Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-1.8.x) and [ROCK-Kernel-Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-1.8.x) for additional documentation on driver setup
#### New distribution support
* Binary package support for Ubuntu 16.04 and 18.04
* Binary package support for CentOS 7.4 and 7.5
* Binary package support for RHEL 7.4 and 7.5
#### Improved OpenMPI via UCX support
* UCX support for OpenMPI
* ROCm RDMA
### New Features as of ROCm 1.7
#### DKMS driver installation
* New driver installation uses Dynamic Kernel Module Support (DKMS)
* Only amdkfd and amdgpu kernel modules are installed to support AMD hardware
* Currently only Debian packages are provided for DKMS (no Fedora support available)
* See the [ROCT-Thunk-Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-1.7.x) and [ROCK-Kernel-Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-1.7.x) for additional documentation on driver setup
### New Features as of ROCm 1.5
#### Developer preview of the new OpenCL 1.2 compatible language runtime and compiler
* OpenCL 2.0 compatible kernel language support with an OpenCL 1.2 compatible runtime
* Supports offline ahead-of-time compilation today; during the Beta phase we will add in-process/in-memory compilation.
#### Binary Package support for Ubuntu 16.04
#### Binary Package support for Fedora 24 is not currently available
#### Dropping binary package support for Ubuntu 14.04, Fedora 23
#### IPC support