Compare commits

...

53 Commits

Author SHA1 Message Date
James Edwards
a5efe65ee6 Change extraction protocol to http. 2018-06-13 10:51:08 -05:00
Peng
36d82f83f1 Merge branch 'master' into roc-1.8.0 2018-05-12 09:03:17 -05:00
Peng
2d09dfa9ca Update README.md
Update CentOS instructions
2018-05-12 09:01:45 -05:00
Gregory Stoner
ae280c5745 Update README.md 2018-05-12 08:57:48 -05:00
Gregory Stoner
af228d3b64 Update README.md 2018-05-11 14:14:40 -07:00
Gregory Stoner
620a4af0b3 Merge pull request #410 from RadeonOpenCompute/roc-1.8.0
Roc 1.8.0
2018-05-11 16:10:56 -05:00
Peng
549042b40e Update README.md
Update install instructions for CentOS/RHEL 7.4, remove the instructions for "yum update".
2018-05-11 13:54:58 -05:00
Peng
a6e1b016fa Update README.md
Add recommendation to guard against updating to CentOS7.5 kernel.
2018-05-11 11:55:49 -05:00
Gregory Stoner
ca40c6ff09 Update README.md
Add kernel update instructions for CentOS/RHEL 7.4
2018-05-11 08:42:30 -07:00
James Edwards
9959f915b3 Merge pull request #409 from RadeonOpenCompute/roc-1.8.0
Roc 1.8.0
2018-05-10 10:50:36 -05:00
James Edwards
94ef8cd402 ROCm 1.8.0 updates 2018-05-10 10:44:37 -05:00
James Edwards
f8af328270 ROCm 1.8.0 updates 2018-05-10 10:35:57 -05:00
James Edwards
d8e77a4181 ROCm 1.8.0 updates 2018-05-09 12:46:51 -05:00
James Edwards
8b91b9c980 ROCm 1.8.0 updates 2018-05-09 12:44:46 -05:00
James Edwards
378cf1eb7d ROCm 1.8.0 updates 2018-05-09 12:43:17 -05:00
James Edwards
73bb1da071 ROCm 1.8.0 updates 2018-05-09 12:39:51 -05:00
James Edwards
cd4ea291e2 ROCm 1.8.0 updates 2018-05-09 12:35:53 -05:00
James Edwards
eeae755296 ROCm 1.8.0 updates 2018-05-09 12:26:58 -05:00
Gregory Stoner
9f8d733da1 Update README.md 2018-05-05 10:05:27 -05:00
James Edwards
389750df8c Merge pull request #396 from RadeonOpenCompute/roc-1.7.2
Update README for 1.7.2 release.
2018-04-26 09:55:24 -05:00
James Adrian Edwards
93301e03e2 Update README for 1.7.2 release. 2018-04-26 09:29:43 -05:00
Gregory Stoner
7f15331a67 Update README.md 2018-03-21 19:42:19 -05:00
James Edwards
3f4e60c4d0 Merge pull request #370 from RadeonOpenCompute/roc-1.7.1
Roc 1.7.1
2018-03-21 14:23:21 -05:00
James Edwards
f558960c7e Update README.md 2018-03-21 14:21:55 -05:00
James Edwards
c1e71f0dcc Update README.md
Add information on kernel upgrade.
2018-03-21 14:20:02 -05:00
James Edwards
08257cbca7 Merge pull request #358 from RadeonOpenCompute/roc-1.7.1
Roc 1.7.1
2018-03-11 20:00:17 -05:00
James Edwards
88d4832e84 Merge branch 'master' into roc-1.7.1 2018-03-11 19:57:16 -05:00
James Edwards
4a77b8ec63 Merge pull request #357 from RadeonOpenCompute/roc-1.7.0
Update README.md
2018-03-11 19:52:57 -05:00
James Edwards
47cb66122f Update README.md 2018-03-11 19:52:23 -05:00
James Edwards
cb75f9faeb Update README.md 2018-03-11 19:47:51 -05:00
James Adrian Edwards
8c1d89e69f ROCm 1.7.1 2018-03-11 19:44:33 -05:00
Gregory Stoner
1a315e093f Update README.md 2018-03-11 08:43:20 -06:00
Gregory Stoner
42e58efc65 Update README.md 2018-03-11 08:42:08 -06:00
Gregory Stoner
04628c0e85 Update README.md 2017-12-29 11:49:00 -06:00
Gregory Stoner
883df8b9f5 Update README.md 2017-12-23 09:48:17 -06:00
Gregory Stoner
d46888fe1e Update README.md 2017-12-23 09:36:46 -06:00
Gregory Stoner
5dd391eb49 Update README.md 2017-12-23 09:32:50 -06:00
Gregory Stoner
04534f7b52 Update README.md 2017-12-23 09:29:27 -06:00
Gregory Stoner
3e82c69b04 Update README.md 2017-12-19 21:41:32 -06:00
James Edwards
39c0ecbbda Merge pull request #272 from RadeonOpenCompute/roc-1.7.0
ROCm 1.7.0
2017-12-19 14:56:20 -06:00
James Adrian Edwards
95c6ddd586 Update ROCm 1.7 support information. 2017-12-19 14:53:48 -06:00
James Adrian Edwards
3b73215554 Update ROCm 1.7 support information. 2017-12-19 14:51:45 -06:00
James Adrian Edwards
39ebe697ba Update ROCm 1.7 support information. 2017-12-18 15:29:30 -06:00
James Adrian Edwards
b6c6392ee4 Update ROCm 1.7 support information. 2017-12-18 14:57:03 -06:00
James Adrian Edwards
6e5b253e67 Update ROCm 1.7 support information. 2017-12-18 14:33:33 -06:00
Gregory Stoner
a93a3fe488 Update README.md 2017-10-15 10:24:02 -05:00
Gregory Stoner
dfff4e0f40 Merge pull request #155 from RadeonOpenCompute/notice-incompatibility-15-16
Add warning to inform users about incompatibilities between 1.5 and 1.6
2017-09-02 07:31:52 -05:00
Wen-Heng (Jack) Chung
8bd9db4e8e Add warning to inform users about incompatibilities between 1.5 and 1.6 2017-07-07 11:30:29 -05:00
Gregory Stoner
bb158e9d7f Update README.md 2017-06-30 09:57:32 -05:00
Gregory Stoner
3387f5b9d8 Update README.md 2017-06-30 09:54:07 -05:00
Gregory Stoner
42ed737183 Update README.md 2017-06-30 09:53:29 -05:00
Gregory Stoner
eab9718cb8 Update README.md 2017-06-30 09:53:04 -05:00
James Edwards
4c7d3cdd4c Merge pull request #137 from RadeonOpenCompute/roc-1.6.0
ROCm 1.6 updates.
2017-06-30 00:43:22 -05:00
2 changed files with 248 additions and 169 deletions

README.md

@@ -2,140 +2,132 @@
The ROCm Platform brings a rich foundation to advanced computing by seamlessly
integrating the CPU and GPU with the goal of solving real-world problems.
On April 25th, 2016, we delivered ROCm 1.0 built around three pillars:
1) Open Heterogeneous Computing Platform (Linux Driver and Runtime Stack),
optimized for HPC & Ultra-scale class computing;
2) Heterogeneous C and C++ Single Source Compiler, to approach computation
holistically, on a system level, rather than as a discrete GPU artifact;
3) HIP, acknowledging the need for freedom of choice when it comes to platforms
and APIs for GPU computing.
Using our knowledge of the HSA Standards and, more importantly, the HSA
Runtime, we have been able to successfully extend support to the dGPU with
critical features for accelerating NUMA computation. As a result, the ROCK
driver is composed of several components based on our efforts to develop the
Heterogeneous System Architecture for APUs, including the new AMDGPU driver,
the Kernel Fusion Driver (KFD), the HSA+ Runtime and an LLVM based compilation
stack which provides support for key languages. This support starts with AMD's
Fiji family of dGPUs, and has expanded to include the Hawaii dGPU family in ROCm
1.2. ROCm 1.3 further extends support to include the Polaris family of ASICs.
#### Supported CPUs
Starting with ROCm 1.8 we have relaxed the requirement for PCIe Atomics and the PCIe lane configuration for Vega10/GFX9 class GPUs. You can now use a CPU without PCIe Atomics and also run on Gen2 x1 lanes.
Currently our GFX8 GPUs (Fiji & Polaris family) still need PCIe Gen 3 and PCIe Atomics, but we are looking at relaxing this in a future release, once we have fully tested the firmware.
Current CPUs which support PCIe Gen3 + PCIe Atomics are:
* AMD Ryzen CPUs;
* AMD EPYC CPUs;
* Intel Xeon E7 V3 or newer CPUs;
* Intel Xeon E5 v3 or newer CPUs;
* Intel Xeon E3 v3 or newer CPUs;
* Intel Core i7 v4, Core i5 v4, Core i3 v4 or newer CPUs (i.e. Haswell family or newer).
For Fiji and Polaris GPUs the ROCm Platform leverages PCIe Atomics (Fetch ADD, Compare and SWAP,
Unconditional SWAP, AtomicsOpCompletion).
[PCIe Atomics](https://github.com/RadeonOpenCompute/RadeonOpenCompute.github.io/blob/master/ROCmPCIeFeatures.md)
are only supported on PCIe Gen3 Enabled CPUs and PCIe Gen3 Switches like
Broadcom PLX. When you install your GPUs make sure you install them in a fully
PCIe Gen3 x16 or x8, x4 or x1 slot attached either directly to the CPU's Root I/O
controller or via a PCIe switch directly attached to the CPU's Root I/O
controller. In our experience many issues stem from trying to use consumer
motherboards which provide Physical x16 Connectors that are electrically
connected as e.g. PCIe Gen2 x4 via the
Southbridge PCIe I/O controller. If your motherboard is in this category, please
do not use this connector for your GPUs if you intend to use ROCm.
Experimental support is available for our GFX7 GPUs (Radeon R9 290, R9 390, AMD FirePro S9150, S9170); note that they do not support or
take advantage of PCIe Atomics. However, we still recommend that you use a CPU
from the list provided above.
#### Not supported or very limited support under ROCm
###### Limited Support
* With ROCm 1.8 and Vega10, PCIe Gen 2 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core architecture and Pentium CPUs should work. However, we have done very limited testing, since our test farm has been catering to the CPUs listed above; this is where we need community support.
* Thunderbolt 1, 2 & 3 enabled breakout boxes with GPUs should now be able to work with ROCm (Thunderbolt 1 & 2 are PCIe Gen2 based). We have done no testing on this configuration and would need community support due to limited access to this type of equipment.
###### Not Supported
* We do not support AMD Carrizo and Kaveri APUs as hosts for compliant dGPU attachments.
* Thunderbolt 1 and 2 enabled GPUs are not officially supported by ROCm; Thunderbolt 1 & 2 are PCIe Gen2 based (see the limited-support note above).
* AMD Carrizo based APUs have limited support due to OEM & ODM choices when it comes to some key configuration parameters. On this point, we have observed that Carrizo laptops, AIOs and desktop systems showed inconsistencies in exposing and enabling the System BIOS parameters required by the ROCm stack. Before purchasing a Carrizo system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2. If this is the case, the final requirement is associated with correct CRAT table support - please inquire with the OEM about the latter.
* AMD Merlin/Falcon Embedded Systems are also not currently supported by the public repo.
* AMD Raven Ridge APUs are currently not supported.
### New Features to ROCm 1.8
#### DKMS driver installation
* Debian packages are provided for DKMS on Ubuntu
* RPM packages are provided for CentOS/RHEL 7.4 support
* See the [ROCT-Thunk-Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-1.8.x) and [ROCK-Kernel-Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-1.8.x) for additional documentation on driver setup
#### New Distribution Support
* Binary Package support for Ubuntu 16.04
* Binary Package support for CentOS 7.4
* Binary Package support for RHEL 7.4
#### IPC support
#### Improved OpenMPI via UCX support
* UCX support for OpenMPI
* ROCm RDMA
### The latest ROCm platform - ROCm 1.8
The latest tested version of the drivers, tools, libraries and source code for
the ROCm platform has been released and is available under the roc-1.8.x or rocm-1.8.x tag
of the following GitHub repositories:
* [ROCK-Kernel-Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-1.8.x)
* [ROCR-Runtime](https://github.com/RadeonOpenCompute/ROCR-Runtime/tree/roc-1.8.x)
* [ROCT-Thunk-Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-1.8.x)
* [ROC-smi](https://github.com/RadeonOpenCompute/ROC-smi/tree/roc-1.8.x)
* [HCC compiler](https://github.com/RadeonOpenCompute/hcc/tree/roc-1.8.x)
* [compiler-runtime](https://github.com/RadeonOpenCompute/compiler-rt/tree/roc-1.8.x)
* [HIP](https://github.com/GPUOpen-ProfessionalCompute-Tools/HIP/tree/roc-1.8.x)
* [HIP-Examples](https://github.com/GPUOpen-ProfessionalCompute-Tools/HIP-Examples/tree/roc-1.8.x)
* [atmi](https://github.com/RadeonOpenCompute/atmi/tree/0.3.7)
Additionally, the following mirror repositories that support the HCC compiler
are also available on GitHub, and frozen for the rocm-1.8.0 release:
* [llvm](https://github.com/RadeonOpenCompute/llvm/tree/roc-1.8.x)
* [lld](https://github.com/RadeonOpenCompute/lld/tree/roc-1.8.x)
* [hcc-clang-upgrade](https://github.com/RadeonOpenCompute/hcc-clang-upgrade/tree/roc-1.8.x)
* [ROCm-Device-Libs](https://github.com/RadeonOpenCompute/ROCm-Device-Libs/tree/roc-1.8.x)
#### Supported Operating Systems - New operating systems available
The ROCm 1.8 platform has been tested on the following operating systems:
* Ubuntu 16.04
* CentOS 7.4 (Using devtoolset-7 runtime support)
* RHEL 7.4 (Using devtoolset-7 runtime support)
### Installing from AMD ROCm repositories
AMD is hosting both Debian and RPM repositories for the ROCm 1.8 packages at this time.
The packages in the Debian repository have been signed to ensure package integrity.
Directions for each repository are given below:
#### Packaging server update
The packaging server has been changed from the old http://packages.amd.com
to the new repository site http://repo.radeon.com.
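If an earlier ROCm installation still references the old server, one way to switch it over is sketched below; this assumes your entry lives under /etc/apt/sources.list.d/ and that the path layout on repo.radeon.com matches the old server's.
```shell
# Hypothetical cleanup: repoint any apt source entries that still reference
# the retired packages.amd.com server at repo.radeon.com, then refresh apt.
grep -rl 'packages.amd.com' /etc/apt/sources.list.d/ | xargs -r sudo sed -i 's|packages.amd.com|repo.radeon.com|g'
sudo apt update
```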
#### Debian repository - apt-get
##### First make sure your system is up to date
```shell
sudo apt update
sudo apt dist-upgrade
sudo apt install libnuma-dev
sudo reboot
```
##### Optional: Upgrade to 4.13 kernel
Although not required, it is recommended as of ROCm 1.8.0 that the system's kernel be upgraded to the latest available 4.13 version:
```shell
sudo apt install linux-headers-4.13.0-32-generic linux-image-4.13.0-32-generic linux-image-extra-4.13.0-32-generic linux-signed-image-4.13.0-32-generic
sudo reboot
```
##### Add the ROCm apt repository
For Debian based systems, like Ubuntu, configure the Debian ROCm repository as
follows:
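The repository-setup commands themselves are not shown in this excerpt; a typical configuration, assuming the rocm.gpg.key file referenced below and the repo.radeon.com/rocm/apt/debian layout, looks roughly like this:
```shell
# Fetch and trust the ROCm package signing key (file name taken from the
# sha1sum note below), then register the apt repository.
wget -qO - http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
# The 'xenial main' suite/component is an assumption for Ubuntu 16.04.
echo 'deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list
```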
@@ -149,151 +141,235 @@ but has the following sha1sum hash:
f0d739836a9094004b0a39058d046349aacc1178 rocm.gpg.key
##### Install
Next, update the apt repository list and install the rocm package:
>**Warning**: Before proceeding, make sure to completely
>[uninstall any previous ROCm package](https://github.com/RadeonOpenCompute/ROCm#removing-pre-release-packages):
```shell
sudo apt update
sudo apt install rocm-dkms
```
###### Next set your permissions
With the move to upstreaming the KFD driver and the support of DKMS, all console (aka headless) users will need to be added to the "video" group by setting the Unix permissions.
Ensure that your user account is a member of the "video" group prior to using the ROCm driver. You can find which groups you are a member of with the following command:
```shell
groups
```
To add yourself to the video group you will need the sudo password and can use the following command:
```shell
sudo usermod -a -G video $LOGNAME
```
Once complete, reboot your system.
We recommend you [verify your installation](https://github.com/RadeonOpenCompute/ROCm#verify-installation) to make sure everything completed successfully.
Upon reboot, run:
```shell
rocminfo
clinfo
```
If you have an [install issue](https://rocm.github.io/install_issues.html), please read this FAQ.
#### To install ROCm with the Developer Preview of OpenCL
##### Start by following the instructions for installing ROCm from the Debian repository
At the step `sudo apt-get install rocm`, replace the command with:
```shell
sudo apt-get install rocm rocm-opencl
```
To install the development kit for OpenCL, which includes the OpenCL header files, execute this installation command:
```shell
sudo apt-get install rocm-opencl-dev
```
Then follow the remaining directions for the Debian repository.
###### Upon restart, test your OpenCL instance
Build and run the HelloWorld OpenCL sample application:
```shell
wget https://raw.githubusercontent.com/bgaster/opencl-book-samples/master/src/Chapter_2/HelloWorld/HelloWorld.cpp
wget https://raw.githubusercontent.com/bgaster/opencl-book-samples/master/src/Chapter_2/HelloWorld/HelloWorld.cl
```
Build it using the default ROCm OpenCL include and library locations:
```shell
g++ -I /opt/rocm/opencl/include/ ./HelloWorld.cpp -o HelloWorld -L/opt/rocm/opencl/lib/x86_64 -lOpenCL
```
Run it:
```shell
./HelloWorld
```
##### How to un-install from Ubuntu 16.04
To un-install the entire ROCm development package, execute:
```shell
sudo apt autoremove rocm-dkms
```
##### Installing development packages for cross compilation
It is often useful to develop and test on different systems. In this scenario,
you may prefer to avoid installing the ROCm Kernel to your development system.
In this case, install the development subset of packages:
```shell
sudo apt update
sudo apt install rocm-dev
```
>**Note:** To execute ROCm enabled apps you will require a system with the full
>ROCm driver stack installed
##### Removing pre-release packages
If you installed any of the ROCm pre-release packages from github, they will
need to be manually un-installed:
```shell
sudo apt purge libhsakmt
sudo apt purge compute-firmware
sudo apt purge $(dpkg -l | grep 'kfd\|rocm' | grep linux | grep -v libc | awk '{print $2}')
```
If possible, we would recommend starting with a fresh OS install.
### CentOS/RHEL 7 Support
Support for CentOS/RHEL 7 has been added in ROCm 1.8, but it requires a special
runtime environment provided by the RHEL Software Collections and additional
DKMS support packages to properly install and run.
#### Preparing RHEL 7 for installation
RHEL is a subscription-based operating system, and several external repositories
must be enabled to allow installation of the devtoolset-7 environment and the DKMS
support files. These steps are not required for CentOS.
First, the RHEL subscription must be enabled and attached to a pool ID. Please
see the Obtaining an RHEL image and license page for instructions on registering your
system with the RHEL subscription server and attaching to a pool ID.
Second, enable the following repositories:
```shell
sudo subscription-manager repos --enable rhel-7-server-rhscl-rpms
sudo subscription-manager repos --enable rhel-7-server-optional-rpms
sudo subscription-manager repos --enable rhel-7-server-extras-rpms
```
Third, enable additional repositories by downloading and installing the epel-release-latest-7 repository RPM:
```shell
sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
```
#### Install and setup Devtoolset-7
To setup the Devtoolset-7 environment, follow the instructions on this page:
https://www.softwarecollections.org/en/scls/rhscl/devtoolset-7/
Note that devtoolset-7 is a Software Collections package, and is not supported by AMD.
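On CentOS 7 the devtoolset-7 environment is typically pulled in from the Software Collections repository; a minimal sketch is shown below (RHEL users get the packages from the rhel-7-server-rhscl-rpms repository enabled above instead of centos-release-scl):
```shell
# Install the Software Collections release package and devtoolset-7 (CentOS 7).
sudo yum install centos-release-scl
sudo yum install devtoolset-7
# Start a shell with gcc 7 from devtoolset-7 on the PATH.
scl enable devtoolset-7 bash
```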
#### Prepare CentOS/RHEL 7.4 for DKMS Install
Installing kernel drivers on CentOS/RHEL 7.4 requires the dkms tool to be installed:
```shell
sudo yum install -y epel-release
sudo yum install -y dkms kernel-headers-`uname -r`
```
At this point the system is able to install ROCm using the DKMS drivers.
#### Installing ROCm on the system
At this point ROCm can be installed on the target system. Create a /etc/yum.repos.d/rocm.repo file with the following contents:
```shell
[ROCm]
name=ROCm
baseurl=http://repo.radeon.com/rocm/yum/rpm
enabled=1
gpgcheck=0
```
The repo's URL should point to the location of the repository's repodata database. Install ROCm components using these commands:
```shell
sudo yum install rocm-dkms
```
As with the Debian packages, it is possible to install rocm-dev individually.
The rock-dkms component should be installed and the /dev/kfd device should be available on reboot.
Ensure that your user account is a member of the "video" or "wheel" group prior to using the ROCm driver.
You can find which groups you are a member of with the following command:
```shell
groups
```
To add yourself to the video (or wheel) group you will need the sudo password and can use the
following command:
```shell
sudo usermod -a -G video $LOGNAME
```
The current release supports up to CentOS/RHEL 7.4. If for any reason the system needs to be updated to 7.5, do not update the kernel: add the `--exclude=kernel*` flag when running yum update. For example:
```shell
sudo yum update --exclude=kernel*
```
#### Compiling applications using hcc, hip, etc.
To compile applications or samples, please use gcc-7.2 provided by the devtoolset-7 environment.
To do this, compile all applications after running this command:
```shell
scl enable devtoolset-7 bash
```
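As an illustration, a HIP source file could be built inside that shell; square.cpp is a placeholder for your own program, and hipcc is assumed to be on the PATH (it ships under /opt/rocm).
```shell
# Enter the devtoolset-7 environment so gcc 7.2 backs the ROCm compilers.
scl enable devtoolset-7 bash
# Compile and run a HIP program (square.cpp is a placeholder file name).
hipcc square.cpp -o square
./square
```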
#### How to un-install ROCm from CentOS/RHEL 7.4
To un-install the entire rocm development package execute:
```shell
sudo yum autoremove rocm-dkms
```
#### Known Issues / Workarounds
##### If you plan to run with X11 - we are seeing X freezes under load
In ROCm 1.8.0 the kernel parameter noretry has been set to 1 to improve overall system performance. However, it has been shown to cause instability in the graphics driver shipped with Ubuntu. This is an ongoing issue and we are looking into it.
Until it is resolved, please apply this workaround by changing the noretry bit to 0:
```shell
echo 0 | sudo tee /sys/module/amdkfd/parameters/noretry
```
Files under /sys are not preserved across reboots, so you will need to repeat this after every boot.
One way to keep noretry=0 persistently is to change /etc/modprobe.d/amdkfd.conf so that it contains:
options amdkfd noretry=0
Once that is done, run sudo update-initramfs -u, then reboot and verify that /sys/module/amdkfd/parameters/noretry stays at 0.
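A minimal sketch of the persistent workaround described above:
```shell
# Keep noretry=0 across reboots via a modprobe configuration file.
echo "options amdkfd noretry=0" | sudo tee /etc/modprobe.d/amdkfd.conf
sudo update-initramfs -u
sudo reboot
# After the reboot, confirm the parameter kept its value.
cat /sys/module/amdkfd/parameters/noretry
```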
##### If you are using hipCaffe AlexNet training on ImageNet - we are seeing sporadic hangs of hipCaffe during training
#### Closed source components
The ROCm platform relies on a few closed source components to provide legacy
functionality like HSAIL finalization and debugging/profiling support. These
components are only available through the ROCm repositories, and will either be
@@ -303,12 +379,14 @@ made available in the following packages:
* hsa-ext-rocr-dev
### Getting ROCm source code
Modifications can be made to the ROCm 1.8 components by modifying the open
source code base and rebuilding the components. Source code can be cloned from
each of the GitHub repositories using git, or users can use the repo command
and the ROCm 1.8 manifest file to download the entire ROCm 1.8 source code.
#### Installing repo
Google's repo tool allows you to manage multiple git repositories
simultaneously. You can install it by executing the following commands:
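The command listing itself is not shown in this excerpt; the usual way to install the repo launcher into ~/bin (the chmod step appears in the context line below) looks like this:
```shell
# Download Google's repo launcher into ~/bin and make it executable.
mkdir -p ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
export PATH=~/bin:$PATH   # ensure ~/bin is on your PATH
```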
@@ -319,13 +397,14 @@ chmod a+x ~/bin/repo
Note: make sure ~/bin exists and it is part of your PATH
#### Cloning the code
```shell
mkdir ROCm && cd ROCm
repo init -u https://github.com/RadeonOpenCompute/ROCm.git -b roc-1.8.0
repo sync
```
This series of commands will pull all of the open source code associated with
the ROCm 1.8 release. Please ensure that ssh-keys are configured for the
target machine on GitHub for your GitHub ID.
* OpenCL Runtime and Compiler will be submitted to the Khronos Group, prior to


@@ -2,11 +2,11 @@
<manifest>
<remote name="roc-github"
fetch="ssh://git@github.com/RadeonOpenCompute/" />
fetch="http://git@github.com/RadeonOpenCompute/" />
<remote name="pctools-github"
fetch="ssh://git@github.com/GPUOpen-ProfessionalCompute-Tools/" />
fetch="http://git@github.com/GPUOpen-ProfessionalCompute-Tools/" />
<default revision="roc-1.6.x"
<default revision="roc-1.8.x"
remote="roc-github"
sync-j="4" />