Compare commits


2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| James Edwards | 2be530ac1f | Change extraction protocol to http. | 2018-06-13 10:59:07 -05:00 |
| Gregory Stoner | c6a92a6250 | Update README.md | 2017-10-15 10:25:14 -05:00 |
2 changed files with 171 additions and 273 deletions

README.md

@@ -2,389 +2,290 @@
The ROCm Platform brings a rich foundation to advanced computing by seamlessly
integrating the CPU and GPU with the goal of solving real-world problems.
On April 25th, 2016, we delivered ROCm 1.0 built around three pillars:
1) Open Heterogeneous Computing Platform (Linux Driver and Runtime Stack),
optimized for HPC & Ultra-scale class computing;
2) Heterogeneous C and C++ Single Source Compiler, to approach computation
holistically, on a system level, rather than as a discrete GPU artifact;
3) HIP, acknowledging the need for freedom of choice when it comes to platforms
and APIs for GPU computing.
Using our knowledge of the HSA Standards and, more importantly, the HSA
Runtime, we have been able to successfully extend support to the dGPU with
critical features for accelerating NUMA computation. As a result, the ROCK
driver is composed of several components based on our efforts to develop the
Heterogeneous System Architecture for APUs, including the new AMDGPU driver,
the Kernel Fusion Driver (KFD), the HSA+ Runtime and an LLVM based compilation
stack which provides support for key languages. This support starts with AMD's
Fiji family of dGPUs, and has expanded to include the Hawaii dGPU family in ROCm
1.2. ROCm 1.3 further extends support to include the Polaris family of ASICs.
#### Supported CPUs
Starting with ROCm 1.8 we have relaxed both the PCIe Atomics requirement and the
PCIe lane requirement for Vega10/GFX9 class GPUs, so these GPUs can now be used
with CPUs that lack PCIe Atomics and on links as narrow as PCIe Gen2 x1.
Our GFX8 GPUs (Fiji & Polaris family) still need to use PCIe Gen3 and PCIe Atomics,
but we are looking at relaxing this in a future release, once we have fully tested firmware.
Current CPUs which support PCIe Gen3 + PCIe Atomics are:
* AMD Ryzen CPUs;
* AMD EPYC CPUs;
* Intel Xeon E7 v3 or newer CPUs;
* Intel Xeon E5 v3 or newer CPUs;
* Intel Xeon E3 v3 or newer CPUs;
* Intel Core i7 v4, Core i5 v4, Core i3 v4 or newer CPUs (i.e. Haswell family or newer).
For Fiji and Polaris GPUs the ROCm platform leverages PCIe Atomics (Fetch and Add,
Compare and Swap, Unconditional Swap, AtomicsOp Completion).
[PCIe atomics](https://github.com/RadeonOpenCompute/RadeonOpenCompute.github.io/blob/master/ROCmPCIeFeatures.md)
are only supported on PCIe Gen3 enabled CPUs and PCIe Gen3 switches like
Broadcom PLX. When you install your GPUs, make sure you install them in a full
PCIe Gen3 x16, x8, x4 or x1 slot attached either directly to the CPU's Root I/O
controller or via a PCIe switch directly attached to the CPU's Root I/O
controller. In our experience, many issues stem from trying to use consumer
motherboards which provide physical x16 connectors that are electrically
connected as, for example, PCIe Gen2 x4 via the Southbridge PCIe I/O controller.
If your motherboard is in this category, please do not use this connector for your
GPUs if you intend to use ROCm.
Experimental support exists for our GFX7 GPUs (Radeon R9 290, R9 390, AMD FirePro S9150 and S9170);
note that they do not support or take advantage of PCIe Atomics. However, we still
recommend that you use a CPU from the list provided above.
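As a quick sanity check (our suggestion, not a requirement stated in this README), you can confirm the negotiated PCIe link speed and width of your GPU slot with lspci; 1002 is the AMD PCI vendor ID, and Gen3 reports as 8GT/s:
```shell
# Show link capability and negotiated link status for AMD devices.
# A Gen3 x16 slot reports "LnkSta: Speed 8GT/s, Width x16".
sudo lspci -vvv -d 1002: | grep -E "LnkCap:|LnkSta:"
```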
#### Not supported or very limited support under ROCm
###### Limited support
* With ROCm 1.8 and Vega10, ROCm should support PCIe Gen2 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to the CPUs listed above. This is where we need community support.
* Thunderbolt 1, 2 and 3 enabled breakout boxes should now be able to work with ROCm (Thunderbolt 1 and 2 are PCIe Gen2 based), but we have done no testing on this configuration and would need community support due to limited access to this type of equipment.
* AMD Carrizo based APUs have limited support due to OEM and ODM choices when it comes to some key configuration parameters. In particular, we have observed that Carrizo laptops, AIOs and desktop systems show inconsistencies in exposing and enabling the System BIOS parameters required by the ROCm stack. Before purchasing a Carrizo system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 (a quick way to check a running system is shown after this list). If this is the case, the final requirement is correct CRAT table support - please inquire with the OEM about the latter.
###### Not supported
* We do not support AMD Carrizo and Kaveri APUs as hosts for compliant dGPU attachments.
* Thunderbolt 1 and 2 enabled GPUs are not supported by ROCm. Thunderbolt 1 and 2 are PCIe Gen2 based.
* AMD Merlin/Falcon Embedded Systems are also not currently supported by the public repo.
* AMD Raven Ridge APUs are currently not supported.
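For Carrizo systems in particular, one way to check whether the BIOS actually exposes and enables the IOMMU (a suggested sanity check, not an AMD-documented procedure) is to inspect the kernel log and sysfs after boot:
```shell
# AMD-Vi messages indicate the IOMMU was initialized by the kernel.
dmesg | grep -iE "AMD-Vi|IOMMU"
# Populated IOMMU groups are another sign that the IOMMU is enabled.
ls /sys/kernel/iommu_groups/
```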
### New features to ROCm 1.8.2
#### DKMS driver installation
* Debian packages are provided for DKMS on Ubuntu
* RPM packages are provided for CentOS/RHEL 7.4 and 7.5 support
* See the [ROCT-Thunk-Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-1.8.x) and [ROCK-Kernel-Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-1.8.x) for additional documentation on driver setup
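After installing the DKMS packages you can confirm that the kernel module was built for your running kernel; the exact module name reported by dkms may vary (we assume it shows up as amdgpu here):
```shell
# List DKMS modules and the kernel versions they were built against.
dkms status
```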
#### New distribution support
* Binary package support for Ubuntu 16.04
* Binary package support for CentOS 7.4 and 7.5
* Binary package support for RHEL 7.4 and 7.5
#### Improved OpenMPI via UCX support
* UCX support for OpenMPI
* ROCm RDMA
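A rough sketch of how the UCX path is typically wired up (install prefixes, versions and source locations below are our assumptions, not values from this README): build UCX with ROCm support, then build Open MPI against that UCX.
```shell
# Inside the UCX source tree: enable ROCm transports (assumes ROCm is installed in /opt/rocm).
./configure --prefix=$HOME/ucx-rocm --with-rocm=/opt/rocm
make -j"$(nproc)" && make install

# Inside the Open MPI source tree: point the build at the ROCm-aware UCX.
./configure --prefix=$HOME/ompi-ucx --with-ucx=$HOME/ucx-rocm
make -j"$(nproc)" && make install
```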
#### IPC support
### The latest ROCm platform - ROCm 1.8.2
The latest tested version of the drivers, tools, libraries and source code for
the ROCm platform has been released and is available under the roc-1.8.x or rocm-1.8.x tag
of the following GitHub repositories:
* [ROCK-Kernel-Driver](https://github.com/RadeonOpenCompute/ROCK-Kernel-Driver/tree/roc-1.8.x)
* [ROCR-Runtime](https://github.com/RadeonOpenCompute/ROCR-Runtime/tree/roc-1.8.x)
* [ROCT-Thunk-Interface](https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/tree/roc-1.8.x)
* [ROC-smi](https://github.com/RadeonOpenCompute/ROC-smi/tree/roc-1.8.x)
* [HCC compiler](https://github.com/RadeonOpenCompute/hcc/tree/roc-1.8.x)
* [compiler-runtime](https://github.com/RadeonOpenCompute/compiler-rt/tree/roc-1.8.x)
* [HIP](https://github.com/GPUOpen-ProfessionalCompute-Tools/HIP/tree/roc-1.8.x)
* [HIP-Examples](https://github.com/GPUOpen-ProfessionalCompute-Tools/HIP-Examples/tree/roc-1.8.x)
* [atmi](https://github.com/RadeonOpenCompute/atmi/tree/0.3.7)
Additionally, the following mirror repositories that support the HCC compiler
are also available on GitHub, and frozen for the rocm-1.8.2 release:
* [llvm](https://github.com/RadeonOpenCompute/llvm/tree/roc-1.8.x)
* [lld](https://github.com/RadeonOpenCompute/lld/tree/roc-1.8.x)
* [hcc-clang-upgrade](https://github.com/RadeonOpenCompute/hcc-clang-upgrade/tree/roc-1.8.x)
* [ROCm-Device-Libs](https://github.com/RadeonOpenCompute/ROCm-Device-Libs/tree/roc-1.8.x)
#### Supported Operating Systems - New operating systems available
The ROCm 1.8.2 platform has been tested on the following operating systems:
* Ubuntu 16.04
* CentOS 7.4 and 7.5 (using devtoolset-7 runtime support)
* RHEL 7.4 and 7.5 (using devtoolset-7 runtime support)
* Fedora 24 (Hawaii based GPUs, i.e. Radeon R9 290, R9 390, AMD FirePro S9150, S9170, are not supported)
### Installing from AMD ROCm repositories
AMD is hosting both Debian and RPM repositories for the ROCm 1.8.2 packages at this time.
The packages in the Debian repository have been signed to ensure package integrity.
Directions for each repository are given below.
#### Installing from a Debian repository
##### First make sure your system is up to date
```shell
sudo apt update
sudo apt dist-upgrade
sudo apt install libnuma-dev
sudo reboot
```
##### Add the ROCm apt repository
For Debian based systems, like Ubuntu, configure the Debian ROCm repository as
follows:
```shell
wget -qO - http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
sudo sh -c 'echo deb [arch=amd64] http://repo.radeon.com/rocm/apt/debian/ xenial main > /etc/apt/sources.list.d/rocm.list'
```
The gpg key might change, so it may need to be updated when installing a new
release. The current rocm.gpg.key is not available in a standard key ring distribution,
but it has the following sha1sum hash:
f0d739836a9094004b0a39058d046349aacc1178 rocm.gpg.key
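To compare the key you are about to trust against that hash (a suggested verification step, not one required by the installation instructions):
```shell
# Download the key separately and check its sha1sum against the value above.
wget -q http://repo.radeon.com/rocm/apt/debian/rocm.gpg.key
sha1sum rocm.gpg.key
```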
##### Install
Next, update the apt repository list and install the rocm-dkms package:
>**Warning**: Before proceeding, make sure to completely
>[uninstall any previous ROCm package](https://github.com/RadeonOpenCompute/ROCm#removing-pre-release-packages):
```shell
sudo apt update
sudo apt install rocm-dkms
```
###### Next set your permissions
With the move to upstreaming the KFD driver and the support of DKMS, all console (aka headless) users will need to be added to the "video" group by setting the Unix permissions.
Ensure that your user account is a member of the "video" group prior to using the ROCm driver. You can find which groups you are a member of with the following command:
```shell
groups
```
To add yourself to the video group you will need the sudo password and can use the following command:
```shell
sudo usermod -a -G video $LOGNAME
```
Once complete, reboot your system.
Upon reboot, run:
```shell
rocminfo
clinfo
```
We recommend you [verify your installation](https://github.com/RadeonOpenCompute/ROCm#verify-installation) to make sure everything completed successfully.
If you have an [install issue](https://rocm.github.io/install_issues.html), please read this FAQ.
#### To install ROCm with the Developer Preview of OpenCL
##### Vega10 users who want to run ROCm without PCIe atomics support must set HSA_ENABLE_SDMA=0
Currently with Vega10 GPUs, to disable PCIe atomics support in ROCm you need to turn off SDMA functionality:
```shell
export HSA_ENABLE_SDMA=0
```
##### Start by following the instructions for installing ROCm from the Debian repository
At the step "sudo apt-get install rocm", replace the command with:
```shell
sudo apt-get install rocm opencl-rocm
```
To install the development kit for OpenCL, which includes the OpenCL header files, execute this installation command instead:
```shell
sudo apt-get install rocm opencl-rocm-dev
```
Then follow the remaining directions for the Debian repository.
###### Upon restart, to test your OpenCL instance
Build and run the HelloWorld OCL app.
HelloWorld sample:
```shell
wget https://raw.githubusercontent.com/bgaster/opencl-book-samples/master/src/Chapter_2/HelloWorld/HelloWorld.cpp
wget https://raw.githubusercontent.com/bgaster/opencl-book-samples/master/src/Chapter_2/HelloWorld/HelloWorld.cl
```
Build it using the default ROCm OpenCL include and library locations:
```shell
g++ -I /opt/rocm/opencl/include/ ./HelloWorld.cpp -o HelloWorld -L/opt/rocm/opencl/lib/x86_64 -lOpenCL
```
Run it:
```shell
./HelloWorld
```
##### How to un-install from Ubuntu 16.04
To un-install the entire rocm development package, execute:
```shell
sudo apt autoremove rocm-dkms
```
##### Installing development packages for cross compilation
It is often useful to develop and test on different systems. In this scenario,
you may prefer to avoid installing the ROCm Kernel to your development system.
In this case, install the development subset of packages:
```shell
sudo apt update
sudo apt install rocm-dev
```
>**Note:** To execute ROCm-enabled apps you will require a system with the full
>ROCm driver stack installed.
##### Removing pre-release packages
If you installed any of the ROCm pre-release packages from github, they will
need to be manually un-installed:
```shell
sudo apt purge hsakmt-roct
sudo apt purge hsakmt-roct-dev
sudo apt purge compute-firmware
sudo apt purge $(dpkg -l | grep 'kfd\|rocm' | grep linux | grep -v libc | awk '{print $2}')
```
If possible, we would recommend starting with a fresh OS install.
### CentOS/RHEL 7 (both 7.4 and 7.5) Support
Support for CentOS/RHEL 7 has been added in ROCm 1.8, but it requires a special
runtime environment provided by the RHEL Software Collections and additional
dkms support packages to properly install and run.
#### Preparing RHEL 7 for installation
RHEL is a subscription-based operating system, and several external repositories
must be enabled to allow installation of the devtoolset-7 environment and the DKMS
support files. These steps are not required for CentOS.
First, the subscription for RHEL must be enabled and attached to a pool id. Please
see Obtaining an RHEL image and license page for instructions on registering your
system with the RHEL subscription server and attaching to a pool id.
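For reference, registering and attaching usually looks like the following; the username and pool id are placeholders you must replace with your own:
```shell
# Register the system with the Red Hat subscription server (you will be prompted for a password).
sudo subscription-manager register --username <rhel-username>
# List the available pools, then attach to the one covering this system.
sudo subscription-manager list --available
sudo subscription-manager attach --pool=<pool_id>
```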
Second, enable the following repositories:
```shell
sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms
sudo subscription-manager repos --enable rhel-7-server-optional-rpms
sudo subscription-manager repos --enable rhel-7-server-extras-rpms
```
Third, enable additional repositories by downloading and installing the epel-release-latest-7 repository RPM:
```shell
sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
```
#### Install and setup Devtoolset-7
To setup the Devtoolset-7 environment, follow the instructions on this page:
https://www.softwarecollections.org/en/scls/rhscl/devtoolset-7/
Note that devtoolset-7 is a Software Collections package, and is not supported by AMD.
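On CentOS the collection can typically be installed as shown below (on RHEL the rhscl repository enabled above provides the same packages); treat this as a sketch rather than AMD-endorsed steps:
```shell
# CentOS: enable Software Collections, then install the devtoolset-7 toolchain.
sudo yum install -y centos-release-scl
sudo yum install -y devtoolset-7
# Verify the toolchain from inside the collection environment.
scl enable devtoolset-7 'gcc --version'
```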
#### Prepare CentOS/RHEL 7.4 or 7.5 for DKMS Install
Installing kernel drivers on CentOS/RHEL 7.4/7.5 requires the dkms tool to be installed:
```shell
sudo yum install -y epel-release
sudo yum install -y dkms kernel-headers-`uname -r`
```
At this point the system is ready to install ROCm using the DKMS drivers.
#### Installing ROCm on the system
ROCm can now be installed on the target system. Create a /etc/yum.repos.d/rocm.repo file with the following contents:
```shell
[ROCm]
name=ROCm
baseurl=http://repo.radeon.com/rocm/yum/rpm
enabled=1
gpgcheck=0
```
The repo's URL should point to the location of the repository's repodata database. Install ROCm components using these commands:
```shell
sudo yum install rocm-dkms
```
The rock-dkms component should be installed and the /dev/kfd device should be available on reboot.
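A quick way to confirm this after the reboot (our suggested check, not part of the official instructions):
```shell
# The KFD device node should exist and the kernel log should show KFD initialization.
ls -l /dev/kfd
dmesg | grep -i kfd
```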
Ensure that your user account is a member of the "video" or "wheel" group prior to using the ROCm driver.
You can find which groups you are a member of with the following command:
```shell
groups
```
To add yourself to the video (or wheel) group you will need the sudo password and can use the
following command:
```shell
sudo usermod -a -G video $LOGNAME
```
The current release supports up to CentOS/RHEL 7.4 and 7.5. Users should update to the latest version of the OS:
```shell
sudo yum update
```
##### Vega10 users who want to run ROCm without PCIe atomics support must set HSA_ENABLE_SDMA=0
Currently with Vega10 GPUs, to disable PCIe atomics support in ROCm you need to turn off SDMA functionality:
```shell
export HSA_ENABLE_SDMA=0
```
#### Compiling applications using hcc, hip, etc.
To compile applications or samples, please use gcc-7.2 provided by the devtoolset-7 environment.
To do this, compile all applications after running this command:
```shell
scl enable devtoolset-7 bash
```
#### How to un-install ROCm from CentOS/RHEL 7.4
To un-install the entire rocm development package execute:
```shell
sudo yum autoremove rocm-dkms
```
#### Known Issues / Workarounds
##### If you plan to run with X11 - we are seeing X freezes under load
In ROCm 1.8.2 the kernel parameter noretry has been set to 1 to improve overall system performance. However, it has been proven to bring instability to the graphics driver shipped with Ubuntu. This is an ongoing issue and we are looking into it.
In the meantime, please try applying this change by setting the noretry bit to 0:
```shell
echo 0 | sudo tee /sys/module/amdkfd/parameters/noretry
```
Files under /sys won't be preserved after a reboot, so you'll need to do this every time.
One way to keep noretry=0 is to edit /etc/modprobe.d/amdkfd.conf so that it contains:
options amdkfd noretry=0
Once that's done, run sudo update-initramfs -u. Reboot and verify that /sys/module/amdkfd/parameters/noretry stays at 0.
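Putting those steps together for Ubuntu (on CentOS/RHEL the initramfs would be rebuilt with dracut instead; that substitution is our assumption, not something stated here):
```shell
# Persist the setting, rebuild the initramfs, then check the value after the next reboot.
echo "options amdkfd noretry=0" | sudo tee /etc/modprobe.d/amdkfd.conf
sudo update-initramfs -u
cat /sys/module/amdkfd/parameters/noretry
```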
##### If you are using hipCaffe Alexnet training on ImageNet - we are seeing sporadic hangs of hipCaffe during training
#### Manual installation steps for Fedora
A fully functional Fedora installation requires a few manual steps to properly
set up, including:
* [Building compatible libc++ and libc++abi libraries for Fedora](https://github.com/RadeonOpenCompute/hcc/wiki#fedora)
#### Verify installation
To verify that the ROCm stack completed successfully, you can execute the HSA
vector_copy sample application (we recommend that you copy it to a
separate folder and invoke make therein):
```shell
cd /opt/rocm/hsa/sample
make
./vector_copy
```
#### Closed source components
The ROCm platform relies on a few closed source components to provide legacy
functionality like HSAIL finalization and debugging/profiling support. These
components are only available through the ROCm repositories, and will either be
@@ -394,14 +295,12 @@ made available in the following packages:
* hsa-ext-rocr-dev
### Getting ROCm source code
Modifications can be made to the ROCm 1.8 components by modifying the open
source code base and rebuilding the components. Source code can be cloned from
each of the GitHub repositories using git, or users can use the repo command
and the ROCm 1.8 manifest file to download the entire ROCm 1.8 source code.
#### Installing repo
Google's repo tool allows you to manage multiple git repositories
simultaneously. You can install it by executing the following commands:
@@ -412,14 +311,13 @@ chmod a+x ~/bin/repo
Note: make sure ~/bin exists and it is part of your PATH
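The installation commands themselves are elided from this excerpt; the usual repo bootstrap (our assumption of what the skipped block contains) looks like:
```shell
# Install Google's repo tool into ~/bin and make it executable.
mkdir -p ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
export PATH=~/bin:$PATH
```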
#### Cloning the code
```shell
mkdir ROCm && cd ROCm
repo init -u https://github.com/RadeonOpenCompute/ROCm.git -b roc-1.8.2
repo sync
```
This series of commands will pull all of the open source code associated with
the ROCm 1.8 release. Please ensure that ssh-keys are configured for the
target machine on GitHub for your GitHub ID.
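If ssh keys are not yet configured for your GitHub ID, a typical setup (email address and key path are placeholders) is:
```shell
# Generate a key, add the public half to your GitHub account, then test the connection.
ssh-keygen -t rsa -b 4096 -C "you@example.com"
cat ~/.ssh/id_rsa.pub   # paste this into GitHub -> Settings -> SSH and GPG keys
ssh -T git@github.com
```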
* OpenCL Runtime and Compiler will be submitted to the Khronos Group, prior to


@@ -6,7 +6,7 @@
<remote name="pctools-github"
fetch="http://git@github.com/GPUOpen-ProfessionalCompute-Tools/" />
<default revision="roc-1.8.x"
<default revision="roc-1.5.x"
remote="roc-github"
sync-j="4" />