Compare commits

...

50 Commits

Author SHA1 Message Date
Joseph Macaranas
fdad6cfa27 Update pytorch.yml 2024-12-19 13:49:10 -05:00
Joseph Macaranas
b6d6e83a9b Update pytorch.yml 2024-12-19 12:40:52 -05:00
Joseph Macaranas
fe4c5dbe62 Update pytorch.yml 2024-12-19 00:07:28 -05:00
Joseph Macaranas
73f660b683 Checkout 0.20 release vision to try build 2024-12-18 18:01:54 -05:00
Joseph Macaranas
39bec204c0 Update pytorch.yml 2024-12-17 20:50:33 -05:00
Joseph Macaranas
2ab1041ffb Revert "Merge branch 'amd/jmacaran/pytorch_hip_fp16' of https://github.com/ROCm/ROCm into amd/jmacaran/pytorch_hip_fp16"
This reverts commit 4b66b6d7be, reversing
changes made to ebb6f29b58.
2024-12-17 19:38:20 -05:00
Joseph Macaranas
aacf7a96e0 Revert "Update pytorch.yml"
This reverts commit 35c25a762a.
2024-12-17 19:31:27 -05:00
Joseph Macaranas
35c25a762a Update pytorch.yml 2024-12-17 16:45:42 -05:00
Joseph Macaranas
1de2d2306b Try specific clr/HIP build 2024-12-17 16:01:50 -05:00
Joseph Macaranas
cac821b9e4 Update pytorch.yml 2024-12-17 15:10:39 -05:00
Joseph Macaranas
61827b7192 Update pytorch.yml 2024-12-17 15:10:00 -05:00
Joseph Macaranas
24b99fd952 Merge branch 'develop' into amd/jmacaran/pytorch_hip_fp16 2024-12-17 15:07:51 -05:00
Joseph Macaranas
6d965ebdb4 Update pytorch.yml 2024-12-17 15:07:25 -05:00
Pratik Basyal
6a7d8654ad Revamped PCIe into new format and incorporated style guide (#4051)
* Revamped PCIe into new format and incorporated style guide

* Title case fixed

* Quick fix and changes

* Added RMW to wordlist and updated titles

* Grammatical fixes incorporated

* Sandra's review feedback incorporated

* Removed PCIe3 feature reference

* Leo's feedback incorporated

* Sandra's feedback incorporated

* Replaced execute with run

* Replaced executing with running

* SME review feedback incorporated

* Minor feedback updated

* Sandra's feedback incorporated

* Filename renamed

* File rename changes updated

* Document title updated

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-17 12:00:00 -05:00
Joseph Macaranas
4b66b6d7be Merge branch 'amd/jmacaran/pytorch_hip_fp16' of https://github.com/ROCm/ROCm into amd/jmacaran/pytorch_hip_fp16 2024-12-16 22:30:52 -05:00
Joseph Macaranas
ebb6f29b58 Update pytorch.yml 2024-12-16 22:30:23 -05:00
darren-amd
dc648ad764 External CI: Disable rocprof-system openmp target example (#4166) 2024-12-16 14:38:55 -05:00
amd-jmacaran
656e7a21f7 External CI: temp patch to test pytorch build failure 2024-12-16 14:05:45 -05:00
Peter Park
f9dbc1f21f add megatron training doc (#4159)
* add megatron training doc

update toc

add images

update formatting and wording

formatting

update formatting

update conf.py

update formatting

update docker img

tweak formatting

Fix stuff

fix mock-data/data-path

add specific commit hash to checkout

update docker pull tag

fix docker run cmd and examples path

fix docker cmd

* wording

words

words

* improve title
2024-12-16 13:37:35 -05:00
Yanyao Wang
a857597340 Merge pull request #4162 from WBobby/develop-pr
Update build scripts of ROCm6.3 release to develop branch
2024-12-16 12:23:27 -06:00
Wang, Yanyao
a2d128749c Update build scripts of ROCm6.3 release to develop branch 2024-12-15 17:06:23 -08:00
Joseph Macaranas
bacb49681e External CI: Typo in rocPyDecode Pipeline Parameter (#4157) 2024-12-13 14:35:35 -05:00
Jeffrey Novotny
04fdc08328 Change reference to kernel-mode GPU compute driver in ROCm (#4147)
* Change reference to kernel-mode GPU compute driver in ROCm

* More changes for kernel-mode terminology

* Fix linting
2024-12-13 11:46:02 -05:00
darren-amd
1b33f1d7da External CI: llvm project comgr disable spirv (#4154)
External CI: add flag to disable SPIRV from comgr build in llvm-project
2024-12-13 10:51:36 -05:00
Joseph Macaranas
fd067f7b3b External CI: HIP shared library symlinks for rocPyDecode (#4153)
- Modifying the HIP shared libraries installed to follow the Linux symbolic link convention resolves test failures in rocPyDecode.
2024-12-13 09:58:25 -05:00
spolifroni-amd
2a7520f08a Added MIGraphX changes (#4150)
* Added MIGraphX changes

* removed gfx support

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

---------

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
2024-12-12 11:19:28 -05:00
Joseph Macaranas
5271c2c82d External CI: MIOpen Test Parameters (#4148)
- Exclude lone, consistently failing MIOpen test.
- test_rnn_seq_api is the only ctest failure, so let's filter it out for now to easily identify new failures.
2024-12-11 13:31:12 -05:00
JeniferC99
59a928f3a7 Merge pull request #4137 from ROCm/JeniferC99-patch-1
Update default.xml
2024-12-10 11:47:38 -06:00
Joseph Macaranas
b028a3af96 Patch CK 2024-12-09 20:42:32 -05:00
Joseph Macaranas
77f7795edc Patch CK 2024-12-09 20:19:42 -05:00
David Galiffi
22572a9857 Add TransferBench and hipBLAS-common 2024-12-09 18:37:20 -05:00
Joseph Macaranas
0c2159c67d Adjust patch 2024-12-09 16:52:42 -05:00
amd-jmacaran
8f21bc9d1e External CI: temp patch to test pytorch build failure 2024-12-09 16:44:09 -05:00
randyh62
49e50b93c6 Update index.md (#4144) (#4146)
Remove Programming Guide topic from "How to"
2024-12-09 12:17:54 -08:00
Istvan Kiss
3354099b9c Remove GPU memory page 2024-12-09 17:23:57 +01:00
David Galiffi
794b34f40e Update default.xml
Fixed merge conflicts
2024-12-09 11:18:39 -05:00
David Galiffi
25ef417b31 Merge branch 'develop' into JeniferC99-patch-1 2024-12-09 11:17:17 -05:00
Peter Park
78f9adc6ec fix rccl hip streams section in workload tuning guide (#4140) 2024-12-09 11:06:12 -05:00
David Galiffi
4abcae54a8 Update default.xml (#4136)
Add rocJPEG
Rename omniperf to rocprofiler-compute
Rename omnitrace to rocprofiler-compute
2024-12-09 10:56:07 -05:00
JeniferC99
2690506e64 Update default.xml
SWDEV-502858
Rename  Omnitrace and Omniperf
2024-12-06 23:01:56 -06:00
darren-amd
3dffe1998a External CI: add aomp dependency to rocprofiler-sdk (#4135)
External CI: Add aomp dependency for rocprofiler-sdk
2024-12-06 16:20:09 -05:00
Peter Park
b0722b3228 Add @hongxiayang updates to MI300X workload tuning guide (#4123)
minor fixes to formatting

fix spelling errors

more spelling

fixes

quantization update

fix format

simplify wording in tunableops and format fix

Apply suggestions from code review

review feedback by Peter

Co-authored-by: Peter Park <peter.park@amd.com>

Apply suggestions from code review

addressing feedback

Co-authored-by: Peter Park <peter.park@amd.com>

Apply suggestions from code review

feedback again

Co-authored-by: Peter Park <peter.park@amd.com>

add hipblaslt yaml file figure

feedback and minor formatting

formatting

update wordlist.txt

remove outdated sentence regarding fsdp and rccl

(cherry picked from commit 87fa9fd83a2e623f6cab4e69d65f49e3db0a45f6)

update wordlist

Co-authored-by: hongxyan <hongxyan@amd.com>
2024-12-06 12:10:57 -05:00
Daniel Su
73e21c82c0 External CI: finalize rocJPEG enablement (#4125) 2024-12-06 11:47:45 -05:00
Swati Rawat
5e6ddec385 Update what-is-rocm.rst (#4122) 2024-12-06 10:22:27 -05:00
Peter Park
1a4d54a4f1 remove programming guide from TOC (#4116) 2024-12-05 16:50:39 -05:00
Daniel Su
788796bfe1 External CI: create pipeline files for rocJPEG (#4117) 2024-12-05 16:17:42 -05:00
Daniel Su
922209e5c9 External CI: change rocm-core staging branch to master (#4115) 2024-12-05 14:45:38 -05:00
Peter Park
3b1d1fa5b7 fix stack image (#4112) 2024-12-04 21:55:17 -05:00
dependabot[bot]
c954022547 Build(deps): Bump rocm-docs-core from 1.9.2 to 1.11.0 in /docs/sphinx (#4111)
Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.9.2 to 1.11.0.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.9.2...v1.11.0)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-04 19:30:13 -07:00
Peter Park
0e9f50d093 fix links to smi tools full changelog on GH (#4108) 2024-12-04 19:05:15 -07:00
125 changed files with 3589 additions and 2817 deletions

View File

@@ -197,3 +197,4 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
parameters:
componentName: MIOpen
testParameters: '-VV --output-on-failure --force-new-ctest-process --output-junit test_output.xml --exclude-regex test_rnn_seq_api'

View File

@@ -126,6 +126,7 @@ jobs:
componentName: comgr
extraBuildFlags: >-
-DCMAKE_PREFIX_PATH="$(Build.SourcesDirectory)/llvm/build;$(Build.SourcesDirectory)/amd/device-libs/build"
-DCOMGR_DISABLE_SPIRV=1
-DCMAKE_BUILD_TYPE=Release
cmakeBuildDir: 'amd/comgr/build'
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml

View File

@@ -0,0 +1,148 @@
parameters:
- name: checkoutRepo
type: string
default: 'self'
- name: checkoutRef
type: string
default: ''
- name: aptPackages
type: object
default:
- cmake
- libdrm-dev
- libstdc++-12-dev
- libva-amdgpu-dev
- mesa-amdgpu-va-drivers
- ninja-build
- pkg-config
- name: rocmDependencies
type: object
default:
- clr
- llvm-project
- rocm-cmake
- rocminfo
- rocm-core
- rocprofiler-register
- ROCR-Runtime
- name: rocmTestDependencies
type: object
default:
- clr
- llvm-project
- rocminfo
- rocprofiler-register
- ROCR-Runtime
jobs:
- job: rocJPEG
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
# Since mesa-amdgpu-multimedia-devel is not directly available from apt, register it
- task: Bash@3
displayName: 'Register ROCm packages'
inputs:
targetType: inline
script: |
sudo mkdir --parents --mode=0755 /etc/apt/keyrings
wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | gpg --dearmor | sudo tee /etc/apt/keyrings/rocm.gpg > /dev/null
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/amdgpu/${{ variables.KEYRING_VERSION }}/ubuntu jammy main" | sudo tee /etc/apt/sources.list.d/amdgpu.list
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/${{ variables.KEYRING_VERSION }} jammy main" | sudo tee --append /etc/apt/sources.list.d/rocm.list
echo -e 'Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600' | sudo tee /etc/apt/preferences.d/rocm-pin-600
sudo apt update
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
parameters:
checkoutRepo: ${{ parameters.checkoutRepo }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
parameters:
dependencyList: ${{ parameters.rocmDependencies }}
gpuTarget: $(JOB_GPU_TARGET)
# CI case: download latest default branch build
${{ if eq(parameters.checkoutRef, 'develop') }}:
dependencySource: staging
# manual build case: triggered by ROCm/ROCm repo
${{ elseif ne(parameters.checkoutRef, 'develop') }}:
dependencySource: tag-builds
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
parameters:
extraBuildFlags: >-
-DROCM_PATH=$(Agent.BuildDirectory)/rocm
-DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm
-DCMAKE_BUILD_TYPE=Release
-GNinja
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- job: rocJPEG_testing
dependsOn: rocJPEG
condition: and(succeeded(), eq(variables.ENABLE_GFX942_TESTS, 'true'), not(containsValue(split(variables.DISABLED_GFX942_TESTS, ','), variables['Build.DefinitionName'])))
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool:
name: $(JOB_TEST_POOL)
demands: firstRenderDeviceAccess
workspace:
clean: all
strategy:
matrix:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
steps:
# Since mesa-amdgpu-multimedia-devel is not directly available from apt, register it
- task: Bash@3
displayName: 'Register ROCm packages'
inputs:
targetType: inline
script: |
sudo mkdir --parents --mode=0755 /etc/apt/keyrings
wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | gpg --dearmor | sudo tee /etc/apt/keyrings/rocm.gpg > /dev/null
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/amdgpu/${{ variables.KEYRING_VERSION }}/ubuntu jammy main" | sudo tee /etc/apt/sources.list.d/amdgpu.list
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/${{ variables.KEYRING_VERSION }} jammy main" | sudo tee --append /etc/apt/sources.list.d/rocm.list
echo -e 'Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600' | sudo tee /etc/apt/preferences.d/rocm-pin-600
sudo apt update
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/local-artifact-download.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-aqlprofile.yml
parameters:
${{ if eq(parameters.checkoutRef, 'develop') }}:
dependencySource: staging
${{ elseif ne(parameters.checkoutRef, 'develop') }}:
dependencySource: tag-builds
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
parameters:
dependencyList: ${{ parameters.rocmTestDependencies }}
gpuTarget: $(JOB_GPU_TARGET)
${{ if eq(parameters.checkoutRef, 'develop') }}:
dependencySource: staging
${{ elseif ne(parameters.checkoutRef, 'develop') }}:
dependencySource: tag-builds
# anything in /opt may be persistent across runs
# so we need to remove the symlink if it already exists
- script: |
sudo rm -rf /opt/rocm
sudo ln -s $(Agent.BuildDirectory)/rocm /opt/rocm
mkdir rocJPEG-tests
cd rocJPEG-tests
cmake $(Agent.BuildDirectory)/rocm/share/rocjpeg/test
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/gpu-diagnostics.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
parameters:
componentName: rocJPEG
testDir: 'rocJPEG-tests'
- script: sudo rm /opt/rocm
condition: always()

View File

@@ -181,6 +181,7 @@ jobs:
parameters:
dependencyList: ${{ parameters.rocmDependencies }}
gpuTarget: $(JOB_GPU_TARGET)
setupHIPLibrarySymlinks: true
${{ if eq(parameters.checkoutRef, '') }}:
dependencySource: staging
${{ elseif ne(parameters.checkoutRef, '') }}:

View File

@@ -41,6 +41,7 @@ parameters:
- ROCR-Runtime
- rocprofiler-register
- roctracer
- aomp
jobs:
- job: rocprofilersdk

View File

@@ -109,6 +109,7 @@ jobs:
-DROCPROFSYS_BUILD_TESTING=ON
-DROCPROFSYS_BUILD_DYNINST=ON
-DROCPROFSYS_BUILD_LIBUNWIND=ON
-DROCPROFSYS_DISABLE_EXAMPLES="openmp-target"
-DDYNINST_BUILD_TBB=ON
-DDYNINST_BUILD_ELFUTILS=ON
-DDYNINST_BUILD_LIBIBERTY=ON

View File

@@ -99,7 +99,7 @@ parameters:
default:
- rocminfo
- MIOpen
- clr
# - clr
- hipBLAS
- hipFFT
- hipRAND
@@ -120,10 +120,11 @@ parameters:
- rocm-core
- rocPRIM
# below are additional dependencies not called out by build script, but throw errors during cmake
- hipCUB
- rocThrust
- hipBLAS-common
- composable_kernel
- hipBLAS-common
- hipCUB
- rocminfo
- rocThrust
- name: rocmTestDependencies
type: object
default:
@@ -166,11 +167,11 @@ jobs:
- template: /.azuredevops/variables-global.yml
# various flags/parameters expected by bash scripts in pytorch repo's .ci directory
- name: ROCM_VERSION
value: 6.3.0
value: 6.4.0
- name: ROCM_PATH
value: /opt/rocm
- name: DESIRED_CUDA
value: 6.3.0
value: 6.4.0
- name: MKLROOT
value: /opt/intel
- name: AOTRITON_INSTALLED_PREFIX
@@ -211,11 +212,36 @@ jobs:
script: |
sudo mkdir -p /opt/python/cp310-cp310/lib/python3.10
sudo ln -s /usr/local/lib/python3.10/dist-packages /opt/python/cp310-cp310/lib/python3.10/site-packages
- task: DownloadPipelineArtifact@2
displayName: Download Specific HIP
inputs:
buildType: 'specific'
project: ROCm-CI
definition: 145
specificBuildWithTriggering: true
itemPattern: '**/*'
buildVersionToDownload: specific
targetPath: '$(Pipeline.Workspace)/d'
pipelineId: 16515
- task: ExtractFiles@1
displayName: Extract clr
inputs:
archiveFilePatterns: '$(Pipeline.Workspace)/d/**/*.tar.gz'
destinationFolder: '$(Agent.BuildDirectory)/rocm'
cleanDestinationFolder: false
overwriteExistingFiles: true
- task: DeleteFiles@1
displayName: Cleanup Compressed clr
inputs:
SourceFolder: '$(Pipeline.Workspace)/d'
Contents: '**/*.tar.gz'
RemoveDotFiles: true
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
parameters:
dependencyList: ${{ parameters.rocmDependencies }}
dependencySource: staging
gpuTarget: $(JOB_GPU_TARGET)
setupHIPLibrarySymlinks: true
- task: Bash@3
displayName: ROCm symbolic link
inputs:
@@ -226,8 +252,14 @@ jobs:
displayName: git clone upstream pytorch
inputs:
targetType: inline
script: git clone https://github.com/pytorch/pytorch.git --depth=1 --recurse-submodules
script: git clone https://github.com/pytorch/pytorch.git --recurse-submodules
workingDirectory: $(Build.SourcesDirectory)
- task: Bash@3
displayName: checkout pytorch 2.5
inputs:
targetType: inline
script: git checkout release/2.5
workingDirectory: $(Build.SourcesDirectory)/pytorch
# builder clone still needed due to run_tests.sh at end of build_common.sh call
- task: Bash@3
displayName: git clone pytorch builder
@@ -271,6 +303,18 @@ jobs:
targetType: inline
script: sudo bash ./common/install_aotriton.sh /opt/rocm
workingDirectory: $(Build.SourcesDirectory)/pytorch/.ci/docker
# - task: Bash@3
# displayName: Temporarily Patch HIP
# inputs:
# targetType: inline
# script: git apply $(Build.SourcesDirectory)/.azuredevops/patches/pytorch_hip_fp16.diff
# workingDirectory: $(Agent.BuildDirectory)/rocm
# - task: Bash@3
# displayName: Temporarily Patch CK Submodule
# inputs:
# targetType: inline
# script: git pull origin develop
# workingDirectory: $(Build.SourcesDirectory)/pytorch/third_party/composable_kernel
- task: Bash@3
displayName: Run ROCm Build Script
inputs:
@@ -318,8 +362,14 @@ jobs:
displayName: git clone pytorch vision
inputs:
targetType: inline
script: git clone https://github.com/pytorch/vision.git --depth=1 --recurse-submodules
script: git clone https://github.com/pytorch/vision.git --recurse-submodules
workingDirectory: $(Build.SourcesDirectory)
- task: Bash@3
displayName: checkout release vision
inputs:
targetType: inline
script: git checkout release/0.20
workingDirectory: $(Build.SourcesDirectory)/vision
- task: Bash@3
displayName: Build vision
inputs:

View File

@@ -35,6 +35,7 @@ parameters:
- rocDecode
- rocFFT
- ROCgdb
- rocJPEG
- rocm-cmake
- rocm-core
- rocm-examples

View File

@@ -0,0 +1,27 @@
From 342133a5cb404beae4d7e1994338120ff99a76d2 Mon Sep 17 00:00:00 2001
From: Jatin Chaudhary <JatinJaikishan.Chaudhary@amd.com>
Date: Mon, 09 Dec 2024 11:24:29 +0000
Subject: [PATCH] SWDEV-503299 - Do not use operator to check for nan
Some libs use __HIP_NO_HALF_OPERATORS__ and __HIP_NO_HALF_CONVERSIONS__
which results in operators being hidden and can cause errors.
Change-Id: I83c194d7d727cba30b46d7c296f7d396549f5fca
---
diff --git a/include/hip/amd_detail/amd_hip_fp16.h b/include/hip/amd_detail/amd_hip_fp16.h
index c8117b1..1a08bb8 100644
--- a/include/hip/amd_detail/amd_hip_fp16.h
+++ b/include/hip/amd_detail/amd_hip_fp16.h
@@ -1679,8 +1679,9 @@
__HOST_DEVICE__
bool __hisinf(__half x)
{
- // +Inf/-Inf
- return x == HIPRT_INF_FP16 || x == __ushort_as_half((unsigned short)0xFC00U);
+ __half_raw hr = x;
+ // +/-Inf
+ return hr.x == 0x7C00U || hr.x == 0xFC00U;
}
inline
__HOST_DEVICE__
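
For context, the patch above replaces the operator-based comparison with a raw bit-pattern check: in IEEE 754 binary16, +Inf and -Inf are encoded as 0x7C00 and 0xFC00. A minimal plain-C++ sketch of the same check (the `is_inf_fp16` helper stands in for HIP's `__half_raw` access and is an assumption here, not the real header):

```cpp
#include <cstdint>
#include <cstdio>

// Stand-in for __half_raw: the 16 raw bits of an IEEE 754 binary16 value.
static bool is_inf_fp16(std::uint16_t bits) {
    // +Inf = 0x7C00 (exponent all ones, mantissa zero); -Inf = 0xFC00 (sign bit also set).
    return bits == 0x7C00u || bits == 0xFC00u;
}

int main() {
    // 0x3C00 encodes 1.0, so the last check prints 0.
    std::printf("%d %d %d\n", is_inf_fp16(0x7C00u), is_inf_fp16(0xFC00u), is_inf_fp16(0x3C00u));
    return 0;
}
```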

View File

@@ -0,0 +1,29 @@
variables:
- group: common
- template: /.azuredevops/variables-global.yml
parameters:
- name: checkoutRef
type: string
default: refs/tags/$(LATEST_RELEASE_TAG)
resources:
repositories:
- repository: pipelines_repo
type: github
endpoint: ROCm
name: ROCm/ROCm
- repository: release_repo
type: github
endpoint: ROCm
name: ROCm/rocJPEG
ref: ${{ parameters.checkoutRef }}
trigger: none
pr: none
jobs:
- template: ${{ variables.CI_COMPONENT_PATH }}/rocJPEG.yml
parameters:
checkoutRepo: release_repo
checkoutRef: ${{ parameters.checkoutRef }}

View File

@@ -60,8 +60,9 @@ parameters:
rocDecode: develop
rocFFT: develop
ROCgdb: amd-staging
rocJPEG: develop
rocm-cmake: develop
rocm-core: amd-staging
rocm-core: master
rocm-examples: develop
rocminfo: amd-staging
rocMLIR: develop
@@ -121,7 +122,8 @@ parameters:
ROCdbgapi : amd-mainline
rocDecode: mainline
rocFFT: mainline
ROCgdb: amd-mainline-rocgdb-15 #
ROCgdb: amd-mainline-rocgdb-15
rocJPEG: mainline
rocm-cmake: mainline
rocm-core: amd-master
rocm-examples: develop # no mainline

View File

@@ -65,6 +65,7 @@ parameters:
rocDecode: $(ROCDECODE_PIPELINE_ID)
rocFFT: $(ROCFFT_PIPELINE_ID)
ROCgdb: $(ROCGDB_PIPELINE_ID)
rocJPEG: $(ROCJPEG_PIPELINE_ID)
rocm-cmake: $(ROCM_CMAKE_PIPELINE_ID)
rocm-core: $(ROCM_CORE_PIPELINE_ID)
rocm-examples: $(ROCM_EXAMPLES_PIPELINE_ID)
@@ -128,6 +129,7 @@ parameters:
rocDecode: $(ROCDECODE_TAGGED_PIPELINE_ID)
rocFFT: $(ROCFFT_TAGGED_PIPELINE_ID)
ROCgdb: $(ROCGDB_TAGGED_PIPELINE_ID)
rocJPEG: $(ROCJPEG_TAGGED_PIPELINE_ID)
rocm-cmake: $(ROCM_CMAKE_TAGGED_PIPELINE_ID)
rocm-core: $(ROCM_CORE_TAGGED_PIPELINE_ID)
rocm-examples: $(ROCM_EXAMPLES_TAGGED_PIPELINE_ID)
@@ -163,6 +165,11 @@ parameters:
- name: skipLlvmSymlink
type: boolean
default: false
# set to true if dlopen calls for HIP libraries are causing failures
# because they do not follow shared library symlink convention
- name: setupHIPLibrarySymlinks
type: boolean
default: false
# some ROCm components can specify GPU target and this will affect downloads
- name: gpuTarget
type: string
@@ -278,6 +285,37 @@ steps:
for file in amdclang amdclang++ amdclang-cl amdclang-cpp amdflang amdlld aompcc mygpu mycpu offload-arch; do
sudo ln -s $(Agent.BuildDirectory)/rocm/llvm/bin/$file $(Agent.BuildDirectory)/rocm/bin/$file
done
# dlopen calls within a ctest or pytest sequence runs into issues when shared library symlink convention is not followed
# the convention is as follows:
# unversioned .so is a symlink to major version .so
# major version .so is a symlink to detailed version .so
# HIP libraries do not follow this convention, and each .so is a copy of each other
# changing the library structure to follow the symlink convention resolves some test failures
- ${{ if eq(parameters.setupHIPLibrarySymlinks, true) }}:
- task: Bash@3
displayName: Setup symlinks for hip libraries
inputs:
targetType: inline
workingDirectory: $(Agent.BuildDirectory)/rocm/lib
script: |
LIBRARIES=("libamdhip64" "libhiprtc-builtins" "libhiprtc")
for LIB_NAME in "${LIBRARIES[@]}"; do
VERSIONED_SO=$(ls ${LIB_NAME}.so.* 2>/dev/null | grep -E "${LIB_NAME}\.so\.[0-9]+\.[0-9]+\.[0-9]+(-.*)?" | sort -V | tail -n 1)
if [[ -z "$VERSIONED_SO" ]]; then
continue
fi
MAJOR_VERSION=$(echo "$VERSIONED_SO" | grep -oP "${LIB_NAME}\.so\.\K[0-9]+")
if [[ -e "${LIB_NAME}.so.${MAJOR_VERSION}" && ! -L "${LIB_NAME}.so.${MAJOR_VERSION}" ]]; then
rm -f "${LIB_NAME}.so.${MAJOR_VERSION}"
fi
if [[ -e "${LIB_NAME}.so" && ! -L "${LIB_NAME}.so" ]]; then
rm -f "${LIB_NAME}.so"
fi
ln -sf "$VERSIONED_SO" "${LIB_NAME}.so.${MAJOR_VERSION}"
ln -sf "${LIB_NAME}.so.${MAJOR_VERSION}" "${LIB_NAME}.so"
echo "Symlinks created for $LIB_NAME:"
ls -l ${LIB_NAME}.so*
done
- task: Bash@3
displayName: 'List downloaded ROCm files'
inputs:

View File

@@ -34,7 +34,7 @@ variables:
- name: LATEST_DOCKER_VERSION
value: 6.1
- name: KEYRING_VERSION
value: 6.1
value: 6.3
- name: AMDMIGRAPHX_GFX942_TEST_PIPELINE_ID
value: 197
- name: AMDMIGRAPHX_PIPELINE_ID
@@ -219,6 +219,10 @@ variables:
value: 134
- name: ROCGDB_TAGGED_PIPELINE_ID
value: 50
- name: ROCJPEG_PIPELINE_ID
value: 262
- name: ROCJPEG_TAGGED_PIPELINE_ID
value: 263
- name: ROCM_BANDWIDTH_TEST_PIPELINE_ID
value: 88
- name: ROCM_BANDWIDTH_TEST_TAGGED_PIPELINE_ID

View File

@@ -159,6 +159,7 @@ HWS
Haswell
Higgs
Hyperparameters
Huggingface
ICD
ICV
IDE
@@ -188,6 +189,7 @@ Jupyter
KFD
KFDTest
KiB
KMD
KV
KVM
Keras
@@ -313,6 +315,7 @@ RDMA
RDNA
README
RHEL
RMW
RNN
RNNs
ROC
@@ -381,6 +384,7 @@ TCR
TF
TFLOPS
TP
TPS
TPU
TPUs
TSME
@@ -457,10 +461,12 @@ api
atmi
atomics
autogenerated
autotune
avx
awk
backend
backends
benchmarked
benchmarking
bfloat
bilinear
@@ -530,6 +536,7 @@ disambiguates
distro
distros
dkms
dtype
el
embeddings
enablement
@@ -562,6 +569,7 @@ heterogenous
hipBLAS
hipBLASLt
hipBLASLt's
hipblaslt
hipCUB
hipFFT
hipLIB
@@ -585,6 +593,7 @@ hpp
hsa
hsakmt
hyperparameter
hyperparameters
iDRAC
ib_core
inband
@@ -605,7 +614,9 @@ ipo
jax
kdb
kfd
kv
latencies
len
libfabric
libjpeg
libs
@@ -631,6 +642,7 @@ mutex
mvffr
namespace
namespaces
num
numref
ocl
opencl
@@ -726,7 +738,9 @@ runtimes
sL
scalability
scalable
seealso
sendmsg
seqs
serializers
shader
sharding
@@ -767,6 +781,7 @@ txt
uarch
uncached
uncorrectable
underoptimized
unhandled
uninstallation
unmapped

View File

@@ -232,7 +232,7 @@ Click {fab}`github` to go to the component's source code on GitHub.
</tr>
<tr>
<td><a href="https://rocm.docs.amd.com/projects/AMDMIGraphX/en/docs-6.3.0/index.html">MIGraphX</a></td>
<td>2.11.0</td>
<td>2.10.0&nbsp;&Rightarrow;&nbsp;<a href="#migraphx-2-11-0">2.11.0</a></td>
<td><a href="https://github.com/ROCm/AMDMIGraphX"><i class="fab fa-github fa-lg"></i></a></td>
</tr>
<tr>
@@ -643,7 +643,7 @@ The following sections describe key changes to ROCm components.
- The command will be at full functionality once additional partition information from `amdsmi_get_gpu_accelerator_partition_profile()` has been implemented.
```{note}
See the full [AMD SMI changelog](https://github.com/ROCm/amdsmi/blob/rocm-6.3.x/CHANGELOG.md) for more details and examples.
See the full [AMD SMI changelog](https://github.com/ROCm/amdsmi/blob/6.3.x/CHANGELOG.md) for more details and examples.
```
### **HIP** (6.3.0)
@@ -897,6 +897,78 @@ See the full [AMD SMI changelog](https://github.com/ROCm/amdsmi/blob/rocm-6.3.x/
srcLane, width)` function when one of the parameters to the function is undefined along some path
to the function. See [issue #3499](https://github.com/ROCm/ROCm/issues/3499) on GitHub.
### **MIGraphX** (2.11.0)
#### Added
* Initial code to run on Windows
* Support for `FP8` and `INT4`
* Support for the Log2 internal operator
* Support for the GCC 14 compiler
* The `BitwiseAnd`, `Scan`, `SoftmaxCrossEntropyLoss`, `GridSample`, and `NegativeLogLikelihoodLoss` ONNX operators
* The `MatMulNBits`, `QuantizeLinear`/`DequantizeLinear`, `GroupQueryAttention`, `SkipSimplifiedLayerNormalization`, and `SimpliedLayerNormalizationMicrosoft` Contrib operators
* Dynamic batch parameter support to `OneHot` operator
* Split-K as an optional performance improvement
* Scripts to validate ONNX models from the ONNX Model Zoo
* GPU Pooling Kernel
* `--mlir` flag the migraphx-driver program to offload entire module to MLIR
* Fusing split-reduce with MLIR
* Multiple outputs for the MLIR + Pointwise fusions
* Pointwise fusions with MLIR across reshape operations
* `MIGRAPHX_MLIR_DUMP` environment variable to dump MLIR modules to MXRs
* The `3` option to `MIGRAPHX_TRACE_BENCHMARKING` to print the MLIR program for improved debug output
* `MIGRAPHX_ENABLE_HIPBLASLT_GEMM` environment variable to call hipBLASLt libraries
* `MIGRAPHX_VERIFY_DUMP_DIFF` to improve the debugging of accuracy issues
* `reduce_any` and `reduce_all` options to the `Reduce` operation via Torch MIGraphX
* Examples for RNNT, and ControlNet
#### Changed
* Switched to MLIR's 3D Convolution operator.
* MLIR is now used for Attention operations by default on gfx942 and newer ASICs.
* Names and locations for VRM specific libraries have changed.
* Use random mode for benchmarking GEMMs and convolutions.
* Python version is now printed with an actual version number.
#### Removed
* Disabled requirements for MIOpen and rocBLAS when running on Windows.
* Removed inaccurate warning messages when using exhaustive-tune.
* Remove the hard coded path in `MIGRAPHX_CXX_COMPILER` allowing the compiler to be installed in different locations.
#### Optimized
* Improved:
* Infrastructure code to enable better Kernel fusions with all supported data types
* Subsequent model compile time by creating a cache for already performant kernels
* Use of Attention fusion with models
* Performance of the Softmax JIT kernel and of the Pooling operator
* Tuning operations through a new 50ms delay before running the next kernel
* Performance of several convolution-based models through an optimized NHWC layout
* Performance for the `FP8` datatype
* GPU utilization
* Verification tools
* Debug prints
* Documentation, including gpu-driver utility documentation
* Summary section of the `migraphx-driver perf` command
* Reduced model compilation time
* Reordered some compiler passes to allow for more fusions
* Preloaded tiles into LDS to improve performance of pointwise transposes
* Exposed the `external_data_path` property in `onnx_options` to set the path from `onnxruntime`
#### Resolved issues
* Fixed a bug with gfx1030 that overwrote `dpp_reduce`.
* Fixed a bug in 1-arg dynamic reshape that created a failure.
* Fixed a bug with `dot_broadcast` and `inner_broadcast` that caused compile failures.
* Fixed a bug where some configs were failing when using exhaustive-tune.
* Fixed the ROCm Install Guide URL.
* Fixed an issue while building a whl package due to an apostrophe.
* Fixed the BERT Squad example requirements file to support different versions of Python.
* Fixed a bug that stopped the Vicuna model from compiling.
* Fixed failures with the verify option of migraphx-driver that would cause the application to exit early.
### **MIOpen** (3.3.0)
#### Added
@@ -1170,7 +1242,7 @@ memory partition modes upon an invalid argument return from memory partition mod
- C++ tests for `memorypartition_read_write` are to be re-enabled in a future ROCm release.
```{note}
See the full [ROCm SMI changelog](https://github.com/ROCm/rocm_smi_lib/blob/rocm-6.3.x/CHANGELOG.md) for more details and examples.
See the full [ROCm SMI changelog](https://github.com/ROCm/rocm_smi_lib/blob/6.3.x/CHANGELOG.md) for more details and examples.
```
### **ROCm Systems Profiler** (0.1.0)

View File

@@ -8,10 +8,7 @@
<!--list of projects for ROCm-->
<project name="ROCK-Kernel-Driver" />
<project name="ROCR-Runtime" />
<project name="ROCT-Thunk-Interface" />
<project name="amdsmi" />
<project name="omniperf" />
<project name="omnitrace" />
<project name="rdc" />
<project name="rocm_bandwidth_test" />
<project name="rocm_smi_lib" />
@@ -21,6 +18,8 @@
<project name="rocprofiler" />
<project name="rocprofiler-register" />
<project name="rocprofiler-sdk" />
<project name="rocprofiler-compute" />
<project name="rocprofiler-systems" />
<project name="roctracer" />
<!--HIP Projects-->
<project name="HIP" />
@@ -42,6 +41,7 @@
<project groups="mathlibs" name="ROCmValidationSuite" />
<project groups="mathlibs" name="Tensile" />
<project groups="mathlibs" name="composable_kernel" />
<project groups="mathlibs" name="hipBLAS-common" />
<project groups="mathlibs" name="hipBLAS" />
<project groups="mathlibs" name="hipBLASLt" />
<project groups="mathlibs" name="hipCUB" />
@@ -57,6 +57,7 @@
<project groups="mathlibs" name="rocALUTION" />
<project groups="mathlibs" name="rocBLAS" />
<project groups="mathlibs" name="rocDecode" />
<project groups="mathlibs" name="rocJPEG" />
<project groups="mathlibs" name="rocPyDecode" />
<project groups="mathlibs" name="rocFFT" />
<project groups="mathlibs" name="rocPRIM" />
@@ -67,6 +68,7 @@
<project groups="mathlibs" name="rocWMMA" />
<project groups="mathlibs" name="rocm-cmake" />
<project groups="mathlibs" name="rpp" />
<project groups="mathlibs" name="TransferBench" />
<!-- Projects for OpenMP-Extras -->
<project name="aomp" path="openmp-extras/aomp" />
<project name="aomp-extras" path="openmp-extras/aomp-extras" />

View File

@@ -34,7 +34,7 @@ ROCm Version,6.3.0,6.2.4,6.2.2,6.2.1,6.2.0, 6.1.2, 6.1.1, 6.1.0, 6.0.2, 6.0.0
Thrust,2.3.2,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.0.1,2.0.1
CUB,2.3.2,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.0.1,2.0.1
,,,,,,,,,,
KFD & USER SPACE [#kfd_support-past-60]_,.. _kfd-userspace-support-compatibility-matrix-past-60:,,,,,,,,,
KMD & USER SPACE [#kfd_support-past-60]_,.. _kfd-userspace-support-compatibility-matrix-past-60:,,,,,,,,,
Tested user space versions,"6.3.x, 6.2.x, 6.1.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.2.x, 6.1.x, 6.0.x, 5.7.x, 5.6.x","6.2.x, 6.1.x, 6.0.x, 5.7.x, 5.6.x"
,,,,,,,,,,
ML & COMPUTER VISION,.. _mllibs-support-compatibility-matrix-past-60:,,,,,,,,,

View File

@@ -61,7 +61,7 @@ compatibility and system requirements.
Thrust,2.3.2,2.2.0,2.1.0
CUB,2.3.2,2.2.0,2.1.0
,,,
KFD & USER SPACE [#kfd_support]_,.. _kfd-userspace-support-compatibility-matrix:,,
KMD & USER SPACE [#kfd_support]_,.. _kfd-userspace-support-compatibility-matrix:,,
Tested user space versions,"6.3.x, 6.2.x, 6.1.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x"
,,,
ML & COMPUTER VISION,.. _mllibs-support-compatibility-matrix:,,
@@ -150,7 +150,7 @@ compatibility and system requirements.
.. [#oracle89] Oracle Linux is supported only on AMD Instinct MI300X.
.. [#mi300_624] **For ROCm 6.2.4** - MI300X (gfx942) is supported on listed operating systems *except* Ubuntu 22.04.5 [6.8 HWE] and Ubuntu 22.04.4 [6.5 HWE].
.. [#mi300_610] **For ROCm 6.1.0** - MI300A (gfx942) is supported on Ubuntu 22.04.4, RHEL 9.4, RHEL 9.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.4.
.. [#kfd_support] ROCm provides forward and backward compatibility between the Kernel Fusion Driver (KFD) and its user space software for +/- 2 releases. These are the compatibility combinations that are currently supported.
.. [#kfd_support] ROCm provides forward and backward compatibility between the AMD Kernel-mode GPU Driver (KMD) and its user space software for +/- 2 releases. These are the compatibility combinations that are currently supported.
.. [#ROCT-rocr] As of ROCm 6.3.0, the ROCT Thunk Interface is now included as part of the ROCr runtime package.
.. _OS-kernel-versions:
@@ -232,5 +232,5 @@ Expand for full historical view of:
.. [#mi300_610-past-60] **For ROCm 6.1.0** - MI300A (gfx942) is supported on Ubuntu 22.04.4, RHEL 9.4, RHEL 9.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.4.
.. [#mi300_602-past-60] **For ROCm 6.0.2** - MI300A (gfx942) is supported on Ubuntu 22.04.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.3.
.. [#mi300_600-past-60] **For ROCm 6.0.0** - MI300A (gfx942) is supported on Ubuntu 22.04.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.3.
.. [#kfd_support-past-60] ROCm provides forward and backward compatibility between the Kernel Fusion Driver (KFD) and its user space software for +/- 2 releases. These are the compatibility combinations that are currently supported.
.. [#kfd_support-past-60] ROCm provides forward and backward compatibility between the AMD Kernel-mode GPU Driver (KMD) and its user space software for +/- 2 releases. These are the compatibility combinations that are currently supported.
.. [#ROCT-rocr-past-60] As of ROCm 6.3.0, the ROCT Thunk Interface is now included as part of the ROCr runtime package.

View File

@@ -1,156 +0,0 @@
.. meta::
:description: How ROCm uses PCIe atomics
:keywords: PCIe, PCIe atomics, atomics, BAR memory, AMD, ROCm
*****************************************************************************
How ROCm uses PCIe atomics
*****************************************************************************
ROCm PCIe feature and overview of BAR memory
================================================================
ROCm is an extension of HSA platform architecture, so it shares the queuing model, memory model,
signaling and synchronization protocols. Platform atomics are integral to perform queuing and
signaling memory operations where there may be multiple-writers across CPU and GPU agents.
The full list of HSA system architecture platform requirements are here:
`HSA Sys Arch Features <http://hsafoundation.com/wp-content/uploads/2021/02/HSA-SysArch-1.2.pdf>`_.
AMD ROCm Software uses the new PCI Express 3.0 (Peripheral Component Interconnect Express [PCIe]
3.0) features for atomic read-modify-write transactions which extends inter-processor synchronization
mechanisms to IO to support the defined set of HSA capabilities needed for queuing and signaling
memory operations.
The new PCIe atomic operations operate as completers for ``CAS`` (Compare and Swap), ``FetchADD``,
``SWAP`` atomics. The atomic operations are initiated by the I/O device which support 32-bit, 64-bit and
128-bit operand which target address have to be naturally aligned to operation sizes.
For ROCm the Platform atomics are used in ROCm in the following ways:
* Update HSA queue's read_dispatch_id: 64 bit atomic add used by the command processor on the
GPU agent to update the packet ID it processed.
* Update HSA queue's write_dispatch_id: 64 bit atomic add used by the CPU and GPU agent to
support multi-writer queue insertions.
* Update HSA Signals -- 64bit atomic ops are used for CPU & GPU synchronization.
The PCIe 3.0 atomic operations feature allows atomic transactions to be requested by, routed through
and completed by PCIe components. Routing and completion does not require software support.
Component support for each is detectable via the Device Capabilities 2 (DevCap2) register. Upstream
bridges need to have atomic operations routing enabled or the atomic operations will fail even though
PCIe endpoint and PCIe I/O devices has the capability to atomic operations.
To do atomic operations routing capability between two or more Root Ports, each associated Root Port
must indicate that capability via the atomic operations routing supported bit in the DevCap2 register.
If your system has a PCIe Express Switch it needs to support atomic operations routing. Atomic
operations requests are permitted only if a component's ``DEVCTL2.ATOMICOP_REQUESTER_ENABLE``
field is set. These requests can only be serviced if the upstream components support atomic operation
completion and/or routing to a component which does. Atomic operations routing support=1, routing
is supported; atomic operations routing support=0, routing is not supported.
An atomic operation is a non-posted transaction supporting 32-bit and 64-bit address formats, there
must be a response for Completion containing the result of the operation. Errors associated with the
operation (uncorrectable error accessing the target location or carrying out the atomic operation) are
signaled to the requester by setting the Completion Status field in the completion descriptor, they are
set to to Completer Abort (CA) or Unsupported Request (UR).
To understand more about how PCIe atomic operations work, see
`PCIe atomics <https://pcisig.com/specifications/pciexpress/specifications/ECN_Atomic_Ops_080417.pdf>`_
`Linux Kernel Patch to pci_enable_atomic_request <https://patchwork.kernel.org/project/linux-pci/patch/1443110390-4080-1-git-send-email-jay@jcornwall.me/>`_
There are also a number of papers which talk about these new capabilities:
* `Atomic Read Modify Write Primitives by Intel <https://www.intel.es/content/dam/doc/white-paper/atomic-read-modify-write-primitives-i-o-devices-paper.pdf>`_
* `PCI express 3 Accelerator White paper by Intel <https://www.intel.sg/content/dam/doc/white-paper/pci-express3-accelerator-white-paper.pdf>`_
* `PCIe Generation 4 Base Specification includes atomic operations <https://astralvx.com/storage/2020/11/PCI_Express_Base_4.0_Rev0.3_February19-2014.pdf>`_
* `Xilinx PCIe Ultrascale White paper <https://docs.xilinx.com/v/u/8OZSA2V1b1LLU2rRCDVGQw>`_
Other I/O devices with PCIe atomics support:
* Mellanox ConnectX-5 InfiniBand Card
* Cray Aries Interconnect
* Xilinx 7 Series Devices
Future bus technology with richer I/O atomics operation Support
* GenZ
New PCIe Endpoints with support beyond AMD Ryzen and EPYC CPU; Intel Haswell or newer CPUs
with PCIe Generation 3.0 support.
* Mellanox Bluefield SOC
* Cavium Thunder X2
In ROCm, we also take advantage of PCIe ID based ordering technology for P2P when the GPU
originates two writes to two different targets:
* Write to another GPU memory
* Write to system memory to indicate transfer complete
They are routed off to different ends of the computer but we want to make sure the write to system
memory to indicate transfer complete occurs AFTER P2P write to GPU has complete.
BAR memory overview
----------------------------------------------------------------------------------------------------
On a Xeon E5 based system in the BIOS we can turn on above 4GB PCIe addressing, if so he need to set
memory-mapped input/output (MMIO) base address (MMIOH base) and range (MMIO high size) in the BIOS.
In the Supermicro system in the system bios you need to see the following
* Advanced->PCIe/PCI/PnP configuration-\> Above 4G Decoding = Enabled
* Advanced->PCIe/PCI/PnP Configuration-\>MMIOH Base = 512G
* Advanced->PCIe/PCI/PnP Configuration-\>MMIO High Size = 256G
When we support Large Bar Capability there is a Large Bar VBIOS which also disable the IO bar.
For GFX9 and Vega10 which have Physical Address up 44 bit and 48 bit Virtual address.
* BAR0-1 registers: 64bit, prefetchable, GPU memory. 8GB or 16GB depending on Vega10 SKU. Must
be placed < 2^44 to support P2P access from other Vega10.
* BAR2-3 registers: 64bit, prefetchable, Doorbell. Must be placed \< 2^44 to support P2P access from
other Vega10.
* BAR4 register: Optional, not a boot device.
* BAR5 register: 32bit, non-prefetchable, MMIO. Must be placed \< 4GB.
Here is how our base address register (BAR) works on GFX 8 GPUs with 40 bit Physical Address Limit ::
11:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Fiji [Radeon R9 FURY / NANO
Series] (rev c1)
Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Device 0b35
Flags: bus master, fast devsel, latency 0, IRQ 119
Memory at bf40000000 (64-bit, prefetchable) [size=256M]
Memory at bf50000000 (64-bit, prefetchable) [size=2M]
I/O ports at 3000 [size=256]
Memory at c7400000 (32-bit, non-prefetchable) [size=256K]
Expansion ROM at c7440000 [disabled] [size=128K]
Legend:
1 : GPU Frame Buffer BAR -- In this example it happens to be 256M, but typically this will be size of the
GPU memory (typically 4GB+). This BAR has to be placed \< 2^40 to allow peer-to-peer access from
other GFX8 AMD GPUs. For GFX9 (Vega GPU) the BAR has to be placed \< 2^44 to allow peer-to-peer
access from other GFX9 AMD GPUs.
2 : Doorbell BAR -- The size of the BAR is typically will be \< 10MB (currently fixed at 2MB) for this
generation GPUs. This BAR has to be placed \< 2^40 to allow peer-to-peer access from other current
generation AMD GPUs.
3 : IO BAR -- This is for legacy VGA and boot device support, but since this the GPUs in this project are
not VGA devices (headless), this is not a concern even if the SBIOS does not setup.
4 : MMIO BAR -- This is required for the AMD Driver SW to access the configuration registers. Since the
reminder of the BAR available is only 1 DWORD (32bit), this is placed \< 4GB. This is fixed at 256KB.
5 : Expansion ROM -- This is required for the AMD Driver SW to access the GPU video-bios. This is
currently fixed at 128KB.
For more information, you can review
`Overview of Changes to PCI Express 3.0 <https://www.mindshare.com/files/resources/PCIe%203-0.pdf>`_.

View File

@@ -1,241 +0,0 @@
<head>
<meta charset="UTF-8">
<meta name="description" content="GPU memory">
<meta name="keywords" content="GPU memory, VRAM, video random access memory, pageable
memory, pinned memory, managed memory, AMD, ROCm">
</head>
# GPU memory
For the HIP reference documentation, see:
* {doc}`hip:doxygen/html/group___memory`
* {doc}`hip:doxygen/html/group___memory_m`
Host memory exists on the host (e.g. CPU) of the machine in random access memory (RAM).
Device memory exists on the device (e.g. GPU) of the machine in video random access memory (VRAM).
Recent architectures use graphics double data rate (GDDR) synchronous dynamic random-access memory (SDRAM)such as GDDR6, or high-bandwidth memory (HBM) such as HBM2e.
## Memory allocation
Memory can be allocated in two ways: pageable memory, and pinned memory.
The following API calls with result in these allocations:
| API | Data location | Allocation |
|--------------------|---------------|------------|
| System allocated | Host | Pageable |
| `hipMallocManaged` | Host | Managed |
| `hipHostMalloc` | Host | Pinned |
| `hipMalloc` | Device | Pinned |
:::{tip}
`hipMalloc` and `hipFree` are blocking calls, however, HIP recently added non-blocking versions `hipMallocAsync` and `hipFreeAsync` which take in a stream as an additional argument.
:::
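
A rough illustration of the stream-ordered variants mentioned in the tip above (a minimal sketch; error handling omitted):

```cpp
#include <hip/hip_runtime.h>

int main() {
  hipStream_t stream;
  hipStreamCreate(&stream);

  // Stream-ordered allocation and free: both calls are enqueued on the stream
  // instead of blocking the host like hipMalloc/hipFree.
  void* d_buf = nullptr;
  hipMallocAsync(&d_buf, 1 << 20, stream);
  // ... enqueue kernels or copies that use d_buf on the same stream ...
  hipFreeAsync(d_buf, stream);

  hipStreamSynchronize(stream);
  hipStreamDestroy(stream);
  return 0;
}
```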
### Pageable memory
Pageable memory is usually gotten when calling `malloc` or `new` in a C++ application.
It is unique in that it exists on "pages" (blocks of memory), which can be migrated to other memory storage.
For example, migrating memory between CPU sockets on a motherboard, or a system that runs out of space in RAM and starts dumping pages of RAM into the swap partition of your hard drive.
### Pinned memory
Pinned memory (or page-locked memory, or non-pageable memory) is host memory that is mapped into the address space of all GPUs, meaning that the pointer can be used on both host and device.
Accessing host-resident pinned memory in device kernels is generally not recommended for performance, as it can force the data to traverse the host-device interconnect (e.g. PCIe), which is much slower than the on-device bandwidth (>40x on MI200).
Pinned host memory can be allocated with one of two types of coherence support:
:::{note}
In HIP, pinned memory allocations are coherent by default (`hipHostMallocDefault`).
There are additional pinned memory flags (e.g. `hipHostMallocMapped` and `hipHostMallocPortable`).
On MI200 these options do not impact performance.
<!-- TODO: link to programming_manual#memory-allocation-flags -->
For more information, see the section *memory allocation flags* in the HIP Programming Guide: {doc}`hip:how-to/programming_manual`.
:::
Much like how a process can be locked to a CPU core by setting affinity, a pinned memory allocator does this with the memory storage system.
On multi-socket systems it is important to ensure that pinned memory is located on the same socket as the owning process, or else each cache line will be moved through the CPU-CPU interconnect, thereby increasing latency and potentially decreasing bandwidth.
In practice, pinned memory is used to improve transfer times between host and device.
For transfer operations, such as `hipMemcpy` or `hipMemcpyAsync`, using pinned memory instead of pageable memory on host can lead to a ~3x improvement in bandwidth.
:::{tip}
If the application needs to move data back and forth between device and host (separate allocations), use pinned memory on the host side.
:::
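
A minimal HIP sketch of the pageable-versus-pinned distinction described above; buffer sizes are illustrative and error handling is omitted:

```cpp
#include <hip/hip_runtime.h>
#include <vector>

int main() {
  const size_t n = 1 << 24;                       // ~64 MiB of floats
  const size_t bytes = n * sizeof(float);

  std::vector<float> pageable(n, 1.0f);           // pageable host memory (system allocator)

  float* pinned = nullptr;
  hipHostMalloc((void**)&pinned, bytes, hipHostMallocDefault);  // pinned host memory

  float* d_buf = nullptr;
  hipMalloc((void**)&d_buf, bytes);               // device memory

  // Same transfer, two different host sources; the pinned path typically runs
  // much closer to the interconnect's peak bandwidth.
  hipMemcpy(d_buf, pageable.data(), bytes, hipMemcpyHostToDevice);
  hipMemcpy(d_buf, pinned, bytes, hipMemcpyHostToDevice);

  hipFree(d_buf);
  hipHostFree(pinned);
  return 0;
}
```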
### Managed memory
Managed memory refers to universally addressable, or unified memory available on the MI200 series of GPUs.
Much like pinned memory, managed memory shares a pointer between host and device and (by default) supports fine-grained coherence, however, managed memory can also automatically migrate pages between host and device.
The allocation will be managed by AMD GPU driver using the Linux HMM (Heterogeneous Memory Management) mechanism.
If heterogenous memory management (HMM) is not available, then `hipMallocManaged` will default back to using system memory and will act like pinned host memory.
Other managed memory API calls will have undefined behavior.
It is therefore recommended to check for managed memory capability with: `hipDeviceGetAttribute` and `hipDeviceAttributeManagedMemory`.
HIP supports additional calls that work with page migration:
* `hipMemAdvise`
* `hipMemPrefetchAsync`
:::{tip}
If the application needs to use data on both host and device regularly, does not want to deal with separate allocations, and is not worried about maxing out the VRAM on MI200 GPUs (64 GB per GCD), use managed memory.
:::
:::{tip}
If managed memory performance is poor, check to see if managed memory is supported on your system and if page migration (XNACK) is enabled.
:::
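
A minimal sketch of the capability check and page-migration hints mentioned above (error handling omitted; the advise/prefetch calls are optional hints):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
  int device = 0, managed = 0;
  hipDeviceGetAttribute(&managed, hipDeviceAttributeManagedMemory, device);
  if (!managed) {
    std::printf("Managed memory not supported; hipMallocManaged falls back to host memory.\n");
    return 0;
  }

  const size_t n = 1 << 20;
  float* data = nullptr;
  hipMallocManaged((void**)&data, n * sizeof(float));   // one pointer, usable on host and device

  for (size_t i = 0; i < n; ++i) data[i] = 1.0f;        // first touch on the host

  // Optional hints that work with page migration, as listed above.
  hipMemAdvise(data, n * sizeof(float), hipMemAdviseSetPreferredLocation, device);
  hipMemPrefetchAsync(data, n * sizeof(float), device, nullptr);
  hipDeviceSynchronize();

  hipFree(data);
  return 0;
}
```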
## Access behavior
Memory allocations for GPUs behave as follow:
| API | Data location | Host access | Device access |
|--------------------|---------------|--------------|----------------------|
| System allocated | Host | Local access | Unhandled page fault |
| `hipMallocManaged` | Host | Local access | Zero-copy |
| `hipHostMalloc` | Host | Local access | Zero-copy* |
| `hipMalloc` | Device | Zero-copy | Local access |
Zero-copy accesses happen over the Infinity Fabric interconnect or PCI-E lanes on discrete GPUs.
:::{note}
While `hipHostMalloc` allocated memory is accessible by a device, the host pointer must be converted to a device pointer with `hipHostGetDevicePointer`.
Memory allocated through standard system allocators such as `malloc`, can be accessed a device by registering the memory via `hipHostRegister`.
The device pointer to be used in kernels can be retrieved with `hipHostGetDevicePointer`.
Registered memory is treated like `hipHostMalloc` and will have similar performance.
On devices that support and have [](#xnack) enabled, such as the MI250X, `hipHostRegister` is not required as memory accesses are handled via automatic page migration.
:::
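
A minimal sketch of registering a system allocation and retrieving its device pointer, as described in the note above (error handling omitted):

```cpp
#include <hip/hip_runtime.h>
#include <cstdlib>

int main() {
  const size_t bytes = 1 << 20;
  void* host_buf = std::malloc(bytes);                 // ordinary system allocation

  // Map the allocation for GPU access, then fetch the device-side pointer
  // to pass into kernels; accesses from the device are zero-copy.
  hipHostRegister(host_buf, bytes, hipHostRegisterDefault);

  void* dev_view = nullptr;
  hipHostGetDevicePointer(&dev_view, host_buf, 0);

  hipHostUnregister(host_buf);
  std::free(host_buf);
  return 0;
}
```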
### XNACK
Normally, host and device memory are separate and data has to be transferred manually via `hipMemcpy`.
On a subset of GPUs, such as the MI200, there is an option to automatically migrate pages of memory between host and device.
This is important for managed memory, where the locality of the data is important for performance.
Depending on the system, page migration may be disabled by default in which case managed memory will act like pinned host memory and suffer degraded performance.
*XNACK* describes the GPUs ability to retry memory accesses that failed due a page fault (which normally would lead to a memory access error), and instead retrieve the missing page.
This also affects memory allocated by the system as indicated by the following table:
| API | Data location | Host after device access | Device after host access |
|--------------------|---------------|--------------------------|--------------------------|
| System allocated | Host | Migrate page to host | Migrate page to device |
| `hipMallocManaged` | Host | Migrate page to host | Migrate page to device |
| `hipHostMalloc` | Host | Local access | Zero-copy |
| `hipMalloc` | Device | Zero-copy | Local access |
To check if page migration is available on a platform, use `rocminfo`:
```sh
$ rocminfo | grep xnack
Name: amdgcn-amd-amdhsa--gfx90a:sramecc+:xnack-
```
Here, `xnack-` means that XNACK is available but is disabled by default.
Turning on XNACK by setting the environment variable `HSA_XNACK=1` and gives the expected result, `xnack+`:
```sh
$ HSA_XNACK=1 rocminfo | grep xnack
Name: amdgcn-amd-amdhsa--gfx90a:sramecc+:xnack+
```
`hipcc`by default will generate code that runs correctly with both XNACK enabled or disabled.
Setting the `--offload-arch=`-option with `xnack+` or `xnack-` forces code to be only run with XNACK enabled or disabled respectively.
```sh
# Compiled kernels will run regardless if XNACK is enabled or is disabled.
hipcc --offload-arch=gfx90a
# Compiled kernels will only be run if XNACK is enabled with XNACK=1.
hipcc --offload-arch=gfx90a:xnack+
# Compiled kernels will only be run if XNACK is disabled with XNACK=0.
hipcc --offload-arch=gfx90a:xnack-
```
:::{tip}
If you want to make use of page migration, use managed memory. While pageable memory will migrate correctly, it is not a portable solution and can have performance issues if the accessed data isn't page aligned.
:::
### Coherence
* *Coarse-grained coherence* means that memory is only considered up to date at kernel boundaries, which can be enforced through `hipDeviceSynchronize`, `hipStreamSynchronize`, or any blocking operation that acts on the null stream (e.g. `hipMemcpy`).
For example, cacheable memory is a type of coarse-grained memory where an up-to-date copy of the data can be stored elsewhere (e.g. in an L2 cache).
* *Fine-grained coherence* means the coherence is supported while a CPU/GPU kernel is running.
This can be useful if both host and device are operating on the same dataspace using system-scope atomic operations (e.g. updating an error code or flag to a buffer).
Fine-grained memory implies that up-to-date data may be made visible to others regardless of kernel boundaries as discussed above.
| API | Flag | Coherence |
|-------------------------|------------------------------|----------------|
| `hipHostMalloc` | `hipHostMallocDefault` | Fine-grained |
| `hipHostMalloc` | `hipHostMallocNonCoherent` | Coarse-grained |
| API | Flag | Coherence |
|-------------------------|------------------------------|----------------|
| `hipExtMallocWithFlags` | `hipDeviceMallocDefault` | Coarse-grained |
| `hipExtMallocWithFlags` | `hipDeviceMallocFinegrained` | Fine-grained |
| API | `hipMemAdvise` argument | Coherence |
|-------------------------|------------------------------|----------------|
| `hipMallocManaged` | | Fine-grained |
| `hipMallocManaged` | `hipMemAdviseSetCoarseGrain` | Coarse-grained |
| `malloc` | | Fine-grained |
| `malloc` | `hipMemAdviseSetCoarseGrain` | Coarse-grained |
:::{tip}
Try to design your algorithms to avoid host-device memory coherence (e.g. system scope atomics). While it can be a useful feature in very specific cases, it is not supported on all systems, and can negatively impact performance by introducing the host-device interconnect bottleneck.
:::
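
A minimal sketch of requesting the coherence behaviors from the tables above via allocation flags (flag names are taken from those tables; error handling omitted):

```cpp
#include <hip/hip_runtime.h>

int main() {
  const size_t bytes = 1 << 20;

  // Coarse-grained pinned host memory: contents are only guaranteed visible
  // at kernel boundaries or other synchronization points.
  void* coarse_host = nullptr;
  hipHostMalloc(&coarse_host, bytes, hipHostMallocNonCoherent);

  // Fine-grained device memory via the extended allocator, suitable for
  // system-scope atomics shared between host and device.
  void* fine_dev = nullptr;
  hipExtMallocWithFlags(&fine_dev, bytes, hipDeviceMallocFinegrained);

  hipFree(fine_dev);
  hipHostFree(coarse_host);
  return 0;
}
```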
The availability of fine- and coarse-grained memory pools can be checked with `rocminfo`:
```sh
$ rocminfo
...
*******
Agent 1
*******
Name: AMD EPYC 7742 64-Core Processor
...
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
...
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
...
*******
Agent 9
*******
Name: gfx90a
...
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
...
```
## System direct memory access
In most cases, the default behavior for HIP in transferring data from a pinned host allocation to device will run at the limit of the interconnect.
However, there are certain cases where the interconnect is not the bottleneck.
The primary way to transfer data onto and off of a GPU, such as the MI200, is to use the onboard System Direct Memory Access engine, which is used to feed blocks of memory to the off-device interconnect (either GPU-CPU or GPU-GPU).
Each GCD has a separate SDMA engine for host-to-device and device-to-host memory transfers.
Importantly, SDMA engines are separate from the computing infrastructure, meaning that memory transfers to and from a device will not impact kernel compute performance, though they do impact memory bandwidth to a limited extent.
The SDMA engines are mainly tuned for PCIe-4.0 x16, which means they are designed to operate at bandwidths up to 32 GB/s.
:::{note}
An important feature of the MI250X platform is the Infinity Fabric™ interconnect between host and device.
The Infinity Fabric interconnect supports improved performance over standard PCIe-4.0 (usually ~50% more bandwidth); however, since the SDMA engine does not run at this speed, it will not max out the bandwidth of the faster interconnect.
:::
The bandwidth limitation can be countered by bypassing the SDMA engine and replacing it with a type of copy kernel known as a "blit" kernel.
Blit kernels will use the compute units on the GPU, thereby consuming compute resources, which may not always be beneficial.
The easiest way to enable blit kernels is to set the environment variable `HSA_ENABLE_SDMA=0`, which disables the SDMA engine.
On systems where the GPU uses a PCIe interconnect instead of an Infinity Fabric interconnect, blit kernels will not impact bandwidth, but will still consume compute resources.
The choice between SDMA engines and blit kernels also applies to MPI data transfers and GPU-GPU transfers.
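To see the difference between the two copy paths on your own system, you can time a pinned host-to-device copy and run it once normally and once with `HSA_ENABLE_SDMA=0`. The sketch below uses PyTorch for ROCm purely as a convenient way to drive the transfer; this is an illustration under that assumption, not an official benchmark (any HIP program that times `hipMemcpyAsync` from `hipHostMalloc` memory would show the same effect).

```python
# Minimal host-to-device bandwidth check (sketch).
# Run once as-is and once with HSA_ENABLE_SDMA=0 to compare SDMA vs. blit copies.
import time
import torch

size_bytes = 1 << 30  # 1 GiB pinned host buffer
src = torch.empty(size_bytes, dtype=torch.uint8, pin_memory=True)
dst = torch.empty(size_bytes, dtype=torch.uint8, device="cuda")  # "cuda" maps to the HIP device on ROCm

torch.cuda.synchronize()
start = time.perf_counter()
dst.copy_(src, non_blocking=True)  # pinned H2D copy: SDMA by default, blit if SDMA is disabled
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

print(f"Host-to-device bandwidth: {size_bytes / elapsed / 1e9:.2f} GB/s")
```

On an Infinity Fabric-attached GPU, the blit path may report noticeably higher bandwidth than the SDMA path, at the cost of occupying compute units during the copy.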

View File

@@ -0,0 +1,57 @@
.. meta::
:description: How ROCm uses PCIe atomics
:keywords: PCIe, PCIe atomics, atomics, Atomic operations, AMD, ROCm
*****************************************************************************
How ROCm uses PCIe atomics
*****************************************************************************
AMD ROCm is an extension of the Heterogeneous System Architecture (HSA). To meet the requirements of an HSA-compliant system, ROCm supports queuing models, memory models, and signaling and synchronization protocols. ROCm can perform atomic Read-Modify-Write (RMW) transactions that extend inter-processor synchronization mechanisms to Input/Output (I/O) devices starting from Peripheral Component Interconnect Express 3.0 (PCIe™ 3.0). It supports the defined HSA capabilities for queuing and signaling memory operations. To learn more about the requirements of an HSA-compliant system, see the
`HSA Platform System Architecture Specification <http://hsafoundation.com/wp-content/uploads/2021/02/HSA-SysArch-1.2.pdf>`_.
ROCm uses platform atomics to perform memory operations like queuing, signaling, and synchronization across multiple CPU, GPU agents, and I/O devices. Platform atomics ensure that atomic operations run synchronously, without interruptions or conflicts, across multiple shared resources.
Platform atomics in ROCm
==============================
Platform atomics enable the set of atomic operations that perform RMW actions across multiple processors, devices, and memory locations so that they run synchronously without interruption. An atomic operation is a sequence of computing instructions run as a single, indivisible unit. These instructions are completed in their entirety without any interruptions. If the instructions can't be completed as a unit without interruption, none of the instructions are run. These operations support 32-bit and 64-bit address formats.
Some of the operations for which ROCm uses platform atomics are:
* Update the HSA queue's ``read_dispatch_id``. The command processor on the GPU agent uses a 64-bit atomic add operation to update the ID of the packet it has processed.
* Update the HSA queue's ``write_dispatch_id``. The CPU and GPU agents use a 64-bit atomic add operation to support multi-writer queue insertions.
* Update HSA signals. A 64-bit atomic operation is used for CPU and GPU synchronization.
PCIe for atomic operations
----------------------------
ROCm requires CPUs that support PCIe atomics. Similarly, all connected I/O devices should also support PCIe atomics for optimum compatibility. PCIe supports the ``CAS`` (Compare and Swap), ``FetchADD``, and ``SWAP`` atomic operations across multiple resources. These atomic operations are initiated by the I/O devices that support 32-bit, 64-bit, and 128-bit operands. Likewise, the target memory address where these atomic operations are performed should also be aligned to the size of the operand. This alignment ensures that the operations are performed efficiently and correctly without failure.
When an atomic operation is successful, the requester receives a response of completion along with the operation result. However, any errors associated with the operation are signaled to the requester by updating the Completion Status field. Issues accessing the target location or running the atomic operation are common errors. Depending upon the error, the Completion Status field is updated to Completer Abort (CA) or Unsupported Request (UR). The field is present in the Completion Descriptor.
To learn more about the industry standards and specifications of PCIe, see `PCI-SIG Specification <https://pcisig.com/specifications>`_.
To learn more about PCIe and its capabilities, consult the following white papers:
* `Atomic Read Modify Write Primitives by Intel <https://www.intel.es/content/dam/doc/white-paper/atomic-read-modify-write-primitives-i-o-devices-paper.pdf>`_
* `PCI Express 3 Accelerator White paper by Intel <https://www.intel.sg/content/dam/doc/white-paper/pci-express3-accelerator-white-paper.pdf>`_
* `PCIe Generation 4 Base Specification includes atomic operations <https://astralvx.com/storage/2020/11/PCI_Express_Base_4.0_Rev0.3_February19-2014.pdf>`_
* `Xilinx PCIe Ultrascale White paper <https://docs.xilinx.com/v/u/8OZSA2V1b1LLU2rRCDVGQw>`_
Working with PCIe 3.0 in ROCm
-------------------------------
Starting with PCIe 3.0, atomic operations can be requested, routed through, and completed by PCIe components. Routing and completion do not require software support. Support for each of these roles can be identified through the Device Capabilities 2 (DevCap2) register. Upstream
bridges must have atomic operations routing enabled; otherwise, atomic operations will fail even if the
PCIe endpoint and PCIe I/O devices can perform atomic operations.
If your system uses PCIe switches to connect and enable communication between multiple PCIe components, the switches must also support atomic operations routing.
To enable atomic operations routing between multiple root ports, each root port must support atomic operation routing. This capability can be identified from the atomic operations routing support bit in the DevCap2 register. If the bit has a value of 1, routing is supported. Atomic operation requests are permitted only if a component's ``DEVCTL2.ATOMICOP_REQUESTER_ENABLE``
field is set. These requests can only be serviced if the upstream components also support atomic operation completion or if the requests can be routed to a component that supports atomic operation completion.
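On Linux, one practical way to inspect these bits is to read the DevCap2 and DevCtl2 fields that ``lspci`` prints for each device. The following Python sketch simply filters that output; it assumes the ``pciutils`` package is installed and that a recent ``lspci`` version reports the ``AtomicOpsCap`` and ``AtomicOpsCtl`` fields (this is an illustration, not an official ROCm tool).

.. code-block:: python

   # Sketch: list PCIe devices that report AtomicOp capability or control bits.
   # Run with sufficient privileges so that lspci can read the extended capabilities.
   import subprocess

   output = subprocess.run(
       ["lspci", "-vvv"], capture_output=True, text=True, check=False
   ).stdout

   device = None
   for line in output.splitlines():
       if line and not line[0].isspace():
           device = line.strip()  # a new PCI device header starts at column 0
       elif "AtomicOpsCap" in line or "AtomicOpsCtl" in line:
           print(device)
           print("    " + line.strip())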
ROCm uses the PCIe-ID-based ordering technology for peer-to-peer (P2P) data transmission. PCIe-ID-based ordering technology is used when the GPU initiates multiple write operations to different memory locations.
For more information on changes implemented in PCIe 3.0, see `Overview of Changes to PCI Express 3.0 <https://www.mindshare.com/files/resources/PCIe%203-0.pdf>`_.

View File

@@ -43,6 +43,7 @@ article_pages = [
{"file": "how-to/rocm-for-ai/index", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/install", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/train-a-model", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/accelerate-training", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/deploy-your-model", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/hugging-face-models", "os": ["linux"]},
{"file": "how-to/rocm-for-hpc/index", "os": ["linux"]},

View File

@@ -135,11 +135,13 @@ Installing vLLM
{"text":["What is AMD Instinct?\nAmd Instinct is a brand new line of high-performance computing (HPC) processors from Advanced Micro Devices (AMD). These processors are designed to deliver unparalleled performance for HPC workloads, including scientific simulations, data analytics, and machine learning.\nThe Instinct lineup includes a range of processors, from the entry-level Inst"]}
Refer to :ref:`mi300x-vllm-optimization` for performance optimization tips.
.. seealso::
ROCm provides a prebuilt optimized Docker image for validating the performance of LLM inference with vLLM
on the MI300X accelerator. The Docker image includes ROCm, vLLM, PyTorch, and tuning files in the CSV
format. For more information, see :doc:`/how-to/performance-validation/mi300x/vllm-benchmark`.
See :ref:`mi300x-vllm-optimization` for performance optimization tips.
ROCm provides a prebuilt optimized Docker image for validating the performance of LLM inference with vLLM
on the MI300X accelerator. The Docker image includes ROCm, vLLM, PyTorch, and tuning files in CSV
format. For more information, see :doc:`/how-to/performance-validation/mi300x/vllm-benchmark`.
.. _fine-tuning-llms-tgi:

View File

@@ -16,6 +16,8 @@ In this guide, you'll learn about:
- :doc:`Installing ROCm and machine learning frameworks <install>`
- :doc:`Scaling model training <scale-model-training>`
- :doc:`Training a model <train-a-model>`
- :doc:`Running models from Hugging Face <hugging-face-models>`

View File

@@ -0,0 +1,135 @@
.. meta::
:description: How to scale and accelerate model training
:keywords: ROCm, AI, LLM, train, fine-tune, deploy, FSDP, DeepSpeed, LLaMA, tutorial
**********************
Scaling model training
**********************
Training a large-scale model like OpenAI GPT-2 or Meta Llama 2 70B presents a fundamental challenge: no single GPU
or accelerator can store and process all of the model's parameters on its own during training. PyTorch
addresses this computational constraint through its distributed training frameworks.
.. _rocm-for-ai-pytorch-distributed:
PyTorch distributed
===================
Features in ``torch.distributed`` are categorized into three main components:
- `Distributed data-parallel training
<https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`_ (DDP)
- `RPC-Based distributed training <https://pytorch.org/docs/stable/rpc.html>`_ (RPC)
- `Collective communication <https://pytorch.org/docs/stable/distributed.html>`_
In this topic, the focus is on the distributed data-parallelism strategy as it's the most popular. To get started with DDP,
you need to first understand how to coordinate the model and its training data across multiple accelerators or GPUs.
The DDP workflow on multiple accelerators or GPUs is as follows:
#. Split the current global training batch into small local batches on each GPU. For instance, if you have 8 GPUs and
the global batch is set at 32 samples, each of the 8 GPUs will have a local batch size of 4 samples.
#. Copy the model to every device so each can process its local batches independently.
#. Run a forward pass, then a backward pass, and output the gradient of the weights with respect to the loss of the
model for that local batch. This happens in parallel on multiple devices.
#. Synchronize the local gradients computed by each device and combine them to update the model weights. The updated
weights are then redistributed to each device.
In DDP training, each process or worker owns a replica of the model and processes a batch of data, and then the reducer uses
``allreduce`` to sum up gradients over different workers.
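The workflow above can be sketched in a few lines of PyTorch. The model, data, and hyperparameters below are
placeholders, and the script is assumed to be launched with ``torchrun --nproc_per_node=<num_gpus>`` so that each
process receives its own rank and device.

.. code-block:: python

   # Minimal DDP sketch: one process per GPU, launched with torchrun.
   import os
   import torch
   import torch.distributed as dist
   from torch.nn.parallel import DistributedDataParallel as DDP

   dist.init_process_group(backend="nccl")      # RCCL provides the NCCL API on ROCm
   local_rank = int(os.environ["LOCAL_RANK"])
   torch.cuda.set_device(local_rank)

   model = torch.nn.Linear(1024, 1024).cuda(local_rank)   # placeholder model
   model = DDP(model, device_ids=[local_rank])
   optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

   for _ in range(10):                          # placeholder training loop
       inputs = torch.randn(32, 1024, device=local_rank)  # local batch for this rank
       loss = model(inputs).sum()
       loss.backward()                          # gradients are all-reduced across ranks here
       optimizer.step()
       optimizer.zero_grad()

   dist.destroy_process_group()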
See the following developer blogs for more in-depth explanations and examples.
* `Multi GPU training with DDP — PyTorch Tutorials <https://pytorch.org/tutorials/beginner/ddp_series_multigpu.html>`_
* `Building a decoder transformer model on AMD GPUs — ROCm Blogs
<https://rocm.blogs.amd.com/artificial-intelligence/decoder-transformer/README.html#distributed-training-on-multiple-gpus>`_
.. _rocm-for-ai-pytorch-fsdp:
PyTorch FSDP
------------
As noted in :ref:`PyTorch distributed <rocm-for-ai-pytorch-distributed>`, in DDP the model weights and optimizer states
are replicated across all workers. Fully Sharded Data Parallel (FSDP) is a type of data parallelism that shards
model parameters, optimizer states, and gradients across DDP ranks.
When training with FSDP, the GPU memory footprint is smaller than when training with DDP across all workers. This makes
training some very large models feasible by allowing larger models or batch sizes to fit on-device. However, this
comes with the cost of increased communication volume. The communication overhead is reduced by internal optimizations
like overlapping communication and computation.
For a high-level overview of how FSDP works, review `Getting started with Fully Sharded Data Parallel
<https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html#how-fsdp-works>`_.
For detailed training steps, see `PyTorch FSDP examples
<https://github.com/pytorch/examples/tree/main/distributed/FSDP>`_.
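Wrapping a model in FSDP follows the same launch pattern as DDP; only the wrapper changes. The snippet below is a
minimal sketch with a placeholder model and default sharding settings, assuming a recent PyTorch release.

.. code-block:: python

   # Minimal FSDP sketch: shard parameters, gradients, and optimizer state across ranks.
   import os
   import torch
   import torch.distributed as dist
   from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

   dist.init_process_group(backend="nccl")
   local_rank = int(os.environ["LOCAL_RANK"])
   torch.cuda.set_device(local_rank)

   model = torch.nn.Transformer().cuda(local_rank)  # placeholder model
   model = FSDP(model)                              # parameters are sharded across ranks
   optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)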
.. _rocm-for-ai-deepspeed:
DeepSpeed
---------
`DeepSpeed <https://deepspeed.ai>`_ offers system innovations that make large-scale deep learning training effective,
efficient, and easy to use. Innovations such as ZeRO, 3D-Parallelism, DeepSpeed-MoE, ZeRO-Infinity, and so on fall under
the training pillar.
See `Pre-training a large language model with Megatron-DeepSpeed on multiple AMD GPUs
<https://rocm.blogs.amd.com/artificial-intelligence/megatron-deepspeed-pretrain/README.html>`_ for a detailed example of
training with DeepSpeed on an AMD accelerator or GPU.
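As a rough sketch, a model can be handed to DeepSpeed through ``deepspeed.initialize`` with a ZeRO configuration.
The model and configuration values below are placeholders; see the DeepSpeed documentation for the full option set.

.. code-block:: python

   # Minimal DeepSpeed sketch: ZeRO stage 2 with a placeholder model and configuration.
   import torch
   import deepspeed

   ds_config = {
       "train_micro_batch_size_per_gpu": 4,
       "zero_optimization": {"stage": 2},
       "bf16": {"enabled": True},
   }

   model = torch.nn.Linear(1024, 1024)          # placeholder model
   model_engine, optimizer, _, _ = deepspeed.initialize(
       model=model,
       model_parameters=model.parameters(),
       config=ds_config,
   )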
.. _rocm-for-ai-automatic-mixed-precision:
Automatic mixed precision (AMP)
-------------------------------
As models increase in size, so do the time and memory needed to train them; their cost also increases. Any measure we
can take to reduce training time and memory usage through `automatic mixed precision
<https://pytorch.org/docs/stable/amp.html>`_ (AMP) is highly beneficial for most use cases.
See `Automatic mixed precision in PyTorch using AMD GPUs — ROCm Blogs
<https://rocm.blogs.amd.com/artificial-intelligence/automatic-mixed-precision/README.html#automatic-mixed-precision-in-pytorch-using-amd-gpus>`_
for more information about running AMP on an AMD accelerator.
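A minimal AMP training step looks like the following sketch, where the forward pass runs under ``autocast`` and the
loss is scaled to avoid underflow in reduced precision. The model and data are placeholders.

.. code-block:: python

   # Minimal AMP sketch: mixed-precision forward pass with gradient scaling.
   import torch

   model = torch.nn.Linear(1024, 1024).cuda()   # placeholder model
   optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
   scaler = torch.cuda.amp.GradScaler()

   for _ in range(10):                          # placeholder training loop
       inputs = torch.randn(32, 1024, device="cuda")
       with torch.cuda.amp.autocast():
           loss = model(inputs).sum()
       scaler.scale(loss).backward()            # scale the loss before backward
       scaler.step(optimizer)
       scaler.update()
       optimizer.zero_grad()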
.. _rocm-for-ai-fine-tune:
Fine-tuning your model
======================
ROCm supports multiple techniques for :ref:`optimizing fine-tuning <fine-tuning-llms-concept-optimizations>`, for
example, LoRA, QLoRA, PEFT, and FSDP.
Learn more about challenges and solutions for model fine-tuning in :doc:`../llm-fine-tuning-optimization/index`.
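As a rough illustration of one of these techniques, the following sketch attaches LoRA adapters to a Hugging Face
causal language model using the ``peft`` library. The model name and LoRA hyperparameters are placeholders; access to
the gated Llama repository and the ``transformers`` and ``peft`` packages are assumed.

.. code-block:: python

   # Minimal LoRA sketch with Hugging Face PEFT (placeholder model and settings).
   from transformers import AutoModelForCausalLM
   from peft import LoraConfig, get_peft_model

   model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
   lora_config = LoraConfig(
       r=8,                                  # rank of the LoRA update matrices
       lora_alpha=16,
       target_modules=["q_proj", "v_proj"],  # attention projections to adapt
       lora_dropout=0.05,
       task_type="CAUSAL_LM",
   )
   model = get_peft_model(model, lora_config)
   model.print_trainable_parameters()        # only the adapter weights are trainable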
The following developer blogs showcase examples of fine-tuning a model on an AMD accelerator or GPU.
* Fine-tuning Llama2 with LoRA
* `Fine-tune Llama 2 with LoRA: Customizing a large language model for question-answering
<https://rocm.blogs.amd.com/artificial-intelligence/llama2-lora/README.html>`_
* Fine-tuning Llama2 with QLoRA
* `Enhancing LLM accessibility: A deep dive into QLoRA through fine-tuning Llama 2 on a single AMD GPU
<https://rocm.blogs.amd.com/artificial-intelligence/llama2-Qlora/README.html>`_
* Fine-tuning a BERT-based LLM for a text classification task using JAX
* `LLM distributed supervised fine-tuning with JAX
<https://rocm.blogs.amd.com/artificial-intelligence/distributed-sft-jax/README.html>`_
* Fine-tuning StarCoder using PEFT
* `Instruction fine-tuning of StarCoder with PEFT on multiple AMD GPUs
<https://rocm.blogs.amd.com/artificial-intelligence/starcoder-fine-tune/README.html>`_
* Recipes for fine-tuning Llama2 and 3 with ``llama-recipes``
* `meta-llama/llama-recipes: Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover
single/multi-node GPUs <https://github.com/meta-llama/llama-recipes/tree/main/recipes/quickstart/finetuning>`_

View File

@@ -1,140 +1,503 @@
.. meta::
:description: How to use ROCm for AI
:keywords: ROCm, AI, LLM, train, fine-tune, FSDP, DeepSpeed, LLaMA, tutorial
:description: How to train a model using ROCm Megatron-LM
:keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch
****************
Training a model
****************
**************************************
Training a model with ROCm Megatron-LM
**************************************
The following is a brief overview of popular component paths per AI development use-case, such as training, LLMs,
and inferencing.
.. _amd-megatron-lm:
Accelerating model training
===========================
The ROCm Megatron-LM framework is a specialized fork of the robust Megatron-LM, designed to
enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ MI300X
accelerators, AMD Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI
workloads. It is purpose-built to :ref:`support models <amd-megatron-lm-model-support>`
like Meta's Llama 2, Llama 3, and Llama 3.1, enabling developers to train next-generation AI models with greater
efficiency. See the GitHub repository at `<https://github.com/ROCm/Megatron-LM>`__.
To train a large model like GPT2 or Llama 2 70B, a single accelerator or GPU cannot store all the model parameters
required for training. What if you could convert the single-GPU training code to run on multiple accelerators or GPUs?
PyTorch offers distributed training solutions to facilitate this.
For ease of use, AMD provides a ready-to-use Docker image for MI300X accelerators containing essential
components, including PyTorch, PyTorch Lightning, ROCm libraries, and Megatron-LM utilities. It contains the
following software to accelerate training workloads:
.. _rocm-for-ai-pytorch-distributed:
+--------------------------+--------------------------------+
| Software component | Version |
+==========================+================================+
| ROCm | 6.1 |
+--------------------------+--------------------------------+
| PyTorch | 2.4.0 |
+--------------------------+--------------------------------+
| PyTorch Lightning | 2.4.0 |
+--------------------------+--------------------------------+
| Megatron Core | 0.9.0 |
+--------------------------+--------------------------------+
| Transformer Engine | 1.5.0 |
+--------------------------+--------------------------------+
| Flash Attention | v2.6 |
+--------------------------+--------------------------------+
| Transformers | 4.44.0 |
+--------------------------+--------------------------------+
PyTorch distributed
-------------------
Supported features and models
=============================
As of PyTorch 1.6.0, features in ``torch.distributed`` are categorized into three main components:
Megatron-LM provides the following key features to train large language models efficiently:
- `Distributed data-parallel training
<https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`_ (DDP)
- Transformer Engine (TE)
- `RPC-Based distributed training <https://pytorch.org/docs/stable/rpc.html>`_ (RPC)
- APEX
- `Collective communication <https://pytorch.org/docs/stable/distributed.html>`_
- GEMM tuning
In this guide, the focus is on the distributed data-parallelism strategy as it's the most popular. To get started with DDP,
let's first understand how to coordinate the model and its training data across multiple accelerators or GPUs.
- Torch.compile
The DDP workflow on multiple accelerators or GPUs is as follows:
- 3D parallelism: TP + SP + CP
#. Split the current global training batch into small local batches on each GPU. For instance, if you have 8 GPUs and
the global batch is set at 32 samples, each of the 8 GPUs will have a local batch size of 4 samples.
- Distributed optimizer
#. Copy the model to every device so each device can process its local batches independently.
- Flash Attention (FA) 2
#. Run a forward pass, then a backward pass, and output the gradient of the weights with respect to the loss of the
model for that local batch. This happens in parallel on multiple devices.
- Fused kernels
#. Synchronize the local gradients computed by each device and combine them to update the model weights. The updated
weights are then redistributed to each device.
- Pre-training
In DDP training, each process or worker owns a replica of the model and processes a batch of data, then the reducer uses
``allreduce`` to sum up gradients over different workers.
.. _amd-megatron-lm-model-support:
See the following developer blogs for more in-depth explanations and examples.
The following models are pre-optimized for performance on the AMD Instinct MI300X accelerator.
* `Multi GPU training with DDP — PyTorch Tutorials <https://pytorch.org/tutorials/beginner/ddp_series_multigpu.html>`_
* Llama 2 7B
* `Building a decoder transformer model on AMD GPUs — ROCm Blogs
<https://rocm.blogs.amd.com/artificial-intelligence/decoder-transformer/README.html#distributed-training-on-multiple-gpus>`_
* Llama 2 70B
.. _rocm-for-ai-pytorch-fsdp:
* Llama 3 8B
PyTorch FSDP
------------
* Llama 3 70B
As noted in :ref:`PyTorch distributed <rocm-for-ai-pytorch-distributed>`, in DDP model weights and optimizer states
are evenly replicated across all workers. Fully Sharded Data Parallel (FSDP) is a type of data parallelism that shards
model parameters, optimizer states, and gradients across DDP ranks.
* Llama 3.1 8B
When training with FSDP, the GPU memory footprint is smaller than when training with DDP across all workers. This makes
the training of some very large models feasible by allowing larger models or batch sizes to fit on-device. However, this
comes with the cost of increased communication volume. The communication overhead is reduced by internal optimizations
like overlapping communication and computation.
* Llama 3.1 70B
For a high-level overview of how FSDP works, review `Getting started with Fully Sharded Data Parallel
<https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html#how-fsdp-works>`_.
Prerequisite system validation steps
====================================
For detailed training steps, refer to the `PyTorch FSDP examples
<https://github.com/pytorch/examples/tree/main/distributed/FSDP>`_.
Complete the following system validation and optimization steps to set up your system before starting training.
.. _rocm-for-ai-deepspeed:
Disable NUMA auto-balancing
---------------------------
DeepSpeed
---------
Generally, application performance can benefit from disabling NUMA auto-balancing. However,
it might be detrimental to performance with certain types of workloads.
`DeepSpeed <https://deepspeed.ai>`_ offers system innovations that make large-scale deep learning training effective,
efficient, and easy to use. Innovations such as ZeRO, 3D-Parallelism, DeepSpeed-MoE, ZeRO-Infinity, and so on fall under
the training pillar.
Run the command ``cat /proc/sys/kernel/numa_balancing`` to check your current NUMA (Non-Uniform
Memory Access) settings. Output ``0`` indicates this setting is disabled. If there is no output or
the output is ``1``, run the following command to disable NUMA auto-balancing.
See `Pre-training a large language model with Megatron-DeepSpeed on multiple AMD GPUs — ROCm Blogs
<https://rocm.blogs.amd.com/artificial-intelligence/megatron-deepspeed-pretrain/README.html>`_ for a detailed example of
training with DeepSpeed on an AMD accelerator or GPU.
.. code-block:: shell
.. _rocm-for-ai-automatic-mixed-precision:
sudo sh -c 'echo 0 > /proc/sys/kernel/numa_balancing'
Automatic mixed precision (AMP)
See :ref:`mi300x-disable-numa` for more information.
Hardware verification with ROCm
-------------------------------
As models increase in size, so do the time and memory needed to train them; that is, their cost also increases. Any measure we
can take to reduce training time and memory usage through `automatic mixed precision
<https://pytorch.org/docs/stable/amp.html>`_ (AMP) is highly beneficial for most use cases.
Use the command ``rocm-smi --setperfdeterminism 1900`` to set the max clock speed up to 1900 MHz
instead of the default 2100 MHz. This can reduce the chance of a PCC event lowering the attainable
GPU clocks. This setting will not be required for new IFWI releases with the production PRC feature.
You can restore this setting to its default value with the ``rocm-smi -r`` command.
See `Automatic mixed precision in PyTorch using AMD GPUs — ROCm Blogs
<https://rocm.blogs.amd.com/artificial-intelligence/automatic-mixed-precision/README.html#automatic-mixed-precision-in-pytorch-using-amd-gpus>`_
for more information about running AMP on an AMD accelerator.
Run the command:
.. _rocm-for-ai-fine-tune:
.. code-block:: shell
Fine-tuning your model
======================
rocm-smi --setperfdeterminism 1900
ROCm supports multiple techniques for :ref:`optimizing fine-tuning <fine-tuning-llms-concept-optimizations>`, for
example, LoRA, QLoRA, PEFT, and FSDP.
See :ref:`mi300x-hardware-verification-with-rocm` for more information.
Learn more about challenges and solutions for model fine-tuning in :doc:`../llm-fine-tuning-optimization/index`.
RCCL Bandwidth Test
-------------------
The following developer blogs showcase examples of how to fine-tune a model on an AMD accelerator or GPU.
ROCm Collective Communications Library (RCCL) is a standalone library of standard collective communication
routines for GPUs. See the :doc:`RCCL documentation <rccl:index>` for more information. Before starting
pre-training, running an RCCL bandwidth test helps ensure that the multi-GPU or multi-node setup is optimized
for efficient distributed training.
* Fine-tuning Llama2 with LoRA
Running the RCCL bandwidth test helps verify that:
* `Fine-tune Llama 2 with LoRA: Customizing a large language model for question-answering — ROCm Blogs
<https://rocm.blogs.amd.com/artificial-intelligence/llama2-lora/README.html>`_
- The GPUs can communicate across nodes or within a single node.
* Fine-tuning Llama2 with QLoRA
- The interconnect (such as InfiniBand, Ethernet, or Infinity Fabric) is functioning as expected and
provides adequate bandwidth for communication.
* `Enhancing LLM accessibility: A deep dive into QLoRA through fine-tuning Llama 2 on a single AMD GPU — ROCm Blogs
<https://rocm.blogs.amd.com/artificial-intelligence/llama2-Qlora/README.html>`_
- There are no hardware setup or cabling issues that could affect communication between GPUs.
* Fine-tuning a BERT-based LLM for a text classification task using JAX
Tuning and optimizing hyperparameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* `LLM distributed supervised fine-tuning with JAX — ROCm Blogs
<https://rocm.blogs.amd.com/artificial-intelligence/distributed-sft-jax/README.html>`_
In distributed training, specific hyperparameters related to distributed communication can be tuned based on
the results of the RCCL bandwidth test. These variables are already set in the Docker image:
* Fine-tuning StarCoder using PEFT
.. code-block:: shell
* `Instruction fine-tuning of StarCoder with PEFT on multiple AMD GPUs — ROCm Blogs
<https://rocm.blogs.amd.com/artificial-intelligence/starcoder-fine-tune/README.html>`_
# force all RCCL streams to be high priority
export TORCH_NCCL_HIGH_PRIORITY=1
* Recipes for fine-tuning Llama2 and 3 with ``llama-recipes``
# specify which RDMA interfaces to use for communication
export NCCL_IB_HCA=rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7
# define the Global ID index used in RoCE mode
export NCCL_IB_GID_INDEX=3
# avoid data corruption/mismatch issue that existed in past releases
export RCCL_MSCCL_ENABLE=0
Running the RCCL Bandwidth Test
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It's recommended that you run the RCCL bandwidth test before launching training to confirm that system
performance is sufficient. The RCCL bandwidth test (rccl-tests) is not included in the AMD Megatron-LM Docker
image; follow the instructions in `<https://github.com/ROCm/rccl-tests>`__ to build it.
See :ref:`mi300x-rccl` for more information.
Run on 8 GPUs (``-g 8``), scanning from 8 bytes to 10 GB:
.. code-block:: shell
./build/all_reduce_perf -b 8 -e 10G -f 2 -g 8
.. image:: ../../data/how-to/rocm-for-ai/rccl-tests-8-gpu.png
:width: 800
Using one MPI process per GPU and ``-g 1`` for performance-oriented runs on both single-node and multi-node is
recommended. So, a run on 8 GPUs looks something like:
.. code-block:: shell
mpirun -np 8 --bind-to numa ./build/all_reduce_perf -b 8 -e 10G -f 2 -g 1
.. image:: ../../data/how-to/rocm-for-ai/rccl-tests-1-mpi-process-per-gpu.png
:width: 800
Running with one MPI process per GPU ensures a one-to-one mapping for CPUs and GPUs, which can be beneficial
for smaller message sizes. This better represents the real-world use of RCCL in deep learning frameworks like
PyTorch and TensorFlow.
Use the following script to run the RCCL test for four MI300X GPU nodes. Modify paths and node addresses as needed.
.. code-block::
/home/$USER/ompi_for_gpu/ompi/bin/mpirun -np 32 -H tw022:8,tw024:8,tw010:8,tw015:8 \
--mca pml ucx \
--mca btl ^openib \
-x NCCL_SOCKET_IFNAME=ens50f0np0 \
-x NCCL_IB_HCA=rdma0:1,rdma1:1,rdma2:1,rdma3:1,rdma4:1,rdma5:1,rdma6:1,rdma7:1 \
-x NCCL_IB_GID_INDEX=3 \
-x NCCL_MIN_NCHANNELS=40 \
-x NCCL_DEBUG=version \
$HOME/rccl-tests/build/all_reduce_perf -b 8 -e 8g -f 2 -g 1
.. image:: ../../data/how-to/rocm-for-ai/rccl-tests-4-mi300x-gpu-nodes.png
:width: 800
.. _mi300x-amd-megatron-lm-training:
Start training on MI300X accelerators
=====================================
The pre-built ROCm Megatron-LM environment allows users to quickly validate system performance, conduct
training benchmarks, and achieve superior performance for models like Llama 2 and Llama 3.1.
Use the following instructions to set up the environment, configure the script to train models, and
reproduce the benchmark results on the MI300X accelerators with the AMD Megatron-LM Docker
image.
.. _amd-megatron-lm-requirements:
Download the Docker image and required packages
-----------------------------------------------
1. Use the following command to pull the Docker image from Docker Hub.
.. code-block:: shell
docker pull rocm/megatron-lm:24.12-dev
2. Launch the Docker container.
.. code-block:: shell
docker run -it --device /dev/dri --device /dev/kfd --network host --ipc host --group-add video --cap-add SYS_PTRACE --security-opt seccomp=unconfined --privileged -v $CACHE_DIR:/root/.cache --name megatron-dev-env rocm/megatron-lm:24.12-dev /bin/bash
3. Clone the ROCm Megatron-LM repository to a local directory and install the required packages on the host machine.
.. code-block:: shell
git clone https://github.com/ROCm/Megatron-LM
cd Megatron-LM
.. note::
This release is validated with ``ROCm/Megatron-LM`` commit `bb93ccb <https://github.com/ROCm/Megatron-LM/tree/bb93ccbfeae6363c67b361a97a27c74ab86e7e92>`_.
Checking out this specific commit is recommended for a stable and reproducible environment.
.. code-block:: shell
git checkout bb93ccbfeae6363c67b361a97a27c74ab86e7e92
Prepare training datasets
-------------------------
If you already have the preprocessed data, you can skip this section.
Use the following command to process datasets. GPT data is used as an example. You can change the merge table, add an
end-of-document token, remove sentence splitting, or change the tokenizer type.
.. code-block:: shell
python tools/preprocess_data.py \
--input my-corpus.json \
--output-prefix my-gpt2 \
--vocab-file gpt2-vocab.json \
--tokenizer-type GPT2BPETokenizer \
--merge-file gpt2-merges.txt \
--append-eod
In this case, the automatically generated output files are named ``my-gpt2_text_document.bin`` and
``my-gpt2_text_document.idx``.
.. image:: ../../data/how-to/rocm-for-ai/prep-training-datasets-my-gpt2-text-document.png
:width: 800
.. _amd-megatron-lm-environment-setup:
Environment setup
-----------------
In the ``examples/llama`` directory of Megatron-LM, if you're working with Llama 2 7B or Llama 2 70B, use the
``train_llama2.sh`` configuration script. Likewise, if you're working with Llama 3 or Llama 3.1, then use
``train_llama3.sh`` and update the configuration script accordingly.
Network interface
^^^^^^^^^^^^^^^^^
To avoid connectivity issues, ensure the correct network interface is set in your training scripts.
1. Run the following command to find the active network interface on your system.
.. code-block:: shell
ip a
2. Update the ``NCCL_SOCKET_IFNAME`` and ``GLOO_SOCKET_IFNAME`` variables with your system's network interface. For
example:
.. code-block:: shell
export NCCL_SOCKET_IFNAME=ens50f0np0
export GLOO_SOCKET_IFNAME=ens50f0np0
Dataset options
^^^^^^^^^^^^^^^
You can use either mock data or real data for training.
* If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset.
.. code-block:: shell
DATA_DIR="/root/.cache/data" # Change to where your dataset is stored
DATA_PATH=${DATA_DIR}/bookcorpus_text_sentence
.. code-block:: shell
--data-path $DATA_PATH
Ensure that the files are accessible inside the Docker container.
* Mock data can be useful for testing and validation. If you're using mock data, replace ``--data-path $DATA_PATH`` with the ``--mock-data`` option.
.. code-block:: shell
--mock-data
Tokenizer
^^^^^^^^^
Tokenization is the process of converting raw text into tokens that can be processed by the model. For Llama
models, this typically involves sub-word tokenization, where words are broken down into smaller units based on
a fixed vocabulary. The tokenizer is trained along with the model on a large corpus of text, and it learns a
fixed vocabulary that can represent a wide range of text from different domains. This allows Llama models to
handle a variety of input sequences, including unseen words or domain-specific terms.
To train any of the Llama 2 models that this Docker image supports, use the ``Llama2Tokenizer``.
To train any of Llama 3 and Llama 3.1 models that this Docker image supports, use the ``HuggingFaceTokenizer``.
Set the Hugging Face model link in the ``TOKENIZER_MODEL`` variable.
For example, if you're using the Llama 3.1 8B model:
.. code-block:: shell
TOKENIZER_MODEL=meta-llama/Llama-3.1-8B
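To sanity-check the tokenizer outside the training script, you can load it directly with the Hugging Face
``transformers`` library. This is a sketch; it assumes the ``transformers`` package is available in the container and
that you have access to the gated Llama repository.

.. code-block:: python

   # Sketch: inspect the tokenizer that will be used for Llama 3.1 8B.
   from transformers import AutoTokenizer

   tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
   encoded = tokenizer("ROCm Megatron-LM training example")
   print(encoded["input_ids"])                                    # sub-word token IDs
   print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))   # the corresponding sub-words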
Run benchmark tests
-------------------
.. note::
If you're running **multi node training**, update the following environment variables. They can
also be passed as command line arguments.
* Change ``localhost`` to the master node's hostname:
.. code-block:: shell
MASTER_ADDR="${MASTER_ADDR:-localhost}"
* Set the number of nodes you want to train on (for instance, ``2``, ``4``, ``8``):
.. code-block:: shell
NNODES="${NNODES:-1}"
* Set the rank of each node (0 for master, 1 for the first worker node, and so on):
.. code-block:: shell
NODE_RANK="${NODE_RANK:-0}"
* Use this command to run a performance benchmark test of any of the Llama 2 models that this Docker image supports (see :ref:`variables <amd-megatron-lm-benchmark-test-vars>`).
.. code-block:: shell
{variables} bash examples/llama/train_llama2.sh
* Use this command to run a performance benchmark test of any of the Llama 3 and Llama 3.1 models that this Docker image supports (see :ref:`variables <amd-megatron-lm-benchmark-test-vars>`).
.. code-block:: shell
{variables} bash examples/llama/train_llama3.sh
.. _amd-megatron-lm-benchmark-test-vars:
The benchmark tests support the same set of variables:
+--------------------------+-----------------------+-----------------------+
| Name | Options | Description |
+==========================+=======================+=======================+
| ``TEE_OUTPUT`` | 0 or 1 | 0: disable training |
| | | log |
| | | |
| | | 1: enable training |
| | | log |
+--------------------------+-----------------------+-----------------------+
| ``MBS`` | | Micro batch size |
+--------------------------+-----------------------+-----------------------+
| ``BS`` | | Batch size |
+--------------------------+-----------------------+-----------------------+
| ``TP`` | 1, 2, 4, 8 | Tensor parallel |
+--------------------------+-----------------------+-----------------------+
| ``TE_FP8`` | 0 or 1 | Datatype. |
| | | |
| | | 1: FP8 |
| | | |
| | | 0: BF16 |
+--------------------------+-----------------------+-----------------------+
| ``NO_TORCH_COMPILE`` | 0 or 1 | 1: enable |
| | | torch.compile |
| | | |
| | | 0: disable |
| | | torch.compile |
| | | (default) |
+--------------------------+-----------------------+-----------------------+
| ``SEQ_LENGTH`` | | Input sequence length |
+--------------------------+-----------------------+-----------------------+
| ``GEMM_TUNING`` | 0 or 1 | If it is set to 1, |
| | | enable gemm tuning. |
| | | |
| | | If it is set to 0, |
| | | disable gemm tuning |
+--------------------------+-----------------------+-----------------------+
| ``USE_FLASH_ATTN`` | 0 or 1 | 0: disable flash |
| | | attention |
| | | |
| | | 1: enable flash |
| | | attention |
+--------------------------+-----------------------+-----------------------+
| ``ENABLE_PROFILING`` | 0 or 1 | 0: disable torch |
| | | profiling |
| | | |
| | | 1: enable torch |
| | | profiling |
+--------------------------+-----------------------+-----------------------+
| ``MODEL_SIZE`` | | The size of the model: |
| | | 7B, 70B, and so on |
+--------------------------+-----------------------+-----------------------+
| ``TOTAL_ITERS`` | | Total number of |
| | | iterations |
+--------------------------+-----------------------+-----------------------+
| ``transformer-impl`` | transformer_engine or | Transformer |
| | local | implementation; |
| | | transformer_engine is |
| | | the default |
+--------------------------+-----------------------+-----------------------+
Benchmarking examples
^^^^^^^^^^^^^^^^^^^^^
.. tab-set::
.. tab-item:: Single node training
:sync: single
Use this command to run training with the Llama 2 7B model on a single node. You can specify MBS, BS, TP, FP
datatype, and so on.
.. code-block:: bash
TEE_OUTPUT=1 MBS=5 BS=120 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1
SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh
You can find the training logs at the location defined in ``$TRAIN_LOG`` in the :ref:`configuration script <amd-megatron-lm-environment-setup>`.
See the sample output:
.. image:: ../../data/how-to/rocm-for-ai/llama2-7b-training-log-sample.png
:width: 800
.. tab-item:: Multi node training
:sync: multi
Launch the Docker container on each node.
In this example, run training with the Llama 2 7B model on 2 nodes with a specific MBS, BS, TP, FP datatype, and
so on.
On the master node:
.. code-block:: bash
TEE_OUTPUT=1 MBS=4 BS=64 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1
SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh
On the worker node:
.. code-block:: bash
TEE_OUTPUT=1 MBS=4 BS=64 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1
SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh
You can find the training logs at the location defined in ``$TRAIN_LOG`` in the :ref:`configuration script <amd-megatron-lm-environment-setup>`.
Sample output for 2-node training:
Master node:
.. image:: ../../data/how-to/rocm-for-ai/2-node-training-master.png
:width: 800
Worker node:
.. image:: ../../data/how-to/rocm-for-ai/2-node-training-worker.png
:width: 800
* `meta-llama/llama-recipes: Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover
single/multi-node GPUs <https://github.com/meta-llama/llama-recipes/tree/main/recipes/quickstart/finetuning>`_

View File

@@ -537,6 +537,8 @@ installation was successful, refer to the
:doc:`rocm-install-on-linux:install/post-install`.
Should verification fail, consult :doc:`/how-to/system-debugging`.
.. _mi300x-hardware-verification-with-rocm:
Hardware verification with ROCm
-------------------------------

File diff suppressed because it is too large

View File

@@ -37,7 +37,6 @@ ROCm documentation is organized into the following categories:
:::{grid-item-card} How to
:class-body: rocm-card-banner rocm-hue-12
* [Programming guide](./how-to/hip_programming_guide.rst)
* [Use ROCm for AI](./how-to/rocm-for-ai/index.rst)
* [Use ROCm for HPC](./how-to/rocm-for-hpc/index.rst)
* [Fine-tune LLMs and inference optimization](./how-to/llm-fine-tuning-optimization/index.rst)
@@ -55,12 +54,11 @@ ROCm documentation is organized into the following categories:
:class-body: rocm-card-banner rocm-hue-8
* [GPU architecture overview](./conceptual/gpu-arch.md)
* [GPU memory](./conceptual/gpu-memory.md)
* [Input-Output Memory Management Unit (IOMMU)](./conceptual/iommu.rst)
* [File structure (Linux FHS)](./conceptual/file-reorg.md)
* [GPU isolation techniques](./conceptual/gpu-isolation.md)
* [Using CMake](./conceptual/cmake-packages.rst)
* [ROCm & PCIe atomics](./conceptual/More-about-how-ROCm-uses-PCIe-Atomics.rst)
* [PCIe atomics in ROCm](./conceptual/pcie-atomics.rst)
* [Inception v3 with PyTorch](./conceptual/ai-pytorch-inception.md)
* [Oversubscription of hardware resources](./conceptual/oversubscription.rst)
:::

View File

@@ -63,7 +63,7 @@
* {doc}`hipSPARSELt <hipsparselt:index>`
* {doc}`rocALUTION <rocalution:index>`
* {doc}`rocWMMA <rocwmma:index>`
* {doc}`Tensile <tensile:index>`
* {doc}`Tensile <tensile:src/index>`
:::
::::

View File

@@ -32,8 +32,6 @@ subtrees:
- caption: How to
entries:
- file: how-to/programming_guide.rst
title: Programming guide
- file: how-to/rocm-for-ai/index.rst
title: Use ROCm for AI
subtrees:
@@ -42,6 +40,8 @@ subtrees:
title: Installation
- file: how-to/rocm-for-ai/train-a-model.rst
title: Train a model
- file: how-to/rocm-for-ai/scale-model-training.rst
title: Scale model training
- file: how-to/rocm-for-ai/hugging-face-models.rst
title: Run models from Hugging Face
- file: how-to/rocm-for-ai/deploy-your-model.rst
@@ -146,8 +146,6 @@ subtrees:
title: AMD Instinct MI100/CDNA1 ISA
- url: https://www.amd.com/system/files/documents/amd-cdna-whitepaper.pdf
title: White paper
- file: conceptual/gpu-memory.md
title: GPU memory
- file: conceptual/iommu.rst
title: Input-Output Memory Management Unit (IOMMU)
- file: conceptual/file-reorg.md
@@ -156,8 +154,8 @@ subtrees:
title: GPU isolation techniques
- file: conceptual/cmake-packages.rst
title: Using CMake
- file: conceptual/More-about-how-ROCm-uses-PCIe-Atomics.rst
title: ROCm & PCIe atomics
- file: conceptual/pcie-atomics.rst
title: PCIe atomics in ROCm
- file: conceptual/ai-pytorch-inception.md
title: Inception v3 with PyTorch
- file: conceptual/oversubscription.rst

View File

@@ -1,3 +1,3 @@
rocm-docs-core==1.9.2
rocm-docs-core==1.11.0
sphinx-reredirects
sphinx-sitemap

View File

@@ -90,7 +90,7 @@ requests==2.32.3
# via
# pygithub
# sphinx
rocm-docs-core==1.9.2
rocm-docs-core==1.11.0
# via -r requirements.in
smmap==5.0.1
# via gitdb

View File

@@ -75,7 +75,7 @@ Math
":doc:`rocSOLVER <rocsolver:index>`", "An implementation of LAPACK routines on ROCm software, implemented in the HIP programming language and optimized for AMD's latest discrete GPUs"
":doc:`rocSPARSE <rocsparse:index>`", "Exposes a common interface that provides BLAS for sparse computation implemented on ROCm runtime and toolchains (in the HIP programming language)"
":doc:`rocWMMA <rocwmma:index>`", "C++ library for accelerating mixed-precision matrix multiply-accumulate (MMA) operations"
":doc:`Tensile <tensile:index>`", "Creates benchmark-driven backend libraries for GEMMs, GEMM-like problems, and general N-dimensional tensor contractions"
":doc:`Tensile <tensile:src/index>`", "Creates benchmark-driven backend libraries for GEMMs, GEMM-like problems, and general N-dimensional tensor contractions"
Primitives
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -66,46 +66,44 @@ endef
# It is a space seperated list with zero or more elements.
$(call adddep,amd_smi_lib,${ASAN_DEP})
$(call adddep,aqlprofile,${ASAN_DEP} hsa)
$(call adddep,aqlprofile,${ASAN_DEP} rocr)
$(call adddep,comgr,lightning devicelibs)
$(call adddep,dbgapi,hsa comgr)
$(call adddep,dbgapi,rocr comgr)
$(call adddep,devicelibs,lightning)
$(call adddep,hip_on_rocclr,${ASAN_DEP} hsa comgr hipcc rocprofiler-register)
$(call adddep,hip_on_rocclr,${ASAN_DEP} rocr comgr hipcc rocprofiler-register)
$(call adddep,hipcc,)
$(call adddep,hipify_clang,hip_on_rocclr lightning)
$(call adddep,hsa,${ASAN_DEP} thunk lightning devicelibs rocprofiler-register)
$(call adddep,lightning,)
$(call adddep,omniperf,${ASAN_DEP})
$(call adddep,omnitrace,hipcc hsa hip_on_rocclr rocm_smi_lib rocprofiler roctracer)
$(call adddep,opencl_icd_loader,)
$(call adddep,opencl_on_rocclr,${ASAN_DEP} hsa comgr opencl_icd_loader)
$(call adddep,openmp_extras,thunk lightning devicelibs hsa)
$(call adddep,rdc,${ASAN_DEP} rocm_smi_lib hsa rocprofiler)
$(call adddep,rocclr,${ASAN_DEP} hsa comgr hipcc rocprofiler-register)
$(call adddep,rocm_bandwidth_test,${ASAN_DEP} hsa)
$(call adddep,opencl_on_rocclr,${ASAN_DEP} rocr comgr)
$(call adddep,openmp_extras,lightning devicelibs rocr)
$(call adddep,rocm_bandwidth_test,${ASAN_DEP} rocr)
$(call adddep,rocm_smi_lib,${ASAN_DEP})
$(call adddep,rocm-cmake,${ASAN_DEP})
$(call adddep,rocm-core,${ASAN_DEP})
$(call adddep,rocm-gdb,dbgapi)
$(call adddep,rocminfo,${ASAN_DEP} hsa)
$(call adddep,rocminfo,${ASAN_DEP} rocr)
$(call adddep,rocprofiler-register,${ASAN_DEP})
$(call adddep,rocprofiler-sdk,${ASAN_DEP} hsa aqlprofile opencl_on_rocclr hip_on_rocclr comgr)
$(call adddep,rocprofiler,${ASAN_DEP} hsa roctracer aqlprofile opencl_on_rocclr hip_on_rocclr comgr)
$(call adddep,rocr_debug_agent,${ASAN_DEP} hip_on_rocclr hsa dbgapi)
$(call adddep,roctracer,${ASAN_DEP} hsa hip_on_rocclr)
$(call adddep,thunk,${ASAN_DEP})
$(call adddep,rocprofiler-sdk,${ASAN_DEP} rocr aqlprofile opencl_on_rocclr hip_on_rocclr comgr)
$(call adddep,rocprofiler-systems,${ASAN_DEP} hipcc rocr hip_on_rocclr rocm_smi_lib rocprofiler roctracer rocprofiler-sdk)
$(call adddep,rocprofiler,${ASAN_DEP} rocr roctracer aqlprofile opencl_on_rocclr hip_on_rocclr comgr)
$(call adddep,rocprofiler-compute,${ASAN_DEP})
$(call adddep,rocr,${ASAN_DEP} lightning rocm_smi_lib devicelibs rocprofiler-register)
$(call adddep,rocr_debug_agent,${ASAN_DEP} hip_on_rocclr rocr dbgapi)
$(call adddep,roctracer,${ASAN_DEP} rocr hip_on_rocclr)
# rocm-dev points to all possible last finish components of Stage1 build.
rocm-dev-components :=rdc hipify_clang openmp_extras \
omniperf omnitrace rocm-core amd_smi_lib hipcc \
rocm_bandwidth_test rocr_debug_agent rocm-gdb
$(call adddep,rocm-dev,$(filter-out ${NOBUILD},${rocm-dev-components}))
rocm-dev-components :=amd_smi_lib aqlprofile comgr dbgapi devicelibs hip_on_rocclr hipcc hipify_clang \
lightning rocprofiler-compute opencl_on_rocclr openmp_extras rocm_bandwidth_test rocm_smi_lib \
rocm-cmake rocm-core rocm-gdb rocminfo rocprofiler-register rocprofiler-sdk rocprofiler-systems \
rocprofiler rocr rocr_debug_agent roctracer
$(call adddep,rocm-dev,$(filter-out ${NOBUILD} kernel_ubuntu,${rocm-dev-components}))
$(call adddep,amdmigraphx,hip_on_rocclr half rocblas miopen-hip lightning hipcc)
$(call adddep,amdmigraphx,hip_on_rocclr half rocblas miopen-hip lightning hipcc hiptensor)
$(call adddep,composable_kernel,lightning hipcc hip_on_rocclr rocm-cmake)
$(call adddep,half,rocm-cmake)
$(call adddep,hipblas-common,lightning)
$(call adddep,hipblas,hip_on_rocclr rocblas rocsolver lightning hipcc)
$(call adddep,hipblaslt,hip_on_rocclr openmp_extras hipblas lightning hipcc)
$(call adddep,hipblaslt,hip_on_rocclr openmp_extras lightning hipcc hipblas-common rocm-dev)
$(call adddep,hipcub,hip_on_rocclr rocprim lightning hipcc)
$(call adddep,hipfft,hip_on_rocclr openmp_extras rocfft rocrand hiprand lightning hipcc)
$(call adddep,hipfort,rocblas hipblas rocsparse hipsparse rocfft hipfft rocrand hiprand rocsolver hipsolver lightning hipcc)
@@ -115,22 +113,25 @@ $(call adddep,hipsparse,hip_on_rocclr rocsparse lightning hipcc)
$(call adddep,hipsparselt,hip_on_rocclr hipsparse lightning hipcc openmp_extras)
$(call adddep,hiptensor,hip_on_rocclr composable_kernel lightning hipcc)
$(call adddep,miopen-deps,lightning hipcc)
$(call adddep,miopen-hip,composable_kernel half hip_on_rocclr miopen-deps rocblas roctracer lightning hipcc)
$(call adddep,miopen-hip,composable_kernel half hip_on_rocclr miopen-deps hipblas hipblaslt rocrand roctracer lightning hipcc)
$(call adddep,mivisionx,amdmigraphx miopen-hip rpp lightning hipcc)
$(call adddep,rccl,hip_on_rocclr hsa lightning hipcc rocm_smi_lib hipify_clang)
$(call adddep,rccl,rocm-core hip_on_rocclr rocr lightning hipcc rocm_smi_lib hipify_clang)
$(call adddep,rdc,rocm_smi_lib rocprofiler rocmvalidationsuite)
$(call adddep,rocalution,rocblas rocsparse rocrand lightning hipcc)
$(call adddep,rocblas,hip_on_rocclr openmp_extras lightning hipcc)
$(call adddep,rocblas,hip_on_rocclr openmp_extras lightning hipcc hipblaslt)
$(call adddep,rocal,mivisionx)
$(call adddep,rocdecode,hip_on_rocclr lightning hipcc)
$(call adddep,rocdecode,hip_on_rocclr lightning hipcc amdmigraphx)
$(call adddep,rocfft,hip_on_rocclr rocrand hiprand lightning hipcc openmp_extras)
$(call adddep,rocmvalidationsuite,hip_on_rocclr hsa rocblas rocm-core lightning hipcc rocm_smi_lib)
$(call adddep,rocjpeg,hip_on_rocclr lightning hipcc rocm-dev)
$(call adddep,rocmvalidationsuite,hip_on_rocclr rocr hipblas hiprand hipblaslt rocm-core lightning hipcc rocm_smi_lib)
$(call adddep,rocprim,hip_on_rocclr lightning hipcc)
$(call adddep,rocrand,hip_on_rocclr lightning hipcc)
$(call adddep,rocsolver,hip_on_rocclr rocblas rocsparse lightning hipcc)
$(call adddep,rocsolver,hip_on_rocclr rocblas rocsparse rocprim lightning hipcc)
$(call adddep,rocsparse,hip_on_rocclr rocprim lightning hipcc)
$(call adddep,rocthrust,hip_on_rocclr rocprim lightning hipcc)
$(call adddep,rocwmma,hip_on_rocclr rocblas lightning hipcc rocm-cmake rocm_smi_lib)
$(call adddep,rpp,half lightning hipcc openmp_extras)
$(call adddep,transferbench,hip_on_rocclr lightning hipcc)
# -------------------------------------------------------------------------
@@ -189,7 +190,7 @@ else # } {
# Pass in jobserver info using the RMAKE variable
${RMAKE}@( if set -x && source $${INFRA_REPO}/envsetup.sh && \
rm -f $$@.errors $$@ $$@.repackaged && \
$${INFRA_REPO}/build_$1.sh -c && source $${INFRA_REPO}/ccache-env-mathlib.sh && \
$${INFRA_REPO}/build_$1.sh -c && \
time bash -x $${INFRA_REPO}/build_$1.sh $${RELEASE_FLAG} $${SANITIZER_FLAG} && $${INFRA_REPO}/post_inst_pkg.sh "$1" ; \
then mv $$@.inprogress $$@ ; \
else mv $$@.inprogress $$@.errors ; echo Error in $1 >&2 ; exit 1 ;\
@@ -216,11 +217,14 @@ $(call peval,$(foreach dep,$(strip ${components}),$(call toplevel,${dep})))
all: $(addprefix T_,$(filter-out ${NOBUILD},${components}))
@echo All ROCm components built
# Do not document this target
upload: $(addprefix U_,${components})
upload: $(addprefix U_,$(filter-out ${NOBUILD},${components}))
@echo All ROCm components built and uploaded
upload-rocm-dev: $(addprefix U_,$(filter-out ${NOBUILD},${components}))
@echo All rocm-dev components built and uploaded
##help rocm-dev: Build a subset of ROCm
rocm-dev: T_rocm-dev
rocm-dev: $(addprefix T_,$(filter-out ${NOBUILD},${components}))
@echo rocm-dev built
${OUT_DIR}/logs:

View File

@@ -22,15 +22,15 @@ printUsage() {
return 0
}
PROJ_NAME="amdsmi"
PACKAGE_ROOT="$(getPackageRoot)"
TARGET="build"
PACKAGE_LIB=$(getLibPath)
PACKAGE_INCLUDE="$(getIncludePath)"
AMDSMI_BUILD_DIR=$(getBuildPath amdsmi)
AMDSMI_PACKAGE_DEB_DIR="$(getPackageRoot)/deb/amdsmi"
AMDSMI_PACKAGE_RPM_DIR="$(getPackageRoot)/rpm/amdsmi"
AMDSMI_BUILD_DIR=$(getBuildPath $PROJ_NAME)
AMDSMI_PACKAGE_DEB_DIR="$PACKAGE_ROOT/deb/$PROJ_NAME"
AMDSMI_PACKAGE_RPM_DIR="$PACKAGE_ROOT/rpm/$PROJ_NAME"
AMDSMI_BUILD_TYPE="debug"
BUILD_TYPE="Debug"
@@ -57,10 +57,9 @@ do
(-a | --address_sanitizer)
set_asan_env_vars
set_address_sanitizer_on
# TODO - support standard option of passing cmake environment vars - CFLAGS,CXXFLAGS etc., to enable address sanitizer
ADDRESS_SANITIZER=true ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
ack_and_skip_static ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
(-p | --package)

View File

@@ -10,7 +10,9 @@ build_amdmigraphx() {
cd $COMPONENT_SRC
pip3 install https://github.com/RadeonOpenCompute/rbuild/archive/master.tar.gz
if ! command -v rbuild &> /dev/null; then
pip3 install https://github.com/RadeonOpenCompute/rbuild/archive/master.tar.gz
fi
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
@@ -20,7 +22,7 @@ build_amdmigraphx() {
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908;gfx90a;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx900;gfx906;gfx908;gfx90a;gfx1030;gfx1100;gfx1101;gfx1102;gfx942;gfx1200;gfx1201"
fi
init_rocm_common_cmake_params
@@ -29,7 +31,7 @@ build_amdmigraphx() {
--cxx="${ROCM_PATH}/llvm/bin/clang++" \
--cc="${ROCM_PATH}/llvm/bin/clang" \
"${rocm_math_common_cmake_params[@]}" \
-DCMAKE_MODULE_LINKER_FLAGS="-Wl,--enable-new-dtags -Wl,--rpath,$ROCM_LIB_RPATH" \
-DCMAKE_MODULE_LINKER_FLAGS="-Wl,--enable-new-dtags,--build-id=sha1,--rpath,$ROCM_LIB_RPATH" \
-DGPU_TARGETS="${GPU_TARGETS}" \
-DCMAKE_INSTALL_RPATH=""

View File

@@ -11,7 +11,9 @@ printUsage() {
echo " -p, --package <type> Specify packaging format"
echo " -r, --release Make a release build instead of a debug build"
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of
type referred to by pkg_type"
type referred to by pkg_type"
echo " -s, --static Component/Build does not support static builds just accepting this param & ignore.
No effect of the param on this build"
echo " -h, --help Prints this help"
echo
echo "Possible values for <type>:"

View File

@@ -1,136 +0,0 @@
#!/bin/bash
source "$(dirname "${BASH_SOURCE}")/compute_utils.sh"
printUsage() {
echo
echo "Usage: $(basename "${BASH_SOURCE}") [-c|-r|-h] [makeopts]"
echo
echo "Options:"
echo " -c, --clean Removes all clang-ocl build artifacts"
echo " -r, --release Build non-debug version clang-ocl (default is debug)"
echo " -a, --address_sanitizer Enable address sanitizer"
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of
type referred to by pkg_type"
echo " -h, --help Prints this help"
echo " -s, --static Supports static CI by accepting this param & not bailing out. No effect of the param though"
echo
return 0
}
TARGET="build"
CLANG_OCL_DEST="$(getBinPath)"
CLANG_OCL_SRC_ROOT="$CLANG_OCL_ROOT"
CLANG_OCL_BUILD_DIR="$(getBuildPath clang-ocl)"
MAKEARG="$DASH_JAY"
PACKAGE_ROOT="$(getPackageRoot)"
PACKAGE_UTILS="$(getUtilsPath)"
CLANG_OCL_PACKAGE_DEB="$PACKAGE_ROOT/deb/clang-ocl"
CLANG_OCL_PACKAGE_RPM="$PACKAGE_ROOT/rpm/clang-ocl"
BUILD_TYPE="Debug"
SHARED_LIBS="ON"
CLEAN_OR_OUT=0;
MAKETARGET="deb"
PKGTYPE="deb"
VALID_STR=`getopt -o hcraso:g: --long help,clean,release,clean,static,address_sanitizer,outdir:,gpu_list: -- "$@"`
eval set -- "$VALID_STR"
while true ;
do
case "$1" in
(-h | --help)
printUsage ; exit 0;;
(-c | --clean)
TARGET="clean" ; ((CLEAN_OR_OUT|=1)) ; shift ;;
(-r | --release)
MAKEARG="$MAKEARG BUILD_TYPE=rel" ; BUILD_TYPE="Release" ; shift ;;
(-a | --address_sanitizer)
set_asan_env_vars
set_address_sanitizer_on ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
(-g | --gpu_list )
GPU_LIST=$2; shift 2 ;;
--) shift; break;;
(*)
echo " This should never come but just incase : UNEXPECTED ERROR Parm : [$1] ">&2 ; exit 20;;
esac
done
RET_CONFLICT=1
check_conflicting_options $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
if [ $RET_CONFLICT -ge 30 ]; then
print_vars $API_NAME $TARGET $BUILD_TYPE $SHARED_LIBS $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
exit $RET_CONFLICT
fi
clean_clang-ocl() {
echo "Removing clang-ocl"
rm -rf $CLANG_OCL_DEST/clang-ocl
rm -rf $CLANG_OCL_BUILD_DIR
rm -rf $CLANG_OCL_PACKAGE_DEB
rm -rf $CLANG_OCL_PACKAGE_RPM
}
build_clang-ocl() {
if [ ! -d "$CLANG_OCL_BUILD_DIR" ]; then
mkdir -p $CLANG_OCL_BUILD_DIR
pushd $CLANG_OCL_BUILD_DIR
if [ -e $PACKAGE_ROOT/lib/bitcode/opencl.amdgcn.bc ]; then
BC_DIR="$ROCM_INSTALL_PATH/lib"
else
BC_DIR="$ROCM_INSTALL_PATH/amdgcn/bitcode"
fi
cmake \
$(rocm_cmake_params) \
-DDISABLE_CHECKS="ON" \
-DCLANG_BIN="$ROCM_INSTALL_PATH/llvm/bin" \
-DBITCODE_DIR="$BC_DIR" \
$(rocm_common_cmake_params) \
-DCPACK_SET_DESTDIR="OFF" \
$CLANG_OCL_SRC_ROOT
echo "Making clang-ocl:"
cmake --build . -- $MAKEARG
cmake --build . -- $MAKEARG install
cmake --build . -- $MAKEARG package
popd
fi
copy_if DEB "${CPACKGEN:-"DEB;RPM"}" "$CLANG_OCL_PACKAGE_DEB" $CLANG_OCL_BUILD_DIR/rocm-clang-ocl*.deb
copy_if RPM "${CPACKGEN:-"DEB;RPM"}" "$CLANG_OCL_PACKAGE_RPM" $CLANG_OCL_BUILD_DIR/rocm-clang-ocl*.rpm
}
print_output_directory() {
case ${PKGTYPE} in
("deb")
echo ${CLANG_OCL_PACKAGE_DEB};;
("rpm")
echo ${CLANG_OCL_PACKAGE_RPM};;
(*)
echo "Invalid package type \"${PKGTYPE}\" provided for -o" >&2; exit 1;;
esac
exit
}
case $TARGET in
(clean) clean_clang-ocl ;;
(build) build_clang-ocl ;;
(outdir) print_output_directory ;;
(*) die "Invalid target $TARGET" ;;
esac
echo "Operation complete"
exit 0

View File

@@ -6,73 +6,53 @@ source "$(dirname "${BASH_SOURCE[0]}")/compute_helper.sh"
set_component_src composable_kernel
GPU_ARCH_LIST="gfx908;gfx90a;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
build_miopen_ck() {
echo "Start Building Composable Kernel"
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
GPU_ARCH_LIST="gfx908:xnack+;gfx90a:xnack+;gfx942:xnack+"
else
unset_asan_env_vars
set_address_sanitizer_off
fi
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
GPU_ARCH_LIST="gfx942"
ack_and_skip_static
fi
PYTHON_VERSION_WORKAROUND=''
echo "DISTRO_ID: ${DISTRO_ID}"
if [ "$DISTRO_ID" = "rhel-8.8" ] || [ "$DISTRO_ID" = "sles-15.5" ] ; then
EXTRA_PYTHON_PATH=/opt/Python-3.8.13
PYTHON_VERSION_WORKAROUND="-DCK_USE_ALTERNATIVE_PYTHON=${EXTRA_PYTHON_PATH}/bin/python3.8"
# For the python interpreter we need to export LD_LIBRARY_PATH.
export LD_LIBRARY_PATH=${EXTRA_PYTHON_PATH}/lib:$LD_LIBRARY_PATH
fi
cd $COMPONENT_SRC
mkdir "$BUILD_DIR" && cd "$BUILD_DIR"
init_rocm_common_cmake_params
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="-DAMDGPU_TARGETS=${GPU_ARCHS}"
fi
if [ "${ASAN_CMAKE_PARAMS}" == "true" ] ; then
cmake -DBUILD_DEV=OFF \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE:-'RelWithDebInfo'} \
-DCMAKE_CXX_COMPILER=${ROCM_PATH}/llvm/bin/clang++ \
-DCMAKE_CXX_FLAGS=" -O3 " \
-DCMAKE_PREFIX_PATH="${ROCM_PATH%-*}/lib/cmake;${ROCM_PATH%-*}/$ASAN_LIBDIR;${ROCM_PATH%-*}/llvm;${ROCM_PATH%-*}" \
-DCMAKE_SHARED_LINKER_FLAGS_INIT="-Wl,--enable-new-dtags,--rpath,$ROCM_ASAN_LIB_RPATH" \
-DCMAKE_EXE_LINKER_FLAGS_INIT="-Wl,--enable-new-dtags,--rpath,$ROCM_ASAN_EXE_RPATH" \
-DCMAKE_VERBOSE_MAKEFILE=1 \
-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=FALSE \
-DCMAKE_INSTALL_PREFIX=${ROCM_PATH} \
-DCMAKE_PACKAGING_INSTALL_PREFIX=${ROCM_PATH} \
-DBUILD_FILE_REORG_BACKWARD_COMPATIBILITY=OFF \
-DROCM_SYMLINK_LIBS=OFF \
-DCPACK_PACKAGING_INSTALL_PREFIX=${ROCM_PATH} \
-DROCM_DISABLE_LDCONFIG=ON \
-DROCM_PATH=${ROCM_PATH} \
-DCPACK_GENERATOR="${PKGTYPE^^}" \
${LAUNCHER_FLAGS} \
-DINSTANCES_ONLY=ON \
-DENABLE_ASAN_PACKAGING=true \
"${GPU_TARGETS}" \
"$COMPONENT_SRC"
else
cmake -DBUILD_DEV=OFF \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CXX_COMPILER=${ROCM_PATH}/llvm/bin/clang++ \
-DCMAKE_CXX_FLAGS=" -O3 " \
-DCMAKE_PREFIX_PATH=${ROCM_PATH%-*} \
-DCMAKE_SHARED_LINKER_FLAGS_INIT='-Wl,--enable-new-dtags,--rpath,$ORIGIN' \
-DCMAKE_EXE_LINKER_FLAGS_INIT='-Wl,--enable-new-dtags,--rpath,$ORIGIN/../lib' \
-DCMAKE_VERBOSE_MAKEFILE=1 \
-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=FALSE \
-DCMAKE_INSTALL_PREFIX=${ROCM_PATH} \
-DCMAKE_PACKAGING_INSTALL_PREFIX=${ROCM_PATH} \
-DBUILD_FILE_REORG_BACKWARD_COMPATIBILITY=OFF \
-DROCM_SYMLINK_LIBS=OFF \
-DCPACK_PACKAGING_INSTALL_PREFIX=${ROCM_PATH} \
-DROCM_DISABLE_LDCONFIG=ON \
-DROCM_PATH=${ROCM_PATH} \
-DCPACK_GENERATOR="${PKGTYPE^^}" \
-DCMAKE_CXX_COMPILER="${ROCM_PATH}/llvm/bin/clang++" \
-DCMAKE_C_COMPILER="${ROCM_PATH}/llvm/bin/clang" \
${LAUNCHER_FLAGS} \
-DINSTANCES_ONLY=ON \
"${GPU_TARGETS}" \
"$COMPONENT_SRC"
fi
cmake \
-DBUILD_DEV=OFF \
"${rocm_math_common_cmake_params[@]}" \
${PYTHON_VERSION_WORKAROUND} \
-DCPACK_GENERATOR="${PKGTYPE^^}" \
-DCMAKE_CXX_COMPILER="${ROCM_PATH}/llvm/bin/clang++" \
-DCMAKE_C_COMPILER="${ROCM_PATH}/llvm/bin/clang" \
${LAUNCHER_FLAGS} \
-DGPU_ARCHS="${GPU_ARCH_LIST}" \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CXX_FLAGS=" -O3 " \
"$COMPONENT_SRC"
cmake --build . -- -j${PROC} package
cmake --build "$BUILD_DIR" -- install
mkdir -p $PACKAGE_DIR && cp ./*.${PKGTYPE} $PACKAGE_DIR
rm -rf *
}
unset_asan_env_vars() {
@@ -88,85 +68,6 @@ set_address_sanitizer_off() {
export LDFLAGS=""
}
build_miopen_ckProf() {
ENABLE_ADDRESS_SANITIZER=false
echo "Start Building Composable Kernel Profiler"
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
else
unset_asan_env_vars
set_address_sanitizer_off
fi
cd $COMPONENT_SRC
cd "$BUILD_DIR"
rm -rf *
architectures='gfx10 gfx11 gfx90 gfx94'
if [ -n "$GPU_ARCHS" ]; then
architectures=$(echo ${GPU_ARCHS} | awk -F';' '{for(i=1;i<=NF;i++) a[substr($i,1,5)]} END{for(i in a) printf i" "}')
fi
for arch in ${architectures}
do
if [ "${ASAN_CMAKE_PARAMS}" == "true" ] ; then
cmake -DBUILD_DEV=OFF \
-DCMAKE_PREFIX_PATH="${ROCM_PATH%-*}/lib/cmake;${ROCM_PATH%-*}/$ASAN_LIBDIR;${ROCM_PATH%-*}/llvm;${ROCM_PATH%-*}" \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE:-'RelWithDebInfo'} \
-DCMAKE_SHARED_LINKER_FLAGS_INIT="-Wl,--enable-new-dtags,--rpath,$ROCM_ASAN_LIB_RPATH" \
-DCMAKE_EXE_LINKER_FLAGS_INIT="-Wl,--enable-new-dtags,--rpath,$ROCM_ASAN_EXE_RPATH" \
-DCMAKE_VERBOSE_MAKEFILE=1 \
-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=FALSE \
-DCMAKE_INSTALL_PREFIX="${ROCM_PATH}" \
-DCMAKE_PACKAGING_INSTALL_PREFIX="${ROCM_PATH}" \
-DBUILD_FILE_REORG_BACKWARD_COMPATIBILITY=OFF \
-DROCM_SYMLINK_LIBS=OFF \
-DCPACK_PACKAGING_INSTALL_PREFIX="${ROCM_PATH}" \
-DROCM_DISABLE_LDCONFIG=ON \
-DROCM_PATH="${ROCM_PATH}" \
-DCPACK_GENERATOR="${PKGTYPE^^}" \
-DCMAKE_CXX_COMPILER="${ROCM_PATH}/llvm/bin/clang++" \
-DCMAKE_C_COMPILER="${ROCM_PATH}/llvm/bin/clang" \
${LAUNCHER_FLAGS} \
-DPROFILER_ONLY=ON \
-DENABLE_ASAN_PACKAGING=true \
-DGPU_ARCH="${arch}" \
"$COMPONENT_SRC"
else
cmake -DBUILD_DEV=OFF \
-DCMAKE_PREFIX_PATH="${ROCM_PATH%-*}" \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_SHARED_LINKER_FLAGS_INIT='-Wl,--enable-new-dtags,--rpath,$ORIGIN' \
-DCMAKE_EXE_LINKER_FLAGS_INIT='-Wl,--enable-new-dtags,--rpath,$ORIGIN/../lib' \
-DCMAKE_VERBOSE_MAKEFILE=1 \
-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=FALSE \
-DCMAKE_INSTALL_PREFIX="${ROCM_PATH}" \
-DCMAKE_PACKAGING_INSTALL_PREFIX="${ROCM_PATH}" \
-DBUILD_FILE_REORG_BACKWARD_COMPATIBILITY=OFF \
-DROCM_SYMLINK_LIBS=OFF \
-DCPACK_PACKAGING_INSTALL_PREFIX="${ROCM_PATH}" \
-DROCM_DISABLE_LDCONFIG=ON \
-DROCM_PATH="${ROCM_PATH}" \
-DCPACK_GENERATOR="${PKGTYPE^^}" \
-DCMAKE_CXX_COMPILER="${ROCM_PATH}/llvm/bin/clang++" \
-DCMAKE_C_COMPILER="${ROCM_PATH}/llvm/bin/clang" \
${LAUNCHER_FLAGS} \
-DPROFILER_ONLY=ON \
-DGPU_ARCH="${arch}" \
"$COMPONENT_SRC"
fi
cmake --build . -- -j${PROC} package
cp ./*ckprofiler*.${PKGTYPE} $PACKAGE_DIR
rm -rf *
done
rm -rf _CPack_Packages/ && find -name '*.o' -delete
echo "Finished building Composable Kernel"
show_build_cache_stats
}
clean_miopen_ck() {
echo "Cleaning MIOpen-CK build directory: ${BUILD_DIR} ${PACKAGE_DIR}"
rm -rf "$BUILD_DIR" "$PACKAGE_DIR"
@@ -176,7 +77,7 @@ clean_miopen_ck() {
stage2_command_args "$@"
case $TARGET in
build) build_miopen_ck; build_miopen_ckProf;;
build) build_miopen_ck ;;
outdir) print_output_directory ;;
clean) clean_miopen_ck ;;
*) die "Invalid target $TARGET" ;;

View File

@@ -15,7 +15,7 @@ printUsage() {
type referred to by pkg_type"
echo " -h, --help Prints this help"
echo " -M, --skip_man_pages Do not build the 'docs' target"
echo " -s, --static Supports static CI by accepting this param & not bailing out. No effect of the param though"
echo " -s, --static Component/Build does not support static builds just accepting this param & ignore. No effect of the param on this build"
echo
echo "Possible values for <type>:"
echo " deb -> Debian format (default)"
@@ -65,7 +65,7 @@ do
set_asan_env_vars
set_address_sanitizer_on ;;
(-s | --static)
SHARED_LIBS="OFF" ;;
ack_and_skip_static ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; ((CLEAN_OR_OUT|=2)) ; shift 1 ;;
(-M | --skip_man_pages) DODOCSBUILD=false;;

View File

@@ -96,6 +96,7 @@ build_devicelibs() {
if [ ! -e Makefile ]; then
cmake $(rocm_cmake_params) \
$(rocm_common_cmake_params) \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DROCM_DEVICE_LIBS_BITCODE_INSTALL_LOC_NEW="$bitcodeInstallLoc/amdgcn" \
-DROCM_DEVICE_LIBS_BITCODE_INSTALL_LOC_OLD="amdgcn" \
"$DEVICELIBS_ROOT"

View File

@@ -24,14 +24,15 @@ printUsage() {
source "$(dirname "${BASH_SOURCE}")/compute_utils.sh"
MAKEOPTS="$DASH_JAY"
PROJ_NAME="hip-on-rocclr"
BUILD_PATH="$(getBuildPath hip-on-rocclr)"
BUILD_PATH="$(getBuildPath $PROJ_NAME)"
TARGET="build"
PACKAGE_ROOT="$(getPackageRoot)"
PACKAGE_SRC="$(getSrcPath)"
PACKAGE_DEB="$PACKAGE_ROOT/deb/hip-on-rocclr"
PACKAGE_RPM="$PACKAGE_ROOT/rpm/hip-on-rocclr"
PACKAGE_DEB="$PACKAGE_ROOT/deb/$PROJ_NAME"
PACKAGE_RPM="$PACKAGE_ROOT/rpm/$PROJ_NAME"
PREFIX_PATH="$PACKAGE_ROOT"
CORE_BUILD_DIR="$(getBuildPath hsa-core)"
ROCclr_BUILD_DIR="$(getBuildPath rocclr)"
@@ -52,7 +53,7 @@ MAKETARGET="deb"
PKGTYPE="deb"
OFFLOAD_ARCH=()
DEFAULT_OFFLOAD_ARCH=(gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942 gfx1030 gfx1031 gfx1033 gfx1034 gfx1035 gfx1100 gfx1101 gfx1102 gfx1103)
DEFAULT_OFFLOAD_ARCH=(gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942 gfx1030 gfx1031 gfx1033 gfx1034 gfx1035 gfx1100 gfx1101 gfx1102 gfx1200 gfx1201)
VALID_STR=`getopt -o hcrast:o: --long help,clean,release,address_sanitizer,static,offload-arch=:,outdir: -- "$@"`
eval set -- "$VALID_STR"
@@ -168,9 +169,11 @@ build_catch_tests() {
export ROCM_PATH="$ROCM_INSTALL_PATH"
cmake \
-DCMAKE_BUILD_TYPE="${BUILD_TYPE}" \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DHIP_PLATFORM=amd \
-DROCM_PATH="$ROCM_INSTALL_PATH" \
-DOFFLOAD_ARCH_STR="$OFFLOAD_ARCH_STR" \
$(rocm_cmake_params) \
$(rocm_common_cmake_params) \
-DCPACK_RPM_DEBUGINFO_PACKAGE=FALSE \
-DCPACK_DEBIAN_DEBUGINFO_PACKAGE=FALSE \
@@ -206,6 +209,8 @@ package_samples() {
export ROCM_PATH="$ROCM_INSTALL_PATH"
cmake \
-DROCM_PATH="$ROCM_INSTALL_PATH" \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
$(rocm_cmake_params) \
$(rocm_common_cmake_params) \
-DCMAKE_MODULE_PATH="$CMAKE_PATH/hip" \
-DCPACK_INSTALL_PREFIX="$ROCM_INSTALL_PATH" \

View File

@@ -0,0 +1,41 @@
#!/bin/bash
set -ex
source "$(dirname "${BASH_SOURCE[0]}")/compute_helper.sh"
set_component_src hipBLAS-common
build_hipblas-common() {
echo "Start build"
cd $COMPONENT_SRC
mkdir -p "$BUILD_DIR" && cd "$BUILD_DIR"
init_rocm_common_cmake_params
cmake \
"${rocm_math_common_cmake_params[@]}" \
"$COMPONENT_SRC"
cmake --build "$BUILD_DIR" -- install
cmake --build "$BUILD_DIR" -- package
rm -rf _CPack_Packages/ && find -name '*.o' -delete
mkdir -p $PACKAGE_DIR && cp ${BUILD_DIR}/*.${PKGTYPE} $PACKAGE_DIR
show_build_cache_stats
}
clean_hipblas-common() {
echo "Cleaning hipBLAS-common build directory: ${BUILD_DIR} ${PACKAGE_DIR}"
rm -rf "$BUILD_DIR" "$PACKAGE_DIR"
echo "Done!"
}
stage2_command_args "$@"
case $TARGET in
build) build_hipblas-common ;;
outdir) print_output_directory ;;
clean) clean_hipblas-common ;;
*) die "Invalid target $TARGET" ;;
esac

View File

@@ -10,6 +10,12 @@ build_hipblas() {
echo "Start build"
CXX="g++"
CXX_FLAG=
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
CXX="amdclang++"
CXX_FLAG="-DCMAKE_CXX_COMPILER=${ROCM_PATH}/llvm/bin/clang++"
fi
CLIENTS_SAMPLES="ON"
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
@@ -17,6 +23,8 @@ build_hipblas() {
CLIENTS_SAMPLES="OFF"
fi
SHARED_LIBS="ON"
echo "C compiler: $CC"
echo "CXX compiler: $CXX"
echo "FC compiler: $FC"
@@ -33,11 +41,12 @@ build_hipblas() {
${LAUNCHER_FLAGS} \
"${rocm_math_common_cmake_params[@]}" \
-DUSE_CUDA=OFF \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DBUILD_CLIENTS_TESTS=ON \
-DBUILD_CLIENTS_BENCHMARKS=ON \
-DBUILD_CLIENTS_SAMPLES="${CLIENTS_SAMPLES}" \
-DCPACK_SET_DESTDIR=OFF \
-DBUILD_ADDRESS_SANITIZER="${ADDRESS_SANITIZER}" \
${CXX_FLAG} \
"$COMPONENT_SRC"
cmake --build "$BUILD_DIR" -- -j${PROC}

View File

@@ -8,6 +8,10 @@ set_component_src hipBLASLt
build_hipblaslt() {
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
@@ -40,7 +44,6 @@ build_hipblaslt() {
-DBUILD_CLIENTS_SAMPLES=ON \
-DBUILD_CLIENTS_TESTS=ON \
-DBUILD_CLIENTS_BENCHMARKS=ON \
-DCPACK_SET_DESTDIR=OFF \
-DBUILD_ADDRESS_SANITIZER="${ADDRESS_SANITIZER}" \
"$COMPONENT_SRC"

View File

@@ -9,10 +9,12 @@ printUsage() {
echo "Options:"
echo " -a, --address_sanitizer Enable address sanitizer"
echo " -c, --clean Clean output and delete all intermediate work"
echo " -h, --help Prints this help"
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of
type referred to by pkg_type"
echo " -r, --release Makes a release build"
echo " -h, --help Prints this help"
echo
echo " -s, --static Build static lib (.a). build instead of dynamic/shared(.so) "
echo
return 0
@@ -25,25 +27,31 @@ PROJ_NAME=$API_NAME
TARGET="build"
MAKEOPTS="$DASH_JAY"
BUILD_TYPE="Debug"
SHARED_LIBS="ON"
BUILD_DIR=$(getBuildPath $API_NAME)
PACKAGE_DEB=$(getPackageRoot)/deb/$API_NAME
PACKAGE_RPM=$(getPackageRoot)/rpm/$API_NAME
PACKAGE_SRC="$(getSrcPath)"
while [ "$1" != "" ];
VALID_STR=`getopt -o hcraswo:p: --long help,clean,release,address_sanitizer,static,outdir,wheel:,package: -- "$@"`
eval set -- "$VALID_STR"
while true ;
do
case $1 in
case "$1" in
(-a | --address_sanitizer)
ack_and_ignore_asan ;;
(-c | --clean)
TARGET="clean" ;;
(-o | --outdir)
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 1 ;;
(-r | --release)
BUILD_TYPE="RelWithDebInfo" ;;
(-s | --static)
SHARED_LIBS="OFF" ;;
(-h | --help)
printUsage ; exit 0 ;;
--) shift; break;;
(*)
echo "Invalid option [$1]" >&2; printUsage; exit 1 ;;
esac
@@ -79,6 +87,7 @@ build() {
fi
cmake \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
$(rocm_cmake_params) \
$(rocm_common_cmake_params) \
-DHIPCC_BACKWARD_COMPATIBILITY=OFF \
@@ -87,7 +96,7 @@ build() {
popd
cmake --build "$BUILD_DIR" -- $MAKEOPTS
echo "Installing and Packaging hipcc"
cmake --build "$BUILD_DIR" -- $MAKEOPTS install
cmake --build "$BUILD_DIR" -- $MAKEOPTS package

View File

@@ -9,6 +9,10 @@ set_component_src hipCUB
build_hipcub() {
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
cd $COMPONENT_SRC
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
@@ -22,7 +26,7 @@ build_hipcub() {
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
fi
CXX=$(set_build_variables CXX)\

View File

@@ -21,7 +21,7 @@ build_hipfft() {
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908;gfx90a;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx908;gfx90a;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
fi
cmake \

View File

@@ -8,11 +8,18 @@ set_component_src hipfort
build_hipfort() {
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
mkdir -p "$BUILD_DIR" && cd "$BUILD_DIR"
cmake --trace \
cmake \
-DCPACK_PACKAGING_INSTALL_PREFIX=${ROCM_PATH}\
-DHIPFORT_INSTALL_DIR="${ROCM_PATH}" \
-DCMAKE_PREFIX_PATH="${ROCM_PATH}/llvm;${ROCM_PATH}" \
-DCMAKE_BUILD_TYPE=Release \
-DCPACK_SET_DESTDIR="OFF" \
-DCPACK_RPM_PACKAGE_RELOCATABLE="ON" \
-DHIPFORT_COMPILER="${ROCM_PATH}/${ROCM_LLVMDIR}/bin/flang" \
-DCMAKE_Fortran_FLAGS="-Mfree" \
-DHIPFORT_COMPILER_FLAGS="-cpp" \

View File

@@ -22,12 +22,12 @@ printUsage() {
TARGET="build"
MAKEOPTS="$DASH_JAY"
HIPIFY_CLANG_BUILD_DIR="$(getBuildPath $HIPIFY_ROOT)"
HIPIFY_CLANG_DIST_DIR="$HIPIFY_CLANG_BUILD_DIR/dist"
BUILD_TYPE="Debug"
PACKAGE_ROOT="$(getPackageRoot)"
HIPIFY_CLANG_HASH=""
LIGHTNING_PATH="$ROCM_INSTALL_PATH/llvm"
ADDRESS_SANITIZER=false
INSTALL_CLANG_HEADERS="OFF"
DEB_PATH="$(getDebPath hipify)"
RPM_PATH="$(getRpmPath hipify)"
SHARED_LIBS="ON"
@@ -53,7 +53,7 @@ do
set_address_sanitizer_on
ADDRESS_SANITIZER=true ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
ack_and_skip_static ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
--) shift; break;;
@@ -74,7 +74,6 @@ fi
clean_hipify() {
echo "Cleaning hipify-clang"
rm -rf "$HIPIFY_CLANG_BUILD_DIR"
rm -rf "$HIPIFY_CLANG_DIST_DIR"
rm -rf "$DEB_PATH"
rm -rf "$RPM_PATH"
}
@@ -101,16 +100,16 @@ package_hipify() {
build_hipify() {
echo "Building hipify-clang binaries"
mkdir -p "$HIPIFY_CLANG_BUILD_DIR"
mkdir -p "$HIPIFY_CLANG_DIST_DIR"
pushd "$HIPIFY_CLANG_BUILD_DIR"
cmake \
-DCMAKE_BUILD_TYPE="$BUILD_TYPE" \
$(rocm_common_cmake_params) \
-DCMAKE_INSTALL_PREFIX="$HIPIFY_CLANG_DIST_DIR" \
-DCMAKE_INSTALL_PREFIX="$ROCM_INSTALL_PATH" \
-DCPACK_PACKAGING_INSTALL_PREFIX=$ROCM_INSTALL_PATH \
-DCMAKE_PREFIX_PATH="$LIGHTNING_PATH" \
-DADDRESS_SANITIZER="$ADDRESS_SANITIZER" \
-DHIPIFY_INSTALL_CLANG_HEADERS="$INSTALL_CLANG_HEADERS" \
$HIPIFY_ROOT
cmake --build . -- $MAKEOPTS install

View File

@@ -21,6 +21,11 @@ done
build_hiprand() {
echo "Start build"
SHARED_LIBS="ON"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
SHARED_LIBS="OFF"
fi
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
@@ -34,17 +39,20 @@ build_hiprand() {
mkdir "$BUILD_DIR" && cd "$BUILD_DIR"
init_rocm_common_cmake_params
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
fi
CXX=$(set_build_variables CXX)\
cmake \
${LAUNCHER_FLAGS} \
$(rocm_common_cmake_params) \
"${rocm_math_common_cmake_params[@]}" \
-DAMDGPU_TARGETS=${GPU_TARGETS} \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DBUILD_TEST=ON \
-DBUILD_BENCHMARK=ON \
-DBUILD_CRUSH_TEST=ON \
@@ -60,7 +68,6 @@ build_hiprand() {
rm -rf _CPack_Packages/ && find -name '*.o' -delete
mkdir -p $PACKAGE_DIR && cp ${BUILD_DIR}/*.${PKGTYPE} $PACKAGE_DIR
}
clean_hiprand() {

View File

@@ -9,14 +9,23 @@ set_component_src hipSOLVER
build_hipsolver() {
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
CXX_FLAG="-DCMAKE_CXX_COMPILER=${ROCM_PATH}/llvm/bin/clang++"
fi
cd $COMPONENT_SRC
CXX="g++"
CXX="amdclang++"
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
fi
SHARED_LIBS="ON"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
SHARED_LIBS="OFF"
fi
echo "C compiler: $CC"
echo "CXX compiler: $CXX"
echo "FC compiler: $FC"
@@ -30,13 +39,15 @@ build_hipsolver() {
init_rocm_common_cmake_params
cmake \
-DUSE_CUDA=OFF \
-DCMAKE_CXX_COMPILER=${CXX} \
${LAUNCHER_FLAGS} \
"${rocm_math_common_cmake_params[@]}" \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DBUILD_CLIENTS_TESTS=ON \
-DBUILD_CLIENTS_BENCHMARKS=ON \
-DBUILD_CLIENTS_SAMPLES=ON \
-DCPACK_SET_DESTDIR=OFF \
-DBUILD_ADDRESS_SANITIZER="${ADDRESS_SANITIZER}" \
${CXX_FLAG} \
"$COMPONENT_SRC"
cmake --build "$BUILD_DIR" -- -j${PROC}

View File

@@ -10,14 +10,26 @@ set_component_src hipSPARSE
build_hipsparse() {
echo "Start build"
CXX="g++"
CXX_FLAG=
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
CXX="${ROCM_PATH}/llvm/bin/clang++"
CXX_FLAG="-DCMAKE_CXX_COMPILER=${ROCM_PATH}/llvm/bin/clang++"
fi
cd $COMPONENT_SRC
CXX="g++"
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
fi
SHARED_LIBS="ON"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
SHARED_LIBS="OFF"
fi
echo "C compiler: $CC"
echo "CXX compiler: $CXX"
@@ -25,15 +37,16 @@ build_hipsparse() {
init_rocm_common_cmake_params
cmake \
-DCPACK_SET_DESTDIR=OFF \
${LAUNCHER_FLAGS} \
"${rocm_math_common_cmake_params[@]}" \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DUSE_CUDA=OFF \
-DBUILD_CLIENTS_SAMPLES=ON \
-DBUILD_CLIENTS_TESTS=ON \
-DCMAKE_INSTALL_PREFIX=${ROCM_PATH} \
-DCMAKE_MODULE_PATH="${ROCM_PATH}/lib/cmake/hip;${ROCM_PATH}/hip/cmake" \
-DBUILD_ADDRESS_SANITIZER="${ADDRESS_SANITIZER}" \
${CXX_FLAG} \
"$COMPONENT_SRC"
cmake --build "$BUILD_DIR" -- -j${PROC}

View File

@@ -21,6 +21,10 @@ done
build_hipsparselt() {
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
@@ -50,7 +54,6 @@ build_hipsparselt() {
-DBUILD_CLIENTS_SAMPLES=ON \
-DBUILD_CLIENTS_TESTS=ON \
-DBUILD_CLIENTS_BENCHMARKS=ON \
-DCPACK_SET_DESTDIR=OFF \
-DCMAKE_INSTALL_PREFIX=${ROCM_PATH} \
-DBUILD_ADDRESS_SANITIZER="${ADDRESS_SANITIZER}" \
"$COMPONENT_SRC"

View File

@@ -9,6 +9,10 @@ set_component_src hipTensor
build_hiptensor() {
echo "Start build hipTensor"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
@@ -18,7 +22,6 @@ build_hiptensor() {
mkdir -p "$BUILD_DIR" && cd "$BUILD_DIR"
init_rocm_common_cmake_params
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else

View File

@@ -1,135 +0,0 @@
#!/bin/bash
source "$(dirname "${BASH_SOURCE}")/compute_utils.sh"
printUsage() {
echo
echo "Usage: $(basename "${BASH_SOURCE}") [options ...] [make options]"
echo
echo "Options:"
echo " -c, --clean Clean output and delete all intermediate work"
echo " -r, --release Make a release build instead of a debug build"
echo " -a, --address_sanitizer Enable address sanitizer"
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of type referred to by pkg_type"
echo " -h, --help Prints this help"
echo " -s, --static Build static lib (.a). build instead of dynamic/shared(.so) "
echo
echo
return 0
}
TARGET="build"
PACKAGE_ROOT="$(getPackageRoot)"
PACKAGE_SRC="$(getSrcPath)"
PACKAGE_LIB="$(getLibPath)"
PACKAGE_BIN="$(getBinPath)"
PACKAGE_DEB="$(getPackageRoot)/deb/rocr"
PACKAGE_RPM="$(getPackageRoot)/rpm/rocr"
MAKEARG=""
CORE_BUILD_DIR="$(getBuildPath hsa-core)"
ROCR_DEV_BUILD_DIR="$(getBuildPath hsa-rocr-dev)"
PREFIX_PATH="$PACKAGE_ROOT"
BUILD_TYPE="Debug"
SHARED_LIBS="ON"
CLEAN_OR_OUT=0;
MAKETARGET="deb"
PKGTYPE="deb"
unset HIP_DEVICE_LIB_PATH
unset ROCM_PATH
VALID_STR=`getopt -o hcraso: --long help,clean,release,static,address_sanitizer,outdir: -- "$@"`
eval set -- "$VALID_STR"
while true ;
do
case "$1" in
(-h | --help)
printUsage ; exit 0;;
(-c | --clean)
TARGET="clean" ; ((CLEAN_OR_OUT|=1)) ; shift ;;
(-r | --release)
BUILD_TYPE="RelWithDebInfo" ; shift ;;
(-a | --address_sanitizer)
set_asan_env_vars
set_address_sanitizer_on ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
--) shift; break;;
(*)
echo " This should never come but just incase : UNEXPECTED ERROR Parm : [$1] ">&2 ; exit 20;;
esac
done
RET_CONFLICT=1
check_conflicting_options $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
if [ $RET_CONFLICT -ge 30 ]; then
print_vars $API_NAME $TARGET $BUILD_TYPE $SHARED_LIBS $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
exit $RET_CONFLICT
fi
clean_hsa() {
echo "Cleaning HSA"
rm -rf "$CORE_BUILD_DIR"
rm -rf "$PACKAGE_RPM"
rm -rf "$PACKAGE_DEB"
rm -f "$PACKAGE_ROOT"/lib/libhsa-runtime*
rm -rf "$PACKAGE_ROOT/lib/cmake/hsa-runtime64"
rm -rf "$PACKAGE_ROOT/include/hsa"
rm -rf "$PACKAGE_ROOT/share/doc/hsa-runtime64"
rm -rf "$PACKAGE_ROOT/hsa"
}
build_hsa_core() {
echo "Build HSA"
local coreMakeOpts="$DASH_JAY -C $CORE_BUILD_DIR"
echo "$HSA_CORE_ROOT"
if [ ! -d "$CORE_BUILD_DIR" ]; then
mkdir -p "$CORE_BUILD_DIR"
pushd "$CORE_BUILD_DIR"
print_lib_type $SHARED_LIBS
cmake $(rocm_cmake_params) \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DENABLE_LDCONFIG=OFF \
$(rocm_common_cmake_params) \
-DADDRESS_SANITIZER="$ADDRESS_SANITIZER" \
"$HSA_CORE_ROOT"
popd
fi
time cmake --build "$CORE_BUILD_DIR" -- $coreMakeOpts
time cmake --build "$CORE_BUILD_DIR" -- $coreMakeOpts install
time cmake --build "$CORE_BUILD_DIR" -- $coreMakeOpts package
copy_if DEB "${CPACKGEN:-"DEB;RPM"}" "$PACKAGE_DEB" $CORE_BUILD_DIR/hsa-rocr*.deb
copy_if RPM "${CPACKGEN:-"DEB;RPM"}" "$PACKAGE_RPM" $CORE_BUILD_DIR/hsa-rocr*.rpm
}
print_output_directory() {
case ${PKGTYPE} in
("deb")
echo ${PACKAGE_DEB};;
("rpm")
echo ${PACKAGE_RPM};;
(*)
echo "Invalid package type \"${PKGTYPE}\" provided for -o" >&2; exit 1;;
esac
exit
}
case $TARGET in
(clean) clean_hsa ;;
(build) build_hsa_core;;
(outdir) print_output_directory ;;
(*) die "Invalid target $TARGET" ;;
esac
echo "Operation complete"

View File

@@ -11,7 +11,6 @@ printUsage() {
echo "Usage: $(basename "${BASH_SOURCE}") [options ...]"
echo
echo "Options:"
echo " -t, --alt Build the 'alt' variant"
echo " -c, --clean Clean output and delete all intermediate work"
echo " -d, --debug Build a debug version of llvm (excludes packaging)"
echo " -r, --release Build a release version of the package"
@@ -33,35 +32,26 @@ printUsage() {
return 0
}
PROJ_NAME="lightning"
ROCM_LLVM_LIB_RPATH='\$ORIGIN'
ROCM_LLVM_EXE_RPATH='\$ORIGIN/../lib:\$ORIGIN/../../../lib'
PACKAGE_OUT="$(getPackageRoot)"
BUILD_PATH="$(getBuildPath lightning)"
DEB_PATH="$(getDebPath lightning)"
RPM_PATH="$(getRpmPath lightning)"
BUILD_PATH="$(getBuildPath $PROJ_NAME)"
DEB_PATH="$(getDebPath $PROJ_NAME)"
RPM_PATH="$(getRpmPath $PROJ_NAME)"
INSTALL_PATH="${ROCM_INSTALL_PATH}/lib/llvm"
LLVM_ROOT_LCL="${LLVM_ROOT}"
ROCM_WHEEL_DIR="${BUILD_PATH}/_wheel"
TARGET="all"
MAKEOPTS="$DASH_JAY"
BUILD_TYPE="Release"
case "${JOB_NAME}" in
( *"rel"* | \
*"afar"* | \
*"nfar"* )
ENABLE_ASSERTIONS=0 ;;
( * )
ENABLE_ASSERTIONS=1 ;;
esac
ENABLE_ASSERTIONS=0
SHARED_LIBS="ON"
BUILD_LLVM_DYLIB="OFF"
FLANG_NEW=0
BUILD_ALT=0
CLEAN_OR_OUT=0;
PKGTYPE="deb"
MAKETARGET="deb"
@@ -74,10 +64,10 @@ BUILD_MANPAGES="ON"
STATIC_FLAG=
SANITIZER_AMDGPU=1
HSA_INC_PATH="$WORK_ROOT/ROCR-Runtime/src/inc"
COMGR_INC_PATH="$WORK_ROOT/llvm-project/amd/comgr/include"
HSA_INC_PATH="$WORK_ROOT/ROCR-Runtime/runtime/hsa-runtime/inc/"
COMGR_INC_PATH="$COMGR_ROOT/include"
VALID_STR=`getopt -o htcV:v:draAswlo:BPNM --long help,alt,clean,assert_llvm_ver_major:,assert_llvm_ver_minor:,debug,release,address_sanitizer,no_address_sanitizer,static,build_llvm_static,wheel,build,package,skip_lit_tests,skip_man_pages,outdir: -- "$@"`
VALID_STR=`getopt -o hcV:v:draAswlo:BPNM --long help,clean,assert_llvm_ver_major:,assert_llvm_ver_minor:,debug,release,address_sanitizer,no_address_sanitizer,static,build_llvm_static,wheel,build,package,skip_lit_tests,skip_man_pages,outdir: -- "$@"`
eval set -- "$VALID_STR"
set_dwarf_version(){
@@ -96,11 +86,10 @@ set_dwarf_version(){
while true ;
do
#echo "processing $1"
case "$1" in
(-h | --help)
printUsage ; exit 0;;
(-t | --alt)
BUILD_ALT=1 ; shift ;;
(-c | --clean)
TARGET="clean" ; ((CLEAN_OR_OUT|=1)) ; shift ;;
(-V | --assert_llvm_ver_major)
@@ -115,7 +104,7 @@ do
set_dwarf_version
SANITIZER_AMDGPU=1 ;
HSA_INC_PATH="$WORK_ROOT/hsa/runtime/opensrc/hsa-runtime/inc" ;
COMGR_INC_PATH="$WORK_ROOT/external/llvm-project/amd/comgr/include" ; shift ;;
COMGR_INC_PATH="$COMGR_ROOT/include" ; shift ;;
(-A | --no_address_sanitizer)
SANITIZER_AMDGPU=0 ;
unset HSA_INC_PATH ;
@@ -155,24 +144,11 @@ LLVM_PROJECTS="clang;lld;clang-tools-extra"
ENABLE_RUNTIMES="compiler-rt;libunwind"
BOOTSTRAPPING_BUILD_LIBCXX=0
BUILD_AMDCLANG="ON"
if [ $BUILD_ALT -eq 1 ]; then
BUILD_PATH="${BUILD_PATH}-alt"
DEB_PATH="${DEB_PATH}-alt"
RPM_PATH="${RPM_PATH}-alt"
INSTALL_PATH="${INSTALL_PATH}/alt"
LLVM_ROOT_LCL="${LLVM_ALT_ROOT}"
BUILD_AMDCLANG="OFF"
BUILD_MANPAGES="OFF"
SANITIZER_AMDGPU=0
unset HSA_INC_PATH
unset COMGR_INC_PATH
else
ENABLE_RUNTIMES="$ENABLE_RUNTIMES;libcxx;libcxxabi";
BOOTSTRAPPING_BUILD_LIBCXX=1
fi
ENABLE_RUNTIMES="$ENABLE_RUNTIMES;libcxx;libcxxabi"
BOOTSTRAPPING_BUILD_LIBCXX=1
clean_lightning() {
rm -rf "$ROCM_WHEEL_DIR"
rm -rf "$BUILD_PATH"
rm -rf "$DEB_PATH"
rm -rf "$RPM_PATH"
@@ -188,22 +164,14 @@ setup_llvm_info() {
local LLVM_URL_BRANCH
if [[ "${JOB_NAME}" == *rel* ]]; then
if [ $BUILD_ALT -eq 1 ]; then
LLVM_URL_BRANCH=$(git rev-parse HEAD)
else
LLVM_URL_NAME="https://github.com/RadeonOpenCompute/llvm-project"
LLVM_BRANCH_NAME="roc-${ROCM_VERSION}"
LLVM_URL_BRANCH="${LLVM_URL_NAME} ${LLVM_BRANCH_NAME}"
fi
else
LLVM_REMOTE_NAME=$(git remote)
LLVM_URL_NAME=$(git config --get remote."${LLVM_REMOTE_NAME}".url)
if [ $BUILD_ALT -eq 1 ]; then
LLVM_BRANCH_NAME=$(repo manifest | sed -n 's/.*path="external\/llvm-project-alt\/llvm-project".* upstream="\([^"]*\)".*/\1/p' )
else
LLVM_REMOTE_NAME=$(git remote)
LLVM_URL_NAME=$(git config --get remote."${LLVM_REMOTE_NAME}".url)
LLVM_BRANCH_NAME=$(repo manifest | sed -n 's/.*path="external\/llvm-project".* upstream="\([^"]*\)".*/\1/p' )
fi
LLVM_URL_BRANCH="${LLVM_URL_NAME} ${LLVM_BRANCH_NAME}"
LLVM_URL_BRANCH="${LLVM_URL_NAME} ${LLVM_BRANCH_NAME}"
fi
LLVM_COMMIT_GITDATE=$(git show -s --format=@%ct | xargs | date -f - --utc +%y%U%w)
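The LLVM_COMMIT_GITDATE line above turns the commit's Unix timestamp into a compact year/week/weekday stamp that later feeds the package version. Broken into steps (run inside a git checkout; the example output is only indicative):

# "git show -s --format=@%ct" prints "@<unix-timestamp>" for the checked-out commit;
# GNU date accepts the "@<epoch>" form and, with "-f -", reads it from stdin.
ts=$(git show -s --format=@%ct | xargs)   # e.g. "@1734470950"
date -f - --utc +%y%U%w <<< "$ts"         # e.g. "24502" for a Tuesday in mid-December 2024
# %y = two-digit year, %U = Sunday-based week of year, %w = day of week (0 = Sunday)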
@@ -283,24 +251,27 @@ build_lightning() {
mkdir -p "$BUILD_PATH"
pushd "$BUILD_PATH"
eval EXTRA_LLVM_CMAKE_PARAMS_ARRAY=($EXTRA_LLVM_CMAKE_PARAMS)
if [ ! -e Makefile ]; then
echo "Building LLVM CMake environment"
if [ -e "$LLVM_ROOT_LCL/../flang/AFARrelease" ]; then
LLVM_PROJECTS="$LLVM_PROJECTS;mlir"
if [ -e "$LLVM_ROOT_LCL/../flang/EnableFlangBuild" ]; then
FLANG_NEW=1
LLVM_PROJECTS="$LLVM_PROJECTS;flang;mlir"
LLVM_PROJECTS="$LLVM_PROJECTS;flang"
ENABLE_RUNTIMES="$ENABLE_RUNTIMES;openmp";
else
if [[ "${JOB_NAME}" != *afar* ]] && [ -e "$LLVM_ROOT_LCL/../flang/DoROCmRelease" ]; then
FLANG_NEW=1
LLVM_PROJECTS="$LLVM_PROJECTS;flang;mlir"
else
echo "NOT building project flang"
fi
if [[ "${JOB_NAME}" != *afar* ]] && [ -e "$LLVM_ROOT_LCL/../flang/DoROCmRelease" ]; then
FLANG_NEW=1
LLVM_PROJECTS="$LLVM_PROJECTS;flang"
else
echo "NOT building project flang"
fi
fi
set -x
cmake $(rocm_cmake_params) ${GEN_NINJA} \
${STATIC_FLAG} \
${PYTHON_VERSION_WORKAROUND} \
-DCMAKE_INSTALL_PREFIX="$INSTALL_PATH" \
-DLLVM_TARGETS_TO_BUILD="AMDGPU;X86" \
-DLLVM_ENABLE_PROJECTS="$LLVM_PROJECTS" \
@@ -342,6 +313,7 @@ build_lightning() {
-DCLANG_LINK_FLANG_LEGACY=ON \
-DCMAKE_CXX_STANDARD=17 \
-DFLANG_INCLUDE_DOCS=OFF \
"${EXTRA_LLVM_CMAKE_PARAMS_ARRAY[@]}" \
"$LLVM_ROOT_LCL"
set +x
echo "CMake complete"
@@ -358,28 +330,11 @@ build_lightning() {
echo "End Workaround for race condition"
cmake --build . -- $MAKEOPTS
case "$DISTRO_ID" in
(rhel*|centos*)
RHEL_BUILD=1
;;
(*)
RHEL_BUILD=0
;;
esac
if [ $SKIP_LIT_TESTS -eq 0 ]; then
if [ $RHEL_BUILD -eq 1 ] && [ $BUILD_ALT != 1 ]; then
if [ $FLANG_NEW -eq 1 ]; then
cmake --build . -- $MAKEOPTS check-lld check-mlir
else
cmake --build . -- $MAKEOPTS check-lld
fi
elif [ "$DISTRO_NAME" != "sles" ] && [ $BUILD_ALT != 1 ]; then
if [ $FLANG_NEW -eq 1 ]; then
cmake --build . -- $MAKEOPTS check-llvm check-clang check-lld check-mlir
else
cmake --build . -- $MAKEOPTS check-llvm check-clang check-lld
fi
if [ $RHEL_BUILD -eq 1 ]; then
cmake --build . -- $MAKEOPTS check-lld check-mlir
elif [ "$DISTRO_NAME" != "sles" ]; then
cmake --build . -- $MAKEOPTS check-llvm check-clang check-lld check-mlir
fi
fi
cmake --build . -- $MAKEOPTS clang-tidy
@@ -396,23 +351,15 @@ package_lightning_dynamic(){
get_llvm_version
local llvmParsedVersion="${LLVM_VERSION_MAJOR}.${LLVM_VERSION_MINOR}.${LLVM_VERSION_PATCH}"
local packageName="rocm-llvm"
local packageSummary="ROCm compiler"
local packageSummaryLong="ROCm compiler based on LLVM $llvmParsedVersion"
local installPath="$ROCM_INSTALL_PATH/lib/llvm/"
if [ $BUILD_ALT -eq 1 ]; then
local packageName="rocm-llvm-alt"
local packageSummary="Proprietary ROCm compiler"
local packageSummaryLong="ROCm compiler, including proprietary optimizations, based on LLVM $llvmParsedVersion"
local installPath="$ROCM_INSTALL_PATH/lib/llvm/alt"
else
local packageName="rocm-llvm"
local packageSummary="ROCm compiler"
local packageSummaryLong="ROCm compiler based on LLVM $llvmParsedVersion"
local installPath="$ROCM_INSTALL_PATH/lib/llvm/"
if [ "$BUILD_LLVM_DYLIB" == "ON" ] ; then
local packageNameCore="rocm-llvm-core"
local packageSummaryCore="ROCm core compiler dylibs"
local packageSummaryLongCore="ROCm compiler based on LLVM $llvmParsedVersion"
fi
if [ "$BUILD_LLVM_DYLIB" == "ON" ] ; then
local packageNameCore="rocm-llvm-core"
local packageSummaryCore="ROCm core compiler dylibs"
local packageSummaryLongCore="ROCm compiler based on LLVM $llvmParsedVersion"
fi
local packageArch="amd64"
@@ -433,9 +380,6 @@ package_lightning_dynamic(){
local prermFile="$packageDeb/DEBIAN/prerm"
local specFile="$packageDir/$packageName.spec"
local debDependencies="python3, libc6, libstdc++6|libstdc++8, libstdc++-5-dev|libstdc++-7-dev|libstdc++-11-dev, libgcc-5-dev|libgcc-7-dev|libgcc-11-dev, rocm-core"
if [ $BUILD_ALT -eq 1 ]; then
debDependencies="${debDependencies}, rocm-llvm"
fi
local debRecommends="gcc, g++, gcc-multilib, g++-multilib"
local packageRpm="$packageDir/rpm"
@@ -508,42 +452,33 @@ package_lightning_dynamic(){
debDependencies="${debDependencies}, ${packageNameCore}"
fi
if [ $BUILD_ALT -eq 0 ] ; then
cp -r "$LLVM_ROOT_LCL/LICENSE.TXT" "$packageDeb/$licenseDir"
else
cp -r "$LLVM_PROJECT_ALT_ROOT/EULA" "$packageDeb/$licenseDir"
cp -r "$LLVM_PROJECT_ALT_ROOT/DISCLAIMER.txt" "$packageDeb/$licenseDir"
fi
cp -r "$LLVM_ROOT_LCL/LICENSE.TXT" "$packageDeb/$licenseDir"
cp -r "$distBin" "$packageDeb/$installPath/bin"
cp -r "$distInc" "$packageDeb/$installPath/include"
cp -r "$distLib" "$packageDeb/$installPath/lib"
if [ "$BUILD_MANPAGES" == "ON" ]; then
if [ $BUILD_ALT -eq 0 ]; then
for i in "${man_pages[@]}"; do
gzip -f "$distMan/man1/$i"
for i in "${man_pages[@]}"; do
gzip -f "$distMan/man1/$i"
done
if [ -f "$distMan/man1/clang.1.gz" ]; then
for i in "${amd_man_pages[@]}"; do
ln -sf "clang.1.gz" "$distMan/man1/$i"
done
if [ -f "$distMan/man1/clang.1.gz" ]; then
for i in "${amd_man_pages[@]}"; do
ln -sf "clang.1.gz" "$distMan/man1/$i"
done
fi
fi
fi
cp -r "$distMan" "$packageDeb/$installPath/share"
if [ $BUILD_ALT -eq 0 ]; then
touch "$postinstFile" "$prermFile"
echo "mkdir -p \"$ROCM_INSTALL_PATH/bin\"" >> $postinstFile
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$packageDeb/$installPath/bin/$i" ]; then
echo "ln -s \"../lib/llvm/bin/$i\" \"$ROCM_INSTALL_PATH/bin/$i\"" >> $postinstFile
echo "rm -f \"$ROCM_INSTALL_PATH/bin/$i\"" >> $prermFile
fi
done
echo "rmdir --ignore-fail-on-non-empty \"$ROCM_INSTALL_PATH/bin\"" >> $prermFile
chmod 0555 "$postinstFile" "$prermFile"
cp -P "$backwardsCompatibleSymlink" "$packageDeb/$ROCM_INSTALL_PATH"
fi
touch "$postinstFile" "$prermFile"
echo "mkdir -p \"$ROCM_INSTALL_PATH/bin\"" >> $postinstFile
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$packageDeb/$installPath/bin/$i" ]; then
echo "ln -s \"../lib/llvm/bin/$i\" \"$ROCM_INSTALL_PATH/bin/$i\"" >> $postinstFile
echo "rm -f \"$ROCM_INSTALL_PATH/bin/$i\"" >> $prermFile
fi
done
echo "rmdir --ignore-fail-on-non-empty \"$ROCM_INSTALL_PATH/bin\"" >> $prermFile
chmod 0555 "$postinstFile" "$prermFile"
cp -P "$backwardsCompatibleSymlink" "$packageDeb/$ROCM_INSTALL_PATH"
echo "Package: $packageName" > $controlFile
echo "Architecture: $packageArch" >> $controlFile
@@ -613,16 +548,12 @@ package_lightning_dynamic(){
echo "Release: ${JOB_DESIGNATOR}${SLES_BUILD_ID_PREFIX}${BUILD_ID}%{?dist}" >> $specFile
echo "Summary: $packageSummary" >> $specFile
echo "Group: System Environment/Libraries" >> $specFile
if [ $BUILD_ALT -eq 1 ]; then
echo "License: AMD Proprietary" >> $specFile
else
echo "License: ASL 2.0 with exceptions" >> $specFile
fi
echo "License: ASL 2.0 with exceptions" >> $specFile
echo "Requires: $rpmRequires" >> $specFile
if [ $BUILD_ALT -eq 1 ]; then
echo "%define _build_id_links none" >> $specFile
fi
# The Recommends line below is commented out because CentOS 7 ships an rpm
# version that does not understand it. Once CentOS 7 is no longer supported,
# a proper Recommends line should be added.
#echo "Recommends: $rpmRecommends" >> $specFile
echo "%description" >> $specFile
echo "$packageSummaryLong" >> $specFile
@@ -638,28 +569,20 @@ package_lightning_dynamic(){
echo "mkdir -p \$RPM_BUILD_ROOT/$installPath/share/man" >> $specFile
echo "mkdir -p \$RPM_BUILD_ROOT/$licenseDir" >> $specFile
if [ $BUILD_ALT -eq 0 ]; then
echo "cp -R $LLVM_ROOT_LCL/LICENSE.TXT \$RPM_BUILD_ROOT/$licenseDir" >> $specFile
echo "cp -P $backwardsCompatibleSymlink \$RPM_BUILD_ROOT/$ROCM_INSTALL_PATH" >> $specFile
else
echo "cp -R $LLVM_PROJECT_ALT_ROOT/EULA \$RPM_BUILD_ROOT/$licenseDir" >> $specFile
echo "cp -R $LLVM_PROJECT_ALT_ROOT/DISCLAIMER.txt \$RPM_BUILD_ROOT/$licenseDir" >> $specFile
fi
echo "cp -R $LLVM_ROOT_LCL/LICENSE.TXT \$RPM_BUILD_ROOT/$licenseDir" >> $specFile
echo "cp -P $backwardsCompatibleSymlink \$RPM_BUILD_ROOT/$ROCM_INSTALL_PATH" >> $specFile
echo "cp -R $distBin \$RPM_BUILD_ROOT/$installPath" >> $specFile
echo "cp -R $distInc \$RPM_BUILD_ROOT/$installPath" >> $specFile
echo "cp -R $distLib \$RPM_BUILD_ROOT/$installPath" >> $specFile
if [ "$BUILD_MANPAGES" == "ON" ]; then
if [ $BUILD_ALT -eq 0 ]; then
for i in "${man_pages[@]}"; do
echo "gzip -f $distMan/man1/$i" >> $specFile
done
if [ -f "$distMan/man1/clang.1.gz" ]; then
for i in "${amd_man_pages[@]}"; do
echo "ln -sf clang.1.gz \"$distMan/man1/$i\"" >> $specFile
done
fi
fi
for i in "${man_pages[@]}"; do
echo "gzip -f $distMan/man1/$i" >> $specFile
done
if [ -f "$distMan/man1/clang.1.gz" ]; then
for i in "${amd_man_pages[@]}"; do
echo "ln -sf clang.1.gz \"$distMan/man1/$i\"" >> $specFile
done
fi
fi
echo "cp -R $distMan \$RPM_BUILD_ROOT/$installPath/share" >> $specFile
@@ -676,25 +599,20 @@ package_lightning_dynamic(){
echo "$ROCM_INSTALL_PATH" >> $specFile
echo "%post" >> $specFile
if [ $BUILD_ALT -eq 0 ]; then
echo "mkdir -p \"$ROCM_INSTALL_PATH/bin\"" >> $specFile
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$distBin/$i" ]; then
echo "ln -sf ../lib/llvm/bin/$i \"$ROCM_INSTALL_PATH/bin/$i\"" >> $specFile
fi
done
fi
echo "mkdir -p \"$ROCM_INSTALL_PATH/bin\"" >> $specFile
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$distBin/$i" ]; then
echo "ln -sf ../lib/llvm/bin/$i \"$ROCM_INSTALL_PATH/bin/$i\"" >> $specFile
fi
done
echo "%preun" >> $specFile
if [ $BUILD_ALT -eq 0 ]; then
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$distBin/$i" ]; then
echo "rm -f \"$ROCM_INSTALL_PATH/bin/$i\"" >> $specFile
fi
done
echo "rmdir --ignore-fail-on-non-empty \"$ROCM_INSTALL_PATH/bin\"" >> $specFile
fi
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$distBin/$i" ]; then
echo "rm -f \"$ROCM_INSTALL_PATH/bin/$i\"" >> $specFile
fi
done
echo "rmdir --ignore-fail-on-non-empty \"$ROCM_INSTALL_PATH/bin\"" >> $specFile
echo "%postun" >> $specFile
rpmbuild --define "_topdir $packageRpm" -ba $specFile
@@ -711,32 +629,17 @@ package_lightning_static() {
get_llvm_version
local llvmParsedVersion="${LLVM_VERSION_MAJOR}.${LLVM_VERSION_MINOR}.${LLVM_VERSION_PATCH}"
if [ $BUILD_ALT -eq 1 ]; then
local packageName="rocm-llvm-alt"
local packageSummary="Proprietary ROCm core compiler"
local packageSummaryLong="ROCm core compiler, including proprietary optimizations based on LLVM $llvmParsedVersion"
if [ "$PACKAGEEXT" = "deb" ]; then
local packageNameExtra="rocm-llvm-alt-dev"
else
local packageNameExtra="rocm-llvm-alt-devel"
fi
local packageSummaryExtra="Proprietary ROCm compiler dev tools"
local packageSummaryLongExtra="ROCm compiler dev tools and documentation, including proprietary optimizations, based on LLVM $llvmParsedVersion"
local installPath="$ROCM_INSTALL_PATH/lib/llvm/alt"
local packageName="rocm-llvm"
local packageSummary="ROCm core compiler"
local packageSummaryLong="ROCm core compiler based on LLVM $llvmParsedVersion"
if [ "$PACKAGEEXT" = "deb" ]; then
local packageNameExtra="rocm-llvm-dev"
else
local packageName="rocm-llvm"
local packageSummary="ROCm core compiler"
local packageSummaryLong="ROCm core compiler based on LLVM $llvmParsedVersion"
if [ "$PACKAGEEXT" = "deb" ]; then
local packageNameExtra="rocm-llvm-dev"
else
local packageNameExtra="rocm-llvm-devel"
fi
local packageSummaryExtra="ROCm compiler dev tools"
local packageSummaryLongExtra="ROCm compiler dev tools and documentation, based on LLVM $llvmParsedVersion"
local installPath="$ROCM_INSTALL_PATH/lib/llvm/"
local packageNameExtra="rocm-llvm-devel"
fi
local packageSummaryExtra="ROCm compiler dev tools"
local packageSummaryLongExtra="ROCm compiler dev tools and documentation, based on LLVM $llvmParsedVersion"
local installPath="$ROCM_INSTALL_PATH/lib/llvm/"
local packageArch="amd64"
local packageVersion="${llvmParsedVersion}.${LLVM_COMMIT_GITDATE}"
@@ -746,7 +649,6 @@ package_lightning_static() {
local distLib="$INSTALL_PATH/lib"
local distMan="$INSTALL_PATH/share/man"
local licenseDir="$ROCM_INSTALL_PATH/share/doc/$packageName"
local licenseDirExtra="$ROCM_INSTALL_PATH/share/doc/$packageNameExtra"
local packageDir="$BUILD_PATH/package"
local backwardsCompatibleSymlink="$ROCM_INSTALL_PATH/llvm"
@@ -756,9 +658,6 @@ package_lightning_static() {
local prermFile="$packageDeb/DEBIAN/prerm"
local specFile="$packageDir/$packageName.spec"
local debDependencies="python3, libc6, libstdc++6|libstdc++8, libstdc++-5-dev|libstdc++-7-dev|libstdc++-11-dev, libgcc-5-dev|libgcc-7-dev|libgcc-11-dev, rocm-core"
if [ $BUILD_ALT -eq 1 ]; then
debDependencies="${debDependencies}, rocm-llvm"
fi
local debRecommends="gcc, g++, gcc-multilib, g++-multilib"
local packageRpm="$packageDir/rpm"
@@ -767,10 +666,6 @@ package_lightning_static() {
local specFileExtra="$packageDir/$packageNameExtra.spec"
local rpmRequires="rocm-core"
local rpmRequiresExtra="rocm-core, $packageName"
if [ $BUILD_ALT -eq 1 ]; then
rpmRequires+=", rocm-llvm"
rpmRequiresExtra+=", rocm-llvm-devel"
fi
local rpmRecommends="gcc, gcc-c++, devtoolset-7-gcc-c++"
rm -rf "$packageDir"
@@ -807,12 +702,7 @@ package_lightning_static() {
mkdir -p "$DEB_PATH"
mkdir -p "$packageDeb/$licenseDir"
if [ $BUILD_ALT -eq 0 ] ; then
cp -r "$LLVM_ROOT_LCL/LICENSE.TXT" "$packageDeb/$licenseDir"
else
cp -r "$LLVM_PROJECT_ALT_ROOT/EULA" "$packageDeb/$licenseDir"
cp -r "$LLVM_PROJECT_ALT_ROOT/DISCLAIMER.txt" "$packageDeb/$licenseDir"
fi
cp -r "$LLVM_ROOT_LCL/LICENSE.TXT" "$packageDeb/$licenseDir"
mkdir -p "$packageDeb/$installPath/bin"
for i in "${core_bin[@]}"; do
@@ -838,36 +728,32 @@ package_lightning_static() {
done
if [ "$BUILD_MANPAGES" == "ON" ]; then
if [ $BUILD_ALT -eq 0 ]; then
mkdir -p "$packageDeb/$installPath/share/man1"
for i in "${core_man_pages[@]}"; do
if [ -f "$distMan/man1/$i" ]; then
gzip -f "$distMan/man1/$i"
cp -d "$distMan/man1/${i}.gz" "$packageDeb/$installPath/share/man1/"
fi
done
if [ -f "$distMan/man1/clang.1.gz" ]; then
for i in "${amd_man_pages[@]}"; do
ln -sf "clang.1.gz" "$distMan/man1/$i"
cp -d "$distMan/man1/${i}" "$packageDeb/$installPath/share/man1/"
done
mkdir -p "$packageDeb/$installPath/share/man1"
for i in "${core_man_pages[@]}"; do
if [ -f "$distMan/man1/$i" ]; then
gzip -f "$distMan/man1/$i"
cp -d "$distMan/man1/${i}.gz" "$packageDeb/$installPath/share/man1/"
fi
done
if [ -f "$distMan/man1/clang.1.gz" ]; then
for i in "${amd_man_pages[@]}"; do
ln -sf "clang.1.gz" "$distMan/man1/$i"
cp -d "$distMan/man1/${i}" "$packageDeb/$installPath/share/man1/"
done
fi
fi
if [ $BUILD_ALT -eq 0 ]; then
touch "$postinstFile" "$prermFile"
echo "mkdir -p \"$ROCM_INSTALL_PATH/bin\"" >> $postinstFile
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$packageDeb/$installPath/bin/$i" ]; then
echo "ln -s \"../lib/llvm/bin/$i\" \"$ROCM_INSTALL_PATH/bin/$i\"" >> $postinstFile
echo "rm -f \"$ROCM_INSTALL_PATH/bin/$i\"" >> $prermFile
fi
done
echo "rmdir --ignore-fail-on-non-empty \"$ROCM_INSTALL_PATH/bin\"" >> $prermFile
chmod 0555 "$postinstFile" "$prermFile"
cp -P "$backwardsCompatibleSymlink" "$packageDeb/$ROCM_INSTALL_PATH"
fi
touch "$postinstFile" "$prermFile"
echo "mkdir -p \"$ROCM_INSTALL_PATH/bin\"" >> $postinstFile
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$packageDeb/$installPath/bin/$i" ]; then
echo "ln -s \"../lib/llvm/bin/$i\" \"$ROCM_INSTALL_PATH/bin/$i\"" >> $postinstFile
echo "rm -f \"$ROCM_INSTALL_PATH/bin/$i\"" >> $prermFile
fi
done
echo "rmdir --ignore-fail-on-non-empty \"$ROCM_INSTALL_PATH/bin\"" >> $prermFile
chmod 0555 "$postinstFile" "$prermFile"
cp -P "$backwardsCompatibleSymlink" "$packageDeb/$ROCM_INSTALL_PATH"
{
echo "Package: $packageName"
@@ -892,14 +778,6 @@ package_lightning_static() {
mkdir -p "$packageDeb/$installPath"
mkdir "${controlFile%/*}"
mkdir -p "$DEB_PATH"
mkdir -p "$packageDeb/$licenseDirExtra"
if [ $BUILD_ALT -eq 0 ] ; then
cp -r "$LLVM_ROOT_LCL/LICENSE.TXT" "$packageDeb/$licenseDirExtra"
else
cp -r "$LLVM_PROJECT_ALT_ROOT/EULA" "$packageDeb/$licenseDirExtra"
cp -r "$LLVM_PROJECT_ALT_ROOT/DISCLAIMER.txt" "$packageDeb/$licenseDirExtra"
fi
mkdir -p "$packageDeb/$installPath/bin"
for i in "$distBin"/*; do
@@ -922,21 +800,16 @@ package_lightning_static() {
fi
if [ "$BUILD_MANPAGES" == "ON" ]; then
if [ $BUILD_ALT -eq 0 ]; then
mkdir -p "$packageDeb/$installPath/share/man1"
for i in "${dev_man_pages[@]}"; do
if [ -f "$distMan/man1/$i" ]; then
gzip -f "$distMan/man1/$i"
cp -d "$distMan/man1/${i}.gz" "$packageDeb/$installPath/share/man1/"
fi
done
fi
mkdir -p "$packageDeb/$installPath/share/man1"
for i in "${dev_man_pages[@]}"; do
if [ -f "$distMan/man1/$i" ]; then
gzip -f "$distMan/man1/$i"
cp -d "$distMan/man1/${i}.gz" "$packageDeb/$installPath/share/man1/"
fi
done
fi
debDependencies="${debDependencies}, ${packageName}"
if [ $BUILD_ALT -eq 1 ]; then
debDependencies="${debDependencies}, rocm-llvm-dev"
fi
echo "Package: $packageNameExtra" > $controlFile
echo "Architecture: $packageArch" >> $controlFile
@@ -979,13 +852,8 @@ package_lightning_static() {
echo "mkdir -p \$RPM_BUILD_ROOT/$installPath/bin" >> $specFile
echo "mkdir -p \$RPM_BUILD_ROOT/$licenseDir" >> $specFile
if [ $BUILD_ALT -eq 0 ]; then
echo "cp -R $LLVM_ROOT_LCL/LICENSE.TXT \$RPM_BUILD_ROOT/$licenseDir" >> $specFile
echo "cp -P $backwardsCompatibleSymlink \$RPM_BUILD_ROOT/$ROCM_INSTALL_PATH" >> $specFile
else
echo "cp -R $LLVM_PROJECT_ALT_ROOT/EULA \$RPM_BUILD_ROOT/$licenseDir" >> $specFile
echo "cp -R $LLVM_PROJECT_ALT_ROOT/DISCLAIMER.txt \$RPM_BUILD_ROOT/$licenseDir" >> $specFile
fi
echo "cp -R $LLVM_ROOT_LCL/LICENSE.TXT \$RPM_BUILD_ROOT/$licenseDir" >> $specFile
echo "cp -P $backwardsCompatibleSymlink \$RPM_BUILD_ROOT/$ROCM_INSTALL_PATH" >> $specFile
for i in "${core_bin[@]}"; do
if [ -f "$distBin/$i" ]; then
@@ -995,9 +863,7 @@ package_lightning_static() {
echo "cp -d \"$distBin/flang\" \$RPM_BUILD_ROOT/$installPath/bin/" >> $specFile
if [ $BUILD_ALT -eq 0 ]; then
echo "cp -d \"$distBin\"/*.cfg \$RPM_BUILD_ROOT/$installPath/bin/" >> $specFile
fi
echo "cp -d \"$distBin\"/*.cfg \$RPM_BUILD_ROOT/$installPath/bin/" >> $specFile
echo "mkdir -p \$RPM_BUILD_ROOT/$installPath/lib/clang" >> $specFile
echo "cp -R \"$distLib/clang/\" \$RPM_BUILD_ROOT/$installPath/lib/" >> $specFile
@@ -1014,20 +880,18 @@ package_lightning_static() {
done
if [ "$BUILD_MANPAGES" == "ON" ]; then
if [ $BUILD_ALT -eq 0 ]; then
echo "mkdir -p \$RPM_BUILD_ROOT/$installPath/share/man/man1" >> $specFile
for i in "${core_man_pages[@]}"; do
if [ -f "$distMan/man1/$i" ]; then
echo "gzip -f $distMan/man1/$i" >> $specFile
echo "cp -d $distMan/man1/${i}.gz \$RPM_BUILD_ROOT/$installPath/share/man/man1/" >> $specFile
fi
done
if [ -f "$distMan/man1/clang.1.gz" ]; then
for i in "${amd_man_pages[@]}"; do
echo "ln -sf clang.1.gz \"$distMan/man1/$i\"" >> $specFile
echo "cp -d $distMan/man1/${i} \$RPM_BUILD_ROOT/$installPath/share/man/man1/" >> $specFile
done
echo "mkdir -p \$RPM_BUILD_ROOT/$installPath/share/man/man1" >> $specFile
for i in "${core_man_pages[@]}"; do
if [ -f "$distMan/man1/$i" ]; then
echo "gzip -f $distMan/man1/$i" >> $specFile
echo "cp -d $distMan/man1/${i}.gz \$RPM_BUILD_ROOT/$installPath/share/man/man1/" >> $specFile
fi
done
if [ -f "$distMan/man1/clang.1.gz" ]; then
for i in "${amd_man_pages[@]}"; do
echo "ln -sf clang.1.gz \"$distMan/man1/$i\"" >> $specFile
echo "cp -d $distMan/man1/${i} \$RPM_BUILD_ROOT/$installPath/share/man/man1/" >> $specFile
done
fi
fi
@@ -1039,24 +903,20 @@ package_lightning_static() {
echo "$ROCM_INSTALL_PATH"
echo "%post"
if [ $BUILD_ALT -eq 0 ]; then
echo "mkdir -p \"$ROCM_INSTALL_PATH/bin\""
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$distBin/$i" ]; then
echo "ln -sf ../lib/llvm/bin/$i \"$ROCM_INSTALL_PATH/bin/$i\""
fi
done
fi
echo "mkdir -p \"\$RPM_INSTALL_PREFIX0/bin\""
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$distBin/$i" ]; then
echo "ln -sf ../lib/llvm/bin/$i \"\$RPM_INSTALL_PREFIX0/bin/$i\""
fi
done
echo "%preun"
if [ $BUILD_ALT -eq 0 ]; then
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$distBin/$i" ]; then
echo "rm -f \"$ROCM_INSTALL_PATH/bin/$i\""
fi
done
echo "rmdir --ignore-fail-on-non-empty \"$ROCM_INSTALL_PATH/bin\""
fi
for i in "${amd_compiler_commands[@]}"; do
if [ -f "$distBin/$i" ]; then
echo "rm -f \"\$RPM_INSTALL_PREFIX0/bin/$i\""
fi
done
echo "rmdir --ignore-fail-on-non-empty \"\$RPM_INSTALL_PREFIX0/bin\""
echo "%postun"
} >> "$specFile"
@@ -1071,16 +931,13 @@ package_lightning_static() {
echo "Release: ${JOB_DESIGNATOR}${SLES_BUILD_ID_PREFIX}${BUILD_ID}%{?dist}" >> $specFileExtra
echo "Summary: $packageSummaryExtra" >> $specFileExtra
echo "Group: System Environment/Libraries" >> $specFileExtra
if [ $BUILD_ALT -eq 1 ]; then
echo "License: AMD Proprietary" >> $specFileExtra
else
echo "License: ASL 2.0 with exceptions" >> $specFileExtra
fi
echo "License: ASL 2.0 with exceptions" >> $specFileExtra
echo "Prefix: $ROCM_INSTALL_PATH" >> $specFileExtra
echo "Requires: $rpmRequiresExtra" >> $specFileExtra
if [ $BUILD_ALT -eq 1 ]; then
echo "%define _build_id_links none" >> $specFileExtra
fi
# The Recommends line below is commented out because CentOS 7 ships an rpm
# version that does not understand it. Once CentOS 7 is no longer supported,
# a proper Recommends line should be added.
#echo "Recommends: $rpmRecommends" >> $specFileExtra
echo "%description" >> $specFileExtra
echo "$packageSummaryLongExtra" >> $specFileExtra
@@ -1093,15 +950,8 @@ package_lightning_static() {
echo "mkdir -p \$RPM_BUILD_ROOT/$installPath/bin" >> $specFileExtra
echo "mkdir -p \$RPM_BUILD_ROOT/$installPath/include" >> $specFileExtra
echo "mkdir -p \$RPM_BUILD_ROOT/$installPath/lib" >> $specFileExtra
echo "mkdir -p \$RPM_BUILD_ROOT/$licenseDirExtra" >> $specFileExtra
if [ $BUILD_ALT -eq 0 ]; then
echo "cp -R $LLVM_ROOT_LCL/LICENSE.TXT \$RPM_BUILD_ROOT/$licenseDirExtra" >> $specFileExtra
echo "cp -P $backwardsCompatibleSymlink \$RPM_BUILD_ROOT/$ROCM_INSTALL_PATH" >> $specFileExtra
else
echo "cp -R $LLVM_PROJECT_ALT_ROOT/EULA \$RPM_BUILD_ROOT/$licenseDirExtra" >> $specFileExtra
echo "cp -R $LLVM_PROJECT_ALT_ROOT/DISCLAIMER.txt \$RPM_BUILD_ROOT/$licenseDirExtra" >> $specFileExtra
fi
echo "cp -P $backwardsCompatibleSymlink \$RPM_BUILD_ROOT/$ROCM_INSTALL_PATH" >> $specFileExtra
for i in "$distBin"/*; do
bin=$(basename "$i")
@@ -1122,15 +972,13 @@ package_lightning_static() {
fi
if [ "$BUILD_MANPAGES" == "ON" ]; then
if [ $BUILD_ALT -eq 0 ]; then
echo "mkdir -p \$RPM_BUILD_ROOT/$installPath/share/man/man1" >> $specFileExtra
for i in "${extra_man_pages[@]}"; do
if [ -f "$distMan/man1/$i" ]; then
echo "gzip -f $distMan/man1/$i" >> $specFileExtra
echo "cp -d \"$distMan/man1/${i}.gz\" \$RPM_BUILD_ROOT/$installPath/share/man/man1/" >> $specFileExtra
fi
done
fi
echo "mkdir -p \$RPM_BUILD_ROOT/$installPath/share/man/man1" >> $specFileExtra
for i in "${dev_man_pages[@]}"; do
if [ -f "$distMan/man1/$i" ]; then
echo "gzip -f $distMan/man1/$i" >> $specFileExtra
echo "cp -d \"$distMan/man1/${i}.gz\" \$RPM_BUILD_ROOT/$installPath/share/man/man1/" >> $specFileExtra
fi
done
fi
echo "%clean" >> $specFileExtra
@@ -1266,21 +1114,7 @@ print_output_directory() {
build() {
mkdir -p "${INSTALL_PATH}"
build_lightning
if [ $BUILD_ALT -eq 0 ] ; then
create_compiler_config_files
fi
}
create_wheel_package() {
echo "Creating rocm-llvm wheel package"
mkdir -p "$ROCM_WHEEL_DIR"
cp -f $SCRIPT_ROOT/generate_setup_py.py $ROCM_WHEEL_DIR
cp -f $SCRIPT_ROOT/repackage_wheel.sh $ROCM_WHEEL_DIR
cd $ROCM_WHEEL_DIR
# Currently only supports python3.6
./repackage_wheel.sh $RPM_PATH/rocm-llvm*.rpm python3.6
# Copy the wheel created to RPM folder which will be uploaded to artifactory
mv "$ROCM_WHEEL_DIR"/dist/*.whl "$RPM_PATH"
create_compiler_config_files
}
case $TARGET in
@@ -1301,9 +1135,4 @@ case $TARGET in
(*) die "Invalid target $TARGET" ;;
esac
if [[ $WHEEL_PACKAGE == true ]]; then
echo "Wheel Package build started !!!!"
create_wheel_package
fi
echo "Operation complete"

View File

@@ -12,6 +12,10 @@ RPM_PATH=$PACKAGE_DIR
build_miopen_hip() {
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
cd $COMPONENT_SRC
git config --global --add safe.directory "$COMPONENT_SRC"
checkout_lfs
@@ -26,10 +30,12 @@ build_miopen_hip() {
cmake \
"${rocm_math_common_cmake_params[@]}" \
-DMIOPEN_BACKEND=HIP \
-DMIOPEN_OFFLINE_COMPILER_PATHS_V2=1 \
-DCMAKE_CXX_COMPILER="${ROCM_PATH}/llvm/bin/clang++" \
-DCMAKE_C_COMPILER="${ROCM_PATH}/llvm/bin/clang" \
-DCMAKE_PREFIX_PATH="${ROCM_PATH};${ROCM_PATH}/hip;${HOME}/miopen-deps" \
-DHIP_OC_COMPILER="${ROCM_PATH}/bin/clang-ocl" \
-DMIOPEN_TEST_DISCRETE=OFF \
"$COMPONENT_SRC"
cmake --build "$BUILD_DIR" -- -j${PROC}

View File

@@ -9,6 +9,10 @@ BUILD_DEV=ON
build_mivisionx() {
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
mkdir -p $BUILD_DIR && cd $BUILD_DIR
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
@@ -21,7 +25,7 @@ build_mivisionx() {
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908;gfx90a;gfx940;gfx941;gfx942;gfx1030;gfx1100"
GPU_TARGETS="gfx908;gfx90a;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
fi
cmake \
@@ -40,8 +44,7 @@ build_mivisionx() {
cpack -G ${PKGTYPE^^}
rm -rf _CPack_Packages/ && find -name '*.o' -delete
mkdir -p $PACKAGE_DIR
cp ${BUILD_DIR}/*.${PKGTYPE} $PACKAGE_DIR
mkdir -p $PACKAGE_DIR && cp ${BUILD_DIR}/*.${PKGTYPE} $PACKAGE_DIR
show_build_cache_stats
}
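cpack -G ${PKGTYPE^^} above relies on bash's case-conversion parameter expansion: ^^ upper-cases the value so "deb" becomes the DEB CPack generator name, and ,, (used elsewhere in these scripts, e.g. ${PROJ_NAME,,}) lower-cases one for package directory names. A quick illustration:

PKGTYPE="deb"
PROJ_NAME="OpenCL-ICD-Loader"
echo "${PKGTYPE^^}"     # DEB               -> CPack generator name
echo "${PROJ_NAME,,}"   # opencl-icd-loader -> package directory name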

View File

@@ -1,141 +0,0 @@
#!/bin/bash
source "$(dirname "${BASH_SOURCE}")/compute_utils.sh"
PROJ_NAME=OpenCL-ICD-Loader
TARGET="build"
MAKEOPTS="$DASH_JAY"
BUILD_TYPE="Debug"
PACKAGE_ROOT="$(getPackageRoot)"
PACKAGE_DEB="$PACKAGE_ROOT/deb/${PROJ_NAME,,}"
PACKAGE_RPM="$PACKAGE_ROOT/rpm/${PROJ_NAME,,}"
CLEAN_OR_OUT=0;
PKGTYPE="deb"
MAKETARGET="deb"
API_NAME="rocm-opencl-icd-loader"
printUsage() {
echo
echo "Usage: $(basename "${BASH_SOURCE}") [options ...]"
echo
echo "Options:"
echo " -c, --clean Clean output and delete all intermediate work"
echo " -p, --package <type> Specify packaging format"
echo " -r, --release Make a release build instead of a debug build"
echo " -h, --help Prints this help"
echo " -o, --outdir Print path of output directory containing packages"
echo " -s, --static Component/Build does not support static builds just accepting this param & ignore. No effect of the param on this build"
echo
echo "Possible values for <type>:"
echo " deb -> Debian format (default)"
echo " rpm -> RPM format"
echo
return 0
}
RET_CONFLICT=1
check_conflicting_options $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
if [ $RET_CONFLICT -ge 30 ]; then
print_vars $TARGET $BUILD_TYPE $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
exit $RET_CONFLICT
fi
clean_opencl_icd_loader() {
echo "Cleaning $PROJ_NAME"
rm -rf "$PACKAGE_DEB"
rm -rf "$PACKAGE_RPM"
rm -rf "$PACKAGE_ROOT/${PROJ_NAME,,}"
}
copy_pkg_files_to_rocm() {
local comp_folder=$1
local comp_pkg_name=$2
cd "${OUT_DIR}/${PKGTYPE}/${comp_folder}"|| exit 2
if [ "${PKGTYPE}" = 'deb' ]; then
dpkg-deb -x ${comp_pkg_name}_*.deb pkg/
else
mkdir pkg && pushd pkg/ || exit 2
if [[ "${comp_pkg_name}" != *-dev* ]]; then
rpm2cpio ../${comp_pkg_name}-*.rpm | cpio -idmv
else
rpm2cpio ../${comp_pkg_name}el-*.rpm | cpio -idmv
fi
popd || exit 2
fi
ls ./pkg -alt
cp -r ./pkg/*/rocm*/* "${ROCM_PATH}" || exit 2
rm -rf pkg/
}
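copy_pkg_files_to_rocm above leans on the two standard ways to unpack a package payload without installing it: dpkg-deb -x for .deb files and rpm2cpio piped into cpio for .rpm files. For reference (the package names are placeholders):

# Extract a .deb payload into ./pkg without installing it.
dpkg-deb -x some-package_1.0_amd64.deb pkg/

# Extract an .rpm payload into the current directory without installing it.
mkdir pkg && cd pkg
rpm2cpio ../some-package-1.0.x86_64.rpm | cpio -idmv   # -i extract, -d make dirs, -m keep mtimes, -v verbose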
build_opencl_icd_loader() {
echo "Downloading $PROJ_NAME" package
if [ "$DISTRO_NAME" = ubuntu ]; then
mkdir -p "$PACKAGE_DEB"
local rocm_ver=${ROCM_VERSION}
if [ ${ROCM_VERSION##*.} = 0 ]; then
rocm_ver=${ROCM_VERSION%.*}
fi
local url="https://repo.radeon.com/rocm/apt/${rocm_ver}/pool/main/r/${API_NAME}/"
local package
package=$(curl -s "$url" | grep -Po 'href="\K[^"]*' | grep "${DISTRO_RELEASE}" | head -n 1)
if [ -z "$package" ]; then
echo "No package found for Ubuntu version $DISTRO_RELEASE"
exit 1
fi
wget -t3 -P "$PACKAGE_DEB" "${url}${package}"
copy_pkg_files_to_rocm ${PROJ_NAME,,} ${API_NAME}
else
echo "$DISTRO_ID is not supported..."
exit 2
fi
echo "Installing $PROJ_NAME" package
}
print_output_directory() {
case ${PKGTYPE} in
("deb")
echo ${PACKAGE_DEB};;
("rpm")
echo ${PACKAGE_RPM};;
(*)
echo "Invalid package type \"${PKGTYPE}\" provided for -o" >&2; exit 1;;
esac
exit
}
VALID_STR=`getopt -o hcraswlo:p: --long help,clean,release,outdir:,package: -- "$@"`
eval set -- "$VALID_STR"
while true ;
do
case "$1" in
(-c | --clean )
TARGET="clean" ; ((CLEAN_OR_OUT|=1)) ; shift ;;
(-r | --release )
BUILD_TYPE="RelWithDebInfo" ; shift ;;
(-h | --help )
printUsage ; exit 0 ;;
(-a | --address_sanitizer)
ack_and_ignore_asan ; shift ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
(-p | --package)
MAKETARGET="$2" ; shift 2;;
(-s | --static)
echo "-s parameter accepted but ignored" ; shift ;;
--) shift; break;;
(*)
echo " This should never come but just incase : UNEXPECTED ERROR Parm : [$1] ">&2 ; exit 20;;
esac
done
case $TARGET in
(clean) clean_opencl_icd_loader ;;
(build) build_opencl_icd_loader ;;
(outdir) print_output_directory ;;
(*) die "Invalid target $TARGET" ;;
esac
echo "Operation complete"

View File

@@ -21,14 +21,15 @@ printUsage() {
return 0
}
PROJ_NAME="opencl-on-rocclr"
MAKEOPTS="$DASH_JAY"
BUILD_PATH="$(getBuildPath opencl-on-rocclr)"
BUILD_PATH="$(getBuildPath $PROJ_NAME)"
TARGET="build"
PACKAGE_ROOT="$(getPackageRoot)"
PACKAGE_DEB="$PACKAGE_ROOT/deb/opencl-on-rocclr"
PACKAGE_RPM="$PACKAGE_ROOT/rpm/opencl-on-rocclr"
PACKAGE_DEB="$PACKAGE_ROOT/deb/$PROJ_NAME"
PACKAGE_RPM="$PACKAGE_ROOT/rpm/$PROJ_NAME"
CORE_BUILD_DIR="$(getBuildPath hsa-core)"
ROCclr_BUILD_DIR="$(getBuildPath rocclr)"
BUILD_TYPE="Debug"
@@ -54,7 +55,7 @@ do
set_asan_env_vars
set_address_sanitizer_on ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
ack_and_skip_static ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
--) shift; break;;
@@ -148,7 +149,7 @@ print_output_directory() {
case $TARGET in
(clean) clean_opencl_on_rocclr ;;
(build) build_opencl_on_rocclr ; package_opencl_on_rocclr ;;
(outdir) print_output_directory ;;
(outdir) print_output_directory ;;
(*) die "Invalid target $TARGET" ;;
esac


@@ -13,6 +13,7 @@ printUsage() {
echo " -a, --address_sanitizer Enable address sanitizer"
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of
type referred to by pkg_type"
echo " -s, --static Component/Build does not support static builds just accepting this param for configuring package deps"
echo " -h, --help Prints this help"
echo
echo "Possible values for <type>:"
@@ -23,20 +24,25 @@ printUsage() {
return 0
}
packageMajorVersion="17.60"
PROJ_NAME="openmp-extras"
packageMajorVersion="18.63"
packageMinorVersion="0"
packageVersion="${packageMajorVersion}.${packageMinorVersion}.${ROCM_LIBPATCH_VERSION}"
BUILD_PATH="$(getBuildPath openmp-extras)"
DEB_PATH="$(getDebPath openmp-extras)"
RPM_PATH="$(getRpmPath openmp-extras)"
BUILD_PATH="$(getBuildPath $PROJ_NAME)"
DEB_PATH="$(getDebPath $PROJ_NAME)"
RPM_PATH="$(getRpmPath $PROJ_NAME)"
TARGET="build"
MAKEOPTS="$DASH_JAY"
STATIC_PKG_DEPS="OFF"
export INSTALL_PREFIX=${ROCM_INSTALL_PATH}
while [ "$1" != "" ];
VALID_STR=`getopt -o hcraso:p: --long help,clean,release,address_sanitizer,static,outdir,package: -- "$@"`
eval set -- "$VALID_STR"
while true ;
do
case $1 in
case "$1" in
-c | --clean )
TARGET="clean" ;;
-p | --package )
@@ -52,8 +58,11 @@ do
export SANITIZER=1 ;;
-o | --outdir )
shift 1; PKGTYPE=$1 ; TARGET="outdir" ;;
-s | --static )
export STATIC_PKG_DEPS="ON" ;;
-h | --help )
printUsage ; exit 0 ;;
--) shift; break;;
*)
MAKEARG=$@ ; break ;;
esac
@@ -124,6 +133,23 @@ build_openmp_extras() {
export AOMP_JENKINS_BUILD_LIST="extras openmp pgmath flang flang_runtime"
echo "BEGIN Build of openmp-extras"
"$AOMP_REPOS"/aomp/bin/build_aomp.sh $MAKEARG
local llvm_ver=`$INSTALL_PREFIX/lib/llvm/bin/clang --print-resource-dir | sed 's^/llvm/lib/clang/^ ^' | awk '{print $2}'`
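# Ensure omp.h, ompt.h, and omp-tools.h are reachable from clang's resource directory by symlinking back to the shared llvm include directory when missing.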
if [ ! -e $ROCM_INSTALL_PATH/lib/llvm/lib/clang/$llvm_ver/include/omp.h ] ; then
if [ ! -h $ROCM_INSTALL_PATH/lib/llvm/lib/clang/$llvm_ver/include/omp.h ] ; then
ln -s ../../../../include/omp.h $ROCM_INSTALL_PATH/lib/llvm/lib/clang/$llvm_ver/include/omp.h
fi
fi
if [ ! -e $ROCM_INSTALL_PATH/lib/llvm/lib/clang/$llvm_ver/include/ompt.h ] ; then
if [ ! -h $ROCM_INSTALL_PATH/lib/llvm/lib/clang/$llvm_ver/include/ompt.h ] ; then
ln -s ../../../../include/ompt.h $ROCM_INSTALL_PATH/lib/llvm/lib/clang/$llvm_ver/include/ompt.h
fi
fi
if [ ! -e $ROCM_INSTALL_PATH/lib/llvm/lib/clang/$llvm_ver/include/omp-tools.h ] ; then
if [ ! -h $ROCM_INSTALL_PATH/lib/llvm/lib/clang/$llvm_ver/include/omp-tools.h ] ; then
ln -s ../../../../include/omp-tools.h $ROCM_INSTALL_PATH/lib/llvm/lib/clang/$llvm_ver/include/omp-tools.h
fi
fi
popd
}
@@ -133,20 +159,30 @@ package_openmp_extras_deb() {
local packageArch="amd64"
local packageMaintainer="Openmp Extras Support <openmp-extras.support@amd.com>"
local packageSummary="OpenMP Extras provides openmp and flang libraries."
local packageSummaryLong="openmp-extras $packageVersion is based on LLVM 15 and is used for offloading to Radeon GPUs."
local packageSummaryLong="openmp-extras $packageVersion is based on LLVM 17 and is used for offloading to Radeon GPUs."
local debDependencies="rocm-llvm, rocm-device-libs, rocm-core"
local debRecommends="gcc, g++"
local controlFile="$packageDeb/openmp-extras/DEBIAN/control"
if [ "$packageName" == "openmp-extras-runtime" ]; then
packageType="runtime"
debDependencies="rocm-core, hsa-rocr"
if [ "$STATIC_PKG_DEPS" == "OFF" ]; then
debDependencies="rocm-core, hsa-rocr"
else
echo "static package dependency configuration for runtime" ;
debDependencies="rocm-core, hsa-rocr-static-dev"
fi
else
local debProvides="openmp-extras"
local debConflicts="openmp-extras"
local debReplaces="openmp-extras"
packageType="devel"
debDependencies="$debDependencies, openmp-extras-runtime, hsa-rocr-dev"
if [ "$STATIC_PKG_DEPS" == "OFF" ]; then
debDependencies="$debDependencies, openmp-extras-runtime, hsa-rocr-dev"
else
echo "Enabled static package dependency configuration for dev" ;
debDependencies="$debDependencies, openmp-extras-runtime, hsa-rocr-static-dev"
fi
fi
if [ -f "$BUILD_PATH"/build/installed_files.txt ] && [ ! -d "$INSTALL_PREFIX"/openmp-extras/devel ]; then
@@ -209,6 +245,9 @@ package_openmp_extras_deb() {
cp -r "$AOMP_REPOS"/aomp/examples/fortran "$packageDeb"/openmp-extras"$copyPath"/share/openmp-extras/examples
cp -r "$AOMP_REPOS"/aomp/examples/openmp "$packageDeb"/openmp-extras"$copyPath"/share/openmp-extras/examples
cp -r "$AOMP_REPOS"/aomp/examples/tools "$packageDeb"/openmp-extras"$copyPath"/share/openmp-extras/examples
if [ -e "$AOMP_REPOS/aomp/examples/Makefile.help" ]; then
cp "$AOMP_REPOS"/aomp/examples/Makefile* "$packageDeb"/openmp-extras"$copyPath"/share/openmp-extras/examples
fi
clean_examples "$packageDeb"/openmp-extras"$copyPath"/share/openmp-extras/examples
fi
@@ -260,7 +299,7 @@ package_openmp_extras_asan_deb() {
local packageArch="amd64"
local packageMaintainer="Openmp Extras Support <openmp-extras.support@amd.com>"
local packageSummary="AddressSanitizer OpenMP Extras provides instrumented openmp and flang libraries."
local packageSummaryLong="openmp-extras $packageVersion is based on LLVM 15 and is used for offloading to Radeon GPUs."
local packageSummaryLong="openmp-extras $packageVersion is based on LLVM 17 and is used for offloading to Radeon GPUs."
local debDependencies="hsa-rocr-asan, rocm-core-asan"
local debRecommends="gcc, g++"
local controlFile="$packageDeb/openmp-extras/DEBIAN/control"
@@ -317,23 +356,26 @@ package_openmp_extras_rpm() {
local packageRpm="$packageDir/rpm"
local specFile="$packageDir/$packageName.spec"
local packageSummary="OpenMP Extras provides openmp and flang libraries."
local packageSummaryLong="openmp-extras $packageVersion is based on LLVM 15 and is used for offloading to Radeon GPUs."
local packageSummaryLong="openmp-extras $packageVersion is based on LLVM 17 and is used for offloading to Radeon GPUs."
local rpmRequires="rocm-llvm, rocm-device-libs, rocm-core"
if [ "$packageName" == "openmp-extras-runtime" ]; then
packageType="runtime"
rpmRequires="rocm-core, hsa-rocr"
if [ "$STATIC_PKG_DEPS" == "OFF" ]; then
rpmRequires="rocm-core, hsa-rocr"
else
rpmRequires="rocm-core, hsa-rocr-static-devel"
fi
else
local rpmProvides="openmp-extras"
local rpmObsoletes="openmp-extras"
packageType="devel"
rpmRequires="$rpmRequires, openmp-extras-runtime, hsa-rocr-devel"
if [ "$STATIC_PKG_DEPS" == "OFF" ]; then
rpmRequires="$rpmRequires, openmp-extras-runtime, hsa-rocr-devel"
else
rpmRequires="$rpmRequires, openmp-extras-runtime, hsa-rocr-static-devel"
fi
fi
rm -f "$AOMP_REPOS"/aomp/examples/*.sh
rm -f "$AOMP_REPOS"/aomp/examples/fortran/*.sh
rm -f "$AOMP_REPOS"/aomp/examples/openmp/*.sh
if [ "$packageType" == "runtime" ]; then
rm -rf "$packageDir"
rm -rf "$RPM_PATH"
@@ -354,6 +396,7 @@ package_openmp_extras_rpm() {
echo "Group: System Environment/Libraries"
echo "License: MIT and ASL 2.0 and ASL 2.0 with exceptions"
echo "Vendor: Advanced Micro Devices, Inc."
echo "Prefix: $INSTALL_PREFIX"
echo "Requires: $rpmRequires"
echo "%if %is_devel"
echo "Provides: $rpmProvides"
@@ -435,6 +478,9 @@ package_openmp_extras_rpm() {
echo " cp -r $AOMP_REPOS/aomp/examples/fortran \$RPM_BUILD_ROOT$copyPath/share/openmp-extras/examples"
echo " cp -r $AOMP_REPOS/aomp/examples/openmp \$RPM_BUILD_ROOT$copyPath/share/openmp-extras/examples"
echo " cp -r $AOMP_REPOS/aomp/examples/tools \$RPM_BUILD_ROOT$copyPath/share/openmp-extras/examples"
if [ -e "$AOMP_REPOS/aomp/examples/Makefile.help" ]; then
echo " cp $AOMP_REPOS/aomp/examples/Makefile* \$RPM_BUILD_ROOT$copyPath/share/openmp-extras/examples"
fi
clean_examples \$RPM_BUILD_ROOT$copyPath/share/openmp-extras/examples
echo "%endif"
echo "%clean"
@@ -461,7 +507,7 @@ package_openmp_extras_asan_rpm() {
local packageRpm="$packageDir/rpm"
local specFile="$packageDir/$packageName.spec"
local packageSummary="AddressSanitizer OpenMP Extras provides instrumented openmp and flang libraries."
local packageSummaryLong="openmp-extras $packageVersion is based on LLVM 15 and is used for offloading to Radeon GPUs."
local packageSummaryLong="openmp-extras $packageVersion is based on LLVM 17 and is used for offloading to Radeon GPUs."
local rpmRequires="hsa-rocr-asan, rocm-core-asan"
local asanLibDir="runtime"
@@ -527,7 +573,6 @@ package_openmp_extras_asan_rpm() {
mv $packageRpm/RPMS/x86_64/*.rpm $RPM_PATH
}
package_openmp_extras() {
local DISTRO_NAME=$(cat /etc/os-release | grep -e ^NAME=)
local installPath="$ROCM_INSTALL_PATH/lib/llvm"
@@ -563,21 +608,23 @@ package_tests_deb(){
local packageArch="amd64"
local packageMaintainer="Openmp Extras Support <openmp-extras.support@amd.com>"
local packageSummary="Tests for openmp-extras."
local packageSummaryLong="Tests for openmp-extras $packageMajorVersion-$packageMinorVersion is based on LLVM 15 and is used for offloading to Radeon GPUs."
local debDependencies="rocm-core"
local debRecommends="gcc, g++"
local packageSummaryLong="Tests for openmp-extras $packageMajorVersion-$packageMinorVersion is based on LLVM 17 and is used for offloading to Radeon GPUs."
local debDependencies="openmp-extras-dev, rocm-core"
local debRecommends=""
local controlFile="$packageDeb/openmp-extras/DEBIAN/control"
local installPath="$ROCM_INSTALL_PATH/share/openmp-extras/tests"
local packageName="openmp-extras-tests"
rm -rf "$packageDir"
mkdir -p $packageDeb/openmp-extras$installPath; mkdir -p $packageDeb/openmp-extras$copyPath/bin
mkdir -p $packageDeb/openmp-extras"$installPath"
if [ -e $(dirname $controlFile) ]; then
rm $(dirname $controlFile)
fi
mkdir -p "$(dirname $controlFile)"
cp -r "$AOMP_REPOS/aomp/test/smoke" "$packageDeb$installPath"
cp -r "$AOMP_REPOS/aomp/." "$packageDeb/openmp-extras/$installPath"
rm -rf "$packageDeb"/openmp-extras"$installPath"/.git "$packageDeb"/openmp-extras"$installPath"/.github
cp "$OUT_DIR/build/lightning/bin/FileCheck" "$packageDeb/openmp-extras/$installPath/bin"
{
echo "Package: $packageName"
echo "Architecture: $packageArch"
@@ -603,11 +650,12 @@ package_tests_rpm(){
local packageName="openmp-extras-tests"
local specFile="$packageDir/$packageName.spec"
local packageSummary="Tests for openmp-extras."
local packageSummaryLong="Tests for openmp-extras $packageVersion is based on LLVM 15 and is used for offloading to Radeon GPUs."
local packageSummaryLong="Tests for openmp-extras $packageVersion is based on LLVM 18 and is used for offloading to Radeon GPUs."
rm -rf "$packageDir"
mkdir -p "$packageRpm$installPath"
mkdir -p "$packageRpm/openmp-extras/$installPath"
{
echo "AutoReqProv: no"
echo "Name: $packageName"
echo "Version: $packageVersion"
echo "Release: ${CPACK_RPM_PACKAGE_RELEASE}%{?dist}"
@@ -615,7 +663,10 @@ package_tests_rpm(){
echo "Group: System Environment/Libraries"
echo "License: Advanced Micro Devices, Inc."
echo "Vendor: Advanced Micro Devices, Inc."
echo "Prefix: $INSTALL_PREFIX"
echo "Requires: $rpmRequires"
echo "%define debug_package %{nil}"
# Redefining __os_install_post to remove stripping
echo "%define __os_install_post %{nil}"
echo "%description"
echo "$packageSummaryLong"
@@ -625,18 +676,21 @@ package_tests_rpm(){
echo "%build"
echo "%install"
echo "mkdir -p \$RPM_BUILD_ROOT$copyPath/share/aomp/tests"
echo "cp -R $AOMP_REPOS/aomp/test/smoke \$RPM_BUILD_ROOT$copyPath/share/aomp/tests"
echo "mkdir -p \$RPM_BUILD_ROOT$installPath"
echo "cp -R $AOMP_REPOS/aomp/. \$RPM_BUILD_ROOT$installPath"
echo "rm -rf \$RPM_BUILD_ROOT$installPath/.git \$RPM_BUILD_ROOT$installPath/.github"
echo "cp $OUT_DIR/build/lightning/bin/FileCheck \$RPM_BUILD_ROOT$installPath/bin"
echo 'find $RPM_BUILD_ROOT \! -type d | sed "s|$RPM_BUILD_ROOT||"> files.list'
echo "%clean"
echo "rm -rf \$RPM_BUILD_ROOT"
echo "%files -f files.list"
echo "$installPath"
echo "%defattr(-,root,root,-)"
echo "%postun"
echo "rm -rf $ROCM_INSTALL_PATH/share/aomp"
echo "rm -rf $installPath"
} > $specFile
rpmbuild --define "_topdir $packageRpm" -ba $specFile
mv $packageRpm/RPMS/x86_64/*.rpm $RPM_PATH
@@ -665,7 +719,7 @@ print_output_directory() {
case $TARGET in
(clean) clean_openmp_extras ;;
(build) build_openmp_extras; package_openmp_extras ;;
(build) build_openmp_extras; package_openmp_extras; package_tests ;;
(outdir) print_output_directory ;;
(*) die "Invalid target $TARGET" ;;
esac


@@ -10,6 +10,10 @@ ENABLE_ADDRESS_SANITIZER=false
build_rccl() {
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
mkdir -p $ROCM_PATH/.info/
echo $ROCM_VERSION | tee $ROCM_PATH/.info/version
@@ -23,7 +27,7 @@ build_rccl() {
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
fi
init_rocm_common_cmake_params


@@ -13,7 +13,7 @@ printUsage() {
echo " -a, --address_sanitizer Enable address sanitizer"
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of
type referred to by pkg_type"
echo " -s, --static Build static lib (.a). build instead of dynamic/shared(.so) "
echo " -s, --static Component/Build does not support static builds just accepting this param & ignore. No effect of the param on this build"
echo " -h, --help Prints this help"
echo
return 0
@@ -41,7 +41,7 @@ RDC_PKG_NAME_ROOT="rdc"
RDC_PKG_NAME="${RDC_PKG_NAME_ROOT}"
GRPC_PROTOC_ROOT="${RDC_BUILD_DIR}/grpc"
GRPC_SEARCH_ROOT="/usr/grpc"
GRPC_DESIRED_VERSION="1.59.1" # do not include 'v'
GRPC_DESIRED_VERSION="1.61.0"
RDC_LIB_RPATH='$ORIGIN'
RDC_LIB_RPATH=$RDC_LIB_RPATH:'$ORIGIN/..'
@@ -70,7 +70,7 @@ do
(-d | --documentation )
BUILD_DOCS="yes" ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
ack_and_skip_static ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
(-p | --package)
@@ -111,49 +111,20 @@ find_grpc() {
GRPC_PROTOC_ROOT=$GRPC_SEARCH_ROOT
}
build_grpc() {
if find_grpc; then
return 0
fi
echo "GRPC SEARCH FAILED! Building from scratch..."
mkdir -p $PACKAGE_ROOT/build
pushd $PACKAGE_ROOT/build
if [ ! -d $PACKAGE_ROOT/build/grpc/.git ]; then
git clone \
--shallow-submodules \
--recurse-submodules \
$DASH_JAY \
-b v${GRPC_DESIRED_VERSION} \
--depth 1 \
https://github.com/grpc/grpc
fi
cd grpc
mkdir -p cmake/build
cd cmake/build
cmake \
-DgRPC_INSTALL=ON \
-DgRPC_BUILD_TESTS=OFF \
-DBUILD_SHARED_LIBS=ON \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=${GRPC_PROTOC_ROOT} \
../..
cmake --build . -- $DASH_JAY
cmake --build . -- install
cp ../../LICENSE ${GRPC_PROTOC_ROOT}
popd
}
rdc_backwards_compat_cmake_params() {
grep -q "RDC_CLIENT_INSTALL_PREFIX" "$RDC_ROOT/CMakeLists.txt" &&
echo "-DRDC_CLIENT_INSTALL_PREFIX=$PACKAGE_ROOT"
}
build_rdc() {
if ! find_grpc; then
echo "ERROR: GRPC SEARCH FAILED!"
echo "You are expected to have gRPC [${GRPC_DESIRED_VERSION}] in [${GRPC_SEARCH_ROOT}]"
# Compiling gRPC as part of the RDC build takes too long and times out the build job
return 1
fi
echo "gRPC [${GRPC_DESIRED_VERSION}] found!"
echo "Building RDC"
echo "RDC_BUILD_DIR: ${RDC_BUILD_DIR}"
echo "GRPC_PROTOC_ROOT: ${GRPC_PROTOC_ROOT}"
@@ -226,7 +197,7 @@ verifyEnvSetup
case $TARGET in
(clean) clean_rdc ;;
(clean_grpc) clean_grpc ;;
(build) build_grpc; build_rdc ;;
(build) build_rdc ;;
(outdir) print_output_directory ;;
(*) die "Invalid target $TARGET" ;;
esac


@@ -7,11 +7,6 @@ set_component_src rocAL
build_rocal() {
if [ "$DISTRO_ID" = "mariner-2.0" ] ; then
echo "Not building rocal for ${DISTRO_ID}. Exiting..."
return 0
fi
echo "Start build"
# Enable ASAN


@@ -10,6 +10,10 @@ set_component_src rocALUTION
build_rocalution() {
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
cd $COMPONENT_SRC
CXX="g++"
@@ -27,7 +31,7 @@ build_rocalution() {
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
fi
cmake \
@@ -35,7 +39,6 @@ build_rocalution() {
${LAUNCHER_FLAGS} \
"${rocm_math_common_cmake_params[@]}" \
-DAMDGPU_TARGETS=${GPU_TARGETS} \
-DCPACK_SET_DESTDIR=OFF \
-DBUILD_CLIENTS_SAMPLES=ON \
-DBUILD_CLIENTS_TESTS=ON \
-DBUILD_CLIENTS_BENCHMARKS=ON \


@@ -12,6 +12,11 @@ stage2_command_args "$@"
build_rocblas() {
echo "Start build"
SHARED_LIBS="ON"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
SHARED_LIBS="OFF"
fi
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
@@ -26,7 +31,7 @@ build_rocblas() {
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack+;gfx90a:xnack-;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx900;gfx906:xnack-;gfx908:xnack-;gfx90a:xnack+;gfx90a:xnack-;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
fi
init_rocm_common_cmake_params
@@ -36,8 +41,8 @@ build_rocblas() {
"${rocm_math_common_cmake_params[@]}" \
-DROCM_DIR="${ROCM_PATH}" \
${LAUNCHER_FLAGS} \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DCMAKE_PREFIX_PATH="${DEPS_DIR};${ROCM_PATH}" \
-DCPACK_SET_DESTDIR=OFF \
-DBUILD_CLIENTS_TESTS=ON \
-DBUILD_CLIENTS_BENCHMARKS=ON \
-DBUILD_CLIENTS_SAMPLES=ON \


@@ -1,127 +0,0 @@
#!/bin/bash
source "$(dirname "${BASH_SOURCE}")/compute_utils.sh"
printUsage() {
echo
echo "Usage: $(basename "${BASH_SOURCE}") [options ...] [make options]"
echo
echo "Options:"
echo " -h, --help Prints this help"
echo " -c, --clean Clean output and delete all intermediate work"
echo " -r, --release Make a release build instead of a debug build"
echo " -a, --address_sanitizer Enable address sanitizer"
echo " -s, --static Build static lib (.a). build instead of dynamic/shared(.so) "
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of type referred to by pkg_type"
echo
echo "Possible values for <type>:"
echo " deb -> Debian format (default)"
echo " rpm -> RPM format"
echo
return 0
}
MAKEOPTS="$DASH_JAY"
BUILD_PATH="$(getBuildPath rocclr)"
TARGET="build"
PACKAGE_ROOT="$(getPackageRoot)"
PACKAGE_DEB="$(getPackageRoot)/deb/rocclr"
PACKAGE_RPM="$(getPackageRoot)/rpm/rocclr"
CORE_BUILD_DIR="$(getBuildPath hsa-core)"
BUILD_TYPE="Debug"
SHARED_LIBS="ON"
CLEAN_OR_OUT=0;
MAKETARGET="deb"
PKGTYPE="deb"
VALID_STR=`getopt -o hcraso: --long help,clean,release,static,address_sanitizer,outdir: -- "$@"`
eval set -- "$VALID_STR"
while true ;
do
case "$1" in
(-h | --help)
printUsage ; exit 0;;
(-c | --clean)
TARGET="clean" ; ((CLEAN_OR_OUT|=1)) ; shift ;;
(-r | --release)
BUILD_TYPE="Release" ; shift ;;
(-a | --address_sanitizer)
set_asan_env_vars
set_address_sanitizer_on ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
--) shift; break;;
(*)
echo " This should never come but just incase : UNEXPECTED ERROR Parm : [$1] ">&2 ; exit 20;;
esac
done
RET_CONFLICT=1
check_conflicting_options $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
if [ $RET_CONFLICT -ge 30 ]; then
print_vars $API_NAME $TARGET $BUILD_TYPE $SHARED_LIBS $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
exit $RET_CONFLICT
fi
clean_rocclr() {
rm -rf "$BUILD_PATH"
rm -rf "$PACKAGE_DEB"
rm -rf "$PACKAGE_RPM"
}
build_rocclr() {
if [ "$SHARED_LIBS" = "ON" ]; then
echo "rocclr not a standalone repo. skipping build" >&2
echo "rocclr not a standalone repo. skipping build"
exit 0
fi
if [ ! -e "$CLR_ROOT/CMakeLists.txt" ]; then
_ROCclr_CMAKELIST_DIR="$CLR_ROOT"
elif [ ! -e "$ROCclr_ROOT/CMakeLists.txt" ]; then
echo "No $ROCclr_ROOT/CMakeLists.txt file, skipping rocclr" >&2
echo "No $ROCclr_ROOT/CMakeLists.txt file, skipping rocclr"
exit 0
else
_ROCclr_CMAKELIST_DIR="$ROCclr_ROOT"
fi
echo "$_ROCclr_CMAKELIST_DIR"
mkdir -p "$BUILD_PATH"
pushd "$BUILD_PATH"
print_lib_type $SHARED_LIBS
if [ ! -e Makefile ]; then
echo "Building ROCclr CMake environment"
cmake -DUSE_COMGR_LIBRARY=ON \
$(rocm_cmake_params) \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DLLVM_INCLUDES="$LLVM_ROOT/include" \
$(rocm_common_cmake_params) \
"$_ROCclr_CMAKELIST_DIR"
echo "CMake complete"
fi
echo "Building ROCclr"
cmake --build . -- $MAKEOPTS "VERBOSE=1"
popd
}
case $TARGET in
(clean) clean_rocclr ;;
(build) build_rocclr ;;
(outdir) exit ;;
(*) die "Invalid target $TARGET" ;;
esac
echo "Operation complete"


@@ -4,14 +4,14 @@ source "$(dirname "${BASH_SOURCE[0]}")/compute_helper.sh"
set_component_src rocDecode
BUILD_DEV=ON
build_rocdecode() {
if [ "$DISTRO_ID" = "centos-7" ] || [ "$DISTRO_ID" = "sles-15.4" ] ; then
echo "Not building rocDecode for ${DISTRO_ID}. Exiting..."
return 0
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
echo "Start build"
mkdir -p $BUILD_DIR && cd $BUILD_DIR
python3 ${COMPONENT_SRC}/rocDecode-setup.py --developer OFF
# python3 ${COMPONENT_SRC}/rocDecode-setup.py --developer OFF
cmake -DROCM_DEP_ROCMCORE=ON ${COMPONENT_SRC}
make -j8


@@ -21,7 +21,7 @@ build_rocfft() {
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908;gfx90a;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx908;gfx90a;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
fi
CXX="${ROCM_PATH}/bin/hipcc" \
@@ -34,7 +34,6 @@ build_rocfft() {
-DBUILD_CLIENTS_SAMPLES=ON \
-DBUILD_CLIENTS_TESTS=ON \
-DBUILD_CLIENTS_RIDER=ON \
-DCPACK_SET_DESTDIR=OFF \
"$COMPONENT_SRC"
cmake --build "$BUILD_DIR" -- -j${PROC}


@@ -0,0 +1,39 @@
#!/bin/bash
set -ex
source "$(dirname "${BASH_SOURCE[0]}")/compute_helper.sh"
set_component_src rocJPEG
BUILD_DEV=ON
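# Configure and build rocJPEG with CMake, generate a deb/rpm with CPack, and stage the package for collection.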
build_rocjpeg() {
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
mkdir -p $BUILD_DIR && cd $BUILD_DIR
# python3 ../rocJPEG-setup.py
cmake -DROCM_DEP_ROCMCORE=ON "$COMPONENT_SRC"
make -j8
make install
make package
cmake --build "$BUILD_DIR" -- -j${PROC}
cpack -G ${PKGTYPE^^}
rm -rf _CPack_Packages/ && find -name '*.o' -delete
mkdir -p $PACKAGE_DIR
cp ${BUILD_DIR}/*.${PKGTYPE} $PACKAGE_DIR
show_build_cache_stats
}
clean_rocjpeg() {
echo "Cleaning rocJPEG build directory: ${BUILD_DIR} ${PACKAGE_DIR}"
rm -rf "$BUILD_DIR" "$PACKAGE_DIR"
echo "Done!"
}
stage2_command_args "$@"
case $TARGET in
build) build_rocjpeg ;;
outdir) print_output_directory ;;
clean) clean_rocjpeg ;;
*) die "Invalid target $TARGET" ;;
esac


@@ -32,7 +32,6 @@ ROCM_CMAKE_BUILD_DIR="$(getBuildPath rocm-cmake)"
ROCM_CMAKE_BUILD_DIR="$(getBuildPath rocm-cmake)"
ROCM_CMAKE_PACKAGE_DEB="$(getPackageRoot)/deb/rocm-cmake"
ROCM_CMAKE_PACKAGE_RPM="$(getPackageRoot)/rpm/rocm-cmake"
ROCM_WHEEL_DIR="${ROCM_CMAKE_BUILD_DIR}/_wheel"
ROCM_CMAKE_BUILD_TYPE="debug"
BUILD_TYPE="Debug"
SHARED_LIBS="ON"
@@ -56,8 +55,6 @@ do
ack_and_ignore_asan ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
(-w | --wheel)
WHEEL_PACKAGE=true ; shift ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
(-p | --package)
@@ -78,7 +75,6 @@ fi
clean_rocm_cmake() {
rm -rf "$ROCM_WHEEL_DIR"
rm -rf $ROCM_CMAKE_BUILD_DIR
rm -rf $ROCM_CMAKE_PACKAGE_DEB
rm -rf $ROCM_CMAKE_PACKAGE_RPM
@@ -106,19 +102,6 @@ build_rocm_cmake() {
copy_if RPM "${CPACKGEN:-"DEB;RPM"}" "$ROCM_CMAKE_PACKAGE_RPM" $ROCM_CMAKE_BUILD_DIR/rocm-cmake*.rpm
}
create_wheel_package() {
echo "Creating rocm-cmake wheel package"
# Copy the setup.py generator to build folder
mkdir -p $ROCM_WHEEL_DIR
cp -f $SCRIPT_ROOT/generate_setup_py.py $ROCM_WHEEL_DIR
cp -f $SCRIPT_ROOT/repackage_wheel.sh $ROCM_WHEEL_DIR
cd $ROCM_WHEEL_DIR
# Currently only supports python3.6
./repackage_wheel.sh $ROCM_CMAKE_BUILD_DIR/rocm-cmake*.rpm python3.6
# Copy the wheel created to RPM folder which will be uploaded to artifactory
copy_if WHL "WHL" "$ROCM_CMAKE_PACKAGE_RPM" "$ROCM_WHEEL_DIR"/dist/*.whl
}
print_output_directory() {
case ${PKGTYPE} in
("deb")
@@ -138,9 +121,4 @@ case $TARGET in
(*) die "Invalid target $TARGET" ;;
esac
if [[ $WHEEL_PACKAGE == true ]]; then
echo "Wheel Package build started !!!!"
create_wheel_package
fi
echo "Operation complete"


@@ -24,10 +24,11 @@ printUsage() {
return 0
}
PROJ_NAME="rocm-core"
PACKAGE_ROOT="$(getPackageRoot)"
ROCM_CORE_BUILD_DIR="$(getBuildPath rocm_core)"
ROCM_CORE_PACKAGE_DEB="$(getPackageRoot)/deb/rocm-core"
ROCM_CORE_PACKAGE_RPM="$(getPackageRoot)/rpm/rocm-core"
ROCM_CORE_PACKAGE_DEB="$(getPackageRoot)/deb/$PROJ_NAME"
ROCM_CORE_PACKAGE_RPM="$(getPackageRoot)/rpm/$PROJ_NAME"
ROCM_CORE_MAKE_OPTS="$DASH_JAY -C $ROCM_CORE_BUILD_DIR"
BUILD_TYPE="Debug"
TARGET="build"


@@ -48,7 +48,7 @@ CLEAN_OR_OUT=0;
MAKETARGET="deb"
PKGTYPE="deb"
LDFLAGS="$LDFLAGS -Wl,--enable-new-dtags"
LIB_AMD_PYTHON="libamdpython.so"
tokeep=(
main${ROCM_INSTALL_PATH}/bin/rocgdb
@@ -123,11 +123,41 @@ package_deb(){
local VERSION
get_version unknown
VERSION="${VERSION}.${ROCM_LIBPATCH_VERSION}"
grep -v '^# ' > "$BUILD_DIR/package/main/DEBIAN/control" <<EOF
# Create the preinst and postrm maintainer scripts
grep -v '^# ' > "$BUILD_DIR/package/main/DEBIAN/preinst" <<EOF
#!/bin/sh
# Pre-installation script commands
echo "Running pre-installation script..."
mkdir -p ${ROCM_INSTALL_PATH}/lib
PYTHON_LIB_INSTALLED=\$(ldconfig -p | awk '/libpython3/ { print \$NF; exit}')
ln -s \$PYTHON_LIB_INSTALLED ${ROCM_INSTALL_PATH}/lib/$LIB_AMD_PYTHON
echo "pre-installation done."
EOF
grep -v '^# ' > "$BUILD_DIR/package/main/DEBIAN/postrm" <<EOF
#!/bin/sh
# Post-uninstallation script commands
echo "Running post-uninstallation script..."
PYTHON_LINK_BY_OPENCL=\$(ldconfig -p | awk '/libpython3/ { print \$NF; exit}' | awk -F'/' '{print \$NF}')
rm -f ${ROCM_INSTALL_PATH}/lib/\$PYTHON_LINK_BY_OPENCL
rm -f ${ROCM_INSTALL_PATH}/lib/$LIB_AMD_PYTHON
if [ -L "${ROCM_INSTALL_PATH}/lib/$LIB_AMD_PYTHON" ] || \
[ -L "${ROCM_INSTALL_PATH}/lib/\$PYTHON_LINK_BY_OPENCL" ] ; then
echo " some rocm-gdb requisite libs could not be removed"
else
echo " all requisite libs removed successfully "
fi
echo "post-uninstallation done."
EOF
chmod +x $BUILD_DIR/package/main/DEBIAN/postrm
chmod +x $BUILD_DIR/package/main/DEBIAN/preinst
# Create control file, with variable substitution.
# Lines with # at the start are removed, to allow for comments
mkdir "$BUILD_DIR/debian"
grep -v '^# ' > "$BUILD_DIR/debian/control" <<EOF
# Required fields
Version: ${VERSION}-${CPACK_DEBIAN_PACKAGE_RELEASE}
Package: ${PROJ_NAME}
Source: ${PROJ_NAME}-src
Maintainer: ROCm Debugger Support <rocm-gdb.support@amd.com>
Description: ROCgdb
This is ROCgdb, the AMD ROCm source-level debugger for Linux,
@@ -137,15 +167,37 @@ Section: utils
Architecture: amd64
Essential: no
Priority: optional
Depends: libexpat1, libtinfo5, libncurses5, rocm-dbgapi, libpython3.10 | libpython3.8, libbabeltrace-ctf1 (>= 1.2.1), libbabeltrace1 (>= 1.2.1), rocm-core
Depends: \${shlibs:Depends}, rocm-dbgapi, rocm-core
EOF
# Use dpkg-shlibdeps to list shlib dependencies, the result is placed
# in $BUILD_DIR/debian/substvars.
(
cd "$BUILD_DIR"
if [[ $ASAN_BUILD == "yes" ]]
then
LD_LIBRARY_PATH=${ROCM_INSTALL_PATH}/lib/asan:$LD_LIBRARY_PATH
fi
dpkg-shlibdeps --ignore-missing-info -e "$BUILD_DIR/package/main/${ROCM_INSTALL_PATH}/bin/rocgdb"
)
# Generate the final DEBIAN/control, and substitute the shlibs:Depends.
# This is a bit unorthodox as we are only using bits and pieces of the
# dpkg tools.
(
SHLIB_DEPS=$(grep "^shlibs:Depends" "$BUILD_DIR/debian/substvars" | \
sed -e "s/shlibs:Depends=//")
sed -E \
-e "/^#/d" \
-e "/^Source:/d" \
-e "s/\\$\{shlibs:Depends\}/$SHLIB_DEPS/" \
< debian/control > "$BUILD_DIR/package/main/DEBIAN/control"
)
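# Assemble the main .deb with fakeroot so the packaged files appear root-owned, using gzip compression.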
mkdir -p "$OUT_DIR/deb/$PROJ_NAME"
fakeroot dpkg-deb -Zgzip --build "$BUILD_DIR/package/main" "$OUT_DIR/deb/$PROJ_NAME"
# Package the tests so they can be run on a test slave
mkdir -p "$BUILD_DIR/package/tests/DEBIAN"
mkdir -p "$BUILD_DIR/package/tests/${ROCM_INSTALL_PATH}/test/gdb"
# Create control file, with variable substitution.
# Lines with # at the start are removed, to allow for comments
grep -v '^# ' > "$BUILD_DIR/package/tests/DEBIAN/control" <<EOF
# Required fields
Version: ${VERSION}-${CPACK_DEBIAN_PACKAGE_RELEASE}
@@ -161,7 +213,6 @@ Priority: optional
# Policy requires every package to depend on rocm-core
Depends: ${PROJ_NAME} (=${VERSION}-${CPACK_DEBIAN_PACKAGE_RELEASE}), dejagnu, rocm-core, make
EOF
copy_testsuite_files
fakeroot dpkg-deb -Zgzip --build "$BUILD_DIR/package/tests" "$OUT_DIR/deb/$PROJ_NAME"
}
@@ -204,7 +255,9 @@ Summary: ROCm source-level debugger for Linux
Version: ${VERSION//-/_}
Release: ${CPACK_RPM_PACKAGE_RELEASE}%{?dist}
License: GPL
Prefix: ${ROCM_INSTALL_PATH}
Requires: rocm-core
Provides: $LIB_AMD_PYTHON()(64bit)
%description
This is ROCgdb, the ROCm source-level debugger for Linux, based on
@@ -225,6 +278,27 @@ https://github.com/RadeonOpenCompute/ROCm
## into the local RPM_BUILD_ROOT and left the defaults take over. Need
## to quote the dollar signs as we want rpm to expand them when it is
## run, rather than the shell when we build the spec file.
%pre
# Pre-installation script commands
echo "Running pre-installation script..."
mkdir -p ${ROCM_INSTALL_PATH}/lib
PYTHON_LIB_INSTALLED=\$(ldconfig -p | awk '/libpython3/ { print \$NF; exit}')
ln -s \$PYTHON_LIB_INSTALLED ${ROCM_INSTALL_PATH}/lib/$LIB_AMD_PYTHON
%postun
# Post-uninstallation script commands
echo "Running post-uninstallation script..."
PYTHON_LINK_BY_OPENCL=\$(ldconfig -p | awk '/libpython3/ { print \$NF; exit}' | awk -F'/' '{print \$NF}')
rm -f ${ROCM_INSTALL_PATH}/lib/\$PYTHON_LINK_BY_OPENCL
rm -f ${ROCM_INSTALL_PATH}/lib/$LIB_AMD_PYTHON
if [ -L "${ROCM_INSTALL_PATH}/lib/$LIB_AMD_PYTHON" ] || \
[ -L "${ROCM_INSTALL_PATH}/lib/\$PYTHON_LINK_BY_OPENCL" ] ; then
echo " some rocm-gdb requisite libs could not be removed"
else
echo " all requisite libs removed successfully "
fi
echo "post-uninstallation done."
%install
rm -rf \$RPM_BUILD_ROOT
mkdir -p \$RPM_BUILD_ROOT
@@ -279,6 +353,7 @@ Summary: Tests for gdb enhanced to debug AMD GPUs
Version: ${VERSION//-/_}
Release: ${RELEASE}
License: GPL
Prefix: ${ROCM_INSTALL_PATH}
Requires: dejagnu, ${PROJ_NAME} = ${VERSION//-/_}-${RELEASE}, rocm-core, make
%description
@@ -340,15 +415,36 @@ build() {
--infodir="\${prefix}/share/info/rocgdb" \
--with-separate-debug-dir="\${prefix}/lib/debug:/usr/lib/debug" \
--with-gdb-datadir="\${prefix}/share/rocgdb" --enable-64-bit-bfd \
--with-bugurl="$BUG_URL" --with-pkgversion="${ROCM_BUILD_ID:-ROCm}" \
--enable-targets="x86_64-linux-gnu,amdgcn-amd-amdhsa" \
--disable-ld --disable-gas --disable-gdbserver --disable-sim --enable-tui \
--disable-gdbtk --disable-shared --disable-gprofng \
--with-expat --with-system-zlib --without-guile --with-babeltrace --with-lzma \
--with-python=$pythonver --with-rocm-dbgapi=$ROCM_INSTALL_PATH \
--with-amd-dbgapi PKG_CONFIG_PATH="${ROCM_INSTALL_PATH}/share/pkgconfig" \
--with-bugurl="$BUG_URL" --with-pkgversion="${ROCM_BUILD_ID:-ROCm}" \
--enable-targets="x86_64-linux-gnu,amdgcn-amd-amdhsa" \
--disable-gas \
--disable-gdbserver \
--disable-gdbtk \
--disable-gprofng \
--disable-ld \
--disable-shared \
--disable-sim \
--enable-tui \
--with-amd-dbgapi \
--with-expat \
--with-lzma \
--with-python=$pythonver \
--with-rocm-dbgapi=$ROCM_INSTALL_PATH \
--with-system-zlib \
--with-zstd \
--without-babeltrace \
--without-guile \
--without-intel-pt \
--without-libunwind-ia64 \
--without-xxhash \
PKG_CONFIG_PATH="${ROCM_INSTALL_PATH}/share/pkgconfig" \
LDFLAGS="$LDFLAGS"
LD_RUN_PATH='${ORIGIN}/../lib' make $MAKE_OPTS
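# Rewrite the gdb binary to depend on the neutral $LIB_AMD_PYTHON name; the preinst/%pre scripts above symlink it to whichever libpython3 the target system provides.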
REPLACE_LIB_NAME=$(ldd -d $BUILD_DIR/gdb/gdb |awk '/libpython/{print $1}')
echo "Replacing $REPLACE_LIB_NAME with $LIB_AMD_PYTHON"
patchelf --replace-needed $REPLACE_LIB_NAME $LIB_AMD_PYTHON $BUILD_DIR/gdb/gdb
mkdir -p $BUILD_DIR/package/main${ROCM_INSTALL_PATH}/{share/rocgdb,bin}
make $MAKE_OPTS -C gdb DESTDIR=$BUILD_DIR/package/main install install-pdf install-html
@@ -381,7 +477,7 @@ main(){
VALID_STR=`getopt -o hcraso:p: --long help,clean,release,static,address_sanitizer,outdir:,package: -- "$@"`
eval set -- "$VALID_STR"
ASAN_BUILD="no"
while true ;
do
case "$1" in
@@ -393,9 +489,10 @@ do
BUILD_TYPE="Release" ; shift ; MAKEARG="$MAKEARG REL=1" ;; # For compatability with other scripts
(-a | --address_sanitizer)
set_asan_env_vars
set_address_sanitizer_on ; shift ;;
set_address_sanitizer_on
ASAN_BUILD="yes" ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
ack_and_skip_static ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
(-p | --package) #FIXME


@@ -7,7 +7,7 @@ printUsage() {
echo "Usage: $(basename "${BASH_SOURCE}") [options ...]"
echo
echo "Options:"
echo " -s, --static Supports static CI by accepting this param & not bailing out. No effect of the param though"
echo " -s, --static Component/Build does not support static builds just accepting this param & ignore. No effect of the param on this build"
echo " -c, --clean Clean output and delete all intermediate work"
echo " -p, --package <type> Specify packaging format"
echo " -r, --release Make a release build instead of a debug build"
@@ -65,7 +65,7 @@ do
set_asan_env_vars
set_address_sanitizer_on ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
ack_and_skip_static ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
(-p | --package)
@@ -127,6 +127,10 @@ build_rocm_bandwidth_test() {
echo "Packaging $TEST_NAME"
cmake --build "$TEST_BLD_DIR" -- $MAKEARG -C $TEST_BLD_DIR package
mkdir -p "$TEST_BIN_DIR"
echo "Copying $TEST_NAME to $TEST_BIN_DIR"
progressCopy "$TEST_BLD_DIR/$TEST_NAME" "$TEST_BIN_DIR"
copy_if DEB "${CPACKGEN:-"DEB;RPM"}" "$TEST_PKG_DEB" $TEST_BLD_DIR/*.deb
copy_if RPM "${CPACKGEN:-"DEB;RPM"}" "$TEST_PKG_RPM" $TEST_BLD_DIR/*.rpm


@@ -23,6 +23,7 @@ printUsage() {
return 0
}
PROJ_NAME="rsmi"
PACKAGE_ROOT="$(getPackageRoot)"
TARGET="build"
@@ -30,8 +31,8 @@ PACKAGE_LIB=$(getLibPath)
PACKAGE_INCLUDE="$(getIncludePath)"
RSMI_BUILD_DIR=$(getBuildPath rsmi)
RSMI_PACKAGE_DEB_DIR="$(getPackageRoot)/deb/rsmi"
RSMI_PACKAGE_RPM_DIR="$(getPackageRoot)/rpm/rsmi"
RSMI_PACKAGE_DEB_DIR="$(getPackageRoot)/deb/$PROJ_NAME"
RSMI_PACKAGE_RPM_DIR="$(getPackageRoot)/rpm/$PROJ_NAME"
RSMI_BUILD_TYPE="debug"
BUILD_TYPE="Debug"


@@ -22,17 +22,16 @@ printUsage() {
return 0
}
PROJ_NAME="rocminfo"
TARGET="build"
ROCMINFO_DEST="$(getBinPath)"
ROCMINFO_SRC_ROOT="$ROCMINFO_ROOT"
ROCMINFO_BUILD_DIR="$(getBuildPath rocminfo)"
ROCMINFO_BUILD_DIR="$(getBuildPath $PROJ_NAME)"
MAKEARG="$DASH_JAY"
PACKAGE_ROOT="$(getPackageRoot)"
PACKAGE_UTILS="$(getUtilsPath)"
ROCMINFO_PACKAGE_DEB="$(getPackageRoot)/deb/rocminfo"
ROCMINFO_PACKAGE_RPM="$(getPackageRoot)/rpm/rocminfo"
ROCMINFO_PACKAGE_DEB="$PACKAGE_ROOT/deb/$PROJ_NAME"
ROCMINFO_PACKAGE_RPM="$PACKAGE_ROOT/rpm/$PROJ_NAME"
BUILD_TYPE="debug"
SHARED_LIBS="ON"
CLEAN_OR_OUT=0;
@@ -91,6 +90,7 @@ build_rocminfo() {
cmake \
$(rocm_cmake_params) \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DROCRTST_BLD_TYPE="$BUILD_TYPE" \
$(rocm_common_cmake_params) \
-DCPACK_PACKAGE_VERSION_MAJOR="1" \


@@ -10,6 +10,10 @@ ROCM_RVS_LIB_RPATH="\$ORIGIN/.."
build_rocmvalidationsuite() {
echo "Start build"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
ack_and_skip_static
fi
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
@@ -17,11 +21,13 @@ build_rocmvalidationsuite() {
cd "${COMPONENT_SRC}"
mkdir -p "$BUILD_DIR"
init_rocm_common_cmake_params
cmake \
$(rocm_common_cmake_params) \
"${rocm_math_common_cmake_params[@]}" \
-DFETCH_ROCMPATH_FROM_ROCMCORE=ON \
-DCMAKE_SHARED_LINKER_FLAGS_INIT="-Wl,--enable-new-dtags,--rpath,$ROCM_LIB_RPATH:$ROCM_RVS_LIB_RPATH" \
-DCMAKE_SHARED_LINKER_FLAGS_INIT="-Wl,--enable-new-dtags,--build-id=sha1,--rpath,$ROCM_LIB_RPATH:$ROCM_RVS_LIB_RPATH" \
-DRVS_BUILD_TESTS=FALSE \
-B "$BUILD_DIR" \
"$COMPONENT_SRC"


@@ -16,12 +16,17 @@ build_rocprim() {
ASAN_CMAKE_PARAMS="false"
fi
SHARED_LIBS="ON"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
SHARED_LIBS="OFF"
fi
mkdir -p "$BUILD_DIR" && cd "$BUILD_DIR"
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
fi
init_rocm_common_cmake_params
@@ -31,8 +36,8 @@ build_rocprim() {
"${rocm_math_common_cmake_params[@]}" \
-DAMDGPU_TARGETS=${GPU_TARGETS} \
-DBUILD_BENCHMARK=OFF \
-DBUILD_SHARED_LIBS=ON \
-DBUILD_TEST=ON \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DCMAKE_MODULE_PATH="${ROCM_PATH}/lib/cmake/hip;${ROCM_PATH}/hip/cmake" \
"$COMPONENT_SRC"


@@ -7,18 +7,18 @@ printUsage() {
echo "Usage: ${BASH_SOURCE##*/} [options ...]"
echo
echo "Options:"
echo " -c, --clean Clean output and delete all intermediate work"
echo " -s, --static Build static lib (.a). build instead of dynamic/shared(.so) "
echo " -p, --package <type> Specify packaging format"
echo " -r, --release Make a release build instead of a debug build"
echo " -h, --help Prints this help"
echo " -a, --address_sanitizer Enable address sanitizer"
echo " -c, --clean Clean output and delete all intermediate work"
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of
type referred to by pkg_type"
echo " -w, --wheel Creates python wheel package of omniperf.
echo " -p, --package <type> Specify packaging format"
echo " -r, --release Make a release build instead of a debug build"
echo " -s, --static Build static lib (.a). build instead of dynamic/shared(.so) "
echo " -w, --wheel Creates python wheel package of ROCm Compute Profiler.
It needs to be used along with -r option"
echo " -h, --help Prints this help"
echo
echo "Possible values for <type>:"
echo "Possible values for package <type>:"
echo " deb -> Debian format (default)"
echo " rpm -> RPM format"
echo
@@ -26,7 +26,7 @@ printUsage() {
return 0
}
API_NAME="omniperf"
API_NAME="rocprofiler-compute"
PROJ_NAME="$API_NAME"
LIB_NAME="lib${API_NAME}"
TARGET="build"
@@ -36,17 +36,13 @@ PACKAGE_LIB="$(getLibPath)"
BUILD_DIR="$(getBuildPath $API_NAME)"
PACKAGE_DEB="$(getPackageRoot)/deb/$API_NAME"
PACKAGE_RPM="$(getPackageRoot)/rpm/$API_NAME"
ROCM_WHEEL_DIR="${BUILD_DIR}/_wheel"
BUILD_TYPE="Debug"
MAKE_OPTS="$DASH_JAY -C $BUILD_DIR"
SHARED_LIBS="ON"
CLEAN_OR_OUT=0;
MAKETARGET="deb"
PKGTYPE="deb"
WHEEL_PACKAGE=false
#parse the arguments
VALID_STR=$(getopt -o hcraso:p:w --long help,clean,release,static,address_sanitizer,outdir:,package:,wheel -- "$@")
eval set -- "$VALID_STR"
@@ -55,22 +51,22 @@ do
case "$1" in
-h | --help)
printUsage ; exit 0;;
-c | --clean)
TARGET="clean" ; ((CLEAN_OR_OUT|=1)) ; shift ;;
-r | --release)
BUILD_TYPE="Release" ; shift ;;
-a | --address_sanitizer)
set_asan_env_vars
set_address_sanitizer_on ; shift ;;
-s | --static)
SHARED_LIBS="OFF" ; shift ;;
-c | --clean)
TARGET="clean" ; ((CLEAN_OR_OUT|=1)) ; shift ;;
-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
-p | --package)
MAKETARGET="$2" ; shift 2 ;;
-r | --release)
BUILD_TYPE="Release" ; shift ;;
-s | --static)
ack_and_skip_static ;;
-w | --wheel)
WHEEL_PACKAGE=true ; shift ;;
--) shift; break;; # end delimiter
--) shift; break;;
*)
echo " This should never come but just incase : UNEXPECTED ERROR Parm : [$1] ">&2 ; exit 20;;
esac
@@ -86,7 +82,6 @@ fi
clean() {
echo "Cleaning $PROJ_NAME"
rm -rf "$ROCM_WHEEL_DIR"
rm -rf "$BUILD_DIR"
rm -rf "$PACKAGE_DEB"
rm -rf "$PACKAGE_RPM"
@@ -97,10 +92,9 @@ clean() {
build() {
echo "Building $PROJ_NAME"
if [ "$DISTRO_ID" = centos-7 ]; then
echo "Skip make and uploading packages for Omniperf on Centos7 distro, due to python dependency"
echo "Skip make and uploading packages for ROCm Compute Profiler on Centos7 distro, due to python dependency"
exit 0
fi
if [ ! -d "$BUILD_DIR" ]; then
mkdir -p "$BUILD_DIR"
pushd "$BUILD_DIR" || exit
@@ -108,16 +102,16 @@ build() {
echo "ROCm CMake Params: $(rocm_cmake_params)"
echo "ROCm Common CMake Params: $(rocm_common_cmake_params)"
#install python deps
#python3 -m pip install -t ${BUILD_DIR}/python-libs -r ${ROCPROFILER_COMPUTE_ROOT}/requirements.txt
print_lib_type $SHARED_LIBS
cmake \
$(rocm_cmake_params) \
$(rocm_common_cmake_params) \
-DCHECK_PYTHON_DEPS=NO \
-DPYTHON_DEPS=${BUILD_DIR}/python-libs \
-DMOD_INSTALL_PATH=${BUILD_DIR}/modulefiles \
"$OMNIPERF_ROOT"
"$ROCPROFILER_COMPUTE_ROOT"
fi
make $MAKE_OPTS
make $MAKE_OPTS install
make $MAKE_OPTS package
@@ -126,22 +120,6 @@ build() {
copy_if RPM "${CPACKGEN:-"DEB;RPM"}" "$PACKAGE_RPM" "$BUILD_DIR/${API_NAME}"*.rpm
}
create_wheel_package() {
echo "Creating Omniperf wheel package"
# Copy the setup.py generator to build folder
mkdir -p "$ROCM_WHEEL_DIR"
cp -f "$SCRIPT_ROOT"/generate_setup_py.py "$ROCM_WHEEL_DIR"
cp -f "$SCRIPT_ROOT"/repackage_wheel.sh "$ROCM_WHEEL_DIR"
cd "$ROCM_WHEEL_DIR" || exit
# Currently only supports python3.6
./repackage_wheel.sh "$BUILD_DIR"/*.rpm python3.6
# Copy the wheel created to RPM folder which will be uploaded to artifactory
copy_if WHL "WHL" "$PACKAGE_RPM" "$ROCM_WHEEL_DIR"/dist/*.whl
}
print_output_directory() {
case ${PKGTYPE} in
("deb")
@@ -163,9 +141,4 @@ case "$TARGET" in
(*) die "Invalid target $TARGET" ;;
esac
if [[ $WHEEL_PACKAGE == true ]]; then
echo "Wheel Package build started !!!!"
create_wheel_package
fi
echo "Operation complete"
echo "Operation complete"


@@ -37,7 +37,6 @@ PACKAGE_INCLUDE="$(getIncludePath)"
BUILD_DIR="$(getBuildPath $API_NAME)"
PACKAGE_DEB="$(getPackageRoot)/deb/$API_NAME"
PACKAGE_RPM="$(getPackageRoot)/rpm/$API_NAME"
ROCM_WHEEL_DIR="${BUILD_DIR}/_wheel"
PACKAGE_PREFIX="$ROCM_INSTALL_PATH"
BUILD_TYPE="Debug"
MAKE_OPTS="$DASH_JAY"
@@ -74,8 +73,7 @@ while true; do
shift
;;
-s | --static)
SHARED_LIBS="OFF"
shift
ack_and_skip_static
;;
-w | --wheel)
WHEEL_PACKAGE=true
@@ -113,7 +111,6 @@ fi
clean() {
echo "Cleaning $PROJ_NAME"
rm -rf "$ROCM_WHEEL_DIR"
rm -rf "$BUILD_DIR"
rm -rf "$PACKAGE_DEB"
rm -rf "$PACKAGE_RPM"
@@ -177,18 +174,6 @@ build_rocprofiler-sdk() {
fi
}
create_wheel_package() {
echo "Creating rocprofiler sdk wheel package"
mkdir -p "$ROCM_WHEEL_DIR"
cp -f "$SCRIPT_ROOT"/generate_setup_py.py "$ROCM_WHEEL_DIR"
cp -f "$SCRIPT_ROOT"/repackage_wheel.sh "$ROCM_WHEEL_DIR"
cd "$ROCM_WHEEL_DIR"
# Currently only supports python3.6
./repackage_wheel.sh "$BUILD_DIR"/*.rpm python3.6
# Copy the wheel created to RPM folder which will be uploaded to artifactory
copy_if WHL "WHL" "$PACKAGE_RPM" "$ROCM_WHEEL_DIR"/dist/*.whl
}
print_output_directory() {
case ${PKGTYPE} in
"deb")
@@ -214,9 +199,4 @@ case "$TARGET" in
*) die "Invalid target $TARGET" ;;
esac
if [[ $WHEEL_PACKAGE == true ]]; then
echo "Wheel Package build started !!!!"
create_wheel_package
fi
echo "Operation complete"


@@ -14,7 +14,7 @@ printUsage() {
echo " -a, --address_sanitizer Enable address sanitizer"
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of
type referred to by pkg_type"
echo " -w, --wheel Creates python wheel package of omnitrace.
echo " -w, --wheel Creates python wheel package of rocprof_sys.
It needs to be used along with -r option"
echo " -h, --help Prints this help"
echo
@@ -26,16 +26,18 @@ printUsage() {
return 0
}
API_NAME="omnitrace"
API_NAME="rocprofiler-systems"
PROJ_NAME="$API_NAME"
LIB_NAME="lib${API_NAME}"
TARGET="build"
MAKETARGET="deb"
PACKAGE_ROOT="$(getPackageRoot)"
PACKAGE_LIB="$(getLibPath)"
BUILD_DIR="$(getBuildPath $API_NAME)"
PACKAGE_DEB="$(getPackageRoot)/deb/$API_NAME"
PACKAGE_RPM="$(getPackageRoot)/rpm/$API_NAME"
BUILD_TYPE="Debug"
MAKE_OPTS="-j 8"
SHARED_LIBS="ON"
@@ -44,11 +46,11 @@ MAKETARGET="deb"
PKGTYPE="deb"
ASAN=0
#parse the arguments
VALID_STR=$(getopt -o hcraso:p:w --long help,clean,release,address_sanitizer,static,outdir:,package:,wheel -- "$@")
eval set -- "$VALID_STR"
while true; do
#echo "parocessing $1"
case "$1" in
-h | --help)
printUsage
@@ -65,17 +67,19 @@ while true; do
;;
-a | --address_sanitizer)
ack_and_ignore_asan
# set_asan_env_vars
# set_address_sanitizer_on
ASAN=1
shift
;;
-s | --static)
SHARED_LIBS="OFF"
shift
ack_and_skip_static
;;
-o | --outdir)
TARGET="outdir"
PKGTYPE=$2
# OUT_DIR_SPECIFIED=1
((CLEAN_OR_OUT |= 2))
shift 2
;;
@@ -84,13 +88,13 @@ while true; do
shift 2
;;
-w | --wheel)
echo "omnitrace: wheel build option accepted and ignored"
WHEEL_PACKAGE=true
shift
;;
--)
shift
break
;;
;; # end delimiter
*)
echo " This should never come but just incase : UNEXPECTED ERROR Parm : [$1] " >&2
exit 20
@@ -115,15 +119,11 @@ clean() {
rm -rf "$PACKAGE_LIB/${LIB_NAME:?}"*
}
build_omnitrace() {
build_rocprofiler_systems() {
echo "Building $PROJ_NAME"
if [ "$DISTRO_ID" = "mariner-2.0" ] || [ "$DISTRO_ID" = "ubuntu-24.04" ] || [ "$DISTRO_ID" = "azurelinux-3.0" ]; then
echo "Skip make and uploading packages for Omnitrace on \"${DISTRO_ID}\" distro"
exit 0
fi
if [ $ASAN == 1 ]; then
echo "Skip make and uploading packages for Omnitrace on ASAN build"
echo "Skip make and uploading packages for rocprofiler-systems on ASAN build"
exit 0
fi
if [ ! -d "$BUILD_DIR" ]; then
@@ -131,24 +131,69 @@ build_omnitrace() {
echo "Created build directory: $BUILD_DIR"
fi
cd $ROCPROFILER_SYSTEMS_ROOT || exit
echo "Current submodule status"
git submodule status
echo "Cached (old) submodule status"
git submodule status --cached
cat .git/config
echo "Updating submodules"
git submodule init
git submodule sync --recursive
git submodule update --init --recursive --force
echo "Updated submodule status"
git submodule status
cat .git/config
echo "Build directory: $BUILD_DIR"
pushd "$BUILD_DIR" || exit
print_lib_type $SHARED_LIBS
ELFUTIL_URL="https://compute-artifactory.amd.com/artifactory/rocm-generic-local/dev-tools/omnitrace/elfutils-0.188.tar.bz2"
BINUTIL_URL="https://compute-artifactory.amd.com/artifactory/rocm-generic-local/dev-tools/omnitrace/binutils-2.40.tar.gz"
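# Internal mirrors of the elfutils and binutils source tarballs, handed to the Dyninst build below via the *_DOWNLOAD_URL options.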
echo "ROCm CMake Params: $(rocm_cmake_params)"
echo "ROCm Common CMake Params: $(rocm_common_cmake_params)"
echo "ELFUTIL_URL=$ELFUTIL_URL, BINUTIL_URL=$BINUTIL_URL"
if [ $ASAN == 1 ]; then
echo "Address Sanitizer path"
# Commenting out the below cmake command as it is not working as expected
# LD_LIBRARY_PATH=$ROCM_INSTALL_PATH/lib/asan:$LD_LIBRARY_PATH
# cmake \
# $(rocm_cmake_params) \
# $(rocm_common_cmake_params) \
# -DROCPROFSYS_BUILD_{LIBUNWIND,DYNINST}=ON \
# -DDYNINST_BUILD_{TBB,BOOST,ELFUTILS,LIBIBERTY}=ON \
# -DAMDDeviceLibs_DIR="${ROCM_INSTALL_PATH}/lib/asan/cmake/AMDDeviceLibs" \
# -Dhip_DIR="${ROCM_INSTALL_PATH}/lib/asan/cmake/hip" \
# -Dhip-lang_DIR="${ROCM_INSTALL_PATH}/lib/asan/cmake/hip-lang" \
# -Damd_comgr_DIR="${ROCM_INSTALL_PATH}/lib/asan/cmake/amd_comgr" \
# -Dhsa-runtime64_DIR="${ROCM_INSTALL_PATH}/lib/asan/cmake/hsa-runtime64" \
# -Dhsakmt_DIR="${ROCM_INSTALL_PATH}/lib/asan/cmake/hsakmt" \
# -DROCM_PATH="${ROCM_INSTALL_PATH}/lib/asan" \
# -Drocprofiler_ROOT_DIR="${ROCM_INSTALL_PATH}/lib/asan" \
# -DCMAKE_HIP_COMPILER_ROCM_ROOT="${ROCM_INSTALL_PATH}" \
# -DCMAKE_PREFIX_PATH="${ROCM_INSTALL_PATH};${ROCM_INSTALL_PATH}/lib/asan" \
# -DCMAKE_LIBRARY_PATH="${ROCM_INSTALL_PATH}/lib/asan" \
# -DCPACK_DEBIAN_PACKAGE_SHLIBDEPS=OFF \
# "$ROCPROFILER_SYSTEMS_ROOT"
else
cmake \
$(rocm_cmake_params) \
$(rocm_common_cmake_params) \
-DOMNITRACE_BUILD_{LIBUNWIND,DYNINST}=ON \
-DROCPROFSYS_BUILD_{LIBUNWIND,DYNINST}=ON \
-DDYNINST_BUILD_{TBB,BOOST,ELFUTILS,LIBIBERTY}=ON \
"$OMNITRACE_ROOT"
-DElfUtils_DOWNLOAD_URL="$ELFUTIL_URL" \
-D{DYNINST,TIMEMORY}_BINUTILS_DOWNLOAD_URL="$BINUTIL_URL" \
"$ROCPROFILER_SYSTEMS_ROOT"
fi
@@ -182,10 +227,10 @@ print_output_directory() {
verifyEnvSetup
case "$TARGET" in
clean) clean ;;
build) build_omnitrace ;;
outdir) print_output_directory ;;
*) die "Invalid target $TARGET" ;;
clean) clean ;;
build) build_rocprofiler_systems ;;
outdir) print_output_directory ;;
*) die "Invalid target $TARGET" ;;
esac
echo "Operation complete"


@@ -8,7 +8,7 @@ printUsage() {
echo
echo "Options:"
echo " -c, --clean Clean output and delete all intermediate work"
echo " -s, --static Build static lib (.a). build instead of dynamic/shared(.so) "
echo " -s, --static Component/Build does not support static builds just accepting this param & ignore. No effect of the param on this build"
echo " -p, --package <type> Specify packaging format"
echo " -r, --release Make a release build instead of a debug build"
echo " -a, --address_sanitizer Enable address sanitizer"
@@ -42,9 +42,9 @@ SHARED_LIBS="ON"
CLEAN_OR_OUT=0
MAKETARGET="deb"
PKGTYPE="deb"
GPU_LIST="gfx900,gfx906,gfx908,gfx90a,gfx940,gfx941,gfx942,gfx1030,gfx1100,gfx1101,gfx1102"
GPU_LIST="gfx900,gfx906,gfx908,gfx90a,gfx940,gfx941,gfx942,gfx1030,gfx1031,gfx1100,gfx1101,gfx1102,gfx1200,gfx1201"
VALID_STR=$(getopt -o hcraso:p: --long help,clean,release,static,address_sanitizer,outdir:,package: -- "$@")
VALID_STR=$(getopt -o hcraswo:p: --long help,clean,release,static,wheel,address_sanitizer,outdir:,package: -- "$@")
eval set -- "$VALID_STR"
while true; do
@@ -68,7 +68,7 @@ while true; do
shift
;;
-s | --static)
SHARED_LIBS="OFF"
ack_and_skip_static
shift
;;
-o | --outdir)
@@ -131,7 +131,9 @@ build_rocprofiler() {
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DENABLE_LDCONFIG=OFF \
-DUSE_PROF_API=1 \
-DUSE_GET_ROCM_PATH_API=1 \
-DGPU_TARGETS="$GPU_LIST" \
-DPython3_EXECUTABLE=$(which python3) \
-DPROF_API_HEADER_PATH="$WORK_ROOT/roctracer/inc/ext" \
-DHIP_HIPCC_FLAGS=$HIP_HIPCC_FLAGS";--offload-arch=$GPU_LIST" \
-DCPACK_OBJCOPY_EXECUTABLE="${ROCM_INSTALL_PATH}/llvm/bin/llvm-objcopy" \

tools/rocm-build/build_rocr.sh (new executable file, 360 lines)

@@ -0,0 +1,360 @@
#!/bin/bash
source "$(dirname "${BASH_SOURCE}")/compute_utils.sh"
PROJ_NAME="rocr"
printUsage() {
echo
echo "Usage: $(basename "${BASH_SOURCE}") [options ...] [make options]"
echo
echo "Options:"
echo " -c, --clean Clean output and delete all intermediate work"
echo " -r, --release Make a release build instead of a debug build"
echo " -a, --address_sanitizer Enable address sanitizer"
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of type referred to by pkg_type"
echo " -h, --help Prints this help"
echo " -s, --static Build static lib (.a). build instead of dynamic/shared(.so) "
echo " -n, --norocr Don't build ROCr runtime (default is to build). This implies --norocrtst."
echo " -k, --nokfdtest Don't build kfdtest (default is to build)"
echo " -w, --wheel Creates python wheel packages. It needs to be used along with -r option"
echo " -t, --norocrtst Don't build rocrtst (default is to build)"
echo ""
echo " rocrtst options:"
echo " -e, --emulator Build a version suitable for running on emulator"
echo " -g, --gpu_list <gpus> Quoted, semi-colon separated list of gpu architectures that"
echo " kernels will run on; e.g., \"gfx803;gfx900;...\" the"
echo " default is to build kernels for all supported architectures."
echo
echo "Default build: debug, shared libs"
return 0
}
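# Configure, build, install, and package the ROCr runtime; static builds call install_drmStatic_lib first, and the resulting hsa-rocr packages are copied to the deb/rpm output directories.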
build_rocr_runtime() {
echo "Build ROCr Runtime"
echo "$ROCR_ROOT"
if [ "$shared_libs" == "OFF" ]; then
install_drmStatic_lib
fi
if [ ! -d "$rocr_build_dir" ]; then
mkdir -p "$rocr_build_dir"
pushd "$rocr_build_dir" || { echo "Failed to pushd into $rocr_build_dir"; exit 1; }
print_lib_type "$shared_libs"
cmake \
$(rocm_cmake_params) \
-DBUILD_SHARED_LIBS="$shared_libs" \
-DBUILD_ROCR="$rocr_target" \
-DENABLE_LDCONFIG=OFF \
$(rocm_common_cmake_params) \
-DADDRESS_SANITIZER="$ADDRESS_SANITIZER" \
-DROCM_INSTALL_PATH="$ROCM_INSTALL_PATH" \
-DCPACK_GENERATOR="${CPACKGEN:-"DEB;RPM"}" \
-DTHUNK_DEFINITIONS="$thunk_defines_string" \
-DROCR_DEFINITIONS="$rocr_defines_string" \
"$ROCR_ROOT"
popd
fi
cmake --build "$rocr_build_dir" --verbose -- $DASH_JAY
cmake --build "$rocr_build_dir" --target install --verbose
cmake --build "$rocr_build_dir" --target package --verbose
mkdir -p "$package_lib"
copy_if DEB "${CPACKGEN:-"DEB;RPM"}" "$package_root_deb" "$rocr_build_dir"/hsa-rocr*.deb
copy_if RPM "${CPACKGEN:-"DEB;RPM"}" "$package_root_rpm" "$rocr_build_dir"/hsa-rocr*.rpm
}
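# Build the rocrtst test suite (optionally limited to the architectures in gpu_list), package it, and copy the binaries, packages, and run script to the output locations.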
build_rocrtst() {
rocrtst_build_type="debug"
mkdir -p "$rocrtst_build_dir"
pushd "$rocrtst_build_dir" || { echo "Failed to pushd into $rocrtst_build_dir"; exit 1; }
BUILD_TYPE=
if [[ $gpu_list ]]; then
cmake -DTARGET_DEVICES="$gpu_list" \
-DROCRTST_BLD_TYPE="$rocrtst_build_type" \
-DBUILD_SHARED_LIBS="$shared_libs" \
-DCMAKE_PREFIX_PATH="$ROCM_INSTALL_PATH;$ROCM_INSTALL_PATH/llvm" \
-DCMAKE_VERBOSE_MAKEFILE=1 \
$(rocm_common_cmake_params) \
-DCMAKE_INSTALL_PREFIX="$ROCM_INSTALL_PATH" \
-DCPACK_PACKAGING_INSTALL_PREFIX="$ROCM_INSTALL_PATH" \
-DCPACK_GENERATOR="${CPACKGEN:-"DEB;RPM"}" \
-DROCM_PATCH_VERSION="$ROCM_LIBPATCH_VERSION" \
-DROCM_DIR="$ROCM_INSTALL_PATH" \
-DLLVM_DIR="$ROCM_INSTALL_PATH/llvm/bin" \
-DOPENCL_DIR="$ROCM_INSTALL_PATH" \
-DEMULATOR_BUILD="$emulator_build" \
"$rocrtst_src_root"
else
$ADDRESS_SANITIZER cmake -DROCRTST_BLD_TYPE="$rocrtst_build_type" \
-DCMAKE_VERBOSE_MAKEFILE=1 \
-DBUILD_SHARED_LIBS="$shared_libs" \
-DCMAKE_PREFIX_PATH="$ROCM_INSTALL_PATH;$ROCM_INSTALL_PATH/llvm" \
-DCMAKE_INSTALL_PREFIX="$ROCM_INSTALL_PATH" \
-DCPACK_PACKAGING_INSTALL_PREFIX="$ROCM_INSTALL_PATH" \
-DCPACK_GENERATOR="${CPACKGEN:-"DEB;RPM"}" \
$(rocm_common_cmake_params) \
-DROCM_PATCH_VERSION="$ROCM_LIBPATCH_VERSION" \
-DROCM_DIR="$ROCM_INSTALL_PATH" \
-DLLVM_DIR="$ROCM_INSTALL_PATH/llvm/bin" \
-DOPENCL_DIR="$ROCM_INSTALL_PATH" \
-DEMULATOR_BUILD="$emulator_build" \
"$rocrtst_src_root"
fi
echo "Making rocrtst:"
echo "MAKEARG=$MAKEARG [eom]"
cmake --build . -- $DASH_JAY
cmake --build . -- rocrtst_kernels
cmake --build . -- package || true
mkdir -p "$rocrtst_package"
echo "Copying rocrtst binaries to $rocrtst_package"
progressCopy "$rocrtst_build_dir" "$rocrtst_package"
progressCopy "$ROCRTST_ROOT/thirdparty" "$rocrtst_package/thirdparty" || true
DEB_FILE=(./rocrtst*.deb)
if [ -e "${DEB_FILE[0]}" ]; then
mkdir -p "$package_root_deb"
progressCopy "${DEB_FILE[@]}" "$package_root_deb"
fi
RPM_FILE=(./rocrtst*.rpm)
if [ -e "${RPM_FILE[0]}" ]; then
mkdir -p "$package_root_rpm"
progressCopy "${RPM_FILE[@]}" "$package_root_rpm"
fi
mkdir -p "$package_utils"
progressCopy "$SCRIPT_ROOT/run_rocrtst.sh" "$package_utils"
popd
}
file_exists(){
set -- $1
[ -e "$1" ]
}
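# Build and package kfdtest, then copy the test binaries, exclude list, and run scripts alongside any generated deb/rpm packages.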
build_kfdtest() {
echo "Building kfdtest"
mkdir -p "$kfdtest_build_dir"
pushd "$kfdtest_build_dir" || { echo "Failed to pushd into $kfdtest_build_dir"; exit 1; }
cmake \
-DCMAKE_BUILD_TYPE="$build_type" \
-DBUILD_SHARED_LIBS="$shared_libs" \
-DCMAKE_PREFIX_PATH="${ROCM_INSTALL_PATH}" \
-DCPACK_PACKAGING_INSTALL_PREFIX="$ROCM_INSTALL_PATH" \
$(rocm_common_cmake_params) \
-DADDRESS_SANITIZER="$ADDRESS_SANITIZER" \
-DCMAKE_INSTALL_RPATH_USE_LINK_PATH="FALSE" \
-DCPACK_GENERATOR="${CPACKGEN:-"DEB;RPM"}" \
-DCPACK_RPM_DEBUGINFO_PACKAGE=YES \
-DCPACK_RPM_PACKAGE_DEBUG=YES \
-DCMAKE_SKIP_BUILD_RPATH=TRUE \
-DCMAKE_EXE_LINKER_FLAGS="-Wl,--enable-new-dtags -Wl,--rpath,$ROCM_RPATH $LDFLAGS" \
"$kfdtest_src_root"
cmake --build . -- $DASH_JAY
cmake --build . -- package || true
popd
mkdir -p "$kfdtest_bin"
progressCopy "$kfdtest_build_dir" "$kfdtest_bin"
progressCopy "$kfdtest_build_dir/kfdtest.exclude" "$kfdtest_bin"
progressCopy "$kfdtest_build_dir/run_kfdtest.sh" "$kfdtest_bin"
mkdir -p "$package_utils"
progressCopy "$SCRIPT_ROOT/run_kfdtest.sh" "$package_utils"
if file_exists $kfdtest_build_dir/kfdtest*.deb ; then
mkdir -p "$package_root_deb"
cp "$kfdtest_build_dir"/kfdtest*.deb "$package_root_deb"
fi
if file_exists "$kfdtest_build_dir"/kfdtest*.rpm ; then
mkdir -p "$package_root_rpm"
cp $kfdtest_build_dir/kfdtest*.rpm "$package_root_rpm"
fi
}
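# Remove ROCr build output, installed runtime files, and generated packages, then clean the rocrtst and kfdtest artifacts as well.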
clean_rocr_runtime() {
echo "Cleaning ROCr Runtime"
rm -f $package_lib/libhsakmt.so*
rm -f $package_lib/libhsakmt.a
rm -f $package_lib/libhsakmt-staticdrm.a
rm -f $package_include/hsakmt*.h $package_include/linux/kfd_ioctl.h
rm -rf "${runtime_build_dir}"
rm -f "$package_root"/lib/libhsa-runtime*
rm -rf "$package_root/lib/cmake/hsa-runtime64"
rm -rf "$package_root/include/hsa"
rm -rf "$package_root/share/doc/hsa-runtime64"
rm -f "$package_root_deb"/hsa-rocr*.deb
rm -f "$package_root_rpm"/hsa-rocr*.rpm
rm -f "$package_root_rpm"/hsa_rocr*.whl
rm -rf "$PACKAGE_ROOT/hsa"
clean_rocrtst
clean_kfdtest
}
clean_rocrtst() {
echo "Cleaning rocrtst"
rm -rf "${rocrtst_package}"
rm -rf "${rocrtst_build_dir}"
rm -rf "${package_root_deb}"/rocrtst*.deb
rm -rf "${package_root_rpm}"/rocrtst*.rpm
}
clean_kfdtest() {
echo "Cleaning kfdtest"
rm -rf "$kfdtest_build_dir"
rm -rf "$kfdtest_bin"
rm -rf "$package_root_deb"/kfdtest*.deb
rm -rf "$package_root_rpm"/kfdtest*.rpm
}
print_output_directory() {
case ${pkgtype} in
("deb")
echo "${package_root_deb}";;
("rpm")
package_rpm="some_value"
echo "${package_root_rpm}";;
(*)
echo "Invalid package type \"${pkgtype}\" provided for -o" >&2; exit 1;;
esac
exit
}
target="build"
kfdtest_target="yes"
rocrtst_target="yes"
rocr_target="ON"
package_root="$(getPackageRoot)"
package_root_deb="${package_root}/deb/$PROJ_NAME"
package_root_rpm="${package_root}/rpm/$PROJ_NAME"
package_lib="$(getLibPath)"
package_include="$(getIncludePath)"
runtime_build_dir="$(getBuildPath runtime)"
BUILD_TYPE="Debug"
shared_libs="ON"
clean_or_out=0;
maketarget="deb"
pkgtype="deb"
WHEEL_PACKAGE=false
thunk_defines_string=
roct_build_dir="${runtime_build_dir}/libhsakmt"
rocr_defines_string=
rocr_build_dir="${runtime_build_dir}/$PROJ_NAME"
rocrtst_package="$(getBinPath)/rocrtst_tests"
rocrtst_build_dir="${runtime_build_dir}/rocrtst"
rocrtst_src_root="$ROCRTST_ROOT/suites/test_common"
emulator_build=0
kfdtest_src_root="$ROCR_ROOT/libhsakmt/tests/kfdtest"
kfdtest_bin="$(getBinPath)/kfdtest"
package_utils="$(getUtilsPath)"
kfdtest_build_dir=${runtime_build_dir}/kfdtest
unset HIP_DEVICE_LIB_PATH
unset ROCM_PATH
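# Normalize the command line with getopt, re-set the positional parameters,
# then consume one option per iteration in the while/case loop below.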
valid_str=$(getopt -o hcraswnkteg:o: --long help,clean,release,static,wheel,address_sanitizer,norocr,nokfdtest,norocrtst,emulator,gpu_list:,outdir: -- "$@")
eval set -- "$valid_str"
while true ;
do
case "$1" in
(-h | --help)
printUsage ; exit 0;;
(-c | --clean)
target="clean" ; ((clean_or_out|=1)) ; shift ;;
(-r | --release)
BUILD_TYPE="RelWithDebInfo" ; shift ;;
(-a | --address_sanitizer)
set_asan_env_vars
set_address_sanitizer_on ; shift ;;
(-s | --static)
shared_libs="OFF" ; shift ;;
(-w | --wheel)
WHEEL_PACKAGE=true ; shift ;;
(-n | --norocr)
rocr_target="OFF"
rocrtst_target="no"; shift ;;
(-k | --nokfdtest)
kfdtest_target="no" ; shift ;;
(-t | --norocrtst)
rocrtst_target="no" ; shift ;;
(-e | --emulator )
emulator_build=1 ; shift ;;
(-g | --gpu_list )
gpu_list=$2 ; shift 2;;
(-o | --outdir)
target="outdir"; pkgtype=$2 ; OUT_DIR_SPECIFIED=1 ; ((clean_or_out|=2)) ; shift 2 ;;
--) shift; break;; # end delimiter
(*)
echo " ${BASH_SOURCE}: UNEXPECTED ERROR Parm : [$1] ">&2 ; exit 22;;
esac
done
ret_conflict=1
check_conflicting_options $clean_or_out $pkgtype $maketarget
if [ $ret_conflict -ge 30 ]; then
print_vars $API_NAME $target $BUILD_TYPE $shared_libs $clean_or_out $pkgtype $maketarget
exit $ret_conflict
fi
case $target in
(clean) clean_rocr_runtime ;;
(build) build_rocr_runtime;;
(outdir) print_output_directory ;;
(*) die "Invalid target $target" ;;
esac
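# checkchild <pid> <name>: wait on a backgrounded build job and die with the
# job's exit code if it failed. It is referenced by the commented-out
# parallel rocrtst/kfdtest build block below.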
checkchild(){
if wait "$1"; then
return;
else
die "$2 failed with exit code $?"
fi
}
# if [ "$target" != "clean" ]; then
# if [ "$rocrtst_target" == "yes" ]; then
# build_rocrtst &
# else
# true & # Dummy build_rocrtst
# fi
# rocrtst_pid=$!
# if [ "$kfdtest_target" == "yes" ]; then
# build_kfdtest &
# else
# true & # Dummy build_kfdtest
# fi
# kfdtest_pid=$!
# checkchild $kfdtest_pid kfdtest
# checkchild $rocrtst_pid rocrtst
# fi
echo "Operation complete"


@@ -6,7 +6,7 @@ printUsage() {
echo "Usage: $(basename "${BASH_SOURCE}") [options ...]"
echo
echo "Options:"
echo " -s, --static Supports static CI by accepting this param & not bailing out. No effect of the param though"
echo " -s, --static Component/Build does not support static builds just accepting this param & ignore. No effect of the param on this build"
echo " -c, --clean Clean output and delete all intermediate work"
echo " -p, --package <type> Specify packaging format"
echo " -r, --release Make a release build instead of a debug build"
@@ -70,7 +70,7 @@ do
set_asan_env_vars
set_address_sanitizer_on ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
ack_and_skip_static ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
(-p | --package)


@@ -8,6 +8,11 @@ set_component_src rocRAND
build_rocrand() {
echo "Start build"
SHARED_LIBS="ON"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
SHARED_LIBS="OFF"
fi
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
@@ -31,6 +36,7 @@ build_rocrand() {
cmake \
${LAUNCHER_FLAGS} \
"${rocm_math_common_cmake_params[@]}" \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DAMDGPU_TARGETS=${GPU_TARGETS} \
-DBUILD_TEST=ON \
-DBUILD_BENCHMARK=ON \


@@ -0,0 +1,124 @@
#!/bin/bash
source "$(dirname "${BASH_SOURCE}")/compute_utils.sh"
printUsage() {
echo
echo "Usage: $(basename "${BASH_SOURCE}") [-c|-r|-h] [makeopts]"
echo
echo "Options:"
echo " -s, --static Component/Build does not support static builds just accepting this param & ignore. No effect of the param on this build"
echo " -c, --clean Removes all RocR Samples build artifacts"
echo " -e, --emulator Build a version suitable for running on emulator"
echo " -r, --release Build release version RocR Samples (default is debug)"
echo " -a, --address_sanitizer Enable address sanitizer"
echo " -g, --gpu_list <gpus> Semi-colon separated List of gpu architectures that"
echo " kernels will run on; e.g., \"gfx803;gfx900;...\" the"
echo " default is to build kernels for all supported architectures."
echo " -h, --help Prints this help"
echo " makeopts Options to pass to the make command"
echo
return 0
}
GPU_LIST="gfx803;gfx701;gfx801;gfx802;gfx900;gfx902;gfx906;gfx908"
TARGET="build"
ROCRTST_SAMPLES_PACKAGE="$(getBinPath)/rocrtst_samples"
ROCRTST_SAMPLES_ROOT="$ROCRTST_ROOT/samples"
ROCRTST_SAMPLES_BUILD_DIR="$(getBuildPath rocrtst_samples)"
MAKEARG="$DASH_JAY"
PACKAGE_ROOT="$(getPackageRoot)"
PACKAGE_UTILS="$(getUtilsPath)"
ROCRTST_SAMPLES_BUILD_TYPE="debug"
EMULATOR_BUILD=0
SHARED_LIBS="ON"
CLEAN_OR_OUT=0;
MAKETARGET="deb"
PKGTYPE="deb"
# Parse the command-line arguments
VALID_STR=$(getopt -o hcrao:seg: --long help,clean,release,outdir:,static,address_sanitizer,emulator,gpu_list: -- "$@")
eval set -- "$VALID_STR"
while true ;
do
case "$1" in
(-h | --help)
printUsage ; exit 0;;
(-c | --clean)
TARGET="clean" ; ((CLEAN_OR_OUT|=1)) ; shift ;;
(-r | --release)
ROCRTST_SAMPLES_BUILD_TYPE="release"; shift ;;
(-a | --address_sanitizer)
set_asan_env_vars
set_address_sanitizer_on ; shift ;;
(-o | --outdir )
exit ;;
(-s | --static)
ack_and_skip_static ;;
(-e | --emulator )
EMULATOR_BUILD=1 ; ((CLEAN_OR_OUT|=3)) ; shift ;;
(-g | --gpu_list )
GPU_LIST=$2 ; shift 2;;
--) shift; break;; # end delimiter
(*)
echo " This should never come but just incase : UNEXPECTED ERROR Parm : [$1] ">&2 ; exit 20;;
esac
done
RET_CONFLICT=1
check_conflicting_options $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
if [ $RET_CONFLICT -ge 30 ]; then
print_vars $API_NAME $TARGET $BUILD_TYPE $SHARED_LIBS $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
exit $RET_CONFLICT
fi
clean_rocrsamples() {
echo "Removing ROCR Samples"
rm -rf "$ROCRTST_SAMPLES_PACKAGE"
rm -rf "$ROCRTST_SAMPLES_BUILD_DIR"
}
build_rocrsamples() {
mkdir -p "$ROCRTST_SAMPLES_BUILD_DIR"
pushd "$ROCRTST_SAMPLES_BUILD_DIR"
cmake -DTARGET_DEVICES="$GPU_LIST" \
$(rocm_cmake_params) \
$(rocm_common_cmake_params) \
-DROCRTST_BLD_TYPE="$ROCRTST_SAMPLES_BUILD_TYPE" \
-DROCM_DIR="$PACKAGE_ROOT" \
-DLLVM_DIR="$ROCM_INSTALL_PATH/llvm/bin" \
-DOPENCL_DIR="$ROCM_INSTALL_PATH" \
-DEMULATOR_BUILD="$EMULATOR_BUILD" \
"$ROCRTST_SAMPLES_ROOT"
echo "Making ROCR Samples:"
cmake --build . -- $MAKEARG
cmake --build . -- sample_kernels
mkdir -p "$ROCRTST_SAMPLES_PACKAGE"
echo "Copying HSA Sample binaries to $ROCRTST_SAMPLES_PACKAGE"
progressCopy "$ROCRTST_SAMPLES_BUILD_DIR" "$ROCRTST_SAMPLES_PACKAGE"
mkdir -p "$PACKAGE_UTILS"
progressCopy "$SCRIPT_ROOT/run_rocrsamples.sh" "$PACKAGE_UTILS"
popd
}
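# Illustrative invocation (the script name here is hypothetical):
#   ./build_rocrtst_samples.sh -r -g "gfx906;gfx908"   # release samples for two GPU targets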
case $TARGET in
clean) clean_rocrsamples ;;
build) build_rocrsamples ;;
*) die "Invalid target $target" ;;
esac
echo "Operation complete"
exit 0


@@ -9,13 +9,19 @@ set_component_src rocSOLVER
build_rocsolver() {
echo "Start build"
cd $COMPONENT_SRC
SHARED_LIBS="ON"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
SHARED_LIBS="OFF"
fi
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
fi
cd $COMPONENT_SRC
mkdir -p "$BUILD_DIR" && cd "$BUILD_DIR"
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
@@ -25,18 +31,17 @@ build_rocsolver() {
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx900;gfx906:xnack-;gfx908:xnack-;gfx90a:xnack+;gfx90a:xnack-;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
fi
init_rocm_common_cmake_params
CXX="${ROCM_PATH}/bin/hipcc" \
cmake \
-DCPACK_SET_DESTDIR=OFF \
${LAUNCHER_FLAGS} \
"${rocm_math_common_cmake_params[@]}" \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-Drocblas_DIR="${ROCM_PATH}/rocblas/lib/cmake/rocblas" \
-DAMDGPU_TARGETS=${GPU_TARGETS} \
-DAMDGPU_TARGETS="${GPU_TARGETS}" \
-DBUILD_CLIENTS_TESTS=ON \
-DBUILD_ADDRESS_SANITIZER="${ADDRESS_SANITIZER}" \
-DBUILD_CLIENTS_BENCHMARKS=ON \


@@ -16,6 +16,11 @@ build_rocsparse() {
set_address_sanitizer_on
fi
SHARED_LIBS="ON"
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
SHARED_LIBS="OFF"
fi
MIRROR="http://compute-artifactory.amd.com/artifactory/list/rocm-generic-local/mathlib/sparse/"
mkdir -p "$BUILD_DIR" && cd "$BUILD_DIR"
@@ -23,7 +28,7 @@ build_rocsparse() {
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908:xnack-;gfx90a:xnack-;gfx90a:xnack+;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx900;gfx906:xnack-;gfx908:xnack-;gfx90a:xnack+;gfx90a:xnack-;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
fi
ROCSPARSE_TEST_MIRROR=$MIRROR \
@@ -34,11 +39,11 @@ build_rocsparse() {
cmake \
-DAMDGPU_TARGETS=${GPU_TARGETS} \
${LAUNCHER_FLAGS} \
"${rocm_math_common_cmake_params[@]}"\
"${rocm_math_common_cmake_params[@]}" \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DBUILD_CLIENTS_SAMPLES=ON \
-DBUILD_CLIENTS_TESTS=ON \
-DBUILD_CLIENTS_BENCHMARKS=ON \
-DCPACK_SET_DESTDIR=OFF \
-DCMAKE_INSTALL_PREFIX=${ROCM_PATH} \
-DBUILD_ADDRESS_SANITIZER="${ADDRESS_SANITIZER}" \
-DCMAKE_MODULE_PATH="${ROCM_PATH}/lib/cmake/hip;${ROCM_PATH}/hip/cmake" \
