Compare commits

...

22 Commits

Author SHA1 Message Date
Istvan Kiss
56de4cda80 Replace "-" on precision support page 2025-03-10 13:26:57 +01:00
dependabot[bot]
6db5bee4dd Build(deps): Bump rocm-docs-core from 1.17.0 to 1.17.1 in /docs/sphinx (#4442)
Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.17.0 to 1.17.1.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.17.0...v1.17.1)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-07 17:06:17 -07:00
Joseph Macaranas
cb27cda5c7 External CI: Add default ubuntu repos to sources.list (#4464)
- Also add the --fix-missing flag to the apt-get install step
2025-03-07 17:36:25 -05:00
Daniel Su
1d9ecdef44 Ex CI: temporarily change from low pool to base pool (#4463) 2025-03-07 17:15:32 -05:00
Daniel Su
29509640e7 Ex CI: fixes for rocMLIR, rocPyDecode, RVS, rocprof-compute (#4462) 2025-03-07 16:46:05 -05:00
Pratik Basyal
9aad9ce7ef Content for modprobe added to MI300X system optimization (#4434)
Added content for modprobe
2025-03-07 14:52:20 -05:00
Daniel Su
2bcd398de6 Ex CI: make component CTests nonverbose (#4458) 2025-03-06 17:38:24 -05:00
Daniel Su
a0b91d17ff Ex CI: make disk space print optional, small ROCr and manifest tweaks (#4457) 2025-03-06 16:13:19 -05:00
Daniel Su
c83677f41c Ex CI: enable gfx90a tests (#4450) 2025-03-06 13:50:12 -05:00
Daniel Su
e38b3aea50 Ex CI: update to 6.3.4, fixes for rocm-smi and rocWMMA (#4455)
* Ex CI: update to 6.3.4

* fix rocm-smi not installing apt packages

* extend rocWMMA test timeout to 2 hours
2025-03-06 13:34:56 -05:00
Adel Johar
cad7b92954 Merge pull request #4385 from ROCm/docs_versions
Docs: use custom directive to reference library versions
2025-03-05 15:01:25 +01:00
Adel Johar
cd85ccd539 Docs: use custom directive to reference library versions 2025-03-05 10:24:22 +01:00
alexxu-amd
de4ac7a5a3 Merge pull request #4438 from ROCm/alexxu12/md-file-fix
Fix important block from CONTRIBUTING.md
2025-03-04 13:08:51 -05:00
Peter Park
fa0e212906 Fix applies to linux tag for training benchmark docker pages (#4446) 2025-03-04 12:06:55 -05:00
Daniel Su
84001e176e Ex CI: increase hipBLASLt test timeout to 2 hours (#4445) 2025-03-04 10:53:05 -05:00
Joseph Macaranas
9cd2706fdb External CI: Set hipSPARSELt Fortran compiler to f95 (#4441)
- Explicitly set Fortran compiler to account for recent llvm-project changes that were meant to help with aomp issues.
2025-03-03 16:43:37 -05:00
Alex Xu
13be0b6a51 fix important block 2025-03-03 14:35:33 -05:00
Alex Xu
efefa0f43e fix important block 2025-03-03 14:12:05 -05:00
Daniel Su
4d15adf284 Ex CI: fix rocm-cmake tests, update component branch names (#4433) 2025-02-28 13:57:06 -05:00
Peter Park
1fb42c2591 Update LLM inference performance validation on AMD Instinct MI300X guide to filter by desired model (#4424)
* WIP

(cherry picked from commit a06a5b5b959a9425e7384fb58b88c3716f380e48)

rm unneeded files

(cherry picked from commit f1d0c00056a83299bdea74a43cd17454999cf2d8)

* add sphinxcontrib.datatemplates

(cherry picked from commit d056b93a325d87b81f54f70c6eb4ae78f4fb0bc1)

* add template

(cherry picked from commit 0691d59f0a1efbda7908762b7a906e30a65c0ee1)

fix template

(cherry picked from commit 01e4bea5522aa5deeaade58c105ff850f449df8b)

WIP

(cherry picked from commit 4d8daf7445e7be92cd9ee1d39dff564bd8de41f4)

WIP

(cherry picked from commit 9eefd1f5833bc4dc8de9d777ff65a5fe5f826dbd)

update models yaml schema

(cherry picked from commit a5f0fc1e6cc51104dc2d42029bfcf3eea276d270)

add model groups functionality

(cherry picked from commit 13f49f96dd3e5a160d37c52e48a4fbcccdcf4f9e)

add selector headings and fix template

(cherry picked from commit 35f7f2314bcf74b4fd0a8ca10aaabf0de7063bb0)

update template

(cherry picked from commit 9e2dcfe0c7f6e7c2c685866ea83375fbacbc5032)

fix

(cherry picked from commit be51e32791550ddc21785effccb889228394b242)

use classes instead of data tags

(cherry picked from commit cd52d68c504f7e7435d156ae70cf4bde1dfe703e)

update template

(cherry picked from commit 9ed89fee6874b39ee3535fbde54a0a59f346ea2b)

clean up extra wip files

(cherry picked from commit a9f965a104baa966c184054638e935b011526278)

update wordlist

(cherry picked from commit f783656814e896aedd21acd1c8c87b4700c14469)

remove unused template

(cherry picked from commit cac894bd9c2b1262c9c006e5fddbcb742dc6d882)

improve script

(cherry picked from commit ca20ffd4922916616e0924d625652a815f27c35f)

fix template

(cherry picked from commit 752c61fda856fd5b244734636c036c8877e823b9)

fix standalone benchmark output path in template

(cherry picked from commit d8c04203b5ec0f6c2e2307f7890304a3dc5687be)

fix toc

(cherry picked from commit 8df42faf53488ef29f5a263d25032f3d35cd58ed)

update script to prevent flash of unstyled content

import a11y

(cherry picked from commit 46c852717f223a1d8744fab035807cebab4c5404)

add tabindex to wordlist

(cherry picked from commit 11492593f9692f5453045e7ec52c8f8ae9624ae9)

text

update script

* remove unused config option

* reorganize assets

* fix linting warning

* move js from data/ to extension/
2025-02-28 12:39:02 -05:00
Joseph Macaranas
e984954088 External CI: llvm-project updates (#4423)
- Add flang to built projects.
- Upgrade build VM to account for additional project.
- Temporarily ignore a test case for debug info, which is not a high priority in External CI.
2025-02-27 16:14:04 -05:00
Istvan Kiss
cd57bc8186 Fix white paper links 2025-02-27 15:29:06 +01:00
83 changed files with 1571 additions and 760 deletions
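
A recurring change in the diffs below is a new gfx90a leg added to each component's test-job matrix alongside the existing gfx942 leg. A minimal sketch of the pattern (the job name and the final pool reference are illustrative assumptions; the compare view only shows the matrix entries):

```yaml
jobs:
- job: component_testing        # hypothetical job name
  strategy:
    matrix:
      gfx942:
        JOB_GPU_TARGET: gfx942
        JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
      gfx90a:                   # new MI200-class test leg
        JOB_GPU_TARGET: gfx90a
        JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
  pool: $(JOB_TEST_POOL)        # assumption: pool is selected per matrix leg
  steps:
  - script: echo "running tests for $(JOB_GPU_TARGET)"
```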

View File

@@ -146,6 +146,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -41,7 +41,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -85,18 +86,19 @@ jobs:
parameters:
artifactName: amd
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
environment: amd
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
# environment: amd
# HIP with Nvidia backend
- job: hip_clr_combined_nvidia
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -139,8 +141,8 @@ jobs:
parameters:
artifactName: nvidia
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
environment: nvidia
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
# environment: nvidia
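
This file also shows the second recurring pattern: build jobs move off the named LOW_BUILD_POOL agent pool and onto an image-based pool selected through vmImage (#4463, "temporarily change from low pool to base pool"). A before/after sketch; what BASE_BUILD_POOL resolves to, e.g. an ubuntu-22.04 image name, is an assumption:

```yaml
# before: job runs in a named private agent pool
pool: ${{ variables.LOW_BUILD_POOL }}

# after: job runs on a hosted image; BASE_BUILD_POOL is assumed to hold
# a VM image name such as ubuntu-22.04
pool:
  vmImage: ${{ variables.BASE_BUILD_POOL }}
```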

View File

@@ -141,6 +141,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
@@ -196,7 +199,7 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
parameters:
componentName: MIOpen
testParameters: '-VV --output-on-failure --force-new-ctest-process --output-junit test_output.xml --exclude-regex test_rnn_seq_api'
testParameters: '--output-on-failure --force-new-ctest-process --output-junit test_output.xml --exclude-regex test_rnn_seq_api'
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
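
The MIOpen test step drops ctest's -VV flag ("make component CTests nonverbose", #4458). With the extra-verbose flag gone, full per-test output appears only for failures via --output-on-failure, keeping passing-test logs out of the CI transcript:

```yaml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
  parameters:
    componentName: MIOpen
    # -VV (extra-verbose) removed; failing tests still dump their full output
    testParameters: '--output-on-failure --force-new-ctest-process --output-junit test_output.xml --exclude-regex test_rnn_seq_api'
```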

View File

@@ -70,7 +70,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -107,11 +108,11 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
gpuTarget: $(JOB_GPU_TARGET)
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
# gpuTarget: $(JOB_GPU_TARGET)
- job: MIVisionX_testing
dependsOn: MIVisionX
@@ -127,6 +128,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -32,7 +32,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -56,9 +57,9 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
- job: ROCR_Runtime_testing
dependsOn: ROCR_Runtime
@@ -74,6 +75,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -24,7 +24,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -48,6 +49,6 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}

View File

@@ -60,7 +60,8 @@ jobs:
value: $(Agent.BuildDirectory)/rocm
- name: HIP_INC_DIR
value: $(Agent.BuildDirectory)/rocm
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -97,14 +98,14 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
gpuTarget: $(JOB_GPU_TARGET)
extraEnvVars:
- HIP_ROCCLR_HOME:::/home/user/workspace/rocm
- ROCM_PATH:::/home/user/workspace/rocm
- HIP_INC_DIR:::/home/user/workspace/rocm
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# gpuTarget: $(JOB_GPU_TARGET)
# extraEnvVars:
# - HIP_ROCCLR_HOME:::/home/user/workspace/rocm
# - ROCM_PATH:::/home/user/workspace/rocm
# - HIP_INC_DIR:::/home/user/workspace/rocm
- job: ROCmValidationSuite_testing
dependsOn: ROCmValidationSuite
@@ -120,6 +121,11 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
TEST_CONF_DIR: MI300X
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
TEST_CONF_DIR: MI210
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
@@ -138,7 +144,7 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
parameters:
componentName: ROCmValidationSuite
testExecutable: $(Agent.BuildDirectory)/rocm/bin/rvs -c $(Agent.BuildDirectory)/rocm/share/rocm-validation-suite/conf/MI300X/gst_single.conf
testExecutable: $(Agent.BuildDirectory)/rocm/bin/rvs -c $(Agent.BuildDirectory)/rocm/share/rocm-validation-suite/conf/$(TEST_CONF_DIR)/gst_single.conf
testParameters: ''
testDir: $(Agent.BuildDirectory)
testPublishResults: false
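
The RVS invocation is parameterized so each matrix leg picks the GPU-appropriate config directory: TEST_CONF_DIR resolves to MI300X for gfx942 and MI210 for gfx90a. A condensed sketch of the pieces shown above:

```yaml
strategy:
  matrix:
    gfx942:
      JOB_GPU_TARGET: gfx942
      TEST_CONF_DIR: MI300X   # MI300-series gst_single.conf
    gfx90a:
      JOB_GPU_TARGET: gfx90a
      TEST_CONF_DIR: MI210    # MI200-series gst_single.conf

# the test step then expands the per-leg variable:
testExecutable: $(Agent.BuildDirectory)/rocm/bin/rvs -c $(Agent.BuildDirectory)/rocm/share/rocm-validation-suite/conf/$(TEST_CONF_DIR)/gst_single.conf
```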

View File

@@ -35,7 +35,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -83,10 +84,10 @@ jobs:
echo $(basename "$whlFile") >> pipelineArtifacts.txt
fi
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
- job: Tensile_testing
timeoutInMinutes: 90
@@ -103,6 +104,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -35,7 +35,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -71,10 +72,10 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
gpuTarget: $(JOB_GPU_TARGET)
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# gpuTarget: $(JOB_GPU_TARGET)
- job: TransferBench_testing
dependsOn: TransferBench
@@ -90,6 +91,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -18,7 +18,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -36,9 +37,9 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
- job: amdsmi_testing
dependsOn: amdsmi
@@ -54,6 +55,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -23,7 +23,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -51,6 +52,6 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}

View File

@@ -456,6 +456,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -119,6 +119,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -25,7 +25,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -59,7 +60,7 @@ jobs:
./bin/test
workingDirectory: $(Build.SourcesDirectory)/test
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
environment: combined
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# environment: combined

View File

@@ -108,6 +108,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -29,7 +29,8 @@ jobs:
- name: ROCM_PATH
value: $(Agent.BuildDirectory)/rocm
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -53,8 +54,8 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
extraEnvVars:
- ROCM_PATH:::/home/user/workspace/rocm
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# extraEnvVars:
# - ROCM_PATH:::/home/user/workspace/rocm

View File

@@ -120,6 +120,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -158,6 +158,7 @@ jobs:
- deps
- job: hipBLASLt_testing
timeoutInMinutes: 120
dependsOn: hipBLASLt
condition: and(succeeded(), eq(variables.ENABLE_GFX942_TESTS, 'true'), not(containsValue(split(variables.DISABLED_GFX942_TESTS, ','), variables['Build.DefinitionName'])))
variables:
@@ -171,6 +172,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
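
hipBLASLt's test job gets an explicit two-hour budget (#4445); Azure DevOps jobs otherwise default to a 60-minute timeout, which long GPU test suites can exceed. The rocWMMA diff further down applies the same fix (90 to 120 minutes):

```yaml
- job: hipBLASLt_testing
  timeoutInMinutes: 120   # default job timeout is 60 minutes
  dependsOn: hipBLASLt
```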

View File

@@ -93,6 +93,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -47,7 +47,8 @@ jobs:
- template: /.azuredevops/variables-global.yml
- name: HIP_ROCCLR_HOME
value: $(Build.BinariesDirectory)/rocm
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -91,10 +92,10 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
gpuTarget: $(JOB_GPU_TARGET)
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# gpuTarget: $(JOB_GPU_TARGET)
- job: hipFFT_testing
dependsOn: hipFFT
@@ -110,6 +111,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -38,7 +38,8 @@ jobs:
- template: /.azuredevops/variables-global.yml
- name: HIP_ROCCLR_HOME
value: $(Build.BinariesDirectory)/rocm
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -78,12 +79,12 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
gpuTarget: $(JOB_GPU_TARGET)
extraEnvVars:
- HIP_ROCCLR_HOME:::/home/user/workspace/rocm
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# gpuTarget: $(JOB_GPU_TARGET)
# extraEnvVars:
# - HIP_ROCCLR_HOME:::/home/user/workspace/rocm
- job: hipRAND_testing
dependsOn: hipRAND
@@ -99,6 +100,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -50,7 +50,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -99,12 +100,12 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
gpuTarget: $(JOB_GPU_TARGET)
extraCopyDirectories:
- deps-install
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# gpuTarget: $(JOB_GPU_TARGET)
# extraCopyDirectories:
# - deps-install
- job: hipSOLVER_testing
dependsOn: hipSOLVER
@@ -120,6 +121,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -45,7 +45,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -94,11 +95,11 @@ jobs:
artifactName: testMatrices
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
environment: test
gpuTarget: $(JOB_GPU_TARGET)
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# environment: test
# gpuTarget: $(JOB_GPU_TARGET)
- job: hipSPARSE_testing
dependsOn: hipSPARSE
@@ -114,6 +115,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -105,6 +105,7 @@ jobs:
-DCMAKE_BUILD_TYPE=Release
-DCMAKE_CXX_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/amdclang++
-DCMAKE_C_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/amdclang
-DCMAKE_Fortran_COMPILER=f95
-DAMDGPU_TARGETS=$(JOB_GPU_TARGET)
-DTensile_LOGIC=
-DTensile_CPU_THREADS=
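
hipSPARSELt's configure step now pins the Fortran compiler (#4441). Per the commit message this accounts for recent llvm-project changes; a plausible reading (an assumption, not stated in the diff) is that CMake could otherwise auto-detect the newly built flang. A trimmed sketch of the configure flags:

```yaml
extraBuildFlags: >-
  -DCMAKE_BUILD_TYPE=Release
  -DCMAKE_CXX_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/amdclang++
  -DCMAKE_C_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/amdclang
  -DCMAKE_Fortran_COMPILER=f95
  -DAMDGPU_TARGETS=$(JOB_GPU_TARGET)
```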

View File

@@ -94,6 +94,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
@@ -113,7 +116,7 @@ jobs:
parameters:
componentName: hipTensor
testDir: '$(Agent.BuildDirectory)/rocm/bin/hiptensor'
testParameters: '-E ".*-extended" -VV --output-on-failure --force-new-ctest-process --output-junit test_output.xml'
testParameters: '-E ".*-extended" --output-on-failure --force-new-ctest-process --output-junit test_output.xml'
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}

View File

@@ -110,6 +110,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -29,7 +29,7 @@ jobs:
value: '$(Build.BinariesDirectory)/amdgcn/bitcode'
- name: HIP_PATH
value: '$(Agent.BuildDirectory)/rocm'
pool: ${{ variables.MEDIUM_BUILD_POOL }}
pool: ${{ variables.ULTRA_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -51,7 +51,7 @@ jobs:
extraBuildFlags: >-
-DCMAKE_PREFIX_PATH="$(Build.BinariesDirectory)/llvm;$(Build.BinariesDirectory)"
-DCMAKE_BUILD_TYPE=Release
-DLLVM_ENABLE_PROJECTS=clang;lld;clang-tools-extra;mlir
-DLLVM_ENABLE_PROJECTS=clang;lld;clang-tools-extra;mlir;flang
-DLLVM_ENABLE_RUNTIMES=compiler-rt;libunwind;libcxx;libcxxabi
-DCLANG_ENABLE_AMDCLANG=ON
-DLLVM_TARGETS_TO_BUILD=AMDGPU;X86
@@ -85,7 +85,7 @@ jobs:
componentName: check-llvm
testDir: 'llvm/build'
testExecutable: './bin/llvm-lit'
testParameters: '-q --xunit-xml-output=llvm_test_output.xml ./test'
testParameters: '-q --xunit-xml-output=llvm_test_output.xml --filter-out="live-debug-values-spill-tracking" ./test'
testOutputFile: llvm_test_output.xml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
parameters:
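
Two related llvm-project changes land here (#4423): flang joins LLVM_ENABLE_PROJECTS (with the build moved from the medium to the ultra pool to absorb the extra work), and the check-llvm run excludes one low-priority debug-info case via llvm-lit's --filter-out regex option:

```yaml
# build: flang added to the project list
extraBuildFlags: >-
  -DLLVM_ENABLE_PROJECTS=clang;lld;clang-tools-extra;mlir;flang
  -DLLVM_ENABLE_RUNTIMES=compiler-rt;libunwind;libcxx;libcxxabi

# test: skip lit tests whose names match the regex
testParameters: '-q --xunit-xml-output=llvm_test_output.xml --filter-out="live-debug-values-spill-tracking" ./test'
```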

View File

@@ -123,6 +123,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -134,6 +134,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -66,7 +66,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -157,14 +158,14 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
registerJPEGPackages: true
gpuTarget: $(JOB_GPU_TARGET)
extraCopyDirectories:
- /opt/libjpeg-turbo
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
# registerJPEGPackages: true
# gpuTarget: $(JOB_GPU_TARGET)
# extraCopyDirectories:
# - /opt/libjpeg-turbo
- job: rocAL_testing
dependsOn: rocAL
@@ -186,6 +187,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- task: Bash@3
displayName: 'Register libjpeg-turbo packages'

View File

@@ -53,7 +53,8 @@ jobs:
- template: /.azuredevops/variables-global.yml
- name: HIP_ROCCLR_HOME
value: $(Build.BinariesDirectory)/rocm
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -94,12 +95,12 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
gpuTarget: $(JOB_GPU_TARGET)
extraEnvVars:
- HIP_ROCCLR_HOME:::/home/user/workspace/rocm
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# gpuTarget: $(JOB_GPU_TARGET)
# extraEnvVars:
# - HIP_ROCCLR_HOME:::/home/user/workspace/rocm
- job: rocALUTION_testing
dependsOn: rocALUTION
@@ -115,6 +116,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -144,6 +144,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -45,7 +45,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -71,10 +72,10 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
registerROCmPackages: true
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# registerROCmPackages: true
- job: rocDecode_testing
dependsOn: rocDecode
@@ -92,6 +93,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -112,6 +112,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -40,7 +40,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -74,11 +75,11 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
gpuTarget: $(JOB_GPU_TARGET)
registerROCmPackages: true
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# gpuTarget: $(JOB_GPU_TARGET)
# registerROCmPackages: true
- job: rocJPEG_testing
dependsOn: rocJPEG
@@ -96,6 +97,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -9,21 +9,25 @@ parameters:
type: object
default:
- cmake
- ninja-build
- git
- python3-pip
- libdrm-dev
- libstdc++-12-dev
- ninja-build
- python3-pip
- name: pipModules
type: object
default:
- hip-python --extra-index-url https://test.pypi.org/simple
- ml_dtypes
- numpy
- tomli
- scipy
- name: rocmDependencies
type: object
default:
- clr
- llvm-project
- rocm-cmake
- clr
- rocminfo
- rocprofiler-register
- ROCR-Runtime
@@ -56,6 +60,7 @@ jobs:
-DCMAKE_CXX_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/clang++
-DCMAKE_C_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/clang
-DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm
-DROCM_PATH=$(Agent.BuildDirectory)/rocm
-DBUILD_FAT_LIBROCKCOMPILER=1
-GNinja
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
@@ -81,6 +86,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
@@ -111,6 +119,7 @@ jobs:
-DCMAKE_CXX_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/clang++
-DCMAKE_C_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/clang
-DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm
-DROCM_PATH=$(Agent.BuildDirectory)/rocm
-DAMDGPU_TARGETS=$(JOB_GPU_TARGET)
-DROCM_TEST_CHIPSET=$(JOB_GPU_TARGET)
-GNinja

View File

@@ -92,6 +92,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -22,26 +22,28 @@ parameters:
- name: pipModules
type: object
default:
- hip-python --extra-index-url https://test.pypi.org/simple
- numpy
- pybind11
- name: rocmDependencies
type: object
default:
- rocm-cmake
- llvm-project
- ROCR-Runtime
- clr
- rocminfo
- rocm-core
- rocprofiler-register
- llvm-project
- rocDecode
- rocm-cmake
- rocm-core
- rocminfo
- ROCR-Runtime
- rocprofiler-register
jobs:
- job: rocPyDecode
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -55,11 +57,6 @@ jobs:
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
- task: Bash@3
displayName: 'pip install hip-python'
inputs:
targetType: inline
script: pip install -i https://test.pypi.org/simple hip-python
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
parameters:
@@ -141,12 +138,12 @@ jobs:
echo $(basename "$whlFile") >> pipelineArtifacts.txt
fi
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
gpuTarget: $(JOB_GPU_TARGET)
optSymLink: true
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
# gpuTarget: $(JOB_GPU_TARGET)
# optSymLink: true
- job: rocPyDecode_testing
dependsOn: rocPyDecode
@@ -164,6 +161,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- task: Bash@3
displayName: Ensure pybind11-dev is not installed
@@ -180,11 +180,6 @@ jobs:
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
- task: Bash@3
displayName: 'pip install hip-python'
inputs:
targetType: inline
script: pip install -i https://test.pypi.org/simple hip-python
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- task: DownloadPipelineArtifact@2
displayName: 'Download Pipeline Wheel Files'
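
rocPyDecode previously installed hip-python in a dedicated Bash step; this diff folds it into the template's pipModules list instead, carrying the test.pypi.org index inline so the shared 'pip install ...' step handles it:

```yaml
- name: pipModules
  type: object
  default:
    # hip-python is published on test.pypi.org, so the entry carries its own
    # --extra-index-url rather than needing a separate pip install task
    - hip-python --extra-index-url https://test.pypi.org/simple
    - numpy
    - pybind11
```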

View File

@@ -38,7 +38,8 @@ jobs:
- template: /.azuredevops/variables-global.yml
- name: HIP_ROCCLR_HOME
value: $(Build.BinariesDirectory)/rocm
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -75,12 +76,12 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
gpuTarget: $(JOB_GPU_TARGET)
extraEnvVars:
- HIP_ROCCLR_HOME:::/home/user/workspace/rocm
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# gpuTarget: $(JOB_GPU_TARGET)
# extraEnvVars:
# - HIP_ROCCLR_HOME:::/home/user/workspace/rocm
- job: rocRAND_testing
dependsOn: rocRAND
@@ -96,6 +97,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -130,6 +130,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -125,6 +125,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -97,6 +97,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -98,7 +98,7 @@ jobs:
gpuTarget: $(JOB_GPU_TARGET)
- job: rocWMMA_testing
timeoutInMinutes: 90
timeoutInMinutes: 120
dependsOn: rocWMMA
condition: and(succeeded(), eq(variables.ENABLE_GFX942_TESTS, 'true'), not(containsValue(split(variables.DISABLED_GFX942_TESTS, ','), variables['Build.DefinitionName'])))
variables:
@@ -112,6 +112,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -8,7 +8,6 @@ parameters:
- name: aptPackages
type: object
default:
- cmake
- doxygen
- doxygen-doc
- ninja-build
@@ -18,14 +17,17 @@ parameters:
type: object
default:
- cget
- cmake==3.20.5
- ninja
- rocm-docs-core
jobs:
- job: rocm_cmake
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -33,26 +35,34 @@ jobs:
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
- task: Bash@3
displayName: Add CMake to PATH
inputs:
targetType: inline
script: echo "##vso[task.prependpath]$(python3 -m site --user-base)/bin"
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
parameters:
checkoutRepo: ${{ parameters.checkoutRepo }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
# extra steps for ctest suite
- script: |
python -m pip install -r $(Build.SourcesDirectory)/docs/requirements.txt
python -m pip install -r $(Build.SourcesDirectory)/test/docsphinx/docs/.sphinx/requirements.txt
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
displayName: "ctest setup"
- task: Bash@3
displayName: CTest setup
inputs:
targetType: inline
script: |
python -m pip install -r $(Build.SourcesDirectory)/docs/requirements.txt
python -m pip install -r $(Build.SourcesDirectory)/test/docsphinx/docs/.sphinx/requirements.txt
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
parameters:
componentName: rocm-cmake
testParameters: '-E "pass-version-parent" --output-on-failure --force-new-ctest-process --output-junit test_output.xml'
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
environment: combined
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
# environment: combined
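
The rocm-cmake fixes (#4433) move cmake from the apt package list to pip with a pinned version, then prepend the pip user-site bin directory to PATH so the pinned binary shadows the system one. Why 3.20.5 specifically is not stated in the diff (an assumption would be that the ctest suite targets that minimum version):

```yaml
pipModules:
  - cget
  - cmake==3.20.5   # pinned; installed to the pip user site
  - ninja
  - rocm-docs-core

steps:
- task: Bash@3
  displayName: Add CMake to PATH
  inputs:
    targetType: inline
    # ##vso[task.prependpath] puts the user-site bin dir first on PATH for
    # all subsequent steps in the job
    script: echo "##vso[task.prependpath]$(python3 -m site --user-base)/bin"
```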

View File

@@ -16,7 +16,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -40,6 +41,6 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}

View File

@@ -135,6 +135,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -40,7 +40,8 @@ jobs:
value: $(Agent.BuildDirectory)/rocm
- name: ROCR_LIB_DIR
value: $(Agent.BuildDirectory)/rocm
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -66,13 +67,13 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
extraEnvVars:
- ROCR_INC_DIR:::/home/user/workspace/rocm
- ROCR_LIB_DIR:::/home/user/workspace/rocm
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
# extraEnvVars:
# - ROCR_INC_DIR:::/home/user/workspace/rocm
# - ROCR_LIB_DIR:::/home/user/workspace/rocm
- job: rocm_bandwidth_test_testing
dependsOn: rocm_bandwidth_test
@@ -88,6 +89,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -18,7 +18,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -37,9 +38,9 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
- job: rocm_smi_lib_testing
dependsOn: rocm_smi_lib
@@ -55,7 +56,13 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/local-artifact-download.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/gpu-diagnostics.yml

View File

@@ -28,7 +28,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -53,10 +54,10 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
registerROCmPackages: true
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# registerROCmPackages: true
- job: rocminfo_testing
dependsOn: rocminfo
@@ -72,6 +73,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -52,7 +52,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -84,11 +85,11 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
gpuTarget: $(JOB_GPU_TARGET)
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
# gpuTarget: $(JOB_GPU_TARGET)
- job: rocprofiler_compute_testing
timeoutInMinutes: 120
@@ -107,6 +108,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
@@ -157,7 +161,7 @@ jobs:
parameters:
componentName: rocprofiler-compute
testDir: $(Build.BinariesDirectory)/libexec/rocprofiler-compute
testExecutable: ROCPROFCOMPUTE_ARCH_OVERRIDE="MI300X" ROCM_PATH=$(Agent.BuildDirectory)/rocm ctest
testExecutable: ROCM_PATH=$(Agent.BuildDirectory)/rocm ctest
- task: Bash@3
displayName: Remove ROCm binaries from PATH
condition: always()

View File

@@ -16,7 +16,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -44,7 +45,7 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
environment: combined
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# environment: combined

View File

@@ -50,7 +50,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -98,12 +99,12 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
gpuTarget: $(JOB_GPU_TARGET)
registerROCmPackages: true
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
# gpuTarget: $(JOB_GPU_TARGET)
# registerROCmPackages: true
- job: rocprofiler_sdk_testing
dependsOn: rocprofiler_sdk
@@ -119,6 +120,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -158,6 +158,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -114,6 +114,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -40,7 +40,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
steps:
@@ -66,9 +67,9 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
- job: rocr_debug_agent_testing
dependsOn: rocr_debug_agent
@@ -84,6 +85,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -43,7 +43,8 @@ jobs:
- template: /.azuredevops/variables-global.yml
- name: HIP_ROCCLR_HOME
value: $(Build.BinariesDirectory)/rocm
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -84,12 +85,12 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
gpuTarget: $(JOB_GPU_TARGET)
registerROCmPackages: true
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
# gpuTarget: $(JOB_GPU_TARGET)
# registerROCmPackages: true
- job: roctracer_testing
dependsOn: roctracer
@@ -105,6 +106,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -50,7 +50,8 @@ jobs:
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool: ${{ variables.LOW_BUILD_POOL }}
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
@@ -91,11 +92,11 @@ jobs:
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
gpuTarget: $(JOB_GPU_TARGET)
# - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
# parameters:
# aptPackages: ${{ parameters.aptPackages }}
# pipModules: ${{ parameters.pipModules }}
# gpuTarget: $(JOB_GPU_TARGET)
- job: rpp_testing
dependsOn: rpp
@@ -113,6 +114,9 @@ jobs:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
gfx90a:
JOB_GPU_TARGET: gfx90a
JOB_TEST_POOL: ${{ variables.GFX90A_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:

View File

@@ -32,6 +32,9 @@ parameters:
- name: installEnabled
type: boolean
default: true
- name: printDiskSpace
type: boolean
default: true
steps:
# create workingDirectory if it does not exist and change into it
@@ -44,8 +47,9 @@ steps:
cmakeArgs: -DCMAKE_INSTALL_PREFIX=${{ parameters.installDir }} ${{ parameters.extraBuildFlags }} ${{ parameters.cmakeSourceDir }}
${{ else }}:
cmakeArgs: ${{ parameters.extraBuildFlags }} ..
- script: df -h
displayName: Disk space before build
- ${{ if parameters.printDiskSpace }}:
- script: df -h
displayName: Disk space before build
# equivalent to running make $cmakeTargetDir from $cmakeBuildDir
# i.e., cd $cmakeBuildDir; make $cmakeTargetDir
- task: CMake@1
@@ -57,8 +61,9 @@ steps:
${{ else }}:
cmakeArgs: '--build ${{ parameters.cmakeTargetDir }} --target ${{ parameters.customBuildTarget }} ${{ parameters.multithreadFlag }}'
retryCountOnTaskFailure: 10
- script: df -h
displayName: Disk space after build
- ${{ if parameters.printDiskSpace }}:
- script: df -h
displayName: Disk space after build
# equivalent to running make $cmakeTarget from $cmakeBuildDir
# e.g., make install
- ${{ if eq(parameters.installEnabled, true) }}:
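
The build-cmake template gains a printDiskSpace boolean (#4457). Because the guard is a compile-time template expression, opting out removes the df -h steps from the generated job entirely rather than skipping them at runtime:

```yaml
parameters:
- name: printDiskSpace
  type: boolean
  default: true

steps:
- ${{ if parameters.printDiskSpace }}:
  - script: df -h
    displayName: Disk space before build
```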

View File

@@ -34,7 +34,12 @@ steps:
displayName: 'sudo apt-get update'
inputs:
targetType: inline
script: sudo DEBIAN_FRONTEND=noninteractive apt-get --yes update
script: |
echo "deb http://archive.ubuntu.com/ubuntu/ jammy main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list.d/default.list
echo "deb http://archive.ubuntu.com/ubuntu/ jammy-updates main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list.d/default.list
echo "deb http://archive.ubuntu.com/ubuntu/ jammy-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list.d/default.list
echo "deb http://archive.ubuntu.com/ubuntu/ jammy-security main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list.d/default.list
sudo DEBIAN_FRONTEND=noninteractive apt-get --yes update
- task: Bash@3
displayName: 'sudo apt-get upgrade'
inputs:
@@ -50,7 +55,7 @@ steps:
displayName: 'sudo apt-get install ...'
inputs:
targetType: inline
script: sudo DEBIAN_FRONTEND=noninteractive apt-get --yes install ${{ join(' ', parameters.aptPackages) }}
script: sudo DEBIAN_FRONTEND=noninteractive apt-get --yes --fix-missing install ${{ join(' ', parameters.aptPackages) }}
- ${{ if gt(length(parameters.pipModules), 0) }}:
- task: Bash@3
displayName: 'pip install ...'

View File

@@ -222,13 +222,13 @@ parameters:
hasGpuTarget: false
rocm-examples:
pipelineId: $(ROCM_EXAMPLES_PIPELINE_ID)
stagingBranch: develop
mainlineBranch: develop
stagingBranch: amd-staging
mainlineBranch: amd-mainline
hasGpuTarget: true
rocminfo:
pipelineId: $(ROCMINFO_PIPELINE_ID)
stagingBranch: amd-staging
mainlineBranch: amd-master
mainlineBranch: amd-mainline
hasGpuTarget: false
rocMLIR:
pipelineId: $(ROCMLIR_PIPELINE_ID)
@@ -288,7 +288,7 @@ parameters:
ROCR-Runtime:
pipelineId: $(ROCR_RUNTIME_PIPELINE_ID)
stagingBranch: amd-staging
mainlineBranch: amd-master
mainlineBranch: amd-mainline
hasGpuTarget: false
rocRAND:
pipelineId: $(ROCRAND_PIPELINE_ID)

View File

@@ -107,7 +107,7 @@ steps:
# dynamically write to a Dockerfile
# first is to do base setup of users, groups
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Create start of Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -126,7 +126,7 @@ steps:
# for test jobs, setup GPU-related users and group
- ${{ if eq(parameters.environment, 'test') }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: GPU setup of Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -137,7 +137,7 @@ steps:
# now install a common set of packages through apt
- ${{ if gt(length(parameters.baseAptPackages), 0) }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Base Apt Packages to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -146,7 +146,7 @@ steps:
# iterate through possible apt repos that might need to be added to the docker container
- ${{ if eq(parameters.registerROCmPackages, true) }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Register ROCm packages to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -160,7 +160,7 @@ steps:
echo "RUN DEBIAN_FRONTEND=noninteractive apt-get --yes update" >> Dockerfile
- ${{ if eq(parameters.registerCUDAPackages, true) }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Register CUDA packages to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -172,7 +172,7 @@ steps:
echo 'RUN DEBIAN_FRONTEND=noninteractive apt-get --yes update' >> Dockerfile
- ${{ if eq(parameters.registerJPEGPackages, true) }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Register libjpeg-turbo packages to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -185,7 +185,7 @@ steps:
# install AOCL to docker container, if needed
- ${{ if eq(parameters.installAOCL, true) }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: aocl install to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -197,7 +197,7 @@ steps:
# since apt repo list is updated, install the extra apt packages
- ${{ if gt(length(parameters.aptPackages), 0) }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Extra Apt Packages to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -206,7 +206,7 @@ steps:
# install latest cmake to docker container, if needed
- ${{ if eq(parameters.installLatestCMake, true) }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: latest cmake install to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -216,7 +216,7 @@ steps:
echo "RUN pip install cmake --upgrade" >> Dockerfile
# setup workspace where binaries, sources, and dependencies from the job will be copied to
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Workspace setup of Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -228,7 +228,7 @@ steps:
# pip install is done here as non-root
- ${{ if gt(length(parameters.pipModules), 0) }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Extra Python Modules to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -236,7 +236,7 @@ steps:
script: echo "RUN pip install -v ${{ join(' ', parameters.pipModules) }}" >> Dockerfile
# copy common directories
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Copy base directories to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -254,7 +254,7 @@ steps:
# copy extra directories, if applicable to the job
- ${{ each extraCopyDirectory in parameters.extraCopyDirectories }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Copy ${{ extraCopyDirectory }} to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -265,7 +265,7 @@ steps:
fi
# setup ldconfig
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: ldconfig to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -279,7 +279,7 @@ steps:
# create /opt/rocm symbolic link, if needed
- ${{ if eq(parameters.optSymLink, true) }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: /opt/rocm symbolic link to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -290,7 +290,7 @@ steps:
# set environment variables needed for some python-based components
- ${{ if eq(parameters.pythonEnvVars, true) }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: python environment variables
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -303,7 +303,7 @@ steps:
# add to PATH environment variable
- ${{ if ne(parameters.extraPaths, '') }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Add to PATH in Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
@@ -313,21 +313,21 @@ steps:
# use ::: as delimiter to allow for colons to be in the environment variable values
- ${{ each extraEnvVar in parameters.extraEnvVars }}:
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Set ${{ extraEnvVar }} to Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
targetType: inline
script: echo "ENV ${{ split(extraEnvVar, ':::')[0] }}='${{ split(extraEnvVar, ':::')[1] }}'" >> Dockerfile
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: Print Dockerfile
inputs:
workingDirectory: $(Pipeline.Workspace)
targetType: inline
script: cat Dockerfile
- task: Docker@2
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
inputs:
containerRegistry: 'ContainerService'
${{ if ne(parameters.gpuTarget, '') }}:
@@ -337,7 +337,7 @@ steps:
Dockerfile: '$(Pipeline.Workspace)/Dockerfile'
buildContext: '$(Pipeline.Workspace)'
- task: Bash@3
condition: or(failed(), ${{ eq(parameters.forceDockerCreation, true) }})
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
displayName: "!! Docker Image URL !!"
inputs:
workingDirectory: $(Pipeline.Workspace)
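In effect, every Dockerfile-related step now reruns on failure only when the job's GPU target is absent from the new DOCKER_SKIP_GFX variable (set to gfx90a in variables-global.yml later in this change), while forceDockerCreation still forces the steps unconditionally. An annotated restatement of the expression, using the two targets touched by this change as worked examples:

# with DOCKER_SKIP_GFX=gfx90a
condition: or(and(failed(), not(contains(variables['DOCKER_SKIP_GFX'], variables['JOB_GPU_TARGET']))), ${{ eq(parameters.forceDockerCreation, true) }})
# JOB_GPU_TARGET=gfx90a: a failure does not trigger the Docker fallback
# JOB_GPU_TARGET=gfx942: a failure still rebuilds the container image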

View File

@@ -77,11 +77,11 @@ steps:
dependencies_rows=$(cat $manifest_json | \
jq -r '
.dependencies[] |
"<tr><td>" + .buildNumber + "</td>" +
"<td><a href=\"https://dev.azure.com/ROCm-CI/ROCm-CI/_build/results?buildId=" + .buildId + "\">" + .buildId + "</a></td>" +
"<td><a href=\"" + .repoUrl + "\">" + .repoName + "</a></td>" +
"<td><a href=\"" + .repoUrl + "/tree/" + .repoRef + "\">" + .repoRef + "</a></td>" +
"<td><a href=\"" + .repoUrl + "/commit/" + .repoVersion + "\">" + .repoVersion + "</a></td></tr>"
')
dependencies_rows=$(echo $dependencies_rows)
@@ -130,6 +130,11 @@ steps:
</html>
EOF
sed -i -e 's|</tr> <tr>|</tr>\n<tr>|g' \
-e 's|</td><td>|</td>\n <td>|g' \
-e 's|<tr><td>|<tr>\n <td>|g' \
-e 's|</td></tr>|</td>\n</tr>|g' $manifest_html
cat $manifest_html
- task: PublishHtmlReport@1
displayName: Publish manifest.html

View File

@@ -10,7 +10,7 @@ parameters:
default: 'ctest'
- name: testParameters
type: string
default: '-VV --output-on-failure --force-new-ctest-process --output-junit test_output.xml'
default: '--output-on-failure --force-new-ctest-process --output-junit test_output.xml'
- name: testOutputFile
type: string
default: test_output.xml
@@ -33,7 +33,6 @@ parameters:
- aomp
- HIPIFY
- MIVisionX
- rocm-cmake
- rocm_smi_lib
- rocprofiler-sdk
- roctracer
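Dropping -VV makes the default CTest run nonverbose, while --output-on-failure still prints full logs for failing tests. A component that needs the old verbosity back could override the parameter per job; a sketch, with an assumed template filename:

- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml  # hypothetical path
  parameters:
    testParameters: '-VV --output-on-failure --force-new-ctest-process --output-junit test_output.xml'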

View File

@@ -27,14 +27,18 @@ variables:
value: rocm-ci_larger_base_disk_pool
- name: GFX942_TEST_POOL
value: gfx942_test_pool
- name: GFX90A_TEST_POOL
value: gfx90a_test_pool
- name: LATEST_RELEASE_VERSION
value: 6.3.3
value: 6.3.4
- name: REPO_RADEON_VERSION
value: 6.3.3
value: 6.3.4
- name: NEXT_RELEASE_VERSION
value: 6.4.0
- name: LATEST_RELEASE_TAG
value: rocm-6.3.3
value: rocm-6.3.4
- name: DOCKER_SKIP_GFX
value: gfx90a
- name: AMDMIGRAPHX_GFX942_TEST_PIPELINE_ID
value: 197
- name: AMDMIGRAPHX_PIPELINE_ID

.gitignore
View File

@@ -11,6 +11,7 @@ _toc.yml
docBin/
_doxygen/
_readthedocs/
__pycache__/
# avoid duplicating contributing.md due to conf.py
docs/CHANGELOG.md

View File

@@ -481,6 +481,7 @@ ZenDNN
accuracies
activations
addr
ai
alloc
allocatable
allocator
@@ -546,6 +547,7 @@ cTDP
dataset
datasets
dataspace
datatemplate
datatype
datatypes
dbgapi
@@ -574,6 +576,7 @@ el
embeddings
enablement
encodings
endfor
endpgm
enqueue
env
@@ -694,6 +697,7 @@ pageable
pallas
parallelization
parallelizing
param
parameterization
passthrough
perfcounter
@@ -811,6 +815,7 @@ supercomputing
symlink
symlinks
sys
tabindex
td
tensorfloat
th
@@ -856,6 +861,7 @@ vectorizes
virtualize
virtualized
vjxb
vllm
voxel
walkthrough
walkthroughs

View File

@@ -66,11 +66,10 @@ project-specific steps. Refer to each repository's PR process for any additional
during our release cycle, as coordinated by the maintainer
* We'll inform you once your change is committed
:::{important}
By creating a PR, you agree to allow your contribution to be licensed under the
terms of the LICENSE.txt file in the corresponding repository. Different repositories may use different
licenses.
:::
> [!IMPORTANT]
> By creating a PR, you agree to allow your contribution to be licensed under the
> terms of the LICENSE.txt file in the corresponding repository. Different repositories may use different
> licenses.
You can look up each license on the [ROCm licensing](https://rocm.docs.amd.com/en/latest/about/license.html) page.

View File

@@ -4,6 +4,8 @@
:description: JAX compatibility
:keywords: GPU, JAX compatibility
.. version-set:: rocm_version latest
*******************************************************************************
JAX compatibility
*******************************************************************************
@@ -119,7 +121,8 @@ Critical ROCm libraries for JAX
The functionality of JAX with ROCm is determined by its underlying library
dependencies. These critical ROCm components affect the capabilities,
performance, and feature set available to developers.
performance, and feature set available to developers. The versions described
are available in ROCm :version:`rocm_version`.
.. list-table::
:header-rows: 1
@@ -129,7 +132,7 @@ performance, and feature set available to developers.
- Purpose
- Used in
* - `hipBLAS <https://github.com/ROCm/hipBLAS>`_
- 2.3.0
- :version-ref:`hipBLAS rocm_version`
- Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for
matrix and vector operations.
- Matrix multiplication in ``jax.numpy.matmul``, ``jax.lax.dot`` and
@@ -138,7 +141,7 @@ performance, and feature set available to developers.
``jax.numpy.einsum`` with matrix-multiplication patterns algebra
operations.
* - `hipBLASLt <https://github.com/ROCm/hipBLASLt>`_
- 0.10.0
- :version-ref:`hipBLASLt rocm_version`
- hipBLASLt is an extension of hipBLAS, providing additional
features like epilogues fused into the matrix multiplication kernel or
use of integer tensor cores.
@@ -147,7 +150,7 @@ performance, and feature set available to developers.
operations, mixed-precision support, and hardware-specific
optimizations.
* - `hipCUB <https://github.com/ROCm/hipCUB>`_
- 3.3.0
- :version-ref:`hipCUB rocm_version`
- Provides a C++ template library for parallel algorithms for reduction,
scan, sort and select.
- Reduction functions (``jax.numpy.sum``, ``jax.numpy.mean``,
@@ -155,23 +158,23 @@ performance, and feature set available to developers.
(``jax.numpy.cumsum``, ``jax.numpy.cumprod``) and sorting
(``jax.numpy.sort``, ``jax.numpy.argsort``).
* - `hipFFT <https://github.com/ROCm/hipFFT>`_
- 1.0.17
- :version-ref:`hipFFT rocm_version`
- Provides GPU-accelerated Fast Fourier Transform (FFT) operations.
- Used in functions like ``jax.numpy.fft``.
* - `hipRAND <https://github.com/ROCm/hipRAND>`_
- 2.11.0
- :version-ref:`hipRAND rocm_version`
- Provides fast random number generation for GPUs.
- The ``jax.random.uniform``, ``jax.random.normal``,
``jax.random.randint`` and ``jax.random.split``.
* - `hipSOLVER <https://github.com/ROCm/hipSOLVER>`_
- 2.3.0
- :version-ref:`hipSOLVER rocm_version`
- Provides GPU-accelerated solvers for linear systems, eigenvalues, and
singular value decompositions (SVD).
- Solving linear systems (``jax.numpy.linalg.solve``), matrix
factorizations, SVD (``jax.numpy.linalg.svd``) and eigenvalue problems
(``jax.numpy.linalg.eig``).
* - `hipSPARSE <https://github.com/ROCm/hipSPARSE>`_
- 3.1.2
- :version-ref:`hipSPARSE rocm_version`
- Accelerates operations on sparse matrices, such as sparse matrix-vector
or matrix-matrix products.
- Sparse matrix multiplication (``jax.numpy.matmul``), sparse
@@ -179,28 +182,28 @@ performance, and feature set available to developers.
(``jax.experimental.sparse.dot``), sparse linear system solvers and
sparse data handling.
* - `hipSPARSELt <https://github.com/ROCm/hipSPARSELt>`_
- 0.2.2
- :version-ref:`hipSPARSELt rocm_version`
- Accelerates operations on sparse matrices, such as sparse matrix-vector
or matrix-matrix products.
- Sparse matrix multiplication (``jax.numpy.matmul``), sparse
matrix-vector and matrix-matrix products
(``jax.experimental.sparse.dot``) and sparse linear system solvers.
* - `MIOpen <https://github.com/ROCm/MIOpen>`_
- 3.3.0
- :version-ref:`MIOpen rocm_version`
- Optimized for deep learning primitives such as convolutions, pooling,
normalization, and activation functions.
- Speeds up convolutional neural networks (CNNs), recurrent neural
networks (RNNs), and other layers. Used in operations like
``jax.nn.conv``, ``jax.nn.relu``, and ``jax.nn.batch_norm``.
* - `RCCL <https://github.com/ROCm/rccl>`_
- 2.21.5
- :version-ref:`RCCL rocm_version`
- Optimized for multi-GPU communication for operations like all-reduce,
broadcast, and scatter.
- Distribute computations across multiple GPUs with ``pmap`` and
``jax.distributed``. XLA automatically uses RCCL when executing
operations across multiple GPUs on AMD hardware.
* - `rocThrust <https://github.com/ROCm/rocThrust>`_
- 3.3.0
- :version-ref:`rocThrust rocm_version`
- Provides a C++ template library for parallel algorithms like sorting,
reduction, and scanning.
- Reduction operations like ``jax.numpy.sum``, ``jax.pmap`` for

View File

@@ -4,6 +4,8 @@
:description: PyTorch compatibility
:keywords: GPU, PyTorch compatibility
.. version-set:: rocm_version latest
********************************************************************************
PyTorch compatibility
********************************************************************************
@@ -200,7 +202,8 @@ Critical ROCm libraries for PyTorch
The functionality of PyTorch with ROCm is determined by its underlying library
dependencies. These critical ROCm components affect the capabilities,
performance, and feature set available to developers.
performance, and feature set available to developers. The versions described
are available in ROCm :version:`rocm_version`.
.. list-table::
:header-rows: 1
@@ -210,28 +213,28 @@ performance, and feature set available to developers.
- Purpose
- Used in
* - `Composable Kernel <https://github.com/ROCm/composable_kernel>`_
- 1.1.0
- :version-ref:`"Composable Kernel" rocm_version`
- Enables faster execution of core operations like matrix multiplication
(GEMM), convolutions and transformations.
- Speeds up ``torch.permute``, ``torch.view``, ``torch.matmul``,
``torch.mm``, ``torch.bmm``, ``torch.nn.Conv2d``, ``torch.nn.Conv3d``
and ``torch.nn.MultiheadAttention``.
* - `hipBLAS <https://github.com/ROCm/hipBLAS>`_
- 2.3.0
- :version-ref:`hipBLAS rocm_version`
- Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for
matrix and vector operations.
- Supports operations like matrix multiplication, matrix-vector products,
and tensor contractions. Utilized in both dense and batched linear
algebra operations.
* - `hipBLASLt <https://github.com/ROCm/hipBLASLt>`_
- 0.10.0
- :version-ref:`hipBLASLt rocm_version`
- hipBLASLt is an extension of the hipBLAS library, providing additional
features like epilogues fused into the matrix multiplication kernel or
use of integer tensor cores.
- It accelerates operations like ``torch.matmul``, ``torch.mm``, and the
matrix multiplications used in convolutional and linear layers.
* - `hipCUB <https://github.com/ROCm/hipCUB>`_
- 3.3.0
- :version-ref:`hipCUB rocm_version`
- Provides a C++ template library for parallel algorithms for reduction,
scan, sort and select.
- Supports operations like ``torch.sum``, ``torch.cumsum``, ``torch.sort``
@@ -239,93 +242,93 @@ performance, and feature set available to developers.
irregular shapes often involve scanning, sorting, and filtering, which
hipCUB handles efficiently.
* - `hipFFT <https://github.com/ROCm/hipFFT>`_
- 1.0.17
- :version-ref:`hipFFT rocm_version`
- Provides GPU-accelerated Fast Fourier Transform (FFT) operations.
- Used in functions like the ``torch.fft`` module.
* - `hipRAND <https://github.com/ROCm/hipRAND>`_
- 2.11.0
- :version-ref:`hipRAND rocm_version`
- Provides fast random number generation for GPUs.
- The ``torch.rand``, ``torch.randn`` and stochastic layers like
``torch.nn.Dropout``.
* - `hipSOLVER <https://github.com/ROCm/hipSOLVER>`_
- 2.3.0
- :version-ref:`hipSOLVER rocm_version`
- Provides GPU-accelerated solvers for linear systems, eigenvalues, and
singular value decompositions (SVD).
- Supports functions like ``torch.linalg.solve``,
``torch.linalg.eig``, and ``torch.linalg.svd``.
* - `hipSPARSE <https://github.com/ROCm/hipSPARSE>`_
- 3.1.2
- :version-ref:`hipSPARSE rocm_version`
- Accelerates operations on sparse matrices, such as sparse matrix-vector
or matrix-matrix products.
- Sparse tensor operations ``torch.sparse``.
* - `hipSPARSELt <https://github.com/ROCm/hipSPARSELt>`_
- 0.2.2
- :version-ref:`hipSPARSELt rocm_version`
- Accelerates operations on sparse matrices, such as sparse matrix-vector
or matrix-matrix products.
- Sparse tensor operations ``torch.sparse``.
* - `hipTensor <https://github.com/ROCm/hipTensor>`_
- 1.4.0
- :version-ref:`hipTensor rocm_version`
- Optimized for high-performance tensor operations, such as contractions.
- Accelerates tensor algebra, especially in deep learning and scientific
computing.
* - `MIOpen <https://github.com/ROCm/MIOpen>`_
- 3.3.0
- :version-ref:`MIOpen rocm_version`
- Optimizes deep learning primitives such as convolutions, pooling,
normalization, and activation functions.
- Speeds up convolutional neural networks (CNNs), recurrent neural
networks (RNNs), and other layers. Used in operations like
``torch.nn.Conv2d``, ``torch.nn.ReLU``, and ``torch.nn.LSTM``.
* - `MIGraphX <https://github.com/ROCm/AMDMIGraphX>`_
- 2.11.0
- :version-ref:`MIGraphX rocm_version`
- Adds graph-level optimizations, support for ONNX models and mixed
precision, and enables Ahead-of-Time (AOT) compilation.
- Speeds up inference and executes ONNX models for compatibility with
other frameworks, accelerating operations like ``torch.nn.Conv2d``,
``torch.nn.ReLU``, and ``torch.nn.LSTM``.
* - `MIVisionX <https://github.com/ROCm/MIVisionX>`_
- 3.1.0
- :version-ref:`MIVisionX rocm_version`
- Optimizes acceleration for computer vision and AI workloads like
preprocessing, augmentation, and inferencing.
- Faster data preprocessing and augmentation pipelines for datasets like
ImageNet or COCO and easy to integrate into PyTorch's ``torch.utils.data``
and ``torchvision`` workflows.
* - `rocAL <https://github.com/ROCm/rocAL>`_
- 2.1.0
- :version-ref:`rocAL rocm_version`
- Accelerates the data pipeline by offloading intensive preprocessing and
augmentation tasks. rocAL is part of MIVisionX.
- Easy to integrate into PyTorch's ``torch.utils.data`` and
``torchvision`` data loading workflows.
* - `RCCL <https://github.com/ROCm/rccl>`_
- 2.21.5
- :version-ref:`RCCL rocm_version`
- Optimized for multi-GPU communication for operations like AllReduce and
Broadcast.
- Distributed data parallel training (``torch.nn.parallel.DistributedDataParallel``).
Handles communication in multi-GPU setups.
* - `rocDecode <https://github.com/ROCm/rocDecode>`_
- 0.8.0
- :version-ref:`rocDecode rocm_version`
- Provides hardware-accelerated data decoding capabilities, particularly
for image, video, and other dataset formats.
- Can be integrated in ``torch.utils.data``, ``torchvision.transforms``
and ``torch.distributed``.
* - `rocJPEG <https://github.com/ROCm/rocJPEG>`_
- 0.6.0
- :version-ref:`rocJPEG rocm_version`
- Provides hardware-accelerated JPEG image decoding and encoding.
- GPU accelerated ``torchvision.io.decode_jpeg`` and
``torchvision.io.encode_jpeg`` and can be integrated in
``torch.utils.data`` and ``torchvision``.
* - `RPP <https://github.com/ROCm/RPP>`_
- 1.9.1
- :version-ref:`RPP rocm_version`
- Speeds up data augmentation, transformation, and other preprocessing steps.
- Easy to integrate into PyTorch's ``torch.utils.data`` and
``torchvision`` data loading workflows.
* - `rocThrust <https://github.com/ROCm/rocThrust>`_
- 3.3.0
- :version-ref:`rocThrust rocm_version`
- Provides a C++ template library for parallel algorithms like sorting,
reduction, and scanning.
- Utilized in backend operations for tensor computations requiring
parallel processing.
* - `rocWMMA <https://github.com/ROCm/rocWMMA>`_
- 1.6.0
- :version-ref:`rocWMMA rocm_version`
- Accelerates warp-level matrix-multiply and matrix-accumulate to speed up matrix
multiplication (GEMM) and accumulation operations with mixed precision
support.

View File

@@ -4,6 +4,8 @@
:description: TensorFlow compatibility
:keywords: GPU, TensorFlow compatibility
.. version-set:: rocm_version latest
*******************************************************************************
TensorFlow compatibility
*******************************************************************************
@@ -117,7 +119,8 @@ Critical ROCm libraries for TensorFlow
TensorFlow depends on multiple components and the supported features of those
components can affect the TensorFlow ROCm supported feature set. The versions
in the following table refer to the first TensorFlow version where the ROCm
library was introduced as a dependency.
library was introduced as a dependency. The versions described
are available in ROCm :version:`rocm_version`.
.. list-table::
:widths: 25, 10, 35, 30
@@ -128,43 +131,43 @@ library was introduced as a dependency.
- Purpose
- Used in
* - `hipBLAS <https://github.com/ROCm/hipBLAS>`_
- 2.3.0
- :version-ref:`hipBLAS rocm_version`
- Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for
matrix and vector operations.
- Accelerates operations like ``tf.matmul``, ``tf.linalg.matmul``, and
other matrix multiplications commonly used in neural network layers.
* - `hipBLASLt <https://github.com/ROCm/hipBLASLt>`_
- 0.10.0
- :version-ref:`hipBLASLt rocm_version`
- Extends hipBLAS with additional optimizations like fused kernels and
integer tensor cores.
- Optimizes matrix multiplications and linear algebra operations used in
layers like dense, convolutional, and RNNs in TensorFlow.
* - `hipCUB <https://github.com/ROCm/hipCUB>`_
- 3.3.0
- :version-ref:`hipCUB rocm_version`
- Provides a C++ template library for parallel algorithms for reduction,
scan, sort and select.
- Supports operations like ``tf.reduce_sum``, ``tf.cumsum``, ``tf.sort``
and other tensor operations in TensorFlow, especially those involving
scanning, sorting, and filtering.
* - `hipFFT <https://github.com/ROCm/hipFFT>`_
- 1.0.17
- :version-ref:`hipFFT rocm_version`
- Accelerates Fast Fourier Transforms (FFT) for signal processing tasks.
- Used for operations like signal processing, image filtering, and
certain types of neural networks requiring FFT-based transformations.
* - `hipSOLVER <https://github.com/ROCm/hipSOLVER>`_
- 2.3.0
- :version-ref:`hipSOLVER rocm_version`
- Provides GPU-accelerated direct linear solvers for dense and sparse
systems.
- Optimizes linear algebra functions such as solving systems of linear
equations, often used in optimization and training tasks.
* - `hipSPARSE <https://github.com/ROCm/hipSPARSE>`_
- 3.1.2
- :version-ref:`hipSPARSE rocm_version`
- Optimizes sparse matrix operations for efficient computations on sparse
data.
- Accelerates sparse matrix operations in models with sparse weight
matrices or activations, commonly used in neural networks.
* - `MIOpen <https://github.com/ROCm/MIOpen>`_
- 3.3.0
- :version-ref:`MIOpen rocm_version`
- Provides optimized deep learning primitives such as convolutions,
pooling,
normalization, and activation functions.
@@ -172,13 +175,13 @@ library was introduced as a dependency.
in TensorFlow for layers like ``tf.nn.conv2d``, ``tf.nn.relu``, and
``tf.nn.lstm_cell``.
* - `RCCL <https://github.com/ROCm/rccl>`_
- 2.21.5
- :version-ref:`RCCL rocm_version`
- Optimized for multi-GPU communication for operations like AllReduce and
Broadcast.
- Distributed data parallel training (``tf.distribute.MirroredStrategy``).
Handles communication in multi-GPU setups.
* - `rocThrust <https://github.com/ROCm/rocThrust>`_
- 3.3.0
- :version-ref:`rocThrust rocm_version`
- Provides a C++ template library for parallel algorithms like sorting,
reduction, and scanning.
- Reduction operations like ``tf.reduce_sum``, ``tf.cumsum`` for computing

View File

@@ -32,7 +32,7 @@ architecture.
* [AMD Instinct™ MI250 microarchitecture](./gpu-arch/mi250.md)
* [AMD Instinct MI200/CDNA2 ISA](https://www.amd.com/system/files/TechDocs/instinct-mi200-cdna2-instruction-set-architecture.pdf)
* [White paper](https://www.amd.com/system/files/documents/amd-cdna2-white-paper.pdf)
* [White paper](https://www.amd.com/content/dam/amd/en/documents/instinct-business-docs/white-papers/amd-cdna2-white-paper.pdf)
* [Performance counters](./gpu-arch/mi300-mi200-performance-counters.rst)
:::
@@ -45,7 +45,7 @@ architecture.
* [AMD Instinct™ MI100 microarchitecture](./gpu-arch/mi100.md)
* [AMD Instinct MI100/CDNA1 ISA](https://www.amd.com/system/files/TechDocs/instinct-mi100-cdna1-shader-instruction-set-architecture%C2%A0.pdf)
* [White paper](https://www.amd.com/system/files/documents/amd-cdna-whitepaper.pdf)
* [White paper](https://www.amd.com/content/dam/amd/en/documents/instinct-business-docs/white-papers/amd-cdna-white-paper.pdf)
:::
@@ -55,7 +55,6 @@ architecture.
* [AMD RDNA3 ISA](https://www.amd.com/system/files/TechDocs/rdna3-shader-instruction-set-architecture-feb-2023_0.pdf)
* [AMD RDNA2 ISA](https://www.amd.com/system/files/TechDocs/rdna2-shader-instruction-set-architecture.pdf)
* [AMD RDNA ISA](https://www.amd.com/system/files/TechDocs/rdna-shader-instruction-set-architecture.pdf)
* [AMD RDNA Architecture White Paper](https://www.amd.com/system/files/documents/rdna-whitepaper.pdf)
:::

View File

@@ -6,6 +6,8 @@
import os
import shutil
import sys
from pathlib import Path
shutil.copy2("../RELEASE.md", "./about/release-notes.md")
@@ -50,8 +52,8 @@ article_pages = [
{"file": "how-to/rocm-for-ai/training/index", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/train-a-model", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/prerequisite-system-validation", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/train-a-model/benchmark-docker/megatron-lm", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/train-a-model/benchmark-docker/pytorch-training", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/megatron-lm", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/pytorch-training", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/scale-model-training", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/fine-tuning/index", "os": ["linux"]},
@@ -66,7 +68,7 @@ article_pages = [
{"file": "how-to/rocm-for-ai/inference/llm-inference-frameworks", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/vllm-benchmark", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/deploy-your-model", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference-optimization/index", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference-optimization/model-quantization", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference-optimization/model-acceleration-libraries", "os": ["linux"]},
@@ -89,11 +91,16 @@ article_pages = [
external_toc_path = "./sphinx/_toc.yml"
extensions = ["rocm_docs", "sphinx_reredirects", "sphinx_sitemap"]
# Add the extension directory to Python's search path
sys.path.append(str(Path(__file__).parent / 'extension'))
extensions = ["rocm_docs", "sphinx_reredirects", "sphinx_sitemap", "sphinxcontrib.datatemplates", "version-ref"]
compatibility_matrix_file = str(Path(__file__).parent / 'compatibility/compatibility-matrix-historical-6.0.csv')
external_projects_current_project = "rocm"
# Uncomment if facing rate limit exceed issue with local build
# external_projects_remote_repository = ""
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "https://rocm-stg.amd.com/")
@@ -104,8 +111,9 @@ if os.environ.get("READTHEDOCS", "") == "True":
html_theme = "rocm_docs_theme"
html_theme_options = {"flavor": "rocm-docs-home"}
html_static_path = ["sphinx/static/css"]
html_css_files = ["rocm_custom.css", "rocm_rn.css"]
html_static_path = ["sphinx/static/css", "extension/how-to/rocm-for-ai/inference"]
html_css_files = ["rocm_custom.css", "rocm_rn.css", "vllm-benchmark.css"]
html_js_files = ["vllm-benchmark.js"]
html_title = "ROCm Documentation"

View File

@@ -0,0 +1,153 @@
vllm_benchmark:
unified_docker:
latest:
pull_tag: rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6
docker_hub_url: https://hub.docker.com/layers/rocm/vllm/rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6/images/sha256-9a12ef62bbbeb5a4c30a01f702c8e025061f575aa129f291a49fbd02d6b4d6c9
rocm_version: 6.3.1
vllm_version: 0.6.6
pytorch_version: 2.7.0 (2.7.0a0+git3a58512)
model_groups:
- group: Llama
tag: llama
models:
- model: Llama 3.1 8B
mad_tag: pyt_vllm_llama-3.1-8b
model_repo: meta-llama/Llama-3.1-8B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: float16
- model: Llama 3.1 70B
mad_tag: pyt_vllm_llama-3.1-70b
model_repo: meta-llama/Llama-3.1-70B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct
precision: float16
- model: Llama 3.1 405B
mad_tag: pyt_vllm_llama-3.1-405b
model_repo: meta-llama/Llama-3.1-405B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct
precision: float16
- model: Llama 3.2 11B Vision
mad_tag: pyt_vllm_llama-3.2-11b-vision-instruct
model_repo: meta-llama/Llama-3.2-11B-Vision-Instruct
url: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct
precision: float16
- model: Llama 2 7B
mad_tag: pyt_vllm_llama-2-7b
model_repo: meta-llama/Llama-2-7b-chat-hf
url: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
precision: float16
- model: Llama 2 70B
mad_tag: pyt_vllm_llama-2-70b
model_repo: meta-llama/Llama-2-70b-chat-hf
url: https://huggingface.co/meta-llama/Llama-2-70b-chat-hf
precision: float16
- model: Llama 3.1 70B FP8
mad_tag: pyt_vllm_llama-3.1-70b_fp8
model_repo: amd/Llama-3.1-70B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.1-70B-Instruct-FP8-KV
precision: float8
- model: Llama 3.1 405B FP8
mad_tag: pyt_vllm_llama-3.1-405b_fp8
model_repo: amd/Llama-3.1-405B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.1-405B-Instruct-FP8-KV
precision: float8
- group: Mistral
tag: mistral
models:
- model: Mixtral MoE 8x7B
mad_tag: pyt_vllm_mixtral-8x7b
model_repo: mistralai/Mixtral-8x7B-Instruct-v0.1
url: https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
precision: float16
- model: Mixtral MoE 8x22B
mad_tag: pyt_vllm_mixtral-8x22b
model_repo: mistralai/Mixtral-8x22B-Instruct-v0.1
url: https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1
precision: float16
- model: Mistral 7B
mad_tag: pyt_vllm_mistral-7b
model_repo: mistralai/Mistral-7B-Instruct-v0.3
url: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3
precision: float16
- model: Mixtral MoE 8x7B FP8
mad_tag: pyt_vllm_mixtral-8x7b_fp8
model_repo: amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV
url: https://huggingface.co/amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV
precision: float8
- model: Mixtral MoE 8x22B FP8
mad_tag: pyt_vllm_mixtral-8x22b_fp8
model_repo: amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV
url: https://huggingface.co/amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV
precision: float8
- model: Mistral 7B FP8
mad_tag: pyt_vllm_mistral-7b_fp8
model_repo: amd/Mistral-7B-v0.1-FP8-KV
url: https://huggingface.co/amd/Mistral-7B-v0.1-FP8-KV
precision: float8
- group: Qwen
tag: qwen
models:
- model: Qwen2 7B
mad_tag: pyt_vllm_qwen2-7b
model_repo: Qwen/Qwen2-7B-Instruct
url: https://huggingface.co/Qwen/Qwen2-7B-Instruct
precision: float16
- model: Qwen2 72B
mad_tag: pyt_vllm_qwen2-72b
model_repo: Qwen/Qwen2-72B-Instruct
url: https://huggingface.co/Qwen/Qwen2-72B-Instruct
precision: float16
- group: JAIS
tag: jais
models:
- model: JAIS 13B
mad_tag: pyt_vllm_jais-13b
model_repo: core42/jais-13b-chat
url: https://huggingface.co/core42/jais-13b-chat
precision: float16
- model: JAIS 30B
mad_tag: pyt_vllm_jais-30b
model_repo: core42/jais-30b-chat-v3
url: https://huggingface.co/core42/jais-30b-chat-v3
precision: float16
- group: DBRX
tag: dbrx
models:
- model: DBRX Instruct
mad_tag: pyt_vllm_dbrx-instruct
model_repo: databricks/dbrx-instruct
url: https://huggingface.co/databricks/dbrx-instruct
precision: float16
- model: DBRX Instruct FP8
mad_tag: pyt_vllm_dbrx_fp8
model_repo: amd/dbrx-instruct-FP8-KV
url: https://huggingface.co/amd/dbrx-instruct-FP8-KV
precision: float8
- group: Gemma
tag: gemma
models:
- model: Gemma 2 27B
mad_tag: pyt_vllm_gemma-2-27b
model_repo: google/gemma-2-27b
url: https://huggingface.co/google/gemma-2-27b
precision: float16
- group: Cohere
tag: cohere
models:
- model: C4AI Command R+ 08-2024
mad_tag: pyt_vllm_c4ai-command-r-plus-08-2024
model_repo: CohereForAI/c4ai-command-r-plus-08-2024
url: https://huggingface.co/CohereForAI/c4ai-command-r-plus-08-2024
precision: float16
- model: C4AI Command R+ 08-2024 FP8
mad_tag: pyt_vllm_command-r-plus_fp8
model_repo: amd/c4ai-command-r-plus-FP8-KV
url: https://huggingface.co/amd/c4ai-command-r-plus-FP8-KV
precision: float8
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek MoE 16B
mad_tag: pyt_vllm_deepseek-moe-16b-chat
model_repo: deepseek-ai/deepseek-moe-16b-chat
url: https://huggingface.co/deepseek-ai/deepseek-moe-16b-chat
precision: float16
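This data file is consumed by the new vLLM benchmarking page through sphinxcontrib.datatemplates, as shown further below. A condensed sketch of the pattern (the loop body here is illustrative, not taken from the page):

.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/vllm-benchmark-models.yaml

   {% for model_group in data.vllm_benchmark.model_groups %}
   * {{ model_group.group }} ({{ model_group.models | length }} variants)
   {% endfor %}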

View File

@@ -0,0 +1,212 @@
function ready(proc) {
// Check if page is loaded. If so, init.
if (document.readyState !== "loading") {
proc();
} else {
// Otherwise, wait for DOMContentLoaded event.
document.addEventListener("DOMContentLoaded", proc);
}
}
ready(() => {
const ModelPicker = {
// Selector strings for DOM elements
SELECTORS: {
CONTAINER: "#vllm-benchmark-ud-params-picker",
MODEL_GROUP_BTN: 'div[data-param-k="model-group"][data-param-v]',
MODEL_PARAM_BTN: 'div[data-param-k="model"][data-param-v]',
MODEL_DOC: "div.model-doc",
},
CSS_CLASSES: {
HIDDEN: "hidden",
},
ATTRIBUTES: {
PARAM_KEY: "data-param-k", // URL search parameter key (i.e., "model")
PARAM_VALUE: "data-param-v", // URL search param value (e.g., "pyt_vllm_llama-3.1-8b", "pyt_vllm_llama-3.1-70b") -- these are MAD model tags
PARAM_GROUP: "data-param-group", // Model group (e.g., "llama", "mistral")
PARAM_STATE: "data-param-state", // Selection state
},
// Cache DOM elements
elements: {
container: null,
modelGroups: null,
modelParams: null,
modelDocs: null,
},
data: {
availableModels: new Set(),
modelsByGroup: new Map(),
modelToGroupMap: new Map(),
formattedModelClassMap: new Map(), //TODO
},
init() {
this.elements.container = document.querySelector(
this.SELECTORS.CONTAINER,
);
if (!this.elements.container) return;
this.cacheDOMElements();
if (!this.validateElements()) return;
this.buildModelData();
this.bindEvents();
this.initializeState();
},
cacheDOMElements() {
const { CONTAINER, MODEL_GROUP_BTN, MODEL_PARAM_BTN, MODEL_DOC } =
this.SELECTORS;
this.elements = {
container: document.querySelector(CONTAINER),
modelGroups: document.querySelectorAll(MODEL_GROUP_BTN),
modelParams: document.querySelectorAll(MODEL_PARAM_BTN),
modelDocs: document.querySelectorAll(MODEL_DOC),
};
},
validateElements() {
const { modelGroups, modelParams } = this.elements;
if (!modelGroups.length || !modelParams.length) {
console.warn("Model picker is missing required elements");
return false;
}
return true;
},
buildModelData() {
const { PARAM_VALUE, PARAM_GROUP } = this.ATTRIBUTES;
this.elements.modelParams.forEach((model) => {
const modelTag = model.getAttribute(PARAM_VALUE);
const groupTag = model.getAttribute(PARAM_GROUP);
if (!modelTag || !groupTag) return;
this.data.availableModels.add(modelTag);
this.data.modelToGroupMap.set(modelTag, groupTag);
// FIXME: this is because Sphinx auto-formats class names to use dashes
this.data.formattedModelClassMap.set(
modelTag,
modelTag.replace(/[^a-zA-Z0-9]/g, "-"),
);
if (!this.data.modelsByGroup.has(groupTag)) {
this.data.modelsByGroup.set(groupTag, []);
}
this.data.modelsByGroup.get(groupTag).push(modelTag);
});
},
// Event listeners for user interactions
bindEvents() {
const handleInteraction = (event) => {
const target = event.target.closest(`[${this.ATTRIBUTES.PARAM_KEY}]`);
if (!target) return;
const paramType = target.getAttribute(this.ATTRIBUTES.PARAM_KEY);
const paramValue = target.getAttribute(this.ATTRIBUTES.PARAM_VALUE);
if (paramType === "model") {
const groupTag = target.getAttribute(this.ATTRIBUTES.PARAM_GROUP);
if (groupTag) this.updateUI(paramValue, groupTag);
} else if (paramType === "model-group") {
const firstModelInGroup = this.data.modelsByGroup.get(paramValue)
?.[0];
if (firstModelInGroup) this.updateUI(firstModelInGroup, paramValue);
}
};
this.elements.container.addEventListener("click", handleInteraction);
this.elements.container.addEventListener("keydown", (event) => {
if (event.key === "Enter" || event.key === " ") {
event.preventDefault();
handleInteraction(event);
}
});
},
// Update the page based on the selected model
updateUI(modelTag, groupTag) {
const validModel = this.setModelSearchParam(modelTag);
// Update model group buttons
this.elements.modelGroups.forEach((group) => {
const isSelected =
group.getAttribute(this.ATTRIBUTES.PARAM_VALUE) === groupTag;
group.setAttribute(
this.ATTRIBUTES.PARAM_STATE,
isSelected ? "selected" : "",
);
group.setAttribute("aria-selected", isSelected.toString());
});
// Update model buttons
this.elements.modelParams.forEach((model) => {
const isInSelectedGroup =
model.getAttribute(this.ATTRIBUTES.PARAM_GROUP) === groupTag;
const isSelectedModel =
model.getAttribute(this.ATTRIBUTES.PARAM_VALUE) === validModel;
model.classList.toggle(this.CSS_CLASSES.HIDDEN, !isInSelectedGroup);
model.setAttribute(
this.ATTRIBUTES.PARAM_STATE,
isSelectedModel ? "selected" : "",
);
model.setAttribute("aria-selected", isSelectedModel.toString());
});
// Update visibility of doc sections
const formattedClass = this.data.formattedModelClassMap.get(validModel);
if (formattedClass) {
this.elements.modelDocs.forEach((doc) => {
doc.classList.toggle(
this.CSS_CLASSES.HIDDEN,
!doc.classList.contains(formattedClass),
);
});
}
},
// Get the current model from the URL search parameters.
getModelSearchParam() {
return new URLSearchParams(location.search).get("model");
},
// Set the model in the URL search parameters, or fallback to the first available one.
setModelSearchParam(modelTag) {
const defaultModel = [...this.data.availableModels][0];
const model = this.data.availableModels.has(modelTag)
? modelTag
: defaultModel;
const searchParams = new URLSearchParams(location.search);
searchParams.set("model", model);
history.replaceState(
{},
"",
`${location.pathname}?${searchParams.toString()}`,
);
return model;
},
// Initialize the UI state based on the current URL search parameter or default values.
initializeState() {
const currentModel = this.getModelSearchParam();
const validModel = this.setModelSearchParam(currentModel);
const initialGroup = this.data.modelToGroupMap.get(validModel) ??
[...this.data.modelsByGroup.keys()][0];
if (initialGroup) {
this.updateUI(validModel, initialGroup);
}
},
};
ModelPicker.init();
});
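Because setModelSearchParam writes the selection back into the query string with history.replaceState, a specific model variant can be deep-linked and survives page reloads; for example (page path illustrative, the tag comes from the YAML data file above):

https://rocm.docs.amd.com/en/latest/how-to/rocm-for-ai/inference/vllm-benchmark.html?model=pyt_vllm_llama-3.1-8b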

View File

@@ -0,0 +1,266 @@
from docutils import nodes
from docutils.parsers.rst import Directive
from sphinx.util import logging
import csv
from io import StringIO
import re
import shlex
logger = logging.getLogger(__name__)
class VersionReference(nodes.Inline, nodes.TextElement):
"""Represents an inline version reference."""
pass
class VersionSetDirective(Directive):
"""Directive for setting version references within a page scope."""
required_arguments = 2 # name and value
optional_arguments = 0
def run(self):
env = self.state.document.settings.env
if not hasattr(env, 'doc_version_refs'):
env.doc_version_refs = {}
current_doc = env.docname
if current_doc not in env.doc_version_refs:
env.doc_version_refs[current_doc] = {}
name, value = self.arguments
if name.lower() == 'latest':
logger.warning('Cannot override the "latest" keyword with version-set')
return []
# Handle 'latest' value by getting the actual version
if value.lower() == 'latest':
data = getattr(env, 'compatibility_matrix', None)
if data:
latest_version = get_latest_rocm_version(data)
if latest_version:
value = latest_version
env.doc_version_refs[current_doc][name] = value
return []
def clean_library_name(name):
"""Extract library name from RST formatting."""
# Handle :doc: format
doc_match = re.search(r':doc:`([^<]+)(?:\s+<[^>]+>)?`', name)
if doc_match:
return doc_match.group(1).strip()
# Handle other link formats
link_match = re.search(r'`([^<]+)(?:\s+<[^>]+>)?`_?', name)
if link_match:
return link_match.group(1).strip()
return name.strip()
def get_latest_rocm_version(data):
"""Get the latest ROCm version from the matrix headers."""
if not data or len(data) == 0:
return None
# Get all column names except 'ROCm Version'
columns = [col for col in data[0].keys() if col != 'ROCm Version']
# Return the first column name (assumed to be the latest version)
return columns[0] if columns else None
def version_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
"""
Role function to print version value.
Usage: :version:`version_name`
"""
try:
version_name = text.strip()
env = inliner.document.settings.env
if hasattr(env, 'doc_version_refs'):
current_doc = env.docname
if current_doc in env.doc_version_refs:
doc_refs = env.doc_version_refs[current_doc]
if version_name in doc_refs:
version = doc_refs[version_name]
node = nodes.Text(version)
return [node], []
msg = inliner.reporter.warning(
f'No version defined for name {version_name}',
line=lineno
)
return [], [msg]
except Exception as e:
msg = inliner.reporter.error(
f'Error looking up version: {str(e)}',
line=lineno
)
prb = inliner.problematic(rawtext, rawtext, msg)
return [prb], [msg]
def version_ref_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
"""
Role function for version references.
Usage: :version-ref:`library_name release`
:version-ref:`"library name" release`
:version-ref:`library_name latest`
:version-ref:`rocm latest`
"""
try:
# Parse the text - handle both quoted and unquoted formats
if '"' in text:
parts = shlex.split(text)
else:
parts = text.split()
if len(parts) != 2:
msg = inliner.reporter.error(
'Version reference must be in format "library_name release" or "\\"library name\\" release"',
line=lineno
)
prb = inliner.problematic(rawtext, rawtext, msg)
return [prb], [msg]
library_name, release = parts
env = inliner.document.settings.env
# Check if release is a version reference in current document
if hasattr(env, 'doc_version_refs'):
current_doc = env.docname
if current_doc in env.doc_version_refs:
doc_refs = env.doc_version_refs[current_doc]
if release in doc_refs:
release = doc_refs[release]
# Handle special case for "rocm latest"
if library_name.lower() == 'rocm' and release.lower() == 'latest':
data = getattr(env, 'compatibility_matrix', None)
if not data:
raise ValueError("Compatibility matrix not found in environment")
latest_version = get_latest_rocm_version(data)
if latest_version:
node = VersionReference()
node += nodes.Text(latest_version)
return [node], []
else:
msg = inliner.reporter.warning(
'No ROCm versions found in compatibility matrix',
line=lineno
)
return [], [msg]
version = lookup_version(inliner, library_name, release)
if version:
node = VersionReference()
node += nodes.Text(version)
return [node], []
else:
msg = inliner.reporter.warning(
f'No version found for library {library_name} in release {release}',
line=lineno
)
return [], [msg]
except Exception as e:
msg = inliner.reporter.error(
f'Error looking up version: {str(e)}',
line=lineno
)
prb = inliner.problematic(rawtext, rawtext, msg)
return [prb], [msg]
def lookup_version(inliner, library_name, release):
"""Look up the version in the compatibility matrix."""
env = inliner.document.settings.env
data = getattr(env, 'compatibility_matrix', None)
if not data:
raise ValueError("Compatibility matrix not found in environment")
# Handle the 'latest' keyword
if release.lower() == 'latest':
latest_version = get_latest_rocm_version(data)
if not latest_version:
return None
release = latest_version
# For ROCm, check if the version exists in column headers
if library_name.lower() == 'rocm':
columns = [col for col in data[0].keys() if col != 'ROCm Version']
if release in columns:
return release
return None
# Find the library version
for row in data:
row_lib_name = clean_library_name(row['ROCm Version'])
if row_lib_name == library_name:
# Get the version, removing any whitespace
version = row.get(release, '').strip()
if version:
return version
# If not found, try a case-insensitive search
for row in data:
row_lib_name = clean_library_name(row['ROCm Version'])
if row_lib_name.lower() == library_name.lower():
version = row.get(release, '').strip()
if version:
return version
return None
def visit_version_reference(self, node):
self.body.append(f'<span class="version-reference">')
def depart_version_reference(self, node):
self.body.append('</span>')
def load_compatibility_matrix(app):
"""Load the compatibility matrix content from CSV."""
if not app.config.compatibility_matrix_file:
logger.warning('No compatibility matrix file configured')
return
try:
with open(app.config.compatibility_matrix_file, 'r', encoding='utf-8') as f:
reader = csv.DictReader(f)
app.env.compatibility_matrix = list(reader)
logger.info('Successfully loaded compatibility matrix')
# Debug: print first few rows with their library names
for row in list(app.env.compatibility_matrix)[:5]:
if 'ROCm Version' in row:
lib_name = clean_library_name(row['ROCm Version'])
logger.debug(f"Loaded library: {lib_name}")
except Exception as e:
logger.error(f'Error loading compatibility matrix: {str(e)}')
def purge_version_refs(app, env, docname):
"""Remove version references for a document when it is purged"""
if hasattr(env, 'doc_version_refs'):
if docname in env.doc_version_refs:
del env.doc_version_refs[docname]
def setup(app):
app.add_node(VersionReference,
html=(visit_version_reference, depart_version_reference))
app.add_role('version-ref', version_ref_role)
app.add_role('version', version_role)
app.add_directive('version-set', VersionSetDirective)
# Add a config value for the compatibility matrix file path
app.add_config_value('compatibility_matrix_file', None, 'env')
# Connect to the builder-inited event to load the matrix
app.connect('builder-inited', load_compatibility_matrix)
# Connect to env-purge-doc event to clean up document-specific version refs
app.connect('env-purge-doc', purge_version_refs)
return {
'parallel_read_safe': True,
'parallel_write_safe': True,
}
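A minimal usage recap of the roles this extension registers, matching how the compatibility pages in this change exercise them:

.. version-set:: rocm_version latest

:version:`rocm_version` renders the resolved ROCm release number, and
:version-ref:`hipBLAS rocm_version` renders the hipBLAS version shipped in that release, looked up in the compatibility-matrix CSV configured in conf.py.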

View File

@@ -9,422 +9,266 @@ LLM inference performance validation on AMD Instinct MI300X
.. _vllm-benchmark-unified-docker:
The `ROCm vLLM Docker <https://hub.docker.com/r/rocm/vllm/tags>`_ image offers
a prebuilt, optimized environment for validating large language model (LLM)
inference performance on the AMD Instinct™ MI300X accelerator. This ROCm vLLM
Docker image integrates vLLM and PyTorch tailored specifically for the MI300X
accelerator and includes the following components:
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/vllm-benchmark-models.yaml
* `ROCm 6.3.1 <https://github.com/ROCm/ROCm>`_
{% set unified_docker = data.vllm_benchmark.unified_docker.latest %}
{% set model_groups = data.vllm_benchmark.model_groups %}
* `vLLM 0.6.6 <https://docs.vllm.ai/en/latest>`_
The `ROCm vLLM Docker <{{ unified_docker.docker_hub_url }}>`_ image offers
a prebuilt, optimized environment for validating large language model (LLM)
inference performance on the AMD Instinct™ MI300X accelerator. This ROCm vLLM
Docker image integrates vLLM and PyTorch tailored specifically for the MI300X
accelerator and includes the following components:
* `PyTorch 2.7.0 (2.7.0a0+git3a58512) <https://github.com/pytorch/pytorch>`_
* `ROCm {{ unified_docker.rocm_version }} <https://github.com/ROCm/ROCm>`_
With this Docker image, you can quickly validate the expected inference
performance numbers for the MI300X accelerator. This topic also provides tips on
optimizing performance with popular AI models. For more information, see the lists of
:ref:`available models for MAD-integrated benchmarking <vllm-benchmark-mad-models>`
and :ref:`standalone benchmarking <vllm-benchmark-standalone-options>`.
* `vLLM {{ unified_docker.vllm_version }} <https://docs.vllm.ai/en/latest>`_
.. _vllm-benchmark-vllm:
* `PyTorch {{ unified_docker.pytorch_version }} <https://github.com/pytorch/pytorch>`_
.. note::
With this Docker image, you can quickly validate the expected inference
performance numbers for the MI300X accelerator. This topic also provides tips on
optimizing performance with popular AI models.
vLLM is a toolkit and library for LLM inference and serving. AMD implements
high-performance custom kernels and modules in vLLM to enhance performance.
See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for
more information.
.. _vllm-benchmark-available-models:
Getting started
===============
Available models
================
Use the following procedures to reproduce the benchmark results on an
MI300X accelerator with the prebuilt vLLM Docker image.
.. raw:: html
.. _vllm-benchmark-get-started:
<div id="vllm-benchmark-ud-params-picker" class="container-fluid">
<div class="row">
<div class="col-2 me-2 model-param-head">Model</div>
<div class="row col-10">
{% for model_group in model_groups %}
<div class="col-3 model-param" data-param-k="model-group" data-param-v="{{ model_group.tag }}" tabindex="0">{{ model_group.group }}</div>
{% endfor %}
</div>
</div>
1. Disable NUMA auto-balancing.
<div class="row mt-1">
<div class="col-2 me-2 model-param-head">Model variant</div>
<div class="row col-10">
{% for model_group in model_groups %}
{% set models = model_group.models %}
{% for model in models %}
{% if models|length % 3 == 0 %}
<div class="col-4 model-param" data-param-k="model" data-param-v="{{ model.mad_tag }}" data-param-group="{{ model_group.tag }}" tabindex="0">{{ model.model }}</div>
{% else %}
<div class="col-6 model-param" data-param-k="model" data-param-v="{{ model.mad_tag }}" data-param-group="{{ model_group.tag }}" tabindex="0">{{ model.model }}</div>
{% endif %}
{% endfor %}
{% endfor %}
</div>
</div>
</div>
To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU
might hang until the periodic balancing is finalized. For more information,
see :ref:`AMD Instinct MI300X system optimization <mi300x-disable-numa>`.
.. _vllm-benchmark-vllm:
.. code-block:: shell
{% for model_group in model_groups %}
{% for model in model_group.models %}
# disable automatic NUMA balancing
sh -c 'echo 0 > /proc/sys/kernel/numa_balancing'
# check if NUMA balancing is disabled (returns 0 if disabled)
cat /proc/sys/kernel/numa_balancing
0
.. container:: model-doc {{model.mad_tag}}
2. Download the :ref:`ROCm vLLM Docker image <vllm-benchmark-unified-docker>`.
.. note::
Use the following command to pull the Docker image from Docker Hub.
See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model.
Some models require access authorization prior to use, via an external license agreement with a third party.
.. code-block:: shell
{% endfor %}
{% endfor %}
docker pull rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6
Once the setup is complete, choose between two options to reproduce the
benchmark results:
.. note::
- :ref:`MAD-integrated benchmarking <vllm-benchmark-mad>`
vLLM is a toolkit and library for LLM inference and serving. AMD implements
high-performance custom kernels and modules in vLLM to enhance performance.
See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for
more information.
- :ref:`Standalone benchmarking <vllm-benchmark-standalone>`
Getting started
===============
Use the following procedures to reproduce the benchmark results on an
MI300X accelerator with the prebuilt vLLM Docker image.
.. _vllm-benchmark-get-started:
1. Disable NUMA auto-balancing.

   To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU
   might hang until the periodic balancing is finalized. For more information,
   see :ref:`AMD Instinct MI300X system optimization <mi300x-disable-numa>`.

   .. code-block:: shell

      # disable automatic NUMA balancing
      sh -c 'echo 0 > /proc/sys/kernel/numa_balancing'

      # check if NUMA balancing is disabled (returns 0 if disabled)
      cat /proc/sys/kernel/numa_balancing
      0
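
   Writing to ``/proc`` this way lasts only until the next reboot. As a minimal
   sketch for making the setting persistent, assuming a distribution that reads
   ``/etc/sysctl.d`` (the file name below is arbitrary):

   .. code-block:: shell

      # persist NUMA auto-balancing off across reboots
      echo 'kernel.numa_balancing=0' | sudo tee /etc/sysctl.d/90-numa-balancing.conf
      sudo sysctl --system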

2. Download the `ROCm vLLM Docker image <{{ unified_docker.docker_hub_url }}>`_.

   Use the following command to pull the Docker image from Docker Hub.

   .. code-block:: shell

      docker pull {{ unified_docker.pull_tag }}
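
   To confirm the pull succeeded, you can list the image locally before starting
   a container (the repository name below assumes the standard ``rocm/vllm``
   image):

   .. code-block:: shell

      # list locally available rocm/vllm images and their tags
      docker images rocm/vllm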

Benchmarking
============

Once the setup is complete, choose between two options to reproduce the
benchmark results:

.. _vllm-benchmark-mad:

{% for model_group in model_groups %}
{% for model in model_group.models %}

.. container:: model-doc {{model.mad_tag}}

   .. tab-set::

      .. tab-item:: MAD-integrated benchmarking

         Clone the ROCm Model Automation and Dashboarding (`<https://github.com/ROCm/MAD>`__) repository to a local
         directory and install the required packages on the host machine.

         .. code-block:: shell

            git clone https://github.com/ROCm/MAD
            cd MAD
            pip install -r requirements.txt

         Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model
         using one GPU with the ``{{model.precision}}`` data type on the host machine.

         .. code-block:: shell

            export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models"
            python3 tools/run_models.py --tags {{model.mad_tag}} --keep-model-dir --live-output --timeout 28800

         MAD launches a Docker container with the name
         ``container_ci-{{model.mad_tag}}``. The latency and throughput reports of the
         model are collected in the following path: ``~/MAD/reports_{{model.precision}}/``.
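
         The reports are plain CSV files. As a quick, hedged sketch for scanning
         them from the shell (the exact file names under the reports directory
         depend on the model tag):

         .. code-block:: shell

            # locate and pretty-print the collected CSV reports
            find ~/MAD/reports_{{model.precision}} -name '*.csv' -exec column -s ',' -t {} \;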

         Although the :ref:`available models <vllm-benchmark-available-models>` are preconfigured
         to collect latency and throughput performance data, you can also change the benchmarking
         parameters. See the standalone benchmarking tab for more information.

      .. tab-item:: Standalone benchmarking

         Run the vLLM benchmark tool independently by starting the
         `Docker container <https://hub.docker.com/layers/rocm/vllm/rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6/images/sha256-9a12ef62bbbeb5a4c30a01f702c8e025061f575aa129f291a49fbd02d6b4d6c9>`_
         as shown in the following snippet.

         .. code-block:: shell

            docker pull rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6
            docker run -it --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 16G --security-opt seccomp=unconfined --security-opt apparmor=unconfined --cap-add=SYS_PTRACE -v $(pwd):/workspace --env HUGGINGFACE_HUB_CACHE=/workspace --name vllm_v0.6.6 rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6
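
         Before benchmarking, it's worth confirming that the container can see
         the accelerators; the ROCm base image ships with ``rocm-smi``:

         .. code-block:: shell

            # inside the container: list the visible GPUs
            rocm-smi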

         In the Docker container, clone the ROCm MAD repository and navigate to the
         benchmark scripts directory at ``~/MAD/scripts/vllm``.

         .. code-block:: shell

            git clone https://github.com/ROCm/MAD
            cd MAD/scripts/vllm

         To start the benchmark, use the following command with the appropriate options.

         .. code-block:: shell

            ./vllm_benchmark_report.sh -s $test_option -m {{model.model_repo}} -g $num_gpu -d {{model.precision}}

         .. _vllm-benchmark-standalone-options:

         .. list-table::
            :header-rows: 1
            :align: center

            * - Name
              - Options
              - Description

            * - ``$test_option``
              - latency
              - Measure decoding token latency

            * -
              - throughput
              - Measure token generation throughput

            * -
              - all
              - Measure both throughput and latency

            * - ``$num_gpu``
              - 1 or 8
              - Number of GPUs

            * - ``$datatype``
              - ``float16`` or ``float8``
              - Data type
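
         Because the script takes plain flags, you can sweep several
         configurations in one shell loop. A minimal sketch, assuming both GPU
         counts are valid for the selected model:

         .. code-block:: shell

            # sweep latency and throughput runs across 1 and 8 GPUs
            for s in latency throughput; do
               for g in 1 8; do
                  ./vllm_benchmark_report.sh -s $s -m {{model.model_repo}} -g $g -d {{model.precision}}
               done
            done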

         .. note::

            The input sequence length, output sequence length, and tensor parallel (TP) are
            already configured. You don't need to specify them with this script.

         .. note::

            If you encounter the following error, pass your access-authorized Hugging
            Face token to the gated models.

            .. code-block::

               OSError: You are trying to access a gated repo.

            .. code-block:: shell

               # pass your HF_TOKEN
               export HF_TOKEN=$your_personal_hf_token

         Here are some examples of running the benchmark with various options.

         * Latency benchmark

           Use this command to benchmark the latency of the {{model.model}} model on eight GPUs with the ``{{model.precision}}`` data type.

           .. code-block:: shell

              ./vllm_benchmark_report.sh -s latency -m {{model.model_repo}} -g 8 -d {{model.precision}}

           Find the latency report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_latency_report.csv``.

         * Throughput benchmark

           Use this command to benchmark the throughput of the {{model.model}} model on eight GPUs with the ``{{model.precision}}`` data type.

           .. code-block:: shell

              ./vllm_benchmark_report.sh -s throughput -m {{model.model_repo}} -g 8 -d {{model.precision}}

           Find the throughput report at ``./reports_{{model.precision}}_vllm_rocm{{unified_docker.rocm_version}}/summary/{{model.model_repo.split('/', 1)[1] if '/' in model.model_repo else model.model_repo}}_throughput_report.csv``.

         .. raw:: html

            <style>
            mjx-container[jax="CHTML"][display="true"] {
               text-align: left;
               margin: 0;
            }
            </style>

         .. note::

            Throughput is calculated as:

            - .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time

            - .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time
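
            As a worked example with made-up numbers, 100 requests with an input
            length of 128, an output length of 128, and an elapsed time of 60
            seconds give a total throughput of 100 × (128 + 128) / 60 ≈ 427
            tokens per second and a generation throughput of
            100 × 128 / 60 ≈ 213 tokens per second.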
{% endfor %}
{% endfor %}
Further reading
===============
- To learn how to fine-tune LLMs, see
:doc:`Fine-tuning LLMs <../fine-tuning/index>`.
Previous versions
=================
This table lists previous versions of the ROCm vLLM Docker image for inference
performance validation. For detailed information about available models for
benchmarking, see the version-specific documentation.
.. list-table::
   :header-rows: 1
   :stub-columns: 1

   * - ROCm version
     - vLLM version
     - PyTorch version
     - Resources
   * - 6.2.1
     - 0.6.4
     - 2.5.0
     - * `Documentation <https://rocm.docs.amd.com/en/docs-6.3.0/how-to/performance-validation/mi300x/vllm-benchmark.html>`_
       * `Docker Hub <https://hub.docker.com/layers/rocm/vllm/rocm6.2_mi300_ubuntu20.04_py3.9_vllm_0.6.4/images/sha256-ccbb74cc9e7adecb8f7bdab9555f7ac6fc73adb580836c2a35ca96ff471890d8>`_
   * - 6.2.0
     - 0.4.3
     - 2.4.0
     - * `Documentation <https://rocm.docs.amd.com/en/docs-6.2.0/how-to/performance-validation/mi300x/vllm-benchmark.html>`_
       * `Docker Hub <https://hub.docker.com/layers/rocm/vllm/rocm6.2_mi300_ubuntu22.04_py3.9_vllm_7c5fd50/images/sha256-9e4dd4788a794c3d346d7d0ba452ae5e92d39b8dfac438b2af8efdc7f15d22c0>`_
View File
@@ -308,6 +308,24 @@ Otherwise, if the system has Intel host CPUs add this instead to
intel_iommu=on iommu=pt
``modprobe.blacklist=amdgpu``
For some system configurations, the ``amdgpu`` driver must be blocked from
loading during kernel initialization to avoid an issue where, after boot, the
GPUs aren't listed when you run ``rocm-smi`` or ``amd-smi``.

Alternatively, applying the AMD-recommended system-optimized BIOS settings might
remove the need for this workaround, but not all manufacturers and users
implement them.

If you experience this issue, add the following to ``GRUB_CMDLINE_LINUX``:

.. code-block:: text

   modprobe.blacklist=amdgpu
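
For example, a hypothetical ``GRUB_CMDLINE_LINUX`` line that combines this
setting with the IOMMU options described earlier might look like the following;
keep any options your system already uses:

.. code-block:: text

   GRUB_CMDLINE_LINUX="iommu=pt modprobe.blacklist=amdgpu"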

After the change, the ``amdgpu`` module must still be loaded for the ROCm
software tools and utilities to work. Run the following command to load it:

.. code-block:: shell

   sudo modprobe amdgpu
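
To verify that the module is active before running ROCm tools:

.. code-block:: shell

   # confirm the amdgpu module is loaded
   lsmod | grep amdgpu
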
Update GRUB
-----------
View File
@@ -9,16 +9,14 @@
Data types and precision support
*************************************************************
This topic lists the data type support on AMD GPUs and ROCm libraries, along
with the corresponding :doc:`HIP <hip:index>` data types.
Integral types
==============
The signed and unsigned integral types supported by ROCm are listed in
the following table.
.. list-table::
:header-rows: 1
@@ -48,10 +46,9 @@ description.
.. _precision_support_floating_point_types:
Floating-point types
====================
The floating-point types supported by ROCm are listed in the following table.
.. image:: ../data/about/compatibility/floating-point-data-types.png
:alt: Supported floating-point types
@@ -66,18 +63,18 @@ table, along with their corresponding HIP type and a short description.
- Description
*
- float8 (E4M3)
- ``__hip_fp8_e4m3_fnuz``
- An 8-bit floating-point number that mostly follows IEEE-754 conventions
and **S1E4M3** bit layout, as described in `8-bit Numerical Formats for Deep Neural Networks <https://arxiv.org/abs/2206.02915>`_,
with expanded range and no infinity or signed zero. NaN is represented
as negative zero.
*
- float8 (E5M2)
- ``__hip_fp8_e5m2_fnuz``
- An 8-bit floating-point number mostly following IEEE-754 conventions and
**S1E5M2** bit layout, as described in `8-bit Numerical Formats for Deep Neural Networks <https://arxiv.org/abs/2206.02915>`_,
with expanded range and no infinity or signed zero. NaN is represented
as negative zero.
*
- float16
- ``half``
@@ -90,7 +87,7 @@ table, along with their corresponding HIP type and a short description.
format.
*
- tensorfloat32
- Not available
- A floating-point number that occupies 32 bits or less of storage,
providing improved range compared to half (16-bit) format, at
(potentially) greater throughput than single-precision (32-bit) formats.
@@ -117,12 +114,15 @@ table, along with their corresponding HIP type and a short description.
* In some AMD documents and articles, float8 (E5M2) is referred to as bfloat8.
* The :doc:`low precision floating point types page <hip:reference/low_fp_types>`
describes how to use these types in HIP with examples.
Level of support definitions
============================
In the following sections, icons represent the level of support. These icons,
described in the following table, are also used in the library data type support
pages.
.. list-table::
:header-rows: 1
@@ -130,6 +130,11 @@ support pages.
*
- Icon
- Definition
*
- NA
- Not applicable
*
-
- Not supported
@@ -158,16 +163,15 @@ support pages.
* Any type can be emulated by software, but this page does not cover such
cases.
Data type support by hardware architecture
==========================================
The MI200 series GPUs, which include MI210, MI250, and MI250X, are based on the
CDNA2 architecture. The MI300 series GPUs, consisting of MI300A, MI300X, and
MI325X, are based on the CDNA3 architecture.
Compute units support
---------------------
The following table lists data type support for compute units.
@@ -248,7 +252,7 @@ The following table lists data type support for compute units.
-
Matrix core support
-------------------
The following table lists data type support for AMD GPU matrix cores.
@@ -329,7 +333,7 @@ The following table lists data type support for AMD GPU matrix cores.
-
Atomic operations support
-------------------------
The following table lists data type support for atomic operations.
@@ -416,14 +420,14 @@ The following table lists data type support for atomic operations.
performance impact when they frequently access the same memory address.
Data type support in ROCm libraries
===================================
ROCm library support for int8, float8 (E4M3), float8 (E5M2), int16, float16,
bfloat16, int32, tensorfloat32, float32, int64, and float64 is listed in the
following tables.
Libraries input/output type support
-----------------------------------
The following tables list ROCm library support for specific input and output
data types. Refer to the corresponding library data type support page for a
@@ -444,37 +448,37 @@ detailed description.
- int32
- int64
*
- :doc:`hipSPARSELt <hipsparselt:reference/data-type-support>`
- ✅/✅
- ❌/❌
- ❌/❌
- ❌/❌
*
- :doc:`rocRAND <rocrand:api-reference/data-type-support>`
- NA/✅
- NA/✅
- NA/✅
- NA/✅
*
- :doc:`hipRAND <hiprand:api-reference/data-type-support>`
- NA/✅
- NA/✅
- NA/✅
- NA/✅
*
- :doc:`rocPRIM <rocprim:reference/data-type-support>`
- ✅/✅
- ✅/✅
- ✅/✅
- ✅/✅
*
- :doc:`hipCUB <hipcub:api-reference/data-type-support>`
- ✅/✅
- ✅/✅
- ✅/✅
- ✅/✅
*
- :doc:`rocThrust <rocthrust:data-type-support>`
- ✅/✅
- ✅/✅
- ✅/✅
@@ -496,7 +500,7 @@ detailed description.
- float32
- float64
*
- :doc:`hipSPARSELt <hipsparselt:reference/data-type-support>`
- ❌/❌
- ❌/❌
- ✅/✅
@@ -505,25 +509,25 @@ detailed description.
- ❌/❌
- ❌/❌
*
- :doc:`rocRAND <rocrand:api-reference/data-type-support>`
- NA/❌
- NA/❌
- NA/✅
- NA/❌
- NA/❌
- NA/✅
- NA/✅
*
- :doc:`hipRAND <hiprand:api-reference/data-type-support>`
- NA/❌
- NA/❌
- NA/✅
- NA/❌
- NA/❌
- NA/✅
- NA/✅
*
- :doc:`rocPRIM <rocprim:reference/data-type-support>`
- ❌/❌
- ❌/❌
- ✅/✅
@@ -532,7 +536,7 @@ detailed description.
- ✅/✅
- ✅/✅
*
- :doc:`hipCUB <hipcub:api-reference/data-type-support>`
- ❌/❌
- ❌/❌
- ✅/✅
@@ -541,7 +545,7 @@ detailed description.
- ✅/✅
- ✅/✅
*
- :doc:`rocThrust <rocthrust:data-type-support>`
- ❌/❌
- ❌/❌
- ⚠️/⚠️
@@ -550,9 +554,14 @@ detailed description.
- ✅/✅
- ✅/✅
.. note::
As random number generation libraries, rocRAND and hipRAND only specify output
data types for the random values they generate, with no need for input data
types.
Libraries internal calculations type support
--------------------------------------------
The following tables list ROCm library support for specific internal data types.
Refer to the corresponding library data type support page for a detailed
@@ -573,7 +582,7 @@ description.
- int32
- int64
*
- :doc:`hipSPARSELt <hipsparselt:reference/data-type-support>`
-
-
-
@@ -596,7 +605,7 @@ description.
- float32
- float64
*
- :doc:`hipSPARSELt <hipsparselt:reference/data-type-support>`
-
-
-
View File
@@ -154,7 +154,7 @@ subtrees:
- entries:
- url: https://www.amd.com/system/files/TechDocs/instinct-mi200-cdna2-instruction-set-architecture.pdf
title: AMD Instinct MI200/CDNA2 ISA
- url: https://www.amd.com/content/dam/amd/en/documents/instinct-business-docs/white-papers/amd-cdna2-white-paper.pdf
title: White paper
- file: conceptual/gpu-arch/mi100.md
title: MI100 microarchitecture
@@ -162,7 +162,7 @@ subtrees:
- entries:
- url: https://www.amd.com/system/files/TechDocs/instinct-mi100-cdna1-shader-instruction-set-architecture%C2%A0.pdf
title: AMD Instinct MI100/CDNA1 ISA
- url: https://www.amd.com/content/dam/amd/en/documents/instinct-business-docs/white-papers/amd-cdna-white-paper.pdf
title: White paper
- file: conceptual/iommu.rst
title: Input-Output Memory Management Unit (IOMMU)
View File
@@ -1,3 +1,4 @@
rocm-docs-core==1.17.1
sphinx-reredirects
sphinx-sitemap
sphinxcontrib.datatemplates==0.11.0
View File
@@ -2,7 +2,7 @@
# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
# pip-compile docs/sphinx/requirements.in
#
accessible-pygments==0.0.5
# via pydata-sphinx-theme
@@ -43,6 +43,8 @@ debugpy==1.8.12
# via ipykernel
decorator==5.1.1
# via ipython
defusedxml==0.7.1
# via sphinxcontrib-datatemplates
deprecated==1.2.15
# via pygithub
docutils==0.21.2
@@ -175,6 +177,7 @@ pyyaml==6.0.2
# myst-parser
# rocm-docs-core
# sphinx-external-toc
# sphinxcontrib-datatemplates
pyzmq==26.2.0
# via
# ipykernel
@@ -187,7 +190,7 @@ requests==2.32.3
# via
# pygithub
# sphinx
rocm-docs-core==1.17.1
# via -r requirements.in
rpds-py==0.22.3
# via
@@ -215,6 +218,8 @@ sphinx==8.1.3
# sphinx-notfound-page
# sphinx-reredirects
# sphinx-sitemap
# sphinxcontrib-datatemplates
# sphinxcontrib-runcmd
sphinx-book-theme==1.1.3
# via rocm-docs-core
sphinx-copybutton==0.5.2
@@ -231,6 +236,8 @@ sphinx-sitemap==2.6.0
# via -r requirements.in
sphinxcontrib-applehelp==2.0.0
# via sphinx
sphinxcontrib-datatemplates==0.11.0
# via -r requirements.in
sphinxcontrib-devhelp==2.0.0
# via sphinx
sphinxcontrib-htmlhelp==2.1.0
@@ -239,6 +246,8 @@ sphinxcontrib-jsmath==1.0.1
# via sphinx
sphinxcontrib-qthelp==2.0.0
# via sphinx
sphinxcontrib-runcmd==0.2.0
# via sphinxcontrib-datatemplates
sphinxcontrib-serializinghtml==2.0.0
# via sphinx
sqlalchemy==2.0.37
View File
@@ -0,0 +1,102 @@
/* ------------------ Compatibility options grid ------------------ */
html {
--compat-border-radius: 2px;
--compat-accent-color: var(--pst-color-primary);
--compat-bg-color: var(--pst-color-on-background);
--compat-fg-color: var(--pst-color-primary-text);
--compat-head-color: var(--pst-color-surface);
--compat-param-hover-color: var(--pst-color-link-hover);
--compat-param-selected-color: var(--pst-color-primary);
}
html[data-theme="light"] {
--compat-border-color: var(--pst-gray-500);
--compat-param-disabled-color: var(--pst-gray-300);
}
html[data-theme="dark"] {
--compat-border-color: var(--pst-gray-600);
--compat-param-disabled-color: var(--pst-gray-600);
}
div#vllm-benchmark-ud-params-picker.container-fluid {
padding: 0 0 1rem 0;
}
div[data-param-k="model"] {
background-color: var(--compat-bg-color);
padding: 2px;
border: solid 1px var(--compat-border-color);
font-weight: 500;
cursor: pointer;
}
div[data-param-k="model"][data-param-state="selected"] {
background-color: var(--compat-param-selected-color);
color: var(--compat-fg-color);
}
div[data-param-k="model"][data-param-state="latest-version"] {
background-color: var(--compat-param-selected-color);
color: var(--compat-fg-color);
}
div[data-param-k="model"][data-param-state="disabled"] {
background-color: var(--compat-param-disabled-color);
text-decoration: line-through;
/* text-decoration-color: var(--pst-color-danger); */
cursor: auto;
}
div[data-param-k="model"]:not([data-param-state]):hover {
background-color: var(--compat-param-hover-color);
}
div[data-param-k="model-group"] {
background-color: var(--compat-bg-color);
padding: 2px;
border: solid 1px var(--compat-border-color);
font-weight: 500;
cursor: pointer;
}
div[data-param-k="model-group"][data-param-state="selected"] {
background-color: var(--compat-param-selected-color);
color: var(--compat-fg-color);
}
div[data-param-k="model-group"][data-param-state="latest-version"] {
background-color: var(--compat-param-selected-color);
color: var(--compat-fg-color);
}
div[data-param-k="model-group"][data-param-state="disabled"] {
background-color: var(--compat-param-disabled-color);
text-decoration: line-through;
/* text-decoration-color: var(--pst-color-danger); */
cursor: auto;
}
div[data-param-k="model-group"]:not([data-param-state]):hover {
background-color: var(--compat-param-hover-color);
}
.model-param-head {
background-color: var(--compat-head-color);
padding: 0.15rem 0.15rem 0.15rem 0.67rem;
/* margin: 2px; */
border-right: solid 2px var(--compat-accent-color);
font-weight: 600;
}
.model-param {
/* padding: 2px; */
/* margin: 0 2px 0 2px; */
/* margin: 2px; */
border: solid 1px var(--compat-border-color);
font-weight: 500;
}
.hidden {
display: none !important;
}