Compare commits

...

195 Commits

Author SHA1 Message Date
anisha-amd
773f5de407 Docs: Ray release 25.12 and compatibility version format standardization (#5845) 2026-01-08 12:09:11 -05:00
dependabot[bot]
b297ced032 Bump urllib3 from 2.5.0 to 2.6.3 in /docs/sphinx (#5842)
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.5.0 to 2.6.3.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/2.5.0...2.6.3)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-version: 2.6.3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-08 08:22:01 -05:00
peterjunpark
2dc22ca890 fix(primus-pytorch.rst): FP8 config instead of BF16 (#5839) 2026-01-07 13:49:31 -05:00
Joseph Macaranas
85102079ed [External CI] Add SIMDe dev package to HIP runtime pipeline (#5838) 2026-01-07 11:00:38 -05:00
dependabot[bot]
ba95e0e689 Bump pynacl from 1.6.1 to 1.6.2 in /docs/sphinx (#5836)
Bumps [pynacl](https://github.com/pyca/pynacl) from 1.6.1 to 1.6.2.
- [Changelog](https://github.com/pyca/pynacl/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/pynacl/compare/1.6.1...1.6.2)

---
updated-dependencies:
- dependency-name: pynacl
  dependency-version: 1.6.2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-06 14:10:42 -05:00
Pratik Basyal
1691d369e9 ROCM-core version fixed (#5827) 2026-01-02 16:06:27 -05:00
peterjunpark
172b0f7c08 Fix inconsistency in xDiT doc
Fix inconsistency in xDiT doc
2025-12-29 10:26:25 -05:00
peterjunpark
c67fac78bd Update docs for xDiT diffusion inference 25.13 Docker release (#5820)
* archive previous version

* add xdit 25.13

* update history index

* add perf results section
2025-12-29 08:44:45 -05:00
peterjunpark
e0b8ec4dfb Update training docs for Primus/25.11 (#5819)
* update conf and toc.yml.in

* archive previous versions

archive data files

update anchors

* primus pytorch: remove training batch size args

* update primus megatron run cmds

multi-node

* update primus pytorch

update

* update

update

* update docker tag
2025-12-29 08:05:47 -05:00
Pratik Basyal
38f2d043dc OS table removed from compatibility table [develop] (#5810)
* OS table removed from compatibility table

* Feedback added

* Azure Linux 3.0 and compatibility version update

* Version fix

* Review feedback added

* Minor change
2025-12-23 16:28:19 -05:00
peterjunpark
3a43bacdda Update xdit diffusion inference history (#5808)
* Update xdit diffusion inference history

* fix
2025-12-22 11:05:32 -05:00
peterjunpark
48d8fe139b fix link to ROCm PyT docker image (#5803) 2025-12-19 15:47:55 -05:00
peterjunpark
7455fe57b8 clean up formatting in FA2 page (#5795) 2025-12-19 09:21:41 -05:00
peterjunpark
52c0a47e84 Update Flash Attention guidance in "Model acceleration libraries" (#5793)
* flash attention update

Signed-off-by: seungrok.jung <seungrok.jung@amd.com>

flash attention update

Signed-off-by: seungrok.jung <seungrok.jung@amd.com>

flash attention update

Signed-off-by: seungrok.jung <seungrok.jung@amd.com>

sentence-case heading

* Update docs/how-to/rocm-for-ai/inference-optimization/model-acceleration-libraries.rst

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

---------

Co-authored-by: seungrok.jung <seungrok.jung@amd.com>
Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
2025-12-19 08:48:52 -05:00
peterjunpark
cbab9a465d Update documentation for JAX training MaxText 25.11 release (#5789) 2025-12-18 11:23:58 -05:00
peterjunpark
459283da3c xDiT diffusion inference v25.12 documentation update (#5786)
* Add xdit-diffusion ROCm docs page.

* Update template formatting and fix sphinx warnings

* Add System Validation section.

* Add sw component versions/commits.

* Update to use latest v25.10 image instead of v25.9

* Update commands and add FLUX instructions.

* Update Flux instructions. Change image tag. Describe as diffusion inference instead of specifically video.

* git rm xdit-video-diffusion.rst

* Docs for v25.12

* Add hyperlinks to components

* Command fixes

* -Diffusers suffix

* Simplify yaml file and cleanup main rst page.

* Spelling, added 'js'

* fix merge conflict

fix

---------

Co-authored-by: Kristoffer <kristoffer.torp@amd.com>
2025-12-17 10:20:10 -05:00
peterjunpark
1b4f25733d vLLM inference benchmark 1210 (#5776)
* Archive previous ver

fix anchors

* Update vllm.rst and data yaml for 20251210
2025-12-17 09:21:57 -05:00
Ibrahim Wani
b287372be5 [origami] Test update (#5768)
* Fix the skipping of origami tests

* Update dependencies for origami refactor

* test

* Unsupress test output.

* Ctest implementation

* Test ctest

* Test ctest 2

* Add pip install test

* Fix python version

* Add python dep

* test

* test 2

* Debug for readme

* Fix pip install

* Fix pip install 2

* Clean up

* Run tests on 950

* Replace 950 with 1201

* 1101

* Add more archs

* Add more archs 2

* Comment out archs

* Move pip install script to ./azuredevops/scripts

* Fix path

* Fix path 2

* Fix path 3

* Fix path 4

* Remove pip install testing:

* Use inline script

* Add old deps
2025-12-16 15:37:41 -07:00
Pratik Basyal
78e8baf147 Taichi removed from ROCm docs [Develop] (#5779)
* Taichi removed from ROCm docs

* Warnings fixed
2025-12-16 13:12:40 -05:00
Matt Williams
3e0c8b47e3 Merge pull request #5771 from ROCm/mattwill-amd-patch-4
Reverting Optiq note
2025-12-12 17:53:41 -05:00
Matt Williams
c3f0b99cc0 Reverting Optiq note 2025-12-12 17:47:33 -05:00
dependabot[bot]
c9d1679486 Bump rocm-docs-core from 1.31.0 to 1.31.1 in /docs/sphinx
Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.31.0 to 1.31.1.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.31.0...v1.31.1)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-version: 1.31.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-12 16:15:26 -05:00
Pratik Basyal
fdbef17d7b Onnx and rocshmem version updated (#5760) 2025-12-11 17:05:25 -05:00
Matt Williams
6592a41a7f Adding ROCm-Optiq note to What is ROCm page (#5709)
* Adding ROCm-Optiq note to What is ROCm page

Adding a note for a link to the Optiq docs

* Apply suggestion from @mattwill-amd

* Apply suggestion from @mattwill-amd

* Apply suggestion from @mattwill-amd

* Update what-is-rocm.rst

* Update what-is-rocm.rst

* Apply suggestion from @mattwill-amd

* Apply suggestion from @mattwill-amd

* Apply suggestion from @mattwill-amd

* Apply suggestion from @mattwill-amd
2025-12-10 12:56:33 -08:00
Matt Williams
65a936023b Fixing link redirects (#5758)
* Update multi-gpu-fine-tuning-and-inference.rst

* Update pytorch-training-v25.6.rst

* Update pytorch-compatibility.rst
2025-12-10 11:17:59 -05:00
anisha-amd
2a64949081 Docs: update verl compatibility - fix (#5756) 2025-12-09 19:51:37 -05:00
anisha-amd
0a17434517 Docs: update verl compatibility - fix (#5754) 2025-12-09 18:36:16 -05:00
anisha-amd
2be7e5ac1e Docs: verl framework - compatibility - 25.11 release (#5752) 2025-12-09 11:41:43 -05:00
dependabot[bot]
ae80c4a31c Bump rocm-docs-core from 1.30.1 to 1.31.0 in /docs/sphinx (#5751)
Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.30.1 to 1.31.0.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/v1.31.0/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.30.1...v1.31.0)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-version: 1.31.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-09 08:25:16 -05:00
Adel Johar
dd89a692e1 [Ex CI] Add rocAL dependencies 2025-12-09 10:56:23 +01:00
peterjunpark
bf74351e5a Fix Primus PyTorch doc: training.batch_size -> training.local_batch_size (#5748) 2025-12-08 13:35:22 -05:00
yugang-amd
f2067767e0 xdit-diffusion v25.11 docs (#5744) 2025-12-05 17:09:48 -05:00
Pratik Basyal
effd4174fb PyTorch 2.7 support added (#5740) 2025-12-04 15:49:23 -05:00
peterjunpark
453751a86f fix docker hub links for primus:v25.10 (#5738) 2025-12-04 09:17:33 -05:00
peterjunpark
fb644412d5 Update training Docker docs for Primus 25.10 (#5737) 2025-12-04 09:08:00 -05:00
Pratik Basyal
e8fdc34b71 711 hipBLASLT performance decline known issue added (#5730)
* hipBLASLT performance decline known issue added

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* GitHub Issue added

* Ram's feedback incorporated

* GitHub Issue added

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

---------

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
2025-12-03 08:50:25 -05:00
Pratik Basyal
b4031ef23c 7.1.1 known issues post GA (#5721)
* rocblas known issues added

* Minor change

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Resolved

* Update RELEASE.md

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

---------

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
2025-11-28 16:34:47 -05:00
dependabot[bot]
d0bd4e6f03 Bump rocm-docs-core from 1.29.0 to 1.30.1 in /docs/sphinx (#5712)
Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.29.0 to 1.30.1.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.29.0...v1.30.1)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-version: 1.30.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-28 08:18:23 -05:00
Jan Stephan
0056b9453e Remove continuous numbering of tables and figures
Signed-off-by: Jan Stephan <jan.stephan@amd.com>
2025-11-28 10:29:01 +01:00
Pratik Basyal
3d1ad79766 Merged cell removed for coloring issue (#5713) 2025-11-27 19:52:36 -05:00
Pratik Basyal
8683bed11b Known issue from 7.1.0 removed (#5702) 2025-11-26 12:27:22 -05:00
Pratik Basyal
847cd7c423 Link and PyTorch version updated (#5700) 2025-11-26 11:52:47 -05:00
Alex Xu
42cad29c04 re-compile requirements.txt 2025-11-26 11:35:00 -05:00
alexxu-amd
f7b2fe0a48 Merge pull request #5699 from ROCm/sync-develop-from-internal
Sync develop from internal for 7.1.1
2025-11-26 11:27:48 -05:00
alexxu-amd
bb199aa2b9 Merge pull request #639 from ROCm/sync-develop-from-external
Sync develop from external
2025-11-26 11:10:19 -05:00
alexxu-amd
2f7b2a7fa1 Merge branch 'develop' into sync-develop-from-external 2025-11-26 10:54:34 -05:00
Pratik Basyal
7fd75919d1 711 GPU and environment variable link updated (#640)
* ROCm environment variable link updated

* Programming pattern link updated
2025-11-26 10:41:16 -05:00
Alex Xu
4490c57c6a resolve merge conflict 2025-11-26 10:33:02 -05:00
Alex Xu
007f24fe7b Merge remote-tracking branch 'external/develop' into sync-develop-from-external 2025-11-26 10:09:04 -05:00
Pratik Basyal
afbb6e0f61 PLDM table synced (#638) 2025-11-26 10:08:12 -05:00
Pratik Basyal
1b5a3e54c2 711 compatibility note update and review feedback added (#636)
* Leo's review feedback added

* rocshmem version bumped from 3.0.0 to 3.1.0

* Footnote cleaned

* Footnote updated

* Ram's feedback

* Link updated

* Footnote updated

* Link fixed
2025-11-26 09:46:57 -05:00
alexxu-amd
2c6eb9cf2a Update versions.md (#637)
* Update versions.md

* remove empty line
2025-11-26 09:03:54 -05:00
Pratik Basyal
b93fdb811c 7.1.1 pre-GA public link reset (#627)
* 7.1.1 pre-GA public link reset

* Update CHANGELOG.md
2025-11-26 08:38:13 -05:00
srayasam-amd
096d91e190 Updating rocm version to 7.1.1 GA (#5697)
* 7.1.1 GA update

* 7.1.1 GA update

* Update rocm-7.1.1.xml

* Update default.xml
2025-11-26 16:08:03 +05:30
Pratik Basyal
02037f4384 7.1.1 fixed issues added (#634)
* Fixed issues added

* Blank line added
2025-11-24 15:56:44 -05:00
peterjunpark
c64dc46a50 [7.1.1] docs(RELEASE.md): Add notes under "Driver and firmware related changes" (#632)
* Add notes under "Driver and firmware related changes"

update

* Update RELEASE.md

---------

Co-authored-by: Pratik Basyal <prbasyal@amd.com>
2025-11-24 13:18:53 -05:00
Pratik Basyal
702d8e4c8e New link updated for MIgraphx (#5691) 2025-11-24 11:52:38 -05:00
Istvan Kiss
19344d7b61 Fix rocr-runtime environment variables content link (#631) 2025-11-21 18:59:57 +01:00
amd-hsivasun
807ec6afcf [Ex CI] Update AMDMIGraphX CMake version (#5683) 2025-11-20 18:05:24 -05:00
amd-hsivasun
4c04da05c3 [Ex CI] Update pipeline ID for amdmis to monorepo (#5685) 2025-11-20 18:05:17 -05:00
dependabot[bot]
411334716c Bump rocm-docs-core from 1.28.0 to 1.29.0 in /docs/sphinx (#5659)
Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.28.0 to 1.29.0.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.28.0...v1.29.0)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-version: 1.29.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-20 13:54:33 -05:00
amd-hsivasun
99f0875e70 [Ex CI] amdsmi monorepo enablement (#5677)
* [Ex CI] amdsmi monorepo enablement

* Fix amdsmi yaml
2025-11-20 13:52:01 -05:00
peterjunpark
50658d0812 Update release highlights for 7.1.1 (#629) 2025-11-20 13:51:13 -05:00
Pratik Basyal
7aeecdf8e2 Document 7.1.1 Known issues (#628)
Co-authored-by: Peter Park <peter.park@amd.com>
2025-11-20 13:12:52 -05:00
Istvan Kiss
4f669eb2c6 Add JAX Plugin-PJRT support table (#619) 2025-11-20 10:55:51 -06:00
Jithun Nair
7d1f314303 Update PyTorch compatibility documentation with PyTorch2.9 for ROCm7.1.1 2025-11-19 19:15:04 -06:00
Jithun Nair
c523f51e58 Merge branch 'develop' into update-pytorch-compatibility 2025-11-19 19:11:22 -06:00
Melantha-S
b566858909 Update pytorch-compatibility.rst 2025-11-19 15:54:04 -07:00
Melantha-S
c33b9e3611 Update pytorch-compatibility.rst 2025-11-19 15:16:30 -07:00
Shao
2646b4841d Update pytorch compatibility documentation 2025-11-19 15:05:11 -07:00
Shao
ff2f40d800 Add logsumexp to spellcheck dictionary 2025-11-19 15:03:12 -07:00
Shao
71bcc5b204 Add PyTorch 2.9 release notes for ROCm 2025-11-19 14:59:27 -07:00
Pratik Basyal
fd840df30b JAX and PyTorch support and ROCProfiler upcoming changes updated 7.1.1 (#626)
* ROCProfiler upcoming changes updated

* ROCm examples moved

* JAX version updated

* Formatting updated"

* Update RELEASE.md

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

* Environment variable updated added

* Minor changelog fixes

* JAX reverted

* grid alignment

* Revert "grid alignment"

This reverts commit 47939743ab3175cad47f45fd2cd263476eaf14e1.

---------

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
2025-11-19 15:29:02 -05:00
Shao
58e26eede1 Add Cholesky and mx to spellcheck dictionary 2025-11-19 10:27:51 -07:00
Shao
407a9d4cb0 Update PyTorch compatibility documentation 2025-11-19 09:52:47 -07:00
Istvan Kiss
81b7745f8e Docs: Add Environment Variable Page (#395)
Co-authored-by: Adel Johar <adel.johar@amd.com>
2025-11-19 17:40:26 +01:00
Pratik Basyal
6af62fd30a 7.1.1 Compatibility table fixed (#624)
* broken table fixed

* Line break added

* Line break added
2025-11-19 11:22:47 -05:00
Pratik Basyal
bb692dfd84 711 Release Notes update [Batch1] (#623)
* Fixed issue updated

* Release notes updated

* Formatting correction

* RCCL performance decline issue added

* Known issue updated

* Minor update

* Known issues updated

* Review feedback added

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

---------

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
2025-11-19 08:04:37 -05:00
Adel Johar
8d51d0e803 [Ex CI] Add CXX override for MIGraphX 2025-11-19 10:45:10 +01:00
Adel Johar
66b8b96c72 [Ex CI] Add missing dependencies for rccl and mivisionx 2025-11-19 10:45:10 +01:00
Pratik Basyal
fb098b6354 Initial changes for 7.1.1 release notes (#622)
* Changelog and tables updates for 7.1.1 release notes

* Changelog synced

* Naming updated

* Added upcoming changes for composable kernel

* Update RELEASE.md

Co-authored-by: Pratik Basyal <prbasyal@amd.com>

* Update RELEASE.md

* Highlights updated for DGL, ROCm-DS, and HIP documentation

* Changelog synced"

* Offline, runfile and ROCm Bandwidth test updated

* CK/AITER highlight added

* Changelog synced

* AI model highlight updated

* PLDM version added

* Changelog updated

* Leo's feedback incorporated

* Compatibility and PLDM versions updated

* New docs update added

* ROCm resolved issue added

* Review feedback added

* Link added

* PLDM updated

* PLDM table updated

* Changes

---------

Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
2025-11-17 12:09:59 -05:00
cfallows-amd
72107dd6d5 [Ex CI] Adding dependencies to rocprofiler-compute azure workflow (#5667) 2025-11-14 12:24:56 -05:00
amd-hsivasun
99c1590057 [Ex CI] Added ROCM_PATH env var to rocprofiler-compute (#5666) 2025-11-14 12:19:06 -05:00
Jeffrey Novotny
3d86323f88 Update licenses document to reflect monorepo (#620) 2025-11-14 09:16:48 -05:00
Carrie Fallows
636d4cc736 Adding dependencies to rocmDependencies in rocprof-compute yaml. Now needed for building because of rocprofiler-sdk dependency.
Signed-off-by: Carrie Fallows <Carrie.Fallows@amd.com>
2025-11-13 20:56:45 -05:00
amd-hsivasun
d1ce815d8d [Ex CI] Add rocprofiler-sdk dep to build for rocprofiler-compute (#5664) 2025-11-13 16:08:02 -05:00
Pratik Basyal
80ced95526 Changelog updated (#5660) 2025-11-13 10:18:15 -05:00
Pratik Basyal
09c6a9fdef 710 RCCL Known Issues and CRIU note update (#5647)
* RCCL ALltoALL known issue added

* CRIU note added

* Minor change

* Review feedback and AMDSMI detailed changelog link added

* Github issue link added
2025-11-11 16:54:36 -05:00
Alex Xu
372ddd5af3 revert test changes 2025-11-11 09:33:28 -05:00
peterjunpark
eb956cfc5c Fixed wording related to VLLM_V1_USE_PREFILL_DECODE_ATTENTION (#5605)
Co-authored-by: Hongxia Yang <hongxia.yang@amd.com>
2025-11-11 09:22:11 -05:00
peterjunpark
e05cdca54f Fix references to vLLM docs (#5651) 2025-11-11 09:00:07 -05:00
anisha-amd
04c7374f41 Docs: frameworks 25.10 - compatibility - DGL and llama.cpp (#5648) 2025-11-10 15:26:54 -05:00
Alex Xu
39de859bd1 update rocm-docs-core to 1.29.0 2025-11-10 14:10:06 -05:00
amd-hsivasun
c8531ac7ea [Ex CI] Update pipeline Id for hipTensor to monorepo (#5638) 2025-11-10 13:32:10 -05:00
Pratik Basyal
420bbfa126 7.1.0 MI325X PLDM note updated (#5644)
* PLDM note updated

* Footnote update

* Note added to compatibility

* Lint error fixed
2025-11-08 09:08:21 -05:00
Pratik Basyal
4881887e2c rocBLAS precision known issue added [Develop] (#5641)
* rocBLAS precision known issue added

* IPC note removed

* Review feedback added
2025-11-07 19:45:33 -05:00
Pratik Basyal
148d6670ad rocBLAS and HipBLASLt known issue added 7.1.0 (#5634)
* rocBLAS and HipBLASLt known issue added

* Title warning fixed

* Jeff's feedback added

* Leo's feedback incorporated

* Minor feedback

* MI325X PLDM update

* Leo's feedback added

* PyTorch profiling issue added

* Changelog synced

* JAX section removed

* Ram's feedback added
2025-11-07 17:48:36 -05:00
amd-hsivasun
9770e9b6ef [Ex CI] hiptensor Enablement (#5636) 2025-11-07 16:08:46 -05:00
Joseph Macaranas
ee4cf66d67 [External CI] Add simde-devel in dnf mapping (#5635) 2025-11-07 00:59:35 -05:00
Alex Xu
908862242a test preview banner 2025-11-06 12:24:52 -05:00
amd-hsivasun
6ba30f191c [Ex CI] rocWMMA increase timeout for test job (#5620) 2025-11-06 11:38:07 -05:00
yugang-amd
674dc355e4 vLLM 10/24 release (#5626)
* vLLM 10/24 release

* updates per SME inputs

* Update docs/how-to/rocm-for-ai/inference/benchmark-docker/vllm.rst

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

---------

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
2025-11-05 11:13:50 -05:00
Adel Johar
c7f3a56811 [Ex CI] Add half, rccl, and dependencies for rpp, mivisionx and rocjpeg 2025-11-05 15:59:15 +01:00
Pratik Basyal
0107fa731e ROCm Bandwidth test issue added (#5612) 2025-10-31 18:19:40 -04:00
Pratik Basyal
a87ec360e1 710 known issues update[Batch1] (#5604)
* Version update

* ROCm Bandwidth failure added

* Editorial feedback added

* Minor change

* rocprofv3 issue added

* Minor change

* ROCgdb issue added

* SME feedback incorporated

* Leo's feedback added

* ROCm Compute Profiler known issue added

* Changelog synced
2025-10-31 14:57:13 -04:00
amd-hsivasun
7215e1e8c7 [Ex CI] Update rocwmma pipeline ID to monorepo (#5602) 2025-10-31 13:56:17 -04:00
amd-hsivasun
e4a59d8c66 [Ex CI] Enable rocWMMA Monorepo (#5597)
* [Ex CI] Enable rocWMMA Monorepo

* Updated to use component name parameter
2025-10-30 13:43:05 -04:00
Pratik Basyal
8108fe7275 7.1.0 Post GA updates (#5600)
* Post GA updates

* Mono repo link added

* AMD SMI changelog link removed
2025-10-30 13:27:25 -04:00
alexxu-amd
d3ff9d7c8e Merge pull request #5599 from ROCm/sync-develop-from-internal
Sync develop from internal for 7.1.0
2025-10-30 11:37:20 -04:00
Alex Xu
939ee7de0c Merge remote-tracking branch 'internal/develop' into sync-develop-from-internal 2025-10-30 11:15:00 -04:00
Pratik Basyal
f1e6c285dd 7.1.0 PRE GA Link reset (#616)
* Link reset

* Changelog synced and feedback incorporated

* Jeff's feedback added
2025-10-30 11:01:13 -04:00
alexxu-amd
ff1d9b4d69 Update versions.md for ROCm 7.1.0 GA (#615)
* Update versions.md

* fix linting
2025-10-30 10:00:32 -04:00
srayasam-amd
ef3fa601d5 7.1.0 GA update (#5598)
* PR for GA 7.1.0

* Create rocm-7.1.0.xml

* Update default.xml

* Update rocm-7.1.0.xml
2025-10-30 19:10:03 +05:30
Pratik Basyal
576191a104 710 release highlights update pre GA (#614)
* hipBLASLt highlights updated

* Flash attention highlight added

* PLDM highlight updated

* Spell fixes
2025-10-30 09:03:32 -04:00
Pratik Basyal
2db07b5cda Changelog updated for HIP (#613) 2025-10-29 18:27:05 -04:00
alexxu-amd
fe3dc988b8 Merge pull request #612 from ROCm/sync-develop-from-external
Sync develop from external for 7.1.0 GA
2025-10-29 17:13:01 -04:00
Alex Xu
36c879b7e0 resolve merge conflict 2025-10-29 17:08:07 -04:00
alexxu-amd
91450dca10 Merge branch 'develop' into sync-develop-from-external 2025-10-29 16:49:33 -04:00
Alex Xu
2de92767e6 Merge remote-tracking branch 'external/develop' into sync-develop-from-external 2025-10-29 16:48:29 -04:00
Pratik Basyal
54d226acd9 710 highlight updates [batch 2] (#611)
* Changelog updated for ROCdbg api

* Systems profiler update

* Minor change
2025-10-29 16:42:57 -04:00
Pratik Basyal
f46d7ec00f 7.1.0 Release notes updated (#610)
* Release notes updated

* Changelog updated"

Changelog udpated
"

* Github link updated for Mono repo
2025-10-29 14:59:33 -04:00
Pratik Basyal
09c946b6fb 710 fixed issue update (#608)
* Resolved issues added

* Changelog synced

* Changelog synced
2025-10-29 12:28:09 -04:00
Pratik Basyal
5285669d98 7.1.0 release notes, changelog, and known issues update (#606)
* RCCL and hipblaslt changelog updated

* ROCProfiler-SDK highlight added

* Review feedback from Leo and Swati added

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
Co-authored-by: Swati Rawat <120587655+SwRaw@users.noreply.github.com>

* ROCprofiler-SDK added

* Minor edits

---------

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
Co-authored-by: Swati Rawat <120587655+SwRaw@users.noreply.github.com>
2025-10-29 10:22:52 -04:00
Jan Stephan
9b3138cffa [Ex CI] Add aomp, aomp-extras, composable_kernel and rocALUTION
Remove libomp-dev

Signed-off-by: Jan Stephan <jan.stephan@amd.com>
2025-10-29 11:22:27 +01:00
Pratik Basyal
61fffe3250 7.0.2 Broken link, version and known issue update (#5591)
* Version and known issue update

* Historical compatibility updated
2025-10-28 15:16:15 -04:00
dependabot[bot]
43ccfbbe80 Bump rocm-docs-core from 1.26.0 to 1.27.0 in /docs/sphinx (#5570)
Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.26.0 to 1.27.0.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.26.0...v1.27.0)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-version: 1.27.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-28 11:06:22 -04:00
peterjunpark
1515fb3779 Revert "Add xdit diffusion docs (#5576)" (#5580)
This reverts commit 4132a2609c.
2025-10-27 16:22:28 -04:00
randyh62
410a69efe4 Update RELEASE.md (#598)
Edit doorbell ring improvements
2025-10-27 13:14:45 -07:00
Joseph Macaranas
248cbf8bc1 [External CI] rccl triggers rocprofiler-sdk downstream (#5420)
- Update rccl component pipeline to include new additions made to projects already in super repos.
- Also update rccl to trigger rocprofiler-sdk job upon completion.
- rocprofiler-sdk pipeline updated to include os parameter to enable future almalinux 8 job.
2025-10-27 12:14:30 -04:00
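
A minimal sketch of the os parameter mentioned in the commit above, following the jobMatrix convention visible in the component templates later in this compare. The actual contents of the rocprofiler-sdk pipeline are not part of this diff, so the values here are assumptions:

parameters:
- name: jobMatrix
  type: object
  default:
    buildJobs:
      - { os: ubuntu2204, packageManager: apt }
      # future job the new os parameter is meant to enable (assumed)
      - { os: almalinux8, packageManager: dnf }
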
Istvan Kiss
0171dced89 Link fix and remove CentOS Stream mention from PyTorch release notes. (#593)
CentOS Stream not officially supported OS
2025-10-27 16:47:52 +01:00
Istvan Kiss
f2d6675839 Add back extra line to fix spellchecker (#604) 2025-10-27 16:39:29 +01:00
Pratik Basyal
7d0fad9aa8 Changelog duplication fixed (#601) 2025-10-27 10:38:44 -04:00
Kristoffer
4132a2609c Add xdit diffusion docs (#5576)
* Add xdit video diffusion base page.

* Update supported accelerators.

* Remove dependency on mad-tags.

* Update docker pull section.

* Update container launch instructions.

* Improve launch instruction options and layout.

* Add benchmark result outputs.

* Fix wrong HunyuanVideo path

* Finalize instructions.

* Consistent title.

* Make page and side-bar titles the same.

* Updated wordlist. Removed note container reg HF.

* Remove fp8_gemms in command and add release notes.

* Update accelerators naming.

* Add note regarding OOB performance.

* Fix admonition box.

* Overall fixes.
2025-10-27 14:56:55 +01:00
Pratik Basyal
c56d5b7495 7.1.0 release notes and compatibility footnote update (#599)
* RDC changelog and highlight addition

* Compatibility updated

* Minor change

* Consolidated changelog synced
2025-10-25 08:47:17 -05:00
Pratik Basyal
a2e2bd3277 710 Compatibility table fixed (#597)
* Compatibility table fixed

* Ryzen link updated

* rocJPEG added

* Driver updated

* Minor change

* PLDM update
2025-10-24 15:54:11 -04:00
randyh62
32d1cdcd90 Update RELEASE.md (#596)
Fix HIP 7.1 issues
2025-10-24 12:11:59 -07:00
Pratik Basyal
ac16524ebd 7.1.0 Compatibility updated (#595)
* Compatibility updated

* rocAL and MIgraphx changelog added

* Minor update

* Heading changes
2025-10-24 13:43:36 -04:00
Pratik Basyal
157d86b780 7.1.0 Release Notes Update (#591)
* Initial changelog added

* Changelog updated

* 7.1.0 draft changes

* Highlight changes

* Add release highlights

* formatting

* Order updated

* Highlights added

* Highlight update

* Changelog updated

* RCCL change

* RCCL changelog entry added

* Changelog updates added

* heading level fixed

* Updates added

* Leo's and Jeff's review feedback incorporated

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Release notes feedback

* Updated highlights

* Minor changes

* TOC for internal updated

---------

Co-authored-by: Peter Park <peter.park@amd.com>
Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
2025-10-24 12:41:23 -04:00
peterjunpark
35ca027aa4 Fix broken links under rocm-for-ai/ (#5564) 2025-10-23 14:39:58 -04:00
peterjunpark
90c1d9068f add xref to vllm v1 optimization guide in workload.rst (#5560) 2025-10-22 13:47:46 -04:00
peterjunpark
cb8d21a0df Updates to the vLLM optimization guide for MI300X/MI355X (#5554)
* Expand vLLM optimization guide for MI300X/MI355X with comprehensive AITER coverage. attention backend selection, environment variables (HIP/RCCL/Quick Reduce), parallelism strategies, quantization (FP8/FP4), engine tuning, CUDA graph modes, and multi-node scaling.

Co-authored-by: PinSiang <pinsiang.tan@embeddedllm.com>
Co-authored-by: Hongxia Yang <62075498+hongxiayang@users.noreply.github.com>
Co-authored-by: pinsiangamd <pinsiang.tan@amd.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
2025-10-22 12:54:25 -04:00
Kiriti Gowda
6f8cf36279 Merge pull request #5530 from kiritigowda/kg/ctest-verbose
CTest - Output verbose
2025-10-21 13:16:12 -07:00
anisha-amd
8eb5fef37c Docs: frameworks compatibility standardization (#5488) 2025-10-21 16:12:18 -04:00
Pratik Basyal
a5f0b30a47 PLDM version update for MI350 series [Develop] (#5547)
* PLDM version update for MI350 series

* Minor update
2025-10-20 14:39:17 -04:00
Adel Johar
2ec051dec5 Merge pull request #5531 from adeljo-amd/ci_examples
[Ex CI] Add libomp-dev, MIVisionX, rocDecode and dependencies
2025-10-20 09:55:02 +02:00
Pratik Basyal
fd6bbe18a7 PLDM update for MI250 and MI210 [Develop] (#5537)
* PLDM update for MI250 and MI210

* PLDM update
2025-10-17 17:13:42 -04:00
Istvan Kiss
14ada81c41 Pytorch release notes with rocm 7.1 (#588)
* Add PyTorch release notes update

* Remove torchtext

Torchtext development has stopped and it is only supported with PyTorch 2.2

* Update
2025-10-17 22:03:14 +02:00
peterjunpark
a613bd6824 JAX Maxtext v25.9 doc update (#5532)
* archive previous version (25.7)

* update docker components list for 25.9

* update template

* update docker pull tag

* update

* fix intro
2025-10-17 11:31:06 -04:00
Adel Johar
b3459da524 [Ex CI] Add libomp-dev, MIVisionX, rocDecode 2025-10-17 14:02:54 +02:00
kiritigowda
eba211d7f1 CTest - Output verbose 2025-10-16 15:22:27 -07:00
peterjunpark
14bb59fca9 Update Megatron/PyTorch Primus 25.9 docs (#5528)
* add previous versions

* Fix heading levels in pages using embedded templates (#5468)

* update primus-megatron doc

update megatron-lm doc

update templates

fix tab

update primus-megatron model configs

Update primus-pytorch model configs

fix css class

add posttrain to pytorch-training template

update data sheets

update

update

update

update docker tags

* Add known issue and update Primus/Turbo versions

* add primus ver to histories

* update primus ver to 0.1.1

* fix leftovers from merge conflict
2025-10-16 12:51:30 -04:00
anisha-amd
a98236a4e3 Main Docs: references of accelerator removal and change to GPU (#5495)
* Docs: references of accelerator removal and change to GPU

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
Co-authored-by: Pratik Basyal <pratik.basyal@amd.com>
2025-10-16 11:22:10 -04:00
David Dixon
5cb6bfe151 Add yaml-cpp to dependencies 2025-10-16 07:26:06 -06:00
David Dixon
6e7422ded7 Update cli11.yml for Azure Pipelines (#5523) 2025-10-15 10:47:29 -06:00
Istvan Kiss
7b7ff53985 Update Radeon link (#5453) 2025-10-15 17:25:05 +02:00
David Dixon
019796dc63 [external] Create cli11.yml (#5522) 2025-10-15 09:19:56 -06:00
Pratik Basyal
f21cfe1171 GitHub issue added to 702 known issues (#5520)
* GitHub issue added to 702 known issues

* Added missing RCCL changelog
2025-10-15 09:58:23 -04:00
Jan Stephan
170cb47a4f Merge pull request #5512 from j-stephan/rocm-examples-deps
[Ex CI] Add libtiff-dev, libopencv-dev and rpp
2025-10-15 10:02:46 +02:00
Braden Stefanuk
d19a8e4a83 [superbuild] Add dependencies for hipblaslt and origami (#5487)
* ci: add deps for origami in superbuild

* ci: add rocm path to system path

* build: add pip msgpack dep
2025-10-14 16:05:24 -06:00
amd-hsivasun
3a0b8529ed [Ex CI] Added MIOpen to the test dependencies for rocm-examples (#5517) 2025-10-14 14:56:36 -04:00
Joseph Macaranas
f9d7fc2e6a [External CI] Add libsimde-dev to ROCR pipeline (#5515) 2025-10-14 14:24:45 -04:00
Nilesh M Negi
d424687191 [Ex CI] Increase RCCL build time limit to 120mins (#5516) 2025-10-14 12:59:40 -05:00
Jan Stephan
35e6e50888 [Ex CI] Add libopencv-dev
Signed-off-by: Jan Stephan <jan.stephan@amd.com>
2025-10-13 20:00:25 +02:00
Jan Stephan
91cfe98eb3 [Ex CI] Add libtiff-dev and rpp
Signed-off-by: Jan Stephan <jan.stephan@amd.com>
2025-10-13 17:42:59 +02:00
Pratik Basyal
036aaa2e78 ROCm for HPC topic updated Develop (#5504)
* ROCm for HPC topic updated

* ROCm for HPC topic updated

* Minor editorial
2025-10-10 22:31:51 -04:00
Pratik Basyal
78258e0f85 702 compatibility Footnote updated (#5502)
* Footnote updated

* Minor update

* Minor update

* Break added

* Line break added

* Line break

* Footnote updated

* Minor correction
2025-10-10 21:23:07 -04:00
amd-hsong
c79d9f74ef Merge pull request #5490 Re-enable device_merge_inplace unit test for rocPRIM 2025-10-10 15:03:23 -06:00
amd-hsivasun
fb1b78c6f0 [Ex CI] Added Component and Module Dependencies (#5489)
* [Ex CI] Added Component and Module Dependencies

* Add registerROCmPackages flag
2025-10-10 16:01:11 -04:00
peterjunpark
3a70d75f5e Fix documented AMD SMI version (ROCm 7.0.2) (#5496) 2025-10-10 15:09:20 -04:00
alexxu-amd
61e1f088a1 Merge pull request #5492 from ROCm/sync-dev-from-internal
Sync dev from internal for 7.0.2 GA
2025-10-10 11:17:32 -04:00
Pratik Basyal
1f6e5c5e04 Update compatibility-matrix.rst 2025-10-10 11:10:48 -04:00
Pratik Basyal
e8a0769842 Update RELEASE.md 2025-10-10 11:07:51 -04:00
Alex Xu
6f9579d052 Merge remote-tracking branch 'internal/develop' into sync-dev-from-internal 2025-10-10 11:02:33 -04:00
Pratik Basyal
245d53a021 Merge pull request #579 from prbasyal-amd/post-rc3-702-update
GPU resiliency highlight updated 702
2025-10-10 11:00:59 -04:00
Alex Xu
35dbbb22bc fix linting 2025-10-10 10:29:13 -04:00
alexxu-amd
03dc8cee00 Merge pull request #584 from ROCm/sync-dev-from-external
Sync dev from external
2025-10-10 10:14:56 -04:00
Alex Xu
323e5fd27a Merge remote-tracking branch 'external/develop' into sync-dev-from-external 2025-10-10 10:13:08 -04:00
alexxu-amd
b11fd7b492 Update versions.md (#583) 2025-10-10 09:31:24 -04:00
srayasam-amd
5e2efa05a6 7.0.2 GA update (#5491)
* 7.0.2 GA update

* Create rocm-7.0.2.xml
2025-10-10 18:47:48 +05:30
Hao Song
29a90f0271 [rocPRIM] Re-enable device_merge_inplace unit test for rocPRIM 2025-10-09 21:48:11 +00:00
randyh62
c06242bb89 Update RELEASE.md (#581)
* Update RELEASE.md

Remove support for rocBlas and hipBlasLt

* Update CHANGELOG.md

Removed from the Changelog as well.
2025-10-09 13:15:08 -07:00
peterjunpark
68e8453ca5 Update vLLM doc for 10/6 release and bump rocm-docs-core to 1.26.0 (#5481)
* archive previous doc version

* update model/docker data and doc templates

* Update "Reproducing the Docker image"

* fix: truncated commit hash doesn't work for some reason

* bump rocm-docs-core to 1.26.0

* fix numbering

fix

* update docker tag

* update .wordlist.txt
2025-10-08 16:23:40 -04:00
Pratik Basyal
503b8bcc86 Framework and changelog updated (#5483)
* Framework and changelog updated

* Wordlist updated
2025-10-08 15:05:11 -04:00
amd-hsivasun
e3d97d339a [Ex CI] Added rocJPEG and rocprofiler-sdk 2025-10-08 14:47:44 -04:00
alexxu-amd
978c58d196 Merge pull request #577 from ROCm/sync-develop-from-external
Sync develop from external
2025-10-08 14:25:03 -04:00
alexxu-amd
a366048b64 Merge branch 'develop' into sync-develop-from-external 2025-10-08 14:12:14 -04:00
Pratik Basyal
4c3e33c291 Compatibility matrix and changelog synced for ROCm 7.0.2 (#576)
* Compatibility matrix and changelog synced

* Indentation updated

* OS updated
2025-10-08 14:11:15 -04:00
Alex Xu
89758e67d8 Merge remote-tracking branch 'external/develop' into sync-develop-from-external 2025-10-08 14:03:34 -04:00
Pratik Basyal
5d0f201b4d 7.0.2 review update (#575)
* 7.0.2 review update

* Tensorflow footnote updated

* Wordlist added
2025-10-08 12:35:14 -04:00
Pratik Basyal
e3677d89a6 PLDM bundle info updated for 7.0.2 (#574)
* PLDM bundle info updated

* Driver dependency added to GPU resiliency

* Known issue for MIGraphX added

* Footnote added

* Known issue for OpenCV updated

* Leo's feedback incorporated

* Radeon 9060 updated

* Known issues updated
2025-10-08 11:00:42 -04:00
amd-hsivasun
f20edab8fc [Ex CI] Update CMake Flags for hipTensor 2025-10-07 15:21:39 -04:00
Pratik Basyal
6f84d50011 ROCm 7.0.2 Post RC3 update (#573)
* Space minimized

* OS support updated

* Minor change
2025-10-06 14:08:01 -04:00
Pratik Basyal
57dd082f28 Post RC2 7.0.2 review feedback updated (#571)
* Known issue updated

* Space optimized

* Changelog updated

* Apply suggestions from code review

Leo's review feedback incorporated

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

* Highlight changes

* Highlight and OS support updated

* GPU resiliency highlight updated

* Highlights updated

* ROCm-EP deprecation added

* Apply suggestions from code review

leo's feedback incorporated

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

* PLDM update

---------

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
2025-10-06 12:04:09 -04:00
peterjunpark
eeea0d2180 Fix heading levels in pages using embedded templates (#5468) 2025-10-03 13:33:14 -04:00
Pratik Basyal
5c7b993c0c 7.0.2 release changes (#568)
* Initial changes for 7.0.2

* Heading level updated

* Release notes changes

* rocsolver added

* Known issues updated

* Highlights updated

* RN changes

* Release highlights for AI applications updated

* AI developer contents added

* leo's review feedback added

* Compatibility matrix updated

* GPU driver support
2025-09-30 14:02:04 -04:00
197 changed files with 22735 additions and 5383 deletions

View File

@@ -128,6 +128,9 @@ jobs:
     parameters:
       aptPackages: ${{ parameters.aptPackages }}
       pipModules: ${{ parameters.pipModules }}
+  - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-cmake-custom.yml
+    parameters:
+      cmakeVersion: '3.28.6'
   - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
   - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
     parameters:
@@ -152,6 +155,7 @@ jobs:
       -DCMAKE_BUILD_TYPE=Release
       -DGPU_TARGETS=${{ job.target }}
       -DAMDGPU_TARGETS=${{ job.target }}
+      -DCMAKE_CXX_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/amdclang++
       -DCMAKE_MODULE_PATH=$(Agent.BuildDirectory)/rocm/lib/cmake/hip
       -DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm/llvm;$(Agent.BuildDirectory)/rocm
       -DHALF_INCLUDE_DIR=$(Agent.BuildDirectory)/rocm/include
@@ -192,6 +196,9 @@ jobs:
     parameters:
       aptPackages: ${{ parameters.aptPackages }}
       pipModules: ${{ parameters.pipModules }}
+  - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-cmake-custom.yml
+    parameters:
+      cmakeVersion: '3.28.6'
   - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
   - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
     parameters:
@@ -217,6 +224,7 @@ jobs:
       -DCMAKE_BUILD_TYPE=Release
       -DGPU_TARGETS=${{ job.target }}
       -DAMDGPU_TARGETS=${{ job.target }}
+      -DCMAKE_CXX_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/amdclang++
      -DCMAKE_MODULE_PATH=$(Agent.BuildDirectory)/rocm/lib/cmake/hip
      -DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm/llvm;$(Agent.BuildDirectory)/rocm
      -DHALF_INCLUDE_DIR=$(Agent.BuildDirectory)/rocm/include

View File

@@ -34,6 +34,7 @@ parameters:
   default:
     - cmake
     - libnuma-dev
+    - libsimde-dev
     - mesa-common-dev
     - ninja-build
     - ocl-icd-libopencl1

View File

@@ -37,6 +37,7 @@ parameters:
     - libdrm-dev
     - libelf-dev
     - libnuma-dev
+    - libsimde-dev
     - ninja-build
     - pkg-config
 - name: rocmDependencies

View File

@@ -1,10 +1,29 @@
 parameters:
+- name: componentName
+  type: string
+  default: amdsmi
 - name: checkoutRepo
   type: string
   default: 'self'
 - name: checkoutRef
   type: string
   default: ''
+# monorepo related parameters
+- name: sparseCheckoutDir
+  type: string
+  default: ''
+- name: triggerDownstreamJobs
+  type: boolean
+  default: false
+- name: downstreamAggregateNames
+  type: string
+  default: ''
+- name: buildDependsOn
+  type: object
+  default: null
+- name: unifiedBuild
+  type: boolean
+  default: false
 # set to true if doing full build of ROCm stack
 # and dependencies are pulled from same pipeline
 - name: aggregatePipeline
@@ -31,7 +50,7 @@ parameters:
 jobs:
 - ${{ each job in parameters.jobMatrix.buildJobs }}:
-  - job: amdsmi_build_${{ job.os }}
+  - job: ${{ parameters.componentName }}_build_${{ job.os }}
     pool:
       ${{ if eq(job.os, 'ubuntu2404') }}:
         vmImage: 'ubuntu-24.04'
@@ -55,6 +74,7 @@ jobs:
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
       parameters:
         checkoutRepo: ${{ parameters.checkoutRepo }}
+        sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
       parameters:
         os: ${{ job.os }}
@@ -65,50 +85,54 @@ jobs:
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
       parameters:
         os: ${{ job.os }}
+        componentName: ${{ parameters.componentName }}
+        sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
       parameters:
         os: ${{ job.os }}
+        componentName: ${{ parameters.componentName }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
     # - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
     #   parameters:
     #     aptPackages: ${{ parameters.aptPackages }}
-- ${{ each job in parameters.jobMatrix.testJobs }}:
-  - job: amdsmi_test_${{ job.os }}_${{ job.target }}
-    dependsOn: amdsmi_build_${{ job.os }}
-    condition:
-      and(succeeded(),
-      eq(variables['ENABLE_${{ upper(job.target) }}_TESTS'], 'true'),
-      not(containsValue(split(variables['DISABLED_${{ upper(job.target) }}_TESTS'], ','), variables['Build.DefinitionName'])),
-      eq(${{ parameters.aggregatePipeline }}, False)
-      )
-    variables:
-    - group: common
-    - template: /.azuredevops/variables-global.yml
-    pool: ${{ job.target }}_test_pool
-    workspace:
-      clean: all
-    steps:
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
-      parameters:
-        aptPackages: ${{ parameters.aptPackages }}
-        packageManager: ${{ job.packageManager }}
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/local-artifact-download.yml
-      parameters:
-        os: ${{ job.os }}
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/gpu-diagnostics.yml
-      parameters:
-        runRocminfo: false
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
-      parameters:
-        componentName: amdsmi
-        testDir: '$(Agent.BuildDirectory)'
-        testExecutable: 'sudo ./rocm/share/amd_smi/tests/amdsmitst'
-        testParameters: '--gtest_output=xml:./test_output.xml --gtest_color=yes'
-        os: ${{ job.os }}
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
-      parameters:
-        aptPackages: ${{ parameters.aptPackages }}
-        environment: test
-        gpuTarget: ${{ job.target }}
+- ${{ if eq(parameters.unifiedBuild, False) }}:
+  - ${{ each job in parameters.jobMatrix.testJobs }}:
+    - job: ${{ parameters.componentName }}_test_${{ job.os }}_${{ job.target }}
+      dependsOn: ${{ parameters.componentName }}_build_${{ job.os }}
+      condition:
+        and(succeeded(),
+        eq(variables['ENABLE_${{ upper(job.target) }}_TESTS'], 'true'),
+        not(containsValue(split(variables['DISABLED_${{ upper(job.target) }}_TESTS'], ','), '${{ parameters.componentName }}')),
+        eq(${{ parameters.aggregatePipeline }}, False)
+        )
+      variables:
+      - group: common
+      - template: /.azuredevops/variables-global.yml
+      pool: ${{ job.target }}_test_pool
+      workspace:
+        clean: all
+      steps:
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
+        parameters:
+          aptPackages: ${{ parameters.aptPackages }}
+          packageManager: ${{ job.packageManager }}
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/local-artifact-download.yml
+        parameters:
+          os: ${{ job.os }}
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/gpu-diagnostics.yml
+        parameters:
+          runRocminfo: false
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
+        parameters:
+          componentName: ${{ parameters.componentName }}
+          testDir: '$(Agent.BuildDirectory)'
+          testExecutable: 'sudo ./rocm/share/amd_smi/tests/amdsmitst'
+          testParameters: '--gtest_output=xml:./test_output.xml --gtest_color=yes'
+          os: ${{ job.os }}
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
+        parameters:
+          aptPackages: ${{ parameters.aptPackages }}
+          environment: test
+          gpuTarget: ${{ job.target }}
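
With componentName, sparseCheckoutDir, and the other monorepo parameters factored out above, a monorepo pipeline can reuse this template instead of the hard-coded amdsmi jobs. A minimal sketch of such a caller, assuming a plausible template path and checkout directory (neither appears in this diff):

jobs:
# hypothetical caller; the template path and sparseCheckoutDir value are assumptions
- template: /.azuredevops/components/amdsmi.yml
  parameters:
    componentName: amdsmi
    sparseCheckoutDir: projects/amdsmi
    triggerDownstreamJobs: false
    unifiedBuild: false
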

View File

@@ -1,10 +1,29 @@
 parameters:
+- name: componentName
+  type: string
+  default: hipTensor
 - name: checkoutRepo
   type: string
   default: 'self'
 - name: checkoutRef
   type: string
   default: ''
+# monorepo related parameters
+- name: sparseCheckoutDir
+  type: string
+  default: ''
+- name: triggerDownstreamJobs
+  type: boolean
+  default: false
+- name: downstreamAggregateNames
+  type: string
+  default: ''
+- name: buildDependsOn
+  type: object
+  default: null
+- name: unifiedBuild
+  type: boolean
+  default: false
 # set to true if doing full build of ROCm stack
 # and dependencies are pulled from same pipeline
 - name: aggregatePipeline
@@ -51,7 +70,7 @@ parameters:
 jobs:
 - ${{ each job in parameters.jobMatrix.buildJobs }}:
-  - job: hipTensor_build_${{ job.target }}
+  - job: ${{ parameters.componentName }}_build_${{ job.target }}
     variables:
     - group: common
     - template: /.azuredevops/variables-global.yml
@@ -66,17 +85,21 @@ jobs:
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
       parameters:
         checkoutRepo: ${{ parameters.checkoutRepo }}
+        sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
       parameters:
         checkoutRef: ${{ parameters.checkoutRef }}
         dependencyList: ${{ parameters.rocmDependencies }}
         gpuTarget: ${{ job.target }}
         aggregatePipeline: ${{ parameters.aggregatePipeline }}
+        ${{ if parameters.triggerDownstreamJobs }}:
+          downstreamAggregateNames: ${{ parameters.downstreamAggregateNames }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
       parameters:
         extraBuildFlags: >-
           -DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm;$(Agent.BuildDirectory)/rocm/llvm
           -DCMAKE_CXX_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/amdclang++
+          -DCMAKE_C_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/amdclang
           -DROCM_PATH=$(Agent.BuildDirectory)/rocm
           -DCMAKE_BUILD_TYPE=Release
           -DHIPTENSOR_BUILD_TESTS=ON
@@ -84,9 +107,12 @@ jobs:
           -GNinja
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
       parameters:
+        componentName: ${{ parameters.componentName }}
+        sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
         gpuTarget: ${{ job.target }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
       parameters:
+        componentName: ${{ parameters.componentName }}
         gpuTarget: ${{ job.target }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
@@ -94,44 +120,47 @@ jobs:
         aptPackages: ${{ parameters.aptPackages }}
         gpuTarget: ${{ job.target }}
-- ${{ each job in parameters.jobMatrix.testJobs }}:
-  - job: hipTensor_test_${{ job.target }}
-    timeoutInMinutes: 90
-    dependsOn: hipTensor_build_${{ job.target }}
-    condition:
-      and(succeeded(),
-      eq(variables['ENABLE_${{ upper(job.target) }}_TESTS'], 'true'),
-      not(containsValue(split(variables['DISABLED_${{ upper(job.target) }}_TESTS'], ','), variables['Build.DefinitionName'])),
-      eq(${{ parameters.aggregatePipeline }}, False)
-      )
-    variables:
-    - group: common
-    - template: /.azuredevops/variables-global.yml
-    pool: ${{ job.target }}_test_pool
-    workspace:
-      clean: all
-    steps:
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
-      parameters:
-        aptPackages: ${{ parameters.aptPackages }}
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/local-artifact-download.yml
-      parameters:
-        gpuTarget: ${{ job.target }}
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-aqlprofile.yml
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
-      parameters:
-        checkoutRef: ${{ parameters.checkoutRef }}
-        dependencyList: ${{ parameters.rocmTestDependencies }}
-        gpuTarget: ${{ job.target }}
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/gpu-diagnostics.yml
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
-      parameters:
-        componentName: hipTensor
-        testDir: '$(Agent.BuildDirectory)/rocm/bin/hiptensor'
-        testParameters: '-E ".*-extended" --output-on-failure --force-new-ctest-process --output-junit test_output.xml'
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
-      parameters:
-        aptPackages: ${{ parameters.aptPackages }}
-        environment: test
-        gpuTarget: ${{ job.target }}
+- ${{ if eq(parameters.unifiedBuild, False) }}:
+  - ${{ each job in parameters.jobMatrix.testJobs }}:
+    - job: ${{ parameters.componentName }}_test_${{ job.target }}
+      timeoutInMinutes: 90
+      dependsOn: ${{ parameters.componentName }}_build_${{ job.target }}
+      condition:
+        and(succeeded(),
+        eq(variables['ENABLE_${{ upper(job.target) }}_TESTS'], 'true'),
+        not(containsValue(split(variables['DISABLED_${{ upper(job.target) }}_TESTS'], ','), '${{ parameters.componentName }}')),
+        eq(${{ parameters.aggregatePipeline }}, False)
+        )
+      variables:
+      - group: common
+      - template: /.azuredevops/variables-global.yml
+      pool: ${{ job.target }}_test_pool
+      workspace:
+        clean: all
+      steps:
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
+        parameters:
+          aptPackages: ${{ parameters.aptPackages }}
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/local-artifact-download.yml
+        parameters:
+          gpuTarget: ${{ job.target }}
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-aqlprofile.yml
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
+        parameters:
+          checkoutRef: ${{ parameters.checkoutRef }}
+          dependencyList: ${{ parameters.rocmTestDependencies }}
+          gpuTarget: ${{ job.target }}
+          ${{ if parameters.triggerDownstreamJobs }}:
+            downstreamAggregateNames: ${{ parameters.downstreamAggregateNames }}
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/gpu-diagnostics.yml
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
+        parameters:
+          componentName: ${{ parameters.componentName }}
+          testDir: '$(Agent.BuildDirectory)/rocm/bin/hiptensor'
+          testParameters: '-E ".*-extended" --extra-verbose --output-on-failure --force-new-ctest-process --output-junit test_output.xml'
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
+        parameters:
+          aptPackages: ${{ parameters.aptPackages }}
+          environment: test
+          gpuTarget: ${{ job.target }}

View File

@@ -39,6 +39,7 @@ parameters:
- python3 - python3
- python3-dev - python3-dev
- python3-pip - python3-pip
- python3-venv
- libgtest-dev - libgtest-dev
- libboost-filesystem-dev - libboost-filesystem-dev
- libboost-program-options-dev - libboost-program-options-dev
@@ -46,6 +47,8 @@ parameters:
type: object type: object
default: default:
- nanobind>=2.0.0 - nanobind>=2.0.0
- pytest
- pytest-cov
- name: rocmDependencies - name: rocmDependencies
type: object type: object
default: default:
@@ -72,8 +75,10 @@ parameters:
- { os: ubuntu2204, packageManager: apt } - { os: ubuntu2204, packageManager: apt }
- { os: almalinux8, packageManager: dnf } - { os: almalinux8, packageManager: dnf }
testJobs: testJobs:
- { os: ubuntu2204, packageManager: apt, target: gfx942 }
- { os: ubuntu2204, packageManager: apt, target: gfx90a } - { os: ubuntu2204, packageManager: apt, target: gfx90a }
# - { os: ubuntu2204, packageManager: apt, target: gfx1100 }
# - { os: ubuntu2204, packageManager: apt, target: gfx1151 }
# - { os: ubuntu2204, packageManager: apt, target: gfx1201 }
- name: downstreamComponentMatrix - name: downstreamComponentMatrix
type: object type: object
default: default:
@@ -116,6 +121,11 @@ jobs:
       parameters:
         dependencyList:
           - gtest
+    - ${{ if ne(job.os, 'almalinux8') }}:
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-vendor.yml
+        parameters:
+          dependencyList:
+            - catch2
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
       parameters:
         checkoutRepo: ${{ parameters.checkoutRepo }}
@@ -137,6 +147,7 @@ jobs:
           -DORIGAMI_BUILD_SHARED_LIBS=ON
           -DORIGAMI_ENABLE_PYTHON=ON
           -DORIGAMI_BUILD_TESTING=ON
+          -DORIGAMI_ENABLE_FETCH=ON
           -GNinja
     - ${{ if ne(job.os, 'almalinux8') }}:
       - task: PublishPipelineArtifact@1
@@ -169,7 +180,6 @@ jobs:
     dependsOn: origami_build_${{ job.os }}
     condition:
       and(succeeded(),
-        eq(variables['ENABLE_${{ upper(job.target) }}_TESTS'], 'true'),
         not(containsValue(split(variables['DISABLED_${{ upper(job.target) }}_TESTS'], ','), '${{ parameters.componentName }}')),
         eq(${{ parameters.aggregatePipeline }}, False)
       )
@@ -180,30 +190,30 @@ jobs:
     workspace:
       clean: all
     steps:
-    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
-      parameters:
-        checkoutRepo: ${{ parameters.checkoutRepo }}
-        sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
       parameters:
         aptPackages: ${{ parameters.aptPackages }}
         pipModules: ${{ parameters.pipModules }}
         packageManager: ${{ job.packageManager }}
+    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-cmake-custom.yml
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
+    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
+      parameters:
+        checkoutRepo: ${{ parameters.checkoutRepo }}
+        sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
+    - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-vendor.yml
+      parameters:
+        dependencyList:
+          - gtest
+    - ${{ if ne(job.os, 'almalinux8') }}:
+      - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-vendor.yml
+        parameters:
+          dependencyList:
+            - catch2
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/local-artifact-download.yml
       parameters:
         preTargetFilter: ${{ parameters.componentName }}
         os: ${{ job.os }}
-    - task: DownloadPipelineArtifact@2
-      displayName: 'Download Build Directory Artifact'
-      inputs:
-        artifact: '${{ parameters.componentName }}_${{ job.os }}_build_dir'
-        path: '$(Agent.BuildDirectory)/s/build'
-    - task: DownloadPipelineArtifact@2
-      displayName: 'Download Python Source Artifact'
-      inputs:
-        artifact: '${{ parameters.componentName }}_${{ job.os }}_python_src'
-        path: '$(Agent.BuildDirectory)/s/python'
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
       parameters:
         checkoutRef: ${{ parameters.checkoutRef }}
@@ -212,25 +222,72 @@ jobs:
         gpuTarget: ${{ job.target }}
         ${{ if parameters.triggerDownstreamJobs }}:
           downstreamAggregateNames: ${{ parameters.downstreamAggregateNames }}
+    - task: CMake@1
+      displayName: 'Origami Test CMake Configuration'
+      inputs:
+        cmakeArgs: >-
+          -DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm;$(Agent.BuildDirectory)/vendor
+          -DCMAKE_CXX_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/amdclang++
+          -DORIGAMI_BUILD_SHARED_LIBS=ON
+          -DORIGAMI_ENABLE_PYTHON=ON
+          -DORIGAMI_BUILD_TESTING=ON
+          -GNinja
+          $(Agent.BuildDirectory)/s
+    - task: Bash@3
+      displayName: 'Build Origami Tests and Python Bindings'
+      inputs:
+        targetType: inline
+        workingDirectory: build
+        script: |
+          cmake --build . --target origami-tests origami_python -- -j$(nproc)
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/gpu-diagnostics.yml
+    # Run tests using CTest (discovers and runs both C++ and Python tests)
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
       parameters:
         componentName: ${{ parameters.componentName }}
         os: ${{ job.os }}
-        testDir: '$(Agent.BuildDirectory)/rocm/bin'
-        testExecutable: './origami-tests'
-        testParameters: '--yaml origami-tests.yaml --gtest_output=xml:./test_output.xml --gtest_color=yes'
+        testDir: 'build'
+        testParameters: '--output-on-failure --force-new-ctest-process --output-junit test_output.xml'
-    - script: |
-        set -e
-        export PYTHONPATH=$(Agent.BuildDirectory)/s/build/python:$PYTHONPATH
-        echo "--- Running origami_test.py ---"
-        python3 $(Agent.BuildDirectory)/s/python/origami_test.py
-        echo "--- Running origami_grid_test.py ---"
-        python3 $(Agent.BuildDirectory)/s/python/origami_grid_test.py
-      displayName: 'Run Python Binding Tests'
-      condition: succeeded()
+    # Test pip install workflow
+    # - task: Bash@3
+    #   displayName: 'Test Pip Install'
+    #   inputs:
+    #     targetType: inline
+    #     script: |
+    #       set -e
+    #       echo "==================================================================="
+    #       echo "Testing pip install workflow (pip install -e .)"
+    #       echo "==================================================================="
+    #       # Set environment variables for pip install CMake build
+    #       export ROCM_PATH=$(Agent.BuildDirectory)/rocm
+    #       export CMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm:$(Agent.BuildDirectory)/vendor
+    #       export CMAKE_CXX_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/amdclang++
+    #       echo "ROCM_PATH: $ROCM_PATH"
+    #       echo "CMAKE_PREFIX_PATH: $CMAKE_PREFIX_PATH"
+    #       echo "CMAKE_CXX_COMPILER: $CMAKE_CXX_COMPILER"
+    #       echo ""
+    #       # Install from source directory
+    #       cd "$(Agent.BuildDirectory)/s/python"
+    #       pip install -e .
+    #       # Verify import works
+    #       echo ""
+    #       echo "Verifying origami can be imported..."
+    #       python3 -c "import origami; print('✓ Successfully imported origami')"
+    #       # Run pytest on installed package
+    #       echo ""
+    #       echo "Running pytest tests..."
+    #       python3 -m pytest tests/ -v -m "not slow" --tb=short
+    #       echo ""
+    #       echo "==================================================================="
+    #       echo "Pip install test completed successfully"
+    #       echo "==================================================================="
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
       parameters:
         aptPackages: ${{ parameters.aptPackages }}


@@ -1,10 +1,35 @@
 parameters:
+- name: componentName
+  type: string
+  default: rccl
 - name: checkoutRepo
   type: string
   default: 'self'
 - name: checkoutRef
   type: string
   default: ''
+- name: systemsRepo
+  type: string
+  default: systems_repo
+- name: systemsSparseCheckoutDir
+  type: string
+  default: 'projects/rocprofiler-sdk'
+# monorepo related parameters
+- name: sparseCheckoutDir
+  type: string
+  default: ''
+- name: triggerDownstreamJobs
+  type: boolean
+  default: false
+- name: downstreamAggregateNames
+  type: string
+  default: ''
+- name: buildDependsOn
+  type: object
+  default: null
+- name: unifiedBuild
+  type: boolean
+  default: false
 # set to true if doing full build of ROCm stack
 # and dependencies are pulled from same pipeline
 - name: aggregatePipeline
@@ -57,37 +82,52 @@ parameters:
   type: object
   default:
     buildJobs:
-      - gfx942:
-          target: gfx942
-      - gfx90a:
-          target: gfx90a
+      - { os: ubuntu2204, packageManager: apt, target: gfx942 }
+      - { os: ubuntu2204, packageManager: apt, target: gfx90a }
     testJobs:
-      - gfx942:
-          target: gfx942
-      - gfx90a:
-          target: gfx90a
+      - { os: ubuntu2204, packageManager: apt, target: gfx942 }
+      - { os: ubuntu2204, packageManager: apt, target: gfx90a }
+- name: downstreamComponentMatrix
+  type: object
+  default:
+    - rocprofiler-sdk:
+        name: rocprofiler-sdk
+        sparseCheckoutDir: ''
+        skipUnifiedBuild: 'false'
+        buildDependsOn:
+          - rccl_build
 jobs:
 - ${{ each job in parameters.jobMatrix.buildJobs }}:
-  - job: rccl_build_${{ job.target }}
-    timeoutInMinutes: 90
+  - job: ${{ parameters.componentName }}_build_${{ job.os }}_${{ job.target }}
+    ${{ if parameters.buildDependsOn }}:
+      dependsOn:
+        - ${{ each build in parameters.buildDependsOn }}:
+          - ${{ build }}_${{ job.os }}_${{ job.target }}
+    timeoutInMinutes: 120
     variables:
       - group: common
       - template: /.azuredevops/variables-global.yml
       - name: HIP_ROCCLR_HOME
         value: $(Build.BinariesDirectory)/rocm
     pool: ${{ variables.MEDIUM_BUILD_POOL }}
+    ${{ if eq(job.os, 'almalinux8') }}:
+      container:
+        image: rocmexternalcicd.azurecr.io/manylinux228:latest
+        endpoint: ContainerService3
     workspace:
       clean: all
     steps:
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
       parameters:
         aptPackages: ${{ parameters.aptPackages }}
+        packageManager: ${{ job.packageManager }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-cmake-custom.yml
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
       parameters:
         checkoutRepo: ${{ parameters.checkoutRepo }}
+        sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
         submoduleBehaviour: recursive
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-vendor.yml
       parameters:
@@ -97,10 +137,14 @@ jobs:
       parameters:
         checkoutRef: ${{ parameters.checkoutRef }}
         dependencyList: ${{ parameters.rocmDependencies }}
+        os: ${{ job.os }}
         gpuTarget: ${{ job.target }}
         aggregatePipeline: ${{ parameters.aggregatePipeline }}
+        ${{ if parameters.triggerDownstreamJobs }}:
+          downstreamAggregateNames: ${{ parameters.downstreamAggregateNames }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
       parameters:
+        os: ${{ job.os }}
         extraBuildFlags: >-
           -DCMAKE_CXX_COMPILER=$(Agent.BuildDirectory)/rocm/bin/hipcc
           -DCMAKE_C_COMPILER=$(Agent.BuildDirectory)/rocm/bin/hipcc
@@ -112,58 +156,87 @@ jobs:
           -GNinja
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
       parameters:
+        componentName: ${{ parameters.componentName }}
+        sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
+        os: ${{ job.os }}
         gpuTarget: ${{ job.target }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
       parameters:
+        componentName: ${{ parameters.componentName }}
+        os: ${{ job.os }}
         gpuTarget: ${{ job.target }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
+    - ${{ if eq(job.os, 'ubuntu2204') }}:
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
         parameters:
           aptPackages: ${{ parameters.aptPackages }}
           gpuTarget: ${{ job.target }}
           extraEnvVars:
             - HIP_ROCCLR_HOME:::/home/user/workspace/rocm
           installLatestCMake: true
+- ${{ if eq(parameters.unifiedBuild, False) }}:
   - ${{ each job in parameters.jobMatrix.testJobs }}:
-    - job: rccl_test_${{ job.target }}
+    - job: ${{ parameters.componentName }}_test_${{ job.os }}_${{ job.target }}
       timeoutInMinutes: 120
-      dependsOn: rccl_build_${{ job.target }}
+      dependsOn: ${{ parameters.componentName }}_build_${{ job.os }}_${{ job.target }}
       condition:
         and(succeeded(),
           eq(variables['ENABLE_${{ upper(job.target) }}_TESTS'], 'true'),
-          not(containsValue(split(variables['DISABLED_${{ upper(job.target) }}_TESTS'], ','), variables['Build.DefinitionName'])),
+          not(containsValue(split(variables['DISABLED_${{ upper(job.target) }}_TESTS'], ','), '${{ parameters.componentName }}')),
           eq(${{ parameters.aggregatePipeline }}, False)
         )
       variables:
         - group: common
         - template: /.azuredevops/variables-global.yml
       pool: ${{ job.target }}_test_pool
       workspace:
         clean: all
       steps:
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
         parameters:
           aptPackages: ${{ parameters.aptPackages }}
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/local-artifact-download.yml
         parameters:
+          preTargetFilter: ${{ parameters.componentName }}
+          os: ${{ job.os }}
           gpuTarget: ${{ job.target }}
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-aqlprofile.yml
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
         parameters:
           checkoutRef: ${{ parameters.checkoutRef }}
           dependencyList: ${{ parameters.rocmTestDependencies }}
+          os: ${{ job.os }}
           gpuTarget: ${{ job.target }}
+          ${{ if parameters.triggerDownstreamJobs }}:
+            downstreamAggregateNames: ${{ parameters.downstreamAggregateNames }}
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/gpu-diagnostics.yml
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
         parameters:
-          componentName: rccl
+          componentName: ${{ parameters.componentName }}
+          os: ${{ job.os }}
           testDir: '$(Agent.BuildDirectory)/rocm/bin'
           testExecutable: './rccl-UnitTests'
           testParameters: '--gtest_output=xml:./test_output.xml --gtest_color=yes'
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
         parameters:
           aptPackages: ${{ parameters.aptPackages }}
           environment: test
           gpuTarget: ${{ job.target }}
- ${{ if parameters.triggerDownstreamJobs }}:
- ${{ each component in parameters.downstreamComponentMatrix }}:
- ${{ if not(and(parameters.unifiedBuild, eq(component.skipUnifiedBuild, 'true'))) }}:
- template: /.azuredevops/components/${{ component.name }}.yml@pipelines_repo
parameters:
checkoutRepo: ${{ parameters.systemsRepo }}
sparseCheckoutDir: ${{ parameters.systemsSparseCheckoutDir }}
triggerDownstreamJobs: true
unifiedBuild: ${{ parameters.unifiedBuild }}
${{ if parameters.unifiedBuild }}:
buildDependsOn: ${{ component.unifiedBuild.buildDependsOn }}
downstreamAggregateNames: ${{ parameters.downstreamAggregateNames }}+${{ component.unifiedBuild.downstreamAggregateNames }}
${{ else }}:
buildDependsOn: ${{ component.buildDependsOn }}
downstreamAggregateNames: ${{ parameters.downstreamAggregateNames }}+${{ parameters.componentName }}


@@ -210,7 +210,7 @@ jobs:
       parameters:
         componentName: ${{ parameters.componentName }}
         testDir: '$(Agent.BuildDirectory)/rocm/bin/rocprim'
-        extraTestParameters: '-I ${{ job.shard }},,${{ job.shardCount }} -E device_merge_inplace'
+        extraTestParameters: '-I ${{ job.shard }},,${{ job.shardCount }}'
         os: ${{ job.os }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
       parameters:


@@ -1,10 +1,29 @@
parameters: parameters:
- name: componentName
type: string
default: rocWMMA
- name: checkoutRepo - name: checkoutRepo
type: string type: string
default: 'self' default: 'self'
- name: checkoutRef - name: checkoutRef
type: string type: string
default: '' default: ''
# monorepo related parameters
- name: sparseCheckoutDir
type: string
default: ''
- name: triggerDownstreamJobs
type: boolean
default: false
- name: downstreamAggregateNames
type: string
default: ''
- name: buildDependsOn
type: object
default: null
- name: unifiedBuild
type: boolean
default: false
# set to true if doing full build of ROCm stack # set to true if doing full build of ROCm stack
# and dependencies are pulled from same pipeline # and dependencies are pulled from same pipeline
- name: aggregatePipeline - name: aggregatePipeline
@@ -66,7 +85,11 @@ parameters:
 jobs:
 - ${{ each job in parameters.jobMatrix.buildJobs }}:
-  - job: rocWMMA_build_${{ job.target }}
+  - job: ${{ parameters.componentName }}_build_${{ job.target }}
+    ${{ if parameters.buildDependsOn }}:
+      dependsOn:
+        - ${{ each build in parameters.buildDependsOn }}:
+          - ${{ build }}_${{ job.target }}
     variables:
       - group: common
       - template: /.azuredevops/variables-global.yml
@@ -81,6 +104,7 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
parameters: parameters:
checkoutRepo: ${{ parameters.checkoutRepo }} checkoutRepo: ${{ parameters.checkoutRepo }}
sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
parameters: parameters:
checkoutRef: ${{ parameters.checkoutRef }} checkoutRef: ${{ parameters.checkoutRef }}
@@ -102,9 +126,12 @@ jobs:
# gfx1030 not supported in documentation # gfx1030 not supported in documentation
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
parameters: parameters:
componentName: ${{ parameters.componentName }}
sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
gpuTarget: ${{ job.target }} gpuTarget: ${{ job.target }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
parameters: parameters:
componentName: ${{ parameters.componentName }}
gpuTarget: ${{ job.target }} gpuTarget: ${{ job.target }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-links.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
@@ -112,43 +139,45 @@ jobs:
         aptPackages: ${{ parameters.aptPackages }}
         gpuTarget: ${{ job.target }}
+- ${{ if eq(parameters.unifiedBuild, False) }}:
   - ${{ each job in parameters.jobMatrix.testJobs }}:
-    - job: rocWMMA_test_${{ job.target }}
-      timeoutInMinutes: 270
-      dependsOn: rocWMMA_build_${{ job.target }}
+    - job: ${{ parameters.componentName }}_test_${{ job.target }}
+      timeoutInMinutes: 350
+      dependsOn: ${{ parameters.componentName }}_build_${{ job.target }}
       condition:
         and(succeeded(),
           eq(variables['ENABLE_${{ upper(job.target) }}_TESTS'], 'true'),
-          not(containsValue(split(variables['DISABLED_${{ upper(job.target) }}_TESTS'], ','), variables['Build.DefinitionName'])),
+          not(containsValue(split(variables['DISABLED_${{ upper(job.target) }}_TESTS'], ','), '${{ parameters.componentName }}')),
           eq(${{ parameters.aggregatePipeline }}, False)
         )
       variables:
         - group: common
         - template: /.azuredevops/variables-global.yml
       pool: ${{ job.target }}_test_pool
       workspace:
         clean: all
       steps:
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
         parameters:
           aptPackages: ${{ parameters.aptPackages }}
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/local-artifact-download.yml
         parameters:
+          preTargetFilter: ${{ parameters.componentName }}
           gpuTarget: ${{ job.target }}
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-aqlprofile.yml
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
         parameters:
           checkoutRef: ${{ parameters.checkoutRef }}
           dependencyList: ${{ parameters.rocmTestDependencies }}
           gpuTarget: ${{ job.target }}
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/gpu-diagnostics.yml
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
         parameters:
-          componentName: rocWMMA
+          componentName: ${{ parameters.componentName }}
           testDir: '$(Agent.BuildDirectory)/rocm/bin/rocwmma'
       - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
         parameters:
           aptPackages: ${{ parameters.aptPackages }}
           environment: test
           gpuTarget: ${{ job.target }}


@@ -81,7 +81,7 @@ jobs:
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
       parameters:
         componentName: rocm-cmake
-        testParameters: '-E "pass-version-parent" --output-on-failure --force-new-ctest-process --output-junit test_output.xml'
+        testParameters: '-E "pass-version-parent" --extra-verbose --output-on-failure --force-new-ctest-process --output-junit test_output.xml'
         os: ${{ job.os }}
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/manifest.yml
       parameters:


@@ -14,16 +14,42 @@ parameters:
type: object type: object
default: default:
- cmake - cmake
- libdw-dev
- libglfw3-dev - libglfw3-dev
- libmsgpack-dev - libmsgpack-dev
- libopencv-dev
- libtbb-dev - libtbb-dev
- libtiff-dev
- libva-amdgpu-dev
- libva2-amdgpu
- mesa-amdgpu-va-drivers
- libavcodec-dev
- libavformat-dev
- libavutil-dev
- ninja-build - ninja-build
- python3-pip - python3-pip
- protobuf-compiler
- libprotoc-dev
- libopencv-dev
- name: pipModules
type: object
default:
- future==1.0.0
- pytz==2022.1
- numpy==1.23
- google==3.0.0
- protobuf==3.12.4
- onnx==1.12.0
- nnef==1.0.7
- name: rocmDependencies - name: rocmDependencies
type: object type: object
default: default:
- AMDMIGraphX - AMDMIGraphX
- aomp
- aomp-extras
- clr - clr
- half
- composable_kernel
- hipBLAS - hipBLAS
- hipBLAS-common - hipBLAS-common
- hipBLASLt - hipBLASLt
@@ -35,21 +61,35 @@ parameters:
- hipSPARSE - hipSPARSE
- hipTensor - hipTensor
- llvm-project - llvm-project
- MIOpen
- MIVisionX
- rocm_smi_lib
- rccl
- rocAL
- rocALUTION
- rocBLAS - rocBLAS
- rocDecode
- rocFFT - rocFFT
- rocJPEG
- rocPRIM - rocPRIM
- rocprofiler-register - rocprofiler-register
- rocprofiler-sdk
- ROCR-Runtime - ROCR-Runtime
- rocRAND - rocRAND
- rocSOLVER - rocSOLVER
- rocSPARSE - rocSPARSE
- rocThrust - rocThrust
- rocWMMA - rocWMMA
- rpp
- name: rocmTestDependencies - name: rocmTestDependencies
type: object type: object
default: default:
- AMDMIGraphX - AMDMIGraphX
- aomp
- aomp-extras
- clr - clr
- half
- composable_kernel
- hipBLAS - hipBLAS
- hipBLAS-common - hipBLAS-common
- hipBLASLt - hipBLASLt
@@ -61,11 +101,20 @@ parameters:
- hipSPARSE - hipSPARSE
- hipTensor - hipTensor
- llvm-project - llvm-project
- MIOpen
- MIVisionX
- rocm_smi_lib
- rccl
- rocAL
- rocALUTION
- rocBLAS - rocBLAS
- rocDecode
- rocFFT - rocFFT
- rocminfo - rocminfo
- rocPRIM - rocPRIM
- rocJPEG
- rocprofiler-register - rocprofiler-register
- rocprofiler-sdk
- ROCR-Runtime - ROCR-Runtime
- rocRAND - rocRAND
- rocSOLVER - rocSOLVER
@@ -73,6 +122,7 @@ parameters:
- rocThrust - rocThrust
- roctracer - roctracer
- rocWMMA - rocWMMA
- rpp
- name: jobMatrix - name: jobMatrix
type: object type: object
@@ -101,6 +151,8 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters: parameters:
aptPackages: ${{ parameters.aptPackages }} aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
registerROCmPackages: true
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-cmake-custom.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-cmake-custom.yml
parameters: parameters:
cmakeVersion: '3.25.0' cmakeVersion: '3.25.0'
@@ -165,6 +217,7 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters: parameters:
aptPackages: ${{ parameters.aptPackages }} aptPackages: ${{ parameters.aptPackages }}
registerROCmPackages: true
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-cmake-custom.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-cmake-custom.yml
parameters: parameters:
cmakeVersion: '3.25.0' cmakeVersion: '3.25.0'
@@ -198,5 +251,6 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
parameters: parameters:
aptPackages: ${{ parameters.aptPackages }} aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
environment: test environment: test
gpuTarget: ${{ job.target }} gpuTarget: ${{ job.target }}


@@ -43,9 +43,14 @@ parameters:
- ninja-build - ninja-build
- python3-pip - python3-pip
- python3-venv - python3-venv
- googletest
- libgtest-dev
- libgmock-dev
- libboost-filesystem-dev
- name: pipModules - name: pipModules
type: object type: object
default: default:
- msgpack
- joblib - joblib
- "packaging>=22.0" - "packaging>=22.0"
- pytest - pytest
@@ -147,6 +152,13 @@ jobs:
echo "##vso[task.prependpath]$USER_BASE/bin" echo "##vso[task.prependpath]$USER_BASE/bin"
echo "##vso[task.setvariable variable=PytestCmakePath]$USER_BASE/share/Pytest/cmake" echo "##vso[task.setvariable variable=PytestCmakePath]$USER_BASE/share/Pytest/cmake"
displayName: Set cmake configure paths displayName: Set cmake configure paths
- task: Bash@3
displayName: Add ROCm binaries to PATH
inputs:
targetType: inline
script: |
echo "##vso[task.prependpath]$(Agent.BuildDirectory)/rocm/bin"
echo "##vso[task.prependpath]$(Agent.BuildDirectory)/rocm/llvm/bin"
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
parameters: parameters:
os: ${{ job.os }} os: ${{ job.os }}


@@ -65,6 +65,13 @@ parameters:
- pytest - pytest
- pytest-cov - pytest-cov
- pytest-xdist - pytest-xdist
- name: rocmDependencies
type: object
default:
- clr
- llvm-project
- ROCR-Runtime
- rocprofiler-sdk
- name: rocmTestDependencies - name: rocmTestDependencies
type: object type: object
default: default:
@@ -101,10 +108,12 @@ jobs:
     ${{ if parameters.buildDependsOn }}:
       dependsOn:
         - ${{ each build in parameters.buildDependsOn }}:
-          - ${{ build }}_${{ job.os }}_${{ job.target }}
+          - ${{ build }}_${{ job.target }}
     variables:
       - group: common
       - template: /.azuredevops/variables-global.yml
+      - name: ROCM_PATH
+        value: $(Agent.BuildDirectory)/rocm
     pool:
       vmImage: ${{ variables.BASE_BUILD_POOL }}
     workspace:
@@ -119,6 +128,14 @@ jobs:
parameters: parameters:
checkoutRepo: ${{ parameters.checkoutRepo }} checkoutRepo: ${{ parameters.checkoutRepo }}
sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }} sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
parameters:
checkoutRef: ${{ parameters.checkoutRef }}
dependencyList: ${{ parameters.rocmDependencies }}
gpuTarget: ${{ job.target }}
aggregatePipeline: ${{ parameters.aggregatePipeline }}
${{ if parameters.triggerDownstreamJobs }}:
downstreamAggregateNames: ${{ parameters.downstreamAggregateNames }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
parameters: parameters:
extraBuildFlags: >- extraBuildFlags: >-


@@ -79,27 +79,27 @@ parameters:
   type: object
   default:
     buildJobs:
-      - gfx942:
-          target: gfx942
-      - gfx90a:
-          target: gfx90a
+      - { os: ubuntu2204, packageManager: apt, target: gfx942 }
+      - { os: ubuntu2204, packageManager: apt, target: gfx90a }
     testJobs:
-      - gfx942:
-          target: gfx942
-      - gfx90a:
-          target: gfx90a
+      - { os: ubuntu2204, packageManager: apt, target: gfx942 }
+      - { os: ubuntu2204, packageManager: apt, target: gfx90a }
 jobs:
 - ${{ each job in parameters.jobMatrix.buildJobs }}:
-  - job: rocprofiler_sdk_build_${{ job.target }}
+  - job: rocprofiler_sdk_build_${{ job.os }}_${{ job.target }}
     ${{ if parameters.buildDependsOn }}:
       dependsOn:
         - ${{ each build in parameters.buildDependsOn }}:
-          - ${{ build }}_${{ job.target }}
+          - ${{ build }}_${{ job.os }}_${{ job.target }}
     variables:
       - group: common
       - template: /.azuredevops/variables-global.yml
     pool: ${{ variables.MEDIUM_BUILD_POOL }}
+    ${{ if eq(job.os, 'almalinux8') }}:
+      container:
+        image: rocmexternalcicd.azurecr.io/manylinux228:latest
+        endpoint: ContainerService3
     workspace:
       clean: all
     steps:
@@ -107,6 +107,7 @@ jobs:
parameters: parameters:
aptPackages: ${{ parameters.aptPackages }} aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }} pipModules: ${{ parameters.pipModules }}
packageManager: ${{ job.packageManager }}
registerROCmPackages: true registerROCmPackages: true
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
@@ -118,6 +119,7 @@ jobs:
parameters: parameters:
checkoutRef: ${{ parameters.checkoutRef }} checkoutRef: ${{ parameters.checkoutRef }}
dependencyList: ${{ parameters.rocmDependencies }} dependencyList: ${{ parameters.rocmDependencies }}
os: ${{ job.os }}
gpuTarget: ${{ job.target }} gpuTarget: ${{ job.target }}
aggregatePipeline: ${{ parameters.aggregatePipeline }} aggregatePipeline: ${{ parameters.aggregatePipeline }}
${{ if parameters.triggerDownstreamJobs }}: ${{ if parameters.triggerDownstreamJobs }}:
@@ -132,6 +134,7 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
parameters: parameters:
componentName: ${{ parameters.componentName }} componentName: ${{ parameters.componentName }}
os: ${{ job.os }}
extraBuildFlags: >- extraBuildFlags: >-
-DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm -DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm
-DROCPROFILER_BUILD_TESTS=ON -DROCPROFILER_BUILD_TESTS=ON
@@ -143,6 +146,7 @@ jobs:
parameters: parameters:
componentName: ${{ parameters.componentName }} componentName: ${{ parameters.componentName }}
sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }} sparseCheckoutDir: ${{ parameters.sparseCheckoutDir }}
os: ${{ job.os }}
gpuTarget: ${{ job.target }} gpuTarget: ${{ job.target }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
parameters: parameters:
@@ -158,8 +162,8 @@ jobs:
 - ${{ if eq(parameters.unifiedBuild, False) }}:
   - ${{ each job in parameters.jobMatrix.testJobs }}:
-    - job: rocprofiler_sdk_test_${{ job.target }}
-      dependsOn: rocprofiler_sdk_build_${{ job.target }}
+    - job: rocprofiler_sdk_test_${{ job.os }}_${{ job.target }}
+      dependsOn: rocprofiler_sdk_build_${{ job.os }}_${{ job.target }}
       condition:
         and(succeeded(),
           eq(variables['ENABLE_${{ upper(job.target) }}_TESTS'], 'true'),
@@ -177,6 +181,7 @@ jobs:
parameters: parameters:
aptPackages: ${{ parameters.aptPackages }} aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }} pipModules: ${{ parameters.pipModules }}
packageManager: ${{ job.packageManager }}
registerROCmPackages: true registerROCmPackages: true
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
@@ -188,6 +193,7 @@ jobs:
parameters: parameters:
checkoutRef: ${{ parameters.checkoutRef }} checkoutRef: ${{ parameters.checkoutRef }}
dependencyList: ${{ parameters.rocmDependencies }} dependencyList: ${{ parameters.rocmDependencies }}
os: ${{ job.os }}
gpuTarget: ${{ job.target }} gpuTarget: ${{ job.target }}
${{ if parameters.triggerDownstreamJobs }}: ${{ if parameters.triggerDownstreamJobs }}:
downstreamAggregateNames: ${{ parameters.downstreamAggregateNames }} downstreamAggregateNames: ${{ parameters.downstreamAggregateNames }}
@@ -202,6 +208,7 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
parameters: parameters:
componentName: ${{ parameters.componentName }} componentName: ${{ parameters.componentName }}
os: ${{ job.os }}
extraBuildFlags: >- extraBuildFlags: >-
-DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm -DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm
-DROCPROFILER_BUILD_TESTS=ON -DROCPROFILER_BUILD_TESTS=ON
@@ -213,7 +220,8 @@ jobs:
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
       parameters:
         componentName: ${{ parameters.componentName }}
-        testDir: $(Agent.BuildDirectory)/s/build
+        os: ${{ job.os }}
+        testDir: $(Agent.BuildDirectory)/build
     - template: ${{ variables.CI_TEMPLATE_PATH }}/steps/docker-container.yml
       parameters:
         aptPackages: ${{ parameters.aptPackages }}


@@ -0,0 +1,63 @@
parameters:
- name: checkoutRepo
type: string
default: 'self'
- name: checkoutRef
type: string
default: ''
- name: cli11Version
type: string
default: ''
- name: aptPackages
type: object
default:
- cmake
- git
- ninja-build
- name: jobMatrix
type: object
default:
buildJobs:
- { os: ubuntu2204, packageManager: apt}
- { os: almalinux8, packageManager: dnf}
jobs:
- ${{ each job in parameters.jobMatrix.buildJobs }}:
- job: cli11_${{ job.os }}
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool:
vmImage: 'ubuntu-22.04'
${{ if eq(job.os, 'almalinux8') }}:
container:
image: rocmexternalcicd.azurecr.io/manylinux228:latest
endpoint: ContainerService3
workspace:
clean: all
steps:
- checkout: none
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
packageManager: ${{ job.packageManager }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- task: Bash@3
displayName: Clone cli11 ${{ parameters.cli11Version }}
inputs:
targetType: inline
script: git clone https://github.com/CLIUtils/CLI11.git -b ${{ parameters.cli11Version }}
workingDirectory: $(Agent.BuildDirectory)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
parameters:
os: ${{ job.os }}
cmakeBuildDir: $(Agent.BuildDirectory)/CLI11/build
cmakeSourceDir: $(Agent.BuildDirectory)/CLI11
useAmdclang: false
extraBuildFlags: >-
-DCMAKE_BUILD_TYPE=Release
-GNinja
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
parameters:
os: ${{ job.os }}


@@ -0,0 +1,66 @@
parameters:
- name: checkoutRepo
type: string
default: 'self'
- name: checkoutRef
type: string
default: ''
- name: yamlcppVersion
type: string
default: ''
- name: aptPackages
type: object
default:
- cmake
- git
- ninja-build
- name: jobMatrix
type: object
default:
buildJobs:
- { os: ubuntu2204, packageManager: apt}
- { os: almalinux8, packageManager: dnf}
jobs:
- ${{ each job in parameters.jobMatrix.buildJobs }}:
- job: yamlcpp_${{ job.os }}
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool:
vmImage: 'ubuntu-22.04'
${{ if eq(job.os, 'almalinux8') }}:
container:
image: rocmexternalcicd.azurecr.io/manylinux228:latest
endpoint: ContainerService3
workspace:
clean: all
steps:
- checkout: none
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
packageManager: ${{ job.packageManager }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- task: Bash@3
displayName: Clone yaml-cpp ${{ parameters.yamlcppVersion }}
inputs:
targetType: inline
script: git clone https://github.com/jbeder/yaml-cpp.git -b ${{ parameters.yamlcppVersion }}
workingDirectory: $(Agent.BuildDirectory)
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
parameters:
os: ${{ job.os }}
cmakeBuildDir: $(Agent.BuildDirectory)/yaml-cpp/build
cmakeSourceDir: $(Agent.BuildDirectory)/yaml-cpp
useAmdclang: false
extraBuildFlags: >-
-DCMAKE_BUILD_TYPE=Release
-DYAML_CPP_BUILD_TOOLS=OFF
-DYAML_BUILD_SHARED_LIBS=OFF
-DYAML_CPP_INSTALL=ON
-GNinja
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
parameters:
os: ${{ job.os }}
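The cli11.yml and yamlcpp.yml jobs above only build and upload the third-party libraries as pipeline artifacts; a component pipeline still has to fetch them and point CMake at the vendor prefix. A minimal sketch of how a consumer job might do that is below. It assumes that dependencies-vendor.yml accepts cli11 and yaml-cpp entries in its dependencyList (only gtest and catch2 appear in this changeset, so those names are an assumption), and reuses the $(Agent.BuildDirectory)/vendor prefix already used elsewhere in these pipelines:

# Hypothetical consumer steps: pull prebuilt vendor artifacts, then expose them to CMake.
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-vendor.yml
  parameters:
    dependencyList:
      - cli11      # assumption: artifact produced by cli11.yml above
      - yaml-cpp   # assumption: artifact produced by yamlcpp.yml above
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
  parameters:
    os: ${{ job.os }}
    extraBuildFlags: >-
      -DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm;$(Agent.BuildDirectory)/vendor
      -GNinja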


@@ -0,0 +1,23 @@
variables:
- group: common
- template: /.azuredevops/variables-global.yml
parameters:
- name: cli11Version
type: string
default: "main"
resources:
repositories:
- repository: pipelines_repo
type: github
endpoint: ROCm
name: ROCm/ROCm
trigger: none
pr: none
jobs:
- template: ${{ variables.CI_DEPENDENCIES_PATH }}/cli11.yml
parameters:
cli11Version: ${{ parameters.cli11Version }}


@@ -0,0 +1,24 @@
variables:
- group: common
- template: /.azuredevops/variables-global.yml
parameters:
- name: yamlcppVersion
type: string
default: "0.8.0"
resources:
repositories:
- repository: pipelines_repo
type: github
endpoint: ROCm
name: ROCm/ROCm
trigger: none
pr: none
jobs:
- template: ${{ variables.CI_DEPENDENCIES_PATH }}/yamlcpp.yml
parameters:
yamlcppVersion: ${{ parameters.yamlcppVersion }}


@@ -63,6 +63,7 @@ parameters:
libopenblas-dev: openblas-devel libopenblas-dev: openblas-devel
libopenmpi-dev: openmpi-devel libopenmpi-dev: openmpi-devel
libpci-dev: libpciaccess-devel libpci-dev: libpciaccess-devel
libsimde-dev: simde-devel
libssl-dev: openssl-devel libssl-dev: openssl-devel
# note: libstdc++-devel is in the base packages list # note: libstdc++-devel is in the base packages list
libsystemd-dev: systemd-devel libsystemd-dev: systemd-devel


@@ -35,8 +35,8 @@ parameters:
     developBranch: develop
     hasGpuTarget: true
   amdsmi:
-    pipelineId: 99
-    developBranch: amd-staging
+    pipelineId: 376
+    developBranch: develop
     hasGpuTarget: false
   aomp-extras:
     pipelineId: 111
@@ -115,7 +115,7 @@ parameters:
     developBranch: develop
     hasGpuTarget: true
   hipTensor:
-    pipelineId: 105
+    pipelineId: 374
     developBranch: develop
     hasGpuTarget: true
   llvm-project:
@@ -263,7 +263,7 @@ parameters:
     developBranch: develop
     hasGpuTarget: true
   rocWMMA:
-    pipelineId: 109
+    pipelineId: 370
     developBranch: develop
     hasGpuTarget: true
   rpp:


@@ -13,7 +13,7 @@ parameters:
   default: ctest
 - name: testParameters
   type: string
-  default: --output-on-failure --force-new-ctest-process --output-junit test_output.xml
+  default: --extra-verbose --output-on-failure --force-new-ctest-process --output-junit test_output.xml
 - name: extraTestParameters
   type: string
   default: ''
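For reference, with this new default the shared test step drives CTest with per-test verbose output in addition to the existing failure logging and JUnit report. A rough, illustrative equivalent of what the step ends up running is sketched below; the working directory is an assumption, since each component passes its own testDir (and may append extraTestParameters):

# Illustrative only: approximately what steps/test.yml executes with the new default parameters.
- task: Bash@3
  displayName: 'Run ctest with default test parameters (sketch)'
  inputs:
    targetType: inline
    workingDirectory: '$(Agent.BuildDirectory)/rocm/bin'   # assumption: varies per component
    script: |
      ctest --extra-verbose --output-on-failure --force-new-ctest-process --output-junit test_output.xml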

.gitignore (vendored, 1 line changed)

@@ -1,6 +1,7 @@
 .venv
 .vscode
 build
+__pycache__
 # documentation artifacts
 _build/


@@ -27,6 +27,7 @@ ASICs
ASan ASan
ASAN ASAN
ASm ASm
Async
ATI ATI
atomicRMW atomicRMW
AddressSanitizer AddressSanitizer
@@ -34,6 +35,7 @@ AlexNet
Andrej Andrej
Arb Arb
Autocast Autocast
autograd
BARs BARs
BatchNorm BatchNorm
BLAS BLAS
@@ -77,6 +79,7 @@ CX
Cavium Cavium
CentOS CentOS
ChatGPT ChatGPT
Cholesky
CoRR CoRR
Codespaces Codespaces
Commitizen Commitizen
@@ -86,9 +89,11 @@ Conda
ConnectX ConnectX
CountOnes CountOnes
CuPy CuPy
customizable
da da
Dashboarding Dashboarding
Dataloading Dataloading
dataflows
DBRX DBRX
DDR DDR
DF DF
@@ -130,10 +135,13 @@ ELMo
ENDPGM ENDPGM
EPYC EPYC
ESXi ESXi
EP
EoS EoS
etcd etcd
equalto
fas fas
FBGEMM FBGEMM
FiLM
FIFOs FIFOs
FFT FFT
FFTs FFTs
@@ -154,10 +162,12 @@ Fortran
Fuyu Fuyu
GALB GALB
GAT GAT
GATNE
GCC GCC
GCD GCD
GCDs GCDs
GCN GCN
GCNN
GDB GDB
GDDR GDDR
GDR GDR
@@ -176,13 +186,16 @@ Glibc
GLXT GLXT
Gloo Gloo
GMI GMI
GNN
GNNs
GPG GPG
GPR GPR
GPT GPT
GPU GPU
GPU's GPU's
GPUDirect
GPUs GPUs
Graphbolt GraphBolt
GraphSage GraphSage
GRBM GRBM
GRE GRE
@@ -212,7 +225,10 @@ Haswell
Higgs Higgs
href href
Hyperparameters Hyperparameters
HybridEngine
Huggingface Huggingface
Hunyuan
HunyuanVideo
IB IB
ICD ICD
ICT ICT
@@ -243,7 +259,9 @@ Intersphinx
Intra Intra
Ioffe Ioffe
JAX's JAX's
JAXLIB
Jinja Jinja
js
JSON JSON
Jupyter Jupyter
KFD KFD
@@ -263,6 +281,7 @@ LLM
LLMs LLMs
LLVM LLVM
LM LM
logsumexp
LRU LRU
LSAN LSAN
LSan LSan
@@ -298,6 +317,7 @@ Makefiles
Matplotlib Matplotlib
Matrox Matrox
MaxText MaxText
MBT
Megablocks Megablocks
Megatrends Megatrends
Megatron Megatron
@@ -307,12 +327,15 @@ Meta's
Miniconda Miniconda
MirroredStrategy MirroredStrategy
Mixtral Mixtral
MLA
MosaicML MosaicML
MoEs MoEs
Mooncake Mooncake
Mpops Mpops
Multicore Multicore
Multithreaded Multithreaded
mx
MXFP
MyEnvironment MyEnvironment
MyST MyST
NANOO NANOO
@@ -348,6 +371,7 @@ OFED
OMM OMM
OMP OMP
OMPI OMPI
OOM
OMPT OMPT
OMPX OMPX
ONNX ONNX
@@ -374,6 +398,7 @@ perf
PEQT PEQT
PIL PIL
PILImage PILImage
PJRT
POR POR
PRNG PRNG
PRs PRs
@@ -393,6 +418,7 @@ Profiler's
PyPi PyPi
Pytest Pytest
PyTorch PyTorch
QPS
Qcycles Qcycles
Qwen Qwen
RAII RAII
@@ -495,13 +521,12 @@ TPS
TPU TPU
TPUs TPUs
TSME TSME
Taichi
Taichi's
Tagram Tagram
TensileLite TensileLite
TensorBoard TensorBoard
TensorFlow TensorFlow
TensorParallel TensorParallel
TheRock
ToC ToC
TorchAudio TorchAudio
torchaudio torchaudio
@@ -519,6 +544,7 @@ UAC
UC UC
UCC UCC
UCX UCX
ud
UE UE
UIF UIF
UMC UMC
@@ -668,12 +694,14 @@ denoised
denoises denoises
denormalize denormalize
dequantization dequantization
dequantized
dequantizes dequantizes
deserializers deserializers
detections detections
dev dev
devicelibs devicelibs
devsel devsel
dgl
dimensionality dimensionality
disambiguates disambiguates
distro distro
@@ -713,6 +741,7 @@ githooks
github github
globals globals
gnupg gnupg
gpu
grayscale grayscale
gx gx
gzip gzip
@@ -767,6 +796,7 @@ invariants
invocating invocating
ipo ipo
jax jax
json
kdb kdb
kfd kfd
kv kv
@@ -780,6 +810,7 @@ linalg
linearized linearized
linter linter
linux linux
llm
llvm llvm
lm lm
localscratch localscratch
@@ -825,11 +856,13 @@ pallas
parallelization parallelization
parallelizing parallelizing
param param
params
parameterization parameterization
passthrough passthrough
pe pe
perfcounter perfcounter
performant performant
piecewise
perl perl
pragma pragma
pre pre
@@ -870,6 +903,7 @@ querySelectorAll
queueing queueing
qwen qwen
radeon radeon
rc
rccl rccl
rdc rdc
rdma rdma
@@ -931,6 +965,7 @@ scalability
scalable scalable
scipy scipy
seealso seealso
selectattr
selectedTag selectedTag
sendmsg sendmsg
seqs seqs
@@ -967,6 +1002,7 @@ tabindex
targetContainer targetContainer
td td
tensorfloat tensorfloat
tf
th th
tokenization tokenization
tokenize tokenize
@@ -975,10 +1011,12 @@ tokenizer
tokenizes tokenizes
toolchain toolchain
toolchains toolchains
topk
toolset toolset
toolsets toolsets
torchtitan torchtitan
torchvision torchvision
tp
tqdm tqdm
tracebacks tracebacks
txt txt
@@ -1001,6 +1039,7 @@ USM
UTCL UTCL
UTIL UTIL
utils utils
UX
vL vL
variational variational
vdi vdi
@@ -1030,6 +1069,8 @@ writebacks
wrreq wrreq
wzo wzo
xargs xargs
xdit
xDiT
xGMI xGMI
xPacked xPacked
xz xz

(file diff suppressed because it is too large)

RELEASE.md (2721 changed lines)

(file diff suppressed because it is too large)

@@ -1,33 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?> <?xml version="1.0" encoding="UTF-8"?>
<manifest> <manifest>
<remote name="rocm-org" fetch="https://github.com/ROCm/" /> <remote name="rocm-org" fetch="https://github.com/ROCm/" />
<default revision="refs/tags/rocm-7.0.1" <default revision="refs/tags/rocm-7.1.1"
remote="rocm-org" remote="rocm-org"
sync-c="true" sync-c="true"
sync-j="4" /> sync-j="4" />
<!--list of projects for ROCm--> <!--list of projects for ROCm-->
<project name="ROCK-Kernel-Driver" /> <project name="ROCK-Kernel-Driver" />
<project name="ROCR-Runtime" />
<project name="amdsmi" /> <project name="amdsmi" />
<project name="aqlprofile" />
<project name="rdc" />
<project name="rocm_bandwidth_test" /> <project name="rocm_bandwidth_test" />
<project name="rocm_smi_lib" />
<project name="rocm-core" />
<project name="rocm-examples" /> <project name="rocm-examples" />
<project name="rocminfo" />
<project name="rocprofiler" />
<project name="rocprofiler-register" />
<project name="rocprofiler-sdk" />
<project name="rocprofiler-compute" />
<project name="rocprofiler-systems" />
<project name="roctracer" />
<!--HIP Projects--> <!--HIP Projects-->
<project name="hip" />
<project name="hip-tests" />
<project name="HIPIFY" /> <project name="HIPIFY" />
<project name="clr" />
<project name="hipother" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler --> <!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="half" /> <project name="half" />
<project name="llvm-project" /> <project name="llvm-project" />
@@ -55,9 +39,15 @@
        MIOpen rocBLAS rocFFT rocPRIM rocRAND
        rocSPARSE rocThrust Tensile -->
   <project groups="mathlibs" name="rocm-libraries" />
+  <!-- The following components have been migrated to rocm-systems:
+       aqlprofile clr hip hip-tests hipother
+       rdc rocm-core rocm_smi_lib rocminfo rocprofiler-compute
+       rocprofiler-register rocprofiler-sdk rocprofiler-systems
+       rocprofiler rocr-runtime roctracer -->
+  <project groups="mathlibs" name="rocm-systems" />
   <project groups="mathlibs" name="rocPyDecode" />
-  <project groups="mathlibs" name="rocSHMEM" />
   <project groups="mathlibs" name="rocSOLVER" />
+  <project groups="mathlibs" name="rocSHMEM" />
   <project groups="mathlibs" name="rocWMMA" />
   <project groups="mathlibs" name="rocm-cmake" />
   <project groups="mathlibs" name="rpp" />


@@ -25,69 +25,69 @@ additional licenses. Please review individual repositories for more information.
 <!-- spellcheck-disable -->
 | Component | License |
 |:---------------------|:-------------------------|
-| [AMD Compute Language Runtime (CLR)](https://github.com/ROCm/clr) | [MIT](https://github.com/ROCm/clr/blob/amd-staging/LICENSE.txt) |
+| [AMD Compute Language Runtime (CLR)](https://github.com/ROCm/rocm-systems/tree/develop/projects/clr) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/clr/LICENSE.md) |
 | [AMD SMI](https://github.com/ROCm/amdsmi) | [MIT](https://github.com/ROCm/amdsmi/blob/amd-staging/LICENSE) |
 | [aomp](https://github.com/ROCm/aomp/) | [Apache 2.0](https://github.com/ROCm/aomp/blob/aomp-dev/LICENSE) |
 | [aomp-extras](https://github.com/ROCm/aomp-extras/) | [MIT](https://github.com/ROCm/aomp-extras/blob/aomp-dev/LICENSE) |
-| [AQLprofile](https://github.com/rocm/aqlprofile/) | [MIT](https://github.com/ROCm/aqlprofile/blob/amd-staging/LICENSE.md) |
+| [AQLprofile](https://github.com/ROCm/rocm-systems/tree/develop/projects/aqlprofile/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/aqlprofile/LICENSE.md) |
 | [Code Object Manager (Comgr)](https://github.com/ROCm/llvm-project/tree/amd-staging/amd/comgr) | [The University of Illinois/NCSA](https://github.com/ROCm/llvm-project/blob/amd-staging/amd/comgr/LICENSE.txt) |
 | [Composable Kernel](https://github.com/ROCm/composable_kernel) | [MIT](https://github.com/ROCm/composable_kernel/blob/develop/LICENSE) |
 | [half](https://github.com/ROCm/half/) | [MIT](https://github.com/ROCm/half/blob/rocm/LICENSE.txt) |
-| [HIP](https://github.com/ROCm/HIP/) | [MIT](https://github.com/ROCm/HIP/blob/amd-staging/LICENSE.txt) |
+| [HIP](https://github.com/ROCm/rocm-systems/tree/develop/projects/hip/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/hip/LICENSE.md) |
-| [hipamd](https://github.com/ROCm/clr/tree/amd-staging/hipamd) | [MIT](https://github.com/ROCm/clr/blob/amd-staging/hipamd/LICENSE.txt) |
+| [hipamd](https://github.com/ROCm/rocm-systems/tree/develop/projects/clr/hipamd/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/clr/hipamd/LICENSE.md) |
-| [hipBLAS](https://github.com/ROCm/hipBLAS/) | [MIT](https://github.com/ROCm/hipBLAS/blob/develop/LICENSE.md) |
+| [hipBLAS](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipblas/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipblas/LICENSE.md) |
-| [hipBLASLt](https://github.com/ROCm/hipBLASLt/) | [MIT](https://github.com/ROCm/hipBLASLt/blob/develop/LICENSE.md) |
+| [hipBLASLt](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipblaslt/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipblaslt/LICENSE.md) |
 | [HIPCC](https://github.com/ROCm/llvm-project/tree/amd-staging/amd/hipcc) | [MIT](https://github.com/ROCm/llvm-project/blob/amd-staging/amd/hipcc/LICENSE.txt) |
-| [hipCUB](https://github.com/ROCm/hipCUB/) | [Custom](https://github.com/ROCm/hipCUB/blob/develop/LICENSE.txt) |
+| [hipCUB](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipcub/) | [Custom](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipcub/LICENSE.txt) |
-| [hipFFT](https://github.com/ROCm/hipFFT/) | [MIT](https://github.com/ROCm/hipFFT/blob/develop/LICENSE.md) |
+| [hipFFT](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipfft/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipfft/LICENSE.md) |
 | [hipfort](https://github.com/ROCm/hipfort/) | [MIT](https://github.com/ROCm/hipfort/blob/develop/LICENSE) |
 | [HIPIFY](https://github.com/ROCm/HIPIFY/) | [MIT](https://github.com/ROCm/HIPIFY/blob/amd-staging/LICENSE.txt) |
-| [hipRAND](https://github.com/ROCm/hipRAND/) | [MIT](https://github.com/ROCm/hipRAND/blob/develop/LICENSE.txt) |
+| [hipRAND](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hiprand/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hiprand/LICENSE.md) |
-| [hipSOLVER](https://github.com/ROCm/hipSOLVER/) | [MIT](https://github.com/ROCm/hipSOLVER/blob/develop/LICENSE.md) |
+| [hipSOLVER](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipsolver/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipsolver/LICENSE.md) |
-| [hipSPARSE](https://github.com/ROCm/hipSPARSE/) | [MIT](https://github.com/ROCm/hipSPARSE/blob/develop/LICENSE.md) |
+| [hipSPARSE](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipsparse/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipsparse/LICENSE.md) |
-| [hipSPARSELt](https://github.com/ROCm/hipSPARSELt/) | [MIT](https://github.com/ROCm/hipSPARSELt/blob/develop/LICENSE.md) |
+| [hipSPARSELt](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hipsparselt/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hipsparselt/LICENSE.md) |
-| [hipTensor](https://github.com/ROCm/hipTensor) | [MIT](https://github.com/ROCm/hipTensor/blob/develop/LICENSE) |
+| [hipTensor](https://github.com/ROCm/rocm-libraries/tree/develop/projects/hiptensor/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/hiptensor/LICENSE) |
 | [llvm-project](https://github.com/ROCm/llvm-project/) | [Apache](https://github.com/ROCm/llvm-project/blob/amd-staging/LICENSE.TXT) |
 | [llvm-project/flang](https://github.com/ROCm/llvm-project/tree/amd-staging/flang) | [Apache 2.0](https://github.com/ROCm/llvm-project/blob/amd-staging/flang/LICENSE.TXT) |
 | [MIGraphX](https://github.com/ROCm/AMDMIGraphX/) | [MIT](https://github.com/ROCm/AMDMIGraphX/blob/develop/LICENSE) |
-| [MIOpen](https://github.com/ROCm/MIOpen/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/miopen/LICENSE.md) |
+| [MIOpen](https://github.com/ROCm/rocm-libraries/tree/develop/projects/miopen/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/miopen/LICENSE.md) |
 | [MIVisionX](https://github.com/ROCm/MIVisionX/) | [MIT](https://github.com/ROCm/MIVisionX/blob/develop/LICENSE.txt) |
 | [rocAL](https://github.com/ROCm/rocAL) | [MIT](https://github.com/ROCm/rocAL/blob/develop/LICENSE.txt) |
 | [rocALUTION](https://github.com/ROCm/rocALUTION/) | [MIT](https://github.com/ROCm/rocALUTION/blob/develop/LICENSE.md) |
-| [rocBLAS](https://github.com/ROCm/rocBLAS/) | [MIT](https://github.com/ROCm/rocBLAS/blob/develop/LICENSE.md) |
+| [rocBLAS](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocblas/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocblas/LICENSE.md) |
 | [ROCdbgapi](https://github.com/ROCm/ROCdbgapi/) | [MIT](https://github.com/ROCm/ROCdbgapi/blob/amd-staging/LICENSE.txt) |
 | [rocDecode](https://github.com/ROCm/rocDecode) | [MIT](https://github.com/ROCm/rocDecode/blob/develop/LICENSE) |
-| [rocFFT](https://github.com/ROCm/rocFFT/) | [MIT](https://github.com/ROCm/rocFFT/blob/develop/LICENSE.md) |
+| [rocFFT](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocfft/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocfft/LICENSE.md) |
| [ROCgdb](https://github.com/ROCm/ROCgdb/) | [GNU General Public License v3.0](https://github.com/ROCm/ROCgdb/blob/amd-staging/COPYING3) | | [ROCgdb](https://github.com/ROCm/ROCgdb/) | [GNU General Public License v3.0](https://github.com/ROCm/ROCgdb/blob/amd-staging/COPYING3) |
| [rocJPEG](https://github.com/ROCm/rocJPEG/) | [MIT](https://github.com/ROCm/rocJPEG/blob/develop/LICENSE) | | [rocJPEG](https://github.com/ROCm/rocJPEG/) | [MIT](https://github.com/ROCm/rocJPEG/blob/develop/LICENSE) |
| [ROCK-Kernel-Driver](https://github.com/ROCm/ROCK-Kernel-Driver/) | [GPL 2.0 WITH Linux-syscall-note](https://github.com/ROCm/ROCK-Kernel-Driver/blob/master/COPYING) | | [ROCK-Kernel-Driver](https://github.com/ROCm/ROCK-Kernel-Driver/) | [GPL 2.0 WITH Linux-syscall-note](https://github.com/ROCm/ROCK-Kernel-Driver/blob/master/COPYING) |
| [rocminfo](https://github.com/ROCm/rocminfo/) | [The University of Illinois/NCSA](https://github.com/ROCm/rocminfo/blob/amd-staging/License.txt) | | [rocminfo](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocminfo/) | [The University of Illinois/NCSA](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocminfo/License.txt) |
| [ROCm Bandwidth Test](https://github.com/ROCm/rocm_bandwidth_test/) | [MIT](https://github.com/ROCm/rocm_bandwidth_test/blob/master/LICENSE.txt) | | [ROCm Bandwidth Test](https://github.com/ROCm/rocm_bandwidth_test/) | [MIT](https://github.com/ROCm/rocm_bandwidth_test/blob/master/LICENSE.txt) |
| [ROCm CMake](https://github.com/ROCm/rocm-cmake/) | [MIT](https://github.com/ROCm/rocm-cmake/blob/develop/LICENSE) | | [ROCm CMake](https://github.com/ROCm/rocm-cmake/) | [MIT](https://github.com/ROCm/rocm-cmake/blob/develop/LICENSE) |
| [ROCm Communication Collectives Library (RCCL)](https://github.com/ROCm/rccl/) | [Custom](https://github.com/ROCm/rccl/blob/develop/LICENSE.txt) | | [ROCm Communication Collectives Library (RCCL)](https://github.com/ROCm/rccl/) | [Custom](https://github.com/ROCm/rccl/blob/develop/LICENSE.txt) |
| [ROCm-Core](https://github.com/ROCm/rocm-core) | [MIT](https://github.com/ROCm/rocm-core/blob/master/copyright) | | [ROCm-Core](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocm-core/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocm-core/LICENSE.md) |
| [ROCm Compute Profiler](https://github.com/ROCm/rocprofiler-compute) | [MIT](https://github.com/ROCm/rocprofiler-compute/blob/amd-staging/LICENSE) | | [ROCm Compute Profiler](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocprofiler-compute/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocprofiler-compute/LICENSE.md) |
| [ROCm Data Center (RDC)](https://github.com/ROCm/rdc/) | [MIT](https://github.com/ROCm/rdc/blob/amd-staging/LICENSE.md) | | [ROCm Data Center (RDC)](https://github.com/ROCm/rocm-systems/tree/develop/projects/rdc/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rdc/LICENSE.md) |
| [ROCm-Device-Libs](https://github.com/ROCm/llvm-project/tree/amd-staging/amd/device-libs) | [The University of Illinois/NCSA](https://github.com/ROCm/llvm-project/blob/amd-staging/amd/device-libs/LICENSE.TXT) | | [ROCm-Device-Libs](https://github.com/ROCm/llvm-project/tree/amd-staging/amd/device-libs) | [The University of Illinois/NCSA](https://github.com/ROCm/llvm-project/blob/amd-staging/amd/device-libs/LICENSE.TXT) |
| [ROCm-OpenCL-Runtime](https://github.com/ROCm/clr/tree/amd-staging/opencl) | [MIT](https://github.com/ROCm/clr/blob/amd-staging/opencl/LICENSE.txt) | | [ROCm-OpenCL-Runtime](https://github.com/ROCm/rocm-systems/tree/develop/projects/clr/opencl/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/clr/opencl/LICENSE.md) |
| [ROCm Performance Primitives (RPP)](https://github.com/ROCm/rpp) | [MIT](https://github.com/ROCm/rpp/blob/develop/LICENSE) | | [ROCm Performance Primitives (RPP)](https://github.com/ROCm/rpp) | [MIT](https://github.com/ROCm/rpp/blob/develop/LICENSE) |
| [ROCm SMI Lib](https://github.com/ROCm/rocm_smi_lib/) | [MIT](https://github.com/ROCm/rocm_smi_lib/blob/amd-staging/LICENSE.md) | | [ROCm SMI Lib](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocm-smi-lib/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocm-smi-lib/LICENSE.md) |
| [ROCm Systems Profiler](https://github.com/ROCm/rocprofiler-systems) | [MIT](https://github.com/ROCm/rocprofiler-systems/blob/amd-staging/LICENSE.md) | | [ROCm Systems Profiler](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocprofiler-systems/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocprofiler-systems/LICENSE.md) |
| [ROCm Validation Suite](https://github.com/ROCm/ROCmValidationSuite/) | [MIT](https://github.com/ROCm/ROCmValidationSuite/blob/master/LICENSE) | | [ROCm Validation Suite](https://github.com/ROCm/ROCmValidationSuite/) | [MIT](https://github.com/ROCm/ROCmValidationSuite/blob/master/LICENSE) |
| [rocPRIM](https://github.com/ROCm/rocPRIM/) | [MIT](https://github.com/ROCm/rocPRIM/blob/develop/LICENSE.txt) | | [rocPRIM](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocprim/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocprim/LICENSE.md) |
| [ROCProfiler](https://github.com/ROCm/rocprofiler/) | [MIT](https://github.com/ROCm/rocprofiler/blob/amd-staging/LICENSE.md) | | [ROCProfiler](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocprofiler/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocprofiler/LICENSE.md) |
| [ROCprofiler-SDK](https://github.com/ROCm/rocprofiler-sdk) | [MIT](https://github.com/ROCm/rocprofiler-sdk/blob/amd-mainline/LICENSE) | | [ROCprofiler-SDK](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocprofiler-sdk/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocprofiler-sdk/LICENSE.md) |
| [rocPyDecode](https://github.com/ROCm/rocPyDecode) | [MIT](https://github.com/ROCm/rocPyDecode/blob/develop/LICENSE.txt) | | [rocPyDecode](https://github.com/ROCm/rocPyDecode) | [MIT](https://github.com/ROCm/rocPyDecode/blob/develop/LICENSE.txt) |
| [rocRAND](https://github.com/ROCm/rocRAND/) | [MIT](https://github.com/ROCm/rocRAND/blob/develop/LICENSE.txt) | | [rocRAND](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocrand/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocrand/LICENSE.md) |
| [ROCr Debug Agent](https://github.com/ROCm/rocr_debug_agent/) | [The University of Illinois/NCSA](https://github.com/ROCm/rocr_debug_agent/blob/amd-staging/LICENSE.txt) | | [ROCr Debug Agent](https://github.com/ROCm/rocr_debug_agent/) | [The University of Illinois/NCSA](https://github.com/ROCm/rocr_debug_agent/blob/amd-staging/LICENSE.txt) |
| [ROCR-Runtime](https://github.com/ROCm/ROCR-Runtime/) | [The University of Illinois/NCSA](https://github.com/ROCm/ROCR-Runtime/blob/amd-staging/LICENSE.txt) | | [ROCR-Runtime](https://github.com/ROCm/rocm-systems/tree/develop/projects/rocr-runtime/) | [The University of Illinois/NCSA](https://github.com/ROCm/rocm-systems/blob/develop/projects/rocr-runtime/LICENSE.txt) |
| [rocSHMEM](https://github.com/ROCm/rocSHMEM/) | [MIT](https://github.com/ROCm/rocSHMEM/blob/develop/LICENSE.md) | | [rocSHMEM](https://github.com/ROCm/rocSHMEM/) | [MIT](https://github.com/ROCm/rocSHMEM/blob/develop/LICENSE.md) |
| [rocSOLVER](https://github.com/ROCm/rocSOLVER/) | [BSD-2-Clause](https://github.com/ROCm/rocSOLVER/blob/develop/LICENSE.md) | | [rocSOLVER](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocsolver/) | [BSD-2-Clause](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocsolver/LICENSE.md) |
| [rocSPARSE](https://github.com/ROCm/rocSPARSE/) | [MIT](https://github.com/ROCm/rocSPARSE/blob/develop/LICENSE.md) | | [rocSPARSE](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocsparse/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocsparse/LICENSE.md) |
| [rocThrust](https://github.com/ROCm/rocThrust/) | [Apache 2.0](https://github.com/ROCm/rocThrust/blob/develop/LICENSE) | | [rocThrust](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocthrust/) | [Apache 2.0](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocthrust/LICENSE) |
| [ROCTracer](https://github.com/ROCm/roctracer/) | [MIT](https://github.com/ROCm/roctracer/blob/amd-master/LICENSE) | | [ROCTracer](https://github.com/ROCm/rocm-systems/tree/develop/projects/roctracer/) | [MIT](https://github.com/ROCm/rocm-systems/blob/develop/projects/roctracer/LICENSE.md) |
| [rocWMMA](https://github.com/ROCm/rocWMMA/) | [MIT](https://github.com/ROCm/rocWMMA/blob/develop/LICENSE.md) | | [rocWMMA](https://github.com/ROCm/rocm-libraries/tree/develop/projects/rocwmma/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/projects/rocwmma/LICENSE.md) |
| [Tensile](https://github.com/ROCm/Tensile/) | [MIT](https://github.com/ROCm/Tensile/blob/develop/LICENSE.md) | | [Tensile](https://github.com/ROCm/rocm-libraries/tree/develop/shared/tensile/) | [MIT](https://github.com/ROCm/rocm-libraries/blob/develop/shared/tensile/LICENSE.md) |
| [TransferBench](https://github.com/ROCm/TransferBench) | [MIT](https://github.com/ROCm/TransferBench/blob/develop/LICENSE.md) | | [TransferBench](https://github.com/ROCm/TransferBench) | [MIT](https://github.com/ROCm/TransferBench/blob/develop/LICENSE.md) |
Open-sourced ROCm components are released via public GitHub.


@@ -1,137 +1,136 @@
ROCm Version,7.0.1/7.0.0,6.4.3,6.4.2,6.4.1,6.4.0,6.3.3,6.3.2,6.3.1,6.3.0,6.2.4,6.2.2,6.2.1,6.2.0, 6.1.5, 6.1.2, 6.1.1, 6.1.0, 6.0.2, 6.0.0 ROCm Version,7.1.1,7.1.0,7.0.2,7.0.1/7.0.0,6.4.3,6.4.2,6.4.1,6.4.0,6.3.3,6.3.2,6.3.1,6.3.0,6.2.4,6.2.2,6.2.1,6.2.0, 6.1.5, 6.1.2, 6.1.1, 6.1.0, 6.0.2, 6.0.0
:ref:`Operating systems & kernels <OS-kernel-versions>`,Ubuntu 24.04.3,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,"Ubuntu 24.04.1, 24.04","Ubuntu 24.04.1, 24.04","Ubuntu 24.04.1, 24.04",Ubuntu 24.04,,,,,, :ref:`Operating systems & kernels <OS-kernel-versions>` [#os-compatibility-past-60]_,Ubuntu 24.04.3,Ubuntu 24.04.3,Ubuntu 24.04.3,Ubuntu 24.04.3,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04.2,"Ubuntu 24.04.1, 24.04","Ubuntu 24.04.1, 24.04","Ubuntu 24.04.1, 24.04",Ubuntu 24.04,,,,,,
,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,"Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3, 22.04.2","Ubuntu 22.04.4, 22.04.3, 22.04.2" ,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5,"Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3, 22.04.2","Ubuntu 22.04.4, 22.04.3, 22.04.2"
,,,,,,,,,,,,,,"Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5" ,,,,,,,,,,,,,,,,,"Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5"
,"RHEL 9.6, 9.4","RHEL 9.6, 9.4","RHEL 9.6, 9.4","RHEL 9.6, 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4, 9.3, 9.2","RHEL 9.4, 9.3, 9.2","RHEL 9.4, 9.3, 9.2","RHEL 9.4, 9.3, 9.2","RHEL 9.3, 9.2","RHEL 9.3, 9.2" ,"RHEL 10.1, 10.0, 9.7, 9.6, 9.4","RHEL 10.0, 9.6, 9.4","RHEL 10.0, 9.6, 9.4","RHEL 9.6, 9.4","RHEL 9.6, 9.4","RHEL 9.6, 9.4","RHEL 9.6, 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4, 9.3, 9.2","RHEL 9.4, 9.3, 9.2","RHEL 9.4, 9.3, 9.2","RHEL 9.4, 9.3, 9.2","RHEL 9.3, 9.2","RHEL 9.3, 9.2"
,RHEL 8.10 [#rhel-700-past-60]_,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,"RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8" ,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,RHEL 8.10,"RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8"
,SLES 15 SP7 [#sles-db-700-past-60]_,"SLES 15 SP7, SP6","SLES 15 SP7, SP6",SLES 15 SP6,SLES 15 SP6,"SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4" ,SLES 15 SP7,SLES 15 SP7,SLES 15 SP7,SLES 15 SP7,"SLES 15 SP7, SP6","SLES 15 SP7, SP6",SLES 15 SP6,SLES 15 SP6,"SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4"
,,,,,,,,,,,,,,,CentOS 7.9,CentOS 7.9,CentOS 7.9,CentOS 7.9,CentOS 7.9 ,,,,,,,,,,,,,,,,,,CentOS 7.9,CentOS 7.9,CentOS 7.9,CentOS 7.9,CentOS 7.9
,"Oracle Linux 9, 8 [#ol-700-mi300x-past-60]_","Oracle Linux 9, 8 [#mi300x-past-60]_","Oracle Linux 9, 8 [#mi300x-past-60]_","Oracle Linux 9, 8 [#mi300x-past-60]_","Oracle Linux 9, 8 [#mi300x-past-60]_",Oracle Linux 8.10 [#mi300x-past-60]_,Oracle Linux 8.10 [#mi300x-past-60]_,Oracle Linux 8.10 [#mi300x-past-60]_,Oracle Linux 8.10 [#mi300x-past-60]_,Oracle Linux 8.9 [#mi300x-past-60]_,Oracle Linux 8.9 [#mi300x-past-60]_,Oracle Linux 8.9 [#mi300x-past-60]_,Oracle Linux 8.9 [#mi300x-past-60]_,Oracle Linux 8.9 [#mi300x-past-60]_,Oracle Linux 8.9 [#mi300x-past-60]_,Oracle Linux 8.9 [#mi300x-past-60]_,,, ,"Oracle Linux 10, 9, 8","Oracle Linux 10, 9, 8","Oracle Linux 10, 9, 8","Oracle Linux 9, 8","Oracle Linux 9, 8","Oracle Linux 9, 8","Oracle Linux 9, 8","Oracle Linux 9, 8",Oracle Linux 8.10,Oracle Linux 8.10,Oracle Linux 8.10,Oracle Linux 8.10,Oracle Linux 8.9,Oracle Linux 8.9,Oracle Linux 8.9,Oracle Linux 8.9,Oracle Linux 8.9,Oracle Linux 8.9,Oracle Linux 8.9,,,
,Debian 12 [#sles-db-700-past-60]_,Debian 12 [#single-node-past-60]_,Debian 12 [#single-node-past-60]_,Debian 12 [#single-node-past-60]_,Debian 12 [#single-node-past-60]_,Debian 12 [#single-node-past-60]_,Debian 12 [#single-node-past-60]_,Debian 12 [#single-node-past-60]_,,,,,,,,,,, ,"Debian 13, 12","Debian 13, 12","Debian 13, 12",Debian 12,Debian 12,Debian 12,Debian 12,Debian 12,Debian 12,Debian 12,Debian 12,,,,,,,,,,,
,Azure Linux 3.0 [#az-mi300x-past-60]_,Azure Linux 3.0 [#az-mi300x-past-60]_,Azure Linux 3.0 [#az-mi300x-past-60]_,Azure Linux 3.0 [#az-mi300x-past-60]_,Azure Linux 3.0 [#az-mi300x-past-60]_,Azure Linux 3.0 [#az-mi300x-630-past-60]_,Azure Linux 3.0 [#az-mi300x-630-past-60]_,,,,,,,,,,,, ,,,Azure Linux 3.0,Azure Linux 3.0,Azure Linux 3.0,Azure Linux 3.0,Azure Linux 3.0,Azure Linux 3.0,Azure Linux 3.0,Azure Linux 3.0,,,,,,,,,,,,
,Rocky Linux 9 [#rl-700-past-60]_,,,,,,,,,,,,,,,,,, ,Rocky Linux 9,Rocky Linux 9,Rocky Linux 9,Rocky Linux 9,,,,,,,,,,,,,,,,,,
,.. _architecture-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,, ,.. _architecture-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
:doc:`Architecture <rocm-install-on-linux:reference/system-requirements>`,CDNA4,,,,,,,,,,,,,,,,,, :doc:`Architecture <rocm-install-on-linux:reference/system-requirements>`,CDNA4,CDNA4,CDNA4,CDNA4,,,,,,,,,,,,,,,,,,
,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3 ,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3
,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2 ,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2
,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA ,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA
,RDNA4,RDNA4,RDNA4,RDNA4,,,,,,,,,,,,,,, ,RDNA4,RDNA4,RDNA4,RDNA4,RDNA4,RDNA4,RDNA4,,,,,,,,,,,,,,,
,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3 ,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3
,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2 ,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2
,.. _gpu-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,, ,.. _gpu-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
:doc:`GPU / LLVM target <rocm-install-on-linux:reference/system-requirements>`,gfx950 [#mi350x-os-past-60]_,,,,,,,,,,,,,,,,,, :doc:`GPU / LLVM target <rocm-install-on-linux:reference/system-requirements>` [#gpu-compatibility-past-60]_,gfx950,gfx950,gfx950,gfx950,,,,,,,,,,,,,,,,,,
,gfx1201 [#RDNA-OS-700-past-60]_,gfx1201 [#RDNA-OS-past-60]_,gfx1201 [#RDNA-OS-past-60]_,gfx1201 [#RDNA-OS-past-60]_,,,,,,,,,,,,,,, ,gfx1201,gfx1201,gfx1201,gfx1201,gfx1201,gfx1201,gfx1201,,,,,,,,,,,,,,,
,gfx1200 [#RDNA-OS-700-past-60]_,gfx1200 [#RDNA-OS-past-60]_,gfx1200 [#RDNA-OS-past-60]_,gfx1200 [#RDNA-OS-past-60]_,,,,,,,,,,,,,,, ,gfx1200,gfx1200,gfx1200,gfx1200,gfx1200,gfx1200,gfx1200,,,,,,,,,,,,,,,
,gfx1101 [#RDNA-OS-700-past-60]_ [#rd-v710-past-60]_,gfx1101 [#RDNA-OS-past-60]_ [#7700XT-OS-past-60]_,gfx1101 [#RDNA-OS-past-60]_ [#7700XT-OS-past-60]_,gfx1101 [#RDNA-OS-past-60]_,,,,,,,,,,,,,,, ,gfx1101,gfx1101,gfx1101,gfx1101,gfx1101,gfx1101,gfx1101,,,,,,,,,,,,,,,
,gfx1100 [#RDNA-OS-700-past-60]_,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100 ,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100
,gfx1030 [#RDNA-OS-700-past-60]_ [#rd-v620-past-60]_,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030 ,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030
,gfx942 [#mi325x-os-past-60]_ [#mi300x-os-past-60]_ [#mi300A-os-past-60]_,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942 [#mi300_624-past-60]_,gfx942 [#mi300_622-past-60]_,gfx942 [#mi300_621-past-60]_,gfx942 [#mi300_620-past-60]_, gfx942 [#mi300_612-past-60]_, gfx942 [#mi300_612-past-60]_, gfx942 [#mi300_611-past-60]_, gfx942 [#mi300_610-past-60]_, gfx942 [#mi300_602-past-60]_, gfx942 [#mi300_600-past-60]_ ,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942,gfx942, gfx942, gfx942, gfx942, gfx942, gfx942, gfx942
,gfx90a [#mi200x-os-past-60]_,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a ,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a
,gfx908 [#mi100-os-past-60]_,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908 ,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908
,,,,,,,,,,,,,,,,,,, ,,,,,,,,,,,,,,,,,,,,,,
FRAMEWORK SUPPORT,.. _framework-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,, FRAMEWORK SUPPORT,.. _framework-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
:doc:`PyTorch <../compatibility/ml-compatibility/pytorch-compatibility>`,"2.7, 2.6, 2.5, 2.4, 2.3","2.6, 2.5, 2.4, 2.3","2.6, 2.5, 2.4, 2.3","2.6, 2.5, 2.4, 2.3","2.6, 2.5, 2.4, 2.3","2.4, 2.3, 2.2, 1.13","2.4, 2.3, 2.2, 1.13","2.4, 2.3, 2.2, 1.13","2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13" :doc:`PyTorch <../compatibility/ml-compatibility/pytorch-compatibility>`,"2.9, 2.8, 2.7","2.8, 2.7, 2.6","2.8, 2.7, 2.6","2.7, 2.6, 2.5","2.6, 2.5, 2.4, 2.3","2.6, 2.5, 2.4, 2.3","2.6, 2.5, 2.4, 2.3","2.6, 2.5, 2.4, 2.3","2.4, 2.3, 2.2, 1.13","2.4, 2.3, 2.2, 1.13","2.4, 2.3, 2.2, 1.13","2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13"
:doc:`TensorFlow <../compatibility/ml-compatibility/tensorflow-compatibility>`,"2.19.1, 2.18.1","2.18.1, 2.17.1, 2.16.2","2.18.1, 2.17.1, 2.16.2","2.18.1, 2.17.1, 2.16.2","2.18.1, 2.17.1, 2.16.2","2.17.0, 2.16.2, 2.15.1","2.17.0, 2.16.2, 2.15.1","2.17.0, 2.16.2, 2.15.1","2.17.0, 2.16.2, 2.15.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.14.0, 2.13.1, 2.12.1","2.14.0, 2.13.1, 2.12.1" :doc:`TensorFlow <../compatibility/ml-compatibility/tensorflow-compatibility>`,"2.20.0, 2.19.1, 2.18.1","2.20.0, 2.19.1, 2.18.1","2.19.1, 2.18.1, 2.17.1 [#tf-mi350-past-60]_","2.19.1, 2.18.1, 2.17.1 [#tf-mi350-past-60]_","2.18.1, 2.17.1, 2.16.2","2.18.1, 2.17.1, 2.16.2","2.18.1, 2.17.1, 2.16.2","2.18.1, 2.17.1, 2.16.2","2.17.0, 2.16.2, 2.15.1","2.17.0, 2.16.2, 2.15.1","2.17.0, 2.16.2, 2.15.1","2.17.0, 2.16.2, 2.15.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.14.0, 2.13.1, 2.12.1","2.14.0, 2.13.1, 2.12.1"
:doc:`JAX <../compatibility/ml-compatibility/jax-compatibility>`,0.6.0,0.4.35,0.4.35,0.4.35,0.4.35,0.4.31,0.4.31,0.4.31,0.4.31,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26 :doc:`JAX <../compatibility/ml-compatibility/jax-compatibility>`,0.7.1,0.7.1,0.6.0,0.6.0,0.4.35,0.4.35,0.4.35,0.4.35,0.4.31,0.4.31,0.4.31,0.4.31,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26
:doc:`verl <../compatibility/ml-compatibility/verl-compatibility>` [#verl_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,0.3.0.post0,N/A,N/A,N/A,N/A,N/A,N/A :doc:`verl <../compatibility/ml-compatibility/verl-compatibility>` [#verl_compat-past-60]_,N/A,N/A,N/A,0.6.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,0.3.0.post0,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`Stanford Megatron-LM <../compatibility/ml-compatibility/stanford-megatron-lm-compatibility>` [#stanford-megatron-lm_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,85f95ae,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A :doc:`Stanford Megatron-LM <../compatibility/ml-compatibility/stanford-megatron-lm-compatibility>` [#stanford-megatron-lm_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,85f95ae,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`DGL <../compatibility/ml-compatibility/dgl-compatibility>` [#dgl_compat-past-60]_,N/A,N/A,N/A,N/A,2.4.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A :doc:`DGL <../compatibility/ml-compatibility/dgl-compatibility>` [#dgl_compat-past-60]_,N/A,N/A,N/A,2.4.0,2.4.0,N/A,N/A,2.4.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`Megablocks <../compatibility/ml-compatibility/megablocks-compatibility>` [#megablocks_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,0.7.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A :doc:`Megablocks <../compatibility/ml-compatibility/megablocks-compatibility>` [#megablocks_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,0.7.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`Taichi <../compatibility/ml-compatibility/taichi-compatibility>` [#taichi_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,1.8.0b1,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A :doc:`Ray <../compatibility/ml-compatibility/ray-compatibility>` [#ray_compat-past-60]_,N/A,N/A,N/A,2.51.1,N/A,N/A,2.48.0.post0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`Ray <../compatibility/ml-compatibility/ray-compatibility>` [#ray_compat-past-60]_,N/A,N/A,N/A,2.48.0.post0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A :doc:`llama.cpp <../compatibility/ml-compatibility/llama-cpp-compatibility>` [#llama-cpp_compat-past-60]_,N/A,N/A,N/A,b6652,b6356,b6356,b6356,b5997,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`llama.cpp <../compatibility/ml-compatibility/llama-cpp-compatibility>` [#llama-cpp_compat-past-60]_,b6356,b6356,b6356,b6356,b5997,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A :doc:`FlashInfer <../compatibility/ml-compatibility/flashinfer-compatibility>` [#flashinfer_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,v0.2.5,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`FlashInfer <../compatibility/ml-compatibility/flashinfer-compatibility>` [#flashinfer_compat-past-60]_,N/A,N/A,N/A,v0.2.5,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A `ONNX Runtime <https://onnxruntime.ai/docs/build/eps.html#amd-migraphx>`_,1.23.1,1.22.0,1.22.0,1.22.0,1.20.0,1.20.0,1.20.0,1.20.0,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.14.1,1.14.1
`ONNX Runtime <https://onnxruntime.ai/docs/build/eps.html#amd-migraphx>`_,1.22.0,1.20.0,1.20.0,1.20.0,1.20.0,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.14.1,1.14.1 ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, THIRD PARTY COMMS,.. _thirdpartycomms-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
THIRD PARTY COMMS,.. _thirdpartycomms-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,, `UCC <https://github.com/ROCm/ucc>`_,>=1.4.0,>=1.4.0,>=1.4.0,>=1.4.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.2.0,>=1.2.0
`UCC <https://github.com/ROCm/ucc>`_,>=1.4.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.2.0,>=1.2.0 `UCX <https://github.com/ROCm/ucx>`_,>=1.17.0,>=1.17.0,>=1.17.0,>=1.17.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.14.1,>=1.14.1,>=1.14.1,>=1.14.1,>=1.14.1,>=1.14.1
`UCX <https://github.com/ROCm/ucx>`_,>=1.17.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.14.1,>=1.14.1,>=1.14.1,>=1.14.1,>=1.14.1,>=1.14.1 ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, THIRD PARTY ALGORITHM,.. _thirdpartyalgorithm-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
THIRD PARTY ALGORITHM,.. _thirdpartyalgorithm-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,, Thrust,2.8.5,2.8.5,2.6.0,2.6.0,2.5.0,2.5.0,2.5.0,2.5.0,2.3.2,2.3.2,2.3.2,2.3.2,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.1.0,2.0.1,2.0.1
Thrust,2.6.0,2.5.0,2.5.0,2.5.0,2.5.0,2.3.2,2.3.2,2.3.2,2.3.2,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.1.0,2.0.1,2.0.1 CUB,2.8.5,2.8.5,2.6.0,2.6.0,2.5.0,2.5.0,2.5.0,2.5.0,2.3.2,2.3.2,2.3.2,2.3.2,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.1.0,2.0.1,2.0.1
CUB,2.6.0,2.5.0,2.5.0,2.5.0,2.5.0,2.3.2,2.3.2,2.3.2,2.3.2,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.1.0,2.0.1,2.0.1 ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, DRIVER & USER SPACE [#kfd_support-past-60]_,.. _kfd-userspace-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
DRIVER & USER SPACE [#kfd_support-past-60]_,.. _kfd-userspace-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,, :doc:`AMD GPU Driver <rocm-install-on-linux:reference/user-kernel-space-compat-matrix>`,"30.20.1, 30.20.0 [#mi325x_KVM-past-60]_, 30.10.2, 30.10.1 [#driver_patch-past-60]_, 30.10, 6.4.x","30.20.0 [#mi325x_KVM-past-60]_, 30.10.2, 30.10.1 [#driver_patch-past-60]_, 30.10, 6.4.x","30.10.2, 30.10.1 [#driver_patch-past-60]_, 30.10, 6.4.x, 6.3.x","30.10.1 [#driver_patch-past-60]_, 30.10, 6.4.x, 6.3.x, 6.2.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.2.x, 6.1.x, 6.0.x, 5.7.x, 5.6.x","6.2.x, 6.1.x, 6.0.x, 5.7.x, 5.6.x"
:doc:`AMD GPU Driver <rocm-install-on-linux:reference/user-kernel-space-compat-matrix>`,"30.10.1 [#driver_patch-past-60]_, 30.10, 6.4.x, 6.3.x, 6.2.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.4.x, 6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.2.x, 6.1.x, 6.0.x, 5.7.x, 5.6.x","6.2.x, 6.1.x, 6.0.x, 5.7.x, 5.6.x" ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, ML & COMPUTER VISION,.. _mllibs-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
ML & COMPUTER VISION,.. _mllibs-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,, :doc:`Composable Kernel <composable_kernel:index>`,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0
:doc:`Composable Kernel <composable_kernel:index>`,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0 :doc:`MIGraphX <amdmigraphx:index>`,2.14.0,2.14.0,2.13.0,2.13.0,2.12.0,2.12.0,2.12.0,2.12.0,2.11.0,2.11.0,2.11.0,2.11.0,2.10.0,2.10.0,2.10.0,2.10.0,2.9.0,2.9.0,2.9.0,2.9.0,2.8.0,2.8.0
:doc:`MIGraphX <amdmigraphx:index>`,2.13.0,2.12.0,2.12.0,2.12.0,2.12.0,2.11.0,2.11.0,2.11.0,2.11.0,2.10.0,2.10.0,2.10.0,2.10.0,2.9.0,2.9.0,2.9.0,2.9.0,2.8.0,2.8.0 :doc:`MIOpen <miopen:index>`,3.5.1,3.5.1,3.5.0,3.5.0,3.4.0,3.4.0,3.4.0,3.4.0,3.3.0,3.3.0,3.3.0,3.3.0,3.2.0,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0
:doc:`MIOpen <miopen:index>`,3.5.0,3.4.0,3.4.0,3.4.0,3.4.0,3.3.0,3.3.0,3.3.0,3.3.0,3.2.0,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0 :doc:`MIVisionX <mivisionx:index>`,3.4.0,3.4.0,3.3.0,3.3.0,3.2.0,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0,3.0.0,3.0.0,2.5.0,2.5.0,2.5.0,2.5.0,2.5.0,2.5.0
:doc:`MIVisionX <mivisionx:index>`,3.3.0,3.2.0,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0,3.0.0,3.0.0,2.5.0,2.5.0,2.5.0,2.5.0,2.5.0,2.5.0 :doc:`rocAL <rocal:index>`,2.4.0,2.4.0,2.3.0,2.3.0,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.1.0,2.0.0,2.0.0,2.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0
:doc:`rocAL <rocal:index>`,2.3.0,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.1.0,2.0.0,2.0.0,2.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0 :doc:`rocDecode <rocdecode:index>`,1.4.0,1.4.0,1.0.0,1.0.0,0.10.0,0.10.0,0.10.0,0.10.0,0.8.0,0.8.0,0.8.0,0.8.0,0.6.0,0.6.0,0.6.0,0.6.0,0.6.0,0.6.0,0.5.0,0.5.0,N/A,N/A
:doc:`rocDecode <rocdecode:index>`,1.0.0,0.10.0,0.10.0,0.10.0,0.10.0,0.8.0,0.8.0,0.8.0,0.8.0,0.6.0,0.6.0,0.6.0,0.6.0,0.6.0,0.6.0,0.5.0,0.5.0,N/A,N/A :doc:`rocJPEG <rocjpeg:index>`,1.2.0,1.2.0,1.1.0,1.1.0,0.8.0,0.8.0,0.8.0,0.8.0,0.6.0,0.6.0,0.6.0,0.6.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`rocJPEG <rocjpeg:index>`,1.1.0,0.8.0,0.8.0,0.8.0,0.8.0,0.6.0,0.6.0,0.6.0,0.6.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A :doc:`rocPyDecode <rocpydecode:index>`,0.7.0,0.7.0,0.6.0,0.6.0,0.3.1,0.3.1,0.3.1,0.3.1,0.2.0,0.2.0,0.2.0,0.2.0,0.1.0,0.1.0,0.1.0,0.1.0,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`rocPyDecode <rocpydecode:index>`,0.6.0,0.3.1,0.3.1,0.3.1,0.3.1,0.2.0,0.2.0,0.2.0,0.2.0,0.1.0,0.1.0,0.1.0,0.1.0,N/A,N/A,N/A,N/A,N/A,N/A :doc:`RPP <rpp:index>`,2.1.0,2.1.0,2.0.0,2.0.0,1.9.10,1.9.10,1.9.10,1.9.10,1.9.1,1.9.1,1.9.1,1.9.1,1.8.0,1.8.0,1.8.0,1.8.0,1.5.0,1.5.0,1.5.0,1.5.0,1.4.0,1.4.0
:doc:`RPP <rpp:index>`,2.0.0,1.9.10,1.9.10,1.9.10,1.9.10,1.9.1,1.9.1,1.9.1,1.9.1,1.8.0,1.8.0,1.8.0,1.8.0,1.5.0,1.5.0,1.5.0,1.5.0,1.4.0,1.4.0 ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, COMMUNICATION,.. _commlibs-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
COMMUNICATION,.. _commlibs-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,, :doc:`RCCL <rccl:index>`,2.27.7,2.27.7,2.26.6,2.26.6,2.22.3,2.22.3,2.22.3,2.22.3,2.21.5,2.21.5,2.21.5,2.21.5,2.20.5,2.20.5,2.20.5,2.20.5,2.18.6,2.18.6,2.18.6,2.18.6,2.18.3,2.18.3
:doc:`RCCL <rccl:index>`,2.26.6,2.22.3,2.22.3,2.22.3,2.22.3,2.21.5,2.21.5,2.21.5,2.21.5,2.20.5,2.20.5,2.20.5,2.20.5,2.18.6,2.18.6,2.18.6,2.18.6,2.18.3,2.18.3 :doc:`rocSHMEM <rocshmem:index>`,3.1.0,3.0.0,3.0.0,3.0.0,2.0.1,2.0.1,2.0.0,2.0.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`rocSHMEM <rocshmem:index>`,3.0.0,2.0.1,2.0.1,2.0.0,2.0.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, MATH LIBS,.. _mathlibs-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
MATH LIBS,.. _mathlibs-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,, `half <https://github.com/ROCm/half>`_ ,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0
`half <https://github.com/ROCm/half>`_ ,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0 :doc:`hipBLAS <hipblas:index>`,3.1.0,3.1.0,3.0.2,3.0.0,2.4.0,2.4.0,2.4.0,2.4.0,2.3.0,2.3.0,2.3.0,2.3.0,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.1.0,2.0.0,2.0.0
:doc:`hipBLAS <hipblas:index>`,3.0.0,2.4.0,2.4.0,2.4.0,2.4.0,2.3.0,2.3.0,2.3.0,2.3.0,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.1.0,2.0.0,2.0.0 :doc:`hipBLASLt <hipblaslt:index>`,1.1.0,1.1.0,1.0.0,1.0.0,0.12.1,0.12.1,0.12.1,0.12.0,0.10.0,0.10.0,0.10.0,0.10.0,0.8.0,0.8.0,0.8.0,0.8.0,0.7.0,0.7.0,0.7.0,0.7.0,0.6.0,0.6.0
:doc:`hipBLASLt <hipblaslt:index>`,1.0.0,0.12.1,0.12.1,0.12.1,0.12.0,0.10.0,0.10.0,0.10.0,0.10.0,0.8.0,0.8.0,0.8.0,0.8.0,0.7.0,0.7.0,0.7.0,0.7.0,0.6.0,0.6.0 :doc:`hipFFT <hipfft:index>`,1.0.21,1.0.21,1.0.20,1.0.20,1.0.18,1.0.18,1.0.18,1.0.18,1.0.17,1.0.17,1.0.17,1.0.17,1.0.16,1.0.15,1.0.15,1.0.14,1.0.14,1.0.14,1.0.14,1.0.14,1.0.13,1.0.13
:doc:`hipFFT <hipfft:index>`,1.0.20,1.0.18,1.0.18,1.0.18,1.0.18,1.0.17,1.0.17,1.0.17,1.0.17,1.0.16,1.0.15,1.0.15,1.0.14,1.0.14,1.0.14,1.0.14,1.0.14,1.0.13,1.0.13 :doc:`hipfort <hipfort:index>`,0.7.1,0.7.1,0.7.0,0.7.0,0.6.0,0.6.0,0.6.0,0.6.0,0.5.1,0.5.1,0.5.0,0.5.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0
:doc:`hipfort <hipfort:index>`,0.7.0,0.6.0,0.6.0,0.6.0,0.6.0,0.5.1,0.5.1,0.5.0,0.5.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0 :doc:`hipRAND <hiprand:index>`,3.1.0,3.1.0,3.0.0,3.0.0,2.12.0,2.12.0,2.12.0,2.12.0,2.11.1,2.11.1,2.11.1,2.11.0,2.11.1,2.11.0,2.11.0,2.11.0,2.10.16,2.10.16,2.10.16,2.10.16,2.10.16,2.10.16
:doc:`hipRAND <hiprand:index>`,3.0.0,2.12.0,2.12.0,2.12.0,2.12.0,2.11.1,2.11.1,2.11.1,2.11.0,2.11.1,2.11.0,2.11.0,2.11.0,2.10.16,2.10.16,2.10.16,2.10.16,2.10.16,2.10.16 :doc:`hipSOLVER <hipsolver:index>`,3.1.0,3.1.0,3.0.0,3.0.0,2.4.0,2.4.0,2.4.0,2.4.0,2.3.0,2.3.0,2.3.0,2.3.0,2.2.0,2.2.0,2.2.0,2.2.0,2.1.1,2.1.1,2.1.1,2.1.0,2.0.0,2.0.0
:doc:`hipSOLVER <hipsolver:index>`,3.0.0,2.4.0,2.4.0,2.4.0,2.4.0,2.3.0,2.3.0,2.3.0,2.3.0,2.2.0,2.2.0,2.2.0,2.2.0,2.1.1,2.1.1,2.1.1,2.1.0,2.0.0,2.0.0 :doc:`hipSPARSE <hipsparse:index>`,4.1.0,4.1.0,4.0.1,4.0.1,3.2.0,3.2.0,3.2.0,3.2.0,3.1.2,3.1.2,3.1.2,3.1.2,3.1.1,3.1.1,3.1.1,3.1.1,3.0.1,3.0.1,3.0.1,3.0.1,3.0.0,3.0.0
:doc:`hipSPARSE <hipsparse:index>`,4.0.1,3.2.0,3.2.0,3.2.0,3.2.0,3.1.2,3.1.2,3.1.2,3.1.2,3.1.1,3.1.1,3.1.1,3.1.1,3.0.1,3.0.1,3.0.1,3.0.1,3.0.0,3.0.0 :doc:`hipSPARSELt <hipsparselt:index>`,0.2.5,0.2.5,0.2.4,0.2.4,0.2.3,0.2.3,0.2.3,0.2.3,0.2.2,0.2.2,0.2.2,0.2.2,0.2.1,0.2.1,0.2.1,0.2.1,0.2.0,0.2.0,0.1.0,0.1.0,0.1.0,0.1.0
:doc:`hipSPARSELt <hipsparselt:index>`,0.2.4,0.2.3,0.2.3,0.2.3,0.2.3,0.2.2,0.2.2,0.2.2,0.2.2,0.2.1,0.2.1,0.2.1,0.2.1,0.2.0,0.2.0,0.1.0,0.1.0,0.1.0,0.1.0 :doc:`rocALUTION <rocalution:index>`,4.0.1,4.0.1,4.0.0,4.0.0,3.2.3,3.2.3,3.2.3,3.2.2,3.2.1,3.2.1,3.2.1,3.2.1,3.2.1,3.2.0,3.2.0,3.2.0,3.1.1,3.1.1,3.1.1,3.1.1,3.0.3,3.0.3
:doc:`rocALUTION <rocalution:index>`,4.0.0,3.2.3,3.2.3,3.2.3,3.2.2,3.2.1,3.2.1,3.2.1,3.2.1,3.2.1,3.2.0,3.2.0,3.2.0,3.1.1,3.1.1,3.1.1,3.1.1,3.0.3,3.0.3 :doc:`rocBLAS <rocblas:index>`,5.1.1,5.1.0,5.0.2,5.0.0,4.4.1,4.4.1,4.4.0,4.4.0,4.3.0,4.3.0,4.3.0,4.3.0,4.2.4,4.2.1,4.2.1,4.2.0,4.1.2,4.1.2,4.1.0,4.1.0,4.0.0,4.0.0
:doc:`rocBLAS <rocblas:index>`,5.0.0,4.4.1,4.4.1,4.4.0,4.4.0,4.3.0,4.3.0,4.3.0,4.3.0,4.2.4,4.2.1,4.2.1,4.2.0,4.1.2,4.1.2,4.1.0,4.1.0,4.0.0,4.0.0 :doc:`rocFFT <rocfft:index>`,1.0.35,1.0.35,1.0.34,1.0.34,1.0.32,1.0.32,1.0.32,1.0.32,1.0.31,1.0.31,1.0.31,1.0.31,1.0.30,1.0.29,1.0.29,1.0.28,1.0.27,1.0.27,1.0.27,1.0.26,1.0.25,1.0.23
:doc:`rocFFT <rocfft:index>`,1.0.34,1.0.32,1.0.32,1.0.32,1.0.32,1.0.31,1.0.31,1.0.31,1.0.31,1.0.30,1.0.29,1.0.29,1.0.28,1.0.27,1.0.27,1.0.27,1.0.26,1.0.25,1.0.23 :doc:`rocRAND <rocrand:index>`,4.1.0,4.1.0,4.0.0,4.0.0,3.3.0,3.3.0,3.3.0,3.3.0,3.2.0,3.2.0,3.2.0,3.2.0,3.1.1,3.1.0,3.1.0,3.1.0,3.0.1,3.0.1,3.0.1,3.0.1,3.0.0,2.10.17
:doc:`rocRAND <rocrand:index>`,4.0.0,3.3.0,3.3.0,3.3.0,3.3.0,3.2.0,3.2.0,3.2.0,3.2.0,3.1.1,3.1.0,3.1.0,3.1.0,3.0.1,3.0.1,3.0.1,3.0.1,3.0.0,2.10.17 :doc:`rocSOLVER <rocsolver:index>`,3.31.0,3.31.0,3.30.1,3.30.0,3.28.2,3.28.2,3.28.0,3.28.0,3.27.0,3.27.0,3.27.0,3.27.0,3.26.2,3.26.0,3.26.0,3.26.0,3.25.0,3.25.0,3.25.0,3.25.0,3.24.0,3.24.0
:doc:`rocSOLVER <rocsolver:index>`,3.30.0,3.28.2,3.28.2,3.28.0,3.28.0,3.27.0,3.27.0,3.27.0,3.27.0,3.26.2,3.26.0,3.26.0,3.26.0,3.25.0,3.25.0,3.25.0,3.25.0,3.24.0,3.24.0 :doc:`rocSPARSE <rocsparse:index>`,4.1.0,4.1.0,4.0.2,4.0.2,3.4.0,3.4.0,3.4.0,3.4.0,3.3.0,3.3.0,3.3.0,3.3.0,3.2.1,3.2.0,3.2.0,3.2.0,3.1.2,3.1.2,3.1.2,3.1.2,3.0.2,3.0.2
:doc:`rocSPARSE <rocsparse:index>`,4.0.2,3.4.0,3.4.0,3.4.0,3.4.0,3.3.0,3.3.0,3.3.0,3.3.0,3.2.1,3.2.0,3.2.0,3.2.0,3.1.2,3.1.2,3.1.2,3.1.2,3.0.2,3.0.2 :doc:`rocWMMA <rocwmma:index>`,2.1.0,2.0.0,2.0.0,2.0.0,1.7.0,1.7.0,1.7.0,1.7.0,1.6.0,1.6.0,1.6.0,1.6.0,1.5.0,1.5.0,1.5.0,1.5.0,1.4.0,1.4.0,1.4.0,1.4.0,1.3.0,1.3.0
:doc:`rocWMMA <rocwmma:index>`,2.0.0,1.7.0,1.7.0,1.7.0,1.7.0,1.6.0,1.6.0,1.6.0,1.6.0,1.5.0,1.5.0,1.5.0,1.5.0,1.4.0,1.4.0,1.4.0,1.4.0,1.3.0,1.3.0 :doc:`Tensile <tensile:src/index>`,4.44.0,4.44.0,4.44.0,4.44.0,4.43.0,4.43.0,4.43.0,4.43.0,4.42.0,4.42.0,4.42.0,4.42.0,4.41.0,4.41.0,4.41.0,4.41.0,4.40.0,4.40.0,4.40.0,4.40.0,4.39.0,4.39.0
:doc:`Tensile <tensile:src/index>`,4.44.0,4.43.0,4.43.0,4.43.0,4.43.0,4.42.0,4.42.0,4.42.0,4.42.0,4.41.0,4.41.0,4.41.0,4.41.0,4.40.0,4.40.0,4.40.0,4.40.0,4.39.0,4.39.0 ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, PRIMITIVES,.. _primitivelibs-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
PRIMITIVES,.. _primitivelibs-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,, :doc:`hipCUB <hipcub:index>`,4.1.0,4.1.0,4.0.0,4.0.0,3.4.0,3.4.0,3.4.0,3.4.0,3.3.0,3.3.0,3.3.0,3.3.0,3.2.1,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0
:doc:`hipCUB <hipcub:index>`,4.0.0,3.4.0,3.4.0,3.4.0,3.4.0,3.3.0,3.3.0,3.3.0,3.3.0,3.2.1,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0 :doc:`hipTensor <hiptensor:index>`,2.0.0,2.0.0,2.0.0,2.0.0,1.5.0,1.5.0,1.5.0,1.5.0,1.4.0,1.4.0,1.4.0,1.4.0,1.3.0,1.3.0,1.3.0,1.3.0,1.2.0,1.2.0,1.2.0,1.2.0,1.1.0,1.1.0
:doc:`hipTensor <hiptensor:index>`,2.0.0,1.5.0,1.5.0,1.5.0,1.5.0,1.4.0,1.4.0,1.4.0,1.4.0,1.3.0,1.3.0,1.3.0,1.3.0,1.2.0,1.2.0,1.2.0,1.2.0,1.1.0,1.1.0 :doc:`rocPRIM <rocprim:index>`,4.1.0,4.1.0,4.0.1,4.0.0,3.4.1,3.4.1,3.4.0,3.4.0,3.3.0,3.3.0,3.3.0,3.3.0,3.2.2,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0
:doc:`rocPRIM <rocprim:index>`,4.0.0,3.4.1,3.4.1,3.4.0,3.4.0,3.3.0,3.3.0,3.3.0,3.3.0,3.2.2,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0 :doc:`rocThrust <rocthrust:index>`,4.1.0,4.1.0,4.0.0,4.0.0,3.3.0,3.3.0,3.3.0,3.3.0,3.3.0,3.3.0,3.3.0,3.3.0,3.1.1,3.1.0,3.1.0,3.0.1,3.0.1,3.0.1,3.0.1,3.0.1,3.0.0,3.0.0
:doc:`rocThrust <rocthrust:index>`,4.0.0,3.3.0,3.3.0,3.3.0,3.3.0,3.3.0,3.3.0,3.3.0,3.3.0,3.1.1,3.1.0,3.1.0,3.0.1,3.0.1,3.0.1,3.0.1,3.0.1,3.0.0,3.0.0 ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, SUPPORT LIBS,,,,,,,,,,,,,,,,,,,,,,
SUPPORT LIBS,,,,,,,,,,,,,,,,,,, `hipother <https://github.com/ROCm/hipother>`_,7.1.52802,7.1.25424,7.0.51831,7.0.51830,6.4.43483,6.4.43483,6.4.43483,6.4.43482,6.3.42134,6.3.42134,6.3.42133,6.3.42131,6.2.41134,6.2.41134,6.2.41134,6.2.41133,6.1.40093,6.1.40093,6.1.40092,6.1.40091,6.1.32831,6.1.32830
`hipother <https://github.com/ROCm/hipother>`_,7.0.51830,6.4.43483,6.4.43483,6.4.43483,6.4.43482,6.3.42134,6.3.42134,6.3.42133,6.3.42131,6.2.41134,6.2.41134,6.2.41134,6.2.41133,6.1.40093,6.1.40093,6.1.40092,6.1.40091,6.1.32831,6.1.32830 `rocm-core <https://github.com/ROCm/rocm-core>`_,7.1.1,7.1.0,7.0.2,7.0.1/7.0.0,6.4.3,6.4.2,6.4.1,6.4.0,6.3.3,6.3.2,6.3.1,6.3.0,6.2.4,6.2.2,6.2.1,6.2.0,6.1.5,6.1.2,6.1.1,6.1.0,6.0.2,6.0.0
`rocm-core <https://github.com/ROCm/rocm-core>`_,7.0.1/7.0.0,6.4.3,6.4.2,6.4.1,6.4.0,6.3.3,6.3.2,6.3.1,6.3.0,6.2.4,6.2.2,6.2.1,6.2.0,6.1.5,6.1.2,6.1.1,6.1.0,6.0.2,6.0.0 `ROCT-Thunk-Interface <https://github.com/ROCm/ROCT-Thunk-Interface>`_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,20240607.5.7,20240607.5.7,20240607.4.05,20240607.1.4246,20240125.5.08,20240125.5.08,20240125.5.08,20240125.3.30,20231016.2.245,20231016.2.245
`ROCT-Thunk-Interface <https://github.com/ROCm/ROCT-Thunk-Interface>`_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,20240607.5.7,20240607.5.7,20240607.4.05,20240607.1.4246,20240125.5.08,20240125.5.08,20240125.5.08,20240125.3.30,20231016.2.245,20231016.2.245 ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, SYSTEM MGMT TOOLS,.. _tools-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
SYSTEM MGMT TOOLS,.. _tools-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,, :doc:`AMD SMI <amdsmi:index>`,26.2.0,26.1.0,26.0.2,26.0.0,25.5.1,25.5.1,25.4.2,25.3.0,24.7.1,24.7.1,24.7.1,24.7.1,24.6.3,24.6.3,24.6.3,24.6.2,24.5.1,24.5.1,24.5.1,24.4.1,23.4.2,23.4.2
:doc:`AMD SMI <amdsmi:index>`,26.0.0,25.5.1,25.5.1,25.4.2,25.3.0,24.7.1,24.7.1,24.7.1,24.7.1,24.6.3,24.6.3,24.6.3,24.6.2,24.5.1,24.5.1,24.5.1,24.4.1,23.4.2,23.4.2 :doc:`ROCm Data Center Tool <rdc:index>`,1.2.0,1.2.0,1.1.0,1.1.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0
:doc:`ROCm Data Center Tool <rdc:index>`,1.1.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0 :doc:`rocminfo <rocminfo:index>`,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0
:doc:`rocminfo <rocminfo:index>`,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0 :doc:`ROCm SMI <rocm_smi_lib:index>`,7.8.0,7.8.0,7.8.0,7.8.0,7.7.0,7.5.0,7.5.0,7.5.0,7.4.0,7.4.0,7.4.0,7.4.0,7.3.0,7.3.0,7.3.0,7.3.0,7.2.0,7.2.0,7.0.0,7.0.0,6.0.2,6.0.0
:doc:`ROCm SMI <rocm_smi_lib:index>`,7.8.0,7.7.0,7.5.0,7.5.0,7.5.0,7.4.0,7.4.0,7.4.0,7.4.0,7.3.0,7.3.0,7.3.0,7.3.0,7.2.0,7.2.0,7.0.0,7.0.0,6.0.2,6.0.0 :doc:`ROCm Validation Suite <rocmvalidationsuite:index>`,1.3.0,1.2.0,1.2.0,1.2.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.0.60204,1.0.60202,1.0.60201,1.0.60200,1.0.60105,1.0.60102,1.0.60101,1.0.60100,1.0.60002,1.0.60000
:doc:`ROCm Validation Suite <rocmvalidationsuite:index>`,1.2.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.0.60204,1.0.60202,1.0.60201,1.0.60200,1.0.60105,1.0.60102,1.0.60101,1.0.60100,1.0.60002,1.0.60000 ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, PERFORMANCE TOOLS,,,,,,,,,,,,,,,,,,,,,,
PERFORMANCE TOOLS,,,,,,,,,,,,,,,,,,, :doc:`ROCm Bandwidth Test <rocm_bandwidth_test:index>`,2.6.0,2.6.0,2.6.0,2.6.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0
:doc:`ROCm Bandwidth Test <rocm_bandwidth_test:index>`,2.6.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0 :doc:`ROCm Compute Profiler <rocprofiler-compute:index>`,3.3.1,3.3.0,3.2.3,3.2.3,3.1.1,3.1.1,3.1.0,3.1.0,3.0.0,3.0.0,3.0.0,3.0.0,2.0.1,2.0.1,2.0.1,2.0.1,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`ROCm Compute Profiler <rocprofiler-compute:index>`,3.2.3,3.1.1,3.1.1,3.1.0,3.1.0,3.0.0,3.0.0,3.0.0,3.0.0,2.0.1,2.0.1,2.0.1,2.0.1,N/A,N/A,N/A,N/A,N/A,N/A :doc:`ROCm Systems Profiler <rocprofiler-systems:index>`,1.2.1,1.2.0,1.1.1,1.1.0,1.0.2,1.0.2,1.0.1,1.0.0,0.1.2,0.1.1,0.1.0,0.1.0,1.11.2,1.11.2,1.11.2,1.11.2,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`ROCm Systems Profiler <rocprofiler-systems:index>`,1.1.0,1.0.2,1.0.2,1.0.1,1.0.0,0.1.2,0.1.1,0.1.0,0.1.0,1.11.2,1.11.2,1.11.2,1.11.2,N/A,N/A,N/A,N/A,N/A,N/A :doc:`ROCProfiler <rocprofiler:index>`,2.0.70101,2.0.70100,2.0.70002,2.0.70000,2.0.60403,2.0.60402,2.0.60401,2.0.60400,2.0.60303,2.0.60302,2.0.60301,2.0.60300,2.0.60204,2.0.60202,2.0.60201,2.0.60200,2.0.60105,2.0.60102,2.0.60101,2.0.60100,2.0.60002,2.0.60000
:doc:`ROCProfiler <rocprofiler:index>`,2.0.70000,2.0.60403,2.0.60402,2.0.60401,2.0.60400,2.0.60303,2.0.60302,2.0.60301,2.0.60300,2.0.60204,2.0.60202,2.0.60201,2.0.60200,2.0.60105,2.0.60102,2.0.60101,2.0.60100,2.0.60002,2.0.60000 :doc:`ROCprofiler-SDK <rocprofiler-sdk:index>`,1.0.0,1.0.0,1.0.0,1.0.0,0.6.0,0.6.0,0.6.0,0.6.0,0.5.0,0.5.0,0.5.0,0.5.0,0.4.0,0.4.0,0.4.0,0.4.0,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`ROCprofiler-SDK <rocprofiler-sdk:index>`,1.0.0,0.6.0,0.6.0,0.6.0,0.6.0,0.5.0,0.5.0,0.5.0,0.5.0,0.4.0,0.4.0,0.4.0,0.4.0,N/A,N/A,N/A,N/A,N/A,N/A :doc:`ROCTracer <roctracer:index>`,4.1.70101,4.1.70100,4.1.70002,4.1.70000,4.1.60403,4.1.60402,4.1.60401,4.1.60400,4.1.60303,4.1.60302,4.1.60301,4.1.60300,4.1.60204,4.1.60202,4.1.60201,4.1.60200,4.1.60105,4.1.60102,4.1.60101,4.1.60100,4.1.60002,4.1.60000
:doc:`ROCTracer <roctracer:index>`,4.1.70000,4.1.60403,4.1.60402,4.1.60401,4.1.60400,4.1.60303,4.1.60302,4.1.60301,4.1.60300,4.1.60204,4.1.60202,4.1.60201,4.1.60200,4.1.60105,4.1.60102,4.1.60101,4.1.60100,4.1.60002,4.1.60000 ,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,, DEVELOPMENT TOOLS,,,,,,,,,,,,,,,,,,,,,,
DEVELOPMENT TOOLS,,,,,,,,,,,,,,,,,,, :doc:`HIPIFY <hipify:index>`,20.0.0,20.0.0,20.0.0,20.0.0,19.0.0,19.0.0,19.0.0,19.0.0,18.0.0.25012,18.0.0.25012,18.0.0.24491,18.0.0.24455,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
:doc:`HIPIFY <hipify:index>`,20.0.0,19.0.0,19.0.0,19.0.0,19.0.0,18.0.0.25012,18.0.0.25012,18.0.0.24491,18.0.0.24455,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483 :doc:`ROCm CMake <rocmcmakebuildtools:index>`,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.13.0,0.13.0,0.13.0,0.13.0,0.12.0,0.12.0,0.12.0,0.12.0,0.11.0,0.11.0
:doc:`ROCm CMake <rocmcmakebuildtools:index>`,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.14.0,0.13.0,0.13.0,0.13.0,0.13.0,0.12.0,0.12.0,0.12.0,0.12.0,0.11.0,0.11.0 :doc:`ROCdbgapi <rocdbgapi:index>`,0.77.4,0.77.4,0.77.4,0.77.3,0.77.2,0.77.2,0.77.2,0.77.2,0.77.0,0.77.0,0.77.0,0.77.0,0.76.0,0.76.0,0.76.0,0.76.0,0.71.0,0.71.0,0.71.0,0.71.0,0.71.0,0.71.0
:doc:`ROCdbgapi <rocdbgapi:index>`,0.77.3,0.77.2,0.77.2,0.77.2,0.77.2,0.77.0,0.77.0,0.77.0,0.77.0,0.76.0,0.76.0,0.76.0,0.76.0,0.71.0,0.71.0,0.71.0,0.71.0,0.71.0,0.71.0 :doc:`ROCm Debugger (ROCgdb) <rocgdb:index>`,16.3.0,16.3.0,16.3.0,16.3.0,15.2.0,15.2.0,15.2.0,15.2.0,15.2.0,15.2.0,15.2.0,15.2.0,14.2.0,14.2.0,14.2.0,14.2.0,14.1.0,14.1.0,14.1.0,14.1.0,13.2.0,13.2.0
`rocprofiler-register <https://github.com/ROCm/rocprofiler-register>`_,0.5.0,0.5.0,0.5.0,0.5.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.3.0,0.3.0,0.3.0,0.3.0,N/A,N/A
:doc:`ROCr Debug Agent <rocr_debug_agent:index>`,2.1.0,2.1.0,2.1.0,2.1.0,2.0.4,2.0.4,2.0.4,2.0.4,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3
,,,,,,,,,,,,,,,,,,,,,,
COMPILERS,.. _compilers-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
`clang-ocl <https://github.com/ROCm/clang-ocl>`_,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,0.5.0,0.5.0,0.5.0,0.5.0,0.5.0,0.5.0
:doc:`hipCC <hipcc:index>`,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0
`Flang <https://github.com/ROCm/flang>`_,20.0.025444,20.0.025425,20.0.0.25385,20.0.0.25314,19.0.0.25224,19.0.0.25224,19.0.0.25184,19.0.0.25133,18.0.0.25012,18.0.0.25012,18.0.0.24491,18.0.0.24455,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
:doc:`llvm-project <llvm-project:index>`,20.0.025444,20.0.025425,20.0.0.25385,20.0.0.25314,19.0.0.25224,19.0.0.25224,19.0.0.25184,19.0.0.25133,18.0.0.25012,18.0.0.25012,18.0.0.24491,18.0.0.24491,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
`OpenMP <https://github.com/ROCm/llvm-project/tree/amd-staging/openmp>`_,20.0.025444,20.0.025425,20.0.0.25385,20.0.0.25314,19.0.0.25224,19.0.0.25224,19.0.0.25184,19.0.0.25133,18.0.0.25012,18.0.0.25012,18.0.0.24491,18.0.0.24491,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
,,,,,,,,,,,,,,,,,,,,,,
RUNTIMES,.. _runtime-support-compatibility-matrix-past-60:,,,,,,,,,,,,,,,,,,,,,
:doc:`AMD CLR <hip:understand/amd_clr>`,7.1.52802,7.1.25424,7.0.51831,7.0.51830,6.4.43484,6.4.43484,6.4.43483,6.4.43482,6.3.42134,6.3.42134,6.3.42133,6.3.42131,6.2.41134,6.2.41134,6.2.41134,6.2.41133,6.1.40093,6.1.40093,6.1.40092,6.1.40091,6.1.32831,6.1.32830
:doc:`HIP <hip:index>`,7.1.52802,7.1.25424,7.0.51831,7.0.51830,6.4.43484,6.4.43484,6.4.43483,6.4.43482,6.3.42134,6.3.42134,6.3.42133,6.3.42131,6.2.41134,6.2.41134,6.2.41134,6.2.41133,6.1.40093,6.1.40093,6.1.40092,6.1.40091,6.1.32831,6.1.32830
`OpenCL Runtime <https://github.com/ROCm/clr/tree/develop/opencl>`_,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0
:doc:`ROCr Runtime <rocr-runtime:index>`,1.18.0,1.18.0,1.18.0,1.18.0,1.15.0,1.15.0,1.15.0,1.15.0,1.14.0,1.14.0,1.14.0,1.14.0,1.14.0,1.14.0,1.14.0,1.13.0,1.13.0,1.13.0,1.13.0,1.13.0,1.12.0,1.12.0
[Past ROCm releases compatibility matrix, ROCm 7.1.1 through 6.0.0: supported operating systems and kernels, architectures, GPU / LLVM targets, framework support, driver and user space, ML and computer vision libraries, communication, math, and primitives libraries, support libraries, system management, performance, and development tools, compilers, and runtimes.]


@@ -10,10 +10,9 @@ Use this matrix to view the ROCm compatibility and system requirements across su
You can also refer to the :ref:`past versions of ROCm compatibility matrix<past-rocm-compatibility-matrix>`.

GPUs listed in the following table support compute workloads (no display
information or graphics). If you're using ROCm with AMD Radeon GPUs or Ryzen APUs for graphics
workloads, see the :doc:`Use ROCm on Radeon and Ryzen <radeon:index>` to verify
compatibility and system requirements.

.. |br| raw:: html
@@ -23,20 +22,20 @@ compatibility and system requirements.
.. container:: format-big-table
.. csv-table::
:header: "ROCm Version", "7.1.1", "7.1.0", "6.4.0"
:stub-columns: 1
:ref:`Operating systems & kernels <OS-kernel-versions>` [#os-compatibility]_,Ubuntu 24.04.3,Ubuntu 24.04.3,Ubuntu 24.04.2
,Ubuntu 22.04.5,Ubuntu 22.04.5,Ubuntu 22.04.5
,"RHEL 10.1, 10.0, 9.7, |br| 9.6, 9.4","RHEL 10.0, 9.6, 9.4","RHEL 9.5, 9.4"
,RHEL 8.10,RHEL 8.10,RHEL 8.10
,SLES 15 SP7,SLES 15 SP7,SLES 15 SP6
,"Oracle Linux 10, 9, 8","Oracle Linux 10, 9, 8","Oracle Linux 9, 8"
,"Debian 13, 12","Debian 13, 12",Debian 12
,,,Azure Linux 3.0
,Rocky Linux 9,Rocky Linux 9,
,.. _architecture-support-compatibility-matrix:,,
:doc:`Architecture <rocm-install-on-linux:reference/system-requirements>`,CDNA4,CDNA4,
,CDNA3,CDNA3,CDNA3
,CDNA2,CDNA2,CDNA2
,CDNA,CDNA,CDNA
@@ -44,181 +43,133 @@ compatibility and system requirements.
,RDNA3,RDNA3,RDNA3
,RDNA2,RDNA2,RDNA2
,.. _gpu-support-compatibility-matrix:,,
:doc:`GPU / LLVM target <rocm-install-on-linux:reference/system-requirements>` [#gpu-compatibility]_,gfx950,gfx950,
,gfx1201,gfx1201,
,gfx1200,gfx1200,
,gfx1101,gfx1101,
,gfx1100,gfx1100,gfx1100
,gfx1030,gfx1030,gfx1030
,gfx942,gfx942,gfx942
,gfx90a,gfx90a,gfx90a
,gfx908,gfx908,gfx908
,,,
FRAMEWORK SUPPORT,.. _framework-support-compatibility-matrix:,,
:doc:`PyTorch <../compatibility/ml-compatibility/pytorch-compatibility>`,"2.9, 2.8, 2.7","2.8, 2.7, 2.6","2.6, 2.5, 2.4, 2.3"
:doc:`TensorFlow <../compatibility/ml-compatibility/tensorflow-compatibility>`,"2.20.0, 2.19.1, 2.18.1","2.20.0, 2.19.1, 2.18.1","2.18.1, 2.17.1, 2.16.2"
:doc:`JAX <../compatibility/ml-compatibility/jax-compatibility>`,0.7.1,0.7.1,0.4.35
:doc:`DGL <../compatibility/ml-compatibility/dgl-compatibility>` [#dgl_compat]_,N/A,N/A,2.4.0
:doc:`llama.cpp <../compatibility/ml-compatibility/llama-cpp-compatibility>` [#llama-cpp_compat]_,N/A,N/A,b5997
`ONNX Runtime <https://onnxruntime.ai/docs/build/eps.html#amd-migraphx>`_,1.23.1,1.22.0,1.20.0
,,,
THIRD PARTY COMMS,.. _thirdpartycomms-support-compatibility-matrix:,,
`UCC <https://github.com/ROCm/ucc>`_,>=1.4.0,>=1.4.0,>=1.3.0
`UCX <https://github.com/ROCm/ucx>`_,>=1.17.0,>=1.17.0,>=1.15.0
,,,
THIRD PARTY ALGORITHM,.. _thirdpartyalgorithm-support-compatibility-matrix:,,
Thrust,2.8.5,2.8.5,2.5.0
CUB,2.8.5,2.8.5,2.5.0
,,,
DRIVER & USER SPACE [#kfd_support]_,.. _kfd-userspace-support-compatibility-matrix:,,
:doc:`AMD GPU Driver <rocm-install-on-linux:reference/user-kernel-space-compat-matrix>`,"30.20.1, 30.20.0 [#mi325x_KVM]_, |br| 30.10.2, 30.10.1 [#driver_patch]_, |br| 30.10, 6.4.x","30.20.0 [#mi325x_KVM]_, 30.10.2, |br| 30.10.1 [#driver_patch]_, 30.10, 6.4.x","6.4.x, 6.3.x, 6.2.x, 6.1.x"
,,,
ML & COMPUTER VISION,.. _mllibs-support-compatibility-matrix:,,
:doc:`Composable Kernel <composable_kernel:index>`,1.1.0,1.1.0,1.1.0
:doc:`MIGraphX <amdmigraphx:index>`,2.14.0,2.14.0,2.12.0
:doc:`MIOpen <miopen:index>`,3.5.1,3.5.1,3.4.0
:doc:`MIVisionX <mivisionx:index>`,3.4.0,3.4.0,3.2.0
:doc:`rocAL <rocal:index>`,2.4.0,2.4.0,2.2.0
:doc:`rocDecode <rocdecode:index>`,1.4.0,1.4.0,0.10.0
:doc:`rocJPEG <rocjpeg:index>`,1.2.0,1.2.0,0.8.0
:doc:`rocPyDecode <rocpydecode:index>`,0.7.0,0.7.0,0.3.1
:doc:`RPP <rpp:index>`,2.1.0,2.1.0,1.9.10
,,,
COMMUNICATION,.. _commlibs-support-compatibility-matrix:,,
:doc:`RCCL <rccl:index>`,2.27.7,2.27.7,2.22.3
:doc:`rocSHMEM <rocshmem:index>`,3.1.0,3.0.0,2.0.0
,,,
MATH LIBS,.. _mathlibs-support-compatibility-matrix:,,
`half <https://github.com/ROCm/half>`_ ,1.12.0,1.12.0,1.12.0
:doc:`hipBLAS <hipblas:index>`,3.1.0,3.1.0,2.4.0
:doc:`hipBLASLt <hipblaslt:index>`,1.1.0,1.1.0,0.12.0
:doc:`hipFFT <hipfft:index>`,1.0.21,1.0.21,1.0.18
:doc:`hipfort <hipfort:index>`,0.7.1,0.7.1,0.6.0
:doc:`hipRAND <hiprand:index>`,3.1.0,3.1.0,2.12.0
:doc:`hipSOLVER <hipsolver:index>`,3.1.0,3.1.0,2.4.0
:doc:`hipSPARSE <hipsparse:index>`,4.1.0,4.1.0,3.2.0
:doc:`hipSPARSELt <hipsparselt:index>`,0.2.5,0.2.5,0.2.3
:doc:`rocALUTION <rocalution:index>`,4.0.1,4.0.1,3.2.2
:doc:`rocBLAS <rocblas:index>`,5.1.1,5.1.0,4.4.0
:doc:`rocFFT <rocfft:index>`,1.0.35,1.0.35,1.0.32
:doc:`rocRAND <rocrand:index>`,4.1.0,4.1.0,3.3.0
:doc:`rocSOLVER <rocsolver:index>`,3.31.0,3.31.0,3.28.0
:doc:`rocSPARSE <rocsparse:index>`,4.1.0,4.1.0,3.4.0
:doc:`rocWMMA <rocwmma:index>`,2.1.0,2.0.0,1.7.0
:doc:`Tensile <tensile:src/index>`,4.44.0,4.44.0,4.43.0
,,,
PRIMITIVES,.. _primitivelibs-support-compatibility-matrix:,,
:doc:`hipCUB <hipcub:index>`,4.1.0,4.1.0,3.4.0
:doc:`hipTensor <hiptensor:index>`,2.0.0,2.0.0,1.5.0
:doc:`rocPRIM <rocprim:index>`,4.1.0,4.1.0,3.4.0
:doc:`rocThrust <rocthrust:index>`,4.1.0,4.1.0,3.3.0
,,,
SUPPORT LIBS,,,
`hipother <https://github.com/ROCm/hipother>`_,7.1.52802,7.1.25424,6.4.43482
`rocm-core <https://github.com/ROCm/rocm-core>`_,7.1.1,7.1.0,6.4.0
`ROCT-Thunk-Interface <https://github.com/ROCm/ROCT-Thunk-Interface>`_,N/A [#ROCT-rocr]_,N/A [#ROCT-rocr]_,N/A [#ROCT-rocr]_
,,,
SYSTEM MGMT TOOLS,.. _tools-support-compatibility-matrix:,,
:doc:`AMD SMI <amdsmi:index>`,26.2.0,26.1.0,25.3.0
:doc:`ROCm Data Center Tool <rdc:index>`,1.2.0,1.2.0,0.3.0
:doc:`rocminfo <rocminfo:index>`,1.0.0,1.0.0,1.0.0
:doc:`ROCm SMI <rocm_smi_lib:index>`,7.8.0,7.8.0,7.5.0
:doc:`ROCm Validation Suite <rocmvalidationsuite:index>`,1.3.0,1.2.0,1.1.0
,,,
PERFORMANCE TOOLS,,,
:doc:`ROCm Bandwidth Test <rocm_bandwidth_test:index>`,2.6.0,2.6.0,1.4.0
:doc:`ROCm Compute Profiler <rocprofiler-compute:index>`,3.3.1,3.3.0,3.1.0
:doc:`ROCm Systems Profiler <rocprofiler-systems:index>`,1.2.1,1.2.0,1.0.0
:doc:`ROCProfiler <rocprofiler:index>`,2.0.70101,2.0.70100,2.0.60400
:doc:`ROCprofiler-SDK <rocprofiler-sdk:index>`,1.0.0,1.0.0,0.6.0
:doc:`ROCTracer <roctracer:index>`,4.1.70101,4.1.70100,4.1.60400
,,,
DEVELOPMENT TOOLS,,,
:doc:`HIPIFY <hipify:index>`,20.0.0,20.0.0,19.0.0
:doc:`ROCm CMake <rocmcmakebuildtools:index>`,0.14.0,0.14.0,0.14.0
:doc:`ROCdbgapi <rocdbgapi:index>`,0.77.4,0.77.4,0.77.2
:doc:`ROCm Debugger (ROCgdb) <rocgdb:index>`,16.3.0,16.3.0,15.2.0
`rocprofiler-register <https://github.com/ROCm/rocprofiler-register>`_,0.5.0,0.5.0,0.4.0
:doc:`ROCr Debug Agent <rocr_debug_agent:index>`,2.1.0,2.1.0,2.0.4
,,,
COMPILERS,.. _compilers-support-compatibility-matrix:,,
:doc:`hipCC <hipcc:index>`,1.1.1,1.1.1,1.1.1
`Flang <https://github.com/ROCm/flang>`_,20.0.025444,20.0.025425,19.0.0.25133
:doc:`llvm-project <llvm-project:index>`,20.0.025444,20.0.025425,19.0.0.25133
`OpenMP <https://github.com/ROCm/llvm-project/tree/amd-staging/openmp>`_,20.0.025444,20.0.025425,19.0.0.25133
,,,
RUNTIMES,.. _runtime-support-compatibility-matrix:,,
:doc:`AMD CLR <hip:understand/amd_clr>`,7.1.52802,7.1.25424,6.4.43482
:doc:`HIP <hip:index>`,7.1.52802,7.1.25424,6.4.43482
`OpenCL Runtime <https://github.com/ROCm/clr/tree/develop/opencl>`_,2.0.0,2.0.0,2.0.0
:doc:`ROCr Runtime <rocr-runtime:index>`,1.18.0,1.18.0,1.15.0
.. rubric:: Footnotes

.. [#os-compatibility] Some operating systems are supported on limited GPUs. For detailed information, see the latest :ref:`supported_distributions`. For version specific information, see `ROCm 7.1.1 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-7.1.1/reference/system-requirements.html#supported-operating-systems>`__, `ROCm 7.1.0 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-7.1.0/reference/system-requirements.html#supported-operating-systems>`__, and `ROCm 6.4.0 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.4.0/reference/system-requirements.html#supported-operating-systems>`__.
.. [#gpu-compatibility] Some GPUs have limited operating system support. For detailed information, see the latest :ref:`supported_GPUs`. For version specific information, see `ROCm 7.1.1 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-7.1.1/reference/system-requirements.html#supported-gpus>`__, `ROCm 7.1.0 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-7.1.0/reference/system-requirements.html#supported-gpus>`__, and `ROCm 6.4.0 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.4.0/reference/system-requirements.html#supported-gpus>`__.
.. [#dgl_compat] DGL is only supported on ROCm 7.0.0, ROCm 6.4.3 and ROCm 6.4.0.
.. [#llama-cpp_compat] llama.cpp is only supported on ROCm 7.0.0 and ROCm 6.4.x.
.. [#mi325x_KVM] For AMD Instinct MI325X KVM SR-IOV users, do not use AMD GPU Driver (amdgpu) 30.20.0.
.. [#rl-700] Rocky Linux 9 is only supported on AMD Instinct MI300X and MI300A GPUs.
.. [#single-node] **Prior to ROCm 7.0.0** - Debian 12 is supported only on AMD Instinct MI300X for single-node functionality.
.. [#mi350x-os] AMD Instinct MI355X (gfx950) and MI350X (gfx950) GPUs are only supported on Ubuntu 24.04.3, Ubuntu 22.04.5, RHEL 9.6, RHEL 9.4, and Oracle Linux 9.
.. [#RDNA-OS-700] **For ROCm 7.0.x** - AMD Radeon PRO AI PRO R9700 (gfx1201), AMD Radeon RX 9070 XT (gfx1201), AMD Radeon RX 9070 GRE (gfx1201), AMD Radeon RX 9070 (gfx1201), AMD Radeon RX 9060 XT (gfx1200), AMD Radeon RX 7800 XT (gfx1101), AMD Radeon RX 7700 XT (gfx1101), AMD Radeon PRO W7700 (gfx1101), and AMD Radeon PRO W6800 (gfx1030) are only supported on Ubuntu 24.04.3, Ubuntu 22.04.5, and RHEL 9.6.
.. [#RDNA-OS] **Prior ROCm 7.0.0** - Radeon AI PRO R9700, Radeon RX 9070 XT (gfx1201), Radeon RX 9060 XT (gfx1200), Radeon PRO W7700 (gfx1101), and Radeon RX 7800 XT (gfx1101) are supported only on Ubuntu 24.04.2, Ubuntu 22.04.5, RHEL 9.6, and RHEL 9.4.
.. [#rd-v710] **For ROCm 7.0.x** - AMD Radeon PRO V710 (gfx1101) is only supported on Ubuntu 24.04.3, Ubuntu 22.04.5, RHEL 9.6, and Azure Linux 3.0.
.. [#rd-v620] **For ROCm 7.0.x** - AMD Radeon PRO V620 (gfx1030) is only supported on Ubuntu 24.04.3 and Ubuntu 22.04.5.
.. [#mi325x-os] **For ROCm 7.0.x** - AMD Instinct MI325X GPU (gfx942) is only supported on Ubuntu 24.04.3, Ubuntu 22.04.5, RHEL 9.6, and RHEL 9.4.
.. [#mi300x-os] **For ROCm 7.0.x** - AMD Instinct MI300X GPU (gfx942) is supported on all listed :ref:`supported_distributions`.
.. [#mi300A-os] **For ROCm 7.0.x** - AMD Instinct MI300A GPU (gfx942) is supported only on Ubuntu 24.04, Ubuntu 22.04, RHEL 9.6, RHEL 9.4, RHEL 8.10, SLES 15 SP7, Debian 12, and Rocky Linux 9.
.. [#mi200x-os] **For ROCm 7.0.x** - AMD Instinct MI200 Series GPUs (gfx90a) are supported only on Ubuntu 24.04, Ubuntu 22.04, RHEL 9.6, RHEL 9.4, RHEL 8.10, SLES 15 SP7, and Debian 12.
.. [#mi100-os] **For ROCm 7.0.x** - AMD Instinct MI100 GPU (gfx908) is only supported on Ubuntu 24.04.3, Ubuntu 22.04.5, RHEL 9.6, RHEL 9.4, and RHEL 8.10.
.. [#7700XT-OS] **Prior ROCm 7.0.0** - Radeon RX 7700 XT (gfx1101) is supported only on Ubuntu 24.04.2 and RHEL 9.6.
.. [#stanford-megatron-lm_compat] Stanford Megatron-LM is only supported on ROCm 6.3.0.
.. [#megablocks_compat] Megablocks is only supported on ROCm 6.3.0.
.. [#driver_patch] AMD GPU Driver (amdgpu) 30.10.1 is a quality release that resolves an issue identified in the 30.10 release. There are no other significant changes or feature additions in ROCm 7.0.1 from ROCm 7.0.0. AMD GPU Driver (amdgpu) 30.10.1 is compatible with ROCm 7.0.1 and ROCm 7.0.0.
.. [#kfd_support] As of ROCm 6.4.0, forward and backward compatibility between the AMD GPU Driver (amdgpu) and its user space software is provided up to a year apart. For earlier ROCm releases, the compatibility is provided for +/- 2 releases. The supported user space versions on this page were accurate as of the time of initial ROCm release. For the most up-to-date information, see the latest version of this information at `User and AMD GPU Driver support matrix <https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/user-kernel-space-compat-matrix.html>`_.
.. [#ROCT-rocr] Starting from ROCm 6.3.0, the ROCT Thunk Interface is included as part of the ROCr runtime package.
.. _OS-kernel-versions:
Operating systems, kernel and Glibc versions
*********************************************
Use this lookup table to confirm which operating system and kernel versions are supported with ROCm. For detailed information on the operating systems supported on ROCm 7.1.1 and the associated kernel and Glibc versions, see the latest :ref:`supported_distributions`. For version specific information, see `ROCm 7.1.0 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-7.1.0/reference/system-requirements.html#supported-operating-systems>`__ and `ROCm 6.4.0 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.4.0/reference/system-requirements.html#supported-operating-systems>`__.
.. csv-table::
:header: "OS", "Version", "Kernel", "Glibc"
:widths: 40, 20, 30, 20
:stub-columns: 1
`Ubuntu <https://ubuntu.com/about/release-cycle#ubuntu-kernel-release-cycle>`_, 24.04.3, "6.8 [GA], 6.14 [HWE]", 2.39
,,
`Ubuntu <https://ubuntu.com/about/release-cycle#ubuntu-kernel-release-cycle>`_, 24.04.2, "6.8 [GA], 6.11 [HWE]", 2.39
,,
`Ubuntu <https://ubuntu.com/about/release-cycle#ubuntu-kernel-release-cycle>`_, 22.04.5, "5.15 [GA], 6.8 [HWE]", 2.35
,,
`Red Hat Enterprise Linux (RHEL 9) <https://access.redhat.com/articles/3078#RHEL9>`_, 9.6, 5.14.0-570, 2.34
,9.5, 5.14+, 2.34
,9.4, 5.14.0-427, 2.34
,,
`Red Hat Enterprise Linux (RHEL 8) <https://access.redhat.com/articles/3078#RHEL8>`_, 8.10, 4.18.0-553, 2.28
,,
`SUSE Linux Enterprise Server (SLES) <https://www.suse.com/support/kb/doc/?id=000019587#SLE15SP4>`_, 15 SP7, 6.4.0-150700.51, 2.38
,15 SP6, "6.5.0+, 6.4.0", 2.38
,15 SP5, 5.14.21, 2.31
,,
`Rocky Linux <https://wiki.rockylinux.org/rocky/version/>`_, 9, 5.14.0-570, 2.34
,,
`Oracle Linux <https://blogs.oracle.com/scoter/post/oracle-linux-and-unbreakable-enterprise-kernel-uek-releases>`_, 9, 6.12.0 (UEK), 2.34
,8, 5.15.0 (UEK), 2.28
,,
`Debian <https://www.debian.org/download>`_,12, 6.1.0, 2.36
,,
`Azure Linux <https://techcommunity.microsoft.com/blog/linuxandopensourceblog/azure-linux-3-0-now-in-preview-on-azure-kubernetes-service-v1-31/4287229>`_,3.0, 6.6.92, 2.38
,,
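To compare a running system against this table, you can query the kernel and Glibc versions directly. The following is a convenience sketch using only the Python standard library, not an official ROCm tool; any equivalent shell commands work as well.

.. code-block:: python

   import platform

   # Kernel release, for example "6.8.0-xx-generic" on Ubuntu 24.04.
   print("Kernel:", platform.release())

   # Glibc version reported for the running Python executable, for example ("glibc", "2.39").
   print("Glibc:", platform.libc_ver())

   # Distribution name and version as recorded in /etc/os-release.
   with open("/etc/os-release") as f:
       print(f.read())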
.. note::
@@ -250,42 +201,17 @@ Expand for full historical view of:
.. rubric:: Footnotes
.. [#rhel-700-past-60] **For ROCm 7.0.x** - RHEL 8.10 is only supported on AMD Instinct MI300X, MI300A, MI250X, MI250, MI210, and MI100 GPUs.
.. [#os-compatibility-past-60] Some operating systems are supported on limited GPUs. For detailed information, see the latest :ref:`supported_distributions`. For version specific information, see `ROCm 7.1.1 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-7.1.1/reference/system-requirements.html#supported-operating-systems>`__, `ROCm 7.1.0 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-7.1.0/reference/system-requirements.html#supported-operating-systems>`__, and `ROCm 6.4.0 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.4.0/reference/system-requirements.html#supported-operating-systems>`__.
.. [#ol-700-mi300x-past-60] **For ROCm 7.0.x** - Oracle Linux 9 is supported only on AMD Instinct MI300X, MI350X, and MI355X. Oracle Linux 8 is only supported on AMD Instinct MI300X.
.. [#gpu-compatibility-past-60] Some GPUs have limited operating system support. For detailed information, see the latest :ref:`supported_GPUs`. For version specific information, see `ROCm 7.1.1 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-7.1.1/reference/system-requirements.html#supported-gpus>`__, `ROCm 7.1.0 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-7.1.0/reference/system-requirements.html#supported-gpus>`__, and `ROCm 6.4.0 <https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.4.0/reference/system-requirements.html#supported-gpus>`__.
.. [#mi300x-past-60] **Prior ROCm 7.0.0** - Oracle Linux is supported only on AMD Instinct MI300X.
.. [#tf-mi350-past-60] TensorFlow 2.17.1 is not supported on AMD Instinct MI350 Series GPUs. Use TensorFlow 2.19.1 or 2.18.1 with MI350 Series GPUs instead.
.. [#sles-db-700-past-60] **For ROCm 7.0.x** - SLES 15 SP7 and Debian 12 are only supported on AMD Instinct MI300X, MI300A, MI250X, MI250, and MI210 GPUs.
.. [#verl_compat-past-60] verl is only supported on ROCm 7.0.0 and 6.2.0.
.. [#single-node-past-60] **Prior to ROCm 7.0.0** - Debian 12 is supported only on AMD Instinct MI300X for single-node functionality.
.. [#az-mi300x-past-60] Starting from ROCm 6.4.0, Azure Linux 3.0 is supported only on AMD Instinct MI300X and AMD Radeon PRO V710.
.. [#az-mi300x-630-past-60] **Prior ROCm 6.4.0** - Azure Linux 3.0 is supported only on AMD Instinct MI300X.
.. [#rl-700-past-60] Rocky Linux 9 is only supported on AMD Instinct MI300X and MI300A GPUs.
.. [#mi350x-os-past-60] AMD Instinct MI355X (gfx950) and MI350X (gfx950) GPUs are only supported on Ubuntu 24.04.3, Ubuntu 22.04.5, RHEL 9.6, RHEL 9.4, and Oracle Linux 9.
.. [#RDNA-OS-700-past-60] **For ROCm 7.0.x** - AMD Radeon PRO AI PRO R9700 (gfx1201), AMD Radeon RX 9070 XT (gfx1201), AMD Radeon RX 9070 GRE (gfx1201), AMD Radeon RX 9070 (gfx1201), AMD Radeon RX 9060 XT (gfx1200), AMD Radeon RX 7800 XT (gfx1101), AMD Radeon RX 7700 XT (gfx1101), AMD Radeon PRO W7700 (gfx1101), and AMD Radeon PRO W6800 (gfx1030) are only supported on Ubuntu 24.04.3, Ubuntu 22.04.5, and RHEL 9.6.
.. [#RDNA-OS-past-60] **Prior ROCm 7.0.0** - Radeon AI PRO R9700, Radeon RX 9070 XT (gfx1201), Radeon RX 9060 XT (gfx1200), Radeon PRO W7700 (gfx1101), and Radeon RX 7800 XT (gfx1101) are supported only on Ubuntu 24.04.2, Ubuntu 22.04.5, RHEL 9.6, and RHEL 9.4.
.. [#rd-v710-past-60] **For ROCm 7.0.x** - AMD Radeon PRO V710 (gfx1101) is only supported on Ubuntu 24.04.3, Ubuntu 22.04.5, RHEL 9.6, and Azure Linux 3.0.
.. [#rd-v620-past-60] **For ROCm 7.0.x** - AMD Radeon PRO V620 (gfx1030) is only supported on Ubuntu 24.04.3 and Ubuntu 22.04.5.
.. [#mi325x-os-past-60] **For ROCm 7.0.x** - AMD Instinct MI325X GPU (gfx942) is only supported on Ubuntu 24.04.3, Ubuntu 22.04.5, RHEL 9.6, and RHEL 9.4.
.. [#mi300x-os-past-60] **For ROCm 7.0.x** - AMD Instinct MI300X GPU (gfx942) is supported on all listed :ref:`supported_distributions`.
.. [#mi300A-os-past-60] **For ROCm 7.0.x** - AMD Instinct MI300A GPU (gfx942) is supported only on Ubuntu 24.04, Ubuntu 22.04, RHEL 9.6, RHEL 9.4, RHEL 8.10, SLES 15 SP7, Debian 12, and Rocky Linux 9.
.. [#mi200x-os-past-60] **For ROCm 7.0.x** - AMD Instinct MI200 Series GPUs (gfx90a) are supported only on Ubuntu 24.04, Ubuntu 22.04, RHEL 9.6, RHEL 9.4, RHEL 8.10, SLES 15 SP7, and Debian 12.
.. [#mi100-os-past-60] **For ROCm 7.0.x** - AMD Instinct MI100 GPU (gfx908) is only supported on Ubuntu 24.04.3, Ubuntu 22.04.5, RHEL 9.6, RHEL 9.4, and RHEL 8.10.
.. [#7700XT-OS-past-60] Radeon RX 7700 XT (gfx1101) is supported only on Ubuntu 24.04.2 and RHEL 9.6.
.. [#mi300_624-past-60] **For ROCm 6.2.4** - MI300X (gfx942) is supported on listed operating systems *except* Ubuntu 22.04.5 [6.8 HWE] and Ubuntu 22.04.4 [6.5 HWE].
.. [#mi300_622-past-60] **For ROCm 6.2.2** - MI300X (gfx942) is supported on listed operating systems *except* Ubuntu 22.04.5 [6.8 HWE] and Ubuntu 22.04.4 [6.5 HWE].
.. [#mi300_621-past-60] **For ROCm 6.2.1** - MI300X (gfx942) is supported on listed operating systems *except* Ubuntu 22.04.5 [6.8 HWE] and Ubuntu 22.04.4 [6.5 HWE].
.. [#mi300_620-past-60] **For ROCm 6.2.0** - MI300X (gfx942) is supported on listed operating systems *except* Ubuntu 22.04.5 [6.8 HWE] and Ubuntu 22.04.4 [6.5 HWE].
.. [#mi300_612-past-60] **For ROCm 6.1.2** - MI300A (gfx942) is supported on Ubuntu 22.04.4, RHEL 9.4, RHEL 9.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.4 and Oracle Linux.
.. [#mi300_611-past-60] **For ROCm 6.1.1** - MI300A (gfx942) is supported on Ubuntu 22.04.4, RHEL 9.4, RHEL 9.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.4 and Oracle Linux.
.. [#mi300_610-past-60] **For ROCm 6.1.0** - MI300A (gfx942) is supported on Ubuntu 22.04.4, RHEL 9.4, RHEL 9.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.4.
.. [#mi300_602-past-60] **For ROCm 6.0.2** - MI300A (gfx942) is supported on Ubuntu 22.04.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.3.
.. [#mi300_600-past-60] **For ROCm 6.0.0** - MI300A (gfx942) is supported on Ubuntu 22.04.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.3.
.. [#stanford-megatron-lm_compat-past-60] Stanford Megatron-LM is only supported on ROCm 6.3.0.
.. [#dgl_compat-past-60] DGL is only supported on ROCm 7.0.0, ROCm 6.4.3 and ROCm 6.4.0.
.. [#megablocks_compat-past-60] Megablocks is only supported on ROCm 6.3.0.
.. [#taichi_compat-past-60] Taichi is only supported on ROCm 6.3.2.
.. [#ray_compat-past-60] Ray is only supported on ROCm 7.0.0 and 6.4.1.
.. [#llama-cpp_compat-past-60] llama.cpp is only supported on ROCm 7.0.0 and 6.4.x.
.. [#flashinfer_compat-past-60] FlashInfer is only supported on ROCm 6.4.1.
.. [#mi325x_KVM-past-60] For AMD Instinct MI325X KVM SR-IOV users, do not use AMD GPU Driver (amdgpu) 30.20.0.
.. [#driver_patch-past-60] AMD GPU Driver (amdgpu) 30.10.1 is a quality release that resolves an issue identified in the 30.10 release. There are no other significant changes or feature additions in ROCm 7.0.1 from ROCm 7.0.0. AMD GPU Driver (amdgpu) 30.10.1 is compatible with ROCm 7.0.1 and ROCm 7.0.0.
.. [#kfd_support-past-60] As of ROCm 6.4.0, forward and backward compatibility between the AMD GPU Driver (amdgpu) and its user space software is provided up to a year apart. For earlier ROCm releases, the compatibility is provided for +/- 2 releases. The supported user space versions on this page were accurate as of the time of initial ROCm release. For the most up-to-date information, see the latest version of this information at `User and AMD GPU Driver support matrix <https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/user-kernel-space-compat-matrix.html>`_.
.. [#ROCT-rocr-past-60] Starting from ROCm 6.3.0, the ROCT Thunk Interface is included as part of the ROCr runtime package.

View File

@@ -2,7 +2,7 @@
.. meta::
:description: Deep Graph Library (DGL) compatibility
:keywords: GPU, CPU, deep graph library, DGL, deep learning, framework compatibility
.. version-set:: rocm_version latest
@@ -10,215 +10,274 @@
DGL compatibility
********************************************************************************
Deep Graph Library (`DGL <https://www.dgl.ai/>`__) is an easy-to-use, high-performance, and scalable
Python package for deep learning on graphs. DGL is framework agnostic, meaning
that if a deep graph model is a component in an end-to-end application, the rest of
the logic is implemented using PyTorch.
DGL provides a high-performance graph object that can reside on either CPUs or GPUs.
It bundles structural data features for better control and provides a variety of functions
for computing with graph objects, including efficient and customizable message passing
primitives for Graph Neural Networks.
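The following minimal sketch illustrates the message passing primitives described above. It assumes a working DGL and PyTorch installation, such as one of the prebuilt Docker images listed later on this page; the graph, feature sizes, and reducer are arbitrary example values rather than values taken from this documentation.

.. code-block:: python

   import dgl
   import dgl.function as fn
   import torch

   # Build a small directed graph from source/destination node ID tensors.
   src = torch.tensor([0, 1, 2, 3])
   dst = torch.tensor([1, 2, 3, 0])
   g = dgl.graph((src, dst))

   # Attach a node feature tensor (4 nodes, 8 features each).
   g.ndata["h"] = torch.randn(g.num_nodes(), 8)

   # On a ROCm build of PyTorch, AMD GPUs are exposed through the "cuda"
   # device alias, so the same code moves to the GPU unchanged.
   if torch.cuda.is_available():
       g = g.to("cuda")

   # One round of message passing: copy each source node's feature along
   # its out-edges, then sum the incoming messages at each destination node.
   g.update_all(fn.copy_u("h", "m"), fn.sum("m", "h_sum"))

   print(g.ndata["h_sum"].shape)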
Support overview
================================================================================
- The ROCm-supported version of DGL is maintained in the official `https://github.com/ROCm/dgl
<https://github.com/ROCm/dgl>`__ repository, which differs from the
`https://github.com/dmlc/dgl <https://github.com/dmlc/dgl>`__ upstream repository.
- To get started and install DGL on ROCm, use the prebuilt :ref:`Docker images <dgl-docker-compat>`,
which include ROCm, DGL, and all required dependencies.
- See the :doc:`ROCm DGL installation guide <rocm-install-on-linux:install/3rd-party/dgl-install>`
for installation and setup instructions.
- You can also consult the upstream `Installation guide <https://www.dgl.ai/pages/start.html>`__
for additional context.
.. _dgl-docker-compat:
Compatibility matrix
================================================================================
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes `DGL images <https://hub.docker.com/r/rocm/dgl/tags>`__
with ROCm backends on Docker Hub. The following Docker image tags and associated
inventories represent the latest available DGL version from the official Docker Hub.
Click the |docker-icon| to view the image on Docker Hub.
.. list-table::
:header-rows: 1
:class: docker-image-compatibility
* - Docker image
- ROCm
- DGL
- PyTorch
- Ubuntu
- Python
- GPU
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/dgl/dgl-2.4.0.amd0_rocm7.0.0_ubuntu24.04_py3.12_pytorch_2.8.0/images/sha256-943698ddf54c22a7bcad2e5b4ff467752e29e4ba6d0c926789ae7b242cbd92dd"><i class="fab fa-docker fa-lg"></i> rocm/dgl</a>
- `7.0.0 <https://repo.radeon.com/rocm/apt/7.0/>`__
- `2.4.0 <https://github.com/dmlc/dgl/releases/tag/v2.4.0>`__
- `2.8.0 <https://github.com/pytorch/pytorch/releases/tag/v2.8.0>`__
- 24.04
- `3.12.9 <https://www.python.org/downloads/release/python-3129/>`__
- MI300X, MI250X
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/dgl/dgl-2.4.0.amd0_rocm7.0.0_ubuntu24.04_py3.12_pytorch_2.6.0/images/sha256-b2ec286a035eb7d0a6aab069561914d21a3cac462281e9c024501ba5ccedfbf7"><i class="fab fa-docker fa-lg"></i> rocm/dgl</a>
- `7.0.0 <https://repo.radeon.com/rocm/apt/7.0/>`__
- `2.4.0 <https://github.com/dmlc/dgl/releases/tag/v2.4.0>`__
- `2.6.0 <https://github.com/pytorch/pytorch/releases/tag/v2.6.0>`__
- 24.04
- `3.12.9 <https://www.python.org/downloads/release/python-3129/>`__
- MI300X, MI250X
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/dgl/dgl-2.4.0.amd0_rocm7.0.0_ubuntu22.04_py3.10_pytorch_2.7.1/images/sha256-d27aee16df922ccf0bcd9107bfcb6d20d34235445d456c637e33ca6f19d11a51"><i class="fab fa-docker fa-lg"></i> rocm/dgl</a>
- `7.0.0 <https://repo.radeon.com/rocm/apt/7.0/>`__
- `2.4.0 <https://github.com/dmlc/dgl/releases/tag/v2.4.0>`__
- `2.7.1 <https://github.com/pytorch/pytorch/releases/tag/v2.7.1>`__
- 22.04
- `3.10.16 <https://www.python.org/downloads/release/python-31016/>`__
- MI300X, MI250X
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/dgl/dgl-2.4.0.amd0_rocm6.4.3_ubuntu24.04_py3.12_pytorch_2.6.0/images/sha256-f3ba6a3c9ec9f6c1cde28449dc9780e0c4c16c4140f4b23f158565fbfd422d6b"><i class="fab fa-docker fa-lg"></i> rocm/dgl</a>
- `6.4.3 <https://repo.radeon.com/rocm/apt/6.4.3/>`__
- `2.4.0 <https://github.com/dmlc/dgl/releases/tag/v2.4.0>`__
- `2.6.0 <https://github.com/pytorch/pytorch/releases/tag/v2.6.0>`__
- 24.04
- `3.12.9 <https://www.python.org/downloads/release/python-3129/>`__
- MI300X, MI250X
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/dgl/dgl-2.4_rocm6.4_ubuntu24.04_py3.12_pytorch_release_2.6.0/images/sha256-8ce2c3bcfaa137ab94a75f9e2ea711894748980f57417739138402a542dd5564"><i class="fab fa-docker fa-lg"></i> rocm/dgl</a>
- `6.4.0 <https://repo.radeon.com/rocm/apt/6.4/>`__
- `2.4.0 <https://github.com/dmlc/dgl/releases/tag/v2.4.0>`__
- `2.6.0 <https://github.com/pytorch/pytorch/releases/tag/v2.6.0>`__
- 24.04
- `3.12.9 <https://www.python.org/downloads/release/python-3129/>`__
- MI300X, MI250X
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/dgl/dgl-2.4_rocm6.4_ubuntu24.04_py3.12_pytorch_release_2.4.1/images/sha256-cf1683283b8eeda867b690229c8091c5bbf1edb9f52e8fb3da437c49a612ebe4"><i class="fab fa-docker fa-lg"></i> rocm/dgl</a>
- `6.4.0 <https://repo.radeon.com/rocm/apt/6.4/>`__
- `2.4.0 <https://github.com/dmlc/dgl/releases/tag/v2.4.0>`__
- `2.4.1 <https://github.com/pytorch/pytorch/releases/tag/v2.4.1>`__
- 24.04
- `3.12.9 <https://www.python.org/downloads/release/python-3129/>`__
- MI300X, MI250X
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/dgl/dgl-2.4_rocm6.4_ubuntu22.04_py3.10_pytorch_release_2.4.1/images/sha256-4834f178c3614e2d09e89e32041db8984c456d45dfd20286e377ca8635686554"><i class="fab fa-docker fa-lg"></i> rocm/dgl</a>
- `6.4.0 <https://repo.radeon.com/rocm/apt/6.4/>`__
- `2.4.0 <https://github.com/dmlc/dgl/releases/tag/v2.4.0>`__
- `2.4.1 <https://github.com/pytorch/pytorch/releases/tag/v2.4.1>`__
- 22.04 - 22.04
- `3.10.16 <https://www.python.org/downloads/release/python-31016/>`_ - `3.10.16 <https://www.python.org/downloads/release/python-31016/>`__
- MI300X, MI250X
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/dgl/dgl-2.4_rocm6.4_ubuntu22.04_py3.10_pytorch_release_2.3.0/images/sha256-88740a2c8ab4084b42b10c3c6ba984cab33dd3a044f479c6d7618e2b2cb05e69"><i class="fab fa-docker fa-lg"></i> rocm/dgl</a>
- `6.4.0 <https://repo.radeon.com/rocm/apt/6.4/>`__
- `2.4.0 <https://github.com/dmlc/dgl/releases/tag/v2.4.0>`__
- `2.3.0 <https://github.com/pytorch/pytorch/releases/tag/v2.3.0>`__
- 22.04
- `3.10.16 <https://www.python.org/downloads/release/python-31016/>`__
- MI300X, MI250X
.. _dgl-key-rocm-libraries:
Key ROCm libraries for DGL
================================================================================
DGL on ROCm depends on specific libraries that affect its features and performance.
Using the DGL Docker container or building it with the provided Dockerfile or a ROCm base image is recommended.
If you prefer to build it yourself, ensure the following dependencies are installed:
.. list-table::
:header-rows: 1
* - ROCm library
- ROCm 7.0.0 Version
- ROCm 6.4.x Version
- Purpose
* - `Composable Kernel <https://github.com/ROCm/composable_kernel>`_
- 1.1.0
- 1.1.0
- Enables faster execution of core operations like matrix multiplication
(GEMM), convolutions and transformations.
* - `hipBLAS <https://github.com/ROCm/hipBLAS>`_
- 3.0.0
- 2.4.0
- Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for
matrix and vector operations.
* - `hipBLASLt <https://github.com/ROCm/hipBLASLt>`_
- 1.0.0
- 0.12.0
- hipBLASLt is an extension of the hipBLAS library, providing additional
features like epilogues fused into the matrix multiplication kernel or
use of integer tensor cores.
* - `hipCUB <https://github.com/ROCm/hipCUB>`_
- 4.0.0
- 3.4.0
- Provides a C++ template library for parallel algorithms for reduction,
scan, sort and select.
* - `hipFFT <https://github.com/ROCm/hipFFT>`_
- 1.0.20
- 1.0.18
- Provides GPU-accelerated Fast Fourier Transform (FFT) operations.
* - `hipRAND <https://github.com/ROCm/hipRAND>`_
- 3.0.0
- 2.12.0
- Provides fast random number generation for GPUs.
* - `hipSOLVER <https://github.com/ROCm/hipSOLVER>`_
- 3.0.0
- 2.4.0
- Provides GPU-accelerated solvers for linear systems, eigenvalues, and
singular value decompositions (SVD).
* - `hipSPARSE <https://github.com/ROCm/hipSPARSE>`_
- 4.0.1
- 3.2.0
- Accelerates operations on sparse matrices, such as sparse matrix-vector
or matrix-matrix products.
* - `hipSPARSELt <https://github.com/ROCm/hipSPARSELt>`_
- 0.2.4
- 0.2.3
- Accelerates operations on sparse matrices, such as sparse matrix-vector
or matrix-matrix products.
* - `hipTensor <https://github.com/ROCm/hipTensor>`_
- 2.0.0
- 1.5.0
- Optimizes for high-performance tensor operations, such as contractions.
* - `MIOpen <https://github.com/ROCm/MIOpen>`_
- 3.5.0
- 3.4.0
- Optimizes deep learning primitives such as convolutions, pooling,
normalization, and activation functions.
* - `MIGraphX <https://github.com/ROCm/AMDMIGraphX>`_
- 2.13.0
- 2.12.0
- Adds graph-level optimizations, ONNX model and mixed precision support,
and enables Ahead-of-Time (AOT) compilation.
* - `MIVisionX <https://github.com/ROCm/MIVisionX>`_
- 3.3.0
- 3.2.0
- Optimizes acceleration for computer vision and AI workloads like
preprocessing, augmentation, and inferencing.
* - `rocAL <https://github.com/ROCm/rocAL>`_
- 3.3.0
- 2.2.0
- Accelerates the data pipeline by offloading intensive preprocessing and
augmentation tasks. rocAL is part of MIVisionX.
* - `RCCL <https://github.com/ROCm/rccl>`_
- 2.26.6
- 2.22.3
- Optimizes for multi-GPU communication for operations like AllReduce and
Broadcast.
* - `rocDecode <https://github.com/ROCm/rocDecode>`_
- 1.0.0
- 0.10.0
- Provides hardware-accelerated data decoding capabilities, particularly
for image, video, and other dataset formats.
* - `rocJPEG <https://github.com/ROCm/rocJPEG>`_
- 1.1.0
- 0.8.0
- Provides hardware-accelerated JPEG image decoding and encoding.
* - `RPP <https://github.com/ROCm/RPP>`_
- 2.0.0
- 1.9.10
- Speeds up data augmentation, transformation, and other preprocessing steps.
* - `rocThrust <https://github.com/ROCm/rocThrust>`_
- 4.0.0
- 3.3.0
- Provides a C++ template library for parallel algorithms like sorting,
reduction, and scanning.
* - `rocWMMA <https://github.com/ROCm/rocWMMA>`_
- 2.0.0
- 1.7.0
- Accelerates warp-level matrix-multiply and matrix-accumulate to speed up matrix
multiplication (GEMM) and accumulation operations with mixed precision
support.
.. _dgl-supported-features-latest:
Supported features with ROCm 7.0.0
================================================================================
Many functions and methods available upstream are also supported in DGL on ROCm.
Instead of listing them all, support is grouped into the following categories to provide a general overview.
* DGL Base
* DGL Backend
* DGL Data
* DGL Dataloading
* DGL Graph
* DGL Function
* DGL Ops
* DGL Sampling
@@ -230,26 +289,76 @@ Instead of listing them all, support is grouped into the following categories to
* DGL NN
* DGL Optim
* DGL Sparse
* GraphBolt
.. _dgl-unsupported-features-latest:
Unsupported features with ROCm 7.0.0
================================================================================
* TF32 Support (only supported for PyTorch 2.7 and above)
* Kineto/ROCTracer integration
.. _dgl-unsupported-functions:
Unsupported functions with ROCm 7.0.0
================================================================================
* ``bfs``
* ``format``
* ``multiprocess_sparse_adam_state_dict``
* ``record_stream_ndarray``
* ``half_spmm``
* ``segment_mm``
* ``gather_mm_idx_b``
* ``pgexplainer``
* ``sample_labors_prob``
* ``sample_labors_noprob``
* ``sparse_admin``
.. _dgl-recommendations:
Use cases and recommendations
================================================================================
DGL can be used for graph learning and for building popular graph models like
GAT, GCN, and GraphSage. Using these models, a variety of use cases are supported:
- Recommender systems
- Network Optimization and Analysis
- 1D (Temporal) and 2D (Image) Classification
- Drug Discovery
For use cases and recommendations, refer to the `AMD ROCm blog <https://rocm.blogs.amd.com/>`__,
where you can search for DGL examples and best practices to optimize your workloads on AMD GPUs.
* Although multiple use cases of DGL have been tested and verified, a few have been
outlined in the `DGL in the Real World: Running GNNs on Real Use Cases
<https://rocm.blogs.amd.com/artificial-intelligence/dgl_blog2/README.html>`__ blog
post, which walks through four real-world graph neural network (GNN) workloads
implemented with the Deep Graph Library on ROCm. It covers tasks ranging from
heterogeneous e-commerce graphs and multiplex networks (GATNE) to molecular graph
regression (GNN-FiLM) and EEG-based neurological diagnosis (EEG-GCNN). For each use
case, the authors detail the dataset and task, how DGL is used, and their experience
porting to ROCm. It is shown that DGL codebases often run without modification, with
seamless integration of graph operations, message passing, sampling, and convolution.
* The `Graph Neural Networks (GNNs) at Scale: DGL with ROCm on AMD Hardware
<https://rocm.blogs.amd.com/artificial-intelligence/why-graph-neural/README.html>`__
blog post introduces the Deep Graph Library (DGL) and its enablement on the AMD ROCm platform,
bringing high-performance graph neural network (GNN) training to AMD GPUs. DGL bridges
the gap between dense tensor frameworks and the irregular nature of graph data through a
graph-first, message-passing abstraction. Its design ensures scalability, flexibility, and
interoperability across frameworks like PyTorch and TensorFlow. AMD's ROCm integration
enables DGL to run efficiently on HIP-based GPUs, supported by prebuilt Docker containers
and open-source repositories. This marks a major step in AMD's mission to advance open,
scalable AI ecosystems beyond traditional architectures.
You can pre-process datasets and begin training on AMD GPUs through:
* Single-GPU training/inference
* Multi-GPU training
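As a minimal single-GPU sketch of the training path listed above, the following trains one step of a ``GraphConv`` layer on a synthetic random graph. The dataset, sizes, and hyperparameters are placeholders rather than values taken from the blog posts, and the code assumes a working DGL and PyTorch installation from one of the Docker images on this page.

.. code-block:: python

   import dgl
   import torch
   import torch.nn.functional as F
   from dgl.nn import GraphConv

   # Synthetic homogeneous graph with random features and labels,
   # standing in for a real dataset.
   g = dgl.rand_graph(100, 500)            # 100 nodes, 500 edges
   g = dgl.add_self_loop(g)                # avoid zero-in-degree nodes for GraphConv
   feats = torch.randn(g.num_nodes(), 16)
   labels = torch.randint(0, 4, (g.num_nodes(),))

   device = "cuda" if torch.cuda.is_available() else "cpu"   # "cuda" maps to the AMD GPU on ROCm
   g, feats, labels = g.to(device), feats.to(device), labels.to(device)

   model = GraphConv(16, 4).to(device)
   opt = torch.optim.Adam(model.parameters(), lr=1e-2)

   # One training step: forward pass, cross-entropy loss, backward pass.
   logits = model(g, feats)
   loss = F.cross_entropy(logits, labels)
   opt.zero_grad()
   loss.backward()
   opt.step()
   print(float(loss))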
Previous versions
===============================================================================
See :doc:`rocm-install-on-linux:install/3rd-party/previous-versions/dgl-history` to find documentation for previous releases
of the ``ROCm/dgl`` Docker image.

View File

@@ -1,8 +1,8 @@
:orphan:
.. meta::
:description: FlashInfer compatibility
:keywords: GPU, LLM, FlashInfer, deep learning, framework compatibility
.. version-set:: rocm_version latest
@@ -11,7 +11,7 @@ FlashInfer compatibility
********************************************************************************
`FlashInfer <https://docs.flashinfer.ai/index.html>`__ is a library and kernel generator
for Large Language Models (LLMs) that provides a high-performance implementation of graphics
processing unit (GPU) kernels. FlashInfer focuses on LLM serving and inference, as well
as advanced performance across diverse scenarios.
@@ -25,63 +25,35 @@ offers high-performance LLM-specific operators, with easy integration through Py
For the latest feature compatibility matrix, refer to the ``README`` of the
`https://github.com/ROCm/flashinfer <https://github.com/ROCm/flashinfer>`__ repository.
Support overview
================================================================================
- The ROCm-supported version of FlashInfer is maintained in the official `https://github.com/ROCm/flashinfer
<https://github.com/ROCm/flashinfer>`__ repository, which differs from the
`https://github.com/flashinfer-ai/flashinfer <https://github.com/flashinfer-ai/flashinfer>`__
upstream repository.
- To get started and install FlashInfer on ROCm, use the prebuilt :ref:`Docker images <flashinfer-docker-compat>`,
which include ROCm, FlashInfer, and all required dependencies.
- See the :doc:`ROCm FlashInfer installation guide <rocm-install-on-linux:install/3rd-party/flashinfer-install>`
for installation and setup instructions.
- You can also consult the upstream `Installation guide <https://docs.flashinfer.ai/installation.html>`__
for additional context.
.. note::
FlashInfer is supported on ROCm 6.4.1.
Supported devices
================================================================================
**Officially Supported**: AMD Instinct™ MI300X
.. _flashinfer-docker-compat:
Compatibility matrix
================================================================================
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes `FlashInfer images <https://hub.docker.com/r/rocm/flashinfer/tags>`__
with ROCm backends on Docker Hub. The following Docker image tag and associated
inventories represent the latest available FlashInfer version from the official Docker Hub.
Click |docker-icon| to view the image on Docker Hub.
.. list-table::
@@ -94,6 +66,7 @@ Click |docker-icon| to view the image on Docker Hub.
- PyTorch
- Ubuntu
- Python
- GPU
* - .. raw:: html
@@ -103,5 +76,23 @@ Click |docker-icon| to view the image on Docker Hub.
- `2.7.1 <https://github.com/ROCm/pytorch/releases/tag/v2.7.1>`__
- 24.04
- `3.12 <https://www.python.org/downloads/release/python-3129/>`__
- MI300X
.. _flashinfer-recommendations:
Use cases and recommendations
================================================================================
The release of FlashInfer on ROCm provides the decode functionality for LLM inferencing.
In the decode phase, tokens are generated sequentially, with the model predicting each new
token based on the previously generated tokens and the input context.
FlashInfer on ROCm brings over upstream features such as load balancing, sparse and dense
attention optimizations, and batching support, enabling efficient execution on AMD Instinct™ MI300X GPUs.
Because large LLMs often require substantial KV caches or long context windows, FlashInfer on ROCm
also implements cascade attention from upstream to reduce memory usage.
For currently supported use cases and recommendations, refer to the `AMD ROCm blog <https://rocm.blogs.amd.com/>`__,
where you can search for examples and best practices to optimize your workloads on AMD GPUs.
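The following plain PyTorch sketch only illustrates what the decode phase described above looks like: one query token attends to a growing key/value cache at each step. It intentionally does not use the FlashInfer API, whose optimized kernels replace exactly this inner loop, and the tensor sizes are arbitrary example values.

.. code-block:: python

   import torch

   # Toy single-head attention decode step over a growing KV cache.
   head_dim = 64
   k_cache = torch.randn(128, head_dim)   # keys for 128 already-processed tokens
   v_cache = torch.randn(128, head_dim)   # values for those tokens

   def decode_step(q, k_cache, v_cache):
       # Attention of one new query token against every cached key/value.
       scores = (k_cache @ q) / head_dim ** 0.5      # [kv_len]
       probs = torch.softmax(scores, dim=0)
       return probs @ v_cache                         # [head_dim]

   # Generate a few tokens sequentially, appending to the cache each time.
   for _ in range(4):
       q = torch.randn(head_dim)
       out = decode_step(q, k_cache, v_cache)
       new_k, new_v = torch.randn(1, head_dim), torch.randn(1, head_dim)
       k_cache = torch.cat([k_cache, new_k])
       v_cache = torch.cat([v_cache, new_v])

   print(out.shape, k_cache.shape)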

View File

@@ -2,7 +2,7 @@
.. meta::
:description: JAX compatibility
:keywords: GPU, JAX, deep learning, framework compatibility
.. version-set:: rocm_version latest
@@ -10,42 +10,58 @@
JAX compatibility
*******************************************************************************
`JAX <https://docs.jax.dev/en/latest/notebooks/thinking_in_jax.html>`__ is a library
for array-oriented numerical computation (similar to NumPy), with automatic differentiation
and just-in-time (JIT) compilation to enable high-performance machine learning research.
JAX provides an API that combines automatic differentiation and the
Accelerated Linear Algebra (XLA) compiler to achieve high-performance machine
learning at scale. JAX uses composable transformations of Python and NumPy through
JIT compilation, automatic vectorization, and parallelization.
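As a minimal illustration of these transformations, the following sketch composes ``jax.grad`` and ``jax.jit`` on a small least-squares loss. It assumes a working ROCm JAX installation such as one of the Docker images described below, and the shapes and values are arbitrary.

.. code-block:: python

   import jax
   import jax.numpy as jnp

   # A scalar loss whose gradient JAX derives automatically.
   def loss(w, x, y):
       pred = jnp.dot(x, w)
       return jnp.mean((pred - y) ** 2)

   # jit compiles the function through XLA for the available backend
   # (the ROCm plugin on AMD GPUs); grad returns d(loss)/d(w).
   grad_fn = jax.jit(jax.grad(loss))

   key = jax.random.PRNGKey(0)
   x = jax.random.normal(key, (32, 8))
   w = jnp.zeros(8)
   y = jnp.ones(32)

   print(grad_fn(w, x, y).shape)   # (8,)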
Support overview
================================================================================
- The ROCm-supported version of JAX is maintained in the official `https://github.com/ROCm/rocm-jax
<https://github.com/ROCm/rocm-jax>`__ repository, which differs from the
`https://github.com/jax-ml/jax <https://github.com/jax-ml/jax>`__ upstream repository.
- To get started and install JAX on ROCm, use the prebuilt :ref:`Docker images <jax-docker-compat>`,
which include ROCm, JAX, and all required dependencies.
- See the :doc:`ROCm JAX installation guide <rocm-install-on-linux:install/3rd-party/jax-install>`
for installation and setup instructions.
- You can also consult the upstream `Installation guide <https://jax.readthedocs.io/en/latest/installation.html#amd-gpu-linux>`__
for additional context.
Version support
--------------------------------------------------------------------------------
AMD releases official `ROCm JAX Docker images <https://hub.docker.com/r/rocm/jax/tags>`_
quarterly alongside new ROCm releases. These images undergo full AMD testing.
`Community ROCm JAX Docker images <https://hub.docker.com/r/rocm/jax-community/tags>`_
follow upstream JAX releases and use the latest available ROCm version.
JAX Plugin-PJRT with JAX/JAXLIB compatibility
================================================================================
Portable JIT Runtime (PJRT) is an open, stable interface for device runtime and
compiler. The following table details the ROCm version compatibility matrix
between JAX Plugin-PJRT and JAX/JAXLIB.
.. list-table::
:header-rows: 1
* - JAX Plugin-PJRT
- JAX/JAXLIB
- ROCm
* - 0.7.1
- 0.7.1
- 7.1.1, 7.1.0
* - 0.6.0
- 0.6.2, 0.6.0
- 7.0.2, 7.0.1, 7.0.0
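A quick way to confirm which JAX and jaxlib versions a container provides, and that the ROCm backend is active, is to query them at runtime; a minimal sketch:

.. code-block:: python

   import jax
   import jaxlib

   # Versions to compare against the table above.
   print("jax   :", jax.__version__)
   print("jaxlib:", jaxlib.__version__)

   # On a working ROCm installation the devices are reported as GPU/ROCm
   # devices rather than CPU.
   print(jax.devices())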
Use cases and recommendations
================================================================================
@@ -71,7 +87,7 @@ Use cases and recommendations
* The `Distributed fine-tuning with JAX on AMD GPUs <https://rocm.blogs.amd.com/artificial-intelligence/distributed-sft-jax/README.html>`_
outlines the process of fine-tuning a Bidirectional Encoder Representations
from Transformers (BERT)-based large language model (LLM) using JAX for a text
classification task. The blog post discusses techniques for parallelizing the
fine-tuning across multiple AMD GPUs and assesses the model's performance on a
holdout dataset. During the fine-tuning, a BERT-base-cased transformer model
and the General Language Understanding Evaluation (GLUE) benchmark dataset were
@@ -79,7 +95,7 @@ Use cases and recommendations
* The `MI300X workload optimization guide <https://rocm.docs.amd.com/en/latest/how-to/tuning-guides/mi300x/workload.html>`_
provides detailed guidance on optimizing workloads for the AMD Instinct MI300X
GPU using ROCm. The page is aimed at helping users achieve optimal
performance for deep learning and other high-performance computing tasks on
the MI300X GPU.
@@ -90,9 +106,9 @@ For more use cases and recommendations, see `ROCm JAX blog posts <https://rocm.b
Docker image compatibility
================================================================================
AMD validates and publishes `JAX images <https://hub.docker.com/r/rocm/jax/tags>`__
with ROCm backends on Docker Hub.
For ``jax-community`` images, see `rocm/jax-community
<https://hub.docker.com/r/rocm/jax-community/tags>`__ on Docker Hub.
@@ -234,7 +250,7 @@ The ROCm supported data types in JAX are collected in the following table.
.. note::
JAX data type support is affected by the :ref:`key_rocm_libraries` and is
collected on the :doc:`ROCm data types and precision support <rocm:reference/precision-support>`
page.
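As a quick sanity check of a reduced-precision data type on the active backend, a minimal sketch (``bfloat16`` is used here only as an example):

.. code-block:: python

   import jax.numpy as jnp

   # Verify that a reduced-precision dtype is usable on the active backend;
   # see the precision-support page for the full matrix.
   x = jnp.ones((4, 4), dtype=jnp.bfloat16)
   print((x @ x).dtype)   # bfloat16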

View File

@@ -1,8 +1,8 @@
:orphan:
.. meta::
:description: llama.cpp compatibility
:keywords: GPU, GGML, llama.cpp, deep learning, framework compatibility
.. version-set:: rocm_version latest
@@ -20,73 +20,34 @@ to accelerate inference and reduce memory usage. Originally built as a CPU-first
llama.cpp is easy to integrate with other programming environments and is widely
adopted across diverse platforms, including consumer devices.
Support overview
================================================================================
- The ROCm-supported version of llama.cpp is maintained in the official `https://github.com/ROCm/llama.cpp
<https://github.com/ROCm/llama.cpp>`__ repository, which differs from the
`https://github.com/ggml-org/llama.cpp <https://github.com/ggml-org/llama.cpp>`__ upstream repository.
- To get started and install llama.cpp on ROCm, use the prebuilt :ref:`Docker images <llama-cpp-docker-compat>`,
which include ROCm, llama.cpp, and all required dependencies.
- See the :doc:`ROCm llama.cpp installation guide <rocm-install-on-linux:install/3rd-party/llama-cpp-install>`
for installation and setup instructions.
- You can also consult the upstream `Installation guide <https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md>`__
for additional context.
.. note::
llama.cpp is supported on ROCm 7.0.0 and ROCm 6.4.x.
Supported devices
================================================================================
**Officially Supported**: AMD Instinct™ MI300X, MI325X, MI210
Use cases and recommendations
================================================================================
llama.cpp can be applied in a variety of scenarios, particularly when you need to meet one or more of the following requirements:
- Plain C/C++ implementation with no external dependencies
- Support for 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory usage
- Custom HIP (Heterogeneous-compute Interface for Portability) kernels for running large language models (LLMs) on AMD GPUs (graphics processing units)
- CPU (central processing unit) + GPU (graphics processing unit) hybrid inference for partially accelerating models larger than the total available VRAM (video random-access memory)
llama.cpp is also used in a range of real-world applications, including:
- Games such as `Lucy's Labyrinth <https://github.com/MorganRO8/Lucys_Labyrinth>`__:
A simple maze game where AI-controlled agents attempt to trick the player.
- Tools such as `Styled Lines <https://marketplace.unity.com/packages/tools/ai-ml-integration/style-text-webgl-ios-stand-alone-llm-llama-cpp-wrapper-292902>`__:
A proprietary, asynchronous inference wrapper for Unity3D game development, including pre-built mobile and web platform wrappers and a model example.
- Various other AI applications use llama.cpp as their inference engine;
for a detailed list, see the `user interfaces (UIs) section <https://github.com/ggml-org/llama.cpp?tab=readme-ov-file#description>`__.
For more use cases and recommendations, refer to the `AMD ROCm blog <https://rocm.blogs.amd.com/>`__,
where you can search for llama.cpp examples and best practices to optimize your workloads on AMD GPUs.
- The `Llama.cpp Meets Instinct: A New Era of Open-Source AI Acceleration <https://rocm.blogs.amd.com/ecosystems-and-partners/llama-cpp/README.html>`__
blog post outlines how the open-source llama.cpp framework enables efficient LLM inference—including interactive inference with ``llama-cli``,
server deployment with ``llama-server``, GGUF model preparation and quantization, performance benchmarking, and optimizations tailored for
AMD Instinct GPUs within the ROCm ecosystem.
.. _llama-cpp-docker-compat:
Compatibility matrix
================================================================================
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes `llama.cpp images <https://hub.docker.com/r/rocm/llama.cpp/tags>`__
with ROCm backends on Docker Hub. The following Docker image tags and associated
inventories represent the latest available llama.cpp versions from the official Docker Hub.
Click |docker-icon| to view the image on Docker Hub.
.. important::
@@ -107,32 +68,35 @@ Click |docker-icon| to view the image on Docker Hub.
- llama.cpp
- ROCm
- Ubuntu
- GPU
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/llama.cpp/llama.cpp-b6652.amd0_rocm7.0.0_ubuntu24.04_full/images/sha256-a94f0c7a598cc6504ff9e8371c016d7a2f93e69bf54a36c870f9522567201f10g"><i class="fab fa-docker fa-lg"></i> rocm/llama.cpp</a>
- .. raw:: html
<a href="https://hub.docker.com/layers/rocm/llama.cpp/llama.cpp-b6652.amd0_rocm7.0.0_ubuntu24.04_server/images/sha256-be175932c3c96e882dfbc7e20e0e834f58c89c2925f48b222837ee929dfc47ee"><i class="fab fa-docker fa-lg"></i> rocm/llama.cpp</a>
- .. raw:: html
<a href="https://hub.docker.com/layers/rocm/llama.cpp/llama.cpp-b6652.amd0_rocm7.0.0_ubuntu24.04_light/images/sha256-d8ba0c70603da502c879b1f8010b439c8e7fa9f6cbdac8bbbbbba97cb41ebc9e"><i class="fab fa-docker fa-lg"></i> rocm/llama.cpp</a>
- `b6652 <https://github.com/ROCm/llama.cpp/tree/release/b6652>`__
- `7.0.0 <https://repo.radeon.com/rocm/apt/7.0/>`__
- 24.04
- MI325X, MI300X, MI210
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/llama.cpp/llama.cpp-b6652.amd0_rocm7.0.0_ubuntu22.04_full/images/sha256-37582168984f25dce636cc7288298e06d94472ea35f65346b3541e6422b678ee"><i class="fab fa-docker fa-lg"></i> rocm/llama.cpp</a>
- .. raw:: html
<a href="https://hub.docker.com/layers/rocm/llama.cpp/llama.cpp-b6652.amd0_rocm7.0.0_ubuntu22.04_server/images/sha256-7e70578e6c3530c6591cc2c26da24a9ee68a20d318e12241de93c83224f83720"><i class="fab fa-docker fa-lg"></i> rocm/llama.cpp</a>
- .. raw:: html
<a href="https://hub.docker.com/layers/rocm/llama.cpp/llama.cpp-b6652.amd0_rocm7.0.0_ubuntu22.04_light/images/sha256-9a5231acf88b4a229677bc2c636ea3fe78a7a80f558bd80910b919855de93ad5"><i class="fab fa-docker fa-lg"></i> rocm/llama.cpp</a>
- `b6652 <https://github.com/ROCm/llama.cpp/tree/release/b6652>`__
- `7.0.0 <https://repo.radeon.com/rocm/apt/7.0/>`__
- 22.04
- MI325X, MI300X, MI210
* - .. raw:: html
@@ -146,6 +110,7 @@ Click |docker-icon| to view the image on Docker Hub.
- `b6356 <https://github.com/ROCm/llama.cpp/tree/release/b6356>`__
- `6.4.3 <https://repo.radeon.com/rocm/apt/6.4.3/>`__
- 24.04
- MI325X, MI300X, MI210
* - .. raw:: html
@@ -159,7 +124,7 @@ Click |docker-icon| to view the image on Docker Hub.
- `b6356 <https://github.com/ROCm/llama.cpp/tree/release/b6356>`__
- `6.4.3 <https://repo.radeon.com/rocm/apt/6.4.3/>`__
- 22.04
- MI325X, MI300X, MI210
* - .. raw:: html
@@ -173,6 +138,7 @@ Click |docker-icon| to view the image on Docker Hub.
- `b6356 <https://github.com/ROCm/llama.cpp/tree/release/b6356>`__
- `6.4.2 <https://repo.radeon.com/rocm/apt/6.4.2/>`__
- 24.04
- MI325X, MI300X, MI210
* - .. raw:: html
@@ -186,7 +152,7 @@ Click |docker-icon| to view the image on Docker Hub.
- `b6356 <https://github.com/ROCm/llama.cpp/tree/release/b6356>`__
- `6.4.2 <https://repo.radeon.com/rocm/apt/6.4.2/>`__
- 22.04
- MI325X, MI300X, MI210
* - .. raw:: html
@@ -200,6 +166,7 @@ Click |docker-icon| to view the image on Docker Hub.
- `b6356 <https://github.com/ROCm/llama.cpp/tree/release/b6356>`__
- `6.4.1 <https://repo.radeon.com/rocm/apt/6.4.1/>`__
- 24.04
- MI325X, MI300X, MI210
* - .. raw:: html
@@ -213,6 +180,7 @@ Click |docker-icon| to view the image on Docker Hub.
- `b6356 <https://github.com/ROCm/llama.cpp/tree/release/b6356>`__
- `6.4.1 <https://repo.radeon.com/rocm/apt/6.4.1/>`__
- 22.04
- MI325X, MI300X, MI210
* - .. raw:: html
@@ -226,7 +194,9 @@ Click |docker-icon| to view the image on Docker Hub.
- `b5997 <https://github.com/ROCm/llama.cpp/tree/release/b5997>`__
- `6.4.0 <https://repo.radeon.com/rocm/apt/6.4/>`__
- 24.04
- MI300X, MI210
.. _llama-cpp-key-rocm-libraries:
Key ROCm libraries for llama.cpp
================================================================================
@@ -269,6 +239,36 @@ your corresponding ROCm version.
- Can be used to enhance the flash attention performance on AMD compute, by enabling
the flag during compile time.
.. _llama-cpp-uses-recommendations:
Use cases and recommendations
================================================================================
llama.cpp can be applied in a variety of scenarios, particularly when you need to meet one or more of the following requirements:
- Plain C/C++ implementation with no external dependencies
- Support for 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory usage
- Custom HIP (Heterogeneous-compute Interface for Portability) kernels for running large language models (LLMs) on AMD GPUs (graphics processing units)
- CPU (central processing unit) + GPU (graphics processing unit) hybrid inference for partially accelerating models larger than the total available VRAM (video random-access memory)
llama.cpp is also used in a range of real-world applications, including:
- Games such as `Lucy's Labyrinth <https://github.com/MorganRO8/Lucys_Labyrinth>`__:
A simple maze game where AI-controlled agents attempt to trick the player.
- Tools such as `Styled Lines <https://marketplace.unity.com/packages/tools/ai-ml-integration/style-text-webgl-ios-stand-alone-llm-llama-cpp-wrapper-292902>`__:
A proprietary, asynchronous inference wrapper for Unity3D game development, including pre-built mobile and web platform wrappers and a model example.
- Various other AI applications use llama.cpp as their inference engine;
for a detailed list, see the `user interfaces (UIs) section <https://github.com/ggml-org/llama.cpp?tab=readme-ov-file#description>`__.
For more use cases and recommendations, refer to the `AMD ROCm blog <https://rocm.blogs.amd.com/>`__,
where you can search for llama.cpp examples and best practices to optimize your workloads on AMD GPUs.
- The `Llama.cpp Meets Instinct: A New Era of Open-Source AI Acceleration <https://rocm.blogs.amd.com/ecosystems-and-partners/llama-cpp/README.html>`__
blog post outlines how the open-source llama.cpp framework enables efficient LLM inference—including interactive inference with ``llama-cli``,
server deployment with ``llama-server``, GGUF model preparation and quantization, performance benchmarking, and optimizations tailored for
AMD Instinct GPUs within the ROCm ecosystem.
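The interactive ``llama-cli`` and ``llama-server`` workflow described in the blog post
above can also be exercised programmatically. The following is a minimal sketch, not an
official example: it assumes a ``llama-server`` instance from one of the Docker images
above is already running on the local host on its default port (8080) with some GGUF
model loaded; the model name in the payload is only a placeholder.

.. code-block:: python

   import requests

   # llama-server exposes an OpenAI-compatible HTTP API.
   url = "http://localhost:8080/v1/chat/completions"

   payload = {
       # Placeholder name; llama-server serves whichever GGUF model it was launched with.
       "model": "local-gguf-model",
       "messages": [
           {"role": "user", "content": "Summarize what llama.cpp is in one sentence."}
       ],
       "max_tokens": 64,
   }

   response = requests.post(url, json=payload, timeout=60)
   response.raise_for_status()
   print(response.json()["choices"][0]["message"]["content"])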
Previous versions
===============================================================================
See :doc:`rocm-install-on-linux:install/3rd-party/previous-versions/llama-cpp-history` to find documentation for previous releases

View File

@@ -2,7 +2,7 @@
.. meta::
:description: Megablocks compatibility
:keywords: GPU, megablocks, deep learning, framework compatibility
.. version-set:: rocm_version latest
@@ -10,64 +10,41 @@
Megablocks compatibility
********************************************************************************
`Megablocks <https://github.com/databricks/megablocks>`__ is a lightweight library
for mixture-of-experts `(MoE) <https://huggingface.co/blog/moe>`__ training.
The core of the system is efficient "dropless-MoE" and standard MoE layers.
Megablocks is integrated with `https://github.com/stanford-futuredata/Megatron-LM
<https://github.com/stanford-futuredata/Megatron-LM>`__,
where data and pipeline parallel training of MoEs is supported.
Support overview
================================================================================
- The ROCm-supported version of Megablocks is maintained in the official `https://github.com/ROCm/megablocks
<https://github.com/ROCm/megablocks>`__ repository, which differs from the
`https://github.com/stanford-futuredata/Megatron-LM <https://github.com/stanford-futuredata/Megatron-LM>`__ upstream repository.
- To get started and install Megablocks on ROCm, use the prebuilt :ref:`Docker image <megablocks-docker-compat>`,
which includes ROCm, Megablocks, and all required dependencies.
- See the :doc:`ROCm Megablocks installation guide <rocm-install-on-linux:install/3rd-party/megablocks-install>`
for installation and setup instructions.
- You can also consult the upstream `Installation guide <https://github.com/databricks/megablocks>`__
for additional context.
.. _megablocks-docker-compat:
Compatibility matrix
================================================================================
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes `Megablocks images <https://hub.docker.com/r/rocm/megablocks/tags>`__
with ROCm backends on Docker Hub. The following Docker image tag and associated
inventories represent the latest available Megablocks version from the official Docker Hub.
Click |docker-icon| to view the image on Docker Hub.
.. list-table::
@@ -80,6 +57,7 @@ Click |docker-icon| to view the image on Docker Hub.
- PyTorch
- Ubuntu
- Python
- GPU
* - .. raw:: html
@@ -89,5 +67,38 @@ Click |docker-icon| to view the image on Docker Hub.
- `2.4.0 <https://github.com/ROCm/pytorch/tree/release/2.4>`_
- 24.04
- `3.12.9 <https://www.python.org/downloads/release/python-3129/>`_
- MI300X
Supported models and features with ROCm 6.3.0
================================================================================
This section summarizes the Megablocks features supported by ROCm.
* Distributed Pre-training
* Activation Checkpointing and Recomputation
* Distributed Optimizer
* Mixture-of-Experts
* dropless-Mixture-of-Experts
.. _megablocks-recommendations:
Use cases and recommendations
================================================================================
* The `Efficient MoE training on AMD ROCm: How-to use Megablocks on AMD GPUs
<https://rocm.blogs.amd.com/artificial-intelligence/megablocks/README.html>`__
blog post explains how to leverage the ROCm platform for pre-training using the
Megablocks framework. It introduces a streamlined approach for training Mixture-of-Experts
(MoE) models using the Megablocks library on AMD hardware. Focusing on GPT-2, it
demonstrates how block-sparse computations can enhance scalability and efficiency in MoE
training. The guide provides step-by-step instructions for setting up the environment,
including cloning the repository, building the Docker image, and running the training container.
Additionally, it offers insights into utilizing the ``oscar-1GB.json`` dataset for pre-training
language models. By leveraging Megablocks and the ROCm platform, you can optimize your MoE
training workflows for large-scale transformer models.
It shows how to pre-process datasets and how to begin pre-training on AMD GPUs through:
* Single-GPU pre-training
* Multi-GPU pre-training

View File

@@ -2,7 +2,7 @@
.. meta::
:description: PyTorch compatibility
:keywords: GPU, PyTorch, deep learning, framework compatibility
.. version-set:: rocm_version latest
@@ -15,40 +15,42 @@ deep learning. PyTorch on ROCm provides mixed-precision and large-scale training
using `MIOpen <https://github.com/ROCm/MIOpen>`__ and
`RCCL <https://github.com/ROCm/rccl>`__ libraries.
PyTorch provides two high-level features:
- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system (rapid computation
of multiple partial derivatives or gradients)
Support overview
================================================================================
ROCm support for PyTorch is upstreamed into the official PyTorch repository.
ROCm development is aligned with the stable release of PyTorch, while upstream
PyTorch testing uses the stable release of ROCm to maintain consistency:
- The ROCm-supported version of PyTorch is maintained in the official `https://github.com/ROCm/pytorch
<https://github.com/ROCm/pytorch>`__ repository, which differs from the
`https://github.com/pytorch/pytorch <https://github.com/pytorch/pytorch>`__ upstream repository.
- To get started and install PyTorch on ROCm, use the prebuilt :ref:`Docker images <pytorch-docker-compat>`,
which include ROCm, PyTorch, and all required dependencies.
- See the :doc:`ROCm PyTorch installation guide <rocm-install-on-linux:install/3rd-party/pytorch-install>`
for installation and setup instructions.
- You can also consult the upstream `Installation guide <https://pytorch.org/get-started/locally/>`__ or
`Previous versions <https://pytorch.org/get-started/previous-versions/>`__ for additional context.
PyTorch includes tooling that generates HIP source code from the CUDA backend.
This approach allows PyTorch to support ROCm without requiring manual code
modifications. For more information, see :doc:`HIPIFY <hipify:index>`.
Version support
--------------------------------------------------------------------------------
AMD releases official `ROCm PyTorch Docker images <https://hub.docker.com/r/rocm/pytorch/tags>`_
quarterly alongside new ROCm releases. These images undergo full AMD testing.
.. _pytorch-recommendations:
@@ -73,12 +75,12 @@ Use cases and recommendations
* The :doc:`Instinct MI300X workload optimization guide </how-to/rocm-for-ai/inference-optimization/workload>`
provides detailed guidance on optimizing workloads for the AMD Instinct MI300X
GPU using ROCm. This guide helps users achieve optimal performance for
deep learning and other high-performance computing tasks on the MI300X
GPU.
* The :doc:`Inception with PyTorch documentation </conceptual/ai-pytorch-inception>`
describes how PyTorch integrates with ROCm for AI workloads. It outlines the
use of PyTorch on the ROCm platform and focuses on efficiently leveraging AMD
GPU hardware for training and inference tasks in AI applications.
@@ -89,9 +91,8 @@ For more use cases and recommendations, see `ROCm PyTorch blog posts <https://ro
Docker image compatibility
================================================================================
AMD validates and publishes `PyTorch images <https://hub.docker.com/r/rocm/pytorch/tags>`__
with ROCm backends on Docker Hub.
To find the right image tag, see the :ref:`PyTorch on ROCm installation
documentation <rocm-install-on-linux:pytorch-docker-support>` for a list of
@@ -348,7 +349,7 @@ with ROCm.
you need to explicitly move audio data (waveform tensor) to GPU using
``.to('cuda')``.
* - `torchtune <https://meta-pytorch.org/torchtune/stable/index.html>`_
- PyTorch-native library designed for fine-tuning large language models
(LLMs). Supports the full fine-tuning workflow and offers
compatibility with popular production inference systems.
@@ -360,21 +361,12 @@ with ROCm.
popular datasets, model architectures, and common image transformations
for computer vision applications.
* - `torchtext <https://docs.pytorch.org/text/stable/index.html>`_
- Text processing library for PyTorch. Provides data processing utilities
and popular datasets for natural language processing, including
tokenization, vocabulary management, and text embeddings.
**Note:** ``torchtext`` does not implement ROCm-specific kernels.
ROCm acceleration is provided through the underlying PyTorch framework
and ROCm library integration. Only official release exists.
* - `torchdata <https://meta-pytorch.org/data/beta/index.html#torchdata>`_
- Beta library of common modular data loading primitives for easily
constructing flexible and performant data pipelines, with features still
in prototype stage.
* - `torchrec <https://meta-pytorch.org/torchrec/>`_
- PyTorch domain library for common sparsity and parallelism primitives
needed for large-scale recommender systems, enabling authors to train
models with large embedding tables shared across many GPUs.
@@ -407,7 +399,40 @@ with ROCm.
**Note:** Only official release exists.
Key features and enhancements for PyTorch 2.9 with ROCm 7.1.1
================================================================================
- Scaled Dot Product Attention (SDPA) upgraded to use AOTriton version 0.11b.
- Default hipBLASLt support enabled for gfx908 architecture on ROCm 6.3 and later.
- MIOpen now supports channels last memory format for 3D convolutions and batch normalization.
- NHWC convolution operations in MIOpen optimized by eliminating unnecessary transpose operations.
- Improved tensor.item() performance by removing redundant synchronization.
- Enhanced performance for element-wise operations and reduction kernels.
- Added support for grouped GEMM operations through fbgemm_gpu generative AI components.
- Resolved device error in Inductor when using CUDA graph trees with HIP.
- Corrected logsumexp scaling in AOTriton-based SDPA implementation.
- Added stream graph capture status validation in memory copy synchronization functions.
Key features and enhancements for PyTorch 2.8 with ROCm 7.1
================================================================================
- MIOpen deep learning optimizations: Further optimized NHWC BatchNorm feature.
- Added float8 support for the DeepSpeed extension, allowing for decreased
memory footprint and increased throughput in training and inference workloads.
- ``torch.nn.functional.scaled_dot_product_attention`` now calls the optimized
flash attention kernel automatically (see the sketch below).
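The following minimal sketch, assuming a ROCm build of PyTorch with a visible AMD GPU,
shows the ``scaled_dot_product_attention`` call mentioned above; the shapes and dtypes
are illustrative only:

.. code-block:: python

   import torch
   import torch.nn.functional as F

   # Toy attention inputs: (batch, heads, sequence length, head dimension).
   q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
   k = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
   v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

   # On builds where the optimized kernels are available, this dispatches to the
   # flash attention backend automatically; no extra flags are set here.
   out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
   print(out.shape)  # torch.Size([2, 8, 128, 64])

On ROCm, the HIP device is exposed through the ``"cuda"`` device string, so the same
code runs unchanged on AMD GPUs.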
Key features and enhancements for PyTorch 2.7/2.8 with ROCm 7.0
================================================================================
- Enhanced TunableOp framework: Introduces ``tensorfloat32`` support for
@@ -417,7 +442,7 @@ Key features and enhancements for PyTorch 2.7 with ROCm 7.0
- Expanded GPU architecture support: Provides optimized support for newer GPU
architectures, including gfx1200 and gfx1201 with preferred hipBLASLt backend
selection, along with improvements for gfx950 and gfx1100 Series GPUs.
- Advanced Triton Integration: AOTriton 0.10b introduces official support for
gfx950 and gfx1201, along with experimental support for gfx1101, gfx1151,
@@ -442,10 +467,6 @@ Key features and enhancements for PyTorch 2.7 with ROCm 7.0
ROCm-specific test conditions, and enhanced unit test coverage for Flash
Attention and Memory Efficient operations.
- Build system and infrastructure improvements: Provides updated CentOS Stream 9
support, improved Docker configuration, migration to public MAGMA repository,
and enhanced QA automation scripts for PyTorch unit testing.
- Composable Kernel (CK) updates: Features updated CK submodule integration with
the latest optimizations and performance improvements for core mathematical
operations.
@@ -467,7 +488,7 @@ Key features and enhancements for PyTorch 2.7 with ROCm 7.0
network training or inference. For AMD platforms, ``amdclang++`` has been
validated as the supported compiler for building these extensions.
Known issues and notes for PyTorch 2.7/2.8 with ROCm 7.0 and ROCm 7.1
================================================================================
- The ``matmul.allow_fp16_reduced_precision_reduction`` and

View File

@@ -1,8 +1,8 @@
:orphan:
.. meta::
:description: Ray compatibility
:keywords: GPU, Ray, deep learning, framework compatibility
.. version-set:: rocm_version latest
@@ -12,43 +12,74 @@ Ray compatibility
Ray is a unified framework for scaling AI and Python applications from your laptop
to a full cluster, without changing your code. Ray consists of `a core distributed
runtime <https://docs.ray.io/en/latest/ray-core/walkthrough.html>`__ and a set of
`AI libraries <https://docs.ray.io/en/latest/ray-air/getting-started.html>`__ for
simplifying machine learning computations.
Ray is a general-purpose framework that runs many types of workloads efficiently.
Any Python application can be scaled with Ray, without extra infrastructure.
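As a minimal sketch of that "scale without changing your code" model (using only core
Ray APIs and no ROCm-specific calls), the same task-based code below runs on a laptop
or on a cluster; only the ``ray.init()`` target changes:

.. code-block:: python

   import ray

   # Starts a local Ray runtime; on a cluster, pass the cluster address instead.
   ray.init()

   @ray.remote
   def square(x):
       return x * x

   # Tasks are scheduled across whatever resources the runtime can see.
   futures = [square.remote(i) for i in range(8)]
   print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]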
Support overview
================================================================================
- The ROCm-supported version of Ray is maintained in the official `https://github.com/ROCm/ray
<https://github.com/ROCm/ray>`__ repository, which differs from the
`https://github.com/ray-project/ray <https://github.com/ray-project/ray>`__ upstream repository.
- To get started and install Ray on ROCm, use the prebuilt :ref:`Docker image <ray-docker-compat>`,
which includes ROCm, Ray, and all required dependencies.
- See the :doc:`ROCm Ray installation guide <rocm-install-on-linux:install/3rd-party/ray-install>`
for installation and setup instructions.
- You can also consult the upstream `Installation guide <https://docs.ray.io/en/latest/ray-overview/installation.html>`__
for additional context.
.. _ray-docker-compat:
Compatibility matrix
================================================================================
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes `ROCm Ray Docker images <https://hub.docker.com/r/rocm/ray/tags>`__
with ROCm backends on Docker Hub. The following Docker image tags and
associated inventories represent the latest Ray version from the official Docker Hub.
Click |docker-icon| to view the image on Docker Hub.
.. list-table::
:header-rows: 1
:class: docker-image-compatibility
* - Docker image
- ROCm
- Ray
- Pytorch
- Ubuntu
- Python
- GPU
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/ray/ray-2.51.1_rocm7.0.0_ubuntu22.04_py3.12_pytorch2.9.0/images/sha256-a02f6766b4ba406f88fd7e85707ec86c04b569834d869a08043ec9bcbd672168"><i class="fab fa-docker fa-lg"></i> rocm/ray</a>
- `7.0.0 <https://repo.radeon.com/rocm/apt/7.0/>`__
- `2.51.1 <https://github.com/ROCm/ray/tree/release/2.51.1>`__
- 2.9.0a0+git1c57644
- 22.04
- `3.12.12 <https://www.python.org/downloads/release/python-31212/>`__
- MI300X
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/ray/ray-2.48.0.post0_rocm6.4.1_ubuntu24.04_py3.12_pytorch2.6.0/images/sha256-0d166fe6bdced38338c78eedfb96eff92655fb797da3478a62dd636365133cc0"><i class="fab fa-docker fa-lg"></i> rocm/ray</a>
- `6.4.1 <https://repo.radeon.com/rocm/apt/6.4.1/>`__
- `2.48.0.post0 <https://github.com/ROCm/ray/tree/release/2.48.0.post0>`__
- 2.6.0+git684f6f2
- 24.04
- `3.12.10 <https://www.python.org/downloads/release/python-31210/>`__
- MI300X, MI210
Use cases and recommendations
================================================================================
@@ -77,35 +108,7 @@ topic <https://docs.ray.io/en/latest/ray-core/scheduling/accelerators.html#accel
of the Ray core documentation and refer to the `AMD ROCm blog <https://rocm.blogs.amd.com/>`__,
where you can search for Ray examples and best practices to optimize your workloads on AMD GPUs.
Previous versions
===============================================================================
See :doc:`rocm-install-on-linux:install/3rd-party/previous-versions/ray-history` to find documentation for previous releases
of the ``ROCm/ray`` Docker image.
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes ready-made `ROCm Ray Docker images <https://hub.docker.com/r/rocm/ray/tags>`__
with ROCm backends on Docker Hub. The following Docker image tags and
associated inventories represent the latest Ray version from the official Docker Hub and are validated for
`ROCm 6.4.1 <https://repo.radeon.com/rocm/apt/6.4.1/>`_. Click the |docker-icon|
icon to view the image on Docker Hub.
.. list-table::
:header-rows: 1
:class: docker-image-compatibility
* - Docker image
- Ray
- Pytorch
- Ubuntu
- Python
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/ray/ray-2.48.0.post0_rocm6.4.1_ubuntu24.04_py3.12_pytorch2.6.0/images/sha256-0d166fe6bdced38338c78eedfb96eff92655fb797da3478a62dd636365133cc0"><i class="fab fa-docker fa-lg"></i> rocm/ray</a>
- `2.48.0.post0 <https://github.com/ROCm/ray/tree/release/2.48.0.post0>`_
- 2.6.0+git684f6f2
- 24.04
- `3.12.10 <https://www.python.org/downloads/release/python-31210/>`_

View File

@@ -2,7 +2,7 @@
.. meta::
:description: Stanford Megatron-LM compatibility
:keywords: Stanford, Megatron-LM, deep learning, framework compatibility
.. version-set:: rocm_version latest
@@ -10,34 +10,76 @@
Stanford Megatron-LM compatibility
********************************************************************************
Stanford Megatron-LM is a large-scale language model training framework developed
by NVIDIA at `https://github.com/NVIDIA/Megatron-LM <https://github.com/NVIDIA/Megatron-LM>`_.
It is designed to train massive transformer-based language models efficiently by model
and data parallelism.
It provides efficient tensor, pipeline, and sequence-based model parallelism for
pre-training transformer-based language models such as GPT (Decoder Only), BERT
(Encoder Only), and T5 (Encoder-Decoder).
Support overview
================================================================================
- The ROCm-supported version of Stanford Megatron-LM is maintained in the official `https://github.com/ROCm/Stanford-Megatron-LM
<https://github.com/ROCm/Stanford-Megatron-LM>`__ repository, which differs from the
`https://github.com/stanford-futuredata/Megatron-LM <https://github.com/stanford-futuredata/Megatron-LM>`__ upstream repository.
- To get started and install Stanford Megatron-LM on ROCm, use the prebuilt :ref:`Docker image <megatron-lm-docker-compat>`,
which includes ROCm, Stanford Megatron-LM, and all required dependencies.
Supported models and features - See the :doc:`ROCm Stanford Megatron-LM installation guide <rocm-install-on-linux:install/3rd-party/stanford-megatron-lm-install>`
for installation and setup instructions.
- You can also consult the upstream `Installation guide <https://github.com/NVIDIA/Megatron-LM>`__
for additional context.
.. _megatron-lm-docker-compat:
Compatibility matrix
================================================================================
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes `Stanford Megatron-LM images <https://hub.docker.com/r/rocm/stanford-megatron-lm/tags>`_
with ROCm and Pytorch backends on Docker Hub. The following Docker image tags and associated
inventories represent the latest Stanford Megatron-LM version from the official Docker Hub.
Click |docker-icon| to view the image on Docker Hub.
.. list-table::
:header-rows: 1
:class: docker-image-compatibility
* - Docker image
- ROCm
- Stanford Megatron-LM
- PyTorch
- Ubuntu
- Python
- GPU
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/stanford-megatron-lm/stanford-megatron-lm85f95ae_rocm6.3.0_ubuntu24.04_py3.12_pytorch2.4.0/images/sha256-070556f078be10888a1421a2cb4f48c29f28b02bfeddae02588d1f7fc02a96a6"><i class="fab fa-docker fa-lg"></i> rocm/stanford-megatron-lm</a>
- `6.3.0 <https://repo.radeon.com/rocm/apt/6.3/>`_
- `85f95ae <https://github.com/stanford-futuredata/Megatron-LM/commit/85f95aef3b648075fe6f291c86714fdcbd9cd1f5>`_
- `2.4.0 <https://github.com/ROCm/pytorch/tree/release/2.4>`_
- 24.04
- `3.12.9 <https://www.python.org/downloads/release/python-3129/>`_
- MI300X
Supported models and features with ROCm 6.3.0
================================================================================
This section details models & features that are supported by the ROCm version on Stanford Megatron-LM.
Models:
* BERT
* GPT
* T5
* ICT
@@ -54,47 +96,21 @@ Features:
Use cases and recommendations
================================================================================
The following blog post mentions Megablocks, but you can run Stanford Megatron-LM with the same steps to pre-process datasets on AMD GPUs:
* The `Efficient MoE training on AMD ROCm: How-to use Megablocks on AMD GPUs
<https://rocm.blogs.amd.com/artificial-intelligence/megablocks/README.html>`__
blog post explains how to leverage the ROCm platform for pre-training using the
Megablocks framework. It introduces a streamlined approach for training Mixture-of-Experts
(MoE) models using the Megablocks library on AMD hardware. Focusing on GPT-2, it
demonstrates how block-sparse computations can enhance scalability and efficiency in MoE
training. The guide provides step-by-step instructions for setting up the environment,
including cloning the repository, building the Docker image, and running the training container.
Additionally, it offers insights into utilizing the ``oscar-1GB.json`` dataset for pre-training
language models. By leveraging Megablocks and the ROCm platform, you can optimize your MoE
training workflows for large-scale transformer models.
It shows how to pre-process datasets and how to begin pre-training on AMD GPUs through:
* Single-GPU pre-training
* Multi-GPU pre-training
Docker image compatibility
================================================================================
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes `Stanford Megatron-LM images <https://hub.docker.com/r/rocm/megatron-lm>`_
with ROCm and Pytorch backends on Docker Hub. The following Docker image tags and associated
inventories represent the latest Megatron-LM version from the official Docker Hub.
The Docker images have been validated for `ROCm 6.3.0 <https://repo.radeon.com/rocm/apt/6.3/>`_.
Click |docker-icon| to view the image on Docker Hub.
.. list-table::
:header-rows: 1
:class: docker-image-compatibility
* - Docker image
- Stanford Megatron-LM
- PyTorch
- Ubuntu
- Python
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/stanford-megatron-lm/stanford-megatron-lm85f95ae_rocm6.3.0_ubuntu24.04_py3.12_pytorch2.4.0/images/sha256-070556f078be10888a1421a2cb4f48c29f28b02bfeddae02588d1f7fc02a96a6"><i class="fab fa-docker fa-lg"></i></a>
- `85f95ae <https://github.com/stanford-futuredata/Megatron-LM/commit/85f95aef3b648075fe6f291c86714fdcbd9cd1f5>`_
- `2.4.0 <https://github.com/ROCm/pytorch/tree/release/2.4>`_
- 24.04
- `3.12.9 <https://www.python.org/downloads/release/python-3129/>`_

View File

@@ -1,76 +0,0 @@
:orphan:
.. meta::
:description: Taichi compatibility
:keywords: GPU, Taichi compatibility
.. version-set:: rocm_version latest
*******************************************************************************
Taichi compatibility
*******************************************************************************
`Taichi <https://www.taichi-lang.org/>`_ is an open-source, imperative, and parallel
programming language designed for high-performance numerical computation.
Embedded in Python, it leverages just-in-time (JIT) compilation frameworks such as LLVM to accelerate
compute-intensive Python code by compiling it to native GPU or CPU instructions.
Taichi is widely used across various domains, including real-time physical simulation,
numerical computing, augmented reality, artificial intelligence, computer vision, robotics,
visual effects in film and gaming, and general-purpose computing.
* ROCm support for Taichi is hosted in the official `https://github.com/ROCm/taichi <https://github.com/ROCm/taichi>`_ repository.
* Due to independent compatibility considerations, this location differs from the `https://github.com/taichi-dev <https://github.com/taichi-dev>`_ upstream repository.
* Use the prebuilt :ref:`Docker image <taichi-docker-compat>` with ROCm, PyTorch, and Taichi preinstalled.
* See the :doc:`ROCm Taichi installation guide <rocm-install-on-linux:install/3rd-party/taichi-install>` to install and get started.
.. note::
Taichi is supported on ROCm 6.3.2.
Supported devices and features
===============================================================================
There is support through the ROCm software stack for all Taichi GPU features on AMD Instinct MI250X and MI210X series GPUs, with the exception of Taichi's GPU rendering system, CGUI.
AMD Instinct MI300X series GPUs will be supported by November.
.. _taichi-recommendations:
Use cases and recommendations
================================================================================
To fully leverage Taichi's performance capabilities in compute-intensive tasks, it is best to adhere to specific coding patterns and utilize Taichi decorators.
A collection of example use cases is available in the `https://github.com/ROCm/taichi_examples <https://github.com/ROCm/taichi_examples>`_ repository,
providing practical insights and foundational knowledge for working with the Taichi programming language.
You can also refer to the `AMD ROCm blog <https://rocm.blogs.amd.com/>`_ to search for Taichi examples and best practices to optimize your workflows on AMD GPUs.
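As a minimal sketch of those coding patterns (independent of the repository above and
assuming a GPU backend is available to Taichi), a decorated kernel offloads the
parallel loop below to the device:

.. code-block:: python

   import taichi as ti

   # Ask Taichi for a GPU backend; it falls back according to its own init rules.
   ti.init(arch=ti.gpu)

   n = 1024
   x = ti.field(dtype=ti.f32, shape=n)

   @ti.kernel
   def fill_squares():
       # The outermost loop in a Taichi kernel is parallelized automatically.
       for i in x:
           x[i] = i * i

   fill_squares()
   print(x[10])  # 100.0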
.. _taichi-docker-compat:
Docker image compatibility
================================================================================
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes ready-made `ROCm Taichi Docker images <https://hub.docker.com/r/rocm/taichi/tags>`_
with ROCm backends on Docker Hub. The following Docker image tags and associated inventories
represent the latest Taichi version from the official Docker Hub.
The Docker images have been validated for `ROCm 6.3.2 <https://rocm.docs.amd.com/en/docs-6.3.2/about/release-notes.html>`_.
Click |docker-icon| to view the image on Docker Hub.
.. list-table::
:header-rows: 1
:class: docker-image-compatibility
* - Docker image
- ROCm
- Taichi
- Ubuntu
- Python
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/taichi/taichi-1.8.0b1_rocm6.3.2_ubuntu22.04_py3.10.12/images/sha256-e016964a751e6a92199032d23e70fa3a564fff8555afe85cd718f8aa63f11fc6"><i class="fab fa-docker fa-lg"></i> rocm/taichi</a>
- `6.3.2 <https://repo.radeon.com/rocm/apt/6.3.2/>`_
- `1.8.0b1 <https://github.com/taichi-dev/taichi>`_
- 22.04
- `3.10.12 <https://www.python.org/downloads/release/python-31012/>`_

View File

@@ -2,7 +2,7 @@
.. meta::
:description: TensorFlow compatibility
:keywords: GPU, TensorFlow, deep learning, framework compatibility
.. version-set:: rocm_version latest
@@ -12,37 +12,33 @@ TensorFlow compatibility
`TensorFlow <https://www.tensorflow.org/>`__ is an open-source library for
solving machine learning, deep learning, and AI problems. It can solve many
problems across different sectors and industries, but primarily focuses on
neural network training and inference. It is one of the most popular deep
learning frameworks and is very active in open-source development.
Support overview
================================================================================
- The ROCm-supported version of TensorFlow is maintained in the official `https://github.com/ROCm/tensorflow-upstream
<https://github.com/ROCm/tensorflow-upstream>`__ repository, which differs from the
`https://github.com/tensorflow/tensorflow <https://github.com/tensorflow/tensorflow>`__ upstream repository.
- To get started and install TensorFlow on ROCm, use the prebuilt :ref:`Docker images <tensorflow-docker-compat>`,
which include ROCm, TensorFlow, and all required dependencies.
- See the :doc:`ROCm TensorFlow installation guide <rocm-install-on-linux:install/3rd-party/tensorflow-install>`
for installation and setup instructions.
- You can also consult the `TensorFlow API versions <https://www.tensorflow.org/versions>`__ list
for additional context.
Version support
--------------------------------------------------------------------------------
The `official TensorFlow repository <http://github.com/tensorflow/tensorflow>`__
includes full ROCm support. AMD maintains a TensorFlow `ROCm repository
<http://github.com/rocm/tensorflow-upstream>`__ in order to quickly add bug
fixes, updates, and support for the latest ROCm versions.
- ROCm TensorFlow release:
- Offers :ref:`Docker images <tensorflow-docker-compat>` with
ROCm and TensorFlow pre-installed.
- ROCm TensorFlow repository: `<https://github.com/ROCm/tensorflow-upstream>`__
- See the :doc:`ROCm TensorFlow installation guide <rocm-install-on-linux:install/3rd-party/tensorflow-install>`
to get started.
- Official TensorFlow release:
- Official TensorFlow repository: `<https://github.com/tensorflow/tensorflow>`__
- See the `TensorFlow API versions <https://www.tensorflow.org/versions>`__ list.
.. note::
The official TensorFlow documentation does not cover ROCm support. Use the
ROCm documentation for installation instructions for TensorFlow on ROCm.
See :doc:`rocm-install-on-linux:install/3rd-party/tensorflow-install`.
.. _tensorflow-docker-compat:
@@ -140,7 +136,7 @@ The following section maps supported data types and GPU-accelerated TensorFlow
features to their minimum supported ROCm and TensorFlow versions. features to their minimum supported ROCm and TensorFlow versions.
Data types Data types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ---------------
The data type of a tensor is specified using the ``dtype`` attribute or The data type of a tensor is specified using the ``dtype`` attribute or
argument, and TensorFlow supports a wide range of data types for different use argument, and TensorFlow supports a wide range of data types for different use
@@ -258,7 +254,7 @@ are as follows:
- 1.7 - 1.7
Features Features
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ---------------
This table provides an overview of key features in TensorFlow and their This table provides an overview of key features in TensorFlow and their
availability in ROCm. availability in ROCm.
@@ -350,7 +346,7 @@ availability in ROCm.
- 1.9.2 - 1.9.2
Distributed library features Distributed library features
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -----------------------------------
Enables developers to scale computations across multiple devices on a single machine or Enables developers to scale computations across multiple devices on a single machine or
across multiple machines. across multiple machines.

View File

@@ -2,7 +2,7 @@
.. meta:: .. meta::
:description: verl compatibility :description: verl compatibility
:keywords: GPU, verl compatibility :keywords: GPU, verl, deep learning, framework compatibility
.. version-set:: rocm_version latest .. version-set:: rocm_version latest
@@ -10,77 +10,109 @@
verl compatibility verl compatibility
******************************************************************************* *******************************************************************************
Volcano Engine Reinforcement Learning for LLMs (verl) is a reinforcement learning framework designed for large language models (LLMs). Volcano Engine Reinforcement Learning for LLMs (`verl <https://verl.readthedocs.io/en/latest/>`__)
verl offers a scalable, open-source fine-tuning solution optimized for AMD Instinct GPUs with full ROCm support. is a reinforcement learning framework designed for large language models (LLMs).
verl offers a scalable, open-source fine-tuning solution by using a hybrid programming model
that makes it easy to define and run complex post-training dataflows efficiently.
* See the `verl documentation <https://verl.readthedocs.io/en/latest/>`_ for more information about verl. Its modular APIs separate computation from data, allowing smooth integration with other frameworks.
* The official verl GitHub repository is `https://github.com/volcengine/verl <https://github.com/volcengine/verl>`_. It also supports flexible model placement across GPUs for efficient scaling on different cluster sizes.
* Use the AMD-validated :ref:`Docker images <verl-docker-compat>` with ROCm and verl preinstalled. verl achieves high training and generation throughput by building on existing LLM frameworks.
* See the :doc:`ROCm verl installation guide <rocm-install-on-linux:install/3rd-party/verl-install>` to install and get started. Its 3D-HybridEngine reduces memory use and communication overhead when switching between training
and inference, improving overall performance.
.. note:: Support overview
verl is supported on ROCm 6.2.0.
.. _verl-recommendations:
Use cases and recommendations
================================================================================ ================================================================================
The benefits of verl in large-scale reinforcement learning from human feedback (RLHF) are discussed in the `Reinforcement Learning from Human Feedback on AMD GPUs with verl and ROCm Integration <https://rocm.blogs.amd.com/artificial-intelligence/verl-large-scale/README.html>`_ blog. - The ROCm-supported version of verl is maintained in the official `https://github.com/ROCm/verl
<https://github.com/ROCm/verl>`__ repository, which differs from the
`https://github.com/volcengine/verl <https://github.com/volcengine/verl>`__ upstream repository.
.. _verl-supported_features: - To get started and install verl on ROCm, use the prebuilt :ref:`Docker image <verl-docker-compat>`,
which includes ROCm, verl, and all required dependencies.
Supported features - See the :doc:`ROCm verl installation guide <rocm-install-on-linux:install/3rd-party/verl-install>`
=============================================================================== for installation and setup instructions.
The following table shows verl on ROCm support for GPU-accelerated modules. - You can also consult the upstream `verl documentation <https://verl.readthedocs.io/en/latest/>`__
for additional context.
.. list-table::
:header-rows: 1
* - Module
- Description
- verl version
- ROCm version
* - ``FSDP``
- Training engine
- 0.3.0.post0
- 6.2.0
* - ``vllm``
- Inference engine
- 0.3.0.post0
- 6.2.0
.. _verl-docker-compat: .. _verl-docker-compat:
Docker image compatibility Compatibility matrix
================================================================================ ================================================================================
.. |docker-icon| raw:: html .. |docker-icon| raw:: html
<i class="fab fa-docker"></i> <i class="fab fa-docker"></i>
AMD validates and publishes ready-made `ROCm verl Docker images <https://hub.docker.com/r/rocm/verl/tags>`_ AMD validates and publishes `verl Docker images <https://hub.docker.com/r/rocm/verl/tags>`_
with ROCm backends on Docker Hub. The following Docker image tags and associated inventories represent the available verl versions from the official Docker Hub. with ROCm backends on Docker Hub. The following Docker image tag and associated inventories
represent the latest verl version from the official Docker Hub.
Click |docker-icon| to view the image on Docker Hub.
.. list-table:: .. list-table::
:header-rows: 1 :header-rows: 1
:class: docker-image-compatibility
* - Docker image * - Docker image
- ROCm - ROCm
- verl - verl
- Ubuntu - Ubuntu
- Pytorch - PyTorch
- Python - Python
- vllm - vllm
- GPU
* - .. raw:: html * - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/verl/verl-0.3.0.post0_rocm6.2_vllm0.6.3/images/sha256-cbe423803fd7850448b22444176bee06f4dcf22cd3c94c27732752d3a39b04b2"><i class="fab fa-docker fa-lg"></i> rocm/verl</a> <a href="https://hub.docker.com/layers/rocm/verl/verl-0.6.0.amd0_rocm7.0_vllm0.11.0.dev/images/sha256-f70a3ebc94c1f66de42a2fcc3f8a6a8d6d0881eb0e65b6958d7d6d24b3eecb0d"><i class="fab fa-docker fa-lg"></i> rocm/verl</a>
- `6.2.0 <https://repo.radeon.com/rocm/apt/6.2/>`_ - `7.0.0 <https://repo.radeon.com/rocm/apt/7.0/>`__
- `0.3.0post0 <https://github.com/volcengine/verl/releases/tag/v0.3.0.post0>`_ - `0.6.0 <https://github.com/volcengine/verl/releases/tag/v0.6.0>`__
- 20.04 - 22.04
- `2.5.0 <https://github.com/ROCm/pytorch/tree/release/2.5>`_ - `2.9.0 <https://github.com/ROCm/pytorch/tree/release/2.9-rocm7.x-gfx115x>`__
- `3.9.19 <https://www.python.org/downloads/release/python-3919/>`_ - `3.12.11 <https://www.python.org/downloads/release/python-31211/>`__
- `0.6.3 <https://github.com/vllm-project/vllm/releases/tag/v0.6.3>`_ - `0.11.0 <https://github.com/vllm-project/vllm/releases/tag/v0.11.0>`__
- MI300X
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/verl/verl-0.3.0.post0_rocm6.2_vllm0.6.3/images/sha256-cbe423803fd7850448b22444176bee06f4dcf22cd3c94c27732752d3a39b04b2"><i class="fab fa-docker fa-lg"></i> rocm/verl</a>
- `6.2.0 <https://repo.radeon.com/rocm/apt/6.2/>`__
- `0.3.0.post0 <https://github.com/volcengine/verl/releases/tag/v0.3.0.post0>`__
- 20.04
- `2.5.0 <https://github.com/ROCm/pytorch/tree/release/2.5>`__
- `3.9.19 <https://www.python.org/downloads/release/python-3919/>`__
- `0.6.3 <https://github.com/vllm-project/vllm/releases/tag/v0.6.3>`__
- MI300X
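To try the image locally, here is a minimal sketch. It assumes the image tag shown in the
Docker Hub link above and the typical ROCm container device flags; adjust both for your
release and system.
.. code-block:: shell
   # Pull the validated verl image (tag taken from the Docker Hub entry above).
   docker pull rocm/verl:verl-0.6.0.amd0_rocm7.0_vllm0.11.0.dev
   # Start an interactive container with GPU access (typical ROCm device flags).
   docker run -it --rm \
       --device=/dev/kfd --device=/dev/dri \
       --group-add video --ipc=host \
       rocm/verl:verl-0.6.0.amd0_rocm7.0_vllm0.11.0.dev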
.. _verl-supported_features:
Supported modules with verl on ROCm
===============================================================================
The following GPU-accelerated modules are supported with verl on ROCm:
- ``FSDP``: Training engine
- ``vllm``: Inference engine
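As a rough sketch of how these modules are selected at run time: the option names below follow
verl's Hydra-style example configuration and may differ between verl releases, and the model and
dataset paths are placeholders, not validated values.
.. code-block:: shell
   # Illustrative only: use FSDP as the training backend and vLLM for rollout generation.
   python3 -m verl.trainer.main_ppo \
       actor_rollout_ref.actor.strategy=fsdp \
       actor_rollout_ref.rollout.name=vllm \
       actor_rollout_ref.model.path=/path/to/model \
       data.train_files=/path/to/train.parquet \
       trainer.n_gpus_per_node=8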
.. _verl-recommendations:
Use cases and recommendations
================================================================================
* The benefits of verl in large-scale reinforcement learning from human feedback
(RLHF) are discussed in the `Reinforcement Learning from Human Feedback on AMD
GPUs with verl and ROCm Integration <https://rocm.blogs.amd.com/artificial-intelligence/verl-large-scale/README.html>`__
blog. The blog post outlines how the Volcano Engine Reinforcement Learning
(verl) framework integrates with the AMD ROCm platform to optimize training on
AMD Instinct™ GPUs. The guide details how to build a Docker image and set up
single-node and multi-node training environments, and it highlights performance
benchmarks demonstrating improved throughput and convergence accuracy.
This resource serves as a comprehensive starting point for deploying verl on AMD GPUs,
facilitating efficient RLHF training workflows.
Previous versions
===============================================================================
See :doc:`rocm-install-on-linux:install/3rd-party/previous-versions/verl-history` to find documentation for previous releases
of the ``ROCm/verl`` Docker image.

View File

@@ -13,22 +13,22 @@
:gutter: 1 :gutter: 1
:::{grid-item-card} :::{grid-item-card}
**AMD Instinct MI300 series** **AMD Instinct MI300 Series**
Review hardware aspects of the AMD Instinct™ MI300 series of GPU accelerators and the CDNA™ 3 Review hardware aspects of the AMD Instinct™ MI300 Series GPUs and the CDNA™ 3
architecture. architecture.
* [AMD Instinct™ MI300 microarchitecture](./gpu-arch/mi300.md) * [AMD Instinct™ MI300 microarchitecture](./gpu-arch/mi300.md)
* [AMD Instinct MI300/CDNA3 ISA](https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/instruction-set-architectures/amd-instinct-mi300-cdna3-instruction-set-architecture.pdf) * [AMD Instinct MI300/CDNA3 ISA](https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/instruction-set-architectures/amd-instinct-mi300-cdna3-instruction-set-architecture.pdf)
* [White paper](https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/white-papers/amd-cdna-3-white-paper.pdf) * [White paper](https://www.amd.com/content/dam/amd/en/documents/instinct-tech-docs/white-papers/amd-cdna-3-white-paper.pdf)
* [MI300 performance counters](./gpu-arch/mi300-mi200-performance-counters.rst) * [MI300 performance counters](./gpu-arch/mi300-mi200-performance-counters.rst)
* [MI350 series performance counters](./gpu-arch/mi350-performance-counters.rst) * [MI350 Series performance counters](./gpu-arch/mi350-performance-counters.rst)
::: :::
:::{grid-item-card} :::{grid-item-card}
**AMD Instinct MI200 series** **AMD Instinct MI200 Series**
Review hardware aspects of the AMD Instinct™ MI200 series of GPU accelerators and the CDNA™ 2 Review hardware aspects of the AMD Instinct™ MI200 Series GPUs and the CDNA™ 2
architecture. architecture.
* [AMD Instinct™ MI250 microarchitecture](./gpu-arch/mi250.md) * [AMD Instinct™ MI250 microarchitecture](./gpu-arch/mi250.md)
@@ -41,7 +41,7 @@ architecture.
:::{grid-item-card} :::{grid-item-card}
**AMD Instinct MI100** **AMD Instinct MI100**
Review hardware aspects of the AMD Instinct™ MI100 series of GPU accelerators and the CDNA™ 1 Review hardware aspects of the AMD Instinct™ MI100 Series GPUs and the CDNA™ 1
architecture. architecture.
* [AMD Instinct™ MI100 microarchitecture](./gpu-arch/mi100.md) * [AMD Instinct™ MI100 microarchitecture](./gpu-arch/mi100.md)

View File

@@ -1,14 +1,14 @@
--- ---
myst: myst:
html_meta: html_meta:
"description lang=en": "Learn about the AMD Instinct MI100 series architecture." "description lang=en": "Learn about the AMD Instinct MI100 Series architecture."
"keywords": "Instinct, MI100, microarchitecture, AMD, ROCm" "keywords": "Instinct, MI100, microarchitecture, AMD, ROCm"
--- ---
# AMD Instinct™ MI100 microarchitecture # AMD Instinct™ MI100 microarchitecture
The following image shows the node-level architecture of a system that The following image shows the node-level architecture of a system that
comprises two AMD EPYC™ processors and (up to) eight AMD Instinct™ accelerators. comprises two AMD EPYC™ processors and (up to) eight AMD Instinct™ GPUs.
The two EPYC processors are connected to each other with the AMD Infinity™ The two EPYC processors are connected to each other with the AMD Infinity™
fabric, which provides high-bandwidth (up to 18 GT/sec), coherent links such fabric, which provides high-bandwidth (up to 18 GT/sec), coherent links such
that each processor can access the available node memory as a single that each processor can access the available node memory as a single
@@ -18,29 +18,29 @@ available to connect the processors plus one PCIe Gen 4 x16 link per processor
can attach additional I/O devices such as the host adapters for the network can attach additional I/O devices such as the host adapters for the network
fabric. fabric.
![Structure of a single GCD in the AMD Instinct MI100 accelerator](../../data/conceptual/gpu-arch/image004.png "Node-level system architecture with two AMD EPYC™ processors and eight AMD Instinct™ accelerators.") ![Structure of a single GCD in the AMD Instinct MI100 GPU](../../data/conceptual/gpu-arch/image004.png "Node-level system architecture with two AMD EPYC™ processors and eight AMD Instinct™ GPUs.")
In a typical node configuration, each processor can host up to four AMD In a typical node configuration, each processor can host up to four AMD
Instinct™ accelerators that are attached using PCIe Gen 4 links at 16 GT/sec, Instinct™ GPUs that are attached using PCIe Gen 4 links at 16 GT/sec,
which corresponds to a peak bidirectional link bandwidth of 32 GB/sec. Each hive which corresponds to a peak bidirectional link bandwidth of 32 GB/sec. Each hive
of four accelerators can participate in a fully connected, coherent AMD of four GPUs can participate in a fully connected, coherent AMD
Instinct™ fabric that connects the four accelerators using 23 GT/sec AMD Instinct™ fabric that connects the four GPUs using 23 GT/sec AMD
Infinity fabric links that run at a higher frequency than the inter-processor Infinity fabric links that run at a higher frequency than the inter-processor
links. This inter-GPU link can be established in certified server systems if the links. This inter-GPU link can be established in certified server systems if the
GPUs are mounted in neighboring PCIe slots by installing the AMD Infinity GPUs are mounted in neighboring PCIe slots by installing the AMD Infinity
Fabric™ bridge for the AMD Instinct™ accelerators. Fabric™ bridge for the AMD Instinct™ GPUs.
## Microarchitecture ## Microarchitecture
The microarchitecture of the AMD Instinct accelerators is based on the AMD CDNA The microarchitecture of the AMD Instinct GPUs is based on the AMD CDNA
architecture, which targets compute applications such as high-performance architecture, which targets compute applications such as high-performance
computing (HPC) and AI & machine learning (ML) that run on everything from computing (HPC) and AI & machine learning (ML) that run on everything from
individual servers to the world's largest exascale supercomputers. The overall individual servers to the world's largest exascale supercomputers. The overall
system architecture is designed for extreme scalability and compute performance. system architecture is designed for extreme scalability and compute performance.
![Structure of the AMD Instinct accelerator (MI100 generation)](../../data/conceptual/gpu-arch/image005.png "Structure of the AMD Instinct accelerator (MI100 generation)") ![Structure of the AMD Instinct GPU (MI100 generation)](../../data/conceptual/gpu-arch/image005.png "Structure of the AMD Instinct GPU (MI100 generation)")
The above image shows the AMD Instinct accelerator with its PCIe Gen 4 x16 The above image shows the AMD Instinct GPU with its PCIe Gen 4 x16
link (16 GT/sec, at the bottom) that connects the GPU to (one of) the host link (16 GT/sec, at the bottom) that connects the GPU to (one of) the host
processor(s). It also shows the three AMD Infinity Fabric ports that provide processor(s). It also shows the three AMD Infinity Fabric ports that provide
high-speed links (23 GT/sec, also at the bottom) to the other GPUs of the local high-speed links (23 GT/sec, also at the bottom) to the other GPUs of the local
@@ -48,7 +48,7 @@ hive.
On the left and right of the floor plan, the High Bandwidth Memory (HBM) On the left and right of the floor plan, the High Bandwidth Memory (HBM)
attaches via the GPU memory controller. The MI100 generation of the AMD attaches via the GPU memory controller. The MI100 generation of the AMD
Instinct accelerator offers four stacks of HBM generation 2 (HBM2) for a total Instinct GPU offers four stacks of HBM generation 2 (HBM2) for a total
of 32 GB with a 4,096-bit-wide memory interface. The peak memory bandwidth of the of 32 GB with a 4,096-bit-wide memory interface. The peak memory bandwidth of the
attached HBM2 is 1.228 TB/sec at a memory clock frequency of 1.2 GHz. attached HBM2 is 1.228 TB/sec at a memory clock frequency of 1.2 GHz.
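As a quick back-of-the-envelope check (assuming the standard double data rate of HBM2, that is,
two transfers per memory clock), this figure follows directly from the interface width and clock
frequency quoted above:
$$
4096\ \text{bit} \times 1.2\ \text{GHz} \times 2 \div 8\ \text{bit/byte} \approx 1.23\ \text{TB/s}
$$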
@@ -64,7 +64,7 @@ Therefore, the theoretical maximum FP64 peak performance is 11.5 TFLOPS
![Block diagram of an MI100 compute unit with detailed SIMD view of the AMD CDNA architecture](../../data/conceptual/gpu-arch/image006.png "An MI100 compute unit with detailed SIMD view of the AMD CDNA architecture") ![Block diagram of an MI100 compute unit with detailed SIMD view of the AMD CDNA architecture](../../data/conceptual/gpu-arch/image006.png "An MI100 compute unit with detailed SIMD view of the AMD CDNA architecture")
The preceding image shows the block diagram of a single CU of an AMD Instinct™ The preceding image shows the block diagram of a single CU of an AMD Instinct™
MI100 accelerator and summarizes how instructions flow through the execution MI100 GPU and summarizes how instructions flow through the execution
engines. The CU fetches the instructions via a 32KB instruction cache and moves engines. The CU fetches the instructions via a 32KB instruction cache and moves
them forward to execution via a dispatcher. The CU can handle up to ten them forward to execution via a dispatcher. The CU can handle up to ten
wavefronts at a time and feed their instructions into the execution unit. The wavefronts at a time and feed their instructions into the execution unit. The

View File

@@ -1,13 +1,13 @@
--- ---
myst: myst:
html_meta: html_meta:
"description lang=en": "Learn about the AMD Instinct MI250 series architecture." "description lang=en": "Learn about the AMD Instinct MI250 Series architecture."
"keywords": "Instinct, MI250, microarchitecture, AMD, ROCm" "keywords": "Instinct, MI250, microarchitecture, AMD, ROCm"
--- ---
# AMD Instinct™ MI250 microarchitecture # AMD Instinct™ MI250 microarchitecture
The microarchitecture of the AMD Instinct MI250 accelerators is based on the The microarchitecture of the AMD Instinct MI250 GPU is based on the
AMD CDNA 2 architecture that targets compute applications such as HPC, AMD CDNA 2 architecture that targets compute applications such as HPC,
artificial intelligence (AI), and machine learning (ML) and that run on artificial intelligence (AI), and machine learning (ML) and that run on
everything from individual servers to the world's largest exascale everything from individual servers to the world's largest exascale
@@ -40,7 +40,7 @@ execution units (also called matrix cores), which are geared toward executing
matrix operations like matrix-matrix multiplications. For FP64, the peak matrix operations like matrix-matrix multiplications. For FP64, the peak
performance of these units amounts to 90.5 TFLOPS. performance of these units amounts to 90.5 TFLOPS.
![Structure of a single GCD in the AMD Instinct MI250 accelerator.](../../data/conceptual/gpu-arch/image001.png "Structure of a single GCD in the AMD Instinct MI250 accelerator.") ![Structure of a single GCD in the AMD Instinct MI250 GPU.](../../data/conceptual/gpu-arch/image001.png "Structure of a single GCD in the AMD Instinct MI250 GPU.")
```{list-table} Peak-performance capabilities of the MI250 OAM for different data types. ```{list-table} Peak-performance capabilities of the MI250 OAM for different data types.
:header-rows: 1 :header-rows: 1
@@ -84,16 +84,9 @@ performance of these units amounts to 90.5 TFLOPS.
- 362.1 - 362.1
``` ```
The above table summarizes the aggregated peak performance of the AMD The above table summarizes the aggregated peak performance of the AMD Instinct MI250 Open Compute Platform (OCP) Open Accelerator Modules (OAMs) and its two GCDs for different data types and execution units. The middle column lists the peak performance (number of data elements processed in a single instruction) of a single compute unit if a SIMD (or matrix) instruction is being retired in each clock cycle. The third column lists the theoretical peak performance of the OAM module. The theoretical aggregated peak memory bandwidth of the GPU is 3.2 TB/sec (1.6 TB/sec per GCD).
Instinct MI250 OCP Open Accelerator Modules (OAM, OCP is short for Open Compute
Platform) and its two GCDs for different data types and execution units. The
middle column lists the peak performance (number of data elements processed in a
single instruction) of a single compute unit if a SIMD (or matrix) instruction
is being retired in each clock cycle. The third column lists the theoretical
peak performance of the OAM module. The theoretical aggregated peak memory
bandwidth of the GPU is 3.2 TB/sec (1.6 TB/sec per GCD).
![Dual-GCD architecture of the AMD Instinct MI250 accelerators](../../data/conceptual/gpu-arch/image002.png "Dual-GCD architecture of the AMD Instinct MI250 accelerators") ![Dual-GCD architecture of the AMD Instinct MI250 GPUs](../../data/conceptual/gpu-arch/image002.png "Dual-GCD architecture of the AMD Instinct MI250 GPUs")
The following image shows the block diagram of an OAM package that consists The following image shows the block diagram of an OAM package that consists
of two GCDs, each of which constitutes one GPU device in the system. The two of two GCDs, each of which constitutes one GPU device in the system. The two
@@ -105,18 +98,18 @@ between the two GCDs of an OAM, or a bidirectional peak transfer bandwidth of
## Node-level architecture ## Node-level architecture
The following image shows the node-level architecture of a system that is The following image shows the node-level architecture of a system that is
based on the AMD Instinct MI250 accelerator. The MI250 OAMs attach to the host based on the AMD Instinct MI250 GPU. The MI250 OAMs attach to the host
system via PCIe Gen 4 x16 links (yellow lines). Each GCD maintains its own PCIe system via PCIe Gen 4 x16 links (yellow lines). Each GCD maintains its own PCIe
x16 link to the host part of the system. Depending on the server platform, the x16 link to the host part of the system. Depending on the server platform, the
GCD can attach to the AMD EPYC processor directly or via an optional PCIe switch GCD can attach to the AMD EPYC processor directly or via an optional PCIe switch
. Note that some platforms may offer an x8 interface to the GCDs, which reduces . Note that some platforms may offer an x8 interface to the GCDs, which reduces
the available host-to-GPU bandwidth. the available host-to-GPU bandwidth.
![Block diagram of AMD Instinct MI250 Accelerators with 3rd Generation AMD EPYC processor](../../data/conceptual/gpu-arch/image003.png "Block diagram of AMD Instinct MI250 Accelerators with 3rd Generation AMD EPYC processor") ![Block diagram of AMD Instinct MI250 GPUs with 3rd Generation AMD EPYC processor](../../data/conceptual/gpu-arch/image003.png "Block diagram of AMD Instinct MI250 GPUs with 3rd Generation AMD EPYC processor")
The preceding image shows the node-level architecture of a system with AMD The preceding image shows the node-level architecture of a system with AMD
EPYC processors in a dual-socket configuration and four AMD Instinct MI250 EPYC processors in a dual-socket configuration and four AMD Instinct MI250
accelerators. The MI250 OAMs attach to the host processors system via PCIe Gen 4 GPUs. The MI250 OAMs attach to the host system via PCIe Gen 4
x16 links (yellow lines). Depending on the system design, a PCIe switch may x16 links (yellow lines). Depending on the system design, a PCIe switch may
exist to make more PCIe lanes available for additional components like network exist to make more PCIe lanes available for additional components like network
interfaces and/or storage devices. Each GCD maintains its own PCIe x16 link to interfaces and/or storage devices. Each GCD maintains its own PCIe x16 link to

View File

@@ -1,16 +1,16 @@
.. meta:: .. meta::
:description: MI300 and MI200 series performance counters and metrics :description: MI300 and MI200 Series performance counters and metrics
:keywords: MI300, MI200, performance counters, command processor counters :keywords: MI300, MI200, performance counters, command processor counters
*************************************************************************************************** ***************************************************************************************************
MI300 and MI200 series performance counters and metrics MI300 and MI200 Series performance counters and metrics
*************************************************************************************************** ***************************************************************************************************
This document lists and describes the hardware performance counters and derived metrics available This document lists and describes the hardware performance counters and derived metrics available
for the AMD Instinct™ MI300 and MI200 GPU. You can also access this information using the for the AMD Instinct™ MI300 and MI200 GPU. You can also access this information using the
:doc:`ROCprofiler-SDK <rocprofiler-sdk:how-to/using-rocprofv3>`. :doc:`ROCprofiler-SDK <rocprofiler-sdk:how-to/using-rocprofv3>`.
MI300 and MI200 series performance counters MI300 and MI200 Series performance counters
=============================================================== ===============================================================
Series performance counters include the following categories: Series performance counters include the following categories:
@@ -27,7 +27,7 @@ The following sections provide additional details for each category.
.. note:: .. note::
Preliminary validation of all MI300 and MI200 series performance counters is in progress. Those with Preliminary validation of all MI300 and MI200 Series performance counters is in progress. Those with
an asterisk (*) require further evaluation. an asterisk (*) require further evaluation.
.. _command-processor-counters: .. _command-processor-counters:
@@ -171,7 +171,7 @@ Instruction mix
"``SQ_INSTS_SMEM``", "Instr", "Number of scalar memory instructions issued" "``SQ_INSTS_SMEM``", "Instr", "Number of scalar memory instructions issued"
"``SQ_INSTS_SMEM_NORM``", "Instr", "Number of scalar memory instructions normalized to match ``smem_level`` issued" "``SQ_INSTS_SMEM_NORM``", "Instr", "Number of scalar memory instructions normalized to match ``smem_level`` issued"
"``SQ_INSTS_FLAT``", "Instr", "Number of flat instructions issued" "``SQ_INSTS_FLAT``", "Instr", "Number of flat instructions issued"
"``SQ_INSTS_FLAT_LDS_ONLY``", "Instr", "**MI200 series only** Number of FLAT instructions that read/write only from/to LDS issued. Works only if ``EARLY_TA_DONE`` is enabled." "``SQ_INSTS_FLAT_LDS_ONLY``", "Instr", "**MI200 Series only** Number of FLAT instructions that read/write only from/to LDS issued. Works only if ``EARLY_TA_DONE`` is enabled."
"``SQ_INSTS_LDS``", "Instr", "Number of LDS instructions issued **(MI200: includes flat; MI300: does not include flat)**" "``SQ_INSTS_LDS``", "Instr", "Number of LDS instructions issued **(MI200: includes flat; MI300: does not include flat)**"
"``SQ_INSTS_GDS``", "Instr", "Number of global data share instructions issued" "``SQ_INSTS_GDS``", "Instr", "Number of global data share instructions issued"
"``SQ_INSTS_EXP_GDS``", "Instr", "Number of EXP and global data share instructions excluding skipped export instructions issued" "``SQ_INSTS_EXP_GDS``", "Instr", "Number of EXP and global data share instructions excluding skipped export instructions issued"
@@ -396,9 +396,9 @@ Texture cache per pipe counters
"``TCP_UTCL1_TRANSLATION_MISS[n]``", "Req", "Number of unified translation cache (L1) translation misses", "0-15" "``TCP_UTCL1_TRANSLATION_MISS[n]``", "Req", "Number of unified translation cache (L1) translation misses", "0-15"
"``TCP_UTCL1_PERMISSION_MISS[n]``", "Req", "Number of unified translation cache (L1) permission misses", "0-15" "``TCP_UTCL1_PERMISSION_MISS[n]``", "Req", "Number of unified translation cache (L1) permission misses", "0-15"
"``TCP_TOTAL_CACHE_ACCESSES[n]``", "Req", "Number of vector L1d cache accesses including hits and misses", "0-15" "``TCP_TOTAL_CACHE_ACCESSES[n]``", "Req", "Number of vector L1d cache accesses including hits and misses", "0-15"
"``TCP_TCP_LATENCY[n]``", "Cycles", "**MI200 series only** Accumulated wave access latency to vL1D over all wavefronts", "0-15" "``TCP_TCP_LATENCY[n]``", "Cycles", "**MI200 Series only** Accumulated wave access latency to vL1D over all wavefronts", "0-15"
"``TCP_TCC_READ_REQ_LATENCY[n]``", "Cycles", "**MI200 series only** Total vL1D to L2 request latency over all wavefronts for reads and atomics with return", "0-15" "``TCP_TCC_READ_REQ_LATENCY[n]``", "Cycles", "**MI200 Series only** Total vL1D to L2 request latency over all wavefronts for reads and atomics with return", "0-15"
"``TCP_TCC_WRITE_REQ_LATENCY[n]``", "Cycles", "**MI200 series only** Total vL1D to L2 request latency over all wavefronts for writes and atomics without return", "0-15" "``TCP_TCC_WRITE_REQ_LATENCY[n]``", "Cycles", "**MI200 Series only** Total vL1D to L2 request latency over all wavefronts for writes and atomics without return", "0-15"
"``TCP_TCC_READ_REQ[n]``", "Req", "Number of read requests to L2 cache", "0-15" "``TCP_TCC_READ_REQ[n]``", "Req", "Number of read requests to L2 cache", "0-15"
"``TCP_TCC_WRITE_REQ[n]``", "Req", "Number of write requests to L2 cache", "0-15" "``TCP_TCC_WRITE_REQ[n]``", "Req", "Number of write requests to L2 cache", "0-15"
"``TCP_TCC_ATOMIC_WITH_RET_REQ[n]``", "Req", "Number of atomic requests to L2 cache with return", "0-15" "``TCP_TCC_ATOMIC_WITH_RET_REQ[n]``", "Req", "Number of atomic requests to L2 cache with return", "0-15"
@@ -560,7 +560,7 @@ Note the following:
``TCC_TAG_STALL[n]``, probes can stall the pipeline at a variety of places. There is no single point that ``TCC_TAG_STALL[n]``, probes can stall the pipeline at a variety of places. There is no single point that
can accurately measure the total stalls can accurately measure the total stalls
MI300 and MI200 series derived metrics list MI300 and MI200 Series derived metrics list
============================================================== ==============================================================
.. csv-table:: .. csv-table::

View File

@@ -1,21 +1,21 @@
--- ---
myst: myst:
html_meta: html_meta:
"description lang=en": "Learn about the AMD Instinct MI300 series architecture." "description lang=en": "Learn about the AMD Instinct MI300 Series architecture."
"keywords": "Instinct, MI300X, MI300A, microarchitecture, AMD, ROCm" "keywords": "Instinct, MI300X, MI300A, microarchitecture, AMD, ROCm"
--- ---
# AMD Instinct™ MI300 series microarchitecture # AMD Instinct™ MI300 Series microarchitecture
The AMD Instinct MI300 series accelerators are based on the AMD CDNA 3 The AMD Instinct MI300 Series GPUs are based on the AMD CDNA 3
architecture which was designed to deliver leadership performance for HPC, artificial intelligence (AI), and machine architecture which was designed to deliver leadership performance for HPC, artificial intelligence (AI), and machine
learning (ML) workloads. The AMD Instinct MI300 series accelerators are well-suited for extreme scalability and compute performance, running learning (ML) workloads. The AMD Instinct MI300 Series GPUs are well-suited for extreme scalability and compute performance, running
on everything from individual servers to the world's largest exascale supercomputers. on everything from individual servers to the world's largest exascale supercomputers.
With the MI300 series, AMD is introducing the Accelerator Complex Die (XCD), which contains the With the MI300 Series, AMD is introducing the Accelerator Complex Die (XCD), which contains the
GPU computational elements of the processor along with the lower levels of the cache hierarchy. GPU computational elements of the processor along with the lower levels of the cache hierarchy.
The following image depicts the structure of a single XCD in the AMD Instinct MI300 accelerator series. The following image depicts the structure of a single XCD in the AMD Instinct MI300 GPU Series.
```{figure} ../../data/shared/xcd-sys-arch.png ```{figure} ../../data/shared/xcd-sys-arch.png
--- ---
@@ -39,7 +39,7 @@ infrastructure) using the AMD Infinity Fabric™ technology as interconnect.
The Matrix Cores inside the CDNA 3 CUs have significant improvements, emphasizing AI and machine The Matrix Cores inside the CDNA 3 CUs have significant improvements, emphasizing AI and machine
learning, enhancing throughput of existing data types while adding support for new data types. learning, enhancing throughput of existing data types while adding support for new data types.
CDNA 2 Matrix Cores support FP16 and BF16, while offering INT8 for inference. Compared to MI250X CDNA 2 Matrix Cores support FP16 and BF16, while offering INT8 for inference. Compared to MI250X
accelerators, CDNA 3 Matrix Cores triple the performance for FP16 and BF16, while providing a GPUs, CDNA 3 Matrix Cores triple the performance for FP16 and BF16, while providing a
performance gain of 6.8 times for INT8. FP8 has a performance gain of 16 times compared to FP32, performance gain of 6.8 times for INT8. FP8 has a performance gain of 16 times compared to FP32,
while TF32 has a gain of 4 times compared to FP32. while TF32 has a gain of 4 times compared to FP32.
@@ -105,7 +105,7 @@ name: mi300-arch
alt: alt:
align: center align: center
--- ---
MI300 series system architecture showing MI300A (left) with 6 XCDs and 3 CCDs, while the MI300X (right) has 8 XCDs. MI300 Series system architecture showing MI300A (left) with 6 XCDs and 3 CCDs, while the MI300X (right) has 8 XCDs.
``` ```
## Node-level architecture ## Node-level architecture
@@ -116,11 +116,11 @@ name: mi300-node
align: center align: center
--- ---
MI300 series node-level architecture showing 8 fully interconnected MI300X OAM modules connected to (optional) PCIEe switches via retimers and HGX connectors. MI300 Series node-level architecture showing 8 fully interconnected MI300X OAM modules connected to (optional) PCIe switches via retimers and HGX connectors.
``` ```
The image above shows the node-level architecture of a system with AMD EPYC processors in a The image above shows the node-level architecture of a system with AMD EPYC processors in a
dual-socket configuration and eight AMD Instinct MI300X accelerators. The MI300X OAMs attach to the dual-socket configuration and eight AMD Instinct MI300X GPUs. The MI300X OAMs attach to the
host system via PCIe Gen 5 x16 links (yellow lines). The GPUs are using seven high-bandwidth, host system via PCIe Gen 5 x16 links (yellow lines). The GPUs are using seven high-bandwidth,
low-latency AMD Infinity Fabric™ links (red lines) to form a fully connected 8-GPU system. low-latency AMD Infinity Fabric™ links (red lines) to form a fully connected 8-GPU system.

View File

@@ -1,12 +1,12 @@
.. meta:: .. meta::
:description: MI355 series performance counters and metrics :description: MI355 Series performance counters and metrics
:keywords: MI355, MI355X, MI3XX :keywords: MI355, MI355X, MI3XX
*********************************** ***********************************
MI350 series performance counters MI350 Series performance counters
*********************************** ***********************************
This topic lists and describes the hardware performance counters and derived metrics available on the AMD Instinct MI350 and MI355 accelerators. These counters are available for profiling using `ROCprofiler-SDK <https://rocm.docs.amd.com/projects/rocprofiler-sdk/en/latest/index.html>`_ and `ROCm Compute Profiler <https://rocm.docs.amd.com/projects/rocprofiler-compute/en/latest/>`_. This topic lists and describes the hardware performance counters and derived metrics available on the AMD Instinct MI350 and MI355 GPUs. These counters are available for profiling using `ROCprofiler-SDK <https://rocm.docs.amd.com/projects/rocprofiler-sdk/en/latest/index.html>`_ and `ROCm Compute Profiler <https://rocm.docs.amd.com/projects/rocprofiler-compute/en/latest/>`_.
The following sections list the performance counters based on the IP blocks. The following sections list the performance counters based on the IP blocks.

View File

@@ -34,7 +34,7 @@ Runtime
```{code-block} shell ```{code-block} shell
:caption: Example to expose the first device and a device based on UUID. :caption: Example to expose the first device and a device based on UUID.
export ROCR_VISIBLE_DEVICES="0,GPU-DEADBEEFDEADBEEF" export ROCR_VISIBLE_DEVICES="0,GPU-4b2c1a9f-8d3e-6f7a-b5c9-2e4d8a1f6c3b"
``` ```
### `GPU_DEVICE_ORDINAL` ### `GPU_DEVICE_ORDINAL`

View File

@@ -8,6 +8,7 @@ import os
import shutil import shutil
import sys import sys
from pathlib import Path from pathlib import Path
from subprocess import run
gh_release_path = os.path.join("..", "RELEASE.md") gh_release_path = os.path.join("..", "RELEASE.md")
gh_changelog_path = os.path.join("..", "CHANGELOG.md") gh_changelog_path = os.path.join("..", "CHANGELOG.md")
@@ -80,24 +81,27 @@ latex_elements = {
} }
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "rocm.docs.amd.com") html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "rocm.docs.amd.com")
html_context = {} html_context = {"docs_header_version": "7.1.1"}
if os.environ.get("READTHEDOCS", "") == "True": if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True html_context["READTHEDOCS"] = True
# Check if the branch is a docs/ branch
official_branch = run(["git", "rev-parse", "--abbrev-ref", "HEAD"], capture_output=True, text=True).stdout.find("docs/")
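# str.find() returns 0 when the branch name starts with "docs/" and -1 when it does not;
# the resulting index is later exposed to the theme via html_context["official_branch"].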
# configurations for PDF output by Read the Docs # configurations for PDF output by Read the Docs
project = "ROCm Documentation" project = "ROCm Documentation"
project_path = os.path.abspath(".").replace("\\", "/") project_path = os.path.abspath(".").replace("\\", "/")
author = "Advanced Micro Devices, Inc." author = "Advanced Micro Devices, Inc."
copyright = "Copyright (c) 2025 Advanced Micro Devices, Inc. All rights reserved." copyright = "Copyright (c) 2025 Advanced Micro Devices, Inc. All rights reserved."
version = "7.0.1" version = "7.1.1"
release = "7.0.1" release = "7.1.1"
setting_all_article_info = True setting_all_article_info = True
all_article_info_os = ["linux", "windows"] all_article_info_os = ["linux", "windows"]
all_article_info_author = "" all_article_info_author = ""
# pages with specific settings # pages with specific settings
article_pages = [ article_pages = [
{"file": "about/release-notes", "os": ["linux"], "date": "2025-09-17"}, {"file": "about/release-notes", "os": ["linux"], "date": "2025-11-26"},
{"file": "release/changelog", "os": ["linux"],}, {"file": "release/changelog", "os": ["linux"],},
{"file": "compatibility/compatibility-matrix", "os": ["linux"]}, {"file": "compatibility/compatibility-matrix", "os": ["linux"]},
{"file": "compatibility/ml-compatibility/pytorch-compatibility", "os": ["linux"]}, {"file": "compatibility/ml-compatibility/pytorch-compatibility", "os": ["linux"]},
@@ -107,7 +111,6 @@ article_pages = [
{"file": "compatibility/ml-compatibility/stanford-megatron-lm-compatibility", "os": ["linux"]}, {"file": "compatibility/ml-compatibility/stanford-megatron-lm-compatibility", "os": ["linux"]},
{"file": "compatibility/ml-compatibility/dgl-compatibility", "os": ["linux"]}, {"file": "compatibility/ml-compatibility/dgl-compatibility", "os": ["linux"]},
{"file": "compatibility/ml-compatibility/megablocks-compatibility", "os": ["linux"]}, {"file": "compatibility/ml-compatibility/megablocks-compatibility", "os": ["linux"]},
{"file": "compatibility/ml-compatibility/taichi-compatibility", "os": ["linux"]},
{"file": "compatibility/ml-compatibility/ray-compatibility", "os": ["linux"]}, {"file": "compatibility/ml-compatibility/ray-compatibility", "os": ["linux"]},
{"file": "compatibility/ml-compatibility/llama-cpp-compatibility", "os": ["linux"]}, {"file": "compatibility/ml-compatibility/llama-cpp-compatibility", "os": ["linux"]},
{"file": "compatibility/ml-compatibility/flashinfer-compatibility", "os": ["linux"]}, {"file": "compatibility/ml-compatibility/flashinfer-compatibility", "os": ["linux"]},
@@ -132,9 +135,15 @@ article_pages = [
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/megatron-lm-v25.5", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/megatron-lm-v25.5", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/megatron-lm-v25.6", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/megatron-lm-v25.6", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/megatron-lm-v25.7", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/megatron-lm-v25.7", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/megatron-lm-v25.8", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/megatron-lm-v25.9", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/megatron-lm-v25.10", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/megatron-lm-primus-migration-guide", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/megatron-lm-primus-migration-guide", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/primus-megatron-v25.7", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/primus-megatron", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/primus-megatron", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/primus-megatron-v25.7", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/primus-megatron-v25.8", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/primus-megatron-v25.9", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/primus-megatron-v25.10", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/pytorch-training", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/pytorch-training", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-history", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-history", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.3", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.3", "os": ["linux"]},
@@ -142,13 +151,19 @@ article_pages = [
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.5", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.5", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.6", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.6", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.7", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.7", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.8", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.9", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.10", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/primus-pytorch", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/primus-pytorch", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/pytorch-training", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/primus-pytorch-v25.8", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/primus-pytorch-v25.9", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/primus-pytorch-v25.10", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/jax-maxtext", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/jax-maxtext", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/jax-maxtext-history", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/jax-maxtext-history", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/jax-maxtext-v25.4", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/jax-maxtext-v25.4", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/jax-maxtext-v25.5", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/previous-versions/jax-maxtext-v25.5", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/benchmark-docker/mpt-llm-foundry", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/training/benchmark-docker/mpt-llm-foundry", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/xdit-diffusion-inference", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/fine-tuning/index", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/fine-tuning/index", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/fine-tuning/overview", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/fine-tuning/overview", "os": ["linux"]},
@@ -173,8 +188,16 @@ article_pages = [
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/vllm-0.9.1-20250702", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/vllm-0.9.1-20250702", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/vllm-0.9.1-20250715", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/vllm-0.9.1-20250715", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/vllm-0.10.0-20250812", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/vllm-0.10.0-20250812", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/vllm-0.10.1-20250909", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/vllm-0.10.2-20251006", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/vllm-0.11.1-20251103", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/sglang-history", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/sglang-history", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/pytorch-inference", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/inference/benchmark-docker/pytorch-inference", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/xdit-diffusion-inference", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/xdit-25.10", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/xdit-25.11", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/xdit-25.12", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/xdit-25.13", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/deploy-your-model", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/inference/deploy-your-model", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference-optimization/index", "os": ["linux"]}, {"file": "how-to/rocm-for-ai/inference-optimization/index", "os": ["linux"]},
@@ -202,7 +225,7 @@ external_toc_path = "./sphinx/_toc.yml"
# Add the _extensions directory to Python's search path # Add the _extensions directory to Python's search path
sys.path.append(str(Path(__file__).parent / 'extension')) sys.path.append(str(Path(__file__).parent / 'extension'))
extensions = ["rocm_docs", "sphinx_reredirects", "sphinx_sitemap", "sphinxcontrib.datatemplates", "version-ref", "csv-to-list-table"] extensions = ["rocm_docs", "sphinx_reredirects", "sphinx_sitemap", "sphinxcontrib.datatemplates", "remote-content", "version-ref", "csv-to-list-table"]
compatibility_matrix_file = str(Path(__file__).parent / 'compatibility/compatibility-matrix-historical-6.0.csv') compatibility_matrix_file = str(Path(__file__).parent / 'compatibility/compatibility-matrix-historical-6.0.csv')
@@ -212,10 +235,14 @@ external_projects_current_project = "rocm"
# external_projects_remote_repository = "" # external_projects_remote_repository = ""
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "https://rocm-stg.amd.com/") html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "https://rocm-stg.amd.com/")
html_context = {} html_context = {"docs_header_version": "7.1.0"}
if os.environ.get("READTHEDOCS", "") == "True": if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True html_context["READTHEDOCS"] = True
html_context["official_branch"] = official_branch
html_context["version"] = version
html_context["release"] = release
html_theme = "rocm_docs_theme" html_theme = "rocm_docs_theme"
html_theme_options = {"flavor": "rocm-docs-home"} html_theme_options = {"flavor": "rocm-docs-home"}
@@ -234,10 +261,13 @@ suppress_warnings = ["autosectionlabel.*"]
html_context = { html_context = {
"project_path" : {project_path}, "project_path" : {project_path},
"gpu_type" : [('AMD Instinct accelerators', 'intrinsic'), ('AMD gfx families', 'gfx'), ('NVIDIA families', 'nvidia') ], "gpu_type" : [('AMD Instinct GPUs', 'intrinsic'), ('AMD gfx families', 'gfx'), ('NVIDIA families', 'nvidia') ],
"atomics_type" : [('HW atomics', 'hw-atomics'), ('CAS emulation', 'cas-atomics')], "atomics_type" : [('HW atomics', 'hw-atomics'), ('CAS emulation', 'cas-atomics')],
"pcie_type" : [('No PCIe atomics', 'nopcie'), ('PCIe atomics', 'pcie')], "pcie_type" : [('No PCIe atomics', 'nopcie'), ('PCIe atomics', 'pcie')],
"memory_type" : [('Device DRAM', 'device-dram'), ('Migratable Host DRAM', 'migratable-host-dram'), ('Pinned Host DRAM', 'pinned-host-dram')], "memory_type" : [('Device DRAM', 'device-dram'), ('Migratable Host DRAM', 'migratable-host-dram'), ('Pinned Host DRAM', 'pinned-host-dram')],
"granularity_type" : [('Coarse-grained', 'coarse-grained'), ('Fine-grained', 'fine-grained')], "granularity_type" : [('Coarse-grained', 'coarse-grained'), ('Fine-grained', 'fine-grained')],
"scope_type" : [('Device', 'device'), ('System', 'system')] "scope_type" : [('Device', 'device'), ('System', 'system')]
} }
# Disable figure and table numbering
numfig = False

View File

@@ -0,0 +1,188 @@
dockers:
- pull_tag: rocm/vllm:rocm6.4.1_vllm_0.10.1_20250909
docker_hub_url: https://hub.docker.com/layers/rocm/vllm/rocm6.4.1_vllm_0.10.1_20250909/images/sha256-1113268572e26d59b205792047bea0e61e018e79aeadceba118b7bf23cb3715c
components:
ROCm: 6.4.1
vLLM: 0.10.1 (0.10.1rc2.dev409+g0b6bf6691.rocm641)
PyTorch: 2.7.0+gitf717b2a
hipBLASLt: 0.15
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.1 8B
mad_tag: pyt_vllm_llama-3.1-8b
model_repo: meta-llama/Llama-3.1-8B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_seq_len_to_capture: 131072
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 70B
mad_tag: pyt_vllm_llama-3.1-70b
model_repo: meta-llama/Llama-3.1-70B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_seq_len_to_capture: 131072
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 405B
mad_tag: pyt_vllm_llama-3.1-405b
model_repo: meta-llama/Llama-3.1-405B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_seq_len_to_capture: 131072
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 2 70B
mad_tag: pyt_vllm_llama-2-70b
model_repo: meta-llama/Llama-2-70b-chat-hf
url: https://huggingface.co/meta-llama/Llama-2-70b-chat-hf
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_seq_len_to_capture: 4096
max_num_batched_tokens: 4096
max_model_len: 4096
- model: Llama 3.1 8B FP8
mad_tag: pyt_vllm_llama-3.1-8b_fp8
model_repo: amd/Llama-3.1-8B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.1-8B-Instruct-FP8-KV
precision: float8
config:
tp: 1
dtype: auto
kv_cache_dtype: fp8
max_seq_len_to_capture: 131072
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 70B FP8
mad_tag: pyt_vllm_llama-3.1-70b_fp8
model_repo: amd/Llama-3.1-70B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.1-70B-Instruct-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_seq_len_to_capture: 131072
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 405B FP8
mad_tag: pyt_vllm_llama-3.1-405b_fp8
model_repo: amd/Llama-3.1-405B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.1-405B-Instruct-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_seq_len_to_capture: 131072
max_num_batched_tokens: 131072
max_model_len: 8192
- group: Mistral AI
tag: mistral
models:
- model: Mixtral MoE 8x7B
mad_tag: pyt_vllm_mixtral-8x7b
model_repo: mistralai/Mixtral-8x7B-Instruct-v0.1
url: https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_seq_len_to_capture: 32768
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Mixtral MoE 8x22B
mad_tag: pyt_vllm_mixtral-8x22b
model_repo: mistralai/Mixtral-8x22B-Instruct-v0.1
url: https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_seq_len_to_capture: 65536
max_num_batched_tokens: 65536
max_model_len: 8192
- model: Mixtral MoE 8x7B FP8
mad_tag: pyt_vllm_mixtral-8x7b_fp8
model_repo: amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV
url: https://huggingface.co/amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_seq_len_to_capture: 32768
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Mixtral MoE 8x22B FP8
mad_tag: pyt_vllm_mixtral-8x22b_fp8
model_repo: amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV
url: https://huggingface.co/amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_seq_len_to_capture: 65536
max_num_batched_tokens: 65536
max_model_len: 8192
- group: Qwen
tag: qwen
models:
- model: QwQ-32B
mad_tag: pyt_vllm_qwq-32b
model_repo: Qwen/QwQ-32B
url: https://huggingface.co/Qwen/QwQ-32B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_seq_len_to_capture: 131072
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Qwen3 30B A3B
mad_tag: pyt_vllm_qwen3-30b-a3b
model_repo: Qwen/Qwen3-30B-A3B
url: https://huggingface.co/Qwen/Qwen3-30B-A3B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_seq_len_to_capture: 32768
max_num_batched_tokens: 32768
max_model_len: 8192
- group: Microsoft Phi
tag: phi
models:
- model: Phi-4
mad_tag: pyt_vllm_phi-4
model_repo: microsoft/phi-4
url: https://huggingface.co/microsoft/phi-4
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_seq_len_to_capture: 16384
max_num_batched_tokens: 16384
max_model_len: 8192


@@ -0,0 +1,316 @@
dockers:
- pull_tag: rocm/vllm:rocm7.0.0_vllm_0.10.2_20251006
docker_hub_url: https://hub.docker.com/layers/rocm/vllm/rocm7.0.0_vllm_0.10.2_20251006/images/sha256-94fd001964e1cf55c3224a445b1fb5be31a7dac302315255db8422d813edd7f5
components:
ROCm: 7.0.0
vLLM: 0.10.2 (0.11.0rc2.dev160+g790d22168.rocm700)
PyTorch: 2.9.0a0+git1c57644
hipBLASLt: 1.0.0
dockerfile:
commit: 790d22168820507f3105fef29596549378cfe399
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 2 70B
mad_tag: pyt_vllm_llama-2-70b
model_repo: meta-llama/Llama-2-70b-chat-hf
url: https://huggingface.co/meta-llama/Llama-2-70b-chat-hf
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 4096
max_model_len: 4096
- model: Llama 3.1 8B
mad_tag: pyt_vllm_llama-3.1-8b
model_repo: meta-llama/Llama-3.1-8B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 8B FP8
mad_tag: pyt_vllm_llama-3.1-8b_fp8
model_repo: amd/Llama-3.1-8B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.1-8B-Instruct-FP8-KV
precision: float8
config:
tp: 1
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 405B
mad_tag: pyt_vllm_llama-3.1-405b
model_repo: meta-llama/Llama-3.1-405B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 405B FP8
mad_tag: pyt_vllm_llama-3.1-405b_fp8
model_repo: amd/Llama-3.1-405B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.1-405B-Instruct-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 405B MXFP4
mad_tag: pyt_vllm_llama-3.1-405b_fp4
model_repo: amd/Llama-3.1-405B-Instruct-MXFP4-Preview
url: https://huggingface.co/amd/Llama-3.1-405B-Instruct-MXFP4-Preview
precision: float4
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.3 70B
mad_tag: pyt_vllm_llama-3.3-70b
model_repo: meta-llama/Llama-3.3-70B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.3 70B FP8
mad_tag: pyt_vllm_llama-3.3-70b_fp8
model_repo: amd/Llama-3.3-70B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.3-70B-Instruct-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.3 70B MXFP4
mad_tag: pyt_vllm_llama-3.3-70b_fp4
model_repo: amd/Llama-3.3-70B-Instruct-MXFP4-Preview
url: https://huggingface.co/amd/Llama-3.3-70B-Instruct-MXFP4-Preview
precision: float4
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 4 Scout 17Bx16E
mad_tag: pyt_vllm_llama-4-scout-17b-16e
model_repo: meta-llama/Llama-4-Scout-17B-16E-Instruct
url: https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Llama 4 Maverick 17Bx128E
mad_tag: pyt_vllm_llama-4-maverick-17b-128e
model_repo: meta-llama/Llama-4-Maverick-17B-128E-Instruct
url: https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Llama 4 Maverick 17Bx128E FP8
mad_tag: pyt_vllm_llama-4-maverick-17b-128e_fp8
model_repo: meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
url: https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek R1 0528 FP8
mad_tag: pyt_vllm_deepseek-r1
model_repo: deepseek-ai/DeepSeek-R1-0528
url: https://huggingface.co/deepseek-ai/DeepSeek-R1-0528
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_seqs: 1024
max_num_batched_tokens: 131072
max_model_len: 8192
- group: OpenAI GPT OSS
tag: gpt-oss
models:
- model: GPT OSS 20B
mad_tag: pyt_vllm_gpt-oss-20b
model_repo: openai/gpt-oss-20b
url: https://huggingface.co/openai/gpt-oss-20b
precision: bfloat16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 8192
max_model_len: 8192
- model: GPT OSS 120B
mad_tag: pyt_vllm_gpt-oss-120b
model_repo: openai/gpt-oss-120b
url: https://huggingface.co/openai/gpt-oss-120b
precision: bfloat16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 8192
max_model_len: 8192
- group: Mistral AI
tag: mistral
models:
- model: Mixtral MoE 8x7B
mad_tag: pyt_vllm_mixtral-8x7b
model_repo: mistralai/Mixtral-8x7B-Instruct-v0.1
url: https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Mixtral MoE 8x7B FP8
mad_tag: pyt_vllm_mixtral-8x7b_fp8
model_repo: amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV
url: https://huggingface.co/amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Mixtral MoE 8x22B
mad_tag: pyt_vllm_mixtral-8x22b
model_repo: mistralai/Mixtral-8x22B-Instruct-v0.1
url: https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 65536
max_model_len: 8192
- model: Mixtral MoE 8x22B FP8
mad_tag: pyt_vllm_mixtral-8x22b_fp8
model_repo: amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV
url: https://huggingface.co/amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 65536
max_model_len: 8192
- group: Qwen
tag: qwen
models:
- model: Qwen3 8B
mad_tag: pyt_vllm_qwen3-8b
model_repo: Qwen/Qwen3-8B
url: https://huggingface.co/Qwen/Qwen3-8B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 32B
mad_tag: pyt_vllm_qwen3-32b
model_repo: Qwen/Qwen3-32b
url: https://huggingface.co/Qwen/Qwen3-32B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 30B A3B
mad_tag: pyt_vllm_qwen3-30b-a3b
model_repo: Qwen/Qwen3-30B-A3B
url: https://huggingface.co/Qwen/Qwen3-30B-A3B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 30B A3B FP8
mad_tag: pyt_vllm_qwen3-30b-a3b_fp8
model_repo: Qwen/Qwen3-30B-A3B-FP8
url: https://huggingface.co/Qwen/Qwen3-30B-A3B-FP8
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 235B A22B
mad_tag: pyt_vllm_qwen3-235b-a22b
model_repo: Qwen/Qwen3-235B-A22B
url: https://huggingface.co/Qwen/Qwen3-235B-A22B
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 235B A22B FP8
mad_tag: pyt_vllm_qwen3-235b-a22b_fp8
model_repo: Qwen/Qwen3-235B-A22B-FP8
url: https://huggingface.co/Qwen/Qwen3-235B-A22B-FP8
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 40960
max_model_len: 8192
- group: Microsoft Phi
tag: phi
models:
- model: Phi-4
mad_tag: pyt_vllm_phi-4
model_repo: microsoft/phi-4
url: https://huggingface.co/microsoft/phi-4
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 16384
max_model_len: 8192
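The `config` keys in each entry above mirror vLLM engine arguments: `tp` corresponds to tensor parallelism and the remaining keys keep their vLLM names. A minimal sketch of how an entry could be turned into a serve command, assuming the file is saved locally as `vllm_models.yaml`; the file name, the chosen `mad_tag`, and the flag mapping are illustrative, not part of the release data:

```python
# Illustrative sketch: turn one model entry from this data file into a
# "vllm serve" command. Assumes the YAML above is saved as vllm_models.yaml.
import yaml

with open("vllm_models.yaml") as f:
    data = yaml.safe_load(f)

docker = data["dockers"][0]
# model_groups may sit at the top level or under the docker entry, depending on the file.
groups = data.get("model_groups") or docker.get("model_groups", [])
models = {m["mad_tag"]: m for group in groups for m in group["models"]}
entry = models["pyt_vllm_llama-3.1-8b"]  # any mad_tag from the tables above

flag_names = {"tp": "tensor-parallel-size"}  # remaining keys already match vLLM flag names
flags = " ".join(
    f"--{flag_names.get(key, key.replace('_', '-'))} {value}"
    for key, value in entry["config"].items()
)
print(f"docker pull {docker['pull_tag']}")
print(f"vllm serve {entry['model_repo']} {flags}")
```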


@@ -0,0 +1,316 @@
dockers:
- pull_tag: rocm/vllm:rocm7.0.0_vllm_0.11.1_20251103
docker_hub_url: https://hub.docker.com/layers/rocm/vllm/rocm7.0.0_vllm_0.11.1_20251103/images/sha256-8d60429043d4d00958da46039a1de0d9b82df814d45da482497eef26a6076506
components:
ROCm: 7.0.0
vLLM: 0.11.1 (0.11.1rc2.dev141+g38f225c2a.rocm700)
PyTorch: 2.9.0a0+git1c57644
hipBLASLt: 1.0.0
dockerfile:
commit: 38f225c2abeadc04c2cc398814c2f53ea02c3c72
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 2 70B
mad_tag: pyt_vllm_llama-2-70b
model_repo: meta-llama/Llama-2-70b-chat-hf
url: https://huggingface.co/meta-llama/Llama-2-70b-chat-hf
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 4096
max_model_len: 4096
- model: Llama 3.1 8B
mad_tag: pyt_vllm_llama-3.1-8b
model_repo: meta-llama/Llama-3.1-8B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 8B FP8
mad_tag: pyt_vllm_llama-3.1-8b_fp8
model_repo: amd/Llama-3.1-8B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.1-8B-Instruct-FP8-KV
precision: float8
config:
tp: 1
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 405B
mad_tag: pyt_vllm_llama-3.1-405b
model_repo: meta-llama/Llama-3.1-405B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 405B FP8
mad_tag: pyt_vllm_llama-3.1-405b_fp8
model_repo: amd/Llama-3.1-405B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.1-405B-Instruct-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 405B MXFP4
mad_tag: pyt_vllm_llama-3.1-405b_fp4
model_repo: amd/Llama-3.1-405B-Instruct-MXFP4-Preview
url: https://huggingface.co/amd/Llama-3.1-405B-Instruct-MXFP4-Preview
precision: float4
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.3 70B
mad_tag: pyt_vllm_llama-3.3-70b
model_repo: meta-llama/Llama-3.3-70B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.3 70B FP8
mad_tag: pyt_vllm_llama-3.3-70b_fp8
model_repo: amd/Llama-3.3-70B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.3-70B-Instruct-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.3 70B MXFP4
mad_tag: pyt_vllm_llama-3.3-70b_fp4
model_repo: amd/Llama-3.3-70B-Instruct-MXFP4-Preview
url: https://huggingface.co/amd/Llama-3.3-70B-Instruct-MXFP4-Preview
precision: float4
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 4 Scout 17Bx16E
mad_tag: pyt_vllm_llama-4-scout-17b-16e
model_repo: meta-llama/Llama-4-Scout-17B-16E-Instruct
url: https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Llama 4 Maverick 17Bx128E
mad_tag: pyt_vllm_llama-4-maverick-17b-128e
model_repo: meta-llama/Llama-4-Maverick-17B-128E-Instruct
url: https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Llama 4 Maverick 17Bx128E FP8
mad_tag: pyt_vllm_llama-4-maverick-17b-128e_fp8
model_repo: meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
url: https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek R1 0528 FP8
mad_tag: pyt_vllm_deepseek-r1
model_repo: deepseek-ai/DeepSeek-R1-0528
url: https://huggingface.co/deepseek-ai/DeepSeek-R1-0528
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_seqs: 1024
max_num_batched_tokens: 131072
max_model_len: 8192
- group: OpenAI GPT OSS
tag: gpt-oss
models:
- model: GPT OSS 20B
mad_tag: pyt_vllm_gpt-oss-20b
model_repo: openai/gpt-oss-20b
url: https://huggingface.co/openai/gpt-oss-20b
precision: bfloat16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 8192
max_model_len: 8192
- model: GPT OSS 120B
mad_tag: pyt_vllm_gpt-oss-120b
model_repo: openai/gpt-oss-120b
url: https://huggingface.co/openai/gpt-oss-120b
precision: bfloat16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 8192
max_model_len: 8192
- group: Mistral AI
tag: mistral
models:
- model: Mixtral MoE 8x7B
mad_tag: pyt_vllm_mixtral-8x7b
model_repo: mistralai/Mixtral-8x7B-Instruct-v0.1
url: https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Mixtral MoE 8x7B FP8
mad_tag: pyt_vllm_mixtral-8x7b_fp8
model_repo: amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV
url: https://huggingface.co/amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Mixtral MoE 8x22B
mad_tag: pyt_vllm_mixtral-8x22b
model_repo: mistralai/Mixtral-8x22B-Instruct-v0.1
url: https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 65536
max_model_len: 8192
- model: Mixtral MoE 8x22B FP8
mad_tag: pyt_vllm_mixtral-8x22b_fp8
model_repo: amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV
url: https://huggingface.co/amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 65536
max_model_len: 8192
- group: Qwen
tag: qwen
models:
- model: Qwen3 8B
mad_tag: pyt_vllm_qwen3-8b
model_repo: Qwen/Qwen3-8B
url: https://huggingface.co/Qwen/Qwen3-8B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 32B
mad_tag: pyt_vllm_qwen3-32b
model_repo: Qwen/Qwen3-32b
url: https://huggingface.co/Qwen/Qwen3-32B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 30B A3B
mad_tag: pyt_vllm_qwen3-30b-a3b
model_repo: Qwen/Qwen3-30B-A3B
url: https://huggingface.co/Qwen/Qwen3-30B-A3B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 30B A3B FP8
mad_tag: pyt_vllm_qwen3-30b-a3b_fp8
model_repo: Qwen/Qwen3-30B-A3B-FP8
url: https://huggingface.co/Qwen/Qwen3-30B-A3B-FP8
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 235B A22B
mad_tag: pyt_vllm_qwen3-235b-a22b
model_repo: Qwen/Qwen3-235B-A22B
url: https://huggingface.co/Qwen/Qwen3-235B-A22B
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 235B A22B FP8
mad_tag: pyt_vllm_qwen3-235b-a22b_fp8
model_repo: Qwen/Qwen3-235B-A22B-FP8
url: https://huggingface.co/Qwen/Qwen3-235B-A22B-FP8
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 40960
max_model_len: 8192
- group: Microsoft Phi
tag: phi
models:
- model: Phi-4
mad_tag: pyt_vllm_phi-4
model_repo: microsoft/phi-4
url: https://huggingface.co/microsoft/phi-4
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 16384
max_model_len: 8192


@@ -0,0 +1,55 @@
xdit_diffusion_inference:
docker:
pull_tag: rocm/pytorch-xdit:v25.10
docker_hub_url: https://hub.docker.com/layers/rocm/pytorch-xdit/v25.10/images/sha256-d79715ff18a9470e3f907cec8a9654d6b783c63370b091446acffc0de4d7070e
ROCm: 7.9.0
components:
TheRock: 7afbe45
rccl: 9b04b2a
composable_kernel: b7a806f
rocm-libraries: f104555
rocm-systems: 25922d0
torch: 2.10.0a0+gite9c9017
torchvision: 0.22.0a0+966da7e
triton: 3.5.0+git52e49c12
accelerate: 1.11.0.dev0
aiter: 0.1.5.post4.dev20+ga25e55e79
diffusers: 0.36.0.dev0
xfuser: 0.4.4
yunchang: 0.6.3.post1
model_groups:
- group: Hunyuan Video
tag: hunyuan
models:
- model: Hunyuan Video
model_name: hunyuanvideo
model_repo: tencent/HunyuanVideo
revision: refs/pr/18
url: https://huggingface.co/tencent/HunyuanVideo
github: https://github.com/Tencent-Hunyuan/HunyuanVideo
mad_tag: pyt_xdit_hunyuanvideo
- group: Wan-AI
tag: wan
models:
- model: Wan2.1
model_name: wan2_1-i2v-14b-720p
model_repo: Wan-AI/Wan2.1-I2V-14B-720P
url: https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P
github: https://github.com/Wan-Video/Wan2.1
mad_tag: pyt_xdit_wan_2_1
- model: Wan2.2
model_name: wan2_2-i2v-a14b
model_repo: Wan-AI/Wan2.2-I2V-A14B
url: https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B
github: https://github.com/Wan-Video/Wan2.2
mad_tag: pyt_xdit_wan_2_2
- group: FLUX
tag: flux
models:
- model: FLUX.1
model_name: FLUX.1-dev
model_repo: black-forest-labs/FLUX.1-dev
url: https://huggingface.co/black-forest-labs/FLUX.1-dev
github: https://github.com/black-forest-labs/flux
mad_tag: pyt_xdit_flux
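A minimal sketch of how the optional `revision` key above can be honored when fetching weights with `huggingface_hub`, assuming a local copy of this data file named `xdit_models.yaml`; the file name and the choice of the HunyuanVideo entry are illustrative:

```python
# Illustrative sketch: download one model listed above with huggingface_hub,
# honoring the optional "revision" key (HunyuanVideo is pinned to refs/pr/18).
import yaml
from huggingface_hub import snapshot_download

with open("xdit_models.yaml") as f:  # assumed local copy of this data file
    data = yaml.safe_load(f)["xdit_diffusion_inference"]

for group in data["model_groups"]:
    for model in group["models"]:
        if model["model_name"] == "hunyuanvideo":
            snapshot_download(
                repo_id=model["model_repo"],
                revision=model.get("revision"),  # None falls back to the default branch
            )
```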


@@ -0,0 +1,109 @@
xdit_diffusion_inference:
docker:
- version: v25-11
pull_tag: rocm/pytorch-xdit:v25.11
docker_hub_url: https://hub.docker.com/layers/rocm/pytorch-xdit/v25.11/images/sha256-c9fa659439bb024f854b4d5eea598347251b02c341c55f66c98110832bde4216
ROCm: 7.10.0
supported_models:
- group: Hunyuan Video
models:
- Hunyuan Video
- group: Wan-AI
models:
- Wan2.1
- Wan2.2
- group: FLUX
models:
- FLUX.1
whats_new:
- "Minor bug fixes and clarifications to READMEs."
- "Bumps TheRock, AITER, Diffusers, xDiT versions."
- "Changes Aiter rounding mode for faster gfx942 FWD Attention."
components:
TheRock: 3e3f834
rccl: d23d18f
composable_kernel: 2570462
rocm-libraries: 0588f07
rocm-systems: 473025a
torch: 73adac
torchvision: f5c6c2e
triton: 7416ffc
accelerate: 34c1779
aiter: de14bec
diffusers: 40528e9
xfuser: 83978b5
yunchang: 2c9b712
- version: v25-10
pull_tag: rocm/pytorch-xdit:v25.10
docker_hub_url: https://hub.docker.com/r/rocm/pytorch-xdit
ROCm: 7.9.0
supported_models:
- group: Hunyuan Video
models:
- Hunyuan Video
- group: Wan-AI
models:
- Wan2.1
- Wan2.2
- group: FLUX
models:
- FLUX.1
whats_new:
- "First official xDiT Docker Release for Diffusion Inference."
- "Supports gfx942 and gfx950 series (AMD Instinct™ MI300X, MI325X, MI350X, and MI355X)."
- "Support Wan 2.1, Wan 2.2, HunyuanVideo and Flux workloads."
components:
TheRock: 7afbe45
rccl: 9b04b2a
composable_kernel: b7a806f
rocm-libraries: f104555
rocm-systems: 25922d0
torch: 2.10.0a0+gite9c9017
torchvision: 0.22.0a0+966da7e
triton: 3.5.0+git52e49c12
accelerate: 1.11.0.dev0
aiter: 0.1.5.post4.dev20+ga25e55e79
diffusers: 0.36.0.dev0
xfuser: 0.4.4
yunchang: 0.6.3.post1
model_groups:
- group: Hunyuan Video
tag: hunyuan
models:
- model: Hunyuan Video
page_tag: hunyuan_tag
model_name: hunyuanvideo
model_repo: tencent/HunyuanVideo
revision: refs/pr/18
url: https://huggingface.co/tencent/HunyuanVideo
github: https://github.com/Tencent-Hunyuan/HunyuanVideo
mad_tag: pyt_xdit_hunyuanvideo
- group: Wan-AI
tag: wan
models:
- model: Wan2.1
page_tag: wan_21_tag
model_name: wan2_1-i2v-14b-720p
model_repo: Wan-AI/Wan2.1-I2V-14B-720P
url: https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P
github: https://github.com/Wan-Video/Wan2.1
mad_tag: pyt_xdit_wan_2_1
- model: Wan2.2
page_tag: wan_22_tag
model_name: wan2_2-i2v-a14b
model_repo: Wan-AI/Wan2.2-I2V-A14B
url: https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B
github: https://github.com/Wan-Video/Wan2.2
mad_tag: pyt_xdit_wan_2_2
- group: FLUX
tag: flux
models:
- model: FLUX.1
page_tag: flux_1_tag
model_name: FLUX.1-dev
model_repo: black-forest-labs/FLUX.1-dev
url: https://huggingface.co/black-forest-labs/FLUX.1-dev
github: https://github.com/black-forest-labs/flux
mad_tag: pyt_xdit_flux
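Because this history file keeps a `components` map per release, the version bumps called out in `whats_new` can be checked directly. A small sketch, assuming the file is saved locally as `xdit_history.yaml` (an illustrative name):

```python
# Illustrative sketch: list which pinned components changed between the two
# releases above. Assumes this history file is saved as xdit_history.yaml.
import yaml

with open("xdit_history.yaml") as f:
    releases = yaml.safe_load(f)["xdit_diffusion_inference"]["docker"]

by_version = {release["version"]: release["components"] for release in releases}
old, new = by_version["v25-10"], by_version["v25-11"]
for name in sorted(set(old) | set(new)):
    if old.get(name) != new.get(name):
        print(f"{name}: {old.get(name)} -> {new.get(name)}")
```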


@@ -0,0 +1,91 @@
docker:
pull_tag: rocm/pytorch-xdit:v25.12
docker_hub_url: https://hub.docker.com/layers/rocm/pytorch-xdit/v25.12/images/sha256-e06895132316bf3c393366b70a91eaab6755902dad0100e6e2b38310547d9256
ROCm: 7.10.0
whats_new:
- "Adds T2V and TI2V support for Wan models."
- "Adds support for SD-3.5 T2I model."
components:
TheRock:
version: 3e3f834
url: https://github.com/ROCm/TheRock
rccl:
version: d23d18f
url: https://github.com/ROCm/rccl
composable_kernel:
version: 2570462
url: https://github.com/ROCm/composable_kernel
rocm-libraries:
version: 0588f07
url: https://github.com/ROCm/rocm-libraries
rocm-systems:
version: 473025a
url: https://github.com/ROCm/rocm-systems
torch:
version: 73adac
url: https://github.com/pytorch/pytorch
torchvision:
version: f5c6c2e
url: https://github.com/pytorch/vision
triton:
version: 7416ffc
url: https://github.com/triton-lang/triton
accelerate:
version: 34c1779
url: https://github.com/huggingface/accelerate
aiter:
version: de14bec
url: https://github.com/ROCm/aiter
diffusers:
version: 40528e9
url: https://github.com/huggingface/diffusers
xfuser:
version: ccba9d5
url: https://github.com/xdit-project/xDiT
yunchang:
version: 2c9b712
url: https://github.com/feifeibear/long-context-attention
supported_models:
- group: Hunyuan Video
js_tag: hunyuan
models:
- model: Hunyuan Video
model_repo: tencent/HunyuanVideo
revision: refs/pr/18
url: https://huggingface.co/tencent/HunyuanVideo
github: https://github.com/Tencent-Hunyuan/HunyuanVideo
mad_tag: pyt_xdit_hunyuanvideo
js_tag: hunyuan_tag
- group: Wan-AI
js_tag: wan
models:
- model: Wan2.1
model_repo: Wan-AI/Wan2.1-I2V-14B-720P-Diffusers
url: https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P-Diffusers
github: https://github.com/Wan-Video/Wan2.1
mad_tag: pyt_xdit_wan_2_1
js_tag: wan_21_tag
- model: Wan2.2
model_repo: Wan-AI/Wan2.2-I2V-A14B-Diffusers
url: https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B-Diffusers
github: https://github.com/Wan-Video/Wan2.2
mad_tag: pyt_xdit_wan_2_2
js_tag: wan_22_tag
- group: FLUX
js_tag: flux
models:
- model: FLUX.1
model_repo: black-forest-labs/FLUX.1-dev
url: https://huggingface.co/black-forest-labs/FLUX.1-dev
github: https://github.com/black-forest-labs/flux
mad_tag: pyt_xdit_flux
js_tag: flux_1_tag
- group: Stable Diffusion
js_tag: stablediffusion
models:
- model: stable-diffusion-3.5-large
model_repo: stabilityai/stable-diffusion-3.5-large
url: https://huggingface.co/stabilityai/stable-diffusion-3.5-large
github: https://github.com/Stability-AI/sd3.5
mad_tag: pyt_xdit_sd_3_5
js_tag: stable_diffusion_3_5_large_tag


@@ -1,188 +1,316 @@
dockers:
- pull_tag: rocm/vllm:rocm7.0.0_vllm_0.11.2_20251210
docker_hub_url: https://hub.docker.com/layers/rocm/vllm/rocm7.0.0_vllm_0.11.2_20251210/images/sha256-e7f02dd2ce3824959658bc0391296f6158638e3ebce164f6c019c4eca8150ec7
components:
ROCm: 7.0.0
vLLM: 0.11.2 (0.11.2.dev673+g839868462.rocm700)
PyTorch: 2.9.0a0+git1c57644
hipBLASLt: 1.0.0
dockerfile:
commit: 8398684622109c806a35d660647060b0b9910663
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 2 70B
mad_tag: pyt_vllm_llama-2-70b
model_repo: meta-llama/Llama-2-70b-chat-hf
url: https://huggingface.co/meta-llama/Llama-2-70b-chat-hf
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 4096
max_model_len: 4096
- model: Llama 3.1 8B
mad_tag: pyt_vllm_llama-3.1-8b
model_repo: meta-llama/Llama-3.1-8B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 8B FP8
mad_tag: pyt_vllm_llama-3.1-8b_fp8
model_repo: amd/Llama-3.1-8B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.1-8B-Instruct-FP8-KV
precision: float8
config:
tp: 1
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 405B
mad_tag: pyt_vllm_llama-3.1-405b
model_repo: meta-llama/Llama-3.1-405B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 405B FP8
mad_tag: pyt_vllm_llama-3.1-405b_fp8
model_repo: amd/Llama-3.1-405B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.1-405B-Instruct-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.1 405B MXFP4
mad_tag: pyt_vllm_llama-3.1-405b_fp4
model_repo: amd/Llama-3.1-405B-Instruct-MXFP4-Preview
url: https://huggingface.co/amd/Llama-3.1-405B-Instruct-MXFP4-Preview
precision: float4
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.3 70B
mad_tag: pyt_vllm_llama-3.3-70b
model_repo: meta-llama/Llama-3.3-70B-Instruct
url: https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.3 70B FP8
mad_tag: pyt_vllm_llama-3.3-70b_fp8
model_repo: amd/Llama-3.3-70B-Instruct-FP8-KV
url: https://huggingface.co/amd/Llama-3.3-70B-Instruct-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 3.3 70B MXFP4
mad_tag: pyt_vllm_llama-3.3-70b_fp4
model_repo: amd/Llama-3.3-70B-Instruct-MXFP4-Preview
url: https://huggingface.co/amd/Llama-3.3-70B-Instruct-MXFP4-Preview
precision: float4
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- model: Llama 4 Scout 17Bx16E
mad_tag: pyt_vllm_llama-4-scout-17b-16e
model_repo: meta-llama/Llama-4-Scout-17B-16E-Instruct
url: https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Llama 4 Maverick 17Bx128E
mad_tag: pyt_vllm_llama-4-maverick-17b-128e
model_repo: meta-llama/Llama-4-Maverick-17B-128E-Instruct
url: https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Llama 4 Maverick 17Bx128E FP8
mad_tag: pyt_vllm_llama-4-maverick-17b-128e_fp8
model_repo: meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
url: https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 131072
max_model_len: 8192
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek R1 0528 FP8
mad_tag: pyt_vllm_deepseek-r1
model_repo: deepseek-ai/DeepSeek-R1-0528
url: https://huggingface.co/deepseek-ai/DeepSeek-R1-0528
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_seqs: 1024
max_num_batched_tokens: 131072
max_model_len: 8192
- group: OpenAI GPT OSS
tag: gpt-oss
models:
- model: GPT OSS 20B
mad_tag: pyt_vllm_gpt-oss-20b
model_repo: openai/gpt-oss-20b
url: https://huggingface.co/openai/gpt-oss-20b
precision: bfloat16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 8192
max_model_len: 8192
- model: GPT OSS 120B
mad_tag: pyt_vllm_gpt-oss-120b
model_repo: openai/gpt-oss-120b
url: https://huggingface.co/openai/gpt-oss-120b
precision: bfloat16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 8192
max_model_len: 8192
- group: Mistral AI
tag: mistral
models:
- model: Mixtral MoE 8x7B
mad_tag: pyt_vllm_mixtral-8x7b
model_repo: mistralai/Mixtral-8x7B-Instruct-v0.1
url: https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Mixtral MoE 8x7B FP8
mad_tag: pyt_vllm_mixtral-8x7b_fp8
model_repo: amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV
url: https://huggingface.co/amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 32768
max_model_len: 8192
- model: Mixtral MoE 8x22B
mad_tag: pyt_vllm_mixtral-8x22b
model_repo: mistralai/Mixtral-8x22B-Instruct-v0.1
url: https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 65536
max_model_len: 8192
- model: Mixtral MoE 8x22B FP8
mad_tag: pyt_vllm_mixtral-8x22b_fp8
model_repo: amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV
url: https://huggingface.co/amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 65536
max_model_len: 8192
- group: Qwen
tag: qwen
models:
- model: Qwen3 8B
mad_tag: pyt_vllm_qwen3-8b
model_repo: Qwen/Qwen3-8B
url: https://huggingface.co/Qwen/Qwen3-8B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 32B
mad_tag: pyt_vllm_qwen3-32b
model_repo: Qwen/Qwen3-32b
url: https://huggingface.co/Qwen/Qwen3-32B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 30B A3B
mad_tag: pyt_vllm_qwen3-30b-a3b
model_repo: Qwen/Qwen3-30B-A3B
url: https://huggingface.co/Qwen/Qwen3-30B-A3B
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 30B A3B FP8
mad_tag: pyt_vllm_qwen3-30b-a3b_fp8
model_repo: Qwen/Qwen3-30B-A3B-FP8
url: https://huggingface.co/Qwen/Qwen3-30B-A3B-FP8
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 235B A22B
mad_tag: pyt_vllm_qwen3-235b-a22b
model_repo: Qwen/Qwen3-235B-A22B
url: https://huggingface.co/Qwen/Qwen3-235B-A22B
precision: float16
config:
tp: 8
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 40960
max_model_len: 8192
- model: Qwen3 235B A22B FP8
mad_tag: pyt_vllm_qwen3-235b-a22b_fp8
model_repo: Qwen/Qwen3-235B-A22B-FP8
url: https://huggingface.co/Qwen/Qwen3-235B-A22B-FP8
precision: float8
config:
tp: 8
dtype: auto
kv_cache_dtype: fp8
max_num_batched_tokens: 40960
max_model_len: 8192
- group: Microsoft Phi
tag: phi
models:
- model: Phi-4
mad_tag: pyt_vllm_phi-4
model_repo: microsoft/phi-4
url: https://huggingface.co/microsoft/phi-4
precision: float16
config:
tp: 1
dtype: auto
kv_cache_dtype: auto
max_num_batched_tokens: 16384
max_model_len: 8192


@@ -0,0 +1,105 @@
docker:
pull_tag: rocm/pytorch-xdit:v25.13
docker_hub_url: https://hub.docker.com/layers/rocm/pytorch-xdit/v25.13/images/sha256-81954713070d67bde08595e03f62110c8a3dd66a9ae17a77d611e01f83f0f4ef
ROCm: 7.11.0
whats_new:
- "Flux.1 Kontext support"
- "Flux.2 Dev support"
- "Flux FP8 GEMM support"
- "Hybrid FP8 attention support for Wan models"
components:
TheRock:
version: 1728a81
url: https://github.com/ROCm/TheRock
rccl:
version: d23d18f
url: https://github.com/ROCm/rccl
composable_kernel:
version: ab0101c
url: https://github.com/ROCm/composable_kernel
rocm-libraries:
version: a2f7c35
url: https://github.com/ROCm/rocm-libraries
rocm-systems:
version: 659737c
url: https://github.com/ROCm/rocm-systems
torch:
version: 91be249
url: https://github.com/ROCm/pytorch
torchvision:
version: b919bd0
url: https://github.com/pytorch/vision
triton:
version: a272dfa
url: https://github.com/ROCm/triton
accelerate:
version: b521400f
url: https://github.com/huggingface/accelerate
aiter:
version: de14bec0
url: https://github.com/ROCm/aiter
diffusers:
version: a1f36ee3e
url: https://github.com/huggingface/diffusers
xfuser:
version: adf2681
url: https://github.com/xdit-project/xDiT
yunchang:
version: 2c9b712
url: https://github.com/feifeibear/long-context-attention
supported_models:
- group: Hunyuan Video
js_tag: hunyuan
models:
- model: Hunyuan Video
model_repo: tencent/HunyuanVideo
revision: refs/pr/18
url: https://huggingface.co/tencent/HunyuanVideo
github: https://github.com/Tencent-Hunyuan/HunyuanVideo
mad_tag: pyt_xdit_hunyuanvideo
js_tag: hunyuan_tag
- group: Wan-AI
js_tag: wan
models:
- model: Wan2.1
model_repo: Wan-AI/Wan2.1-I2V-14B-720P-Diffusers
url: https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P-Diffusers
github: https://github.com/Wan-Video/Wan2.1
mad_tag: pyt_xdit_wan_2_1
js_tag: wan_21_tag
- model: Wan2.2
model_repo: Wan-AI/Wan2.2-I2V-A14B-Diffusers
url: https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B-Diffusers
github: https://github.com/Wan-Video/Wan2.2
mad_tag: pyt_xdit_wan_2_2
js_tag: wan_22_tag
- group: FLUX
js_tag: flux
models:
- model: FLUX.1
model_repo: black-forest-labs/FLUX.1-dev
url: https://huggingface.co/black-forest-labs/FLUX.1-dev
github: https://github.com/black-forest-labs/flux
mad_tag: pyt_xdit_flux
js_tag: flux_1_tag
- model: FLUX.1 Kontext
model_repo: black-forest-labs/FLUX.1-Kontext-dev
url: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev
github: https://github.com/black-forest-labs/flux
mad_tag: pyt_xdit_flux_kontext
js_tag: flux_1_kontext_tag
- model: FLUX.2
model_repo: black-forest-labs/FLUX.2-dev
url: https://huggingface.co/black-forest-labs/FLUX.2-dev
github: https://github.com/black-forest-labs/flux2
mad_tag: pyt_xdit_flux_2
js_tag: flux_2_tag
- group: Stable Diffusion
js_tag: stablediffusion
models:
- model: stable-diffusion-3.5-large
model_repo: stabilityai/stable-diffusion-3.5-large
url: https://huggingface.co/stabilityai/stable-diffusion-3.5-large
github: https://github.com/Stability-AI/sd3.5
mad_tag: pyt_xdit_sd_3_5
js_tag: stable_diffusion_3_5_large_tag
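Each component above pins a short commit hash together with its upstream repository, so the stack can in principle be re-created outside the container. A rough sketch, assuming a local copy named `xdit_v25_13.yaml` and that every `version` field is a commit reachable in the listed repository:

```python
# Illustrative sketch: emit clone-and-checkout commands for the pinned components
# above. Assumes a local copy named xdit_v25_13.yaml and that each "version"
# field is a commit in the listed repository.
import yaml

with open("xdit_v25_13.yaml") as f:
    doc = yaml.safe_load(f)

# components may be nested under the docker key depending on how the file is structured
components = doc.get("components") or doc.get("docker", {}).get("components", {})
for name, info in components.items():
    print(f"git clone {info['url']} {name}")
    print(f"git -C {name} checkout {info['version']}")
```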


@@ -1,47 +1,16 @@
dockers:
- pull_tag: rocm/jax-training:maxtext-v25.11
docker_hub_url: https://hub.docker.com/layers/rocm/jax-training/maxtext-v25.11/images/sha256-18e4d8f0b8ce7a7422c58046940dd5f32249960449fca09a562b65fb8eb1562a
components:
ROCm: 7.1.0
JAX: 0.7.1
Python: 3.12
Transformer Engine: 2.4.0.dev0+281042de
hipBLASLt: 1.2.x
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 2 7B
mad_tag: jax_maxtext_train_llama-2-7b
model_repo: Llama-2-7B
@@ -54,6 +23,29 @@ model_groups:
precision: bf16
multinode_training_script: llama2_70b_multinode.sh
doc_options: ["single-node", "multi-node"]
- model: Llama 3 8B (multi-node)
mad_tag: jax_maxtext_train_llama-3-8b
multinode_training_script: llama3_8b_multinode.sh
doc_options: ["multi-node"]
- model: Llama 3 70B (multi-node)
mad_tag: jax_maxtext_train_llama-3-70b
multinode_training_script: llama3_70b_multinode.sh
doc_options: ["multi-node"]
- model: Llama 3.1 8B
mad_tag: jax_maxtext_train_llama-3.1-8b
model_repo: Llama-3.1-8B
precision: bf16
doc_options: ["single-node"]
- model: Llama 3.1 70B
mad_tag: jax_maxtext_train_llama-3.1-70b
model_repo: Llama-3.1-70B
precision: bf16
doc_options: ["single-node"]
- model: Llama 3.3 70B
mad_tag: jax_maxtext_train_llama-3.3-70b
model_repo: Llama-3.3-70B
precision: bf16
doc_options: ["single-node"]
- group: DeepSeek
tag: deepseek
models:


@@ -1,14 +1,17 @@
docker:
pull_tag: rocm/primus:v25.10
docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.10/images/sha256-140c37cd2eeeb183759b9622543fc03cc210dc97cbfa18eeefdcbda84420c197
components:
ROCm: 7.1.0
Primus: 0.3.0
Primus Turbo: 0.1.1
PyTorch: 2.10.0.dev20251112+rocm7.1
Python: "3.10"
Transformer Engine: 2.4.0.dev0+32e2d1d4
Flash Attention: 2.8.3
hipBLASLt: 1.2.0-09ab7153e2
Triton: 3.4.0
RCCL: 2.27.7
model_groups:
- group: Meta Llama
tag: llama
@@ -19,8 +22,6 @@ model_groups:
mad_tag: pyt_megatron_lm_train_llama-3.1-8b
- model: Llama 3.1 70B
mad_tag: pyt_megatron_lm_train_llama-3.1-70b
- model: Llama 2 7B
mad_tag: pyt_megatron_lm_train_llama-2-7b
- model: Llama 2 70B


@@ -0,0 +1,72 @@
dockers:
- pull_tag: rocm/jax-training:maxtext-v25.7-jax060
docker_hub_url: https://hub.docker.com/layers/rocm/jax-training/maxtext-v25.7/images/sha256-45f4c727d4019a63fc47313d3a5f5a5105569539294ddfd2d742218212ae9025
components:
ROCm: 6.4.1
JAX: 0.6.0
Python: 3.10.12
Transformer Engine: 2.1.0+90d703dd
hipBLASLt: 1.1.0-499ece1c21
- pull_tag: rocm/jax-training:maxtext-v25.7
docker_hub_url: https://hub.docker.com/layers/rocm/jax-training/maxtext-v25.7/images/sha256-45f4c727d4019a63fc47313d3a5f5a5105569539294ddfd2d742218212ae9025
components:
ROCm: 6.4.1
JAX: 0.5.0
Python: 3.10.12
Transformer Engine: 2.1.0+90d703dd
hipBLASLt: 1.x.x
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.3 70B
mad_tag: jax_maxtext_train_llama-3.3-70b
model_repo: Llama-3.3-70B
precision: bf16
doc_options: ["single-node"]
- model: Llama 3.1 8B
mad_tag: jax_maxtext_train_llama-3.1-8b
model_repo: Llama-3.1-8B
precision: bf16
doc_options: ["single-node"]
- model: Llama 3.1 70B
mad_tag: jax_maxtext_train_llama-3.1-70b
model_repo: Llama-3.1-70B
precision: bf16
doc_options: ["single-node"]
- model: Llama 3 8B
mad_tag: jax_maxtext_train_llama-3-8b
multinode_training_script: llama3_8b_multinode.sh
doc_options: ["multi-node"]
- model: Llama 3 70B
mad_tag: jax_maxtext_train_llama-3-70b
multinode_training_script: llama3_70b_multinode.sh
doc_options: ["multi-node"]
- model: Llama 2 7B
mad_tag: jax_maxtext_train_llama-2-7b
model_repo: Llama-2-7B
precision: bf16
multinode_training_script: llama2_7b_multinode.sh
doc_options: ["single-node", "multi-node"]
- model: Llama 2 70B
mad_tag: jax_maxtext_train_llama-2-70b
model_repo: Llama-2-70B
precision: bf16
multinode_training_script: llama2_70b_multinode.sh
doc_options: ["single-node", "multi-node"]
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek-V2-Lite (16B)
mad_tag: jax_maxtext_train_deepseek-v2-lite-16b
model_repo: DeepSeek-V2-lite
precision: bf16
doc_options: ["single-node"]
- group: Mistral AI
tag: mistral
models:
- model: Mixtral 8x7B
mad_tag: jax_maxtext_train_mixtral-8x7b
model_repo: Mixtral-8x7B
precision: bf16
doc_options: ["single-node"]


@@ -0,0 +1,64 @@
dockers:
- pull_tag: rocm/jax-training:maxtext-v25.9.1
docker_hub_url: https://hub.docker.com/layers/rocm/jax-training/maxtext-v25.9.1/images/sha256-60946cfbd470f6ee361fc9da740233a4fb2e892727f01719145b1f7627a1cff6
components:
ROCm: 7.0.0
JAX: 0.6.2
Python: 3.10.18
Transformer Engine: 2.2.0.dev0+c91bac54
hipBLASLt: 1.x.x
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 2 7B
mad_tag: jax_maxtext_train_llama-2-7b
model_repo: Llama-2-7B
precision: bf16
multinode_training_script: llama2_7b_multinode.sh
doc_options: ["single-node", "multi-node"]
- model: Llama 2 70B
mad_tag: jax_maxtext_train_llama-2-70b
model_repo: Llama-2-70B
precision: bf16
multinode_training_script: llama2_70b_multinode.sh
doc_options: ["single-node", "multi-node"]
- model: Llama 3 8B (multi-node)
mad_tag: jax_maxtext_train_llama-3-8b
multinode_training_script: llama3_8b_multinode.sh
doc_options: ["multi-node"]
- model: Llama 3 70B (multi-node)
mad_tag: jax_maxtext_train_llama-3-70b
multinode_training_script: llama3_70b_multinode.sh
doc_options: ["multi-node"]
- model: Llama 3.1 8B
mad_tag: jax_maxtext_train_llama-3.1-8b
model_repo: Llama-3.1-8B
precision: bf16
doc_options: ["single-node"]
- model: Llama 3.1 70B
mad_tag: jax_maxtext_train_llama-3.1-70b
model_repo: Llama-3.1-70B
precision: bf16
doc_options: ["single-node"]
- model: Llama 3.3 70B
mad_tag: jax_maxtext_train_llama-3.3-70b
model_repo: Llama-3.3-70B
precision: bf16
doc_options: ["single-node"]
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek-V2-Lite (16B)
mad_tag: jax_maxtext_train_deepseek-v2-lite-16b
model_repo: DeepSeek-V2-lite
precision: bf16
doc_options: ["single-node"]
- group: Mistral AI
tag: mistral
models:
- model: Mixtral 8x7B
mad_tag: jax_maxtext_train_mixtral-8x7b
model_repo: Mixtral-8x7B
precision: bf16
doc_options: ["single-node"]


@@ -0,0 +1,49 @@
docker:
pull_tag: rocm/primus:v25.10
docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.10/images/sha256-140c37cd2eeeb183759b9622543fc03cc210dc97cbfa18eeefdcbda84420c197
components:
ROCm: 7.1.0
Primus: 0.3.0
Primus Turbo: 0.1.1
PyTorch: 2.10.0.dev20251112+rocm7.1
Python: "3.10"
Transformer Engine: 2.4.0.dev0+32e2d1d4
Flash Attention: 2.8.3
hipBLASLt: 1.2.0-09ab7153e2
Triton: 3.4.0
RCCL: 2.27.7
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.3 70B
mad_tag: pyt_megatron_lm_train_llama-3.3-70b
- model: Llama 3.1 8B
mad_tag: pyt_megatron_lm_train_llama-3.1-8b
- model: Llama 3.1 70B
mad_tag: pyt_megatron_lm_train_llama-3.1-70b
- model: Llama 2 7B
mad_tag: pyt_megatron_lm_train_llama-2-7b
- model: Llama 2 70B
mad_tag: pyt_megatron_lm_train_llama-2-70b
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek-V3 (proxy)
mad_tag: pyt_megatron_lm_train_deepseek-v3-proxy
- model: DeepSeek-V2-Lite
mad_tag: pyt_megatron_lm_train_deepseek-v2-lite-16b
- group: Mistral AI
tag: mistral
models:
- model: Mixtral 8x7B
mad_tag: pyt_megatron_lm_train_mixtral-8x7b
- model: Mixtral 8x22B (proxy)
mad_tag: pyt_megatron_lm_train_mixtral-8x22b-proxy
- group: Qwen
tag: qwen
models:
- model: Qwen 2.5 7B
mad_tag: pyt_megatron_lm_train_qwen2.5-7b
- model: Qwen 2.5 72B
mad_tag: pyt_megatron_lm_train_qwen2.5-72b


@@ -0,0 +1,48 @@
dockers:
- pull_tag: rocm/megatron-lm:v25.8_py310
docker_hub_url: https://hub.docker.com/layers/rocm/megatron-lm/v25.8_py310/images/sha256-50fc824361054e445e86d5d88d5f58817f61f8ec83ad4a7e43ea38bbc4a142c0
components:
ROCm: 6.4.3
PyTorch: 2.8.0a0+gitd06a406
Python: "3.10"
Transformer Engine: 2.2.0.dev0+54dd2bdc
hipBLASLt: d1b517fc7a
Triton: 3.3.0
RCCL: 2.22.3
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.3 70B
mad_tag: pyt_megatron_lm_train_llama-3.3-70b
- model: Llama 3.1 8B
mad_tag: pyt_megatron_lm_train_llama-3.1-8b
- model: Llama 3.1 70B
mad_tag: pyt_megatron_lm_train_llama-3.1-70b
- model: Llama 3.1 70B (proxy)
mad_tag: pyt_megatron_lm_train_llama-3.1-70b-proxy
- model: Llama 2 7B
mad_tag: pyt_megatron_lm_train_llama-2-7b
- model: Llama 2 70B
mad_tag: pyt_megatron_lm_train_llama-2-70b
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek-V3 (proxy)
mad_tag: pyt_megatron_lm_train_deepseek-v3-proxy
- model: DeepSeek-V2-Lite
mad_tag: pyt_megatron_lm_train_deepseek-v2-lite-16b
- group: Mistral AI
tag: mistral
models:
- model: Mixtral 8x7B
mad_tag: pyt_megatron_lm_train_mixtral-8x7b
- model: Mixtral 8x22B (proxy)
mad_tag: pyt_megatron_lm_train_mixtral-8x22b-proxy
- group: Qwen
tag: qwen
models:
- model: Qwen 2.5 7B
mad_tag: pyt_megatron_lm_train_qwen2.5-7b
- model: Qwen 2.5 72B
mad_tag: pyt_megatron_lm_train_qwen2.5-72b


@@ -0,0 +1,53 @@
dockers:
MI355X and MI350X:
pull_tag: rocm/megatron-lm:v25.9_gfx950
docker_hub_url: https://hub.docker.com/layers/rocm/megatron-lm/v25.9_gfx950/images/sha256-1a198be32f49efd66d0ff82066b44bd99b3e6b04c8e0e9b36b2c481e13bff7b6
components: &docker_components
ROCm: 7.0.0
Primus: aab4234
PyTorch: 2.9.0.dev20250821+rocm7.0.0.lw.git125803b7
Python: "3.10"
Transformer Engine: 2.2.0.dev0+54dd2bdc
Flash Attention: 2.8.3
hipBLASLt: 911283acd1
Triton: 3.4.0+rocm7.0.0.git56765e8c
RCCL: 2.26.6
MI325X and MI300X:
pull_tag: rocm/megatron-lm:v25.9_gfx942
docker_hub_url: https://hub.docker.com/layers/rocm/megatron-lm/v25.9_gfx942/images/sha256-df6ab8f45b4b9ceb100fb24e19b2019a364e351ee3b324dbe54466a1d67f8357
components: *docker_components
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.3 70B
mad_tag: pyt_megatron_lm_train_llama-3.3-70b
- model: Llama 3.1 8B
mad_tag: pyt_megatron_lm_train_llama-3.1-8b
- model: Llama 3.1 70B
mad_tag: pyt_megatron_lm_train_llama-3.1-70b
- model: Llama 2 7B
mad_tag: pyt_megatron_lm_train_llama-2-7b
- model: Llama 2 70B
mad_tag: pyt_megatron_lm_train_llama-2-70b
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek-V3 (proxy)
mad_tag: pyt_megatron_lm_train_deepseek-v3-proxy
- model: DeepSeek-V2-Lite
mad_tag: pyt_megatron_lm_train_deepseek-v2-lite-16b
- group: Mistral AI
tag: mistral
models:
- model: Mixtral 8x7B
mad_tag: pyt_megatron_lm_train_mixtral-8x7b
- model: Mixtral 8x22B (proxy)
mad_tag: pyt_megatron_lm_train_mixtral-8x22b-proxy
- group: Qwen
tag: qwen
models:
- model: Qwen 2.5 7B
mad_tag: pyt_megatron_lm_train_qwen2.5-7b
- model: Qwen 2.5 72B
mad_tag: pyt_megatron_lm_train_qwen2.5-72b
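`&docker_components` and `*docker_components` above are a standard YAML anchor/alias pair, so the MI325X/MI300X entry reuses the exact component list of the MI355X/MI350X entry. A minimal check, assuming the file is saved locally as `megatron_v25_9.yaml` (an illustrative name):

```python
# Illustrative sketch: the "&docker_components"/"*docker_components" pair above is a
# YAML anchor and alias, so both hardware entries resolve to one components map.
# Assumes this file is saved locally as megatron_v25_9.yaml.
import yaml

with open("megatron_v25_9.yaml") as f:
    dockers = yaml.safe_load(f)["dockers"]

gfx950 = dockers["MI355X and MI350X"]
gfx942 = dockers["MI325X and MI300X"]
assert gfx942["components"] == gfx950["components"]  # alias resolves to the same data
print(gfx950["pull_tag"], "and", gfx942["pull_tag"], "both ship ROCm",
      gfx950["components"]["ROCm"])
```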


@@ -0,0 +1,58 @@
docker:
pull_tag: rocm/primus:v25.10
docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.10/images/sha256-140c37cd2eeeb183759b9622543fc03cc210dc97cbfa18eeefdcbda84420c197
components:
ROCm: 7.1.0
PyTorch: 2.10.0.dev20251112+rocm7.1
Python: "3.10"
Transformer Engine: 2.4.0.dev0+32e2d1d4
Flash Attention: 2.8.3
hipBLASLt: 1.2.0-09ab7153e2
Triton: 3.4.0
RCCL: 2.27.7
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.3 70B
mad_tag: primus_pyt_megatron_lm_train_llama-3.3-70b
config_name: llama3.3_70B-pretrain.yaml
- model: Llama 3.1 70B
mad_tag: primus_pyt_megatron_lm_train_llama-3.1-70b
config_name: llama3.1_70B-pretrain.yaml
- model: Llama 3.1 8B
mad_tag: primus_pyt_megatron_lm_train_llama-3.1-8b
config_name: llama3.1_8B-pretrain.yaml
- model: Llama 2 7B
mad_tag: primus_pyt_megatron_lm_train_llama-2-7b
config_name: llama2_7B-pretrain.yaml
- model: Llama 2 70B
mad_tag: primus_pyt_megatron_lm_train_llama-2-70b
config_name: llama2_70B-pretrain.yaml
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek-V3 (proxy)
mad_tag: primus_pyt_megatron_lm_train_deepseek-v3-proxy
config_name: deepseek_v3-pretrain.yaml
- model: DeepSeek-V2-Lite
mad_tag: primus_pyt_megatron_lm_train_deepseek-v2-lite-16b
config_name: deepseek_v2_lite-pretrain.yaml
- group: Mistral AI
tag: mistral
models:
- model: Mixtral 8x7B
mad_tag: primus_pyt_megatron_lm_train_mixtral-8x7b
config_name: mixtral_8x7B_v0.1-pretrain.yaml
- model: Mixtral 8x22B (proxy)
mad_tag: primus_pyt_megatron_lm_train_mixtral-8x22b-proxy
config_name: mixtral_8x22B_v0.1-pretrain.yaml
- group: Qwen
tag: qwen
models:
- model: Qwen 2.5 7B
mad_tag: primus_pyt_megatron_lm_train_qwen2.5-7b
config_name: primus_qwen2.5_7B-pretrain.yaml
- model: Qwen 2.5 72B
mad_tag: primus_pyt_megatron_lm_train_qwen2.5-72b
config_name: qwen2.5_72B-pretrain.yaml


@@ -0,0 +1,58 @@
dockers:
- pull_tag: rocm/megatron-lm:v25.8_py310
docker_hub_url: https://hub.docker.com/layers/rocm/megatron-lm/v25.8_py310/images/sha256-50fc824361054e445e86d5d88d5f58817f61f8ec83ad4a7e43ea38bbc4a142c0
components:
ROCm: 6.4.3
Primus: 927a717
PyTorch: 2.8.0a0+gitd06a406
Python: "3.10"
Transformer Engine: 2.2.0.dev0+54dd2bdc
hipBLASLt: d1b517fc7a
Triton: 3.3.0
RCCL: 2.22.3
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.3 70B
mad_tag: primus_pyt_megatron_lm_train_llama-3.3-70b
config_name: llama3.3_70B-pretrain.yaml
- model: Llama 3.1 70B
mad_tag: primus_pyt_megatron_lm_train_llama-3.1-70b
config_name: llama3.1_70B-pretrain.yaml
- model: Llama 3.1 8B
mad_tag: primus_pyt_megatron_lm_train_llama-3.1-8b
config_name: llama3.1_8B-pretrain.yaml
- model: Llama 2 7B
mad_tag: primus_pyt_megatron_lm_train_llama-2-7b
config_name: llama2_7B-pretrain.yaml
- model: Llama 2 70B
mad_tag: primus_pyt_megatron_lm_train_llama-2-70b
config_name: llama2_70B-pretrain.yaml
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek-V3 (proxy)
mad_tag: primus_pyt_megatron_lm_train_deepseek-v3-proxy
config_name: deepseek_v3-pretrain.yaml
- model: DeepSeek-V2-Lite
mad_tag: primus_pyt_megatron_lm_train_deepseek-v2-lite-16b
config_name: deepseek_v2_lite-pretrain.yaml
- group: Mistral AI
tag: mistral
models:
- model: Mixtral 8x7B
mad_tag: primus_pyt_megatron_lm_train_mixtral-8x7b
config_name: mixtral_8x7B_v0.1-pretrain.yaml
- model: Mixtral 8x22B (proxy)
mad_tag: primus_pyt_megatron_lm_train_mixtral-8x22b-proxy
config_name: mixtral_8x22B_v0.1-pretrain.yaml
- group: Qwen
tag: qwen
models:
- model: Qwen 2.5 7B
mad_tag: primus_pyt_megatron_lm_train_qwen2.5-7b
config_name: primus_qwen2.5_7B-pretrain.yaml
- model: Qwen 2.5 72B
mad_tag: primus_pyt_megatron_lm_train_qwen2.5-72b
config_name: qwen2.5_72B-pretrain.yaml


@@ -0,0 +1,65 @@
dockers:
MI355X and MI350X:
pull_tag: rocm/primus:v25.9_gfx950
docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.9_gfx950/images/sha256-1a198be32f49efd66d0ff82066b44bd99b3e6b04c8e0e9b36b2c481e13bff7b6
components: &docker_components
ROCm: 7.0.0
Primus: 0.3.0
Primus Turbo: 0.1.1
PyTorch: 2.9.0.dev20250821+rocm7.0.0.lw.git125803b7
Python: "3.10"
Transformer Engine: 2.2.0.dev0+54dd2bdc
Flash Attention: 2.8.3
hipBLASLt: 911283acd1
Triton: 3.4.0+rocm7.0.0.git56765e8c
RCCL: 2.26.6
MI325X and MI300X:
pull_tag: rocm/primus:v25.9_gfx942
docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.9_gfx942/images/sha256-df6ab8f45b4b9ceb100fb24e19b2019a364e351ee3b324dbe54466a1d67f8357
components: *docker_components
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.3 70B
mad_tag: primus_pyt_megatron_lm_train_llama-3.3-70b
config_name: llama3.3_70B-pretrain.yaml
- model: Llama 3.1 70B
mad_tag: primus_pyt_megatron_lm_train_llama-3.1-70b
config_name: llama3.1_70B-pretrain.yaml
- model: Llama 3.1 8B
mad_tag: primus_pyt_megatron_lm_train_llama-3.1-8b
config_name: llama3.1_8B-pretrain.yaml
- model: Llama 2 7B
mad_tag: primus_pyt_megatron_lm_train_llama-2-7b
config_name: llama2_7B-pretrain.yaml
- model: Llama 2 70B
mad_tag: primus_pyt_megatron_lm_train_llama-2-70b
config_name: llama2_70B-pretrain.yaml
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek-V3 (proxy)
mad_tag: primus_pyt_megatron_lm_train_deepseek-v3-proxy
config_name: deepseek_v3-pretrain.yaml
- model: DeepSeek-V2-Lite
mad_tag: primus_pyt_megatron_lm_train_deepseek-v2-lite-16b
config_name: deepseek_v2_lite-pretrain.yaml
- group: Mistral AI
tag: mistral
models:
- model: Mixtral 8x7B
mad_tag: primus_pyt_megatron_lm_train_mixtral-8x7b
config_name: mixtral_8x7B_v0.1-pretrain.yaml
- model: Mixtral 8x22B (proxy)
mad_tag: primus_pyt_megatron_lm_train_mixtral-8x22b-proxy
config_name: mixtral_8x22B_v0.1-pretrain.yaml
- group: Qwen
tag: qwen
models:
- model: Qwen 2.5 7B
mad_tag: primus_pyt_megatron_lm_train_qwen2.5-7b
config_name: primus_qwen2.5_7B-pretrain.yaml
- model: Qwen 2.5 72B
mad_tag: primus_pyt_megatron_lm_train_qwen2.5-72b
config_name: qwen2.5_72B-pretrain.yaml

View File

@@ -0,0 +1,32 @@
docker:
pull_tag: rocm/primus:v25.10
docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.10/images/sha256-140c37cd2eeeb183759b9622543fc03cc210dc97cbfa18eeefdcbda84420c197
components:
ROCm: 7.1.0
PyTorch: 2.10.0.dev20251112+rocm7.1
Python: "3.10"
Transformer Engine: 2.4.0.dev0+32e2d1d4
Flash Attention: 2.8.3
hipBLASLt: 1.2.0-09ab7153e2
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.1 8B
mad_tag: primus_pyt_train_llama-3.1-8b
model_repo: Llama-3.1-8B
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: BF16
- model: Llama 3.1 70B
mad_tag: primus_pyt_train_llama-3.1-70b
model_repo: Llama-3.1-70B
url: https://huggingface.co/meta-llama/Llama-3.1-70B
precision: BF16
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek V2 16B
mad_tag: primus_pyt_train_deepseek-v2
model_repo: DeepSeek-V2
url: https://huggingface.co/deepseek-ai/DeepSeek-V2
precision: BF16

View File

@@ -0,0 +1,24 @@
dockers:
- pull_tag: rocm/pytorch-training:v25.8
docker_hub_url: https://hub.docker.com/layers/rocm/pytorch-training/v25.8/images/sha256-5082ae01d73fec6972b0d84e5dad78c0926820dcf3c19f301d6c8eb892e573c5
components:
ROCm: 6.4.3
PyTorch: 2.8.0a0+gitd06a406
Python: 3.10.18
Transformer Engine: 2.2.0.dev0+a1e66aae
Flash Attention: 3.0.0.post1
hipBLASLt: 1.1.0-d1b517fc7a
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.1 8B
mad_tag: primus_pyt_train_llama-3.1-8b
model_repo: Llama-3.1-8B
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: BF16
- model: Llama 3.1 70B
mad_tag: primus_pyt_train_llama-3.1-70b
model_repo: Llama-3.1-70B
url: https://huggingface.co/meta-llama/Llama-3.1-70B
precision: BF16

View File

@@ -0,0 +1,39 @@
dockers:
MI355X and MI350X:
pull_tag: rocm/primus:v25.9_gfx950
docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.9_gfx950/images/sha256-1a198be32f49efd66d0ff82066b44bd99b3e6b04c8e0e9b36b2c481e13bff7b6
components: &docker_components
ROCm: 7.0.0
Primus: 0.3.0
Primus Turbo: 0.1.1
PyTorch: 2.9.0.dev20250821+rocm7.0.0.lw.git125803b7
Python: "3.10"
Transformer Engine: 2.2.0.dev0+54dd2bdc
Flash Attention: 2.8.3
hipBLASLt: 911283acd1
Triton: 3.4.0+rocm7.0.0.git56765e8c
RCCL: 2.26.6
MI325X and MI300X:
pull_tag: rocm/primus:v25.9_gfx942
docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.9_gfx942/images/sha256-df6ab8f45b4b9ceb100fb24e19b2019a364e351ee3b324dbe54466a1d67f8357
components: *docker_components
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.1 8B
mad_tag: primus_pyt_train_llama-3.1-8b
model_repo: meta-llama/Llama-3.1-8B
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: BF16
config_file:
bf16: "./llama3_8b_fsdp_bf16.toml"
fp8: "./llama3_8b_fsdp_fp8.toml"
- model: Llama 3.1 70B
mad_tag: primus_pyt_train_llama-3.1-70b
model_repo: meta-llama/Llama-3.1-70B
url: https://huggingface.co/meta-llama/Llama-3.1-70B
precision: BF16
config_file:
bf16: "./llama3_70b_fsdp_bf16.toml"
fp8: "./llama3_70b_fsdp_fp8.toml"

View File

@@ -0,0 +1,197 @@
docker:
pull_tag: rocm/primus:v25.10
docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.10/images/sha256-140c37cd2eeeb183759b9622543fc03cc210dc97cbfa18eeefdcbda84420c197
components:
ROCm: 7.1.0
Primus: 0.3.0
Primus Turbo: 0.1.1
PyTorch: 2.10.0.dev20251112+rocm7.1
Python: "3.10"
Transformer Engine: 2.4.0.dev0+32e2d1d4
Flash Attention: 2.8.3
hipBLASLt: 1.2.0-09ab7153e2
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 4 Scout 17B-16E
mad_tag: pyt_train_llama-4-scout-17b-16e
model_repo: Llama-4-17B_16E
url: https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3.3 70B
mad_tag: pyt_train_llama-3.3-70b
model_repo: Llama-3.3-70B
url: https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
precision: BF16
training_modes: [finetune_fw, finetune_lora, finetune_qlora]
- model: Llama 3.2 1B
mad_tag: pyt_train_llama-3.2-1b
model_repo: Llama-3.2-1B
url: https://huggingface.co/meta-llama/Llama-3.2-1B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3.2 3B
mad_tag: pyt_train_llama-3.2-3b
model_repo: Llama-3.2-3B
url: https://huggingface.co/meta-llama/Llama-3.2-3B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3.2 Vision 11B
mad_tag: pyt_train_llama-3.2-vision-11b
model_repo: Llama-3.2-Vision-11B
url: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision
precision: BF16
training_modes: [finetune_fw]
- model: Llama 3.2 Vision 90B
mad_tag: pyt_train_llama-3.2-vision-90b
model_repo: Llama-3.2-Vision-90B
url: https://huggingface.co/meta-llama/Llama-3.2-90B-Vision
precision: BF16
training_modes: [finetune_fw]
- model: Llama 3.1 8B
mad_tag: pyt_train_llama-3.1-8b
model_repo: Llama-3.1-8B
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: BF16
training_modes: [pretrain, finetune_fw, finetune_lora, HF_pretrain]
- model: Llama 3.1 70B
mad_tag: pyt_train_llama-3.1-70b
model_repo: Llama-3.1-70B
url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct
precision: BF16
training_modes: [pretrain, finetune_fw, finetune_lora]
- model: Llama 3.1 405B
mad_tag: pyt_train_llama-3.1-405b
model_repo: Llama-3.1-405B
url: https://huggingface.co/meta-llama/Llama-3.1-405B
precision: BF16
training_modes: [finetune_qlora]
- model: Llama 3 8B
mad_tag: pyt_train_llama-3-8b
model_repo: Llama-3-8B
url: https://huggingface.co/meta-llama/Meta-Llama-3-8B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3 70B
mad_tag: pyt_train_llama-3-70b
model_repo: Llama-3-70B
url: https://huggingface.co/meta-llama/Meta-Llama-3-70B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 2 7B
mad_tag: pyt_train_llama-2-7b
model_repo: Llama-2-7B
url: https://github.com/meta-llama/llama-models/tree/main/models/llama2
precision: BF16
training_modes: [finetune_fw, finetune_lora, finetune_qlora]
- model: Llama 2 13B
mad_tag: pyt_train_llama-2-13b
model_repo: Llama-2-13B
url: https://github.com/meta-llama/llama-models/tree/main/models/llama2
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 2 70B
mad_tag: pyt_train_llama-2-70b
model_repo: Llama-2-70B
url: https://github.com/meta-llama/llama-models/tree/main/models/llama2
precision: BF16
training_modes: [finetune_lora, finetune_qlora]
- group: OpenAI
tag: openai
models:
- model: GPT OSS 20B
mad_tag: pyt_train_gpt_oss_20b
model_repo: GPT-OSS-20B
url: https://huggingface.co/openai/gpt-oss-20b
precision: BF16
training_modes: [HF_finetune_lora]
- model: GPT OSS 120B
mad_tag: pyt_train_gpt_oss_120b
model_repo: GPT-OSS-120B
url: https://huggingface.co/openai/gpt-oss-120b
precision: BF16
training_modes: [HF_finetune_lora]
- group: DeepSeek
tag: deepseek
models:
- model: DeepSeek V2 16B
mad_tag: primus_pyt_train_deepseek-v2
model_repo: DeepSeek-V2
url: https://huggingface.co/deepseek-ai/DeepSeek-V2
precision: BF16
training_modes: [pretrain]
- group: Qwen
tag: qwen
models:
- model: Qwen 3 8B
mad_tag: pyt_train_qwen3-8b
model_repo: Qwen3-8B
url: https://huggingface.co/Qwen/Qwen3-8B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Qwen 3 32B
mad_tag: pyt_train_qwen3-32b
model_repo: Qwen3-32
url: https://huggingface.co/Qwen/Qwen3-32B
precision: BF16
training_modes: [finetune_lora]
- model: Qwen 2.5 32B
mad_tag: pyt_train_qwen2.5-32b
model_repo: Qwen2.5-32B
url: https://huggingface.co/Qwen/Qwen2.5-32B
precision: BF16
training_modes: [finetune_lora]
- model: Qwen 2.5 72B
mad_tag: pyt_train_qwen2.5-72b
model_repo: Qwen2.5-72B
url: https://huggingface.co/Qwen/Qwen2.5-72B
precision: BF16
training_modes: [finetune_lora]
- model: Qwen 2 1.5B
mad_tag: pyt_train_qwen2-1.5b
model_repo: Qwen2-1.5B
url: https://huggingface.co/Qwen/Qwen2-1.5B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Qwen 2 7B
mad_tag: pyt_train_qwen2-7b
model_repo: Qwen2-7B
url: https://huggingface.co/Qwen/Qwen2-7B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- group: Stable Diffusion
tag: sd
models:
- model: Stable Diffusion XL
mad_tag: pyt_huggingface_stable_diffusion_xl_2k_lora_finetuning
model_repo: SDXL
url: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
precision: BF16
training_modes: [posttrain]
- group: Flux
tag: flux
models:
- model: FLUX.1-dev
mad_tag: pyt_train_flux
model_repo: Flux
url: https://huggingface.co/black-forest-labs/FLUX.1-dev
precision: BF16
training_modes: [posttrain]
- group: NCF
tag: ncf
models:
- model: NCF
mad_tag: pyt_ncf_training
model_repo:
url: https://github.com/ROCm/FluxBenchmark
precision: FP32
- group: DLRM
tag: dlrm
models:
- model: DLRM v2
mad_tag: pyt_train_dlrm
model_repo: DLRM
url: https://github.com/AMD-AGI/DLRMBenchmark
training_modes: [pretrain]

View File

@@ -0,0 +1,178 @@
dockers:
- pull_tag: rocm/pytorch-training:v25.8
docker_hub_url: https://hub.docker.com/layers/rocm/pytorch-training/v25.8/images/sha256-5082ae01d73fec6972b0d84e5dad78c0926820dcf3c19f301d6c8eb892e573c5
components:
ROCm: 6.4.3
PyTorch: 2.8.0a0+gitd06a406
Python: 3.10.18
Transformer Engine: 2.2.0.dev0+a1e66aae
Flash Attention: 3.0.0.post1
hipBLASLt: 1.1.0-d1b517fc7a
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 4 Scout 17B-16E
mad_tag: pyt_train_llama-4-scout-17b-16e
model_repo: Llama-4-17B_16E
url: https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3.3 70B
mad_tag: pyt_train_llama-3.3-70b
model_repo: Llama-3.3-70B
url: https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
precision: BF16
training_modes: [finetune_fw, finetune_lora, finetune_qlora]
- model: Llama 3.2 1B
mad_tag: pyt_train_llama-3.2-1b
model_repo: Llama-3.2-1B
url: https://huggingface.co/meta-llama/Llama-3.2-1B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3.2 3B
mad_tag: pyt_train_llama-3.2-3b
model_repo: Llama-3.2-3B
url: https://huggingface.co/meta-llama/Llama-3.2-3B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3.2 Vision 11B
mad_tag: pyt_train_llama-3.2-vision-11b
model_repo: Llama-3.2-Vision-11B
url: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision
precision: BF16
training_modes: [finetune_fw]
- model: Llama 3.2 Vision 90B
mad_tag: pyt_train_llama-3.2-vision-90b
model_repo: Llama-3.2-Vision-90B
url: https://huggingface.co/meta-llama/Llama-3.2-90B-Vision
precision: BF16
training_modes: [finetune_fw]
- model: Llama 3.1 8B
mad_tag: pyt_train_llama-3.1-8b
model_repo: Llama-3.1-8B
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: BF16
training_modes: [pretrain, finetune_fw, finetune_lora, HF_pretrain]
- model: Llama 3.1 70B
mad_tag: pyt_train_llama-3.1-70b
model_repo: Llama-3.1-70B
url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct
precision: BF16
training_modes: [pretrain, finetune_fw, finetune_lora]
- model: Llama 3.1 405B
mad_tag: pyt_train_llama-3.1-405b
model_repo: Llama-3.1-405B
url: https://huggingface.co/meta-llama/Llama-3.1-405B
precision: BF16
training_modes: [finetune_qlora]
- model: Llama 3 8B
mad_tag: pyt_train_llama-3-8b
model_repo: Llama-3-8B
url: https://huggingface.co/meta-llama/Meta-Llama-3-8B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3 70B
mad_tag: pyt_train_llama-3-70b
model_repo: Llama-3-70B
url: https://huggingface.co/meta-llama/Meta-Llama-3-70B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 2 7B
mad_tag: pyt_train_llama-2-7b
model_repo: Llama-2-7B
url: https://github.com/meta-llama/llama-models/tree/main/models/llama2
precision: BF16
training_modes: [finetune_fw, finetune_lora, finetune_qlora]
- model: Llama 2 13B
mad_tag: pyt_train_llama-2-13b
model_repo: Llama-2-13B
url: https://github.com/meta-llama/llama-models/tree/main/models/llama2
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 2 70B
mad_tag: pyt_train_llama-2-70b
model_repo: Llama-2-70B
url: https://github.com/meta-llama/llama-models/tree/main/models/llama2
precision: BF16
training_modes: [finetune_lora, finetune_qlora]
- group: OpenAI
tag: openai
models:
- model: GPT OSS 20B
mad_tag: pyt_train_gpt_oss_20b
model_repo: GPT-OSS-20B
url: https://huggingface.co/openai/gpt-oss-20b
precision: BF16
training_modes: [HF_finetune_lora]
- model: GPT OSS 120B
mad_tag: pyt_train_gpt_oss_120b
model_repo: GPT-OSS-120B
url: https://huggingface.co/openai/gpt-oss-120b
precision: BF16
training_modes: [HF_finetune_lora]
- group: Qwen
tag: qwen
models:
- model: Qwen 3 8B
mad_tag: pyt_train_qwen3-8b
model_repo: Qwen3-8B
url: https://huggingface.co/Qwen/Qwen3-8B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Qwen 3 32B
mad_tag: pyt_train_qwen3-32b
model_repo: Qwen3-32
url: https://huggingface.co/Qwen/Qwen3-32B
precision: BF16
training_modes: [finetune_lora]
- model: Qwen 2.5 32B
mad_tag: pyt_train_qwen2.5-32b
model_repo: Qwen2.5-32B
url: https://huggingface.co/Qwen/Qwen2.5-32B
precision: BF16
training_modes: [finetune_lora]
- model: Qwen 2.5 72B
mad_tag: pyt_train_qwen2.5-72b
model_repo: Qwen2.5-72B
url: https://huggingface.co/Qwen/Qwen2.5-72B
precision: BF16
training_modes: [finetune_lora]
- model: Qwen 2 1.5B
mad_tag: pyt_train_qwen2-1.5b
model_repo: Qwen2-1.5B
url: https://huggingface.co/Qwen/Qwen2-1.5B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Qwen 2 7B
mad_tag: pyt_train_qwen2-7b
model_repo: Qwen2-7B
url: https://huggingface.co/Qwen/Qwen2-7B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- group: Stable Diffusion
tag: sd
models:
- model: Stable Diffusion XL
mad_tag: pyt_huggingface_stable_diffusion_xl_2k_lora_finetuning
model_repo: SDXL
url: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
precision: BF16
training_modes: [finetune_lora]
- group: Flux
tag: flux
models:
- model: FLUX.1-dev
mad_tag: pyt_train_flux
model_repo: Flux
url: https://huggingface.co/black-forest-labs/FLUX.1-dev
precision: BF16
training_modes: [pretrain]
- group: NCF
tag: ncf
models:
- model: NCF
mad_tag: pyt_ncf_training
model_repo:
url: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Recommendation/NCF
precision: FP32

View File

@@ -0,0 +1,186 @@
dockers:
MI355X and MI350X:
pull_tag: rocm/pytorch-training:v25.9_gfx950
docker_hub_url: https://hub.docker.com/layers/rocm/pytorch-training/v25.9_gfx950/images/sha256-1a198be32f49efd66d0ff82066b44bd99b3e6b04c8e0e9b36b2c481e13bff7b6
components: &docker_components
ROCm: 7.0.0
Primus: aab4234
PyTorch: 2.9.0.dev20250821+rocm7.0.0.lw.git125803b7
Python: "3.10"
Transformer Engine: 2.2.0.dev0+54dd2bdc
Flash Attention: 2.8.3
hipBLASLt: 911283acd1
Triton: 3.4.0+rocm7.0.0.git56765e8c
RCCL: 2.26.6
MI325X and MI300X:
pull_tag: rocm/pytorch-training:v25.9_gfx942
docker_hub_url: https://hub.docker.com/layers/rocm/pytorch-training/v25.9_gfx942/images/sha256-df6ab8f45b4b9ceb100fb24e19b2019a364e351ee3b324dbe54466a1d67f8357
components: *docker_components
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 4 Scout 17B-16E
mad_tag: pyt_train_llama-4-scout-17b-16e
model_repo: Llama-4-17B_16E
url: https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3.3 70B
mad_tag: pyt_train_llama-3.3-70b
model_repo: Llama-3.3-70B
url: https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct
precision: BF16
training_modes: [finetune_fw, finetune_lora, finetune_qlora]
- model: Llama 3.2 1B
mad_tag: pyt_train_llama-3.2-1b
model_repo: Llama-3.2-1B
url: https://huggingface.co/meta-llama/Llama-3.2-1B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3.2 3B
mad_tag: pyt_train_llama-3.2-3b
model_repo: Llama-3.2-3B
url: https://huggingface.co/meta-llama/Llama-3.2-3B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3.2 Vision 11B
mad_tag: pyt_train_llama-3.2-vision-11b
model_repo: Llama-3.2-Vision-11B
url: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision
precision: BF16
training_modes: [finetune_fw]
- model: Llama 3.2 Vision 90B
mad_tag: pyt_train_llama-3.2-vision-90b
model_repo: Llama-3.2-Vision-90B
url: https://huggingface.co/meta-llama/Llama-3.2-90B-Vision
precision: BF16
training_modes: [finetune_fw]
- model: Llama 3.1 8B
mad_tag: pyt_train_llama-3.1-8b
model_repo: Llama-3.1-8B
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: BF16
training_modes: [pretrain, finetune_fw, finetune_lora, HF_pretrain]
- model: Llama 3.1 70B
mad_tag: pyt_train_llama-3.1-70b
model_repo: Llama-3.1-70B
url: https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct
precision: BF16
training_modes: [pretrain, finetune_fw, finetune_lora]
- model: Llama 3.1 405B
mad_tag: pyt_train_llama-3.1-405b
model_repo: Llama-3.1-405B
url: https://huggingface.co/meta-llama/Llama-3.1-405B
precision: BF16
training_modes: [finetune_qlora]
- model: Llama 3 8B
mad_tag: pyt_train_llama-3-8b
model_repo: Llama-3-8B
url: https://huggingface.co/meta-llama/Meta-Llama-3-8B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 3 70B
mad_tag: pyt_train_llama-3-70b
model_repo: Llama-3-70B
url: https://huggingface.co/meta-llama/Meta-Llama-3-70B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 2 7B
mad_tag: pyt_train_llama-2-7b
model_repo: Llama-2-7B
url: https://github.com/meta-llama/llama-models/tree/main/models/llama2
precision: BF16
training_modes: [finetune_fw, finetune_lora, finetune_qlora]
- model: Llama 2 13B
mad_tag: pyt_train_llama-2-13b
model_repo: Llama-2-13B
url: https://github.com/meta-llama/llama-models/tree/main/models/llama2
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Llama 2 70B
mad_tag: pyt_train_llama-2-70b
model_repo: Llama-2-70B
url: https://github.com/meta-llama/llama-models/tree/main/models/llama2
precision: BF16
training_modes: [finetune_lora, finetune_qlora]
- group: OpenAI
tag: openai
models:
- model: GPT OSS 20B
mad_tag: pyt_train_gpt_oss_20b
model_repo: GPT-OSS-20B
url: https://huggingface.co/openai/gpt-oss-20b
precision: BF16
training_modes: [HF_finetune_lora]
- model: GPT OSS 120B
mad_tag: pyt_train_gpt_oss_120b
model_repo: GPT-OSS-120B
url: https://huggingface.co/openai/gpt-oss-120b
precision: BF16
training_modes: [HF_finetune_lora]
- group: Qwen
tag: qwen
models:
- model: Qwen 3 8B
mad_tag: pyt_train_qwen3-8b
model_repo: Qwen3-8B
url: https://huggingface.co/Qwen/Qwen3-8B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Qwen 3 32B
mad_tag: pyt_train_qwen3-32b
model_repo: Qwen3-32
url: https://huggingface.co/Qwen/Qwen3-32B
precision: BF16
training_modes: [finetune_lora]
- model: Qwen 2.5 32B
mad_tag: pyt_train_qwen2.5-32b
model_repo: Qwen2.5-32B
url: https://huggingface.co/Qwen/Qwen2.5-32B
precision: BF16
training_modes: [finetune_lora]
- model: Qwen 2.5 72B
mad_tag: pyt_train_qwen2.5-72b
model_repo: Qwen2.5-72B
url: https://huggingface.co/Qwen/Qwen2.5-72B
precision: BF16
training_modes: [finetune_lora]
- model: Qwen 2 1.5B
mad_tag: pyt_train_qwen2-1.5b
model_repo: Qwen2-1.5B
url: https://huggingface.co/Qwen/Qwen2-1.5B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- model: Qwen 2 7B
mad_tag: pyt_train_qwen2-7b
model_repo: Qwen2-7B
url: https://huggingface.co/Qwen/Qwen2-7B
precision: BF16
training_modes: [finetune_fw, finetune_lora]
- group: Stable Diffusion
tag: sd
models:
- model: Stable Diffusion XL
mad_tag: pyt_huggingface_stable_diffusion_xl_2k_lora_finetuning
model_repo: SDXL
url: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
precision: BF16
training_modes: [posttrain-p]
- group: Flux
tag: flux
models:
- model: FLUX.1-dev
mad_tag: pyt_train_flux
model_repo: Flux
url: https://huggingface.co/black-forest-labs/FLUX.1-dev
precision: BF16
training_modes: [posttrain-p]
- group: NCF
tag: ncf
models:
- model: NCF
mad_tag: pyt_ncf_training
model_repo:
url: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Recommendation/NCF
precision: FP32

View File

@@ -1,15 +1,15 @@
-dockers:
-- pull_tag: rocm/megatron-lm:v25.8_py310
-docker_hub_url: https://hub.docker.com/layers/rocm/megatron-lm/v25.8_py310/images/sha256-50fc824361054e445e86d5d88d5f58817f61f8ec83ad4a7e43ea38bbc4a142c0
+docker:
+pull_tag: rocm/primus:v25.11
+docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.10/images/sha256-140c37cd2eeeb183759b9622543fc03cc210dc97cbfa18eeefdcbda84420c197
components:
-ROCm: 6.4.3
-Primus: 927a717
-PyTorch: 2.8.0a0+gitd06a406
-Python: "3.10"
-Transformer Engine: 2.2.0.dev0+54dd2bdc
-hipBLASLt: d1b517fc7a
-Triton: 3.3.0
-RCCL: 2.22.3
+ROCm: 7.1.0
+PyTorch: 2.10.0.dev20251112+rocm7.1
+Python: "3.10"
+Transformer Engine: 2.4.0.dev0+32e2d1d4
+Flash Attention: 2.8.3
+hipBLASLt: 1.2.0-09ab7153e2
+Triton: 3.4.0
+RCCL: 2.27.7
model_groups:
- group: Meta Llama
tag: llama

View File

@@ -1,24 +1,32 @@
-dockers:
-- pull_tag: rocm/pytorch-training:v25.8
-docker_hub_url: https://hub.docker.com/layers/rocm/pytorch-training/v25.8/images/sha256-5082ae01d73fec6972b0d84e5dad78c0926820dcf3c19f301d6c8eb892e573c5
+docker:
+pull_tag: rocm/primus:v25.11
+docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.10/images/sha256-140c37cd2eeeb183759b9622543fc03cc210dc97cbfa18eeefdcbda84420c197
components:
-ROCm: 6.4.3
-PyTorch: 2.8.0a0+gitd06a406
-Python: 3.10.18
-Transformer Engine: 2.2.0.dev0+a1e66aae
-Flash Attention: 3.0.0.post1
-hipBLASLt: 1.1.0-d1b517fc7a
+ROCm: 7.1.0
+PyTorch: 2.10.0.dev20251112+rocm7.1
+Python: "3.10"
+Transformer Engine: 2.4.0.dev0+32e2d1d4
+Flash Attention: 2.8.3
+hipBLASLt: 1.2.0-09ab7153e2
model_groups:
- group: Meta Llama
tag: llama
models:
- model: Llama 3.1 8B
mad_tag: primus_pyt_train_llama-3.1-8b
model_repo: Llama-3.1-8B
url: https://huggingface.co/meta-llama/Llama-3.1-8B
precision: BF16
- model: Llama 3.1 70B
mad_tag: primus_pyt_train_llama-3.1-70b
model_repo: Llama-3.1-70B
url: https://huggingface.co/meta-llama/Llama-3.1-70B
precision: BF16
+- group: DeepSeek
+tag: deepseek
+models:
+- model: DeepSeek V3 16B
+mad_tag: primus_pyt_train_deepseek-v3-16b
+model_repo: DeepSeek-V3
+url: https://huggingface.co/deepseek-ai/DeepSeek-V3
+precision: BF16

View File

@@ -1,13 +1,15 @@
-dockers:
-- pull_tag: rocm/pytorch-training:v25.8
-docker_hub_url: https://hub.docker.com/layers/rocm/pytorch-training/v25.8/images/sha256-5082ae01d73fec6972b0d84e5dad78c0926820dcf3c19f301d6c8eb892e573c5
+docker:
+pull_tag: rocm/primus:v25.10
+docker_hub_url: https://hub.docker.com/layers/rocm/primus/v25.10/images/sha256-140c37cd2eeeb183759b9622543fc03cc210dc97cbfa18eeefdcbda84420c197
components:
-ROCm: 6.4.3
-PyTorch: 2.8.0a0+gitd06a406
-Python: 3.10.18
-Transformer Engine: 2.2.0.dev0+a1e66aae
-Flash Attention: 3.0.0.post1
-hipBLASLt: 1.1.0-d1b517fc7a
+ROCm: 7.1.0
+Primus: 0.3.0
+Primus Turbo: 0.1.1
+PyTorch: 2.10.0.dev20251112+rocm7.1
+Python: "3.10"
+Transformer Engine: 2.4.0.dev0+32e2d1d4
+Flash Attention: 2.8.3
+hipBLASLt: 1.2.0-09ab7153e2
model_groups:
- group: Meta Llama
tag: llama
@@ -111,6 +113,15 @@ model_groups:
url: https://huggingface.co/openai/gpt-oss-120b
precision: BF16
training_modes: [HF_finetune_lora]
+- group: DeepSeek
+tag: deepseek
+models:
+- model: DeepSeek V2 16B
+mad_tag: primus_pyt_train_deepseek-v2
+model_repo: DeepSeek-V2
+url: https://huggingface.co/deepseek-ai/DeepSeek-V2
+precision: BF16
+training_modes: [pretrain]
- group: Qwen
tag: qwen
models:
@@ -158,7 +169,7 @@ model_groups:
model_repo: SDXL
url: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
precision: BF16
-training_modes: [finetune_lora]
+training_modes: [posttrain]
- group: Flux
tag: flux
models:
@@ -167,12 +178,20 @@ model_groups:
model_repo: Flux
url: https://huggingface.co/black-forest-labs/FLUX.1-dev
precision: BF16
-training_modes: [pretrain]
+training_modes: [posttrain]
- group: NCF
tag: ncf
models:
- model: NCF
mad_tag: pyt_ncf_training
model_repo:
-url: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Recommendation/NCF
+url: https://github.com/ROCm/FluxBenchmark
precision: FP32
+- group: DLRM
+tag: dlrm
+models:
+- model: DLRM v2
+mad_tag: pyt_train_dlrm
+model_repo: DLRM
+url: https://github.com/AMD-AGI/DLRMBenchmark
+training_modes: [pretrain]

View File

@@ -1,4 +1,4 @@
-Atomic,MI100,MI200 PCIe,MI200 A+A,MI300X series,MI300A,MI350X series
+Atomic,MI100,MI200 PCIe,MI200 A+A,MI300X Series,MI300A,MI350X Series
32 bit atomicAdd,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS
32 bit atomicSub,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS
32 bit atomicMin,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS

View File

@@ -1,4 +1,4 @@
-Atomic,MI100,MI200 PCIe,MI200 A+A,MI300X series,MI300A,MI350X series
+Atomic,MI100,MI200 PCIe,MI200 A+A,MI300X Series,MI300A,MI350X Series
32 bit atomicAdd,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS
32 bit atomicSub,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS
32 bit atomicMin,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS,✅ CAS

View File

@@ -1,4 +1,4 @@
-Atomic,MI100,MI200 PCIe,MI200 A+A,MI300X series,MI300A,MI350X series
+Atomic,MI100,MI200 PCIe,MI200 A+A,MI300X Series,MI300A,MI350X Series
32 bit atomicAdd,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native
32 bit atomicSub,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native
32 bit atomicMin,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native

View File

@@ -1,4 +1,4 @@
-Atomic,MI100,MI200 PCIe,MI200 A+A,MI300X series,MI300A,MI350X series
+Atomic,MI100,MI200 PCIe,MI200 A+A,MI300X Series,MI300A,MI350X Series
32 bit atomicAdd,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native
32 bit atomicSub,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native
32 bit atomicMin,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native,✅ Native

View File

@@ -32,7 +32,7 @@ library_groups:
- name: "MIGraphX" - name: "MIGraphX"
tag: "migraphx" tag: "migraphx"
doc_link: "amdmigraphx:reference/cpp" doc_link: "amdmigraphx:reference/MIGraphX-cpp"
data_types: data_types:
- type: "int8" - type: "int8"
support: "⚠️" support: "⚠️"
@@ -290,7 +290,7 @@ library_groups:
- name: "Tensile" - name: "Tensile"
tag: "tensile" tag: "tensile"
doc_link: "tensile:reference/precision-support" doc_link: "tensile:src/reference/precision-support"
data_types: data_types:
- type: "int8" - type: "int8"
support: "✅" support: "✅"

View File

@@ -0,0 +1,141 @@
from docutils import nodes
from docutils.parsers.rst import Directive
from docutils.statemachine import ViewList
from sphinx.util import logging
from sphinx.util.nodes import nested_parse_with_titles
import requests
import re

logger = logging.getLogger(__name__)


class BranchAwareRemoteContent(Directive):
    """
    Directive that downloads and includes content from other repositories,
    matching the branch/tag of the current documentation build.

    Usage:
    .. remote-content::
       :repo: owner/repository
       :path: path/to/file.rst
       :default_branch: docs/develop  # Branch to use when not on a release
       :tag_prefix: Docs/  # Optional
    """
    required_arguments = 0
    optional_arguments = 0
    final_argument_whitespace = True
    has_content = False
    option_spec = {
        'repo': str,
        'path': str,
        'default_branch': str,  # Branch to use when not on a release tag
        'start_line': int,      # Include the file from a specific line
        'tag_prefix': str,      # Prefix for release tags (e.g., 'Docs/')
    }

    def get_current_version(self):
        """Get current version/branch being built"""
        env = self.state.document.settings.env
        html_context = env.config.html_context
        # Check if building from a tag
        if "official_branch" in html_context:
            if html_context["official_branch"] == 0:
                if "version" in html_context:
                    # Remove any 'v' prefix
                    version = html_context["version"]
                    if re.match(r'^\d+\.\d+\.\d+$', version):
                        return version
        # Not a version tag, so we'll use the default branch
        return None

    def get_target_ref(self):
        """Get target reference for the remote repository"""
        current_version = self.get_current_version()
        # If it's a version number, use tag prefix and version
        if current_version:
            tag_prefix = self.options.get('tag_prefix', '')
            return f'{tag_prefix}{current_version}'
        # For any other case, use the specified default branch
        if 'default_branch' not in self.options:
            logger.warning('No default_branch specified and not building from a version tag')
            return None
        return self.options['default_branch']

    def construct_raw_url(self, repo, path, ref):
        """Construct the raw.githubusercontent.com URL"""
        return f'https://raw.githubusercontent.com/{repo}/{ref}/{path}'

    def fetch_and_parse_content(self, url, source_path):
        """Fetch content and parse it as RST"""
        response = requests.get(url)
        response.raise_for_status()
        content = response.text
        start_line = self.options.get('start_line', 0)
        # Create ViewList for parsing
        line_count = 0
        content_list = ViewList()
        for line_no, line in enumerate(content.splitlines()):
            if line_count >= start_line:
                content_list.append(line, source_path, line_no)
            line_count += 1
        # Create a section node and parse content
        node = nodes.section()
        nested_parse_with_titles(self.state, content_list, node)
        return node.children

    def run(self):
        if 'repo' not in self.options or 'path' not in self.options:
            logger.warning('Both repo and path options are required')
            return []
        target_ref = self.get_target_ref()
        if not target_ref:
            return []
        raw_url = self.construct_raw_url(
            self.options['repo'],
            self.options['path'],
            target_ref
        )
        try:
            logger.info(f'Attempting to fetch content from {raw_url}')
            return self.fetch_and_parse_content(raw_url, self.options['path'])
        except requests.exceptions.RequestException as e:
            logger.warning(f'Failed to fetch content from {raw_url}: {str(e)}')
            # If we failed on a tag, try falling back to default_branch
            if re.match(r'^\d+\.\d+\.\d+$', target_ref) or target_ref.startswith('Docs/'):
                if 'default_branch' in self.options:
                    try:
                        fallback_ref = self.options['default_branch']
                        logger.info(f'Attempting fallback to {fallback_ref}...')
                        fallback_url = self.construct_raw_url(
                            self.options['repo'],
                            self.options['path'],
                            fallback_ref
                        )
                        return self.fetch_and_parse_content(fallback_url, self.options['path'])
                    except requests.exceptions.RequestException as e2:
                        logger.warning(f'Fallback also failed: {str(e2)}')
            return []


def setup(app):
    app.add_directive('remote-content', BranchAwareRemoteContent)
    return {
        'parallel_read_safe': True,
        'parallel_write_safe': True,
    }
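For orientation, the extension above is self-contained: ``setup()`` registers the ``remote-content`` directive, and the options it accepts are the ones listed in ``option_spec``. A minimal sketch of how it could be enabled in a Sphinx ``conf.py`` follows; the ``_extensions/remote_content.py`` path is an illustrative assumption, not part of this change set.

.. code-block:: python

   # conf.py -- hypothetical wiring for the directive defined above
   import os
   import sys

   # Assumption: the file above is saved as _extensions/remote_content.py
   sys.path.insert(0, os.path.abspath("_extensions"))

   extensions = [
       "remote_content",  # its setup() registers the 'remote-content' directive
   ]

A page can then pull in remote RST with ``.. remote-content::`` plus the ``:repo:``, ``:path:``, ``:default_branch:``, ``:start_line:``, and ``:tag_prefix:`` options documented in the class docstring.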

View File

@@ -10,7 +10,7 @@ Deep learning frameworks provide environments for machine learning, training, fi
ROCm offers a complete ecosystem for developing and running deep learning applications efficiently. It also provides ROCm-compatible versions of popular frameworks and libraries, such as PyTorch, TensorFlow, JAX, and others.
-The AMD ROCm organization actively contributes to open-source development and collaborates closely with framework organizations. This collaboration ensures that framework-specific optimizations effectively leverage AMD GPUs and accelerators.
+The AMD ROCm organization actively contributes to open-source development and collaborates closely with framework organizations. This collaboration ensures that framework-specific optimizations effectively leverage AMD GPUs.
The table below summarizes information about ROCm-enabled deep learning frameworks. It includes details on ROCm compatibility and third-party tool support, installation steps and options, and links to GitHub resources. For a complete list of supported framework versions on ROCm, see the :doc:`Compatibility matrix <../compatibility/compatibility-matrix>` topic.
@@ -84,6 +84,8 @@ The table below summarizes information about ROCm-enabled deep learning framewor
<a href="https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/dgl-install.html"><i class="fas fa-link fa-lg"></i></a>
-
- `Docker image <https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/dgl-install.html#use-a-prebuilt-docker-image-with-dgl-pre-installed>`__
+- `Wheels package <https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/dgl-install.html#use-a-wheels-package>`__
- .. raw:: html
<a href="https://github.com/ROCm/dgl"><i class="fab fa-github fa-lg"></i></a>
@@ -98,18 +100,6 @@ The table below summarizes information about ROCm-enabled deep learning framewor
<a href="https://github.com/ROCm/megablocks"><i class="fab fa-github fa-lg"></i></a>
-* - `Taichi <https://rocm.docs.amd.com/en/latest/compatibility/ml-compatibility/taichi-compatibility.html>`__
-- .. raw:: html
-<a href="https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/taichi-install.html"><i class="fas fa-link fa-lg"></i></a>
--
-- `Docker image <https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/taichi-install.html#use-a-prebuilt-docker-image-with-taichi-pre-installed>`__
-- `Wheels package <https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/taichi-install.html#use-a-wheels-package>`__
-- .. raw:: html
-<a href="https://github.com/ROCm/taichi"><i class="fab fa-github fa-lg"></i></a>
* - `Ray <https://rocm.docs.amd.com/en/latest/compatibility/ml-compatibility/ray-compatibility.html>`__
- .. raw:: html

View File

@@ -1,5 +1,5 @@
.. meta::
-:description: How to configure MI300X accelerators to fully leverage their capabilities and achieve optimal performance.
+:description: How to configure MI300X GPUs to fully leverage their capabilities and achieve optimal performance.
:keywords: ROCm, AI, machine learning, MI300X, LLM, usage, tutorial, optimization, tuning
**************************************
@@ -7,11 +7,11 @@ AMD Instinct MI300X performance guides
**************************************
The following performance guides provide essential guidance on the necessary
-steps to properly `configure your system for AMD Instinct™ MI300X accelerators
+steps to properly `configure your system for AMD Instinct™ MI300X GPUs
<https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_.
They include detailed instructions on system settings and application
:doc:`workload tuning </how-to/rocm-for-ai/inference-optimization/workload>` to
-help you leverage the maximum capabilities of these accelerators and achieve
+help you leverage the maximum capabilities of these GPUs and achieve
superior performance.
* `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`__
@@ -19,9 +19,9 @@ superior performance.
your AMD Instinct MI300X system for performance.
* :doc:`/how-to/rocm-for-ai/inference-optimization/workload` covers steps to
-optimize the performance of AMD Instinct MI300X series accelerators for HPC
+optimize the performance of AMD Instinct MI300X Series GPUs for HPC
and deep learning operations.
* :doc:`/how-to/rocm-for-ai/inference/benchmark-docker/vllm` introduces a preconfigured
environment for LLM inference, designed to help you test performance with
-popular models on AMD Instinct MI300X series accelerators.
+popular models on AMD Instinct MI300X Series GPUs.

View File

@@ -25,7 +25,7 @@ execute on AMD GPUs while maintaining compatibility with CUDA-based systems.
OpenCL (Open Computing Language) is an open standard for cross-platform,
parallel programming of diverse processors. ROCm supports OpenCL for developers
who want to use standard frameworks across different hardware platforms,
-including CPUs, GPUs, and other accelerators. For more information, see
+including CPUs, GPUs, and APUs. For more information, see
`OpenCL <https://www.khronos.org/opencl/>`_.
Python bindings can be found at https://github.com/ROCm/hip-python.

View File

@@ -11,10 +11,10 @@ Fine-tuning using ROCm involves leveraging AMD's GPU-accelerated :doc:`libraries
ecosystem for deep learning development, including open-source libraries for optimized deep learning operations and
ROCm-aware versions of :doc:`deep learning frameworks <../../deep-learning-rocm>` such as PyTorch, TensorFlow, and JAX.
-Single-accelerator systems, such as a machine equipped with a single accelerator or GPU, are commonly used for
+Single-accelerator systems, such as a machine equipped with a single GPU, are commonly used for
smaller-scale deep learning tasks, including fine-tuning pre-trained models and running inference on moderately
sized datasets. See :doc:`single-gpu-fine-tuning-and-inference`.
-Multi-accelerator systems, on the other hand, consist of multiple accelerators working in parallel. These systems are
+Multi-accelerator systems, on the other hand, consist of multiple GPUs working in parallel. These systems are
typically used in LLMs and other large-scale deep learning tasks where performance, scalability, and the handling of
massive datasets are crucial. See :doc:`multi-gpu-fine-tuning-and-inference`.

View File

@@ -3,11 +3,11 @@
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, multi-GPU, distributed, inference, accelerators, PyTorch, HuggingFace, torchtune
*****************************************************
-Fine-tuning and inference using multiple accelerators
+Fine-tuning and inference using multiple GPUs
*****************************************************
This section explains how to fine-tune a model on a multi-accelerator system. See
-:doc:`Single-accelerator fine-tuning <single-gpu-fine-tuning-and-inference>` for a single accelerator or GPU setup.
+:doc:`Single-accelerator fine-tuning <single-gpu-fine-tuning-and-inference>` for a single GPU setup.
.. _fine-tuning-llms-multi-gpu-env:
@@ -20,7 +20,7 @@ This section was tested using the following hardware and software environment.
:stub-columns: 1
* - Hardware
-- 4 AMD Instinct MI300X accelerators
+- 4 AMD Instinct MI300X GPUs
* - Software
- ROCm 6.1, Ubuntu 22.04, PyTorch 2.1.2, Python 3.10
@@ -40,13 +40,13 @@ Setting up the base implementation environment
:doc:`PyTorch installation guide <rocm-install-on-linux:install/3rd-party/pytorch-install>`. For consistent
installation, its recommended to use official ROCm prebuilt Docker images with the framework pre-installed.
-#. In the Docker container, check the availability of ROCM-capable accelerators using the following command.
+#. In the Docker container, check the availability of ROCm-capable GPUs using the following command.
.. code-block:: shell
rocm-smi --showproductname
-#. Check that your accelerators are available to PyTorch.
+#. Check that your GPUs are available to PyTorch.
.. code-block:: python
@@ -66,7 +66,7 @@ Setting up the base implementation environment
.. tip::
During training and inference, you can check the memory usage by running the ``rocm-smi`` command in your terminal.
-This tool helps you see shows which accelerators or GPUs are involved.
+This tool helps you see shows which GPUs are involved.
.. _fine-tuning-llms-multi-gpu-hugging-face-accelerate:
@@ -74,9 +74,9 @@ Setting up the base implementation environment
Hugging Face Accelerate for fine-tuning and inference
===========================================================
-`Hugging Face Accelerate <https://huggingface.co/docs/accelerate/en/index>`_ is a library that simplifies turning raw
-PyTorch code for a single accelerator into code for multiple accelerators for LLM fine-tuning and inference. It is
-integrated with `Transformers <https://huggingface.co/docs/transformers/en/index>`_ allowing you to scale your PyTorch
+`Hugging Face Accelerate <https://huggingface.co/docs/accelerate/en/index>`__ is a library that simplifies turning raw
+PyTorch code for a single GPU into code for multiple GPUs for LLM fine-tuning and inference. It is
+integrated with `Transformers <https://huggingface.co/docs/transformers/en/index>`__, so you can scale your PyTorch
code while maintaining performance and flexibility.
As a brief example of model fine-tuning and inference using multiple GPUs, let's use Transformers and load in the Llama
@@ -107,7 +107,7 @@ Now, it's important to adjust how you load the model. Add the ``device_map`` par
(``"auto"``, ``"balanced"``, ``"balanced_low_0"``, ``"sequential"``).
It's recommended to set the ``device_map`` parameter to ``“auto”`` to allow Accelerate to automatically and
-efficiently allocate the model given the available resources (4 accelerators in this case).
+efficiently allocate the model given the available resources (four GPUs in this case).
When you have more GPU memory available than the model size, here is the difference between each ``device_map``
option:
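Editorial aside, not part of this diff: the ``device_map="auto"`` behavior described above is the stock Transformers/Accelerate loading pattern. A minimal sketch, assuming the ``transformers`` and ``accelerate`` packages are installed and using a placeholder model name:

.. code-block:: python

   import torch
   from transformers import AutoModelForCausalLM, AutoTokenizer

   model_name = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; use the model from the walkthrough

   tokenizer = AutoTokenizer.from_pretrained(model_name)
   # device_map="auto" lets Accelerate shard the checkpoint across all visible GPUs
   model = AutoModelForCausalLM.from_pretrained(
       model_name,
       torch_dtype=torch.float16,
       device_map="auto",
   )
   print(model.hf_device_map)  # shows which layers landed on which device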
@@ -130,8 +130,8 @@ After loading the model in this way, the model is fully ready to use the resourc
torchtune for fine-tuning and inference
=============================================
-`torchtune <https://pytorch.org/torchtune/main/>`_ is a PyTorch-native library for easy single and multi-accelerator or
-GPU model fine-tuning and inference with LLMs.
+`torchtune <https://meta-pytorch.org/torchtune/main/>`_ is a PyTorch-native library for easy single and multi-GPU
+model fine-tuning and inference with LLMs.
#. Install torchtune using pip.

View File

@@ -30,7 +30,7 @@ The challenge of fine-tuning models
However, the computational cost of fine-tuning is still high, especially for complex models and large datasets, which
poses distinct challenges related to substantial computational and memory requirements. This might be a barrier for
-accelerators or GPUs with low computing power or limited device memory resources.
+GPUs with low computing power or limited device memory resources.
For example, suppose we have a language model with 7 billion (7B) parameters, represented by a weight matrix :math:`W`.
During backpropagation, the model needs to learn a :math:`ΔW` matrix, which updates the original weights to minimize the
@@ -84,8 +84,8 @@ Walkthrough
===========
To demonstrate the benefits of LoRA and the ideal compute compatibility of using PEFT and TRL libraries on AMD
-ROCm-compatible accelerators and GPUs, let's step through a comprehensive implementation of the fine-tuning process
-using the Llama 2 7B model with LoRA tailored specifically for question-and-answer tasks on AMD MI300X accelerators.
+ROCm-compatible GPUs, let's step through a comprehensive implementation of the fine-tuning process
+using the Llama 2 7B model with LoRA tailored specifically for question-and-answer tasks on AMD MI300X GPUs.
Before starting, review and understand the key components of this walkthrough:

View File

@@ -3,12 +3,11 @@
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, single-GPU, LoRA, PEFT, inference, SFTTrainer
****************************************************
-Fine-tuning and inference using a single accelerator
+Fine-tuning and inference using a single GPU
****************************************************
This section explains model fine-tuning and inference techniques on a single-accelerator system. See
-:doc:`Multi-accelerator fine-tuning <multi-gpu-fine-tuning-and-inference>` for a setup with multiple accelerators or
-GPUs.
+:doc:`Multi-accelerator fine-tuning <multi-gpu-fine-tuning-and-inference>` for a setup with multiple GPUs.
.. _fine-tuning-llms-single-gpu-env:
@@ -21,7 +20,7 @@ This section was tested using the following hardware and software environment.
:stub-columns: 1
* - Hardware
-- AMD Instinct MI300X accelerator
+- AMD Instinct MI300X GPU
* - Software
- ROCm 6.1, Ubuntu 22.04, PyTorch 2.1.2, Python 3.10
@@ -41,7 +40,7 @@ Setting up the base implementation environment
:doc:`PyTorch installation guide <rocm-install-on-linux:install/3rd-party/pytorch-install>`. For a consistent
installation, its recommended to use official ROCm prebuilt Docker images with the framework pre-installed.
-#. In the Docker container, check the availability of ROCm-capable accelerators using the following command.
+#. In the Docker container, check the availability of ROCm-capable GPUs using the following command.
.. code-block:: shell
@@ -53,14 +52,14 @@ Setting up the base implementation environment
============================ ROCm System Management Interface ============================
====================================== Product Info ======================================
-GPU[0] : Card series: AMD Instinct MI300X OAM
+GPU[0] : Card Series: AMD Instinct MI300X OAM
GPU[0] : Card model: 0x74a1
GPU[0] : Card vendor: Advanced Micro Devices, Inc. [AMD/ATI]
GPU[0] : Card SKU: MI3SRIOV
==========================================================================================
================================== End of ROCm SMI Log ===================================
-#. Check that your accelerators are available to PyTorch.
+#. Check that your GPUs are available to PyTorch.
.. code-block:: python
@@ -502,9 +501,9 @@ Let's look at achieving model inference using these types of models.
# Token generation
print(pipe("What is a large language model?")[0]["generated_text"])
-If using multiple accelerators, see
+If using multiple GPUs, see
:ref:`Multi-accelerator fine-tuning and inference <fine-tuning-llms-multi-gpu-hugging-face-accelerate>` to explore
-popular libraries that simplify fine-tuning and inference in a multi-accelerator system.
+popular libraries that simplify fine-tuning and inference in a multiple-GPU system.
Read more about inference frameworks like vLLM and Hugging Face TGI in
:doc:`LLM inference frameworks <../inference/llm-inference-frameworks>`.

View File

@@ -24,94 +24,102 @@ performance.
:alt: Attention module of a large language model utilizing tiling
:align: center

Installation prerequisites
----------------------------

Before installing Flash Attention 2, ensure the following are available:

* ROCm-enabled PyTorch
* Triton

These can be installed by following the official
`PyTorch installation guide <https://pytorch.org/get-started/locally/>`_. Alternatively, for a simpler setup, you can use a preconfigured
:ref:`ROCm PyTorch Docker image <using-docker-with-pytorch-pre-installed>`, which already includes the required libraries.

Installing Flash Attention 2
----------------------------

`Flash Attention <https://github.com/Dao-AILab/flash-attention>`_ supports two backend implementations on AMD GPUs.

* `Composable Kernel (CK) <https://github.com/ROCm/composable_kernel>`__ - the default backend
* `OpenAI Triton <https://github.com/triton-lang/triton>`__ - an alternative backend

You can switch between these backends using the environment variable ``FLASH_ATTENTION_TRITON_AMD_ENABLE``:

``FLASH_ATTENTION_TRITON_AMD_ENABLE="FALSE"``
   → Use Composable Kernel (CK) backend (Flash Attention 2)

``FLASH_ATTENTION_TRITON_AMD_ENABLE="TRUE"``
   → Use OpenAI Triton backend (Flash Attention 2)

To install Flash Attention 2, use the following commands:

.. code-block:: shell

   git clone https://github.com/Dao-AILab/flash-attention.git
   cd flash-attention/
   pip install ninja

   # To install the CK backend flash attention
   python setup.py install

   # To install the Triton backend flash attention
   FLASH_ATTENTION_TRITON_AMD_ENABLE="TRUE" python setup.py install

   # To install both CK and Triton backend flash attention
   FLASH_ATTENTION_TRITON_AMD_ENABLE=TRUE && FLASH_ATTENTION_SKIP_CK_BUILD=FALSE python setup.py install

For detailed installation instructions, see `Flash Attention <https://github.com/Dao-AILab/flash-attention>`_.

Benchmarking Flash Attention 2
------------------------------

Benchmark scripts to evaluate the performance of Flash Attention 2 are stored in the ``flash-attention/benchmarks/`` directory.

To benchmark the CK backend:

.. code-block:: shell

   cd flash-attention/benchmarks
   pip install transformers einops ninja
   python3 benchmark_flash_attention.py

To benchmark the Triton backend:

.. code-block:: shell

   FLASH_ATTENTION_TRITON_AMD_ENABLE="TRUE" python3 benchmark_flash_attention.py

Using Flash Attention 2
-----------------------

.. code-block:: python

   import torch
   from transformers import AutoModelForCausalLM, AutoTokenizer

   device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
   model_name = "NousResearch/Llama-3.2-1B"

   tokenizer = AutoTokenizer.from_pretrained(model_name, dtype=torch.bfloat16, use_fast=False)
   inputs = tokenizer('Today is', return_tensors='pt').to(device)

   model_eager = AutoModelForCausalLM.from_pretrained(model_name, dtype=torch.bfloat16, attn_implementation="eager").cuda(device)
   model_ckFAv2 = AutoModelForCausalLM.from_pretrained(model_name, dtype=torch.bfloat16, attn_implementation="flash_attention_2").cuda(device)
   model_eager.generation_config.pad_token_id = model_eager.generation_config.eos_token_id
   model_ckFAv2.generation_config.pad_token_id = model_ckFAv2.generation_config.eos_token_id

   print("eager\n GQA: ", tokenizer.decode(model_eager.generate(**inputs, max_new_tokens=22)[0], skip_special_tokens=True, do_sample=False, num_beams=1))
   print("ckFAv2\n GQA: ", tokenizer.decode(model_ckFAv2.generate(**inputs, max_new_tokens=22)[0], skip_special_tokens=True, do_sample=False, num_beams=1))

The outputs from eager mode and FlashAttention-2 are identical, although their performance behavior differs.

.. code-block:: shell

   eager
    GQA:  Today is the 10th anniversary of the 9/11 attacks. I remember that day like it was yesterday.
   ckFAv2
    GQA:  Today is the 10th anniversary of the 9/11 attacks. I remember that day like it was yesterday.
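For code that calls the kernels directly rather than through Transformers, the package also exposes ``flash_attn_func``. A minimal sketch, assuming the build above and half-precision tensors in the ``(batch, seqlen, nheads, headdim)`` layout the function expects (shapes are placeholders):

.. code-block:: python

   import torch
   from flash_attn import flash_attn_func

   # fp16/bf16 tensors laid out as (batch, seqlen, nheads, headdim).
   q = torch.randn(2, 1024, 16, 128, device="cuda", dtype=torch.bfloat16)
   k = torch.randn(2, 1024, 16, 128, device="cuda", dtype=torch.bfloat16)
   v = torch.randn(2, 1024, 16, 128, device="cuda", dtype=torch.bfloat16)

   # causal=True applies the decoder-style mask used during token generation.
   out = flash_attn_func(q, k, v, causal=True)
   print(out.shape)  # torch.Size([2, 1024, 16, 128])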
xFormers
========

@@ -526,7 +534,7 @@ follow these instructions:

python -m pytest -v -rsx -s -W ignore::pytest.PytestCollectionWarning split_table_batched_embeddings_test.py

To run the FBGEMM_GPU ``uvm`` test, use these commands. These tests only support the AMD MI210 and
more recent GPUs.

.. code-block:: shell


@@ -7,7 +7,7 @@ Model quantization techniques
*****************************

Quantization reduces the model size compared to its native full-precision version, making it easier to fit large models
onto GPUs with limited memory usage. This section explains how to perform LLM quantization using AMD Quark, GPTQ
and bitsandbytes on AMD Instinct hardware.

.. _quantize-llms-quark:

@@ -311,7 +311,7 @@ ExLlama-v2 support

ExLlama is a Python/C++/CUDA implementation of the Llama model that is
designed for faster inference with 4-bit GPTQ weights. The ExLlama
kernel is activated by default when users create a ``GPTQConfig`` object. To
boost inference speed even further on Instinct GPUs, use the ExLlama-v2
kernels by configuring the ``exllama_config`` parameter as follows.

.. code-block:: python
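   # Illustrative sketch only -- the original snippet lies outside this hunk.
   # GPTQConfig(..., exllama_config={"version": 2}) selects the ExLlama-v2 kernels;
   # the checkpoint name below is an example, not taken from this page.
   import torch
   from transformers import AutoModelForCausalLM, GPTQConfig

   gptq_config = GPTQConfig(bits=4, exllama_config={"version": 2})

   model = AutoModelForCausalLM.from_pretrained(
       "TheBloke/Llama-2-7B-GPTQ",
       device_map="auto",
       quantization_config=gptq_config,
   )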
@@ -332,7 +332,7 @@ The `ROCm-aware bitsandbytes <https://github.com/ROCm/bitsandbytes>`_ library is

a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizer, matrix multiplication, and
8-bit and 4-bit quantization functions. The library includes quantization primitives for 8-bit and 4-bit operations
through ``bitsandbytes.nn.Linear8bitLt`` and ``bitsandbytes.nn.Linear4bit`` and 8-bit optimizers through the
``bitsandbytes.optim`` module. These modules are supported on AMD Instinct GPUs.
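As a quick illustration of how these primitives are typically consumed through Transformers once bitsandbytes is installed (a sketch; the checkpoint name is illustrative and not taken from this page):

.. code-block:: python

   import torch
   from transformers import AutoModelForCausalLM, BitsAndBytesConfig

   # 4-bit loading uses bitsandbytes.nn.Linear4bit under the hood.
   bnb_config = BitsAndBytesConfig(
       load_in_4bit=True,
       bnb_4bit_compute_dtype=torch.bfloat16,
   )

   model = AutoModelForCausalLM.from_pretrained(
       "NousResearch/Meta-Llama-3-8B",   # illustrative model
       device_map="auto",
       quantization_config=bnb_config,
   )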
Installing bitsandbytes
-----------------------


@@ -9,13 +9,13 @@ myst:
The AMD ROCm Composable Kernel (CK) library provides a programming model for writing performance-critical kernels for machine learning workloads. It generates a general-purpose kernel during the compilation phase through a C++ template, enabling developers to achieve operation fusions on different data precisions.

This article gives a high-level overview of the CK General Matrix Multiplication (GEMM) kernel based on the design example of `03_gemm_bias_relu`. It also outlines the steps to construct the kernel and run it. Moreover, the article provides a detailed implementation of running SmoothQuant quantized INT8 models on AMD Instinct MI300X GPUs using CK.

## High-level overview: a CK GEMM instance

GEMM is a fundamental block in linear algebra, machine learning, and deep neural networks. It is defined as the operation:
{math}`E = α \times (A \times B) + β \times (D)`, with A and B as matrix inputs, α and β as scalar inputs, and D as a pre-existing matrix.

Take the commonly used linear transformation in a fully connected layer as an example. These terms correspond to input activation (A), weight (B), bias (D), and output (E), respectively. The example employs a `DeviceGemmMultipleD_Xdl_CShuffle` struct from the CK library as the fundamental instance to explore the compute capability of AMD Instinct GPUs for the computation of GEMM. The implementation of the instance contains two phases:

- [Template parameter definition](#template-parameter-definition)
- [Instantiating and running the templated kernel](#instantiating-and-running-the-templated-kernel)
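In plain NumPy terms, the math the instance computes (with the bias add and ReLU epilogue of `03_gemm_bias_relu`) looks like the following. This is only a reference sketch of the operation, not CK code; shapes and values are placeholders.

```python
import numpy as np

# E = alpha * (A @ B) + beta * D, followed by a ReLU epilogue.
M, K, N = 256, 512, 128
A = np.random.randn(M, K).astype(np.float16)   # input activation
B = np.random.randn(K, N).astype(np.float16)   # weight
D = np.random.randn(N).astype(np.float16)      # bias, broadcast over rows
alpha, beta = 1.0, 1.0

E = np.maximum(alpha * (A.astype(np.float32) @ B.astype(np.float32)) + beta * D, 0.0)
print(E.shape)  # (256, 128)
```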
@@ -108,7 +108,7 @@ These parameters include Block Size, M/N/K Per Block, M/N per XDL, AK1, BK1, etc
- Block Size determines the number of threads in the thread block.
- M/N/K Per Block determines the size of tile that each thread block is responsible for calculating.
- M/N Per XDL refers to M/N size for Instinct GPU Matrix Fused Multiply Add (MFMA) instructions operating on a per-wavefront basis.
- A/B K1 is related to the data type. It can be any value ranging from 1 to K Per Block. To achieve the optimal load/store performance, 128-bit per load is suggested. In addition, the A/B loading parameters must be changed accordingly to match the A/B K1 value; otherwise, it will result in compilation errors.

Conditions for achieving computational load balancing on different hardware platforms can vary.

@@ -133,7 +133,7 @@ Templated kernel launching consists of kernel instantiation, making arguments by

## Developing fused INT8 kernels for SmoothQuant models

[SmoothQuant](https://github.com/mit-han-lab/smoothquant) (SQ) is a quantization algorithm that enables an INT8 quantization of both weights and activations for all the matrix multiplications in LLMs. The required GPU kernel functionalities used to accelerate the inference of SQ models on Instinct GPUs are shown in the following table.

:::{table} Functionalities used to implement SmoothQuant model inference.

@@ -164,7 +164,7 @@ The CK library contains many fundamental instances that implement different func

Second, consider whether the format of input data meets your actual calculation needs. For SQ models, the 8-bit integer data format (INT8) is applied for matrix calculations.

Third, consider the platform for implementing CK instances. The instances suffixed with `xdl` only run on AMD Instinct GPUs after being compiled and cannot run on Radeon-series GPUs. This is due to the underlying device-specific instruction sets for implementing these basic instances.

Here, we use [DeviceBatchedGemmMultiD_Xdl](https://github.com/ROCm/composable_kernel/tree/develop/example/24_batched_gemm) as the fundamental instance to implement the functionalities in the previous table.

@@ -435,7 +435,7 @@ The implementation architecture of running SmoothQuant models on MI300X GPUs is

### Figure 7
================ -->

```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-inference_flow.jpg
The implementation architecture of running SmoothQuant models on AMD MI300X GPUs.
```

For the target [SQ quantized model](https://huggingface.co/mit-han-lab/opt-13b-smoothquant), each decoder layer contains three major components: attention calculation, layer normalization, and linear transformation in fully connected layers. The corresponding implementation classes for these components are:

@@ -447,21 +447,21 @@ For the target [SQ quantized model](https://huggingface.co/mit-han-lab/opt-13b-s

These classes' underlying implementation logic will harness the functions in the previous table. Note that for the example, the `LayerNormQ` module is implemented by the torch native module.

Testing environment:

The hardware platform used for testing is equipped with 256 logical cores of AMD EPYC 9534 64-core processors, 8 AMD Instinct MI300X GPUs, and 1.5 TB of memory. The testing was done in a publicly available Docker image from Docker Hub:
[`rocm/pytorch:rocm6.1_ubuntu22.04_py3.10_pytorch_2.1.2`](https://hub.docker.com/layers/rocm/pytorch/rocm6.1_ubuntu22.04_py3.10_pytorch_2.1.2/images/sha256-f6ea7cee8aae299c7f6368187df7beed29928850c3929c81e6f24b34271d652b)

The tested models are OPT-1.3B, 2.7B, 6.7B, and 13B FP16 models; the corresponding SmoothQuant INT8 OPT models were obtained from Hugging Face.

Note that since the default values were used for the tunable parameters of the fundamental instance, the performance of the INT8 kernel is suboptimal.

Figure 8 shows the performance comparisons between the original FP16 and the SmoothQuant-quantized INT8 models on a single MI300X GPU. The GPU memory footprints of SmoothQuant-quantized models are significantly reduced. It also indicates the per-sample inference latency is significantly reduced for all SmoothQuant-quantized OPT models (illustrated in (b)). Notably, the performance of the CK instance-based INT8 kernel steadily improves with an increase in model size.

<!--
================
### Figure 8
================ -->

```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-comparisons.jpg
Performance comparisons between the original FP16 and the SmoothQuant-quantized INT8 models on a single MI300X GPU.
```

For accuracy comparisons between the original FP16 and INT8 models, the evaluation is done by using the first 1,000 samples from the LAMBADA dataset's validation set. We employ the same Last Token Prediction Accuracy method introduced in [SmoothQuant Real-INT8 Inference for PyTorch](https://github.com/mit-han-lab/smoothquant/blob/main/examples/smoothquant_opt_real_int8_demo.ipynb) as our evaluation metric. The comparison results are shown in Table 2.

@@ -482,4 +482,4 @@ CK provides a rich set of template parameters for generating flexible accelerate

CK supports multiple instruction sets of AMD Instinct GPUs, operator fusion and different data precisions. Its composability helps users quickly construct operator performance verification.

With CK, you can build more effective AI applications with higher flexibility and better performance on different AMD GPU platforms.



@@ -1,31 +1,30 @@
.. meta::
   :description: Learn about workload tuning on AMD Instinct MI300X GPUs for optimal performance.
   :keywords: AMD, Instinct, MI300X, HPC, tuning, BIOS settings, NBIO, ROCm,
              environment variable, performance, HIP, Triton, PyTorch TunableOp, vLLM, RCCL,
              MIOpen, GPU, resource utilization

*****************************************
AMD Instinct MI300X workload optimization
*****************************************

This document provides guidelines for optimizing the performance of AMD
Instinct™ MI300X GPUs, with a particular focus on GPU kernel
programming, high-performance computing (HPC), and deep learning operations
using PyTorch. It delves into specific workloads such as
:ref:`model inference <mi300x-vllm-optimization>`, offering strategies to
enhance efficiency.

The following topics highlight :ref:`auto-tunable configurations <mi300x-auto-tune>` as
well as :ref:`Triton kernel optimization <mi300x-triton-kernel-performance-optimization>`
for meticulous tuning.

Workload tuning strategy
========================

By following a structured approach, you can systematically address
performance issues and enhance the efficiency of your workloads on AMD Instinct
MI300X GPUs.

Measure the current workload
----------------------------

@@ -86,27 +85,28 @@ Optimize model inference with vLLM
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

vLLM provides tools and techniques specifically designed for efficient model
inference on AMD Instinct GPUs. See the official `vLLM installation docs
<https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html>`__ for
installation guidance. Optimizing performance with vLLM involves configuring
tensor parallelism, leveraging advanced features, and ensuring efficient
execution.

* Configuration for vLLM: Set engine arguments according to workload
  requirements.

* Benchmarking and performance metrics: Measure latency and throughput to
  evaluate performance.

.. seealso::

   See :doc:`vllm-optimization` to learn more about vLLM performance
   optimization techniques.

.. _mi300x-auto-tune:

Auto-tunable configurations
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Auto-tunable configurations can significantly streamline performance
optimization by automatically adjusting parameters based on workload
characteristics. For example:

@@ -120,8 +120,7 @@ characteristics. For example:

  your specific hardware.

* Triton: Use :ref:`Triton's auto-tuning features <mi300x-autotunable-kernel-config>`
  to explore various kernel configurations and select the best-performing ones.

Manual tuning
^^^^^^^^^^^^^

@@ -239,7 +238,7 @@ benchmarking process.

With AMD's profiling tools, developers are able to gain important insight into how efficiently their application is
using hardware resources and effectively diagnose potential bottlenecks contributing to poor performance. Developers
working with AMD Instinct GPUs have multiple tools depending on their specific profiling needs; these include:

* :ref:`ROCProfiler <mi300x-rocprof>`

@@ -257,11 +256,11 @@ metrics, commonly called *performance counters*. These counters quantify the per
showcasing which pieces of the computational pipeline and memory hierarchy are being utilized.

Your ROCm installation contains a script or executable command called ``rocprof`` which provides the ability to list all
available hardware counters for your specific GPU, and run applications while collecting counters during
their execution.

This ``rocprof`` utility also depends on the :doc:`ROCTracer and ROC-TX libraries <roctracer:index>`, giving it the
ability to collect timeline traces of the GPU software stack as well as user-annotated code regions.

.. note::

@@ -276,16 +275,16 @@ ROCm Compute Profiler
^^^^^^^^^^^^^^^^^^^^^

:doc:`ROCm Compute Profiler <rocprofiler-compute:index>` is a system performance profiler for high-performance computing (HPC) and
machine learning (ML) workloads using Instinct GPUs. Under the hood, ROCm Compute Profiler uses
:ref:`ROCProfiler <mi300x-rocprof>` to collect hardware performance counters. The ROCm Compute Profiler tool performs
system profiling based on all approved hardware counters for Instinct
GPU architectures. It provides high-level performance analysis features including System Speed-of-Light, IP
block Speed-of-Light, Memory Chart Analysis, Roofline Analysis, Baseline Comparisons, and more.

ROCm Compute Profiler takes the guesswork out of profiling by removing the need to provide text input files with lists of counters
to collect and analyze raw CSV output files as is the case with ROCProfiler. Instead, ROCm Compute Profiler automates the collection
of all available hardware counters in one command and provides graphical interfaces to help users understand and
analyze bottlenecks and stressors for their computational workloads on AMD Instinct GPUs.

.. note::

@@ -328,380 +327,21 @@ hardware counters are also included.

ROCm Systems Profiler timeline trace example.
.. _mi300x-vllm-optimization:
vLLM performance optimization
=============================

vLLM is a high-throughput and memory efficient inference and serving engine for
large language models that has gained traction in the AI community for its
performance and ease of use. See :doc:`vllm-optimization`, where you'll learn
how to:

* Enable AITER (AI Tensor Engine for ROCm) to speed up LLM inference.
* Configure environment variables for optimal HIP, RCCL, and Quick Reduce performance.
* Select the right attention backend for your workload (AITER MHA/MLA vs. Triton).
* Choose parallelism strategies (tensor, pipeline, data, expert) for multi-GPU deployments.
* Apply quantization (``FP8``/``FP4``) to reduce memory usage by 2-4× with minimal accuracy loss.
* Tune engine arguments (batch size, memory utilization, graph modes) for your use case.
* Benchmark and scale across single-node and multi-node configurations.

Performance environment variables
---------------------------------

The following performance tips are not *specific* to vLLM -- they are general
but relevant in this context. You can tune the following vLLM parameters to
achieve optimal request latency and throughput performance.

* As described in `Environment variables (MI300X)
  <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html#environment-variables>`_,
the environment variable ``HIP_FORCE_DEV_KERNARG`` can improve vLLM
performance. Set it to ``export HIP_FORCE_DEV_KERNARG=1``.
* Set the :ref:`RCCL environment variable <mi300x-rccl>` ``NCCL_MIN_NCHANNELS``
to ``112`` to increase the number of channels on MI300X to potentially improve
performance.
* Set the environment variable ``TORCH_BLAS_PREFER_HIPBLASLT=1`` to use hipBLASLt to improve performance.
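A sketch of applying these variables from Python before vLLM and PyTorch initialize, equivalent to exporting them in the shell that launches the benchmark (values follow the suggestions above):

.. code-block:: python

   import os

   # Set before importing torch/vLLM so the HIP runtime, RCCL, and hipBLASLt pick them up.
   os.environ["HIP_FORCE_DEV_KERNARG"] = "1"
   os.environ["NCCL_MIN_NCHANNELS"] = "112"
   os.environ["TORCH_BLAS_PREFER_HIPBLASLT"] = "1"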
Auto-tuning using PyTorch TunableOp
------------------------------------
Since vLLM is based on the PyTorch framework, PyTorch TunableOp can be used for auto-tuning.
You can run auto-tuning with TunableOp in two simple steps without modifying your code:
* Enable TunableOp and tuning. Optionally, enable verbose mode:
.. code-block:: shell
PYTORCH_TUNABLEOP_ENABLED=1 PYTORCH_TUNABLEOP_VERBOSE=1 your_vllm_script.sh
* Enable TunableOp, disable tuning, and measure.
.. code-block:: shell
PYTORCH_TUNABLEOP_ENABLED=1 PYTORCH_TUNABLEOP_TUNING=0 your_vllm_script.sh
Learn more about TunableOp in the :ref:`PyTorch TunableOp <mi300x-tunableop>` section.
Performance tuning based on vLLM engine configurations
-------------------------------------------------------
The following subsections describe vLLM-specific configurations for performance tuning.
You can tune the following vLLM parameters to achieve optimal performance.
* ``tensor_parallel_size``
* ``gpu_memory_utilization``
* ``dtype``
* ``enforce_eager``
* ``kv_cache_dtype``
* ``input_len``
* ``output_len``
* ``max_num_seqs``
* ``num_scheduler_steps``
* ``max_model_len``
* ``enable_chunked_prefill``
* ``distributed_executor_backend``
* ``max_seq_len_to_capture``
Refer to `vLLM documentation <https://docs.vllm.ai/en/latest/models/performance.html>`_
for additional performance tips. :ref:`fine-tuning-llms-vllm` describes vLLM
usage with ROCm.
ROCm provides a prebuilt optimized Docker image for validating the performance
of LLM inference with vLLM on MI300X series accelerators. The Docker image includes
ROCm, vLLM, and PyTorch. For more information, see
:doc:`/how-to/rocm-for-ai/inference/benchmark-docker/vllm`.
.. _mi300x-vllm-throughput-measurement:
Evaluating performance by throughput measurement
-------------------------------------------------
This tuning guide evaluates the performance of LLM inference workloads by measuring throughput in tokens per second (TPS). Throughput can be assessed using both real-world and synthetic data, depending on your evaluation goals.
Refer to the benchmarking script located at ``benchmarks/benchmark_throughput.py`` in the `vLLM repository <https://github.com/ROCm/vllm/blob/main/benchmarks/benchmark_throughput.py>`_.
Use this script to measure throughput effectively.
* For realistic performance evaluation, you can use datasets like Hugging Face's
``ShareGPT_V3_unfiltered_cleaned_split.json``. This dataset includes real-world conversational
data, making it a good representation of typical use cases for language models. Download it using
the following command:
.. code-block:: shell
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
* For standardized benchmarking, you can set fixed input and output token
lengths. Synthetic prompts provide consistent benchmarking runs, making it
easier to compare performance across different models or configurations.
Additionally, a controlled environment simplifies analysis.
By balancing real-world data and synthetic data approaches, you can get a well-rounded understanding of model performance in varied scenarios.
.. _mi300x-vllm-single-node:
Maximizing vLLM instances on a single node
------------------------------------------
The general guideline is to maximize per-node throughput by running as many vLLM instances as possible.
However, running too many instances might lead to insufficient memory for the KV-cache, which can affect performance.
The Instinct MI300X accelerator is equipped with 192 GB of HBM3 memory, providing high capacity and bandwidth.
For models that fit in one GPU -- to maximize the accumulated throughput -- you can run as many as eight vLLM instances
simultaneously on one MI300X node (with eight GPUs). To do so, use the GPU isolation environment
variable ``CUDA_VISIBLE_DEVICES``.
For example, this script runs eight instances of vLLM for throughput benchmarking at the same time
with a model that can fit in one GPU:
.. code-block:: shell
for i in $(seq 0 7);
do
CUDA_VISIBLE_DEVICES="$i" python3 /app/vllm/benchmarks/benchmark_throughput.py -tp 1 --dataset "/path/to/dataset/ShareGPT_V3_unfiltered_cleaned_split.json" --model /path/to/model &
done
The total throughput achieved by running ``N`` instances of vLLM is generally much higher than running a
single vLLM instance across ``N`` GPUs simultaneously (that is, configuring ``tensor_parallel_size`` as N or
using the ``-tp`` N option, where ``1 < N ≤ 8``).
vLLM on MI300X accelerators can run a variety of model weights, including Llama 2 (7b, 13b, 70b), Llama 3 (8b, 70b), Qwen2 (7b, 72b), Mixtral-8x7b, Mixtral-8x22b, and so on.
Notable configurations include Llama2-70b and Llama3-70b models on a single MI300X GPU, and the Llama3.1 405b model can fit on one single node with 8 MI300X GPUs.
.. _mi300x-vllm-gpu-memory-utilization:
Configure the gpu_memory_utilization parameter
----------------------------------------------
There are two ways to increase throughput by configuring the ``gpu-memory-utilization`` parameter.
1. Increase ``gpu-memory-utilization`` to improve the throughput for a single instance as long as
it does not incur HIP or CUDA Out Of Memory. The default ``gpu-memory-utilization`` is 0.9.
You can set it to ``>0.9`` and ``<1``.
For example, the following benchmarking command sets ``gpu-memory-utilization`` to 0.98, or 98%.
.. code-block:: shell
/vllm-workspace/benchmarks/benchmark_throughput.py --gpu-memory-utilization 0.98 --input-len 1024 --output-len 128 --model /path/to/model
2. Decrease ``gpu-memory-utilization`` to maximize the number of vLLM instances on the same GPU.
Specify GPU memory utilization to run as many instances of vLLM as possible on a single
GPU. However, too many instances can result in no memory for KV-cache. For small models, run
multiple instances of vLLM on the same GPU by specifying a smaller ``gpu-memory-utilization`` -- as
long as it would not cause HIP Out Of Memory.
For example, run two instances of the Llama3-8b model at the same time on a single GPU by specifying
``--gpu-memory-utilization`` to 0.4 (40%) as follows (on GPU ``0``):
.. code-block:: shell
CUDA_VISIBLE_DEVICES=0 python3 /vllm-workspace/benchmarks/benchmark_throughput.py --gpu-memory-utilization 0.4
--dataset "/path/to/dataset/ShareGPT_V3_unfiltered_cleaned_split.json" --model /path/to/model &
CUDA_VISIBLE_DEVICES=0 python3 /vllm-workspace/benchmarks/benchmark_throughput.py --gpu-memory-utilization 0.4
--dataset "/path/to/dataset/ShareGPT_V3_unfiltered_cleaned_split.json" --model /path/to/model &
See :ref:`vllm-engine-args` for other performance suggestions.
.. _mi300x-vllm-multiple-gpus:
Run vLLM on multiple GPUs
-------------------------
The two main reasons to use multiple GPUs are:
* The model size is too big to run vLLM using one GPU, as it results in HIP Out of Memory errors.

* To achieve better latency when single-GPU latency is not acceptable.
To run one vLLM instance on multiple GPUs, use the ``-tp`` or ``--tensor-parallel-size`` option to
specify multiple GPUs. Optionally, use the ``CUDA_VISIBLE_DEVICES`` environment variable to specify
the GPUs.
For example, you can use two GPUs to start an API server on port 8000:
.. code-block:: shell
python -m vllm.entrypoints.api_server --model /path/to/model --dtype
float16 -tp 2 --port 8000 &
To achieve both latency and throughput performance for serving, you can run multiple API servers on
different GPUs by specifying different ports for each server and use ``CUDA_VISIBLE_DEVICES`` to
specify the GPUs for each server, for example:
.. code-block:: shell
CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.api_server --model
/path/to/model --dtype float16 -tp 2 --port 8000 &
CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.api_server --model
/path/to/model --dtype float16 -tp 2 --port 8001 &
Choose an attention backend
---------------------------
vLLM on ROCm supports two attention backends, each suitable for different use cases and performance
requirements:
- **Triton Flash Attention** - For benchmarking, run vLLM scripts at
least once as a warm-up step so Triton can perform auto-tuning before
collecting benchmarking numbers. This is the default setting.
- **Composable Kernel (CK) Flash Attention** - To use CK Flash Attention, specify
the environment variable as ``export VLLM_USE_TRITON_FLASH_ATTN=0``.
Refer to :ref:`Model acceleration libraries <acceleration-flash-attention>`
to learn more about Flash Attention with Triton or CK backends.
.. _vllm-engine-args:
vLLM engine arguments
---------------------
The following are configuration suggestions to potentially improve performance with vLLM. See
`vLLM's engine arguments documentation <https://docs.vllm.ai/en/latest/serving/engine_args.html>`_
for a full list of configurable engine arguments.
Configure the max-num-seqs parameter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Increase the ``max-num-seqs`` parameter from the default ``256`` to ``512`` (``--max-num-seqs
512``). This increases the maximum number of sequences per iteration and can improve throughput.
Use the float16 dtype
^^^^^^^^^^^^^^^^^^^^^
The default data type (``dtype``) is specified in the model's configuration file. For instance, some models use ``torch.bfloat16`` as their default ``dtype``.
Use float16 (``--dtype float16``) for better performance.
Multi-step scheduling
^^^^^^^^^^^^^^^^^^^^^
Setting ``num-scheduler-steps`` for multi-step scheduling can increase performance. Set it between 10 and 15 (``--num-scheduler-steps 10``).
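A sketch of how the suggestions above map onto the offline ``LLM`` constructor, assuming a vLLM build that still exposes ``num_scheduler_steps`` (the model path is a placeholder):

.. code-block:: python

   from vllm import LLM

   llm = LLM(
       model="/path/to/model",
       dtype="float16",          # --dtype float16
       max_num_seqs=512,         # --max-num-seqs 512
       num_scheduler_steps=10,   # --num-scheduler-steps 10
   )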
Distributed executor backend
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vLLM supports two distributed executor backends: ``ray`` and ``mp``. When using the `ROCm vLLM fork <https://github.com/ROCm/vllm>`__, the ``mp``
backend (``--distributed_executor_backend mp``) is recommended.
Graph mode max-seq-len-to-capture
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``max-seq-len-to-capture`` sets the maximum sequence length covered by CUDA graphs. In the default mode (where ``enforce_eager`` is ``False``), when a sequence has a context length
larger than this, the vLLM engine falls back to eager mode. The default is 8192.
When working with models that support long context lengths, set the parameter ``--max-seq-len-to-capture`` to 16384.
See this `vLLM blog <https://blog.vllm.ai/2024/10/23/vllm-serving-amd.html>`__ for details.
An example of a long-context-length model is Qwen2-7b.
Whether to enable chunked prefill
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Another vLLM performance tip is to enable chunked prefill to improve
throughput. Chunked prefill allows large prefills to be chunked into
smaller chunks and batched together with decode requests.
You can enable the feature by specifying ``--enable-chunked-prefill`` in the
command line or setting ``enable_chunked_prefill=True`` in the LLM
constructor. 
As stated in `vLLM's documentation, <https://docs.vllm.ai/en/latest/models/performance.html#chunked-prefill>`__,
you can tune the performance by changing ``max_num_batched_tokens``. By
default, it is set to 512 and optimized for ITL (inter-token latency).
Smaller ``max_num_batched_tokens`` achieves better ITL because there are
fewer prefills interrupting decodes.
Higher ``max_num_batched_tokens`` achieves better TTFT (time to first
token) because you can put more prefill tokens in the batch.
You might experience noticeable throughput improvements when
benchmarking on a single GPU or 8 GPUs using the vLLM throughput
benchmarking script along with the ShareGPT dataset as input.
In the case of fixed ``input-len``/``output-len``, for some configurations,
enabling chunked prefill increases the throughput. For some other
configurations, the throughput may be worse and elicit a need to tune
parameter ``max_num_batched_tokens`` (for example, increasing ``max_num_batched_tokens`` value to 4096 or larger).
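A sketch of enabling the feature in the ``LLM`` constructor and raising ``max_num_batched_tokens`` for TTFT-oriented workloads (the model path is a placeholder):

.. code-block:: python

   from vllm import LLM

   llm = LLM(
       model="/path/to/model",
       enable_chunked_prefill=True,
       max_num_batched_tokens=4096,  # larger values favor TTFT over ITL
   )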
.. note::
Chunked prefill is no longer recommended. See the vLLM blog: `Serving LLMs on AMD MI300X: Best Practices <https://blog.vllm.ai/2024/10/23/vllm-serving-amd.html>`_ (October 2024).
Quantization support
---------------------
Quantization reduces the precision of the model's weights and activations, which significantly decreases the memory footprint.
``fp8(w8a8)`` and ``AWQ`` quantization are supported for ROCm.
FP8 quantization
^^^^^^^^^^^^^^^^^
The `ROCm vLLM fork <https://github.com/ROCm/vllm>`__ supports FP8 (8-bit floating point) weight and activation quantization using hardware acceleration on the Instinct MI300X.
Quantization of models with FP8 allows for a 2x reduction in model memory requirements and up to a 1.6x improvement in throughput with minimal impact on accuracy.
AMD publishes Quark Quantized OCP FP8 models on Hugging Face. For example:
* `Llama-3.1-8B-Instruct-FP8-KV <https://huggingface.co/amd/Llama-3.1-8B-Instruct-FP8-KV>`__
* `Llama-3.1-70B-Instruct-FP8-KV <https://huggingface.co/amd/Llama-3.1-70B-Instruct-FP8-KV>`__
* `Llama-3.1-405B-Instruct-FP8-KV <https://huggingface.co/amd/Llama-3.1-405B-Instruct-FP8-KV>`__
* `Mixtral-8x7B-Instruct-v0.1-FP8-KV <https://huggingface.co/amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV>`__
* `Mixtral-8x22B-Instruct-v0.1-FP8-KV <https://huggingface.co/amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV>`__
To enable vLLM benchmarking to run on fp8 quantized models, use the ``--quantization`` parameter with value ``fp8`` (``--quantization fp8``).
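A sketch of an offline run against one of the prequantized checkpoints listed above (any of them works the same way; the prompt is illustrative):

.. code-block:: python

   from vllm import LLM, SamplingParams

   llm = LLM(
       model="amd/Llama-3.1-8B-Instruct-FP8-KV",
       quantization="fp8",
   )
   out = llm.generate(["What is a large language model?"], SamplingParams(max_tokens=32))
   print(out[0].outputs[0].text)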
AWQ quantization
^^^^^^^^^^^^^^^^
You can quantize your own models by installing AutoAWQ or picking one of the 400+ models on Hugging Face. Be aware
that AWQ support in vLLM is currently underoptimized.
To enable vLLM to run on ``awq`` quantized models, use the ``--quantization`` parameter with ``awq`` (``--quantization awq``).
You can find more specifics in the `vLLM AutoAWQ documentation <https://docs.vllm.ai/en/stable/quantization/auto_awq.html>`_.
fp8 kv-cache-dtype
^^^^^^^^^^^^^^^^^^^^^^^
Using ``fp8 kv-cache dtype`` can improve performance as it reduces the size
of ``kv-cache``. As a result, it reduces the cost required for reading and
writing the ``kv-cache``.
To use this feature, specify ``--kv-cache-dtype`` as ``fp8``.
To specify the quantization scaling config, use the
``--quantization-param-path`` parameter. If the parameter is not specified,
the default scaling factor of ``1`` is used, which can lead to less accurate
results. To generate ``kv-cache`` scaling JSON file, see `FP8 KV
Cache <https://github.com/vllm-project/llm-compressor/blob/main/examples/quantization_kv_cache/README.md>`__
in the vLLM GitHub repository.
Two sample Llama scaling configuration files are in vLLM for ``llama2-70b`` and
``llama2-7b``.
If building the vLLM using
`Dockerfile.rocm <https://github.com/vllm-project/vllm/blob/main/docker/Dockerfile.rocm>`_
for ``llama2-70b`` scale config, find the file at
``/vllm-workspace/tests/fp8_kv/llama2-70b-fp8-kv/kv_cache_scales.json`` at
runtime.
Below is a sample command to run benchmarking with this feature enabled
for the ``llama2-70b`` model:
.. code-block:: shell
python3 /vllm-workspace/benchmarks/benchmark_throughput.py --model \
/path/to/llama2-70b-model --kv-cache-dtype "fp8" \
--quantization-param-path \
"/vllm-workspace/tests/fp8_kv/llama2-70b-fp8-kv/kv_cache_scales.json" \
--input-len 512 --output-len 256 --num-prompts 500
.. _mi300x-tunableop:
@@ -917,7 +557,7 @@ ROCm library tuning involves optimizing the performance of routine computational
operations (such as ``GEMM``) provided by ROCm libraries like
:ref:`hipBLASLt <mi300x-hipblaslt>`, :ref:`Composable Kernel <mi300x-ck>`,
:ref:`MIOpen <mi300x-miopen>`, and :ref:`RCCL <mi300x-rccl>`. This tuning aims
to maximize efficiency and throughput on Instinct MI300X GPUs to gain
improved application performance.

.. _mi300x-library-gemm:
@@ -946,33 +586,33 @@ for details.
.. code-block:: shell

   HIP_FORCE_DEV_KERNARG=1 hipblaslt-bench --alpha 1 --beta 0 -r f16_r \
       --a_type f16_r --b_type f8_r --compute_type f32_f16_r \
       --initialization trig_float --cold_iters 100 --iters 1000 --rotating 256
* Example 2: Benchmark forward epilogues and backward epilogues

  * ``HIPBLASLT_EPILOGUE_RELU: "--activation_type relu";``
  * ``HIPBLASLT_EPILOGUE_BIAS: "--bias_vector";``
  * ``HIPBLASLT_EPILOGUE_RELU_BIAS: "--activation_type relu --bias_vector";``
  * ``HIPBLASLT_EPILOGUE_GELU: "--activation_type gelu";``
  * ``HIPBLASLT_EPILOGUE_DGELU: "--activation_type gelu --gradient";``
  * ``HIPBLASLT_EPILOGUE_GELU_BIAS: "--activation_type gelu --bias_vector";``
  * ``HIPBLASLT_EPILOGUE_GELU_AUX: "--activation_type gelu --use_e";``
  * ``HIPBLASLT_EPILOGUE_GELU_AUX_BIAS: "--activation_type gelu --bias_vector --use_e";``
  * ``HIPBLASLT_EPILOGUE_DGELU_BGRAD: "--activation_type gelu --bias_vector --gradient";``
  * ``HIPBLASLT_EPILOGUE_BGRADA: "--bias_vector --gradient --bias_source a";``
  * ``HIPBLASLT_EPILOGUE_BGRADB: "--bias_vector --gradient --bias_source b";``
hipBLASLt auto-tuning using hipblaslt-bench
@@ -1031,26 +671,26 @@ The tuning tool is a two-step tool. It first runs the benchmark, then it creates
.. code-block:: python

   defaultBenchOptions = {"ProblemType": {
       "TransposeA": 0,
       "TransposeB": 0,
       "ComputeInputDataType": "s",
       "ComputeDataType": "s",
       "DataTypeC": "s",
       "DataTypeD": "s",
       "UseBias": False
   }, "TestConfig": {
       "ColdIter": 20,
       "Iter": 100,
       "AlgoMethod": "all",
       "RequestedSolutions": 2,  # Only works in AlgoMethod heuristic
       "SolutionIndex": None,  # Only works in AlgoMethod index
       "ApiMethod": "cpp",
       "RotatingBuffer": 0,
   }, "TuningParameters": {
       "SplitK": [0]
   }, "ProblemSizes": []}
   defaultCreateLogicOptions = {}  # Currently unused

* ``TestConfig``

  1. ``ColdIter``: This is the number of warm-up iterations before starting the kernel benchmark.
@@ -1230,7 +870,7 @@ command:
.. code-block:: shell

   merge.py original_dir new_tuned_yaml_dir output_dir

The following table describes the logic YAML files.
@@ -1451,7 +1091,7 @@ you can only use a fraction of the potential bandwidth on the node.
The following figure shows an
:doc:`MI300X node-level architecture </conceptual/gpu-arch/mi300>` of a
system with AMD EPYC processors in a dual-socket configuration and eight
AMD Instinct MI300X GPUs. The MI300X OAMs attach to the host system via
PCIe Gen 5 x16 links (yellow lines). The GPUs use seven high-bandwidth,
low-latency AMD Infinity Fabric™ links (red lines) to form a fully connected
8-GPU system.

@@ -1460,7 +1100,7 @@ low-latency AMD Infinity Fabric™ links (red lines) to form a fully connected

.. figure:: ../../../data/shared/mi300-node-level-arch.png

   MI300 Series node-level architecture showing 8 fully interconnected MI300X
   OAM modules connected to (optional) PCIe switches via re-timers and HGX
   connectors.
@@ -1653,7 +1293,7 @@ Auto-tunable kernel configuration involves adjusting memory access and computati
resources assigned to each compute unit. It encompasses the usage of
:ref:`LDS <mi300x-cu-fig>`, register, and task scheduling on a compute unit.

The GPU contains global memory, local data share (LDS), and
registers. Global memory has high access latency, but is large. LDS access has
much lower latency, but is smaller. It is a fast on-CU software-managed memory
that can be used to efficiently share data between all work items in a block.

@@ -1666,11 +1306,11 @@ Register access is the fastest yet smallest among the three.

   Schematic representation of a CU in the CDNA2 or CDNA3 architecture.

The following is a list of kernel arguments used for tuning performance and
resource allocation on AMD GPUs, which helps in optimizing the
efficiency and throughput of various computational kernels.

``num_stages=n``
   Adjusts the number of pipeline stages for different types of kernels. On AMD GPUs, set ``num_stages``
   according to the following rules:

   * For kernels with a single GEMM, set to ``2``.
@@ -1697,15 +1337,15 @@ efficiency and throughput of various computational kernels.
* The occupancy of the kernel is limited by VGPR usage, and * The occupancy of the kernel is limited by VGPR usage, and
* The current VGPR usage is only a few above a boundary in * The current VGPR usage is only a few above a boundary in
:ref:`Occupancy related to VGPR usage in an Instinct MI300X accelerator <mi300x-occupancy-vgpr-table>`. :ref:`Occupancy related to VGPR usage in an Instinct MI300X GPU <mi300x-occupancy-vgpr-table>`.
.. _mi300x-occupancy-vgpr-table: .. _mi300x-occupancy-vgpr-table:
.. figure:: ../../../data/shared/occupancy-vgpr.png .. figure:: ../../../data/shared/occupancy-vgpr.png
:alt: Occupancy related to VGPR usage in an Instinct MI300X accelerator. :alt: Occupancy related to VGPR usage in an Instinct MI300X GPU.
:align: center :align: center
Occupancy related to VGPRs usage on an Instinct MI300X accelerator Occupancy related to VGPRs usage on an Instinct MI300X GPU
For example, according to the table, each Execution Unit (EU) has 512 available For example, according to the table, each Execution Unit (EU) has 512 available
VGPRs, which are allocated in blocks of 16. If the current VGPR usage is 170, VGPRs, which are allocated in blocks of 16. If the current VGPR usage is 170,
@@ -1730,7 +1370,7 @@ VGPR usage so that it might fit 3 waves per EU.
- ``matrix_instr_nonkdim = 32``: ``mfma_32x32`` is used. - ``matrix_instr_nonkdim = 32``: ``mfma_32x32`` is used.
For GEMM kernels on an MI300X accelerator, ``mfma_16x16`` typically outperforms ``mfma_32x32``, even for large For GEMM kernels on an MI300X GPU, ``mfma_16x16`` typically outperforms ``mfma_32x32``, even for large
tile/GEMM sizes. tile/GEMM sizes.
@@ -1749,7 +1389,7 @@ the number of CUs a kernel can distribute its task across.
XCD-level system architecture showing 40 compute units, XCD-level system architecture showing 40 compute units,
each with 32 KB L1 cache, a unified compute system with 4 ACE compute each with 32 KB L1 cache, a unified compute system with 4 ACE compute
accelerators, shared 4MB of L2 cache, and a hardware scheduler (HWS). GPUs, shared 4MB of L2 cache, and a hardware scheduler (HWS).
You can query hardware resources with the command ``rocminfo`` in the You can query hardware resources with the command ``rocminfo`` in the
``/opt/rocm/bin`` directory. For instance, query the number of CUs, number of ``/opt/rocm/bin`` directory. For instance, query the number of CUs, number of
@@ -1833,7 +1473,7 @@ de-quantize the ``int4`` key-value from the ``int4`` data type to ``fp16``.
From the IR snippet, you can see ``i32`` data is loaded from global memory to From the IR snippet, you can see ``i32`` data is loaded from global memory to
registers (``%190``). With a few element-wise operations in registers, it is registers (``%190``). With a few element-wise operations in registers, it is
stored in shared memory (``%269``) for the transpose operation (``%270``), which stored in shared memory (``%269``) for the transpose operation (``%270``), which
needs data movement across different threads. With the transpose done, it is needs data movement across different threads. With the transpose done, it is
loaded from LDS to register again (``%276``), and with a few more loaded from LDS to register again (``%276``), and with a few more
element-wise operations, it is stored to LDS again (``%298``). The last step element-wise operations, it is stored to LDS again (``%298``). The last step
@@ -1967,7 +1607,7 @@ something similar to the following:
loaded at: [0x7fd4f100c000-0x7fd4f100e070] loaded at: [0x7fd4f100c000-0x7fd4f100e070]
The kernel name and the code object file should be listed. In the The kernel name and the code object file should be listed. In the
example above, the kernel name is vector_add_assert_trap, but this might example above, the kernel name is vector_add_assert_trap, but this might
also look like: also look like:
.. code-block:: text .. code-block:: text
@@ -2081,3 +1721,8 @@ Hardware efficiency is maximized with 4 or fewer HIP streams. These environment
configuration to two compute streams and two RCCL streams, aligning with this best practice. configuration to two compute streams and two RCCL streams, aligning with this best practice.
Additionally, RCCL is often pre-optimized for MI300 systems in production by querying the node Additionally, RCCL is often pre-optimized for MI300 systems in production by querying the node
topology during startup, reducing the need for extensive manual tuning. topology during startup, reducing the need for extensive manual tuning.
Further reading
===============
* :doc:`vllm-optimization`

View File

@@ -1,7 +1,7 @@
:orphan: :orphan:
.. meta:: .. meta::
:description: Learn how to validate LLM inference performance on MI300X accelerators using AMD MAD and the :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the
ROCm vLLM Docker image. ROCm vLLM Docker image.
:keywords: model, MAD, automation, dashboarding, validate :keywords: model, MAD, automation, dashboarding, validate
@@ -23,9 +23,9 @@ vLLM inference performance testing
The `ROCm vLLM Docker <{{ unified_docker.docker_hub_url }}>`_ image offers The `ROCm vLLM Docker <{{ unified_docker.docker_hub_url }}>`_ image offers
a prebuilt, optimized environment for validating large language model (LLM) a prebuilt, optimized environment for validating large language model (LLM)
inference performance on AMD Instinct™ MI300X series accelerators. This ROCm vLLM inference performance on AMD Instinct™ MI300X Series GPUs. This ROCm vLLM
Docker image integrates vLLM and PyTorch tailored specifically for MI300X series Docker image integrates vLLM and PyTorch tailored specifically for MI300X Series
accelerators and includes the following components: GPUs and includes the following components:
.. list-table:: .. list-table::
:header-rows: 1 :header-rows: 1
@@ -47,7 +47,7 @@ vLLM inference performance testing
With this Docker image, you can quickly test the :ref:`expected With this Docker image, you can quickly test the :ref:`expected
inference performance numbers <vllm-benchmark-performance-measurements-812>` for inference performance numbers <vllm-benchmark-performance-measurements-812>` for
MI300X series accelerators. MI300X Series GPUs.
What's new What's new
========== ==========
@@ -139,7 +139,7 @@ page provides reference throughput and serving measurements for inferencing popu
The performance data presented in The performance data presented in
`Performance results with AMD ROCm software <https://www.amd.com/en/developer/resources/rocm-hub/dev-ai/performance-results.html>`_ `Performance results with AMD ROCm software <https://www.amd.com/en/developer/resources/rocm-hub/dev-ai/performance-results.html>`_
only reflects the latest version of this inference benchmarking environment. only reflects the latest version of this inference benchmarking environment.
The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X accelerators or ROCm software. The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X GPUs or ROCm software.
System validation System validation
================= =================
@@ -424,7 +424,7 @@ Further reading
- To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide <https://github.com/ROCm/MAD?tab=readme-ov-file#usage-guide>`__. - To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide <https://github.com/ROCm/MAD?tab=readme-ov-file#usage-guide>`__.
- To learn more about system settings and management practices to configure your system for - To learn more about system settings and management practices to configure your system for
AMD Instinct MI300X series accelerators, see `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_. AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_.
- For application performance optimization strategies for HPC and AI workloads, - For application performance optimization strategies for HPC and AI workloads,
including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`. including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`.

View File

@@ -0,0 +1,448 @@
:orphan:
.. meta::
:description: Learn how to validate LLM inference performance on MI300X accelerators using AMD MAD and the ROCm vLLM Docker image.
:keywords: model, MAD, automation, dashboarding, validate
**********************************
vLLM inference performance testing
**********************************
.. caution::
This documentation does not reflect the latest version of ROCm vLLM
inference performance documentation. See :doc:`../vllm` for the latest version.
.. _vllm-benchmark-unified-docker-909:
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20250909-benchmark-models.yaml
{% set docker = data.dockers[0] %}
The `ROCm vLLM Docker <{{ docker.docker_hub_url }}>`_ image offers
a prebuilt, optimized environment for validating large language model (LLM)
inference performance on AMD Instinct™ MI300X Series accelerators. This ROCm vLLM
Docker image integrates vLLM and PyTorch tailored specifically for MI300X Series
accelerators and includes the following components:
.. list-table::
:header-rows: 1
* - Software component
- Version
{% for component_name, component_version in docker.components.items() %}
* - {{ component_name }}
- {{ component_version }}
{% endfor %}
With this Docker image, you can quickly test the :ref:`expected
inference performance numbers <vllm-benchmark-performance-measurements-909>` for
MI300X Series accelerators.
What's new
==========
The following is a summary of notable changes since the :doc:`previous ROCm/vLLM Docker release <vllm-history>`.
* Upgraded to vLLM v0.10.1.
* Set ``VLLM_V1_USE_PREFILL_DECODE_ATTENTION=1`` by default for better performance.
* Set ``VLLM_ROCM_USE_AITER_RMSNORM=0`` by default to avoid various issues with torch compile (an override example follows this list).
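If you need to experiment with these defaults, you can override them in the shell where you launch vLLM. This is a minimal sketch that uses only the variable names documented above; the values shown simply invert the new defaults for troubleshooting and are not a recommendation.
.. code-block:: shell
# Revert the documented defaults (illustrative only)
export VLLM_V1_USE_PREFILL_DECODE_ATTENTION=0
export VLLM_ROCM_USE_AITER_RMSNORM=1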
.. _vllm-benchmark-supported-models-909:
Supported models
================
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20250909-benchmark-models.yaml
{% set docker = data.dockers[0] %}
{% set model_groups = data.model_groups %}
.. _vllm-benchmark-available-models-909:
The following models are supported for inference performance benchmarking
with vLLM and ROCm. Some instructions, commands, and recommendations in this
documentation might vary by model -- select one to get started.
.. raw:: html
<div id="vllm-benchmark-ud-params-picker" class="container-fluid">
<div class="row gx-0">
<div class="col-2 me-1 px-2 model-param-head">Model</div>
<div class="row col-10 pe-0">
{% for model_group in model_groups %}
<div class="col-3 px-2 model-param" data-param-k="model-group" data-param-v="{{ model_group.tag }}" tabindex="0">{{ model_group.group }}</div>
{% endfor %}
</div>
</div>
<div class="row gx-0 pt-1">
<div class="col-2 me-1 px-2 model-param-head">Variant</div>
<div class="row col-10 pe-0">
{% for model_group in model_groups %}
{% set models = model_group.models %}
{% for model in models %}
{% if models|length % 3 == 0 %}
<div class="col-4 px-2 model-param" data-param-k="model" data-param-v="{{ model.mad_tag }}" data-param-group="{{ model_group.tag }}" tabindex="0">{{ model.model }}</div>
{% else %}
<div class="col-6 px-2 model-param" data-param-k="model" data-param-v="{{ model.mad_tag }}" data-param-group="{{ model_group.tag }}" tabindex="0">{{ model.model }}</div>
{% endif %}
{% endfor %}
{% endfor %}
</div>
</div>
</div>
.. _vllm-benchmark-vllm-909:
{% for model_group in model_groups %}
{% for model in model_group.models %}
.. container:: model-doc {{ model.mad_tag }}
.. note::
See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model.
Some models require access authorization prior to use via an external license agreement through a third party.
{% if model.precision == "float8" and model.model_repo.startswith("amd") %}
This model uses FP8 quantization via `AMD Quark <https://quark.docs.amd.com/latest/>`__ for efficient inference on AMD accelerators.
{% endif %}
{% endfor %}
{% endfor %}
.. _vllm-benchmark-performance-measurements-909:
Performance measurements
========================
To evaluate performance, the
`Performance results with AMD ROCm software <https://www.amd.com/en/developer/resources/rocm-hub/dev-ai/performance-results.html>`_
page provides reference throughput and serving measurements for inferencing popular AI models.
.. important::
The performance data presented in
`Performance results with AMD ROCm software <https://www.amd.com/en/developer/resources/rocm-hub/dev-ai/performance-results.html>`_
only reflects the latest version of this inference benchmarking environment.
The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct MI325X and MI300X accelerators or ROCm software.
System validation
=================
Before running AI workloads, it's important to validate that your AMD hardware is configured
correctly and performing optimally.
If you have already validated your system settings, including aspects like NUMA auto-balancing, you
can skip this step. Otherwise, complete the procedures in the :ref:`System validation and
optimization <rocm-for-ai-system-optimization>` guide to properly configure your system settings
before starting benchmarking.
To test for optimal performance, consult the recommended :ref:`System health benchmarks
<rocm-for-ai-system-health-bench>`. This suite of tests will help you verify and fine-tune your
system's configuration.
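As a quick spot check on a Linux host, you can verify one of the settings covered in those guides, NUMA auto-balancing, directly from the shell. This is only a sketch; follow the linked guides for the complete validation procedure.
.. code-block:: shell
# NUMA auto-balancing should report 0 (disabled)
cat /proc/sys/kernel/numa_balancing
# If it reports 1, disable it (requires root)
sudo sh -c 'echo 0 > /proc/sys/kernel/numa_balancing'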
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20250909-benchmark-models.yaml
{% set docker = data.dockers[0] %}
{% set model_groups = data.model_groups %}
Pull the Docker image
=====================
Download the `ROCm vLLM Docker image <{{ docker.docker_hub_url }}>`_.
Use the following command to pull the Docker image from Docker Hub.
.. code-block:: shell
docker pull {{ docker.pull_tag }}
Benchmarking
============
Once the setup is complete, choose between two options to reproduce the
benchmark results:
.. _vllm-benchmark-mad-909:
{% for model_group in model_groups %}
{% for model in model_group.models %}
.. container:: model-doc {{model.mad_tag}}
.. tab-set::
.. tab-item:: MAD-integrated benchmarking
The following run command is tailored to {{ model.model }}.
See :ref:`vllm-benchmark-supported-models-909` to switch to another available model.
1. Clone the ROCm Model Automation and Dashboarding (`<https://github.com/ROCm/MAD>`__) repository to a local
directory and install the required packages on the host machine.
.. code-block:: shell
git clone https://github.com/ROCm/MAD
cd MAD
pip install -r requirements.txt
2. Use this command to run the performance benchmark test on the `{{model.model}} <{{ model.url }}>`_ model
using one GPU with the :literal:`{{model.precision}}` data type on the host machine.
.. code-block:: shell
export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models"
madengine run \
--tags {{model.mad_tag}} \
--keep-model-dir \
--live-output \
--timeout 28800
MAD launches a Docker container with the name
``container_ci-{{model.mad_tag}}``. The throughput and serving reports of the
model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv``
and ``{{ model.mad_tag }}_serving.csv``.
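For a quick look at the collected results, you can pretty-print a report from the directory where it was written. This is only a convenience sketch using standard utilities and is not part of the MAD workflow.
.. code-block:: shell
# Render the comma-separated throughput report as an aligned table
column -s, -t < {{ model.mad_tag }}_throughput.csv | less -S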
Although the :ref:`available models
<vllm-benchmark-available-models-909>` are preconfigured to collect
offline throughput and online serving performance data, you can
also change the benchmarking parameters. See the standalone
benchmarking tab for more information.
{% if model.tunableop %}
.. note::
For improved performance, consider enabling :ref:`PyTorch TunableOp <mi300x-tunableop>`.
TunableOp automatically explores different implementations and configurations of certain PyTorch
operators to find the fastest one for your hardware.
By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see
`<https://github.com/ROCm/MAD/blob/develop/models.json>`__). To enable it, include
the ``--tunableop on`` argument in your run.
Enabling TunableOp triggers a two-pass run -- a warm-up followed by the
performance-collection run.
{% endif %}
.. tab-item:: Standalone benchmarking
The following commands are optimized for {{ model.model }}.
See :ref:`vllm-benchmark-supported-models-909` to switch to another available model.
.. seealso::
For more information on configuration, see the `config files
<https://github.com/ROCm/MAD/tree/develop/scripts/vllm/configs>`__
in the MAD repository. Refer to the `vLLM engine <https://docs.vllm.ai/en/latest/configuration/engine_args.html#engineargs>`__
for descriptions of available configuration options
and `Benchmarking vLLM <https://github.com/vllm-project/vllm/blob/main/benchmarks/README.md>`__ for
additional benchmarking information.
.. rubric:: Launch the container
You can run the vLLM benchmark tool independently by starting the
`Docker container <{{ docker.docker_hub_url }}>`_ as shown
in the following snippet.
.. code-block:: shell
docker pull {{ docker.pull_tag }}
docker run -it \
--device=/dev/kfd \
--device=/dev/dri \
--group-add video \
--shm-size 16G \
--security-opt seccomp=unconfined \
--security-opt apparmor=unconfined \
--cap-add=SYS_PTRACE \
-v $(pwd):/workspace \
--env HUGGINGFACE_HUB_CACHE=/workspace \
--name test \
{{ docker.pull_tag }}
.. rubric:: Throughput command
Use the following command to start the throughput benchmark.
.. code-block:: shell
model={{ model.model_repo }}
tp={{ model.config.tp }}
num_prompts=1024
in=128
out=128
dtype={{ model.config.dtype }}
kv_cache_dtype={{ model.config.kv_cache_dtype }}
max_num_seqs=1024
max_seq_len_to_capture={{ model.config.max_seq_len_to_capture }}
max_num_batched_tokens={{ model.config.max_num_batched_tokens }}
max_model_len={{ model.config.max_model_len }}
vllm bench throughput --model $model \
-tp $tp \
--num-prompts $num_prompts \
--input-len $in \
--output-len $out \
--dtype $dtype \
--kv-cache-dtype $kv_cache_dtype \
--max-num-seqs $max_num_seqs \
--max-seq-len-to-capture $max_seq_len_to_capture \
--max-num-batched-tokens $max_num_batched_tokens \
--max-model-len $max_model_len \
--trust-remote-code \
--output-json ${model}_throughput.json \
--gpu-memory-utilization 0.9
.. rubric:: Serving command
1. Start the server using the following command:
.. code-block:: shell
model={{ model.model_repo }}
tp={{ model.config.tp }}
dtype={{ model.config.dtype }}
kv_cache_dtype={{ model.config.kv_cache_dtype }}
max_num_seqs=256
max_seq_len_to_capture={{ model.config.max_seq_len_to_capture }}
max_num_batched_tokens={{ model.config.max_num_batched_tokens }}
max_model_len={{ model.config.max_model_len }}
vllm serve $model \
-tp $tp \
--dtype $dtype \
--kv-cache-dtype $kv_cache_dtype \
--max-num-seqs $max_num_seqs \
--max-seq-len-to-capture $max_seq_len_to_capture \
--max-num-batched-tokens $max_num_batched_tokens \
--max-model-len $max_model_len \
--no-enable-prefix-caching \
--swap-space 16 \
--disable-log-requests \
--trust-remote-code \
--gpu-memory-utilization 0.9
Wait until the model has loaded and the server is ready to accept requests.
2. On another terminal on the same machine, run the benchmark:
.. code-block:: shell
# Connect to the container
docker exec -it test bash
# Wait for the server to start
until curl -s http://localhost:8000/v1/models; do sleep 30; done
# Run the benchmark
model={{ model.model_repo }}
max_concurrency=1
num_prompts=10
in=128
out=128
vllm bench serve --model $model \
--percentile-metrics "ttft,tpot,itl,e2el" \
--dataset-name random \
--ignore-eos \
--max-concurrency $max_concurrency \
--num-prompts $num_prompts \
--random-input-len $in \
--random-output-len $out \
--trust-remote-code \
--save-result \
--result-filename ${model}_serving.json
.. note::
For improved performance with certain Mixture of Experts models, such as Mixtral 8x22B,
try adding ``export VLLM_ROCM_USE_AITER=1`` to your commands.
If you encounter the following error, pass your access-authorized Hugging
Face token to the gated models.
.. code-block::
OSError: You are trying to access a gated repo.
# pass your HF_TOKEN
export HF_TOKEN=$your_personal_hf_token
.. raw:: html
<style>
mjx-container[jax="CHTML"][display="true"] {
text-align: left;
margin: 0;
}
</style>
.. note::
Throughput is calculated as:
- .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time
- .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time
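As a purely illustrative application of these formulas (not measured results), a run that completes 1024 requests with 128 input and 128 output tokens each in 200 seconds would give:
- .. math:: throughput\_tot = 1024 \times (128 + 128) / 200 \approx 1311 \text{ tokens/s}
- .. math:: throughput\_gen = 1024 \times 128 / 200 \approx 655 \text{ tokens/s}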
{% endfor %}
{% endfor %}
Advanced usage
==============
For information on experimental features and known issues related to ROCm optimization efforts on vLLM,
see the developer's guide at `<https://github.com/ROCm/vllm/blob/documentation/docs/dev-docker/README.md>`__.
Reproducing the Docker image
----------------------------
To reproduce this ROCm/vLLM Docker image release, follow these steps:
1. Clone the `vLLM repository <https://github.com/ROCm/vllm>`__.
.. code-block:: shell
git clone https://github.com/ROCm/vllm.git
2. Checkout the specific release commit.
.. code-block:: shell
cd vllm
git checkout 6663000a391911eba96d7864a26ac42b07f6ef29
3. Build the Docker image. Replace ``vllm-rocm`` with your desired image tag.
.. code-block:: shell
docker build -f docker/Dockerfile.rocm -t vllm-rocm .
Further reading
===============
- To learn more about the options for latency and throughput benchmark scripts,
see `<https://github.com/ROCm/vllm/tree/main/benchmarks>`_.
- To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide <https://github.com/ROCm/MAD?tab=readme-ov-file#usage-guide>`__.
- To learn more about system settings and management practices to configure your system for
AMD Instinct MI300X Series accelerators, see `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_.
- See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for
a brief introduction to vLLM and optimization strategies.
- For application performance optimization strategies for HPC and AI workloads,
including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`.
- For a list of other ready-made Docker images for AI with ROCm, see
`AMD Infinity Hub <https://www.amd.com/en/developer/resources/infinity-hub.html#f-amd_hub_category=AI%20%26%20ML%20Models>`_.
Previous versions
=================
See :doc:`vllm-history` to find documentation for previous releases
of the ``ROCm/vllm`` Docker image.

View File

@@ -0,0 +1,482 @@
:orphan:
.. meta::
:description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image.
:keywords: model, MAD, automation, dashboarding, validate
**********************************
vLLM inference performance testing
**********************************
.. caution::
This documentation does not reflect the latest version of ROCm vLLM
inference performance documentation. See :doc:`../vllm` for the latest version.
.. _vllm-benchmark-unified-docker-930:
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20251006-benchmark-models.yaml
{% set docker = data.dockers[0] %}
The `ROCm vLLM Docker <{{ docker.docker_hub_url }}>`_ image offers a
prebuilt, optimized environment for validating large language model (LLM)
inference performance on AMD Instinct™ MI355X, MI350X, MI325X and MI300X
GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored
specifically for AMD data center GPUs and includes the following components:
.. tab-set::
.. tab-item:: {{ docker.pull_tag }}
.. list-table::
:header-rows: 1
* - Software component
- Version
{% for component_name, component_version in docker.components.items() %}
* - {{ component_name }}
- {{ component_version }}
{% endfor %}
With this Docker image, you can quickly test the :ref:`expected
inference performance numbers <vllm-benchmark-performance-measurements-930>` for
AMD Instinct GPUs.
What's new
==========
The following is a summary of notable changes since the :doc:`previous ROCm/vLLM Docker release <vllm-history>`.
* Added support for AMD Instinct MI355X and MI350X GPUs.
* Added support and benchmarking instructions for the following models. See :ref:`vllm-benchmark-supported-models-930`.
* Llama 4 Scout and Maverick
* DeepSeek R1 0528 FP8
* MXFP4 models (MI355X and MI350X only): Llama 3.3 70B MXFP4 and Llama 3.1 405B MXFP4
* GPT OSS 20B and 120B
* Qwen 3 32B, 30B-A3B, and 235B-A22B
* Removed the deprecated ``--max-seq-len-to-capture`` flag.
* ``--gpu-memory-utilization`` is now configurable via the `configuration files
<https://github.com/ROCm/MAD/tree/develop/scripts/vllm/configs>`__ in the MAD
repository.
.. _vllm-benchmark-supported-models-930:
Supported models
================
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20251006-benchmark-models.yaml
{% set docker = data.dockers[0] %}
{% set model_groups = data.model_groups %}
.. _vllm-benchmark-available-models-930:
The following models are supported for inference performance benchmarking
with vLLM and ROCm. Some instructions, commands, and recommendations in this
documentation might vary by model -- select one to get started. MXFP4 models
are only supported on MI355X and MI350X GPUs.
.. raw:: html
<div id="vllm-benchmark-ud-params-picker" class="container-fluid">
<div class="row gx-0">
<div class="col-2 me-1 px-2 model-param-head">Model</div>
<div class="row col-10 pe-0">
{% for model_group in model_groups %}
<div class="col-4 px-2 model-param" data-param-k="model-group" data-param-v="{{ model_group.tag }}" tabindex="0">{{ model_group.group }}</div>
{% endfor %}
</div>
</div>
<div class="row gx-0 pt-1">
<div class="col-2 me-1 px-2 model-param-head">Variant</div>
<div class="row col-10 pe-0">
{% for model_group in model_groups %}
{% set models = model_group.models %}
{% for model in models %}
{% if models|length % 3 == 0 %}
<div class="col-4 px-2 model-param" data-param-k="model" data-param-v="{{ model.mad_tag }}" data-param-group="{{ model_group.tag }}" tabindex="0">{{ model.model }}</div>
{% else %}
<div class="col-6 px-2 model-param" data-param-k="model" data-param-v="{{ model.mad_tag }}" data-param-group="{{ model_group.tag }}" tabindex="0">{{ model.model }}</div>
{% endif %}
{% endfor %}
{% endfor %}
</div>
</div>
</div>
.. _vllm-benchmark-vllm-930:
{% for model_group in model_groups %}
{% for model in model_group.models %}
.. container:: model-doc {{ model.mad_tag }}
{% if model.precision == "float4" %}
.. important::
MXFP4 is supported only on MI355X and MI350X GPUs.
{% endif %}
.. note::
See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model.
Some models require access authorization prior to use via an external license agreement through a third party.
{% if model.precision == "float8" and model.model_repo.startswith("amd") %}
This model uses FP8 quantization via `AMD Quark <https://quark.docs.amd.com/latest/>`__ for efficient inference on AMD GPUs.
{% endif %}
{% if model.precision == "float4" and model.model_repo.startswith("amd") %}
This model uses FP4 quantization via `AMD Quark <https://quark.docs.amd.com/latest/>`__ for efficient inference on AMD GPUs.
{% endif %}
{% endfor %}
{% endfor %}
.. _vllm-benchmark-performance-measurements-930:
Performance measurements
========================
To evaluate performance, the
`Performance results with AMD ROCm software <https://www.amd.com/en/developer/resources/rocm-hub/dev-ai/performance-results.html>`_
page provides reference throughput and serving measurements for inferencing popular AI models.
.. important::
The performance data presented in
`Performance results with AMD ROCm software <https://www.amd.com/en/developer/resources/rocm-hub/dev-ai/performance-results.html>`_
only reflects the latest version of this inference benchmarking environment.
The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct GPUs or ROCm software.
System validation
=================
Before running AI workloads, it's important to validate that your AMD hardware is configured
correctly and performing optimally.
If you have already validated your system settings, including aspects like NUMA auto-balancing, you
can skip this step. Otherwise, complete the procedures in the :ref:`System validation and
optimization <rocm-for-ai-system-optimization>` guide to properly configure your system settings
before starting benchmarking.
To test for optimal performance, consult the recommended :ref:`System health benchmarks
<rocm-for-ai-system-health-bench>`. This suite of tests will help you verify and fine-tune your
system's configuration.
Pull the Docker image
=====================
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20251006-benchmark-models.yaml
{% set docker = data.dockers[0] %}
Download the `ROCm vLLM Docker image <{{ docker.docker_hub_url }}>`_.
Use the following command to pull the Docker image from Docker Hub.
.. code-block:: shell
docker pull {{ docker.pull_tag }}
Benchmarking
============
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20251006-benchmark-models.yaml
{% set docker = data.dockers[0] %}
{% set model_groups = data.model_groups %}
Once the setup is complete, choose between two options to reproduce the
benchmark results:
.. _vllm-benchmark-mad-930:
{% for model_group in model_groups %}
{% for model in model_group.models %}
.. container:: model-doc {{model.mad_tag}}
.. tab-set::
.. tab-item:: MAD-integrated benchmarking
The following run command is tailored to {{ model.model }}.
See :ref:`vllm-benchmark-supported-models-930` to switch to another available model.
1. Clone the ROCm Model Automation and Dashboarding (`<https://github.com/ROCm/MAD>`__) repository to a local
directory and install the required packages on the host machine.
.. code-block:: shell
git clone https://github.com/ROCm/MAD
cd MAD
pip install -r requirements.txt
2. On the host machine, use this command to run the performance benchmark test on
the `{{model.model}} <{{ model.url }}>`_ model using one node with the
:literal:`{{model.precision}}` data type.
.. code-block:: shell
export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models"
madengine run \
--tags {{model.mad_tag}} \
--keep-model-dir \
--live-output
MAD launches a Docker container with the name
``container_ci-{{model.mad_tag}}``. The throughput and serving reports of the
model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv``
and ``{{ model.mad_tag }}_serving.csv``.
Although the :ref:`available models
<vllm-benchmark-available-models-930>` are preconfigured to collect
offline throughput and online serving performance data, you can
also change the benchmarking parameters. See the standalone
benchmarking tab for more information.
{% if model.tunableop %}
.. note::
For improved performance, consider enabling :ref:`PyTorch TunableOp <mi300x-tunableop>`.
TunableOp automatically explores different implementations and configurations of certain PyTorch
operators to find the fastest one for your hardware.
By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see
`<https://github.com/ROCm/MAD/blob/develop/models.json>`__). To enable it, include
the ``--tunableop on`` argument in your run.
Enabling TunableOp triggers a two-pass run -- a warm-up followed by the
performance-collection run.
{% endif %}
.. tab-item:: Standalone benchmarking
The following commands are optimized for {{ model.model }}.
See :ref:`vllm-benchmark-supported-models-930` to switch to another available model.
.. seealso::
For more information on configuration, see the `config files
<https://github.com/ROCm/MAD/tree/develop/scripts/vllm/configs>`__
in the MAD repository. Refer to the `vLLM engine <https://docs.vllm.ai/en/latest/configuration/engine_args.html#engineargs>`__
for descriptions of available configuration options
and `Benchmarking vLLM <https://github.com/vllm-project/vllm/blob/main/benchmarks/README.md>`__ for
additional benchmarking information.
.. rubric:: Launch the container
You can run the vLLM benchmark tool independently by starting the
`Docker container <{{ docker.docker_hub_url }}>`_ as shown
in the following snippet.
.. code-block:: shell
docker pull {{ docker.pull_tag }}
docker run -it \
--device=/dev/kfd \
--device=/dev/dri \
--group-add video \
--shm-size 16G \
--security-opt seccomp=unconfined \
--security-opt apparmor=unconfined \
--cap-add=SYS_PTRACE \
-v $(pwd):/workspace \
--env HUGGINGFACE_HUB_CACHE=/workspace \
--name test \
{{ docker.pull_tag }}
.. rubric:: Throughput command
Use the following command to start the throughput benchmark.
.. code-block:: shell
model={{ model.model_repo }}
tp={{ model.config.tp }}
num_prompts={{ model.config.num_prompts | default(1024) }}
in={{ model.config.in | default(128) }}
out={{ model.config.out | default(128) }}
dtype={{ model.config.dtype | default("auto") }}
kv_cache_dtype={{ model.config.kv_cache_dtype }}
max_num_seqs={{ model.config.max_num_seqs | default(1024) }}
max_num_batched_tokens={{ model.config.max_num_batched_tokens }}
max_model_len={{ model.config.max_model_len }}
vllm bench throughput --model $model \
-tp $tp \
--num-prompts $num_prompts \
--input-len $in \
--output-len $out \
--dtype $dtype \
--kv-cache-dtype $kv_cache_dtype \
--max-num-seqs $max_num_seqs \
--max-num-batched-tokens $max_num_batched_tokens \
--max-model-len $max_model_len \
--trust-remote-code \
--output-json ${model}_throughput.json \
--gpu-memory-utilization {{ model.config.gpu_memory_utilization | default(0.9) }}
.. rubric:: Serving command
1. Start the server using the following command:
.. code-block:: shell
model={{ model.model_repo }}
tp={{ model.config.tp }}
dtype={{ model.config.dtype }}
kv_cache_dtype={{ model.config.kv_cache_dtype }}
max_num_seqs=256
max_num_batched_tokens={{ model.config.max_num_batched_tokens }}
max_model_len={{ model.config.max_model_len }}
vllm serve $model \
-tp $tp \
--dtype $dtype \
--kv-cache-dtype $kv_cache_dtype \
--max-num-seqs $max_num_seqs \
--max-num-batched-tokens $max_num_batched_tokens \
--max-model-len $max_model_len \
--no-enable-prefix-caching \
--swap-space 16 \
--disable-log-requests \
--trust-remote-code \
--gpu-memory-utilization 0.9
Wait until the model has loaded and the server is ready to accept requests.
2. On another terminal on the same machine, run the benchmark:
.. code-block:: shell
# Connect to the container
docker exec -it test bash
# Wait for the server to start
until curl -s http://localhost:8000/v1/models; do sleep 30; done
# Run the benchmark
model={{ model.model_repo }}
max_concurrency=1
num_prompts=10
in=128
out=128
vllm bench serve --model $model \
--percentile-metrics "ttft,tpot,itl,e2el" \
--dataset-name random \
--ignore-eos \
--max-concurrency $max_concurrency \
--num-prompts $num_prompts \
--random-input-len $in \
--random-output-len $out \
--trust-remote-code \
--save-result \
--result-filename ${model}_serving.json
.. note::
For improved performance with certain Mixture of Experts models, such as Mixtral 8x22B,
try adding ``export VLLM_ROCM_USE_AITER=1`` to your commands.
If you encounter the following error, pass your access-authorized Hugging
Face token to the gated models.
.. code-block::
OSError: You are trying to access a gated repo.
# pass your HF_TOKEN
export HF_TOKEN=$your_personal_hf_token
.. raw:: html
<style>
mjx-container[jax="CHTML"][display="true"] {
text-align: left;
margin: 0;
}
</style>
.. note::
Throughput is calculated as:
- .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time
- .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time
{% endfor %}
{% endfor %}
Advanced usage
==============
For information on experimental features and known issues related to ROCm optimization efforts on vLLM,
see the developer's guide at `<https://github.com/ROCm/vllm/blob/documentation/docs/dev-docker/README.md>`__.
Reproducing the Docker image
----------------------------
To reproduce this ROCm-enabled vLLM Docker image release, follow these steps:
1. Clone the `vLLM repository <https://github.com/vllm-project/vllm>`__.
.. code-block:: shell
git clone https://github.com/vllm-project/vllm.git
cd vllm
2. Use the following command to build the image directly from the specified commit.
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.10.1_20251006-benchmark-models.yaml
{% set docker = data.dockers[0] %}
.. code-block:: shell
docker build -f docker/Dockerfile.rocm \
--build-arg REMOTE_VLLM=1 \
--build-arg VLLM_REPO=https://github.com/ROCm/vllm \
--build-arg VLLM_BRANCH="{{ docker.dockerfile.commit }}" \
-t vllm-rocm .
.. tip::
Replace ``vllm-rocm`` with your desired image tag.
Further reading
===============
- To learn more about the options for latency and throughput benchmark scripts,
see `<https://github.com/ROCm/vllm/tree/main/benchmarks>`_.
- To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide <https://github.com/ROCm/MAD?tab=readme-ov-file#usage-guide>`__.
- To learn more about system settings and management practices to configure your system for
AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_.
- See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for
a brief introduction to vLLM and optimization strategies.
- For application performance optimization strategies for HPC and AI workloads,
including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`.
- For a list of other ready-made Docker images for AI with ROCm, see
`AMD Infinity Hub <https://www.amd.com/en/developer/resources/infinity-hub.html#f-amd_hub_category=AI%20%26%20ML%20Models>`_.
Previous versions
=================
See :doc:`vllm-history` to find documentation for previous releases
of the ``ROCm/vllm`` Docker image.

View File

@@ -0,0 +1,472 @@
:orphan:
.. meta::
:description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the ROCm vLLM Docker image.
:keywords: model, MAD, automation, dashboarding, validate
**********************************
vLLM inference performance testing
**********************************
.. caution::
This documentation does not reflect the latest version of ROCm vLLM
inference performance documentation. See :doc:`../vllm` for the latest version.
.. _vllm-benchmark-unified-docker-1103:
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.11.1_20251103-benchmark-models.yaml
{% set docker = data.dockers[0] %}
The `ROCm vLLM Docker <{{ docker.docker_hub_url }}>`_ image offers a
prebuilt, optimized environment for validating large language model (LLM)
inference performance on AMD Instinct™ MI355X, MI350X, MI325X and MI300X
GPUs. This ROCm vLLM Docker image integrates vLLM and PyTorch tailored
specifically for AMD data center GPUs and includes the following components:
.. tab-set::
.. tab-item:: {{ docker.pull_tag }}
.. list-table::
:header-rows: 1
* - Software component
- Version
{% for component_name, component_version in docker.components.items() %}
* - {{ component_name }}
- {{ component_version }}
{% endfor %}
With this Docker image, you can quickly test the :ref:`expected
inference performance numbers <vllm-benchmark-performance-measurements-1103>` for
AMD Instinct GPUs.
What's new
==========
The following is a summary of notable changes since the :doc:`previous ROCm/vLLM Docker release <vllm-history>`.
* Enabled :ref:`AITER <vllm-optimization-aiter-switches>` by default.
* Fixed ``rms_norm`` segfault issue with Qwen 3 235B.
* Known performance degradation on Llama 4 models due to `an upstream vLLM issue <https://github.com/vllm-project/vllm/issues/26320>`_.
.. _vllm-benchmark-supported-models-1103:
Supported models
================
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.11.1_20251103-benchmark-models.yaml
{% set docker = data.dockers[0] %}
{% set model_groups = data.model_groups %}
.. _vllm-benchmark-available-models-1103:
The following models are supported for inference performance benchmarking
with vLLM and ROCm. Some instructions, commands, and recommendations in this
documentation might vary by model -- select one to get started. MXFP4 models
are only supported on MI355X and MI350X GPUs.
.. raw:: html
<div id="vllm-benchmark-ud-params-picker" class="container-fluid">
<div class="row gx-0">
<div class="col-2 me-1 px-2 model-param-head">Model</div>
<div class="row col-10 pe-0">
{% for model_group in model_groups %}
<div class="col-4 px-2 model-param" data-param-k="model-group" data-param-v="{{ model_group.tag }}" tabindex="0">{{ model_group.group }}</div>
{% endfor %}
</div>
</div>
<div class="row gx-0 pt-1">
<div class="col-2 me-1 px-2 model-param-head">Variant</div>
<div class="row col-10 pe-0">
{% for model_group in model_groups %}
{% set models = model_group.models %}
{% for model in models %}
{% if models|length % 3 == 0 %}
<div class="col-4 px-2 model-param" data-param-k="model" data-param-v="{{ model.mad_tag }}" data-param-group="{{ model_group.tag }}" tabindex="0">{{ model.model }}</div>
{% else %}
<div class="col-6 px-2 model-param" data-param-k="model" data-param-v="{{ model.mad_tag }}" data-param-group="{{ model_group.tag }}" tabindex="0">{{ model.model }}</div>
{% endif %}
{% endfor %}
{% endfor %}
</div>
</div>
</div>
.. _vllm-benchmark-vllm-1103:
{% for model_group in model_groups %}
{% for model in model_group.models %}
.. container:: model-doc {{ model.mad_tag }}
{% if model.precision == "float4" %}
.. important::
MXFP4 is supported only on MI355X and MI350X GPUs.
{% endif %}
.. note::
See the `{{ model.model }} model card on Hugging Face <{{ model.url }}>`_ to learn more about your selected model.
Some models require access authorization prior to use via an external license agreement through a third party.
{% if model.precision == "float8" and model.model_repo.startswith("amd") %}
This model uses FP8 quantization via `AMD Quark <https://quark.docs.amd.com/latest/>`__ for efficient inference on AMD GPUs.
{% endif %}
{% if model.precision == "float4" and model.model_repo.startswith("amd") %}
This model uses FP4 quantization via `AMD Quark <https://quark.docs.amd.com/latest/>`__ for efficient inference on AMD GPUs.
{% endif %}
{% endfor %}
{% endfor %}
.. _vllm-benchmark-performance-measurements-1103:
Performance measurements
========================
To evaluate performance, the
`Performance results with AMD ROCm software <https://www.amd.com/en/developer/resources/rocm-hub/dev-ai/performance-results.html>`_
page provides reference throughput and serving measurements for inferencing popular AI models.
.. important::
The performance data presented in
`Performance results with AMD ROCm software <https://www.amd.com/en/developer/resources/rocm-hub/dev-ai/performance-results.html>`_
only reflects the latest version of this inference benchmarking environment.
The listed measurements should not be interpreted as the peak performance achievable by AMD Instinct GPUs or ROCm software.
System validation
=================
Before running AI workloads, it's important to validate that your AMD hardware is configured
correctly and performing optimally.
If you have already validated your system settings, including aspects like NUMA auto-balancing, you
can skip this step. Otherwise, complete the procedures in the :ref:`System validation and
optimization <rocm-for-ai-system-optimization>` guide to properly configure your system settings
before starting benchmarking.
To test for optimal performance, consult the recommended :ref:`System health benchmarks
<rocm-for-ai-system-health-bench>`. This suite of tests will help you verify and fine-tune your
system's configuration.
Pull the Docker image
=====================
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.11.1_20251103-benchmark-models.yaml
{% set docker = data.dockers[0] %}
Download the `ROCm vLLM Docker image <{{ docker.docker_hub_url }}>`_.
Use the following command to pull the Docker image from Docker Hub.
.. code-block:: shell
docker pull {{ docker.pull_tag }}
Benchmarking
============
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.11.1_20251103-benchmark-models.yaml
{% set docker = data.dockers[0] %}
{% set model_groups = data.model_groups %}
Once the setup is complete, choose between two options to reproduce the
benchmark results:
.. _vllm-benchmark-mad-1103:
{% for model_group in model_groups %}
{% for model in model_group.models %}
.. container:: model-doc {{model.mad_tag}}
.. tab-set::
.. tab-item:: MAD-integrated benchmarking
The following run command is tailored to {{ model.model }}.
See :ref:`vllm-benchmark-supported-models-1103` to switch to another available model.
1. Clone the ROCm Model Automation and Dashboarding (`<https://github.com/ROCm/MAD>`__) repository to a local
directory and install the required packages on the host machine.
.. code-block:: shell
git clone https://github.com/ROCm/MAD
cd MAD
pip install -r requirements.txt
2. On the host machine, use this command to run the performance benchmark test on
the `{{model.model}} <{{ model.url }}>`_ model using one node with the
:literal:`{{model.precision}}` data type.
.. code-block:: shell
export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models"
madengine run \
--tags {{model.mad_tag}} \
--keep-model-dir \
--live-output
MAD launches a Docker container with the name
``container_ci-{{model.mad_tag}}``. The throughput and serving reports of the
model are collected in the following paths: ``{{ model.mad_tag }}_throughput.csv``
and ``{{ model.mad_tag }}_serving.csv``.
Although the :ref:`available models
<vllm-benchmark-available-models-1103>` are preconfigured to collect
offline throughput and online serving performance data, you can
also change the benchmarking parameters. See the standalone
benchmarking tab for more information.
{% if model.tunableop %}
.. note::
For improved performance, consider enabling :ref:`PyTorch TunableOp <mi300x-tunableop>`.
TunableOp automatically explores different implementations and configurations of certain PyTorch
operators to find the fastest one for your hardware.
By default, ``{{model.mad_tag}}`` runs with TunableOp disabled (see
`<https://github.com/ROCm/MAD/blob/develop/models.json>`__). To enable it, include
the ``--tunableop on`` argument in your run.
Enabling TunableOp triggers a two-pass run -- a warm-up followed by the
performance-collection run.
{% endif %}
.. tab-item:: Standalone benchmarking
The following commands are optimized for {{ model.model }}.
See :ref:`vllm-benchmark-supported-models-1103` to switch to another available model.
.. seealso::
For more information on configuration, see the `config files
<https://github.com/ROCm/MAD/tree/develop/scripts/vllm/configs>`__
in the MAD repository. Refer to the `vLLM engine <https://docs.vllm.ai/en/latest/configuration/engine_args.html#engineargs>`__
for descriptions of available configuration options
and `Benchmarking vLLM <https://github.com/vllm-project/vllm/blob/main/benchmarks/README.md>`__ for
additional benchmarking information.
.. rubric:: Launch the container
You can run the vLLM benchmark tool independently by starting the
`Docker container <{{ docker.docker_hub_url }}>`_ as shown
in the following snippet.
.. code-block:: shell
docker pull {{ docker.pull_tag }}
docker run -it \
--device=/dev/kfd \
--device=/dev/dri \
--group-add video \
--shm-size 16G \
--security-opt seccomp=unconfined \
--security-opt apparmor=unconfined \
--cap-add=SYS_PTRACE \
-v $(pwd):/workspace \
--env HUGGINGFACE_HUB_CACHE=/workspace \
--name test \
{{ docker.pull_tag }}
.. rubric:: Throughput command
Use the following command to start the throughput benchmark.
.. code-block:: shell
model={{ model.model_repo }}
tp={{ model.config.tp }}
num_prompts={{ model.config.num_prompts | default(1024) }}
in={{ model.config.in | default(128) }}
out={{ model.config.out | default(128) }}
dtype={{ model.config.dtype | default("auto") }}
kv_cache_dtype={{ model.config.kv_cache_dtype }}
max_num_seqs={{ model.config.max_num_seqs | default(1024) }}
max_num_batched_tokens={{ model.config.max_num_batched_tokens }}
max_model_len={{ model.config.max_model_len }}
vllm bench throughput --model $model \
-tp $tp \
--num-prompts $num_prompts \
--input-len $in \
--output-len $out \
--dtype $dtype \
--kv-cache-dtype $kv_cache_dtype \
--max-num-seqs $max_num_seqs \
--max-num-batched-tokens $max_num_batched_tokens \
--max-model-len $max_model_len \
--trust-remote-code \
--output-json ${model}_throughput.json \
--gpu-memory-utilization {{ model.config.gpu_memory_utilization | default(0.9) }}
.. rubric:: Serving command
1. Start the server using the following command:
.. code-block:: shell
model={{ model.model_repo }}
tp={{ model.config.tp }}
dtype={{ model.config.dtype }}
kv_cache_dtype={{ model.config.kv_cache_dtype }}
max_num_seqs=256
max_num_batched_tokens={{ model.config.max_num_batched_tokens }}
max_model_len={{ model.config.max_model_len }}
vllm serve $model \
-tp $tp \
--dtype $dtype \
--kv-cache-dtype $kv_cache_dtype \
--max-num-seqs $max_num_seqs \
--max-num-batched-tokens $max_num_batched_tokens \
--max-model-len $max_model_len \
--no-enable-prefix-caching \
--swap-space 16 \
--disable-log-requests \
--trust-remote-code \
--gpu-memory-utilization 0.9
Wait until the model has loaded and the server is ready to accept requests.
2. On another terminal on the same machine, run the benchmark:
.. code-block:: shell
# Connect to the container
docker exec -it test bash
# Wait for the server to start
until curl -s http://localhost:8000/v1/models; do sleep 30; done
# Run the benchmark
model={{ model.model_repo }}
max_concurrency=1
num_prompts=10
in=128
out=128
vllm bench serve --model $model \
--percentile-metrics "ttft,tpot,itl,e2el" \
--dataset-name random \
--ignore-eos \
--max-concurrency $max_concurrency \
--num-prompts $num_prompts \
--random-input-len $in \
--random-output-len $out \
--trust-remote-code \
--save-result \
--result-filename ${model}_serving.json
.. note::
For improved performance with certain Mixture of Experts models, such as Mixtral 8x22B,
try adding ``export VLLM_ROCM_USE_AITER=1`` to your commands.
If you encounter the following error, pass your access-authorized Hugging
Face token to the gated models.
.. code-block::
OSError: You are trying to access a gated repo.
# pass your HF_TOKEN
export HF_TOKEN=$your_personal_hf_token
.. raw:: html
<style>
mjx-container[jax="CHTML"][display="true"] {
text-align: left;
margin: 0;
}
</style>
.. note::
Throughput is calculated as:
- .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time
- .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time
{% endfor %}
{% endfor %}
Advanced usage
==============
For information on experimental features and known issues related to ROCm optimization efforts on vLLM,
see the developer's guide at `<https://github.com/ROCm/vllm/blob/documentation/docs/dev-docker/README.md>`__.
.. note::
If you're using this Docker image on other AMD GPUs, such as the AMD Instinct MI200 Series or Radeon GPUs, add ``export VLLM_ROCM_USE_AITER=0`` to your commands, since AITER is only supported on the gfx942 and gfx950 architectures.
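To confirm which architecture your GPUs report before deciding whether to set this variable, you can query the gfx target with ``rocminfo``, which ships with ROCm in ``/opt/rocm/bin``. This is a quick sketch, not an exhaustive check.
.. code-block:: shell
# List the gfx targets reported by the agents on this system
rocminfo | grep -o 'gfx[0-9a-f]*' | sort -u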
Reproducing the Docker image
----------------------------
To reproduce this ROCm-enabled vLLM Docker image release, follow these steps:
1. Clone the `vLLM repository <https://github.com/vllm-project/vllm>`__.
.. code-block:: shell
git clone https://github.com/vllm-project/vllm.git
cd vllm
2. Use the following command to build the image directly from the specified commit.
.. datatemplate:yaml:: /data/how-to/rocm-for-ai/inference/previous-versions/vllm_0.11.1_20251103-benchmark-models.yaml
{% set docker = data.dockers[0] %}
.. code-block:: shell
docker build -f docker/Dockerfile.rocm \
--build-arg REMOTE_VLLM=1 \
--build-arg VLLM_REPO=https://github.com/ROCm/vllm \
--build-arg VLLM_BRANCH="{{ docker.dockerfile.commit }}" \
-t vllm-rocm .
.. tip::
Replace ``vllm-rocm`` with your desired image tag.
Further reading
===============
- To learn more about the options for latency and throughput benchmark scripts,
see `<https://github.com/ROCm/vllm/tree/main/benchmarks>`_.
- To learn more about MAD and the ``madengine`` CLI, see the `MAD usage guide <https://github.com/ROCm/MAD?tab=readme-ov-file#usage-guide>`__.
- To learn more about system settings and management practices to configure your system for
AMD Instinct MI300X Series GPUs, see `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_.
- See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for
a brief introduction to vLLM and optimization strategies.
- For application performance optimization strategies for HPC and AI workloads,
including inference with vLLM, see :doc:`/how-to/rocm-for-ai/inference-optimization/workload`.
- For a list of other ready-made Docker images for AI with ROCm, see
`AMD Infinity Hub <https://www.amd.com/en/developer/resources/infinity-hub.html#f-amd_hub_category=AI%20%26%20ML%20Models>`_.
Previous versions
=================
See :doc:`vllm-history` to find documentation for previous releases
of the ``ROCm/vllm`` Docker image.

View File

@@ -1,7 +1,7 @@
 :orphan:
 .. meta::
-   :description: Learn how to validate LLM inference performance on MI300X accelerators using AMD MAD and the unified
+   :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the unified
      ROCm Docker image.
    :keywords: model, MAD, automation, dashboarding, validate
@@ -18,9 +18,9 @@ vLLM inference performance testing
 The `ROCm vLLM Docker <https://hub.docker.com/r/rocm/vllm/tags>`_ image offers
 a prebuilt, optimized environment designed for validating large language model
-(LLM) inference performance on the AMD Instinct™ MI300X accelerator. This
+(LLM) inference performance on the AMD Instinct™ MI300X GPU. This
 ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for the
-MI300X accelerator and includes the following components:
+MI300X GPU and includes the following components:
 * `ROCm 6.2.0 <https://github.com/ROCm/ROCm>`_
@@ -31,7 +31,7 @@ MI300X accelerator and includes the following components:
 * Tuning files (in CSV format)
 With this Docker image, you can quickly validate the expected inference
-performance numbers on the MI300X accelerator. This topic also provides tips on
+performance numbers on the MI300X GPU. This topic also provides tips on
 optimizing performance with popular AI models.
 .. _vllm-benchmark-vllm:
@@ -51,7 +51,7 @@ Getting started
 ===============
 Use the following procedures to reproduce the benchmark results on an
-MI300X accelerator with the prebuilt vLLM Docker image.
+MI300X GPU with the prebuilt vLLM Docker image.
 .. _vllm-benchmark-get-started:
@@ -267,7 +267,7 @@ Options
 .. _vllm-benchmark-run-benchmark-v043:
-Running the benchmark on the MI300X accelerator
+Running the benchmark on the MI300X GPU
 -----------------------------------------------
 Here are some examples of running the benchmark with various options.
@@ -328,7 +328,7 @@ Further reading
   see `<https://github.com/ROCm/vllm/tree/main/benchmarks>`_.
 - To learn more about system settings and management practices to configure your system for
-  MI300X series accelerators, see `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_
+  MI300X Series GPUs, see `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_
 - To learn how to run community models from Hugging Face on AMD GPUs, see
   :doc:`Running models from Hugging Face </how-to/rocm-for-ai/inference/hugging-face-models>`.

View File

@@ -1,7 +1,7 @@
 :orphan:
 .. meta::
-   :description: Learn how to validate LLM inference performance on MI300X accelerators using AMD MAD and the unified
+   :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the unified
      ROCm Docker image.
    :keywords: model, MAD, automation, dashboarding, validate
@@ -18,9 +18,9 @@ vLLM inference performance testing
 The `ROCm vLLM Docker <https://hub.docker.com/r/rocm/vllm/tags>`_ image offers
 a prebuilt, optimized environment designed for validating large language model
-(LLM) inference performance on the AMD Instinct™ MI300X accelerator. This
+(LLM) inference performance on the AMD Instinct™ MI300X GPU. This
 ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for the
-MI300X accelerator and includes the following components:
+MI300X GPU and includes the following components:
 * `ROCm 6.2.1 <https://github.com/ROCm/ROCm>`_
@@ -31,7 +31,7 @@ MI300X accelerator and includes the following components:
 * Tuning files (in CSV format)
 With this Docker image, you can quickly validate the expected inference
-performance numbers on the MI300X accelerator. This topic also provides tips on
+performance numbers on the MI300X GPU. This topic also provides tips on
 optimizing performance with popular AI models.
 .. hlist::
@@ -74,7 +74,7 @@ Getting started
 ===============
 Use the following procedures to reproduce the benchmark results on an
-MI300X accelerator with the prebuilt vLLM Docker image.
+MI300X GPU with the prebuilt vLLM Docker image.
 .. _vllm-benchmark-get-started:
@@ -332,7 +332,7 @@ Options
 .. _vllm-benchmark-run-benchmark-v064:
-Running the benchmark on the MI300X accelerator
+Running the benchmark on the MI300X GPU
 -----------------------------------------------
 Here are some examples of running the benchmark with various options.
@@ -398,7 +398,7 @@ Further reading
   see `<https://github.com/ROCm/vllm/tree/main/benchmarks>`_.
 - To learn more about system settings and management practices to configure your system for
-  MI300X series accelerators, see `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_
+  MI300X Series GPUs, see `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_
 - To learn how to run community models from Hugging Face on AMD GPUs, see
   :doc:`Running models from Hugging Face </how-to/rocm-for-ai/inference/hugging-face-models>`.

View File

@@ -1,7 +1,7 @@
 :orphan:
 .. meta::
-   :description: Learn how to validate LLM inference performance on MI300X accelerators using AMD MAD and the
+   :description: Learn how to validate LLM inference performance on MI300X GPUs using AMD MAD and the
      ROCm vLLM Docker image.
    :keywords: model, MAD, automation, dashboarding, validate
@@ -18,9 +18,9 @@ LLM inference performance validation on AMD Instinct MI300X
 The `ROCm vLLM Docker <https://hub.docker.com/r/rocm/vllm/tags>`_ image offers
 a prebuilt, optimized environment for validating large language model (LLM)
-inference performance on the AMD Instinct™ MI300X accelerator. This ROCm vLLM
+inference performance on the AMD Instinct™ MI300X GPU. This ROCm vLLM
 Docker image integrates vLLM and PyTorch tailored specifically for the MI300X
-accelerator and includes the following components:
+GPU and includes the following components:
 * `ROCm 6.3.1 <https://github.com/ROCm/ROCm>`_
@@ -29,7 +29,7 @@ accelerator and includes the following components:
 * `PyTorch 2.7.0 (2.7.0a0+git3a58512) <https://github.com/pytorch/pytorch>`_
 With this Docker image, you can quickly validate the expected inference
-performance numbers for the MI300X accelerator. This topic also provides tips on
+performance numbers for the MI300X GPU. This topic also provides tips on
 optimizing performance with popular AI models. For more information, see the lists of
 :ref:`available models for MAD-integrated benchmarking <vllm-benchmark-mad-v066-models>`
 and :ref:`standalone benchmarking <vllm-benchmark-standalone-v066-options>`.
@@ -47,7 +47,7 @@ Getting started
 ===============
 Use the following procedures to reproduce the benchmark results on an
-MI300X accelerator with the prebuilt vLLM Docker image.
+MI300X GPU with the prebuilt vLLM Docker image.
 .. _vllm-benchmark-get-started:
@@ -377,7 +377,7 @@ Options and available models
 .. _vllm-benchmark-run-benchmark-v066:
-Running the benchmark on the MI300X accelerator
+Running the benchmark on the MI300X GPU
 -----------------------------------------------
 Here are some examples of running the benchmark with various options.
@@ -443,7 +443,7 @@ Further reading
   see `<https://github.com/ROCm/vllm/tree/main/benchmarks>`_.
 - To learn more about system settings and management practices to configure your system for
-  MI300X series accelerators, see `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_
+  MI300X Series GPUs, see `AMD Instinct MI300X system optimization <https://instinct.docs.amd.com/projects/amdgpu-docs/en/latest/system-optimization/mi300x.html>`_
 - To learn how to run community models from Hugging Face on AMD GPUs, see
   :doc:`Running models from Hugging Face </how-to/rocm-for-ai/inference/hugging-face-models>`.
Some files were not shown because too many files have changed in this diff.