Compare commits

127 Commits

Author SHA1 Message Date
randyh62
af9f9f1919 Update RELEASE.md
Add Optimized entry and remove Resolved Issue for HIP
2025-05-15 12:09:45 -07:00
Pratik Basyal
64ecf4ec07 6.1.5 column added to historical compatibility matrix for ROCm 6.3.1 (#4641)
* 6.1.5 version misalignment fixed
2025-04-17 11:49:25 -04:00
Pratik Basyal
304d422ab2 Debian single-node support added to 6.3.1 compatibility matrix (#4355)
* Debian single-node support added

* Wordlist updated

* Updated historical footnote
2025-02-07 18:26:26 -05:00
Peter Park
e3d4b4836e Merge pull request #4349 from peterjunpark/docs/6.3.1
[docs/6.3.1] Update vLLM benchmarking guide
2025-02-05 17:49:55 -05:00
Peter Park
edf3a3975b Fix ROCm Bandwidth Test license type
(cherry picked from commit 9b0ae86b1b)
2025-02-05 17:21:18 -05:00
Peter Park
4964169c82 Update vLLM benchmarking guide (#4347)
* update vllm-benchmark

fix hlist overflow

update standalone benchmarking options

update list of models

fix typo and model name

remove unnecessary duplicate info

update formatting

update vllm benchmark guide

- remove Llama 2 FP8
- add Jais 13B
- update commands

update docker pull tag

update MAD available models

remove extra mad models not relevant to vllm

update PyTorch version

add changelog

add model names to .wordlist.txt

* Update docs/how-to/rocm-for-ai/inference/vllm-benchmark.rst

Co-authored-by: Pratik Basyal <pratik.basyal@amd.com>

* Update docs/how-to/rocm-for-ai/inference/vllm-benchmark.rst

Co-authored-by: Pratik Basyal <pratik.basyal@amd.com>

* Update docs/how-to/rocm-for-ai/inference/vllm-benchmark.rst

Co-authored-by: Pratik Basyal <pratik.basyal@amd.com>

* fix typo

* update link

* fix link text

* change changelog to previous versions

* fix typo

* remove "for"

---------

Co-authored-by: Pratik Basyal <pratik.basyal@amd.com>
(cherry picked from commit 2751a17cf0)
2025-02-05 17:21:11 -05:00
Pratik Basyal
d734eea462 Updated the installation method link in ROCm install on Linux (#4313) (#4336) 2025-02-04 10:03:51 -05:00
Pratik Basyal
690cc0e1d0 Azure Linux support for 6.3.1 added (#4333)
* Azure Linux support for 6.3.1 added
2025-02-03 15:10:04 -05:00
randyh62
3c34ba2980 Azure for 6.3.1 (#4325)
* Update what-is-rocm.rst

Add image for Azure Linux

* Add ROCm software stack from 6.3.2 to add Azure Linux as a supported OS in 6.3.1
2025-01-31 15:16:03 -08:00
Pratik Basyal
f05dcf6b00 2nd POC for How to Use ROCm for AI (#282) (#4301)
New TOC for ROCm for AI

---------

Co-authored-by: Peter Park <peter.park@amd.com>
2025-01-27 17:09:46 -05:00
Peter Park
a633bef754 Merge pull request #4284 from peterjunpark/docs/6.3.1
[docs/6.3.1] fix link to llama cookbook (#4269)
2025-01-21 17:43:40 -05:00
Peter Park
2e5dad7628 fix link to llama cookbook (#4269)
(cherry picked from commit 8dd99fe3a4)
2025-01-21 16:21:47 -05:00
Istvan Kiss
b50e861eef Remove duplicate entry in performance counters 2025-01-16 22:16:05 +01:00
Pratik Basyal
69c3ec2167 ROCprofiler SDK changelog added to 6.3.1 RN (#4260)
* ROCprofiler SDK changelog added

* Doc link added

Co-authored-by: Swati Rawat <120587655+SwRaw@users.noreply.github.com>

* Reference to RDC added

---------

Co-authored-by: Swati Rawat <120587655+SwRaw@users.noreply.github.com>
2025-01-15 13:09:51 -05:00
Alex Xu
e097719ba5 upgrade rocm-docs-core from 1.11.0 to 1.13.0 2025-01-13 11:25:42 -05:00
Peter Park
34fee2db6c Add TensorFlow compatibility docs (#4247)
* Add Tensorflow

* WIP

* WIP

* minor fmt

* PR feedback

* fix missed inconsistent formatting

* WIP

WIP

WIP

WIP

* minor formatting

update tensorflow-rocm docker images to rocm6.3.1

fix urls

* WIP

* fix typo and update wordlist

* fix tables not rendering

* fix table headings

* add period

* update tf dockers

* fix link

* fix link

* wording

* update historical compat

* fix tensile link

---------

Co-authored-by: Mátyás Aradi <matyas@streamhpc.com>
Co-authored-by: Istvan Kiss <neon60@gmail.com>
(cherry picked from commit 26553d725b)
2025-01-09 14:47:08 -05:00
Pratik Basyal
f2962df5cf HPC application list updated (#4066) (#4245)
* PETSc added

* List of HPC applications updated for 6.2.4

* Leo's feedback incorporated

* Review feedback incorporated

* vllm removed

---------

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
2025-01-08 10:07:27 -05:00
Peter Park
b98d9db046 Merge pull request #4240 from peterjunpark/docs/6.3.1
Add JAX compatibility and docs fixes to 6.3.1
2025-01-07 11:58:22 -05:00
Peter Park
12f54bf824 Fix target branches in list of licenses (#4232)
* fix target branches in licenses list: set to repo defaults

* update LICENSE, conf.py, and fix license.md

* update missed 2024

(cherry picked from commit 8a3f00d4e2)
2025-01-07 11:25:35 -05:00
Peter Park
faeb282e76 fix supported jax versions in compat matrix (#4236)
(cherry picked from commit 8d50479762)
2025-01-07 11:23:55 -05:00
Peter Park
d1ca7ebd66 Add JAX compatibility doc (#4234)
* Add JAX compatibility

(cherry picked from commit 99215ab6b4cf6a1209d6c5fc781b5855251dcba5)

* WIP

(cherry picked from commit 54564a85d340b4149ed80a33377cf54c1eb48713)

* Fix docker table

(cherry picked from commit 8115a905764c869b390de2561e5f1356ec7e9743)

* WIP

(cherry picked from commit 45076e1fd20fd2c43f7a0ab6d8d5d246c498d801)

* add minor formatting

(cherry picked from commit c75706841092006c26766611b0407b79a13c7345)

* PR feedback

(cherry picked from commit 236b5daae4251c26cd697c6e20d5982771b05754)

* fix inconsistent formatting

(cherry picked from commit 0c6a2e3627f9e6159e3f400ab18769904c18097e)

* Rename file

(cherry picked from commit f17239aa8a9fa1ecdf8dab08c0348dc9216c5311)

* jax_triton supported

(cherry picked from commit fa56d697fbaa44c0c480df71dc236be8584291c0)

* WIP

(cherry picked from commit e8f0c5741fe96bb1e3272365906334d911a9a849)

* WIP

(cherry picked from commit 8ee4f3c62da8e11eea591340dc7c9fc1be8b7035)

* WIP

(cherry picked from commit 58c6bf441054fe3a21ba2d86808279e90de847b7)

* WIP

(cherry picked from commit 368ddf6925215a9bfd75a43c7c33def12238f81d)

* update .wordlist.txt

(cherry picked from commit 78ac332c8d6eba93e2b3e57440da3f60054bbadb)

* update .wordlist.txt

(cherry picked from commit 8d9492399f4b73b0c3c5359684d5b7faa328ba0f)

* Fix typos

(cherry picked from commit 394dede13b6de087237832fe3c693c11da7d733b)

* update jax note

(cherry picked from commit ceacc713c4295f8bbd20fc622579de9053b73337)

* Update docs/compatibility/ml-compatibility/jax-compatibility.rst

(cherry picked from commit b0613e914a2ba639fddea62eb495f97beaa8ba49)

* Update docs/compatibility/ml-compatibility/jax-compatibility.rst

(cherry picked from commit 8aac4344b6fd4120a3b8a31878f5316df99f3f99)

* Add back hipGraph support

(cherry picked from commit 028ddb3535073e0cd668c24614a0a73a491b5948)

* WIP

(cherry picked from commit 2e0ff9c5e3f88ceea6b0ca770bb4edb52ce08a47)

* WIP

(cherry picked from commit 186802585de5b7d58f9ac2a7947a83c037df1617)

* add blurb about docker icon

(cherry picked from commit aef650d4072578f75e7549151613f390f6545ce1)

* update pytorch-compatibility path in conf.py

* words

---------

Co-authored-by: Mátyás Aradi <matyas@streamhpc.com>
Co-authored-by: Istvan Kiss <neon60@gmail.com>
(cherry picked from commit ff1393142b)
2025-01-07 11:23:46 -05:00
Pratik Basyal
01fd243fb8 AMD SMI GitHub reference updated (#4237)
(cherry picked from commit b069ca1885)
2025-01-07 11:23:35 -05:00
Peter Park
9a62ff8651 Merge pull request #4220 from peterjunpark/docs/6.3.1
[docs/6.3.1] remove `windows` from pytorch compat header (#4219)
2025-01-03 16:11:49 -05:00
Peter Park
92c792b623 remove windows from pytorch compat header (#4219)
(cherry picked from commit eafa2de533)
2025-01-03 11:17:36 -05:00
alexxu-amd
7e7534f5aa remove MPI page from index (#4206)
Co-authored-by: Alex Xu <alex.xu@amd.com>
(cherry picked from commit f7e04d8fb0)
2024-12-30 14:25:25 -05:00
alexxu-amd
1f36fa0d3e Remove gpu-cluster-networking and 'Using MPI' page due to migration to Instinct Docs (#4201)
* remove 'Using MPI' and 'gpu-cluster-networking' sections due to migration to dcgpu

* remove gpu-cluster-networking from index page

---------

Co-authored-by: Alex Xu <alex.xu@amd.com>
(cherry picked from commit 85bd6e98f5)
2024-12-30 09:45:55 -05:00
Peter Park
3e9f3a6986 [docs/6.3.1] Add PyTorch compatibility doc (#4199)
* Add PyTorch compatibility doc (#4193)

* Add compatibility framework pages

* update formatting

* WIP

* satisfy spellcheck linter

* PR feedback

* caps

* remove jax and tensorflow pages

* comment out "?"s

* update wordlist

* fix toc and table

* update toc and deep-learning-rocm.rst

---------

Co-authored-by: Istvan Kiss <neon60@gmail.com>
(cherry picked from commit 76d6e892bb)

* Fix PyTorch Compatibility link and remove incomplete rows (#4195)

* fix pytorch-compatibility filename

fix links

* remove incomplete rows in pytorch-compatibility

* fix broken refs

(cherry picked from commit f76145c2ad)
2024-12-24 13:25:41 -05:00
Peter Park
702500dacc Fix PyTorch Compatibility link and remove incomplete rows (#4195)
* fix pytorch-compatibility filename

fix links

* remove incomplete rows in pytorch-compatibility

* fix broken refs

(cherry picked from commit f76145c2ad)
2024-12-24 13:16:59 -05:00
alexxu-amd
9bb63ef477 Change version variable to latest
Since gpu-cluster-networking is being moved to dcgpu, all versioning will be renamed.

(cherry picked from commit 027b2ea376)
2024-12-23 18:31:16 -05:00
alexxu-amd
21167b9441 Update index.md
(cherry picked from commit fe69fc1bb4)
2024-12-23 18:08:10 -05:00
alexxu-amd
18772310d2 Update _toc.yml.in
(cherry picked from commit 4d31d717a6)
2024-12-23 18:07:53 -05:00
Peter Park
b71477a521 Add PyTorch compatibility doc (#4193)
* Add compatibility framework pages

* update formatting

* WIP

* satisfy spellcheck linter

* PR feedback

* caps

* remove jax and tensorflow pages

* comment out "?"s

* update wordlist

* fix toc and table

* update toc and deep-learning-rocm.rst

---------

Co-authored-by: Istvan Kiss <neon60@gmail.com>
(cherry picked from commit 76d6e892bb)
2024-12-23 18:07:39 -05:00
Alex Xu
ecc4d589e0 Merge branch 'roc-6.3.x' into docs/6.3.1 2024-12-20 18:44:32 -05:00
Alex Xu
deb4895b11 Merge branch 'develop' into roc-6.3.x 2024-12-20 18:42:53 -05:00
Peter Park
47502421e1 fix merge conflicts (#4185) 2024-12-20 18:34:17 -05:00
Peter Park
b8ce0db5b2 Merge pull request #4186 from peterjunpark/docs/6.3.1
Fix links and TOC in 6.3.1
2024-12-20 18:21:36 -05:00
Peter Park
5e9fea1494 Fix links in docs (#4182)
* fix links

* missing space

* fix link in known issue

* Remove programming guide from TOC

(cherry picked from commit bf9727db74)
2024-12-20 18:18:49 -05:00
Peter Park
d4d03208be Fix broken links in 6.3.1 docs (#4180)
* fix links

* missing space

(cherry picked from commit 49d253dd13)
2024-12-20 18:18:43 -05:00
Peter Park
bf9727db74 Fix links in docs (#4182)
* fix links

* missing space

* fix link in known issue

* Remove programming guide from TOC
2024-12-20 17:21:30 -05:00
Peter Park
49d253dd13 Fix broken links in 6.3.1 docs (#4180)
* fix links

* missing space
2024-12-20 16:51:39 -05:00
alexxu-amd
94f97d4428 Merge pull request #4179 from ROCm/amd/alexxu12/update631
Change version from 6.3.0 to 6.3.1
2024-12-20 16:18:00 -05:00
Pratik Basyal
33891d6fd0 BAR memory update for develop (#4168)
* BAR memory page added

* Leo's feedback incorporated

* Spell check fixed

* SME review feedback incorporated

* Feedback updated

* Indentation fixed

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-20 16:05:49 -05:00
Alex Xu
859923e595 change version from 6.3.0 to 6.3.1 2024-12-20 15:53:46 -05:00
alexxu-amd
2b044f98c9 Sync develop into docs/6.3.1 2024-12-20 15:24:57 -05:00
alexxu-amd
74f3e26b62 Merge pull request #4177 from ROCm/sync-develop-from-internal
Sync develop from internal
2024-12-20 15:18:42 -05:00
Alex Xu
09a5a2f23a resolve merge conflict 2024-12-20 15:06:43 -05:00
Peter Park
59f217be22 fix merge conflicts in workload.rst 2024-12-20 14:52:30 -05:00
Alex Xu
2fd18ab1b6 revert KFD change back 2024-12-20 14:23:57 -05:00
Alex Xu
d275733631 Merge remote-tracking branch 'internal/develop' into sync-develop-from-internal 2024-12-20 14:00:20 -05:00
darren-amd
16e18bf5e6 External CI: Add rocprof-sdk and aqlprofile dependencies to rocprof-sys (#4176) 2024-12-20 13:59:39 -05:00
Joseph Macaranas
c9dd86f2d2 External CI: Add toggle for torchvision build and test (#4173)
- Recent vision compilation has been failing, and debugging hasn't been fruitful in finding the cause.
- Should unblock the nightly job to at least build and test pytorch while the debug effort continues after the holidays.
- pytorch build and test is unblocked by temporarily patching the composable_kernel submodule on upstream pytorch to latest develop, until that submodule is updated to have an explicit cast for hneg.
2024-12-20 13:59:04 -05:00
Peter Park
95845105a5 Fix 6.3.1 links (#269) 2024-12-20 13:23:26 -05:00
Pratik Basyal
14a6fd5837 RELEASE.md and autotag template updated for 6.3.1 release prep (#268)
* Release ready final changes and template update

* wordlist updated

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-20 12:56:13 -05:00
alexxu-amd
6791382b26 Sync develop into docs/6.3.1 2024-12-20 12:11:44 -05:00
Pratik Basyal
eb06305a88 Release management feedback incorporated (#267)
* Release management feedback incorporated

* RN updated

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-20 11:59:50 -05:00
Pratik Basyal
771518a2aa ROCm runfile link reverted to internal (#266)
Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-20 10:00:45 -05:00
Pratik Basyal
3e714f683c Known issue draft added (#265)
* Known issue draft added

* SME feedback incorporated

* Known issue updated

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-20 06:57:41 -05:00
Pratik Basyal
1998affd4f Link update for install instruction (#262)
* Link update for install instruction

* Debian related changes

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-20 06:47:22 -05:00
Peter Park
8d3de707e2 add MI325 to GPU specs page (#264) 2024-12-19 15:20:00 -05:00
randyh62
5f86dba37c Update RELEASE.md (#263)
remove HIP 6.3.1 content that is not in the release
2024-12-19 11:34:11 -08:00
Pratik Basyal
9a602592a0 MI325X GPU support (#260)
* MI325X support added

* RN updated

* MI325X moved to OS hardware support

* Update RELEASE.md

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

* Update RELEASE.md

* Update RELEASE.md

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
2024-12-19 12:46:05 -05:00
Pratik Basyal
6d524affd2 Doc highlight updated (#259)
Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-19 10:57:58 -05:00
alexxu-amd
988da8a0d1 Merge pull request #256 from ROCm/sync-develop-from-external
Sync develop from external
2024-12-19 09:52:54 -05:00
alexxu-amd
758e8a33db Merge branch 'develop' into sync-develop-from-external 2024-12-19 09:48:30 -05:00
Istvan Kiss
21d26e52d0 Add graph safe support 2024-12-19 14:48:58 +01:00
Pratik Basyal
134ee09631 Footnote updated (#258)
Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-18 18:43:47 -05:00
randyh62
4f10f22920 ROCm image 6.3.1 (#257)
* Add files via upload

Add Debian OS without TransferBench

* Delete docs/data/rocm-software-stack-6_3_0.jpg

remove 6-3-0 image

* Update what-is-rocm.rst

Remove link and description for TransferBench

* Update rocm-tools.md

remove TransferBench

* Update what-is-rocm.rst

update ROCm Software Stack image name

* Add files via upload

Add correct image
2024-12-18 15:12:33 -08:00
Alex Xu
935a55f703 resolve more merge conflicts 2024-12-18 16:49:16 -05:00
Alex Xu
45d59ffdba add back overwritten PR 2024-12-18 16:41:24 -05:00
alexxu-amd
c2be7ee900 Merge branch 'develop' into sync-develop-from-external 2024-12-18 16:19:12 -05:00
Pratik Basyal
279a241c11 Quickupdates release631 develop (#255)
* TransferBench added

* Minor fix

* Table alignment fixed

* Review feedback

* Leo's feedback incorporated

* Fixed issue added

* Compatibility Matrix table fixed

* JAX version updated

* Debian support added

* TransferBench added

* Debian footnote updated

* Debian added to wordlist

* Debian footnote updated

* wordlist updated

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-18 16:08:13 -05:00
Alex Xu
0356ffd148 Merge remote-tracking branch 'external/develop' into sync-develop-from-external 2024-12-18 15:57:08 -05:00
randyh62
cf3c4d1e67 Whatis 6.3.1 (#253)
* Update what-is-rocm.rst

minor change

* Update rocm-tools.md

minor change
2024-12-18 07:11:26 -08:00
Pratik Basyal
cbe105ae8f TransferBench Release Notes Updated (#252)
* TransferBench added

* Minor fix

* Table alignment fixed

* Review feedback

* Leo's feedback incorporated

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-18 00:00:51 -05:00
randyh62
5c5b5cce73 Add files via upload (#250)
Add ROCm image with TransferBench
2024-12-17 15:37:09 -08:00
Pratik Basyal
72dce7937b Add TransferBench to release notes for 6.3.1 (#251)
* TransferBench added

* Minor fix

* Table alignment fixed

* Review feedback

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-17 17:49:07 -05:00
randyh62
08d9286cd5 Update ROCm software stack image for 6.3.1 (#245) 2024-12-17 11:55:51 -08:00
Pratik Basyal
6a7d8654ad Revamped PCIe into new format and incorporated style guide (#4051)
* Revamped PCIe into new format and incorporated style guide

* Title case fixed

* Quick fix and changes

* Added RMW to wordlist and updated titles

* Grammatical fixes incorporated

* Sandra's review feedback incorporated

* Removed PCIe3 feature reference

* Leo's feedback incorporated

* Sandra's feedback incorporated

* Replaced execute with run

* Replaced executing with running

* SME review feedback incorporated

* Minor feedback updated

* Sandra's feedback incorporated

* Filename renamed

* File rename changes updated

* Document title updated

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-17 12:00:00 -05:00
Peter Park
c5ee1196c4 6.3.1 release notes style update (#249)
* update RELEASE.md

(cherry picked from commit ad449acd31c57c93be0d9f7dc8270f48747c11e1)

* Update RELEASE.md

Co-authored-by: Pratik Basyal <pratik.basyal@amd.com>

---------

Co-authored-by: Pratik Basyal <pratik.basyal@amd.com>
2024-12-16 15:18:01 -05:00
darren-amd
dc648ad764 External CI: Disable rocprof-system openmp target example (#4166) 2024-12-16 14:38:55 -05:00
Peter Park
f9dbc1f21f add megatron training doc (#4159)
* add megatron training doc

update toc

add images

update formatting and wording

formatting

update formatting

update conf.py

update formatting

update docker img

tweak formatting

Fix stuff

fix mock-data/data-path

add specific commit hash to checkout

update docker pull tag

fix docker run cmd and examples path

fix docker cmd

* wording

words

words

* improve title
2024-12-16 13:37:35 -05:00
Yanyao Wang
a857597340 Merge pull request #4162 from WBobby/develop-pr
Update build scripts of ROCm 6.3 release to develop branch
2024-12-16 12:23:27 -06:00
Yanyao Wang
8c036531e8 Merge pull request #4163 from WBobby/roc-6.3.x-pr
Update build scripts of ROCm 6.3 release to roc-6.3.x branch
2024-12-16 12:23:11 -06:00
Pratik Basyal
e839c63a01 Updated staging reference for review (#247)
Co-authored-by: prbasyal <prbasyal@amd.com>
2024-12-16 13:07:42 -05:00
Pratik Basyal
6d997135a5 Update Compatibility Matrix for 6.3.1 (#240)
* Updated for 6.3.1

* Compatible version updated from RC1 build

Co-authored-by: Peter Park <peter.park@amd.com>

* Compatibility table and rst updated

* Compatible version updated from RC1 build

Co-authored-by: Peter Park <peter.park@amd.com>

* Peter's review feedback incorporated

Co-authored-by: Peter Park <peter.park@amd.com>

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
Co-authored-by: Peter Park <peter.park@amd.com>
2024-12-16 11:19:01 -05:00
Pratik Basyal
2d6a601253 Update to 6.3.1 Release Notes (#236)
* 6.3.1 Release notes (#224)

* New release highlight on offline installer added. OS changes, known issues, resolved issues, and upcoming changes copied from 6.3.0 with updated versions

---------

Co-authored-by: prbasyal <prbasyal@amd.com>

* Updates to release notes (#229)

* Updates to release notes

* & -> and

* Updated the component changes, table, release highlights, and fixed issues (#232)

* Updated the component changes, table, release highlights, and fixed issues

* Version number and heading title fixed

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Version transition updated

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
Co-authored-by: Peter Park <peter.park@amd.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Rn 631 custombranch (#234)

* Updated the component changes, table, release highlights, and fixed issues

* Version number and heading title fixed

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Version transition updated

* OS and Hardware compatibility updated

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
Co-authored-by: Peter Park <peter.park@amd.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* [6.3.1 release notes] Add rocprof-sys changes to RN (#235)

* remove extra sections

* add rocprof-sys changelog

* add omni fixed issues and amd smi cl

* Update RELEASE.md

Version transition added in the table

* add documentation update note

* Broken link fixed

---------

Co-authored-by: Pratik Basyal <pratik.basyal@amd.com>

* added edited version of the migraphx changelog and removed CK entry (#238)

Co-authored-by: Pratik Basyal <pratik.basyal@amd.com>

* Updating ROCm-internal with 6.3.1 release notes changes (#241)

* Updated date and version

* Typos and wording fixed

* Minor fix

* Documentation update added

* MIGraphX change dropped

* Debian support removed

* New release highlight added

* hipRAND version changed

* Cross-reference to Per queue added

* Leo's review feedback incorporated

* HIP optimized section updated

* Istvan and Peter's feedback added

---------

Co-authored-by: prbasyal <prbasyal@amd.com>

* Fix changelog and new documentation note (#246)

* fix amd smi and add training a model using megatron note

* update workload tuning doc note

* fmt

---------

Co-authored-by: prbasyal <prbasyal@amd.com>
Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
Co-authored-by: Peter Park <peter.park@amd.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
2024-12-16 09:30:20 -05:00
Wang, Yanyao
484cbefc2e Update build scripts of ROCm 6.3 release to roc-6.3.x branch 2024-12-15 17:35:58 -08:00
Wang, Yanyao
a2d128749c Update build scripts of ROCm 6.3 release to develop branch 2024-12-15 17:06:23 -08:00
Joseph Macaranas
bacb49681e External CI: Typo in rocPyDecode Pipeline Parameter (#4157) 2024-12-13 14:35:35 -05:00
alexxu-amd
721b60d52f Merge pull request #4155 from amd-jnovotny/user-kernel-space-rocm-roc63x
Cherry-pick to roc-6.3.x: Change reference to kernel-mode GPU compute driver in ROCm (#4147)
2024-12-13 13:15:06 -05:00
Jeffrey Novotny
8ebe7be283 Change reference to kernel-mode GPU compute driver in ROCm (#4147)
* Change reference to kernel-mode GPU compute driver in ROCm

* More changes for kernel-mode terminology

* Fix linting

(cherry picked from commit 04fdc08328)
2024-12-13 12:13:15 -05:00
Jeffrey Novotny
04fdc08328 Change reference to kernel-mode GPU compute driver in ROCm (#4147)
* Change reference to kernel-mode GPU compute driver in ROCm

* More changes for kernel-mode terminology

* Fix linting
2024-12-13 11:46:02 -05:00
darren-amd
1b33f1d7da External CI: llvm project comgr disable spirv (#4154)
External CI: add flag to disable SPIRV from comgr build in llvm-project
2024-12-13 10:51:36 -05:00
Joseph Macaranas
fd067f7b3b External CI: HIP shared library symlinks for rocPyDecode (#4153)
- Modifying the installed HIP shared libraries to follow the Linux symbolic link convention resolves test failures in rocPyDecode.
2024-12-13 09:58:25 -05:00
spolifroni-amd
2a7520f08a Added MIGraphX changes (#4150)
* Added MIGraphX changes

* removed gfx support

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

---------

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
2024-12-12 11:19:28 -05:00
randyh62
a591218531 ROCm software stack update for 6.3.1 (#242) 2024-12-11 15:29:41 -08:00
Joseph Macaranas
5271c2c82d External CI: MIOpen Test Parameters (#4148)
- Exclude lone, consistently failing MIOpen test.
- test_rnn_seq_api is the only ctest failure, so let's filter it out for now to easily identify new failures.
2024-12-11 13:31:12 -05:00
JeniferC99
59a928f3a7 Merge pull request #4137 from ROCm/JeniferC99-patch-1
Update default.xml
2024-12-10 11:47:38 -06:00
David Galiffi
22572a9857 Add TransferBench and hipBLAS-common 2024-12-09 18:37:20 -05:00
randyh62
49e50b93c6 Update index.md (#4144) (#4146)
Remove Programming Guide topic from "How to"
2024-12-09 12:17:54 -08:00
Istvan Kiss
3354099b9c Remove GPU memory page 2024-12-09 17:23:57 +01:00
David Galiffi
794b34f40e Update default.xml
Fixed merge conflicts
2024-12-09 11:18:39 -05:00
David Galiffi
25ef417b31 Merge branch 'develop' into JeniferC99-patch-1 2024-12-09 11:17:17 -05:00
Peter Park
78f9adc6ec fix rccl hip streams section in workload tuning guide (#4140) 2024-12-09 11:06:12 -05:00
David Galiffi
4abcae54a8 Update default.xml (#4136)
Add rocJPEG
Rename omniperf to rocprofiler-compute
Rename omnitrace to rocprofiler-systems
2024-12-09 10:56:07 -05:00
JeniferC99
2690506e64 Update default.xml
SWDEV-502858
Rename Omnitrace and Omniperf
2024-12-06 23:01:56 -06:00
darren-amd
3dffe1998a External CI: add aomp dependency to rocprofiler-sdk (#4135)
External CI: Add aomp dependency for rocprofiler-sdk
2024-12-06 16:20:09 -05:00
Peter Park
059c2cd9a4 6.3.0 release notes (#199)
* generate 6.3.0 RELEASE.md

* add 6.3.0 os/hw support

* regenerate changelog

* update table

* add amd smi and fix fmt

* add rocjpeg note

* add missed changelog entries

* update ga date

* add SHARK toolkit introduced note

update SHARK note

* Edited some components (#202)

* Edited some components

* fixed formatting on rocal

* markdown fail on the last commit; fixed

* capitalization fix

* Copy edit component change logs (#203)

* fix some formatting

* fix table and add OpenCL note

fix fmt

fix more formatting

* add radeon note

* add rocmsmi

* Updated hipCUB, rocPrim, and rocThrust (#206)

* fix some stuff

* add transferbench

* Edits to RCCL 6.3 change log (#207)

* Update tools/autotag/templates/upcoming_changes/6.3.0.md

* fix formatting

* fix sphinx underline warning

* add @lpaoletti's highlights

* fix os support

* add missing kernel version

* fix heading

* add bitsandbytes ki

* Copy edits to release notes (#208)

* Copy edits to release notes

* Additional updates to release notes

* updated shark AI toolkit description

* fix formatting

* update opencl

* update opencl

fixes and updates

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* fix omnitools rename text

* Apply suggestions from code review

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

* Update RELEASE.md

* Update RELEASE.md

* Update RELEASE.md

* Update RELEASE.md

* Update RELEASE.md

* update omniperf and tensile notes

* Update RELEASE.md

* Update RELEASE.md

* Update RELEASE.md

* Update RELEASE.md

* Update RELEASE.md

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

* made some copy edits (#209)

* Apply suggestions from code review

* Update RELEASE.md

* Apply suggestions from code review

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* indent

* add more highlights

* update shark urls

* add omni notes

* Apply suggestions from code review

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* update some changelogs

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* update some cls

* and missed changelogs

* add missed component updates

* fix links

* add amdgpu-dkms highlight

* Update RELEASE.md

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* change links

* add fixed issues

* @neon60's changes

Co-authored-by: Istvan Kiss <neon60@gmail.com>

* Apply suggestions from code review

Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
Co-authored-by: Swati Rawat <120587655+SwRaw@users.noreply.github.com>

* rm extra hip docs

* add hip links

* add fixed issue

fix

* Update RELEASE.md

Co-authored-by: Istvan Kiss <neon60@gmail.com>

* Update RELEASE.md

Co-authored-by: Istvan Kiss <neon60@gmail.com>

* Update RELEASE.md

Co-authored-by: Istvan Kiss <neon60@gmail.com>

* fix ri

* fix zebra

* Update RELEASE.md

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

* rm extra amd smi info

* Apply suggestions from code review

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>

* add more about omni rename

fix rename stuff

* Update RELEASE.md

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

* Update RELEASE.md

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

* fix formatting

* wording

* fix link

* update aotriton

* remove libraries performance improved

* fix rhel version

* fix urls

shorten title

* Apply suggestions from code review

Co-authored-by: Swati Rawat <120587655+SwRaw@users.noreply.github.com>

* Release notes updates (#212)

* Made language more precise (#211)

MIVisionX and rocAL were changed. An awkward sentence in rocAL was also fixed.

* add rocprofiler

* add rdc

add rdc entry

* Update RELEASE.md

Co-authored-by: Istvan Kiss <neon60@gmail.com>

* Update RELEASE.md

Co-authored-by: Istvan Kiss <neon60@gmail.com>

* Update RELEASE.md

Co-authored-by: Swati Rawat <120587655+SwRaw@users.noreply.github.com>

* remove bitsandbytes known issue

* fix missed hip doc

* update rocprof-compute version to 3.0.0

* remove words

* change hiprand ver to 2.11.0

* update new components descriptions

* add #

* fix tensile versions

* fix versions and add missed cls

* Update RELEASE.md

Co-authored-by: Istvan Kiss <neon60@gmail.com>

* remove resolved issue for #3493

* add rdc note

* add hiprand known issue

add hiprand known issue

add asterisk for hiprand ki

asterisk formatting

asterisk

link asterisk

* rdc known issue

* @lpaoletti updates

* @wenchenvincent add CK to Transformer Engine note

* fix links

fix links

* add roct thunk interface note

* rm 'previously'

* Apply suggestions from code review

Co-authored-by: Istvan Kiss <neon60@gmail.com>

* add known issues

* add mi300x cpfw known issue

* add mi300x cpfw known issue

add note

* spacing

* update te error KI

* rm incorrect user impact in TE known issue

* correct description of transformer engine fatal python error known issue

* update autotag/templates

* fix order

* fix typo

* update .wordlist.txt w/ lib names

* add missing css classes

* remove ROCT-Thunk-Interface from ROCm licenses

* add rocJPEG LICENSE

* fix table zebra b/c added rows

* fix capitalization in toc

* update URLs post-review

* update AMD SMI changelog

* update ROCm SMI changelog

* add opencl icd stale file KI

words

* remove Azure Linux

* update omnitrace note

* add mi200 DLM known issue

* update omnitrace note

update omnitrace note

wording

update omnitrace note

* update 6.3 ga to 11/26

* update KIs wording

* Update tools/autotag/templates/highlights/6.3.0.md

Co-authored-by: Istvan Kiss <neon60@gmail.com>

* Update tools/autotag/templates/highlights/6.3.0.md

Co-authored-by: Istvan Kiss <neon60@gmail.com>

* update TransferBench note

* remove transferbench

remove transferbench

* remove gfx12, 1151

* remove sr-iov

* rm tb

* css classes

* rm gfx12

* add back transferbench

* add transferbench to table

* rm transferbench, add as KI

* update transferbench KI workaround

* add rocprof-comp KI

fix

* fix tensile

* add backward weights conv KI

update

* remove RHEL 8.9 from OS EOS

* remove mi200 perf drop for DLMs

* add RHEL 8.9 to end of support OSes

* add omniperf/omnitrace KIs

* remove bf16 statement in mi300x KI

* update rvs versions in compat

* add amd smi KI

update

update

* words

* update GA date for 6.3.0

* add rvs KI

* add KI links

same

* rvs in compat

* update tf versions

* add rvs changelog

* update rn templates

* add possessives to wordlist

---------

Co-authored-by: spolifroni-amd <Sandra.Polifroni@amd.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
Co-authored-by: randyh62 <42045079+randyh62@users.noreply.github.com>
Co-authored-by: Istvan Kiss <neon60@gmail.com>
Co-authored-by: Swati Rawat <120587655+SwRaw@users.noreply.github.com>
Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
2024-12-03 15:16:38 -05:00
Peter Park
f53faa19ea update compatibility card to install in index.md (#221) 2024-12-02 16:32:18 -05:00
Peter Park
c5bf6c39ca remove RHEL 8.9 from compat (#223) 2024-12-02 10:57:52 -05:00
Peter Park
e677ebcb47 Trim TOC sections covered in Compatibility Matrix (#219)
* trim toc

* better org
2024-11-27 12:32:21 -05:00
Peter Park
a52c98cb2b 6.3.0 compat matrix WIP (#197)
* add tentative historical compat matrix

* update historical compat

* update compat

* update rocprof-compute version to 3.0.0

* update historical compat matrix

* update os support in historical

* update compat and historical

* fix compat formatting

* fix footnotes

* update kernel versions

* remove mi300 footnote for gfx942

* rm extra kernel ver

* add azure linux

* update tested user space vers

* spellcheck

* add roct to wordlist

* remove azure linux

* update thrust/cub versions

* fix

* rocr

rocr

* extra underscore

* add back RHEL 8.9
2024-11-27 12:00:55 -05:00
Alex Xu
d4ad0a838d update rocm-docs-core from 1.8.3 to 1.9.0 2024-11-22 19:20:15 -05:00
randyh62
e9e9fa4ba5 Change Common Language Runtime to Compute Language Runtime (#200)
* Change Common Language Runtime to Compute Language Runtime

* change rocJPEG description

* update ROCm Software Stack image
2024-11-22 15:45:26 -08:00
randyh62
c18694b0fe add azure image (#214) 2024-11-22 08:04:01 -08:00
randyh62
d83ed9d58a Update How to TOC and Index (#210) 2024-11-18 10:19:32 -08:00
alexxu-amd
0a237dfd42 Sync develop from external repo (#205)
* Update version list with 6.2.0 (#3505) (#3506)

* Fix link to meta-llama finetuning recipes

* Spellcheck fixes in release notes templates (#3526) (#3548)

* fix spelling in 5.4.x templates

* add to wordlist

* update templates

update wordlist

* remove extra_components

rm extra_components

* fix spelling

Co-authored-by: Peter Park <peter.park@amd.com>

* Fix link to rocr debug agent (#3533)

Co-authored-by: Sam Wu <22262939+samjwu@users.noreply.github.com>

* Fix intersphinx links (#3546)

* update fw install links

* fix more intersphinx links

* fix more links

* add rocPyDecode repo to ROCm 6.2 manifest file (#3541) (#3553)

Co-authored-by: Yanyao Wang <yanywang@amd.com>
Co-authored-by: Wang, Yanyao <yanyao.wang@amd.com>

* Fix typo for TFLOPs metric in MI250 architecture page

* Add rocm-examples to default.xml (#3583)

* Add rocm 6.2.0 manifest file for rocm-build scripts (#3538)

* Add rocm 6.2.0 manifest file for rocm-build scripts

Signed-off-by: David Galiffi <David.Galiffi@amd.com>

* Add "rocm-examples"

---------

Signed-off-by: David Galiffi <David.Galiffi@amd.com>

* Add a section on increasing memory allocation to the MI300A system optimization guide (#3587)

* Add a section on increasing memory allocation to the MI300A system optimization guide

* Addition to wordlist

* Change GB to GiB for consistency

* Standardize GiB/KiB spacing

* Minor wording changes

* Update build scripts for ROCm6.2 release

* fix README.md for Ubuntu24 docker

* Correct ttm to amdttm (#3648)

* Expand the section on changing thread affinity (#3653)

* Expand the section on changing thread affinity

* Clarify the methods for configuring allocatable memory settings

* Small correction

* Update model-quantization.rst to import `BitsAndBytesConfig` from transformers library (#3638)

* remove unneeded file (#3663)

* Fix intersphinx links (#3668)

* fix links in install.rst

* fix links in sys opt guides

* Add introduction and links to the new guide to the vLLM optimized Docker image on AMD Infinity Hub (#3637)

* Add introduction and links to the new guide to the vLLM optimized Docker image on AMD Infinity Hub

* Update target link for the Docker vLLM guide

* Change target URL

* Change link target URL again

* Fixed broken link to RISC-V documentation

* Add FBGEMM/FBGEMM_GPU to the Model acceleration libraries page (#3659)

* Add FBGEMM/FBGEMM_GPU to the Model acceleration libraries page

* Add words to wordlist and fix a typo

* Add new sections for Docker and testing

* Incorporate comments from the external review

* Some minor edits and clarifications

* Incorporate further review comments and fix test section

* Add comment to test section

* Change git clone command for FBGEMM repo

* Change Docker command

* Changes from internal review

* Fix linting issue

* Fixed broken links for tensile, rocprofiler, roctracer, hipify, rocm-cmake

* add missing make command to bitsandbytes install commands (#3722)

* Update link to rocRAND data type support (#3736)

* Fix Radeon link and point at R6.1.3 as absolute link (#3757)

* Include rocal version change in the highlights (#177)

* Include rocal version change in the highlights

* Reworded rocal known issues and added link to rocal in highlights

* Update ROCm manifest to 6.2.1

* Update ROCm branch name

* Add 6.2.1 to version list (#3770)

* Add links to GH issues in 6.2.1 release notes (#3769)

* add MAD page

* link to GitHub issues in release notes known issues

* update templates for 6.2.1

* Revert "add MAD page"

This reverts commit 9cce72bba3.

* update wordlist for spellcheck linter

* add rccl note

* update rocal version change heading to be more obvious

* make rocal note more specific

* fix missing space

* fix capitalization

* Update RCCL known issue wording (#3775)

* add MAD page

* fix wording in RCCL known issue

* Revert "add MAD page"

This reverts commit c81d0f3b0a.

* update llvm version for 6.2.1 (#3779)

* Fix broken links in 6.2.1 release notes (#3782)

* External CI: Replace libomp dependencies with aomp (#3781)

Add roctracer dependency for hipBLAS and rocWMMA testing

* External CI: Add rocprofiler v1 and v2 smoke tests (#3784)

* External CI: ROCgdb smoke tests (#3785)

- Since this is an autotools project and not cmake, build and test on gfx942 system instead of separating into two jobs. Pipeline time is short anyway.
- Follow build instructions to update build flags and to incorporate the ROCdbgapi.
- Results are not parsed and graphed, but the log contents are printed at the end. This was helpful for debugging and will be kept in the pipeline, as the make check-gdb command's output was not helpful on its own.

* External CI: rocPyDecode Smoke Test (#3786)

* External CI: omniperf pipeline (#3788)

- Referred to public documentation, source, and iterative attempts to create and improve build and test pipeline.
- ctest failures are due to the test node not having the expected marketing name string and the override not working.
- The fix should be on the omniperf repo side of things, so this pull request should be fine as is.

* External CI: create omniperf pipeline IDs, update nightly build (#3790)

* Fixed greater than to be less than in rocFFT changes

* fix footnote for 6.1.0 (#3791)

* fix footnote for 6.1.0

* fix empty columns in historical KFD title

* External CI: Publish wheel as artifact for rocPyDecode (#3796)

* fix build rocal for ROCm 6.2.1

* Add ROCm 6.2.1 manifest file

* External CI: fix hip-tests symlink creation (#3799)

* Docs: Add Ubuntu 24.04.1 (#3801)

* add ubuntu 24.04.1

* add 24.04.1 to bottom os section

* fix heading and template

* Update compatibility-matrix.rst for OpenMP version

* Update compatibility-matrix-historical-6.0.csv for OpenMP version

* rm ubuntu 24.04.1 from 6.2.0

* Update docs/compatibility/compatibility-matrix.rst

Co-authored-by: Young Hui - AMD <145490163+yhuiYH@users.noreply.github.com>

* rm duplicate ubuntu in historical

---------

Co-authored-by: Young Hui - AMD <145490163+yhuiYH@users.noreply.github.com>

* External CI: fixes for rocMLIR and nightly build (#3800)

* External CI: fix symlinks for rocMLIR and nightly build

* add pipeline IDs for hip-tests

* fix hip-test ID typo

* remove llvm-alt license (#3727)

* remove llvm-alt license

* fix linting error

* External CI: enable ROCR-Runtime tests (#3809)

* External CI: default branches for hip-tests, omniperf (#3811)

* External CI: torch and torchvision smoke tests (#3810)

* External CI: torch and torchvision smoke tests

- Fixed issues with package name and version for the vision wheel that prevented it from installing. A patch is used until my pull request in vision repo is merged.
- Referred to rocAutomation scripts to pick which test scripts to run out of the many in the torch and vision repo, and iteratively tested suggested scripts to see which ones completed in a timely manner.
- Leveraging pytest-azurepipelines module to automatically parse and graph results from these tests.

* External CI: omnitrace build pipeline (#3812)

* External CI: omnitrace build pipeline starter

- Adding initial set of dependencies and build flags.

* External CI: omnitrace build pipeline

- Add bison, rccl, texinfo dependencies based on build failures.
- Add AMDGPU_TARGETS flag
- Add ROCm binaries to PATH for clang-format and other tools used.

* Fix indentation

---------

Co-authored-by: Daniel Su <danielsu@amd.com>

* External CI: AMDMIGraphX Build Fix (#3814)

- Swap to default gcc on OS to resolve build errors from recent commits.
- Added libdnnl-dev dependency from iterative attempts with compiler change.
- Referred to the passing GitHub checks to observe the compilers that were used.
- Build CK jit lib and include in AMDMIGraphX build.

* External CI: test fixes w/ roctracer, list omniperf as partially succeeding (#3815)

* External CI: rpp tests (#3816)

* External CI: Build pipeline for rocprofiler-sdk (#3819)

* External CI: Pipeline for rocprofiler-sdk

* Add rocprofiler dependency

* External CI: rocprofiler-sdk build pipeline

---------

Co-authored-by: Daniel Su <danielsu@amd.com>

* External CI: Fix/add missing pipeline IDs (#3818)

* Update default.xml - Change 6.2.1 to 6.2.2

* Add ROCm 6.2.1 manifest file

* External CI: omnitrace tests (#3822)

* Update tags to 6.2.2 (#3827)

* External CI: add roctracer to roc/hipSOLVER test deps (#3825)

* External CI: add rocprofiler-sdk pipeline IDs (#3824)

* External CI: AMDMIGraphX Smoke Tests (#3830)

Co-authored-by: Daniel Su <danielsu@amd.com>

* External CI: MIOpen tests (#3837)

* Point to release history instead of deprecated changelog (#3836)

* External CI: filter out hipTensor extended tests (#3838)

* added revised note re. radeon gpus (#3839)

* Restructured the contributions section. (#3715)

* testing if this file is editable

* changed 'kebob-case' to 'dash-case'

* Restructured the page to be more straightforward and provide additional repo information

* forgot to save

* Moved the topic sentence

* Wrong accent on the a in diataxis

* Removed the feedback info from contributing and moved it to Feedback

* fixed spelling errors

* fixed some wording and removed second person text

* consolidated Build and Structure into Contribute; edited toolchain to (hopefully) conform to style guide; updated toc

* updated the titles in the toc

* made changes based on feedback

* it's better when you save

* removed structure and build; fixed something for the linter

* added rst to wordlist

* added customizations to wordlist

* Add links to gpu cluster network guides (#3763)

* Add links to gpu cluster network guides

* Add newline character to eof

* Make link absolute

* add dynamic branch in toc

* remove unnecessary page

clean up

* clean up index/toc

* make multi-node topics adjacent

---------

Co-authored-by: Peter Park <peter.park@amd.com>

* updated the radeon note (#3850)

* External CI: Fix rocPyDecode wheel creation (#3852)

- Set values for expected environment variables.
- Accompanying changes required in rocPyDecode repo. Pull request will be made.

* External CI: pytorch vision patch removal (#3855)

My pull request applying this patch was merged upstream, so this is no longer needed and will break the pipeline since it can no longer be applied.

* Build(deps): Bump rocm-docs-core from 1.8.1 to 1.8.2 in /docs/sphinx (#3807)

Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.8.1 to 1.8.2.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/v1.8.2/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.8.1...v1.8.2)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* updated the radeon note, as it were (#3857)

* updated the radeon note, as it were

* updated the note again

* Set devops team as codeowners for rocm-build (#3860)

* Set ext CI as codeowners for rocm-build

* Update CODEOWNERS to rocm-devops

* External CI: Add option to pull mainline branch for dependencies (#3689)

* External CI: Add option to pull mainline branch for dependencies

* Missing parameter for mainline branch dependencies.

* External CI: mainline branch definitions

* Removed MIGraphX optimization page (#3848)

* External CI: add a global variable to control gfx942 tests (#3864)

* External CI: update component default/mainline branches (#3871)

* External CI: Stop building gfx90a (#3872)

Save on VM resources until infrastructure has test targets.

* External CI: add libstdc++-12 to rocMLIR (#3874)

* Add building doc section (#3873)

* External CI: programmatically get latest aqlprofile (#3876)

* External CI: use ctest for rocm-examples (#3877)

* External CI: Tensile pipeline (#3884)

* add oversubscription conceptual doc (#3885)

add mitigation steps

add to toc

move page for build

move doc

fix spelling

update doc

update oversubscription

update order

fix spelling

add oversubscription to wordlist

move oversubscription topic to bottom of toc and index

* add oversubscription conceptual doc (#3885)

(cherry picked from commit d0ecf51b0c)

* Add building doc section (#3873)

(cherry picked from commit abc0e6a087)

* External CI: Add pipeline to build upstream boost (#3896)

* Update bitsandbytes branch in docs (#3898)

* Update bitsandbytes branch in docs (#3898)

(cherry picked from commit b541be7bcb)

* Documentation: Add reference to precision-support floating-point types (#3899)

* External CI: use Boost template for MIOpen (#3903)

* External CI: create rocprofiler-systems pipeline (#3906)

* External CI: omnitrace/rocprof-sys pipeline IDs (#3908)

* External CI: MIOpen parse test results (#3913)

* External CI: Use pip to install latest cmake on test system (#3915)

* added a link to the compatibility matrix (#3904)

* added a link to the compatibility matrix

* removed quotes

* docs: Remove invalid amd_iommu=on parameter

Per kernel-parameters.txt, there is no "on" option for amd_iommu. While
intel_iommu has it, amd_iommu is automatically on unless specified
otherwise. For more info, see these 2 links:

https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt
75aa74d52f/drivers/iommu/amd/init.c (L3481)

Signed-off-by: Kent Russell <kent.russell@amd.com>

* docs: Remove invalid amd_iommu=on parameter

Per kernel-parameters.txt, there is no "on" option for amd_iommu. While
intel_iommu has it, amd_iommu is automatically on unless specified
otherwise. For more info, see these 2 links:

https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt
75aa74d52f/drivers/iommu/amd/init.c (L3481)

Signed-off-by: Kent Russell <kent.russell@amd.com>
(cherry picked from commit 74333b667d)

* External CI: hipBLASLt build now requires python packaging module (#3926)

https://github.com/ROCm/hipBLASLt/pull/1250/files#diff-fee2e6f068b33fca3a1dc49392de8848dbf05c3f4632b680abb1052523e5a30fR35

* External CI: Moved location of upstream pytorch build scripts (#3930)

https://github.com/pytorch/pytorch/pull/138103

* External CI: disable rocMLIR tests (#3931)

* External CI: disable rocMLIR tests

* roctracer AMDGPU_TARGETS flag

* External CI: create a GPU diagnostics template (#3932)

* External CI: Add CK into pytorch build environment (#3934)

* Update rocm-6.2.2.xml (#3927)

vim typo removed

* External CI: add support to disable individual component tests (#3938)

* External CI: AMDMIGraphX greater-equal pip dependencies (#3939)

* Build(deps): Bump rocm-docs-core from 1.8.2 to 1.8.3 in /docs/sphinx (#3933)

Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.8.2 to 1.8.3.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.8.2...v1.8.3)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* External CI: rocDecode add libva-amdgpu-dev dependency (#3940)

* External CI: enumerate GPUs in gpu-diagnostics (#3942)

* External CI: move gpu-diag directly before tests (#3943)

* External CI: fix HIP_PIPELINE_ID (#3944)

* External CI: pytorch pipeline updates (#3948)

To support recent upstream changes and issues observed.

* External CI: rocpydecode dependency installation change (#3954)

- Install pybind11 through pip instead of apt
- Add pip-installed pybind11 path to CMAKE_PREFIX_PATH
- Tested against source of PR 122

* External CI: do not assume python is python3 for rocpydecode (#3955)

* Improve consistency of the gpu-arch-specs table. (#3936)

* Improve consistency of the gpu-arch-specs table.

* Add XCD to the glossary.

* External CI: Always force rocPyDecode cleanup step

* External CI: Add aqlprofile to Tensile test dependencies (#3961)

* add vllm performance validation doc (#3964)

* External CI: various fixes (#3963)

* add suggestions to vllm perf validation doc (#3968)

* External CI: move allowPartiallySucceededBuilds to library variable (#3970)

* External CI: suppress GPU diag warnings (#3972)

* External CI: rocprofiler-compute pipeline files (#3973)

* External CI: disable reload AMDGPU (#3974)

* Update links to vllm perf validation doc (#3971)

* update links to vllm perf validation doc

* add PagedAttention to wordlist

* External CI: Change test setup for rocPyDecode (#3978)

- Try multiple potential locations for pybind11 so it can be found by cmake.

* External CI: add roctracer to rocBLAS deps (#3982)

* External CI: decode test changes (#3983)

- Only target container with access to first device
- Ensure pybind11-dev is uninstalled before the package manager install steps

* Changed the introductory text linked to Radeon (#3988)

Co-authored-by: prbasyal <prbasyal@amd.com>

* External CI: finish rocprofiler-compute enablement (#3995)

* External CI: add aomp as rocprofiler-systems dependency (#3996)

* External CI: remove omniperf from nightly (#4000)

* Sync from internal develop 6.2.4 (#4002)

* add radeon pro v710 to gpu arch specs (#192)

* Add V710 specs

add some specs

add cols

clean up extra line

* fix graphics l1 cache description

* update SGPR for RDNA2 and RDNA3 archs

* update VGPR

* Apply suggestions from code review

* change l2 cache to 4

* Update docs/reference/gpu-arch-specs.rst

* ROCm 6.2.4 compatibility matrix (#186)

* prep compat column (historical) and mi300x column

* update historical compat matrix for 6.2.4

* update compat matrix for 6.2.4

* fix compat

* fix thunk version

* fix hipify ver

* ROCm 6.2.4 release notes (#184)

* prep 6.2.4 release notes

* add mathlibs

* add detail component changes

* rm non-updated links

* fix sentence

* fix rocthrust v

* rm offline installer

* condense

* add leo/ram feedback

words

* update documentation section

* add rocm on radeon note

* update os support note wording

* update release

* update version and GA date to 10-17

* update 6.2.4 rn

* update wording

* add link to v710

* update wording

* update templ

* simplify note

* words

os note

words

* change URLs to latest

* update link to supported GPUs

* Update versions.md 6.2.4 date to Oct 18

* Update conf.py release note date to Oct 18

---------

Co-authored-by: Sam Wu <22262939+samjwu@users.noreply.github.com>

* Sync change from ROCm to ROCm-internal (#194)

* Fix Radeon link and point at R6.1.3 as absolute link (#3757)

* Update ROCm manifest to 6.2.1

* Update ROCm branch name

* Add 6.2.1 to version list (#3770)

* Add links to GH issues in 6.2.1 release notes (#3769)

* add MAD page

* link to GitHub issues in release notes known issues

* update templates for 6.2.1

* Revert "add MAD page"

This reverts commit 9cce72bba3.

* update wordlist for spellcheck linter

* add rccl note

* update rocal version change heading to be more obvious

* make rocal note more specific

* fix missing space

* fix capitalization

* Update RCCL known issue wording (#3775)

* add MAD page

* fix wording in RCCL known issue

* Revert "add MAD page"

This reverts commit c81d0f3b0a.

* update llvm version for 6.2.1 (#3779)

* Fix broken links in 6.2.1 release notes (#3782)

* External CI: Replace libomp dependencies with aomp (#3781)

Add roctracer dependency for hipBLAS and rocWMMA testing

* External CI: Add rocprofiler v1 and v2 smoke tests (#3784)

* External CI: ROCgdb smoke tests (#3785)

- Since this is an autotools project and not cmake, build and test on a gfx942 system instead of separating into two jobs; pipeline time is short anyway (a sketch follows below).
- Follow the build instructions to update build flags and to incorporate ROCdbgapi.
- Results are not parsed and graphed, but the log contents are printed at the end. This was helpful for debugging and will be kept in the pipeline, as the make check-gdb command's output was not helpful on its own.
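
A minimal sketch of that build-and-test sequence on a gfx942 node; the exact configure flags come from the ROCgdb build instructions and are omitted here:

./configure                                # autotools, not cmake
make -j"$(nproc)"
make check-gdb 2>&1 | tee check-gdb.log    # the command's output alone is not useful...
tail -n 40 gdb/testsuite/gdb.sum           # ...so print the summary (typical location in gdb trees)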

* External CI: rocPyDecode Smoke Test (#3786)

* External CI: omniperf pipeline (#3788)

- Referred to public documentation, the source, and iterative attempts to create and improve the build and test pipeline.
- ctest failures are due to the test node not having the expected marketing name string and the override not working (see the sketch below).
- The fix should be on the omniperf repo side of things, so this pull request should be fine as is.
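
For reference, the override the pipeline later settles on (see the omniperf test job in the diffs below) looks like this when run by hand; the test directory is illustrative:

cd build/libexec/omniperf                  # illustrative test directory
export OMNIPERF_ARCH_OVERRIDE="MI300X"     # substitute for the missing marketing name string
ctest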

* External CI: create omniperf pipeline IDs, update nightly build (#3790)

* Fixed greater than to be less than in rocFFT changes

* fix footnote for 6.1.0 (#3791)

* fix footnote for 6.1.0

* fix empty columns in historical KFD title

* External CI: Publish wheel as artifact for rocPyDecode (#3796)

* External CI: fix hip-tests symlink creation (#3799)

* Docs: Add Ubuntu 24.04.1 (#3801)

* add ubuntu 24.04.1

* add 24.04.1 to bottom os section

* fix heading and template

* Update compatibility-matrix.rst for OpenMP version

* Update compatibility-matrix-historical-6.0.csv for OpenMP version

* rm ubuntu 24.04.1 from 6.2.0

* Update docs/compatibility/compatibility-matrix.rst

Co-authored-by: Young Hui - AMD <145490163+yhuiYH@users.noreply.github.com>

* rm duplicate ubuntu in historical

---------

Co-authored-by: Young Hui - AMD <145490163+yhuiYH@users.noreply.github.com>

* External CI: fixes for rocMLIR and nightly build (#3800)

* External CI: fix symlinks for rocMLIR and nightly build

* add pipeline IDs for hip-tests

* fix hip-test ID typo

* remove llvm-alt license (#3727)

* remove llvm-alt license

* fix linting error

* External CI: enable ROCR-Runtime tests (#3809)

* External CI: default branches for hip-tests, omniperf (#3811)

* External CI: torch and torchvision smoke tests (#3810)

* External CI: torch and torchvision smoke tests

- Fixed issues with the package name and version for the vision wheel that prevented it from installing. A patch is used until my pull request in the vision repo is merged.
- Referred to rocAutomation scripts to pick which test scripts to run out of the many in the torch and vision repos, and iteratively tested the suggested scripts to see which ones completed in a timely manner.
- Leveraging the pytest-azurepipelines module to automatically parse and graph results from these tests (a sketch follows below).
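
A minimal sketch of the pytest-azurepipelines usage; the test file name is illustrative:

python3 -m pip install pytest pytest-azurepipelines
# no extra flags are needed: once the plugin is installed, pytest publishes
# its results to the Azure Pipelines test tab automatically
pytest test/test_nn.py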

* External CI: omnitrace build pipeline (#3812)

* External CI: omnitrace build pipeline starter

- Adding initial set of dependencies and build flags.

* External CI: omnitrace build pipeline

- Add bison, rccl, texinfo dependencies based on build failures.
- Add AMDGPU_TARGETS flag
- Add ROCm binaries to PATH for clang-format and other tools used.

* Fix indentation

---------

Co-authored-by: Daniel Su <danielsu@amd.com>

* External CI: AMDMIGraphX Build Fix (#3814)

- Swap to the default gcc on the OS to resolve build errors from recent commits.
- Added the libdnnl-dev dependency from iterative attempts with the compiler change.
- Referred to the passing GitHub checks to observe the compilers that were used.
- Build the CK JIT lib and include it in the AMDMIGraphX build.

* External CI: test fixes w/ roctracer, list omniperf as partially succeeding (#3815)

* External CI: rpp tests (#3816)

* External CI: Build pipeline for rocprofiler-sdk (#3819)

* External CI: Pipeline for rocprofiler-sdk

* Add rocprofiler dependency

* External CI: rocprofiler-sdk build pipeline

---------

Co-authored-by: Daniel Su <danielsu@amd.com>

* External CI: Fix/add missing pipeline IDs (#3818)

* External CI: omnitrace tests (#3822)

* Update tags to 6.2.2 (#3827)

* External CI: add roctracer to roc/hipSOLVER test deps (#3825)

* External CI: add rocprofiler-sdk pipeline IDs (#3824)

* External CI: AMDMIGraphX Smoke Tests (#3830)

Co-authored-by: Daniel Su <danielsu@amd.com>

* External CI: MIOpen tests (#3837)

* Point to release history instead of deprecated changelog (#3836)

* External CI: filter out hipTensor extended tests (#3838)

* added revised note re. radeon gpus (#3839)

* Restructured the contributions section. (#3715)

* testing if this file is editable

* changed 'kebob-case' to 'dash-case'

* Restructured the page to be more straightforward and provide additional repo information

* forgot to save

* Moved the topic sentence

* Wrong accent on the a in diataxis

* Removed the feedback info from contributing and moved it to Feedback

* fixed spelling errors

* fixed some wording and removed second person text

* consolidated Build and Structure into Contribute; edited toolchain to (hopefully) conform to style guide; updated toc

* updated the titles in the toc

* made changes based on feedback

* it's better when you save

* removed structure and build; fixed something for the linter

* added rst to wordlist

* added customizations to wordlist

* Add links to gpu cluster network guides (#3763)

* Add links to gpu cluster network guides

* Add newline character to eof

* Make link absolute

* add dynamic branch in toc

* remove unnecessary page

clean up

* clean up index/toc

* make multi-node topics adjacent

---------

Co-authored-by: Peter Park <peter.park@amd.com>

* updated the radeon note (#3850)

* External CI: Fix rocPyDecode wheel creation (#3852)

- Set values for expected environment variables.
- Accompanying changes are required in the rocPyDecode repo; a pull request will be made.

* External CI: pytorch vision patch removal (#3855)

My pull request applying this patch was merged upstream, so the patch is no longer needed and would break the pipeline since it can no longer be applied.

* Build(deps): Bump rocm-docs-core from 1.8.1 to 1.8.2 in /docs/sphinx (#3807)

Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.8.1 to 1.8.2.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/v1.8.2/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.8.1...v1.8.2)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* updated the radeon note, as it were (#3857)

* updated the radeon note, as it were

* updated the note again

* Set devops team as codeowners for rocm-build (#3860)

* Set ext CI as codeowners for rocm-build

* Update CODEOWNERS to rocm-devops

* External CI: Add option to pull mainline branch for dependencies (#3689)

* External CI: Add option to pull mainline branch for dependencies

* Missing parameter for mainline branch dependencies.

* External CI: mainline branch definitions

* Removed MIGraphX optimization page (#3848)

* External CI: add a global variable to control gfx942 tests (#3864)

* External CI: update component default/mainline branches (#3871)

* External CI: Stop building gfx90a (#3872)

Save on VM resources until infrastructure has test targets.

* External CI: add libstdc++-12 to rocMLIR (#3874)

* Add building doc section (#3873)

* External CI: programmatically get latest aqlprofile (#3876)

* External CI: use ctest for rocm-examples (#3877)

* External CI: Tensile pipeline (#3884)

* add oversubscription conceptual doc (#3885)

add mitigation steps

add to toc

move page for build

move doc

fix spelling

update doc

update oversubscription

update order

fix spelling

add oversubscription to wordlist

move oversubscription topic to bottom of toc and index

* add oversubscription conceptual doc (#3885)

(cherry picked from commit d0ecf51b0c)

* External CI: Add pipeline to build upstream boost (#3896)

* Update bitsandbytes branch in docs (#3898)

* Documentation: Add reference to precision-support floating-point types (#3899)

* External CI: use Boost template for MIOpen (#3903)

* External CI: create rocprofiler-systems pipeline (#3906)

* External CI: omnitrace/rocprof-sys pipeline IDs (#3908)

* External CI: MIOpen parse test results (#3913)

* External CI: Use pip to install latest cmake on test system (#3915)

* added a link to the compatibility matrix (#3904)

* added a link to the compatibility matrix

* removed quotes

* docs: Remove invalid amd_iommu=on parameter

Per kernel-parameters.txt, there is no "on" option for amd_iommu. While
intel_iommu has it, amd_iommu is automatically on unless specified
otherwise. For more info, see these two links:

https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt
75aa74d52f/drivers/iommu/amd/init.c (L3481)

Signed-off-by: Kent Russell <kent.russell@amd.com>
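
A hedged before/after sketch of the corresponding kernel command line; any other parameters shown are illustrative:

# /etc/default/grub
# before: GRUB_CMDLINE_LINUX="amd_iommu=on iommu=pt"
GRUB_CMDLINE_LINUX="iommu=pt"   # amd_iommu is on by default, so the option is simply dropped
# then regenerate the config, e.g. update-grub on Debian/Ubuntu, or
# grub2-mkconfig -o /boot/grub2/grub.cfg on RHEL-based systems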

* External CI: hipBLASLt build now requires python packaging module (#3926)

https://github.com/ROCm/hipBLASLt/pull/1250/files#diff-fee2e6f068b33fca3a1dc49392de8848dbf05c3f4632b680abb1052523e5a30fR35
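
The corresponding one-line fix on a build agent, assuming a pip-managed Python environment:

python3 -m pip install packaging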

* External CI: Moved location of upstream pytorch build scripts (#3930)

https://github.com/pytorch/pytorch/pull/138103

* External CI: disable rocMLIR tests (#3931)

* External CI: disable rocMLIR tests

* roctracer AMDGPU_TARGETS flag

* External CI: create a GPU diagnostics template (#3932)

* External CI: Add CK into pytorch build environment (#3934)

* External CI: add support to disable individual component tests (#3938)

* External CI: AMDMIGraphX greater-equal pip dependencies (#3939)

* Build(deps): Bump rocm-docs-core from 1.8.2 to 1.8.3 in /docs/sphinx (#3933)

Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.8.2 to 1.8.3.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.8.2...v1.8.3)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* External CI: rocDecode add libva-amdgpu-dev dependency (#3940)

* External CI: enumerate GPUs in gpu-diagnostics (#3942)

* External CI: move gpu-diag directly before tests (#3943)

* External CI: fix HIP_PIPELINE_ID (#3944)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Kent Russell <kent.russell@amd.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
Co-authored-by: Sam Wu <22262939+samjwu@users.noreply.github.com>
Co-authored-by: Wang, Yanyao <yanyao.wang@amd.com>
Co-authored-by: Yanyao Wang <yanywang@amd.com>
Co-authored-by: Peter Park <peter.park@amd.com>
Co-authored-by: Young Hui - AMD <145490163+yhuiYH@users.noreply.github.com>
Co-authored-by: Joseph Macaranas <145489236+amd-jmacaran@users.noreply.github.com>
Co-authored-by: Daniel Su <danielsu@amd.com>
Co-authored-by: Sandra Polifroni <sandra.polifroni@amd.com>
Co-authored-by: randyh62 <42045079+randyh62@users.noreply.github.com>
Co-authored-by: Michael Benavidez <michael.benavidez@amd.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: MKKnorr <MKKnorr@web.de>
Co-authored-by: Kent Russell <kent.russell@amd.com>
Co-authored-by: Joseph Greathouse <jlgreathouse@users.noreply.github.com>

* 6.2.4 release notes: add known/fixed issues (#193)

* add "for compute workloads" wording for clarity

* add AMDSMI resolved issue

* add dlm known issue

intro text

wording

* update wording

rm bullet point

update wording

* fix spellcheck due to spacing

* rm s

* rm gfx1151

* remove dlm known issue

* update list of updated docs; note for Radeon users

fmt

* update GA date for 6.2.4

* fix rdc version

* fix RDC version strings (#196)

* revert outdated change for .azuredevops

* Fix 6.2.4 date in versions.md

Co-authored-by: Sam Wu <22262939+samjwu@users.noreply.github.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Kent Russell <kent.russell@amd.com>
Co-authored-by: Peter Park <peter.park@amd.com>
Co-authored-by: Sam Wu <22262939+samjwu@users.noreply.github.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
Co-authored-by: Wang, Yanyao <yanyao.wang@amd.com>
Co-authored-by: Yanyao Wang <yanywang@amd.com>
Co-authored-by: Young Hui - AMD <145490163+yhuiYH@users.noreply.github.com>
Co-authored-by: Joseph Macaranas <145489236+amd-jmacaran@users.noreply.github.com>
Co-authored-by: Daniel Su <danielsu@amd.com>
Co-authored-by: Sandra Polifroni <sandra.polifroni@amd.com>
Co-authored-by: randyh62 <42045079+randyh62@users.noreply.github.com>
Co-authored-by: Michael Benavidez <michael.benavidez@amd.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: MKKnorr <MKKnorr@web.de>
Co-authored-by: Kent Russell <kent.russell@amd.com>
Co-authored-by: Joseph Greathouse <jlgreathouse@users.noreply.github.com>

* fix links in release notes 6.2.4 (#4008)

* Remove extra line

* Update xml files for 6.2.4 (#4012)

* Update xml files for 6.2.4

* Update README with 6.2.4

* Increase visibility of programming guide

* Docs: Update what is rocm description

* Apply suggestions from code review

Co-authored-by: randyh62 <42045079+randyh62@users.noreply.github.com>

* Update docs/how-to/hip_programming_guide.rst

Co-authored-by: MKKnorr <MKKnorr@web.de>

* WIP

* Update docs/index.md

* Update docs/how-to/hip_programming_guide.rst

Co-authored-by: MKKnorr <MKKnorr@web.de>

* Update docs/how-to/programming_guide.rst

* Update docs/what-is-rocm.rst

* Apply suggestions from code review

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

* Update docs/how-to/programming_guide.rst

Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

* Remove tip

* External CI: allow test failures to present as failures on GitHub (#3993)

* External CI: disable rdmatest and rocrtstFunc.Memory_Max_Mem (#4016)

* Added 6.2.4 manifest.xml

* External CI: fix comgr build (#4025)

* External CI: increase Tensile test timeout to 90 mins (#4027)

---------

Signed-off-by: David Galiffi <David.Galiffi@amd.com>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Kent Russell <kent.russell@amd.com>
Co-authored-by: Sam Wu <22262939+samjwu@users.noreply.github.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
Co-authored-by: Peter Park <peter.park@amd.com>
Co-authored-by: Yanyao Wang <yanywang@amd.com>
Co-authored-by: Wang, Yanyao <yanyao.wang@amd.com>
Co-authored-by: David Galiffi <dgaliffi@amd.com>
Co-authored-by: Chris Kime <Christopher.Kime@amd.com>
Co-authored-by: ozziemoreno <109979778+ozziemoreno@users.noreply.github.com>
Co-authored-by: Sandra Polifroni <sandra.polifroni@amd.com>
Co-authored-by: Young Hui - AMD <145490163+yhuiYH@users.noreply.github.com>
Co-authored-by: Joseph Macaranas <145489236+amd-jmacaran@users.noreply.github.com>
Co-authored-by: Daniel Su <danielsu@amd.com>
Co-authored-by: randyh62 <42045079+randyh62@users.noreply.github.com>
Co-authored-by: JeniferC99 <150404595+JeniferC99@users.noreply.github.com>
Co-authored-by: Michael Benavidez <michael.benavidez@amd.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: MKKnorr <MKKnorr@web.de>
Co-authored-by: Kent Russell <kent.russell@amd.com>
Co-authored-by: Joseph Greathouse <jlgreathouse@users.noreply.github.com>
Co-authored-by: Johannes Maria Frank <jmfrank63@gmail.com>
Co-authored-by: Brian Cornille <bcornill@amd.com>
Co-authored-by: Joseph Macaranas <Joseph.Macaranas@amd.com>
Co-authored-by: Pratik Basyal <pratik.basyal@amd.com>
Co-authored-by: prbasyal <prbasyal@amd.com>
Co-authored-by: Istvan Kiss <neon60@gmail.com>
Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
Co-authored-by: Ameya Keshava Mallya <ameyakeshava.mallya@amd.com>
2024-11-14 13:14:37 -05:00
srawat
c1f0a8d9d9 Adding TransferBench into list of components (#185) 2024-11-08 15:46:59 -07:00
Sam Wu
c9754cb9d8 Test Sphinx Sitemap (#162)
* Use sphinx-sitemap

* Update baseurl for sphinx-sitemap

* Regenerate doc reqs
2024-11-08 15:46:04 -07:00
Peter Park
b7ecf6d552 Rename Omnitools to ROCm Compute/Systems Profiler (#183)
* rename Omniperf and Omnitrace

* rename labels

rename more labels

* update licenses and rocm-tools.md

* fix rocprof-sys ref
2024-11-07 18:01:26 -05:00
Jeffrey Novotny
48d2d16563 Remove deprecated architectures from LLVM targets in GPU architecture specs (#198) 2024-11-07 12:33:28 -05:00
Peter Park
47cda1bc70 fix RDC version strings (#196) 2024-11-06 13:51:36 -05:00
Peter Park
34c0266496 6.2.4 release notes: add known/fixed issues (#193)
* add "for compute workloads" wording for clarity

* add AMDSMI resolved issue

* add dlm known issue

intro text

wording

* update wording

rm bullet point

update wording

* fix spellcheck due to spacing

* rm s

* rm gfx1151

* remove dlm known issue

* update list of updated docs; note for Radeon users

fmt

* update GA date for 6.2.4

* fix rdc version
2024-11-06 12:42:55 -05:00
alexxu-amd
8e3d51c31d Sync change from ROCm to ROCm-internal (#194)
* Fix Radeon link and point at R6.1.3 as absolute link (#3757)

* Update ROCm manifest to 6.2.1

* Update ROCm branch name

* Add 6.2.1 to version list (#3770)

* Add links to GH issues in 6.2.1 release notes (#3769)

* add MAD page

* link to GitHub issues in release notes known issues

* update templates for 6.2.1

* Revert "add MAD page"

This reverts commit 9cce72bba3.

* update wordlist for spellcheck linter

* add rccl note

* update rocal version change heading to be more obvious

* make rocal note more specific

* fix missing space

* fix capitalization

* Update RCCL known issue wording (#3775)

* add MAD page

* fix wording in RCCL known issue

* Revert "add MAD page"

This reverts commit c81d0f3b0a.

* update llvm version for 6.2.1 (#3779)

* Fix broken links in 6.2.1 release notes (#3782)

* External CI: Replace libomp dependencies with aomp (#3781)

Add roctracer dependency for hipBLAS and rocWMMA testing

* External CI: Add rocprofiler v1 and v2 smoke tests (#3784)

* External CI: ROCgdb smoke tests (#3785)

- Since this is an autotools project and not cmake, build and test on a gfx942 system instead of separating into two jobs; pipeline time is short anyway.
- Follow the build instructions to update build flags and to incorporate ROCdbgapi.
- Results are not parsed and graphed, but the log contents are printed at the end. This was helpful for debugging and will be kept in the pipeline, as the make check-gdb command's output was not helpful on its own.

* External CI: rocPyDecode Smoke Test (#3786)

* External CI: omniperf pipeline (#3788)

- Referred to public documentation, the source, and iterative attempts to create and improve the build and test pipeline.
- ctest failures are due to the test node not having the expected marketing name string and the override not working.
- The fix should be on the omniperf repo side of things, so this pull request should be fine as is.

* External CI: create omniperf pipeline IDs, update nightly build (#3790)

* Fixed greater than to be less than in rocFFT changes

* fix footnote for 6.1.0 (#3791)

* fix footnote for 6.1.0

* fix empty columns in historical KFD title

* External CI: Publish wheel as artifact for rocPyDecode (#3796)

* External CI: fix hip-tests symlink creation (#3799)

* Docs: Add Ubuntu 24.04.1 (#3801)

* add ubuntu 24.04.1

* add 24.04.1 to bottom os section

* fix heading and template

* Update compatibility-matrix.rst for OpenMP version

* Update compatibility-matrix-historical-6.0.csv for OpenMP version

* rm ubuntu 24.04.1 from 6.2.0

* Update docs/compatibility/compatibility-matrix.rst

Co-authored-by: Young Hui - AMD <145490163+yhuiYH@users.noreply.github.com>

* rm duplicate ubuntu in historical

---------

Co-authored-by: Young Hui - AMD <145490163+yhuiYH@users.noreply.github.com>

* External CI: fixes for rocMLIR and nightly build (#3800)

* External CI: fix symlinks for rocMLIR and nightly build

* add pipeline IDs for hip-tests

* fix hip-test ID typo

* remove llvm-alt license (#3727)

* remove llvm-alt license

* fix linting error

* External CI: enable ROCR-Runtime tests (#3809)

* External CI: default branches for hip-tests, omniperf (#3811)

* External CI: torch and torchvision smoke tests (#3810)

* External CI: torch and torchvision smoke tests

- Fixed issues with the package name and version for the vision wheel that prevented it from installing. A patch is used until my pull request in the vision repo is merged.
- Referred to rocAutomation scripts to pick which test scripts to run out of the many in the torch and vision repos, and iteratively tested the suggested scripts to see which ones completed in a timely manner.
- Leveraging the pytest-azurepipelines module to automatically parse and graph results from these tests.

* External CI: omnitrace build pipeline (#3812)

* External CI: omnitrace build pipeline starter

- Adding initial set of dependencies and build flags.

* External CI: omnitrace build pipeline

- Add bison, rccl, texinfo dependencies based on build failures.
- Add AMDGPU_TARGETS flag
- Add ROCm binaries to PATH for clang-format and other tools used.

* Fix indentation

---------

Co-authored-by: Daniel Su <danielsu@amd.com>

* External CI: AMDMIGraphX Build Fix (#3814)

- Swap to the default gcc on the OS to resolve build errors from recent commits.
- Added the libdnnl-dev dependency from iterative attempts with the compiler change.
- Referred to the passing GitHub checks to observe the compilers that were used.
- Build the CK JIT lib and include it in the AMDMIGraphX build.

* External CI: test fixes w/ roctracer, list omniperf as partially succeeding (#3815)

* External CI: rpp tests (#3816)

* External CI: Build pipeline for rocprofiler-sdk (#3819)

* External CI: Pipeline for rocprofiler-sdk

* Add rocprofiler dependency

* External CI: rocprofiler-sdk build pipeline

---------

Co-authored-by: Daniel Su <danielsu@amd.com>

* External CI: Fix/add missing pipeline IDs (#3818)

* External CI: omnitrace tests (#3822)

* Update tags to 6.2.2 (#3827)

* External CI: add roctracer to roc/hipSOLVER test deps (#3825)

* External CI: add rocprofiler-sdk pipeline IDs (#3824)

* External CI: AMDMIGraphX Smoke Tests (#3830)

Co-authored-by: Daniel Su <danielsu@amd.com>

* External CI: MIOpen tests (#3837)

* Point to release history instead of deprecated changelog (#3836)

* External CI: filter out hipTensor extended tests (#3838)

* added revised note re. radeon gpus (#3839)

* Restructured the contributions section. (#3715)

* testing if this file is editable

* changed 'kebob-case' to 'dash-case'

* Restructured the page to be more straightforward and provide additional repo information

* forgot to save

* Moved the topic sentence

* Wrong accent on the a in diataxis

* Removed the feedback info from contributing and moved it to Feedback

* fixed spelling errors

* fixed some wording and removed second person text

* consolidated Build and Structure into Contribute; edited toolchain to (hopefully) conform to style guide; updated toc

* updated the titles in the toc

* made changes based on feedback

* it's better when you save

* removed structure and build; fixed something for the linter

* added rst to wordlist

* added customizations to wordlist

* Add links to gpu cluster network guides (#3763)

* Add links to gpu cluster network guides

* Add newline character to eof

* Make link absolute

* add dynamic branch in toc

* remove unnecessary page

clean up

* clean up index/toc

* make multi-node topics adjacent

---------

Co-authored-by: Peter Park <peter.park@amd.com>

* updated the radeon note (#3850)

* External CI: Fix rocPyDecode wheel creation (#3852)

- Set values for expected environment variables.
- Accompanying changes are required in the rocPyDecode repo; a pull request will be made.

* External CI: pytorch vision patch removal (#3855)

My pull request applying this patch was merged upstream, so the patch is no longer needed and would break the pipeline since it can no longer be applied.

* Build(deps): Bump rocm-docs-core from 1.8.1 to 1.8.2 in /docs/sphinx (#3807)

Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.8.1 to 1.8.2.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/v1.8.2/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.8.1...v1.8.2)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* updated the radeon note, as it were (#3857)

* updated the radeon note, as it were

* updated the note again

* Set devops team as codeowners for rocm-build (#3860)

* Set ext CI as codeowners for rocm-build

* Update CODEOWNERS to rocm-devops

* External CI: Add option to pull mainline branch for dependencies (#3689)

* External CI: Add option to pull mainline branch for dependencies

* Missing parameter for mainline branch dependencies.

* External CI: mainline branch definitions

* Removed MIGraphX optimization page (#3848)

* External CI: add a global variable to control gfx942 tests (#3864)

* External CI: update component default/mainline branches (#3871)

* External CI: Stop building gfx90a (#3872)

Save on VM resources until infrastructure has test targets.

* External CI: add libstdc++-12 to rocMLIR (#3874)

* Add building doc section (#3873)

* External CI: programmatically get latest aqlprofile (#3876)

* External CI: use ctest for rocm-examples (#3877)

* External CI: Tensile pipeline (#3884)

* add oversubscription conceptual doc (#3885)

add mitigation steps

add to toc

move page for build

move doc

fix spelling

update doc

update oversubscription

update order

fix spelling

add oversubscription to wordlist

move oversubscription topic to bottom of toc and index

* add oversubscription conceptual doc (#3885)

(cherry picked from commit d0ecf51b0c)

* External CI: Add pipeline to build upstream boost (#3896)

* Update bitsandbytes branch in docs (#3898)

* Documentation: Add reference to precision-support floating-point types (#3899)

* External CI: use Boost template for MIOpen (#3903)

* External CI: create rocprofiler-systems pipeline (#3906)

* External CI: omnitrace/rocprof-sys pipeline IDs (#3908)

* External CI: MIOpen parse test results (#3913)

* External CI: Use pip to install latest cmake on test system (#3915)

* added a link to the compatibility matrix (#3904)

* added a link to the compatibility matrix

* removed quotes

* docs: Remove invalid amd_iommu=on parameter

Per kernel-parameters.txt, there is no "on" option for amd_iommu. While
intel_iommu has it, amd_iommu is automatically on unless specified
otherwise. For more info, see these two links:

https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt
75aa74d52f/drivers/iommu/amd/init.c (L3481)

Signed-off-by: Kent Russell <kent.russell@amd.com>

* External CI: hipBLASLt build now requires python packaging module (#3926)

https://github.com/ROCm/hipBLASLt/pull/1250/files#diff-fee2e6f068b33fca3a1dc49392de8848dbf05c3f4632b680abb1052523e5a30fR35

* External CI: Moved location of upstream pytorch build scripts (#3930)

https://github.com/pytorch/pytorch/pull/138103

* External CI: disable rocMLIR tests (#3931)

* External CI: disable rocMLIR tests

* roctracer AMDGPU_TARGETS flag

* External CI: create a GPU diagnostics template (#3932)

* External CI: Add CK into pytorch build environment (#3934)

* External CI: add support to disable individual component tests (#3938)

* External CI: AMDMIGraphX greater-equal pip dependencies (#3939)

* Build(deps): Bump rocm-docs-core from 1.8.2 to 1.8.3 in /docs/sphinx (#3933)

Bumps [rocm-docs-core](https://github.com/ROCm/rocm-docs-core) from 1.8.2 to 1.8.3.
- [Release notes](https://github.com/ROCm/rocm-docs-core/releases)
- [Changelog](https://github.com/ROCm/rocm-docs-core/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/ROCm/rocm-docs-core/compare/v1.8.2...v1.8.3)

---
updated-dependencies:
- dependency-name: rocm-docs-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* External CI: rocDecode add libva-amdgpu-dev dependency (#3940)

* External CI: enumerate GPUs in gpu-diagnostics (#3942)

* External CI: move gpu-diag directly before tests (#3943)

* External CI: fix HIP_PIPELINE_ID (#3944)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Kent Russell <kent.russell@amd.com>
Co-authored-by: Jeffrey Novotny <jnovotny@amd.com>
Co-authored-by: Sam Wu <22262939+samjwu@users.noreply.github.com>
Co-authored-by: Wang, Yanyao <yanyao.wang@amd.com>
Co-authored-by: Yanyao Wang <yanywang@amd.com>
Co-authored-by: Peter Park <peter.park@amd.com>
Co-authored-by: Young Hui - AMD <145490163+yhuiYH@users.noreply.github.com>
Co-authored-by: Joseph Macaranas <145489236+amd-jmacaran@users.noreply.github.com>
Co-authored-by: Daniel Su <danielsu@amd.com>
Co-authored-by: Sandra Polifroni <sandra.polifroni@amd.com>
Co-authored-by: randyh62 <42045079+randyh62@users.noreply.github.com>
Co-authored-by: Michael Benavidez <michael.benavidez@amd.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: MKKnorr <MKKnorr@web.de>
Co-authored-by: Kent Russell <kent.russell@amd.com>
Co-authored-by: Joseph Greathouse <jlgreathouse@users.noreply.github.com>
2024-10-25 14:41:40 -04:00
Peter Park
b9ec9507db ROCm 6.2.4 release notes (#184)
* prep 6.2.4 release notes

* add mathlibs

* add detail component changes

* rm non-updated links

* fix sentence

* fix rocthrust v

* rm offline installer

* condense

* add leo/ram fdback

words

* update documentation section

* add rocm on radeon note

* update os support note wording

* update release

* update version and GA date to 10-17

* update 6.2.4 rn

* update wording

* add link to v710

* update wording

* update templ

* simplify note

* words

os note

words

* change URLs to latest

* update link to supported GPUs

* Update versions.md 6.2.4 date to Oct 18

* Update conf.py release note date to Oct 18

---------

Co-authored-by: Sam Wu <22262939+samjwu@users.noreply.github.com>
2024-10-18 12:21:38 -04:00
Peter Park
be19ac1071 ROCm 6.2.4 compatibility matrix (#186)
* prep compat column (historical) and mi300x column

* update historical compat matrix for 6.2.4

* update compat matrix for 6.2.4

* fix compat

* fix thunk version

* fix hipify ver
2024-10-18 11:39:09 -04:00
Peter Park
916b5513cf add radeon pro v710 to gpu arch specs (#192)
* Add V710 specs

add some specs

add cols

clean up extra line

* fix graphics l1 cache description

* update SGPR for RDNA2 and RDNA3 archs

* update VGPR

* Apply suggestions from code review

* change l2 cache to 4

* Update docs/reference/gpu-arch-specs.rst
2024-10-18 11:33:36 -04:00
172 changed files with 6732 additions and 4996 deletions

View File

@@ -197,3 +197,4 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
parameters:
componentName: MIOpen
testParameters: '-VV --output-on-failure --force-new-ctest-process --output-junit test_output.xml --exclude-regex test_rnn_seq_api'

View File

@@ -126,6 +126,7 @@ jobs:
componentName: comgr
extraBuildFlags: >-
-DCMAKE_PREFIX_PATH="$(Build.SourcesDirectory)/llvm/build;$(Build.SourcesDirectory)/amd/device-libs/build"
-DCOMGR_DISABLE_SPIRV=1
-DCMAKE_BUILD_TYPE=Release
cmakeBuildDir: 'amd/comgr/build'
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml

View File

@@ -0,0 +1,166 @@
parameters:
- name: checkoutRepo
type: string
default: 'self'
- name: checkoutRef
type: string
default: ''
- name: aptPackages
type: object
default:
- cmake
- python3-pip
- name: pipModules
type: object
default:
- astunparse==1.6.2
- colorlover
- dash>=1.12.0
- matplotlib
- numpy>=1.17.5
- pandas>=1.4.3
- pymongo
- pyyaml
- tabulate
- tqdm
- dash-svg
- dash-bootstrap-components
- kaleido
- setuptools
- plotille
- mock
- pytest
- pytest-cov
- pytest-xdist
- name: rocmDependencies
type: object
default:
- clr
- llvm-project
- rocm-cmake
- rocm-core
- rocminfo
- ROCR-Runtime
- rocprofiler
- rocprofiler-register
- roctracer
jobs:
- job: omniperf
variables:
- group: common
- template: /.azuredevops/variables-global.yml
pool:
vmImage: ${{ variables.BASE_BUILD_POOL }}
workspace:
clean: all
strategy:
matrix:
gfx942:
JOB_GPU_TARGET: gfx942
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
parameters:
checkoutRepo: ${{ parameters.checkoutRepo }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-aqlprofile.yml
parameters:
${{ if eq(parameters.checkoutRef, '') }}:
dependencySource: staging
${{ elseif ne(parameters.checkoutRef, '') }}:
dependencySource: tag-builds
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
parameters:
dependencyList: ${{ parameters.rocmDependencies }}
gpuTarget: $(JOB_GPU_TARGET)
# CI case: download latest default branch build
${{ if eq(parameters.checkoutRef, '') }}:
dependencySource: staging
# manual build case: triggered by ROCm/ROCm repo
${{ elseif ne(parameters.checkoutRef, '') }}:
dependencySource: tag-builds
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-upload.yml
parameters:
gpuTarget: $(JOB_GPU_TARGET)
- job: omniperf_testing
dependsOn: omniperf
condition: and(succeeded(), eq(variables.ENABLE_GFX942_TESTS, 'true'), not(containsValue(split(variables.DISABLED_GFX942_TESTS, ','), variables['Build.DefinitionName'])))
variables:
- group: common
- template: /.azuredevops/variables-global.yml
- name: PYTHON_VERSION
value: 3.10
pool: $(JOB_TEST_POOL)
workspace:
clean: all
strategy:
matrix:
gfx942:
JOB_GPU_TARGET: gfx942
JOB_TEST_POOL: ${{ variables.GFX942_TEST_POOL }}
steps:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-other.yml
parameters:
aptPackages: ${{ parameters.aptPackages }}
pipModules: ${{ parameters.pipModules }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/preamble.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
parameters:
checkoutRepo: ${{ parameters.checkoutRepo }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/local-artifact-download.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-aqlprofile.yml
parameters:
${{ if eq(parameters.checkoutRef, '') }}:
dependencySource: staging
${{ elseif ne(parameters.checkoutRef, '') }}:
dependencySource: tag-builds
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
parameters:
dependencyList: ${{ parameters.rocmDependencies }}
gpuTarget: $(JOB_GPU_TARGET)
${{ if eq(parameters.checkoutRef, '') }}:
dependencySource: staging
${{ elseif ne(parameters.checkoutRef, '') }}:
dependencySource: tag-builds
- task: Bash@3
displayName: Add ROCm binaries to PATH
inputs:
targetType: inline
script: echo "##vso[task.prependpath]$(Agent.BuildDirectory)/rocm/bin"
- task: Bash@3
displayName: Add ROCm compilers to PATH
inputs:
targetType: inline
script: echo "##vso[task.prependpath]$(Agent.BuildDirectory)/rocm/llvm/bin"
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/build-cmake.yml
parameters:
extraBuildFlags: >-
-DCMAKE_HIP_ARCHITECTURES=$(JOB_GPU_TARGET)
-DCMAKE_C_COMPILER=$(Agent.BuildDirectory)/rocm/llvm/bin/amdclang
-DCMAKE_MODULE_PATH=$(Agent.BuildDirectory)/rocm/lib/cmake/hip
-DCMAKE_PREFIX_PATH=$(Agent.BuildDirectory)/rocm
-DCMAKE_BUILD_TYPE=Release
-DENABLE_TESTS=ON
-DINSTALL_TESTS=ON
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/gpu-diagnostics.yml
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/test.yml
parameters:
componentName: omniperf
testDir: $(Build.BinariesDirectory)/libexec/omniperf
testExecutable: export OMNIPERF_ARCH_OVERRIDE="MI300X"; ctest
- task: Bash@3
displayName: Remove ROCm binaries from PATH
inputs:
targetType: inline
script: echo "##vso[task.setvariable variable=PATH]$(echo $PATH | sed -e 's;:$(Agent.BuildDirectory)/rocm/bin;;' -e 's;^/;;' -e 's;/$;;')"
- task: Bash@3
displayName: Remove ROCm compilers from PATH
inputs:
targetType: inline
script: echo "##vso[task.setvariable variable=PATH]$(echo $PATH | sed -e 's;:$(Agent.BuildDirectory)/rocm/llvm/bin;;' -e 's;^/;;' -e 's;/$;;')"

View File

@@ -181,6 +181,7 @@ jobs:
parameters:
dependencyList: ${{ parameters.rocmDependencies }}
gpuTarget: $(JOB_GPU_TARGET)
setupHIPLibrarySymlinks: true
${{ if eq(parameters.checkoutRef, '') }}:
dependencySource: staging
${{ elseif ne(parameters.checkoutRef, '') }}:

View File

@@ -41,6 +41,7 @@ parameters:
- ROCR-Runtime
- rocprofiler-register
- roctracer
- aomp
jobs:
- job: rocprofilersdk

View File

@@ -51,6 +51,7 @@ parameters:
- rocprofiler
- rocprofiler-register
- roctracer
- rocprofiler-sdk
jobs:
- job: rocprofiler_systems
@@ -73,6 +74,12 @@ jobs:
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/checkout.yml
parameters:
checkoutRepo: ${{ parameters.checkoutRepo }}
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-aqlprofile.yml
parameters:
${{ if eq(parameters.checkoutRef, '') }}:
dependencySource: staging
${{ elseif ne(parameters.checkoutRef, '') }}:
dependencySource: tag-builds
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/dependencies-rocm.yml
parameters:
dependencyList: ${{ parameters.rocmDependencies }}
@@ -109,6 +116,7 @@ jobs:
-DROCPROFSYS_BUILD_TESTING=ON
-DROCPROFSYS_BUILD_DYNINST=ON
-DROCPROFSYS_BUILD_LIBUNWIND=ON
-DROCPROFSYS_DISABLE_EXAMPLES="openmp-target"
-DDYNINST_BUILD_TBB=ON
-DDYNINST_BUILD_ELFUTILS=ON
-DDYNINST_BUILD_LIBIBERTY=ON

View File

@@ -142,6 +142,10 @@ parameters:
- binary_ufuncs
- autograd
# - inductor/torchinductor takes too long
# set to false to disable torchvision build and test
- name: includeVision
type: boolean
default: false
trigger: none
pr: none
@@ -237,6 +241,12 @@ jobs:
git clone https://github.com/pytorch/builder.git --depth=1 --recurse-submodules
sudo ln -s $(Build.SourcesDirectory)/builder /builder
workingDirectory: $(Build.SourcesDirectory)
- task: Bash@3
displayName: Temporarily Patch CK Submodule
inputs:
targetType: inline
script: git pull origin develop
workingDirectory: $(Build.SourcesDirectory)/pytorch/third_party/composable_kernel
- task: Bash@3
displayName: Install patchelf
inputs:
@@ -296,59 +306,60 @@ jobs:
sourceDir: /remote/wheelhouserocm$(ROCM_VERSION)
contentsString: '*.whl'
# common helper source for pytorch vision and audio
- task: Bash@3
displayName: git clone pytorch test-infra
inputs:
targetType: inline
script: git clone https://github.com/pytorch/test-infra.git --depth=1 --recurse-submodules
workingDirectory: $(Build.SourcesDirectory)
- task: Bash@3
displayName: install package helper
inputs:
targetType: inline
script: python3 -m pip install test-infra/tools/pkg-helpers
workingDirectory: $(Build.SourcesDirectory)
- task: Bash@3
displayName: pytorch pkg helpers
inputs:
targetType: inline
script: CU_VERSION=${CU_VERSION} CHANNEL=${CHANNEL} python -m pytorch_pkg_helpers
# get torch vision source and build
- task: Bash@3
displayName: git clone pytorch vision
inputs:
targetType: inline
script: git clone https://github.com/pytorch/vision.git --depth=1 --recurse-submodules
workingDirectory: $(Build.SourcesDirectory)
- task: Bash@3
displayName: Build vision
inputs:
targetType: inline
script: >-
TORCH_PACKAGE_NAME=torch.$(ROCM_BRANCH).$(JOB_GPU_TARGET)
TORCHVISION_PACKAGE_NAME=torchvision.$(ROCM_BRANCH).$(JOB_GPU_TARGET)
PYTORCH_VERSION=$(cat $(Build.SourcesDirectory)/pytorch/version.txt | cut -da -f1)post$(date -u +%Y%m%d)
BUILD_VERSION=$(cat $(Build.SourcesDirectory)/vision/version.txt | cut -da -f1)post$(date -u +%Y%m%d)
python3 setup.py bdist_wheel
workingDirectory: $(Build.SourcesDirectory)/vision
- task: Bash@3
displayName: Relocate vision
inputs:
targetType: inline
script: python3 packaging/wheel/relocate.py
workingDirectory: $(Build.SourcesDirectory)/vision
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-prepare-package.yml
parameters:
sourceDir: $(Build.SourcesDirectory)/vision/dist
contentsString: '*.whl'
clean: false
- ${{ if eq(parameters.includeVision, true) }}:
- task: Bash@3
displayName: git clone pytorch test-infra
inputs:
targetType: inline
script: git clone https://github.com/pytorch/test-infra.git --depth=1 --recurse-submodules
workingDirectory: $(Build.SourcesDirectory)
- task: Bash@3
displayName: install package helper
inputs:
targetType: inline
script: python3 -m pip install test-infra/tools/pkg-helpers
workingDirectory: $(Build.SourcesDirectory)
- task: Bash@3
displayName: pytorch pkg helpers
inputs:
targetType: inline
script: CU_VERSION=${CU_VERSION} CHANNEL=${CHANNEL} python -m pytorch_pkg_helpers
# get torch vision source and build
- task: Bash@3
displayName: git clone pytorch vision
inputs:
targetType: inline
script: git clone https://github.com/pytorch/vision.git --depth=1 --recurse-submodules
workingDirectory: $(Build.SourcesDirectory)
- task: Bash@3
displayName: Build vision
inputs:
targetType: inline
script: >-
TORCH_PACKAGE_NAME=torch.$(ROCM_BRANCH).$(JOB_GPU_TARGET)
TORCHVISION_PACKAGE_NAME=torchvision.$(ROCM_BRANCH).$(JOB_GPU_TARGET)
PYTORCH_VERSION=$(cat $(Build.SourcesDirectory)/pytorch/version.txt | cut -da -f1)post$(date -u +%Y%m%d)
BUILD_VERSION=$(cat $(Build.SourcesDirectory)/vision/version.txt | cut -da -f1)post$(date -u +%Y%m%d)
python3 setup.py bdist_wheel
workingDirectory: $(Build.SourcesDirectory)/vision
- task: Bash@3
displayName: Relocate vision
inputs:
targetType: inline
script: python3 packaging/wheel/relocate.py
workingDirectory: $(Build.SourcesDirectory)/vision
- template: ${{ variables.CI_TEMPLATE_PATH }}/steps/artifact-prepare-package.yml
parameters:
sourceDir: $(Build.SourcesDirectory)/vision/dist
contentsString: '*.whl'
clean: false
- task: PublishPipelineArtifact@1
displayName: 'wheel file Publish'
retryCountOnTaskFailure: 3
inputs:
targetPath: $(Build.BinariesDirectory)
- job: torchvision_testing
- job: pytorch_testing
dependsOn: pytorch
condition: and(succeeded(), eq(variables.ENABLE_GFX942_TESTS, 'true'), not(containsValue(split(variables.DISABLED_GFX942_TESTS, ','), variables['Build.DefinitionName'])))
variables:
@@ -401,12 +412,13 @@ jobs:
targetType: inline
script: git clone https://github.com/pytorch/pytorch.git --depth=1 --recurse-submodules
workingDirectory: $(Build.SourcesDirectory)
- task: Bash@3
displayName: git clone pytorch vision
inputs:
targetType: inline
script: git clone https://github.com/pytorch/vision.git --depth=1 --recurse-submodules
workingDirectory: $(Build.SourcesDirectory)
- ${{ if eq(parameters.includeVision, true) }}:
- task: Bash@3
displayName: git clone pytorch vision
inputs:
targetType: inline
script: git clone https://github.com/pytorch/vision.git --depth=1 --recurse-submodules
workingDirectory: $(Build.SourcesDirectory)
- task: Bash@3
displayName: Install Wheel Files
inputs:
@@ -510,13 +522,14 @@ jobs:
script: pytest test/test_${{ torchTest }}.py
# Reference on what tests to run for torchvision found in private repo:
# https://github.com/ROCm/rocAutomation/blob/jenkins-pipelines/pytorch/pytorch_ci/test_torchvision.sh#L51
- task: Bash@3
displayName: Test vision/transforms
continueOnError: true
inputs:
targetType: inline
script: pytest test/test_transforms.py
workingDirectory: $(Build.SourcesDirectory)/vision
- ${{ if eq(parameters.includeVision, true) }}:
- task: Bash@3
displayName: Test vision/transforms
continueOnError: true
inputs:
targetType: inline
script: pytest test/test_transforms.py
workingDirectory: $(Build.SourcesDirectory)/vision
- task: Bash@3
displayName: Uninstall Wheel Files
inputs:

View File

@@ -26,6 +26,7 @@ parameters:
- llvm-project
- MIOpen
- MIVisionX
- omniperf
- rccl
- rdc
- rocAL

View File

@@ -0,0 +1,29 @@
variables:
- group: common
- template: /.azuredevops/variables-global.yml
parameters:
- name: checkoutRef
type: string
default: refs/tags/$(LATEST_RELEASE_TAG)
resources:
repositories:
- repository: pipelines_repo
type: github
endpoint: ROCm
name: ROCm/ROCm
- repository: release_repo
type: github
endpoint: ROCm
name: ROCm/omniperf
ref: ${{ parameters.checkoutRef }}
trigger: none
pr: none
jobs:
- template: ${{ variables.CI_COMPONENT_PATH }}/omniperf.yml
parameters:
checkoutRepo: release_repo
checkoutRef: ${{ parameters.checkoutRef }}

View File

@@ -62,7 +62,7 @@ parameters:
ROCgdb: amd-staging
rocJPEG: develop
rocm-cmake: develop
rocm-core: master
rocm-core: amd-staging
rocm-examples: develop
rocminfo: amd-staging
rocMLIR: develop

View File

@@ -165,6 +165,11 @@ parameters:
- name: skipLlvmSymlink
type: boolean
default: false
# set to true if dlopen calls for HIP libraries are causing failures
# because they do not follow shared library symlink convention
- name: setupHIPLibrarySymlinks
type: boolean
default: false
# some ROCm components can specify GPU target and this will affect downloads
- name: gpuTarget
type: string
@@ -280,6 +285,37 @@ steps:
for file in amdclang amdclang++ amdclang-cl amdclang-cpp amdflang amdlld aompcc mygpu mycpu offload-arch; do
sudo ln -s $(Agent.BuildDirectory)/rocm/llvm/bin/$file $(Agent.BuildDirectory)/rocm/bin/$file
done
# dlopen calls within a ctest or pytest sequence runs into issues when shared library symlink convention is not followed
# the convention is as follows:
# unversioned .so is a symlink to major version .so
# major version .so is a symlink to detailed version .so
# HIP libraries do not follow this convention, and each .so is a copy of each other
# changing the library structure to follow the symlink convention resolves some test failures
- ${{ if eq(parameters.setupHIPLibrarySymlinks, true) }}:
- task: Bash@3
displayName: Setup symlinks for hip libraries
inputs:
targetType: inline
workingDirectory: $(Agent.BuildDirectory)/rocm/lib
script: |
LIBRARIES=("libamdhip64" "libhiprtc-builtins" "libhiprtc")
for LIB_NAME in "${LIBRARIES[@]}"; do
VERSIONED_SO=$(ls ${LIB_NAME}.so.* 2>/dev/null | grep -E "${LIB_NAME}\.so\.[0-9]+\.[0-9]+\.[0-9]+(-.*)?" | sort -V | tail -n 1)
if [[ -z "$VERSIONED_SO" ]]; then
continue
fi
MAJOR_VERSION=$(echo "$VERSIONED_SO" | grep -oP "${LIB_NAME}\.so\.\K[0-9]+")
if [[ -e "${LIB_NAME}.so.${MAJOR_VERSION}" && ! -L "${LIB_NAME}.so.${MAJOR_VERSION}" ]]; then
rm -f "${LIB_NAME}.so.${MAJOR_VERSION}"
fi
if [[ -e "${LIB_NAME}.so" && ! -L "${LIB_NAME}.so" ]]; then
rm -f "${LIB_NAME}.so"
fi
ln -sf "$VERSIONED_SO" "${LIB_NAME}.so.${MAJOR_VERSION}"
ln -sf "${LIB_NAME}.so.${MAJOR_VERSION}" "${LIB_NAME}.so"
echo "Symlinks created for $LIB_NAME:"
ls -l ${LIB_NAME}.so*
done
- task: Bash@3
displayName: 'List downloaded ROCm files'
inputs:

View File

@@ -26,6 +26,7 @@ ASm
ATI
AddressSanitizer
AlexNet
Andrej
Arb
Autocast
BARs
@@ -73,6 +74,7 @@ Conda
ConnectX
CuPy
Dashboarding
DBRX
DDR
DF
DGEMM
@@ -90,6 +92,8 @@ Dask
DataFrame
DataLoader
DataParallel
Debian
DeepSeek
DeepSpeed
Dependabot
Deprecations
@@ -106,6 +110,7 @@ FFT
FFTs
FFmpeg
FHS
FIXME
FMA
FP
FX
@@ -126,10 +131,12 @@ GDS
GEMM
GEMMs
GFortran
Gemma
GiB
GIM
GL
GLXT
Gloo
GMI
GPG
GPR
@@ -148,6 +155,8 @@ HGX
HIPCC
HIPExtension
HIPIFY
HIPification
HIPify
HPC
HPCG
HPE
@@ -183,14 +192,17 @@ Interop
Intersphinx
Intra
Ioffe
JAX's
Jinja
JSON
Jupyter
KFD
KFDTest
KiB
KMD
KV
KVM
Karpathy's
KiB
Keras
Khronos
LAPACK
@@ -211,6 +223,7 @@ MiB
MIGraphX
MIOpen
MIOpenGEMM
MIOpen's
MIVisionX
MLM
MMA
@@ -240,6 +253,8 @@ MyEnvironment
MyST
NBIO
NBIOs
NCCL
NCF
NIC
NICs
NLI
@@ -281,10 +296,12 @@ OpenVX
OpenXLA
Oversubscription
PagedAttention
Pallas
PCC
PCI
PCIe
PEFT
PEQT
PIL
PILImage
POR
@@ -314,6 +331,7 @@ RDMA
RDNA
README
RHEL
RMW
RNN
RNNs
ROC
@@ -330,6 +348,7 @@ ROCmSoftwarePlatform
ROCmValidationSuite
ROCprofiler
ROCr
RPP
RST
RW
Radeon
@@ -337,6 +356,7 @@ RelWithDebInfo
Req
Rickle
RoCE
Runfile
Ryzen
SALU
SBIOS
@@ -349,6 +369,7 @@ SENDMSG
SGPR
SGPRs
SHA
SHARK's
SIGQUIT
SIMD
SIMDs
@@ -393,9 +414,14 @@ TensorFlow
TensorParallel
ToC
TorchAudio
torchaudio
TorchElastic
TorchMIGraphX
torchrec
TorchScript
TorchServe
torchserve
torchtext
TorchVision
TransferBench
TrapStatus
@@ -502,6 +528,9 @@ copyable
cpp
csn
cuBLAS
cuda
cuDNN
cudnn
cuFFT
cuLIB
cuRAND
@@ -518,6 +547,7 @@ dbgapi
de
deallocation
debuggability
debian
denoise
denoised
denoises
@@ -555,6 +585,7 @@ gRPC
galb
gcc
gdb
gemm
gfortran
gfx
githooks
@@ -570,6 +601,7 @@ hipBLASLt's
hipblaslt
hipCUB
hipFFT
hipFORT
hipLIB
hipRAND
hipSOLVER
@@ -591,6 +623,7 @@ hpp
hsa
hsakmt
hyperparameter
hyperparameters
iDRAC
ib_core
inband
@@ -617,6 +650,7 @@ len
libfabric
libjpeg
libs
linalg
linearized
linter
linux
@@ -639,6 +673,7 @@ mutex
mvffr
namespace
namespaces
nanoGPT
num
numref
ocl
@@ -650,7 +685,9 @@ optimizers
os
oversubscription
pageable
pallas
parallelization
parallelizing
parameterization
passthrough
perfcounter
@@ -663,6 +700,7 @@ prebuilt
precompiled
preconditioner
preconfigured
preemptible
prefetch
prefetchable
prefill
@@ -679,10 +717,13 @@ profilers
protobuf
pseudorandom
py
recommender
recommenders
quantile
quantizer
quasirandom
queueing
radeon
rccl
rdc
rdma
@@ -704,6 +745,7 @@ rocALUTION
rocBLAS
rocDecode
rocFFT
rocHPCG
rocJPEG
rocLIB
rocMLIR
@@ -735,6 +777,7 @@ runtimes
sL
scalability
scalable
scipy
seealso
sendmsg
seqs
@@ -742,6 +785,7 @@ serializers
shader
sharding
sigmoid
single-node
sm
smi
softmax

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2023 - 2024 Advanced Micro Devices, Inc. All rights reserved.
Copyright (c) 2023 - 2025 Advanced Micro Devices, Inc. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -76,7 +76,7 @@ The Build time will reduce significantly if we limit the GPU Architecture/s agai
mkdir -p ~/WORKSPACE/ # Or any folder name other than WORKSPACE
cd ~/WORKSPACE/
export ROCM_VERSION=6.3.0
export ROCM_VERSION=6.3.1
~/bin/repo init -u http://github.com/ROCm/ROCm.git -b roc-6.3.x -m tools/rocm-build/rocm-${ROCM_VERSION}.xml
~/bin/repo sync

1529
RELEASE.md

File diff suppressed because it is too large

View File

@@ -1,17 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<remote name="rocm-org" fetch="https://github.com/ROCm/" />
<default revision="refs/tags/rocm-6.3.0"
<default revision="refs/tags/rocm-6.3.1"
remote="rocm-org"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCm-->
<project name="ROCK-Kernel-Driver" />
<project name="ROCR-Runtime" />
<project name="ROCT-Thunk-Interface" />
<project name="amdsmi" />
<project name="omniperf" />
<project name="omnitrace" />
<project name="rdc" />
<project name="rocm_bandwidth_test" />
<project name="rocm_smi_lib" />
@@ -21,6 +18,8 @@
<project name="rocprofiler" />
<project name="rocprofiler-register" />
<project name="rocprofiler-sdk" />
<project name="rocprofiler-compute" />
<project name="rocprofiler-systems" />
<project name="roctracer" />
<!--HIP Projects-->
<project name="HIP" />
@@ -42,6 +41,7 @@
<project groups="mathlibs" name="ROCmValidationSuite" />
<project groups="mathlibs" name="Tensile" />
<project groups="mathlibs" name="composable_kernel" />
<project groups="mathlibs" name="hipBLAS-common" />
<project groups="mathlibs" name="hipBLAS" />
<project groups="mathlibs" name="hipBLASLt" />
<project groups="mathlibs" name="hipCUB" />
@@ -57,6 +57,7 @@
<project groups="mathlibs" name="rocALUTION" />
<project groups="mathlibs" name="rocBLAS" />
<project groups="mathlibs" name="rocDecode" />
<project groups="mathlibs" name="rocJPEG" />
<project groups="mathlibs" name="rocPyDecode" />
<project groups="mathlibs" name="rocFFT" />
<project groups="mathlibs" name="rocPRIM" />
@@ -67,6 +68,7 @@
<project groups="mathlibs" name="rocWMMA" />
<project groups="mathlibs" name="rocm-cmake" />
<project groups="mathlibs" name="rpp" />
<project groups="mathlibs" name="TransferBench" />
<!-- Projects for OpenMP-Extras -->
<project name="aomp" path="openmp-extras/aomp" />
<project name="aomp-extras" path="openmp-extras/aomp-extras" />

View File

@@ -25,15 +25,15 @@ additional licenses. Please review individual repositories for more information.
<!-- spellcheck-disable -->
| Component | License |
|:---------------------|:-------------------------|
| [AMD Compute Language Runtime (CLR)](https://github.com/ROCm/clr) | [MIT](https://github.com/ROCm/clr/blob/develop/LICENCE) |
| [AMD SMI](https://github.com/ROCm/amdsmi) | [MIT](https://github.com/ROCm/amdsmi/blob/develop/LICENSE) |
| [AMD Compute Language Runtime (CLR)](https://github.com/ROCm/clr) | [MIT](https://github.com/ROCm/clr/blob/amd-staging/LICENCE) |
| [AMD SMI](https://github.com/ROCm/amdsmi) | [MIT](https://github.com/ROCm/amdsmi/blob/amd-staging/LICENSE) |
| [aomp](https://github.com/ROCm/aomp/) | [Apache 2.0](https://github.com/ROCm/aomp/blob/aomp-dev/LICENSE) |
| [aomp-extras](https://github.com/ROCm/aomp-extras/) | [MIT](https://github.com/ROCm/aomp-extras/blob/aomp-dev/LICENSE) |
| [Code Object Manager (Comgr)](https://github.com/ROCm/llvm-project/tree/amd-staging/amd/comgr) | [The University of Illinois/NCSA](https://github.com/ROCm/llvm-project/blob/amd-staging/amd/comgr/LICENSE.txt) |
| [Composable Kernel](https://github.com/ROCm/composable_kernel) | [MIT](https://github.com/ROCm/composable_kernel/blob/develop/LICENSE) |
| [half](https://github.com/ROCm/half/) | [MIT](https://github.com/ROCm/half/blob/rocm/LICENSE.txt) |
| [HIP](https://github.com/ROCm/HIP/) | [MIT](https://github.com/ROCm/HIP/blob/develop/LICENSE.txt) |
| [hipamd](https://github.com/ROCm/clr/tree/develop/hipamd) | [MIT](https://github.com/ROCm/clr/blob/develop/hipamd/LICENSE.txt) |
| [HIP](https://github.com/ROCm/HIP/) | [MIT](https://github.com/ROCm/HIP/blob/amd-staging/LICENSE.txt) |
| [hipamd](https://github.com/ROCm/clr/tree/amd-staging/hipamd) | [MIT](https://github.com/ROCm/clr/blob/amd-staging/hipamd/LICENSE.txt) |
| [hipBLAS](https://github.com/ROCm/hipBLAS/) | [MIT](https://github.com/ROCm/hipBLAS/blob/develop/LICENSE.md) |
| [hipBLASLt](https://github.com/ROCm/hipBLASLt/) | [MIT](https://github.com/ROCm/hipBLASLt/blob/develop/LICENSE.md) |
| [HIPCC](https://github.com/ROCm/llvm-project/tree/amd-staging/amd/hipcc) | [MIT](https://github.com/ROCm/llvm-project/blob/amd-staging/amd/hipcc/LICENSE.txt) |
@@ -58,29 +58,29 @@ additional licenses. Please review individual repositories for more information.
| [ROCdbgapi](https://github.com/ROCm/ROCdbgapi/) | [MIT](https://github.com/ROCm/ROCdbgapi/blob/amd-staging/LICENSE.txt) |
| [rocDecode](https://github.com/ROCm/rocDecode) | [MIT](https://github.com/ROCm/rocDecode/blob/develop/LICENSE) |
| [rocFFT](https://github.com/ROCm/rocFFT/) | [MIT](https://github.com/ROCm/rocFFT/blob/develop/LICENSE.md) |
| [ROCgdb](https://github.com/ROCm/ROCgdb/) | [GNU General Public License v3.0](https://github.com/ROCm/ROCgdb/blob/amd-master/COPYING3) |
| [ROCgdb](https://github.com/ROCm/ROCgdb/) | [GNU General Public License v3.0](https://github.com/ROCm/ROCgdb/blob/amd-staging/COPYING3) |
| [rocJPEG](https://github.com/ROCm/rocJPEG/) | [MIT](https://github.com/ROCm/rocJPEG/blob/develop/LICENSE) |
| [ROCK-Kernel-Driver](https://github.com/ROCm/ROCK-Kernel-Driver/) | [GPL 2.0 WITH Linux-syscall-note](https://github.com/ROCm/ROCK-Kernel-Driver/blob/master/COPYING) |
| [rocminfo](https://github.com/ROCm/rocminfo/) | [The University of Illinois/NCSA](https://github.com/ROCm/rocminfo/blob/amd-staging/License.txt) |
| [ROCm Bandwidth Test](https://github.com/ROCm/rocm_bandwidth_test/) | [The University of Illinois/NCSA](https://github.com/ROCm/rocm_bandwidth_test/blob/master/LICENSE.txt) |
| [ROCm Bandwidth Test](https://github.com/ROCm/rocm_bandwidth_test/) | [MIT](https://github.com/ROCm/rocm_bandwidth_test/blob/master/LICENSE.txt) |
| [ROCm CMake](https://github.com/ROCm/rocm-cmake/) | [MIT](https://github.com/ROCm/rocm-cmake/blob/develop/LICENSE) |
| [ROCm Communication Collectives Library (RCCL)](https://github.com/ROCm/rccl/) | [Custom](https://github.com/ROCm/rccl/blob/develop/LICENSE.txt) |
| [ROCm-Core](https://github.com/ROCm/rocm-core) | [MIT](https://github.com/ROCm/rocm-core/blob/master/copyright) |
| [ROCm Compute Profiler](https://github.com/ROCm/rocprofiler-compute) | [MIT](https://github.com/ROCm/rocprofiler-compute/blob/amd-staging/LICENSE) |
| [ROCm Data Center (RDC)](https://github.com/ROCm/rdc/) | [MIT](https://github.com/ROCm/rdc/blob/develop/LICENSE) |
| [ROCm Data Center (RDC)](https://github.com/ROCm/rdc/) | [MIT](https://github.com/ROCm/rdc/blob/amd-staging/LICENSE) |
| [ROCm-Device-Libs](https://github.com/ROCm/llvm-project/tree/amd-staging/amd/device-libs) | [The University of Illinois/NCSA](https://github.com/ROCm/llvm-project/blob/amd-staging/amd/device-libs/LICENSE.TXT) |
| [ROCm-OpenCL-Runtime](https://github.com/ROCm/clr/tree/develop/opencl) | [MIT](https://github.com/ROCm/clr/blob/develop/opencl/LICENSE.txt) |
| [ROCm-OpenCL-Runtime](https://github.com/ROCm/clr/tree/amd-staging/opencl) | [MIT](https://github.com/ROCm/clr/blob/amd-staging/opencl/LICENSE.txt) |
| [ROCm Performance Primitives (RPP)](https://github.com/ROCm/rpp) | [MIT](https://github.com/ROCm/rpp/blob/develop/LICENSE) |
| [ROCm SMI Lib](https://github.com/ROCm/rocm_smi_lib/) | [MIT](https://github.com/ROCm/rocm_smi_lib/blob/develop/License.txt) |
| [ROCm SMI Lib](https://github.com/ROCm/rocm_smi_lib/) | [MIT](https://github.com/ROCm/rocm_smi_lib/blob/amd-staging/License.txt) |
| [ROCm Systems Profiler](https://github.com/ROCm/rocprofiler-systems) | [MIT](https://github.com/ROCm/rocprofiler-systems/blob/amd-staging/LICENSE) |
| [ROCm Validation Suite](https://github.com/ROCm/ROCmValidationSuite/) | [MIT](https://github.com/ROCm/ROCmValidationSuite/blob/master/LICENSE) |
| [rocPRIM](https://github.com/ROCm/rocPRIM/) | [MIT](https://github.com/ROCm/rocPRIM/blob/develop/LICENSE.txt) |
| [ROCProfiler](https://github.com/ROCm/rocprofiler/) | [MIT](https://github.com/ROCm/rocprofiler/blob/amd-master/LICENSE) |
| [ROCProfiler](https://github.com/ROCm/rocprofiler/) | [MIT](https://github.com/ROCm/rocprofiler/blob/amd-staging/LICENSE) |
| [ROCprofiler-SDK](https://github.com/ROCm/rocprofiler-sdk) | [MIT](https://github.com/ROCm/rocprofiler-sdk/blob/amd-mainline/LICENSE) |
| [rocPyDecode](https://github.com/ROCm/rocPyDecode) | [MIT](https://github.com/ROCm/rocPyDecode/blob/develop/LICENSE) |
| [rocRAND](https://github.com/ROCm/rocRAND/) | [MIT](https://github.com/ROCm/rocRAND/blob/develop/LICENSE.txt) |
| [ROCr Debug Agent](https://github.com/ROCm/rocr_debug_agent/) | [The University of Illinois/NCSA](https://github.com/ROCm/rocr_debug_agent/blob/amd-staging/LICENSE.txt) |
| [ROCR-Runtime](https://github.com/ROCm/ROCR-Runtime/) | [The University of Illinois/NCSA](https://github.com/ROCm/ROCR-Runtime/blob/master/LICENSE.txt) |
| [ROCR-Runtime](https://github.com/ROCm/ROCR-Runtime/) | [The University of Illinois/NCSA](https://github.com/ROCm/ROCR-Runtime/blob/amd-staging/LICENSE.txt) |
| [rocSOLVER](https://github.com/ROCm/rocSOLVER/) | [BSD-2-Clause](https://github.com/ROCm/rocSOLVER/blob/develop/LICENSE.md) |
| [rocSPARSE](https://github.com/ROCm/rocSPARSE/) | [MIT](https://github.com/ROCm/rocSPARSE/blob/develop/LICENSE.md) |
| [rocThrust](https://github.com/ROCm/rocThrust/) | [Apache 2.0](https://github.com/ROCm/rocThrust/blob/develop/LICENSE) |
@@ -99,7 +99,7 @@ repositories to distinguish from open sourced packages.
The following additional terms and conditions apply to your use of ROCm technical documentation.
```
©2023 - 2024 Advanced Micro Devices, Inc. All rights reserved.
©2023 - 2025 Advanced Micro Devices, Inc. All rights reserved.
The information presented in this document is for informational purposes only
and may contain technical inaccuracies, omissions, and typographical errors. The

View File

@@ -1,118 +1,129 @@
ROCm Version,6.3.0,6.2.4,6.2.2,6.2.1,6.2.0, 6.1.2, 6.1.1, 6.1.0, 6.0.2, 6.0.0
:ref:`Operating systems & kernels <OS-kernel-versions>`,Ubuntu 24.04.2,"Ubuntu 24.04.1, 24.04","Ubuntu 24.04.1, 24.04","Ubuntu 24.04.1, 24.04",Ubuntu 24.04,,,,,
,Ubuntu 22.04.5,"Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3, 22.04.2","Ubuntu 22.04.4, 22.04.3, 22.04.2"
,,,,,,"Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5"
,"RHEL 9.5, 9.4","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4 [#red-hat94-past-60]_, 9.3, 9.2","RHEL 9.4 [#red-hat94-past-60]_, 9.3, 9.2","RHEL 9.4 [#red-hat94-past-60]_, 9.3, 9.2","RHEL 9.3, 9.2","RHEL 9.3, 9.2"
,"RHEL 8.10","RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8"
,"SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4"
,,,,,,CentOS 7.9,CentOS 7.9,CentOS 7.9,CentOS 7.9,CentOS 7.9
,Oracle Linux 8.10 [#oracle89-past-60]_,Oracle Linux 8.9 [#oracle89-past-60]_,Oracle Linux 8.9 [#oracle89-past-60]_,Oracle Linux 8.9 [#oracle89-past-60]_,Oracle Linux 8.9 [#oracle89-past-60]_,Oracle Linux 8.9 [#oracle89-past-60]_,Oracle Linux 8.9 [#oracle89-past-60]_,,,
,.. _architecture-support-compatibility-matrix-past-60:,,,,,,,,,
:doc:`Architecture <rocm-install-on-linux:reference/system-requirements>`,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3
,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2
,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA
,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3
,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2
,.. _gpu-support-compatibility-matrix-past-60:,,,,,,,,,
:doc:`GPU / LLVM target <rocm-install-on-linux:reference/system-requirements>`,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100
,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030
,gfx942,gfx942 [#mi300_624-past-60]_,gfx942 [#mi300_622-past-60]_,gfx942 [#mi300_621-past-60]_,gfx942 [#mi300_620-past-60]_, gfx942 [#mi300_612-past-60]_, gfx942 [#mi300_611-past-60]_, gfx942 [#mi300_610-past-60]_, gfx942 [#mi300_602-past-60]_, gfx942 [#mi300_600-past-60]_
,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a
,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908
,,,,,,,,,,
FRAMEWORK SUPPORT,.. _framework-support-compatibility-matrix-past-60:,,,,,,,,,
:doc:`PyTorch <rocm-install-on-linux:install/3rd-party/pytorch-install>`,"2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13"
:doc:`TensorFlow <rocm-install-on-linux:install/3rd-party/tensorflow-install>`,"2.17.0, 2.16.2, 2.15.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.14.0, 2.13.1, 2.12.1","2.14.0, 2.13.1, 2.12.1"
:doc:`JAX <rocm-install-on-linux:install/3rd-party/jax-install>`,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26
`ONNX Runtime <https://onnxruntime.ai/docs/build/eps.html#amd-migraphx>`_,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.14.1,1.14.1
,,,,,,,,,,
THIRD PARTY COMMS,.. _thirdpartycomms-support-compatibility-matrix-past-60:,,,,,,,,,
`UCC <https://github.com/ROCm/ucc>`_,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.2.0,>=1.2.0
`UCX <https://github.com/ROCm/ucx>`_,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.14.1,>=1.14.1,>=1.14.1,>=1.14.1,>=1.14.1
,,,,,,,,,,
THIRD PARTY ALGORITHM,.. _thirdpartyalgorithm-support-compatibility-matrix-past-60:,,,,,,,,,
Thrust,2.3.2,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.0.1,2.0.1
CUB,2.3.2,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.0.1,2.0.1
,,,,,,,,,,
KFD & USER SPACE [#kfd_support-past-60]_,.. _kfd-userspace-support-compatibility-matrix-past-60:,,,,,,,,,
Tested user space versions,"6.3.x, 6.2.x, 6.1.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.2.x, 6.1.x, 6.0.x, 5.7.x, 5.6.x","6.2.x, 6.1.x, 6.0.x, 5.7.x, 5.6.x"
,,,,,,,,,,
ML & COMPUTER VISION,.. _mllibs-support-compatibility-matrix-past-60:,,,,,,,,,
:doc:`Composable Kernel <composable_kernel:index>`,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0
:doc:`MIGraphX <amdmigraphx:index>`,2.11.0,2.10.0,2.10.0,2.10.0,2.10.0,2.9.0,2.9.0,2.9.0,2.8.0,2.8.0
:doc:`MIOpen <miopen:index>`,3.3.0,3.2.0,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0
:doc:`MIVisionX <mivisionx:index>`,3.1.0,3.0.0,3.0.0,3.0.0,3.0.0,2.5.0,2.5.0,2.5.0,2.5.0,2.5.0
:doc:`rocAL <rocal:index>`,2.1.0,2.0.0,2.0.0,2.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0
:doc:`rocDecode <rocdecode:index>`,0.8.0,0.6.0,0.6.0,0.6.0,0.6.0,0.6.0,0.5.0,0.5.0,N/A,N/A
:doc:`rocJPEG <rocjpeg:index>`,0.6.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`rocPyDecode <rocpydecode:index>`,0.2.0,0.1.0,0.1.0,0.1.0,0.1.0,N/A,N/A,N/A,N/A,N/A
:doc:`RPP <rpp:index>`,1.9.1,1.8.0,1.8.0,1.8.0,1.8.0,1.5.0,1.5.0,1.5.0,1.4.0,1.4.0
,,,,,,,,,,
COMMUNICATION,.. _commlibs-support-compatibility-matrix-past-60:,,,,,,,,,
:doc:`RCCL <rccl:index>`,2.21.5,2.20.5,2.20.5,2.20.5,2.20.5,2.18.6,2.18.6,2.18.6,2.18.3,2.18.3
,,,,,,,,,,
MATH LIBS,.. _mathlibs-support-compatibility-matrix-past-60:,,,,,,,,,
`half <https://github.com/ROCm/half>`_ ,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0
:doc:`hipBLAS <hipblas:index>`,2.3.0,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.0.0,2.0.0
:doc:`hipBLASLt <hipblaslt:index>`,0.10.0,0.8.0,0.8.0,0.8.0,0.8.0,0.7.0,0.7.0,0.7.0,0.6.0,0.6.0
:doc:`hipFFT <hipfft:index>`,1.0.17,1.0.16,1.0.15,1.0.15,1.0.14,1.0.14,1.0.14,1.0.14,1.0.13,1.0.13
:doc:`hipfort <hipfort:index>`,0.5.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0
:doc:`hipRAND <hiprand:index>`,2.11.0,2.11.1,2.11.0,2.11.0,2.11.0,2.10.16,2.10.16,2.10.16,2.10.16,2.10.16
:doc:`hipSOLVER <hipsolver:index>`,2.3.0,2.2.0,2.2.0,2.2.0,2.2.0,2.1.1,2.1.1,2.1.0,2.0.0,2.0.0
:doc:`hipSPARSE <hipsparse:index>`,3.1.2,3.1.1,3.1.1,3.1.1,3.1.1,3.0.1,3.0.1,3.0.1,3.0.0,3.0.0
:doc:`hipSPARSELt <hipsparselt:index>`,0.2.2,0.2.1,0.2.1,0.2.1,0.2.1,0.2.0,0.1.0,0.1.0,0.1.0,0.1.0
:doc:`rocALUTION <rocalution:index>`,3.2.1,3.2.1,3.2.0,3.2.0,3.2.0,3.1.1,3.1.1,3.1.1,3.0.3,3.0.3
:doc:`rocBLAS <rocblas:index>`,4.3.0,4.2.4,4.2.1,4.2.1,4.2.0,4.1.2,4.1.0,4.1.0,4.0.0,4.0.0
:doc:`rocFFT <rocfft:index>`,1.0.31,1.0.30,1.0.29,1.0.29,1.0.28,1.0.27,1.0.27,1.0.26,1.0.25,1.0.23
:doc:`rocRAND <rocrand:index>`,3.2.0,3.1.1,3.1.0,3.1.0,3.1.0,3.0.1,3.0.1,3.0.1,3.0.0,2.10.17
:doc:`rocSOLVER <rocsolver:index>`,3.27.0,3.26.2,3.26.0,3.26.0,3.26.0,3.25.0,3.25.0,3.25.0,3.24.0,3.24.0
:doc:`rocSPARSE <rocsparse:index>`,3.3.0,3.2.1,3.2.0,3.2.0,3.2.0,3.1.2,3.1.2,3.1.2,3.0.2,3.0.2
:doc:`rocWMMA <rocwmma:index>`,1.6.0,1.5.0,1.5.0,1.5.0,1.5.0,1.4.0,1.4.0,1.4.0,1.3.0,1.3.0
:doc:`Tensile <tensile:index>`,4.42.0,4.41.0,4.41.0,4.41.0,4.41.0,4.40.0,4.40.0,4.40.0,4.39.0,4.39.0
,,,,,,,,,,
PRIMITIVES,.. _primitivelibs-support-compatibility-matrix-past-60:,,,,,,,,,
:doc:`hipCUB <hipcub:index>`,3.3.0,3.2.1,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0
:doc:`hipTensor <hiptensor:index>`,1.4.0,1.3.0,1.3.0,1.3.0,1.3.0,1.2.0,1.2.0,1.2.0,1.1.0,1.1.0
:doc:`rocPRIM <rocprim:index>`,3.3.0,3.2.2,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0
:doc:`rocThrust <rocthrust:index>`,3.3.0,3.1.1,3.1.0,3.1.0,3.0.1,3.0.1,3.0.1,3.0.1,3.0.0,3.0.0
,,,,,,,,,,
SUPPORT LIBS,,,,,,,,,,
`hipother <https://github.com/ROCm/hipother>`_,6.3.42131,6.2.41134,6.2.41134,6.2.41134,6.2.41133,6.1.40093,6.1.40092,6.1.40091,6.1.32831,6.1.32830
`rocm-core <https://github.com/ROCm/rocm-core>`_,6.3.0,6.2.4,6.2.2,6.2.1,6.2.0,6.1.2,6.1.1,6.1.0,6.0.2,6.0.0
`ROCT-Thunk-Interface <https://github.com/ROCm/ROCT-Thunk-Interface>`_,N/A [#ROCT-rocr-past-60]_,20240607.5.7,20240607.5.7,20240607.4.05,20240607.1.4246,20240125.5.08,20240125.5.08,20240125.3.30,20231016.2.245,20231016.2.245
,,,,,,,,,,
SYSTEM MGMT TOOLS,.. _tools-support-compatibility-matrix-past-60:,,,,,,,,,
:doc:`AMD SMI <amdsmi:index>`,24.7.1,24.6.3,24.6.3,24.6.3,24.6.2,24.5.1,24.5.1,24.4.1,23.4.2,23.4.2
:doc:`ROCm Data Center Tool <rdc:index>`,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0
:doc:`rocminfo <rocminfo:index>`,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0
:doc:`ROCm SMI <rocm_smi_lib:index>`,7.4.0,7.3.0,7.3.0,7.3.0,7.3.0,7.2.0,7.0.0,7.0.0,6.0.2,6.0.0
:doc:`ROCm Validation Suite <rocmvalidationsuite:index>`,1.1.0,1.0.60204,1.0.60202,1.0.60201,1.0.60200,1.0.60102,1.0.60101,1.0.60100,1.0.60002,1.0.60000
,,,,,,,,,,
PERFORMANCE TOOLS,,,,,,,,,,
:doc:`ROCm Bandwidth Test <rocm_bandwidth_test:index>`,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0
:doc:`ROCm Compute Profiler <rocprofiler-compute:index>`,3.0.0,2.0.1,2.0.1,2.0.1,2.0.1,N/A,N/A,N/A,N/A,N/A
:doc:`ROCm Systems Profiler <rocprofiler-systems:index>`,0.1.0,1.11.2,1.11.2,1.11.2,1.11.2,N/A,N/A,N/A,N/A,N/A
:doc:`ROCProfiler <rocprofiler:index>`,2.0.60300,2.0.60204,2.0.60202,2.0.60201,2.0.60200,2.0.60102,2.0.60101,2.0.60100,2.0.60002,2.0.60000
:doc:`ROCprofiler-SDK <rocprofiler-sdk:index>`,0.5.0,0.4.0,0.4.0,0.4.0,0.4.0,N/A,N/A,N/A,N/A,N/A
:doc:`ROCTracer <roctracer:index>`,4.1.60300,4.1.60204,4.1.60202,4.1.60201,4.1.60200,4.1.60102,4.1.60101,4.1.60100,4.1.60002,4.1.60000
,,,,,,,,,,
DEVELOPMENT TOOLS,,,,,,,,,,
:doc:`HIPIFY <hipify:index>`,18.0.0.24455,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
:doc:`ROCm CMake <rocmcmakebuildtools:index>`,0.14.0,0.13.0,0.13.0,0.13.0,0.13.0,0.12.0,0.12.0,0.12.0,0.11.0,0.11.0
:doc:`ROCdbgapi <rocdbgapi:index>`,0.77.0,0.76.0,0.76.0,0.76.0,0.76.0,0.71.0,0.71.0,0.71.0,0.71.0,0.71.0
:doc:`ROCm Debugger (ROCgdb) <rocgdb:index>`,15.2.0,14.2.0,14.2.0,14.2.0,14.2.0,14.1.0,14.1.0,14.1.0,13.2.0,13.2.0
`rocprofiler-register <https://github.com/ROCm/rocprofiler-register>`_,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.3.0,0.3.0,0.3.0,N/A,N/A
:doc:`ROCr Debug Agent <rocr_debug_agent:index>`,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3
,,,,,,,,,,
COMPILERS,.. _compilers-support-compatibility-matrix-past-60:,,,,,,,,,
`clang-ocl <https://github.com/ROCm/clang-ocl>`_,N/A,N/A,N/A,N/A,N/A,0.5.0,0.5.0,0.5.0,0.5.0,0.5.0
:doc:`hipCC <hipcc:index>`,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0
`Flang <https://github.com/ROCm/flang>`_,18.0.0.24455,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
:doc:`llvm-project <llvm-project:index>`,18.0.0.24455,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
`OpenMP <https://github.com/ROCm/llvm-project/tree/amd-staging/openmp>`_,18.0.0.24455,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
,,,,,,,,,,
RUNTIMES,.. _runtime-support-compatibility-matrix-past-60:,,,,,,,,,
:doc:`AMD CLR <hip:understand/amd_clr>`,6.3.42131,6.2.41134,6.2.41134,6.2.41134,6.2.41133,6.1.40093,6.1.40092,6.1.40091,6.1.32831,6.1.32830
:doc:`HIP <hip:index>`,6.3.42131,6.2.41134,6.2.41134,6.2.41134,6.2.41133,6.1.40093,6.1.40092,6.1.40091,6.1.32831,6.1.32830
`OpenCL Runtime <https://github.com/ROCm/clr/tree/develop/opencl>`_,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0
:doc:`ROCr Runtime <rocr-runtime:index>`,1.14.0,1.14.0,1.14.0,1.14.0,1.13.0,1.13.0,1.13.0,1.13.0,1.12.0,1.12.0
ROCm Version,6.3.1,6.3.0,6.2.4,6.2.2,6.2.1,6.2.0, 6.1.5, 6.1.2, 6.1.1, 6.1.0, 6.0.2, 6.0.0
:ref:`Operating systems & kernels <OS-kernel-versions>`,Ubuntu 24.04.2,Ubuntu 24.04.2,"Ubuntu 24.04.1, 24.04","Ubuntu 24.04.1, 24.04","Ubuntu 24.04.1, 24.04",Ubuntu 24.04,,,,,,
,Ubuntu 22.04.5,Ubuntu 22.04.5,"Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.5, 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3","Ubuntu 22.04.4, 22.04.3, 22.04.2","Ubuntu 22.04.4, 22.04.3, 22.04.2"
,,,,,,,"Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5","Ubuntu 20.04.6, 20.04.5"
,"RHEL 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4, 9.3","RHEL 9.4, 9.3, 9.2","RHEL 9.4, 9.3, 9.2","RHEL 9.4, 9.3, 9.2","RHEL 9.4, 9.3, 9.2","RHEL 9.3, 9.2","RHEL 9.3, 9.2"
,RHEL 8.10,RHEL 8.10,"RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.10, 8.9","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8","RHEL 8.9, 8.8"
,"SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4","SLES 15 SP5, SP4"
,,,,,,,,CentOS 7.9,CentOS 7.9,CentOS 7.9,CentOS 7.9,CentOS 7.9
,Oracle Linux 8.10 [#mic300x-past-60]_,Oracle Linux 8.10 [#mic300x-past-60]_,Oracle Linux 8.9 [#mic300x-past-60]_,Oracle Linux 8.9 [#mic300x-past-60]_,Oracle Linux 8.9 [#mic300x-past-60]_,Oracle Linux 8.9 [#mic300x-past-60]_,Oracle Linux 8.9 [#mic300x-past-60]_,Oracle Linux 8.9 [#mic300x-past-60]_,Oracle Linux 8.9 [#mic300x-past-60]_,,,
,Debian 12 [#single-node-past-60]_,,,,,,,,,,,
,Azure Linux 3.0 [#mic300x-past-60]_,,,,,,,,,,,
,.. _architecture-support-compatibility-matrix-past-60:,,,,,,,,,,,
:doc:`Architecture <rocm-install-on-linux:reference/system-requirements>`,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3,CDNA3
,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2,CDNA2
,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA,CDNA
,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3,RDNA3
,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2,RDNA2
,.. _gpu-support-compatibility-matrix-past-60:,,,,,,,,,,,
:doc:`GPU / LLVM target <rocm-install-on-linux:reference/system-requirements>`,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100,gfx1100
,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030,gfx1030
,gfx942,gfx942,gfx942 [#mi300_624-past-60]_,gfx942 [#mi300_622-past-60]_,gfx942 [#mi300_621-past-60]_,gfx942 [#mi300_620-past-60]_, gfx942 [#mi300_612-past-60]_, gfx942 [#mi300_612-past-60]_, gfx942 [#mi300_611-past-60]_, gfx942 [#mi300_610-past-60]_, gfx942 [#mi300_602-past-60]_, gfx942 [#mi300_600-past-60]_
,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a,gfx90a
,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908,gfx908
,,,,,,,,,,,,
FRAMEWORK SUPPORT,.. _framework-support-compatibility-matrix-past-60:,,,,,,,,,,,
:doc:`PyTorch <../compatibility/ml-compatibility/pytorch-compatibility>`,"2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13","2.1, 2.0, 1.13"
:doc:`TensorFlow <../compatibility/ml-compatibility/tensorflow-compatibility>`,"2.17.0, 2.16.2, 2.15.1","2.17.0, 2.16.2, 2.15.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.16.1, 2.15.1, 2.14.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.15.0, 2.14.0, 2.13.1","2.14.0, 2.13.1, 2.12.1","2.14.0, 2.13.1, 2.12.1"
:doc:`JAX <../compatibility/ml-compatibility/jax-compatibility>`,0.4.31,0.4.31,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26,0.4.26
`ONNX Runtime <https://onnxruntime.ai/docs/build/eps.html#amd-migraphx>`_,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.17.3,1.14.1,1.14.1
,,,,,,,,,,,,
THIRD PARTY COMMS,.. _thirdpartycomms-support-compatibility-matrix-past-60:,,,,,,,,,,,
`UCC <https://github.com/ROCm/ucc>`_,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.3.0,>=1.2.0,>=1.2.0
`UCX <https://github.com/ROCm/ucx>`_,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.15.0,>=1.14.1,>=1.14.1,>=1.14.1,>=1.14.1,>=1.14.1,>=1.14.1
,,,,,,,,,,,,
THIRD PARTY ALGORITHM,.. _thirdpartyalgorithm-support-compatibility-matrix-past-60:,,,,,,,,,,,
Thrust,2.3.2,2.3.2,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.1.0,2.0.1,2.0.1
CUB,2.3.2,2.3.2,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.1.0,2.0.1,2.0.1
,,,,,,,,,,,,
KMD & USER SPACE [#kfd_support-past-60]_,.. _kfd-userspace-support-compatibility-matrix-past-60:,,,,,,,,,,,
Tested user space versions,"6.3.x, 6.2.x, 6.1.x","6.3.x, 6.2.x, 6.1.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x","6.2.x, 6.1.x, 6.0.x, 5.7.x, 5.6.x","6.2.x, 6.1.x, 6.0.x, 5.7.x, 5.6.x"
,,,,,,,,,,,,
ML & COMPUTER VISION,.. _mllibs-support-compatibility-matrix-past-60:,,,,,,,,,,,
:doc:`Composable Kernel <composable_kernel:index>`,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0,1.1.0
:doc:`MIGraphX <amdmigraphx:index>`,2.11.0,2.11.0,2.10.0,2.10.0,2.10.0,2.10.0,2.9.0,2.9.0,2.9.0,2.9.0,2.8.0,2.8.0
:doc:`MIOpen <miopen:index>`,3.3.0,3.3.0,3.2.0,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0
:doc:`MIVisionX <mivisionx:index>`,3.1.0,3.1.0,3.0.0,3.0.0,3.0.0,3.0.0,2.5.0,2.5.0,2.5.0,2.5.0,2.5.0,2.5.0
:doc:`rocAL <rocal:index>`,2.1.0,2.1.0,2.0.0,2.0.0,2.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0
:doc:`rocDecode <rocdecode:index>`,0.8.0,0.8.0,0.6.0,0.6.0,0.6.0,0.6.0,0.6.0,0.6.0,0.5.0,0.5.0,N/A,N/A
:doc:`rocJPEG <rocjpeg:index>`,0.6.0,0.6.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`rocPyDecode <rocpydecode:index>`,0.2.0,0.2.0,0.1.0,0.1.0,0.1.0,0.1.0,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`RPP <rpp:index>`,1.9.1,1.9.1,1.8.0,1.8.0,1.8.0,1.8.0,1.5.0,1.5.0,1.5.0,1.5.0,1.4.0,1.4.0
,,,,,,,,,,,,
COMMUNICATION,.. _commlibs-support-compatibility-matrix-past-60:,,,,,,,,,,,
:doc:`RCCL <rccl:index>`,2.21.5,2.21.5,2.20.5,2.20.5,2.20.5,2.20.5,2.18.6,2.18.6,2.18.6,2.18.6,2.18.3,2.18.3
,,,,,,,,,,,,
MATH LIBS,.. _mathlibs-support-compatibility-matrix-past-60:,,,,,,,,,,,
`half <https://github.com/ROCm/half>`_ ,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0,1.12.0
:doc:`hipBLAS <hipblas:index>`,2.3.0,2.3.0,2.2.0,2.2.0,2.2.0,2.2.0,2.1.0,2.1.0,2.1.0,2.1.0,2.0.0,2.0.0
:doc:`hipBLASLt <hipblaslt:index>`,0.10.0,0.10.0,0.8.0,0.8.0,0.8.0,0.8.0,0.7.0,0.7.0,0.7.0,0.7.0,0.6.0,0.6.0
:doc:`hipFFT <hipfft:index>`,1.0.17,1.0.17,1.0.16,1.0.15,1.0.15,1.0.14,1.0.14,1.0.14,1.0.14,1.0.14,1.0.13,1.0.13
:doc:`hipfort <hipfort:index>`,0.5.0,0.5.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0
:doc:`hipRAND <hiprand:index>`,2.11.1,2.11.0,2.11.1,2.11.0,2.11.0,2.11.0,2.10.16,2.10.16,2.10.16,2.10.16,2.10.16,2.10.16
:doc:`hipSOLVER <hipsolver:index>`,2.3.0,2.3.0,2.2.0,2.2.0,2.2.0,2.2.0,2.1.1,2.1.1,2.1.1,2.1.0,2.0.0,2.0.0
:doc:`hipSPARSE <hipsparse:index>`,3.1.2,3.1.2,3.1.1,3.1.1,3.1.1,3.1.1,3.0.1,3.0.1,3.0.1,3.0.1,3.0.0,3.0.0
:doc:`hipSPARSELt <hipsparselt:index>`,0.2.2,0.2.2,0.2.1,0.2.1,0.2.1,0.2.1,0.2.0,0.2.0,0.1.0,0.1.0,0.1.0,0.1.0
:doc:`rocALUTION <rocalution:index>`,3.2.1,3.2.1,3.2.1,3.2.0,3.2.0,3.2.0,3.1.1,3.1.1,3.1.1,3.1.1,3.0.3,3.0.3
:doc:`rocBLAS <rocblas:index>`,4.3.0,4.3.0,4.2.4,4.2.1,4.2.1,4.2.0,4.1.2,4.1.2,4.1.0,4.1.0,4.0.0,4.0.0
:doc:`rocFFT <rocfft:index>`,1.0.31,1.0.31,1.0.30,1.0.29,1.0.29,1.0.28,1.0.27,1.0.27,1.0.27,1.0.26,1.0.25,1.0.23
:doc:`rocRAND <rocrand:index>`,3.2.0,3.2.0,3.1.1,3.1.0,3.1.0,3.1.0,3.0.1,3.0.1,3.0.1,3.0.1,3.0.0,2.10.17
:doc:`rocSOLVER <rocsolver:index>`,3.27.0,3.27.0,3.26.2,3.26.0,3.26.0,3.26.0,3.25.0,3.25.0,3.25.0,3.25.0,3.24.0,3.24.0
:doc:`rocSPARSE <rocsparse:index>`,3.3.0,3.3.0,3.2.1,3.2.0,3.2.0,3.2.0,3.1.2,3.1.2,3.1.2,3.1.2,3.0.2,3.0.2
:doc:`rocWMMA <rocwmma:index>`,1.6.0,1.6.0,1.5.0,1.5.0,1.5.0,1.5.0,1.4.0,1.4.0,1.4.0,1.4.0,1.3.0,1.3.0
:doc:`Tensile <tensile:src/index>`,4.42.0,4.42.0,4.41.0,4.41.0,4.41.0,4.41.0,4.40.0,4.40.0,4.40.0,4.40.0,4.39.0,4.39.0
,,,,,,,,,,,,
PRIMITIVES,.. _primitivelibs-support-compatibility-matrix-past-60:,,,,,,,,,,,
:doc:`hipCUB <hipcub:index>`,3.3.0,3.3.0,3.2.1,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0
:doc:`hipTensor <hiptensor:index>`,1.4.0,1.4.0,1.3.0,1.3.0,1.3.0,1.3.0,1.2.0,1.2.0,1.2.0,1.2.0,1.1.0,1.1.0
:doc:`rocPRIM <rocprim:index>`,3.3.0,3.3.0,3.2.2,3.2.0,3.2.0,3.2.0,3.1.0,3.1.0,3.1.0,3.1.0,3.0.0,3.0.0
:doc:`rocThrust <rocthrust:index>`,3.3.0,3.3.0,3.1.1,3.1.0,3.1.0,3.0.1,3.0.1,3.0.1,3.0.1,3.0.1,3.0.0,3.0.0
,,,,,,,,,,,,
SUPPORT LIBS,,,,,,,,,,,,
`hipother <https://github.com/ROCm/hipother>`_,6.3.42133,6.3.42131,6.2.41134,6.2.41134,6.2.41134,6.2.41133,6.1.40093,6.1.40093,6.1.40092,6.1.40091,6.1.32831,6.1.32830
`rocm-core <https://github.com/ROCm/rocm-core>`_,6.3.1,6.3.0,6.2.4,6.2.2,6.2.1,6.2.0,6.1.5,6.1.2,6.1.1,6.1.0,6.0.2,6.0.0
`ROCT-Thunk-Interface <https://github.com/ROCm/ROCT-Thunk-Interface>`_,N/A [#ROCT-rocr-past-60]_,N/A [#ROCT-rocr-past-60]_,20240607.5.7,20240607.5.7,20240607.4.05,20240607.1.4246,20240125.5.08,20240125.5.08,20240125.5.08,20240125.3.30,20231016.2.245,20231016.2.245
,,,,,,,,,,,,
SYSTEM MGMT TOOLS,.. _tools-support-compatibility-matrix-past-60:,,,,,,,,,,,
:doc:`AMD SMI <amdsmi:index>`,24.7.1,24.7.1,24.6.3,24.6.3,24.6.3,24.6.2,24.5.1,24.5.1,24.5.1,24.4.1,23.4.2,23.4.2
:doc:`ROCm Data Center Tool <rdc:index>`,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0,0.3.0
:doc:`rocminfo <rocminfo:index>`,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0
:doc:`ROCm SMI <rocm_smi_lib:index>`,7.4.0,7.4.0,7.3.0,7.3.0,7.3.0,7.3.0,7.2.0,7.2.0,7.0.0,7.0.0,6.0.2,6.0.0
:doc:`ROCm Validation Suite <rocmvalidationsuite:index>`,1.1.0,1.1.0,1.0.60204,1.0.60202,1.0.60201,1.0.60200,1.0.60105,1.0.60102,1.0.60101,1.0.60100,1.0.60002,1.0.60000
,,,,,,,,,,,,
PERFORMANCE TOOLS,,,,,,,,,,,,
:doc:`ROCm Bandwidth Test <rocm_bandwidth_test:index>`,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0,1.4.0
:doc:`ROCm Compute Profiler <rocprofiler-compute:index>`,3.0.0,3.0.0,2.0.1,2.0.1,2.0.1,2.0.1,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`ROCm Systems Profiler <rocprofiler-systems:index>`,0.1.0,0.1.0,1.11.2,1.11.2,1.11.2,1.11.2,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`ROCProfiler <rocprofiler:index>`,2.0.60301,2.0.60300,2.0.60204,2.0.60202,2.0.60201,2.0.60200,2.0.60105,2.0.60102,2.0.60101,2.0.60100,2.0.60002,2.0.60000
:doc:`ROCprofiler-SDK <rocprofiler-sdk:index>`,0.5.0,0.5.0,0.4.0,0.4.0,0.4.0,0.4.0,N/A,N/A,N/A,N/A,N/A,N/A
:doc:`ROCTracer <roctracer:index>`,4.1.60301,4.1.60300,4.1.60204,4.1.60202,4.1.60201,4.1.60200,4.1.60105,4.1.60102,4.1.60101,4.1.60100,4.1.60002,4.1.60000
,,,,,,,,,,,,
DEVELOPMENT TOOLS,,,,,,,,,,,,
:doc:`HIPIFY <hipify:index>`,18.0.0.24491,18.0.0.24455,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
:doc:`ROCm CMake <rocmcmakebuildtools:index>`,0.14.0,0.14.0,0.13.0,0.13.0,0.13.0,0.13.0,0.12.0,0.12.0,0.12.0,0.12.0,0.11.0,0.11.0
:doc:`ROCdbgapi <rocdbgapi:index>`,0.77.0,0.77.0,0.76.0,0.76.0,0.76.0,0.76.0,0.71.0,0.71.0,0.71.0,0.71.0,0.71.0,0.71.0
:doc:`ROCm Debugger (ROCgdb) <rocgdb:index>`,15.2.0,15.2.0,14.2.0,14.2.0,14.2.0,14.2.0,14.1.0,14.1.0,14.1.0,14.1.0,13.2.0,13.2.0
`rocprofiler-register <https://github.com/ROCm/rocprofiler-register>`_,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.4.0,0.3.0,0.3.0,0.3.0,0.3.0,N/A,N/A
:doc:`ROCr Debug Agent <rocr_debug_agent:index>`,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3,2.0.3
,,,,,,,,,,,,
COMPILERS,.. _compilers-support-compatibility-matrix-past-60:,,,,,,,,,,,
`clang-ocl <https://github.com/ROCm/clang-ocl>`_,N/A,N/A,N/A,N/A,N/A,N/A,0.5.0,0.5.0,0.5.0,0.5.0,0.5.0,0.5.0
:doc:`hipCC <hipcc:index>`,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.1.1,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0,1.0.0
`Flang <https://github.com/ROCm/flang>`_,18.0.0.24491,18.0.0.24455,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
:doc:`llvm-project <llvm-project:index>`,18.0.0.24491,18.0.0.24455,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
`OpenMP <https://github.com/ROCm/llvm-project/tree/amd-staging/openmp>`_,18.0.0.24491,18.0.0.24455,18.0.0.24392,18.0.0.24355,18.0.0.24355,18.0.0.24232,17.0.0.24193,17.0.0.24193,17.0.0.24154,17.0.0.24103,17.0.0.24012,17.0.0.23483
,,,,,,,,,,,,
RUNTIMES,.. _runtime-support-compatibility-matrix-past-60:,,,,,,,,,,,
:doc:`AMD CLR <hip:understand/amd_clr>`,6.3.42133,6.3.42131,6.2.41134,6.2.41134,6.2.41134,6.2.41133,6.1.40093,6.1.40093,6.1.40092,6.1.40091,6.1.32831,6.1.32830
:doc:`HIP <hip:index>`,6.3.42133,6.3.42131,6.2.41134,6.2.41134,6.2.41134,6.2.41133,6.1.40093,6.1.40093,6.1.40092,6.1.40091,6.1.32831,6.1.32830
`OpenCL Runtime <https://github.com/ROCm/clr/tree/develop/opencl>`_,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0,2.0.0
:doc:`ROCr Runtime <rocr-runtime:index>`,1.14.0,1.14.0,1.14.0,1.14.0,1.14.0,1.13.0,1.13.0,1.13.0,1.13.0,1.13.0,1.12.0,1.12.0
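
Each column in the matrix above is keyed to an installed ROCm release. A minimal sketch for finding the column that applies to a given machine (assuming a default `/opt/rocm` install, where the `rocm-core` package populates a version file):

```python
# Minimal sketch: report the installed ROCm release so you can find the
# matching column in the compatibility matrix. Assumes a default /opt/rocm
# install; the .info/version file is provided by rocm-core.
from pathlib import Path

version_file = Path("/opt/rocm/.info/version")
if version_file.is_file():
    # The file holds a string such as "6.3.1-<build>"; keep the x.y.z part.
    release = version_file.read_text().strip().split("-")[0]
    print(f"Installed ROCm release: {release}")
else:
    print("No ROCm installation found under /opt/rocm")
```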

View File

@@ -23,17 +23,17 @@ compatibility and system requirements.
.. container:: format-big-table
.. csv-table::
:header: "ROCm Version", "6.3.0", "6.2.4", "6.1.0"
:header: "ROCm Version", "6.3.1", "6.3.0", "6.2.0"
:stub-columns: 1
:ref:`Operating systems & kernels <OS-kernel-versions>`,Ubuntu 24.04.2,"Ubuntu 24.04.1, 24.04",
,Ubuntu 22.04.5,"Ubuntu 22.04.5, 22.04.4","Ubuntu 22.04.4, 22.04.3"
,,,"Ubuntu 20.04.6, 20.04.5"
,"RHEL 9.5, 9.4","RHEL 9.4, 9.3","RHEL 9.4 [#red-hat94]_, 9.3, 9.2"
,"RHEL 8.10","RHEL 8.10, 8.9","RHEL 8.9, 8.8"
,"SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP5, SP4"
,,,CentOS 7.9
,Oracle Linux 8.10 [#oracle89]_,Oracle Linux 8.9 [#oracle89]_,
:ref:`Operating systems & kernels <OS-kernel-versions>`,Ubuntu 24.04.2,Ubuntu 24.04.2,Ubuntu 24.04
,Ubuntu 22.04.5,Ubuntu 22.04.5,"Ubuntu 22.04.5, 22.04.4"
,"RHEL 9.5, 9.4","RHEL 9.5, 9.4","RHEL 9.4, 9.3"
,RHEL 8.10,RHEL 8.10,"RHEL 8.10, 8.9"
,"SLES 15 SP6, SP5","SLES 15 SP6, SP5","SLES 15 SP6, SP5"
,Oracle Linux 8.10 [#mi300x]_,Oracle Linux 8.10 [#mi300x]_,Oracle Linux 8.9 [#mi300x]_
,Debian 12 [#single-node]_,,
,Azure Linux 3.0 [#mi300x]_,,
,.. _architecture-support-compatibility-matrix:,,
:doc:`Architecture <rocm-install-on-linux:reference/system-requirements>`,CDNA3,CDNA3,CDNA3
,CDNA2,CDNA2,CDNA2
@@ -43,115 +43,115 @@ compatibility and system requirements.
,.. _gpu-support-compatibility-matrix:,,
:doc:`GPU / LLVM target <rocm-install-on-linux:reference/system-requirements>`,gfx1100,gfx1100,gfx1100
,gfx1030,gfx1030,gfx1030
,gfx942,gfx942 [#mi300_624]_, gfx942 [#mi300_610]_
,gfx942,gfx942,gfx942 [#mi300_620]_
,gfx90a,gfx90a,gfx90a
,gfx908,gfx908,gfx908
,,,
FRAMEWORK SUPPORT,.. _framework-support-compatibility-matrix:,,
:doc:`PyTorch <rocm-install-on-linux:install/3rd-party/pytorch-install>`,"2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13","2.1, 2.0, 1.13"
:doc:`TensorFlow <rocm-install-on-linux:install/3rd-party/tensorflow-install>`,"2.17.0, 2.16.2, 2.15.1","2.16.1, 2.15.1, 2.14.1","2.15.0, 2.14.0, 2.13.1"
:doc:`JAX <rocm-install-on-linux:install/3rd-party/jax-install>`,0.4.26,0.4.26,0.4.26
:doc:`PyTorch <../compatibility/ml-compatibility/pytorch-compatibility>`,"2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.4, 2.3, 2.2, 2.1, 2.0, 1.13","2.3, 2.2, 2.1, 2.0, 1.13"
:doc:`TensorFlow <../compatibility/ml-compatibility/tensorflow-compatibility>`,"2.17.0, 2.16.2, 2.15.1","2.17.0, 2.16.2, 2.15.1","2.16.1, 2.15.1, 2.14.1"
:doc:`JAX <../compatibility/ml-compatibility/jax-compatibility>`,0.4.31,0.4.31,0.4.26
`ONNX Runtime <https://onnxruntime.ai/docs/build/eps.html#amd-migraphx>`_,1.17.3,1.17.3,1.17.3
,,,
THIRD PARTY COMMS,.. _thirdpartycomms-support-compatibility-matrix:,,
`UCC <https://github.com/ROCm/ucc>`_,>=1.3.0,>=1.3.0,>=1.3.0
`UCX <https://github.com/ROCm/ucx>`_,>=1.15.0,>=1.15.0,>=1.14.1
`UCX <https://github.com/ROCm/ucx>`_,>=1.15.0,>=1.15.0,>=1.15.0
,,,
THIRD PARTY ALGORITHM,.. _thirdpartyalgorithm-support-compatibility-matrix:,,
Thrust,2.3.2,2.2.0,2.1.0
CUB,2.3.2,2.2.0,2.1.0
Thrust,2.3.2,2.3.2,2.2.0
CUB,2.3.2,2.3.2,2.2.0
,,,
KFD & USER SPACE [#kfd_support]_,.. _kfd-userspace-support-compatibility-matrix:,,
Tested user space versions,"6.3.x, 6.2.x, 6.1.x","6.3.x, 6.2.x, 6.1.x, 6.0.x","6.3.x, 6.2.x, 6.1.x, 6.0.x, 5.7.x"
KMD & USER SPACE [#kfd_support]_,.. _kfd-userspace-support-compatibility-matrix:,,
Tested user space versions,"6.3.x, 6.2.x, 6.1.x","6.3.x, 6.2.x, 6.1.x","6.3.x, 6.2.x, 6.1.x, 6.0.x"
,,,
ML & COMPUTER VISION,.. _mllibs-support-compatibility-matrix:,,
:doc:`Composable Kernel <composable_kernel:index>`,1.1.0,1.1.0,1.1.0
:doc:`MIGraphX <amdmigraphx:index>`,2.11.0,2.10.0,2.9.0
:doc:`MIOpen <miopen:index>`,3.3.0,3.2.0,3.1.0
:doc:`MIVisionX <mivisionx:index>`,3.1.0,3.0.0,2.5.0
:doc:`rocAL <rocal:index>`,2.1.0,2.0.0,1.0.0
:doc:`rocDecode <rocdecode:index>`,0.8.0,0.6.0,0.5.0
:doc:`rocJPEG <rocjpeg:index>`,0.6.0,N/A,N/A
:doc:`rocPyDecode <rocpydecode:index>`,0.2.0,0.1.0,N/A
:doc:`RPP <rpp:index>`,1.9.1,1.8.0,1.5.0
:doc:`MIGraphX <amdmigraphx:index>`,2.11.0,2.11.0,2.10.0
:doc:`MIOpen <miopen:index>`,3.3.0,3.3.0,3.2.0
:doc:`MIVisionX <mivisionx:index>`,3.1.0,3.1.0,3.0.0
:doc:`rocAL <rocal:index>`,2.1.0,2.1.0,1.0.0
:doc:`rocDecode <rocdecode:index>`,0.8.0,0.8.0,0.6.0
:doc:`rocJPEG <rocjpeg:index>`,0.6.0,0.6.0,N/A
:doc:`rocPyDecode <rocpydecode:index>`,0.2.0,0.2.0,0.1.0
:doc:`RPP <rpp:index>`,1.9.1,1.9.1,1.8.0
,,,
COMMUNICATION,.. _commlibs-support-compatibility-matrix:,,
:doc:`RCCL <rccl:index>`,2.21.5,2.20.5,2.18.6
:doc:`RCCL <rccl:index>`,2.21.5,2.21.5,2.20.5
,,,
MATH LIBS,.. _mathlibs-support-compatibility-matrix:,,
`half <https://github.com/ROCm/half>`_ ,1.12.0,1.12.0,1.12.0
:doc:`hipBLAS <hipblas:index>`,2.3.0,2.2.0,2.1.0
:doc:`hipBLASLt <hipblaslt:index>`,0.10.0,0.8.0,0.7.0
:doc:`hipFFT <hipfft:index>`,1.0.17,1.0.16,1.0.14
:doc:`hipfort <hipfort:index>`,0.5.0,0.4.0,0.4.0
:doc:`hipRAND <hiprand:index>`,2.11.0,2.11.1,2.10.16
:doc:`hipSOLVER <hipsolver:index>`,2.3.0,2.2.0,2.1.0
:doc:`hipSPARSE <hipsparse:index>`,3.1.2,3.1.1,3.0.1
:doc:`hipSPARSELt <hipsparselt:index>`,0.2.2,0.2.1,0.1.0
:doc:`rocALUTION <rocalution:index>`,3.2.1,3.2.1,3.1.1
:doc:`rocBLAS <rocblas:index>`,4.3.0,4.2.4,4.1.0
:doc:`rocFFT <rocfft:index>`,1.0.31,1.0.30,1.0.26
:doc:`rocRAND <rocrand:index>`,3.2.0,3.1.1,3.0.1
:doc:`rocSOLVER <rocsolver:index>`,3.27.0,3.26.2,3.25.0
:doc:`rocSPARSE <rocsparse:index>`,3.3.0,3.2.1,3.1.2
:doc:`rocWMMA <rocwmma:index>`,1.6.0,1.5.0,1.4.0
:doc:`Tensile <tensile:index>`,4.42.0,4.41.0,4.40.0
:doc:`hipBLAS <hipblas:index>`,2.3.0,2.3.0,2.2.0
:doc:`hipBLASLt <hipblaslt:index>`,0.10.0,0.10.0,0.8.0
:doc:`hipFFT <hipfft:index>`,1.0.17,1.0.17,1.0.14
:doc:`hipfort <hipfort:index>`,0.5.0,0.5.0,0.4.0
:doc:`hipRAND <hiprand:index>`,2.11.1,2.11.0,2.11.0
:doc:`hipSOLVER <hipsolver:index>`,2.3.0,2.3.0,2.2.0
:doc:`hipSPARSE <hipsparse:index>`,3.1.2,3.1.2,3.1.1
:doc:`hipSPARSELt <hipsparselt:index>`,0.2.2,0.2.2,0.2.1
:doc:`rocALUTION <rocalution:index>`,3.2.1,3.2.1,3.2.0
:doc:`rocBLAS <rocblas:index>`,4.3.0,4.3.0,4.2.0
:doc:`rocFFT <rocfft:index>`,1.0.31,1.0.31,1.0.28
:doc:`rocRAND <rocrand:index>`,3.2.0,3.2.0,3.1.0
:doc:`rocSOLVER <rocsolver:index>`,3.27.0,3.27.0,3.26.0
:doc:`rocSPARSE <rocsparse:index>`,3.3.0,3.3.0,3.2.0
:doc:`rocWMMA <rocwmma:index>`,1.6.0,1.6.0,1.5.0
:doc:`Tensile <tensile:src/index>`,4.42.0,4.42.0,4.41.0
,,,
PRIMITIVES,.. _primitivelibs-support-compatibility-matrix:,,
:doc:`hipCUB <hipcub:index>`,3.3.0,3.2.1,3.1.0
:doc:`hipTensor <hiptensor:index>`,1.4.0,1.3.0,1.2.0
:doc:`rocPRIM <rocprim:index>`,3.3.0,3.2.2,3.1.0
:doc:`rocThrust <rocthrust:index>`,3.3.0,3.1.1,3.0.1
:doc:`hipCUB <hipcub:index>`,3.3.0,3.3.0,3.2.0
:doc:`hipTensor <hiptensor:index>`,1.4.0,1.4.0,1.3.0
:doc:`rocPRIM <rocprim:index>`,3.3.0,3.3.0,3.2.0
:doc:`rocThrust <rocthrust:index>`,3.3.0,3.3.0,3.0.1
,,,
SUPPORT LIBS,,,
`hipother <https://github.com/ROCm/hipother>`_,6.3.42131,6.2.41134,6.1.40091
`rocm-core <https://github.com/ROCm/rocm-core>`_,6.3.0,6.2.4,6.1.0
`ROCT-Thunk-Interface <https://github.com/ROCm/ROCT-Thunk-Interface>`_,N/A [#ROCT-rocr]_,20240607.5.7,20240125.3.30
`hipother <https://github.com/ROCm/hipother>`_,6.3.42133,6.3.42131,6.2.41133
`rocm-core <https://github.com/ROCm/rocm-core>`_,6.3.1,6.3.0,6.2.0
`ROCT-Thunk-Interface <https://github.com/ROCm/ROCT-Thunk-Interface>`_,N/A [#ROCT-rocr]_,N/A [#ROCT-rocr]_,20240607.1.4246
,,,
SYSTEM MGMT TOOLS,.. _tools-support-compatibility-matrix:,,
:doc:`AMD SMI <amdsmi:index>`,24.7.1,24.6.3,24.4.1
:doc:`AMD SMI <amdsmi:index>`,24.7.1,24.7.1,24.6.2
:doc:`ROCm Data Center Tool <rdc:index>`,0.3.0,0.3.0,0.3.0
:doc:`rocminfo <rocminfo:index>`,1.0.0,1.0.0,1.0.0
:doc:`ROCm SMI <rocm_smi_lib:index>`,7.4.0,7.3.0,7.0.0
:doc:`ROCm Validation Suite <rocmvalidationsuite:index>`,1.1.0,1.0.60204,1.0.60100
:doc:`ROCm SMI <rocm_smi_lib:index>`,7.4.0,7.4.0,7.3.0
:doc:`ROCm Validation Suite <rocmvalidationsuite:index>`,1.1.0,1.1.0,1.0.60200
,,,
PERFORMANCE TOOLS,,,
:doc:`ROCm Bandwidth Test <rocm_bandwidth_test:index>`,1.4.0,1.4.0,1.4.0
:doc:`ROCm Compute Profiler <rocprofiler-compute:index>`,3.0.0,2.0.1,N/A
:doc:`ROCm Systems Profiler <rocprofiler-systems:index>`,0.1.0,1.11.2,N/A
:doc:`ROCProfiler <rocprofiler:index>`,2.0.60300,2.0.60204,2.0.60100
:doc:`ROCprofiler-SDK <rocprofiler-sdk:index>`,0.5.0,0.4.0,N/A
:doc:`ROCTracer <roctracer:index>`,4.1.60300,4.1.60204,4.1.60100
:doc:`ROCm Compute Profiler <rocprofiler-compute:index>`,3.0.0,3.0.0,2.0.1
:doc:`ROCm Systems Profiler <rocprofiler-systems:index>`,0.1.0,0.1.0,1.11.2
:doc:`ROCProfiler <rocprofiler:index>`,2.0.60301,2.0.60300,2.0.60200
:doc:`ROCprofiler-SDK <rocprofiler-sdk:index>`,0.5.0,0.5.0,0.4.0
:doc:`ROCTracer <roctracer:index>`,4.1.60301,4.1.60300,4.1.60200
,,,
DEVELOPMENT TOOLS,,,
:doc:`HIPIFY <hipify:index>`,18.0.0.24455,18.0.0.24392,17.0.0.24103
:doc:`ROCm CMake <rocmcmakebuildtools:index>`,0.14.0,0.13.0,0.12.0
:doc:`ROCdbgapi <rocdbgapi:index>`,0.77.0,0.76.0,0.71.0
:doc:`ROCm Debugger (ROCgdb) <rocgdb:index>`,15.2.0,14.2.0,14.1.0
`rocprofiler-register <https://github.com/ROCm/rocprofiler-register>`_,0.4.0,0.4.0,0.3.0
:doc:`HIPIFY <hipify:index>`,18.0.0.24491,18.0.0.24455,18.0.0.24232
:doc:`ROCm CMake <rocmcmakebuildtools:index>`,0.14.0,0.14.0,0.13.0
:doc:`ROCdbgapi <rocdbgapi:index>`,0.77.0,0.77.0,0.76.0
:doc:`ROCm Debugger (ROCgdb) <rocgdb:index>`,15.2.0,15.2.0,14.2.0
`rocprofiler-register <https://github.com/ROCm/rocprofiler-register>`_,0.4.0,0.4.0,0.4.0
:doc:`ROCr Debug Agent <rocr_debug_agent:index>`,2.0.3,2.0.3,2.0.3
,,,
COMPILERS,.. _compilers-support-compatibility-matrix:,,
`clang-ocl <https://github.com/ROCm/clang-ocl>`_,N/A,N/A,0.5.0
:doc:`hipCC <hipcc:index>`,1.1.1,1.1.1,1.0.0
`Flang <https://github.com/ROCm/flang>`_,18.0.0.24455,18.0.0.24392,17.0.0.24103
:doc:`llvm-project <llvm-project:index>`,18.0.0.24455,18.0.0.24392,17.0.0.24103
`OpenMP <https://github.com/ROCm/llvm-project/tree/amd-staging/openmp>`_,18.0.0.24455,18.0.0.24392,17.0.0.24103
`clang-ocl <https://github.com/ROCm/clang-ocl>`_,N/A,N/A,N/A
:doc:`hipCC <hipcc:index>`,1.1.1,1.1.1,1.1.1
`Flang <https://github.com/ROCm/flang>`_,18.0.0.24491,18.0.0.24455,18.0.0.24232
:doc:`llvm-project <llvm-project:index>`,18.0.0.24491,18.0.0.24455,18.0.0.24232
`OpenMP <https://github.com/ROCm/llvm-project/tree/amd-staging/openmp>`_,18.0.0.24491,18.0.0.24455,18.0.0.24232
,,,
RUNTIMES,.. _runtime-support-compatibility-matrix:,,
:doc:`AMD CLR <hip:understand/amd_clr>`,6.3.42131,6.2.41134,6.1.40091
:doc:`HIP <hip:index>`,6.3.42131,6.2.41134,6.1.40091
:doc:`AMD CLR <hip:understand/amd_clr>`,6.3.42133,6.3.42131,6.2.41133
:doc:`HIP <hip:index>`,6.3.42133,6.3.42131,6.2.41133
`OpenCL Runtime <https://github.com/ROCm/clr/tree/develop/opencl>`_,2.0.0,2.0.0,2.0.0
:doc:`ROCr Runtime <rocr-runtime:index>`,1.14.0,1.14.0,1.13.0
.. rubric:: Footnotes
.. [#red-hat94] RHEL 9.4 is supported only on AMD Instinct MI300A.
.. [#oracle89] Oracle Linux is supported only on AMD Instinct MI300X.
.. [#mi300_624] **For ROCm 6.2.4** - MI300X (gfx942) is supported on listed operating systems *except* Ubuntu 22.04.5 [6.8 HWE] and Ubuntu 22.04.4 [6.5 HWE].
.. [#mi300_610] **For ROCm 6.1.0** - MI300A (gfx942) is supported on Ubuntu 22.04.4, RHEL 9.4, RHEL 9.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.4.
.. [#kfd_support] ROCm provides forward and backward compatibility between the Kernel Fusion Driver (KFD) and its user space software for +/- 2 releases. These are the compatibility combinations that are currently supported.
.. [#ROCT-rocr] As of ROCm 6.3.0, the ROCT Thunk Interface is now included as part of the ROCr runtime package.
.. [#mi300x] Oracle Linux and Azure Linux are supported only on AMD Instinct MI300X.
.. [#single-node] Debian 12 is supported only on AMD Instinct MI300X for single-node functionality.
.. [#mi300_620] **For ROCm 6.2.0** - MI300X (gfx942) is supported on listed operating systems *except* Ubuntu 22.04.5 [6.8 HWE] and Ubuntu 22.04.4 [6.5 HWE].
.. [#kfd_support] ROCm provides forward and backward compatibility between the AMD Kernel-mode GPU Driver (KMD) and its user space software for +/- 2 releases. These are the compatibility combinations that are currently supported.
.. [#ROCT-rocr] Starting from ROCm 6.3.0, the ROCT Thunk Interface is included as part of the ROCr runtime package.
.. _OS-kernel-versions:
@@ -166,35 +166,26 @@ Use this lookup table to confirm which operating system and kernel versions are
:stub-columns: 1
`Ubuntu <https://ubuntu.com/about/release-cycle#ubuntu-kernel-release-cycle>`_, 24.04.2, "6.8 GA, 6.11 HWE"
, 24.04.1, "6.8 GA"
, 24.04, "6.8 GA"
,,
`Ubuntu <https://ubuntu.com/about/release-cycle#ubuntu-kernel-release-cycle>`_, 22.04.5, "5.15 GA, 6.8 HWE"
, 22.04.4, "5.15 GA, 6.5 HWE"
, 22.04.3, "5.15 GA, 6.2 HWE"
, 22.04.2, "5.15 GA, 5.19 HWE"
,,
`Ubuntu <https://ubuntu.com/about/release-cycle#ubuntu-kernel-release-cycle>`_, 20.04.6, "5.15 HWE"
, 20.04.5, "5.15 HWE"
,,
`Red Hat Enterprise Linux (RHEL) <https://access.redhat.com/articles/3078#RHEL9>`_, 9.5, 5.14.0
,9.4, 5.14.0
,9.3, 5.14.0
,9.2, 5.14.0
,,
`Red Hat Enterprise Linux (RHEL) <https://access.redhat.com/articles/3078#RHEL8>`_, 8.10, 4.18.0
,8.9, 4.18.0
,8.8, 4.18.0
,,
`CentOS <https://access.redhat.com/articles/3078#RHEL7>`_, 7.9, 3.10
,,
`SUSE Linux Enterprise Server (SLES) <https://www.suse.com/support/kb/doc/?id=000019587#SLE15SP4>`_, 15 SP6, 6.4.0
,15 SP5, 5.14.21
,15 SP4, 5.14.21
,,
`Oracle Linux <https://blogs.oracle.com/scoter/post/oracle-linux-and-unbreakable-enterprise-kernel-uek-releases>`_, 8.10, 5.15.0
,8.9, 5.15.0
`Azure Linux <https://github.com/microsoft/azurelinux/releases>`_, 3.0, 6.6.60
,,
`Debian <https://www.debian.org/download>`_,12, 6.1
`Azure Linux <https://techcommunity.microsoft.com/blog/linuxandopensourceblog/azure-linux-3-0-now-in-preview-on-azure-kubernetes-service-v1-31/4287229>`_,3.0, 6.6
..
Footnotes and ref anchors in below historical tables should be appended with "-past-60", to differentiate from the
@@ -222,7 +213,8 @@ Expand for full historical view of:
.. rubric:: Footnotes
.. [#oracle89-past-60] Oracle Linux is supported only on AMD Instinct MI300X.
.. [#mi300x-past-60] Oracle Linux and Azure Linux are supported only on AMD Instinct MI300X.
.. [#single-node-past-60] Debian 12 is supported only on AMD Instinct MI300X for single-node functionality.
.. [#mi300_624-past-60] **For ROCm 6.2.4** - MI300X (gfx942) is supported on listed operating systems *except* Ubuntu 22.04.5 [6.8 HWE] and Ubuntu 22.04.4 [6.5 HWE].
.. [#mi300_622-past-60] **For ROCm 6.2.2** - MI300X (gfx942) is supported on listed operating systems *except* Ubuntu 22.04.5 [6.8 HWE] and Ubuntu 22.04.4 [6.5 HWE].
.. [#mi300_621-past-60] **For ROCm 6.2.1** - MI300X (gfx942) is supported on listed operating systems *except* Ubuntu 22.04.5 [6.8 HWE] and Ubuntu 22.04.4 [6.5 HWE].
@@ -232,5 +224,5 @@ Expand for full historical view of:
.. [#mi300_610-past-60] **For ROCm 6.1.0** - MI300A (gfx942) is supported on Ubuntu 22.04.4, RHEL 9.4, RHEL 9.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.4.
.. [#mi300_602-past-60] **For ROCm 6.0.2** - MI300A (gfx942) is supported on Ubuntu 22.04.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.3.
.. [#mi300_600-past-60] **For ROCm 6.0.0** - MI300A (gfx942) is supported on Ubuntu 22.04.3, RHEL 8.9, and SLES 15 SP5. MI300X (gfx942) is only supported on Ubuntu 22.04.3.
.. [#kfd_support-past-60] ROCm provides forward and backward compatibility between the Kernel Fusion Driver (KFD) and its user space software for +/- 2 releases. These are the compatibility combinations that are currently supported.
.. [#ROCT-rocr-past-60] As of ROCm 6.3.0, the ROCT Thunk Interface is now included as part of the ROCr runtime package.
.. [#kfd_support-past-60] ROCm provides forward and backward compatibility between the AMD Kernel-mode GPU Driver (KMD) and its user space software for +/- 2 releases. These are the compatibility combinations that are currently supported.
.. [#ROCT-rocr-past-60] Starting from ROCm 6.3.0, the ROCT Thunk Interface is included as part of the ROCr runtime package.


@@ -0,0 +1,663 @@
.. meta::
:description: JAX compatibility
:keywords: GPU, JAX compatibility
*******************************************************************************
JAX compatibility
*******************************************************************************
JAX provides a NumPy-like API, which combines automatic differentiation and the
Accelerated Linear Algebra (XLA) compiler to achieve high-performance machine
learning at scale.
JAX uses composable transformations of Python and NumPy through just-in-time (JIT) compilation,
automatic vectorization, and parallelization. To learn about JAX, including profiling and
optimizations, see the official `JAX documentation
<https://jax.readthedocs.io/en/latest/notebooks/quickstart.html>`_.
ROCm support for JAX is upstreamed, and users can build the official JAX source code with
ROCm support:
- ROCm JAX release:
- Offers AMD-validated and community :ref:`Docker images <jax-docker-compat>` with ROCm and JAX pre-installed.
- ROCm JAX repository: `<https://github.com/ROCm/jax>`__
- See the :doc:`ROCm JAX installation guide <rocm-install-on-linux:install/3rd-party/jax-install>`
to get started.
- Official JAX release:
- Official JAX repository: `<https://github.com/jax-ml/jax>`__
- See the `AMD GPU (Linux) installation section
<https://jax.readthedocs.io/en/latest/installation.html#amd-gpu-linux>`_ in the JAX
documentation.
.. note::
AMD releases official `ROCm JAX Docker images <https://hub.docker.com/r/rocm/jax>`_
quarterly alongside new ROCm releases. These images undergo full AMD testing.
`Community ROCm JAX Docker images <https://hub.docker.com/r/rocm/jax-community>`_
follow upstream JAX releases and use the latest available ROCm version.
.. _jax-docker-compat:
Docker image compatibility
================================================================================
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes ready-made `JAX <https://hub.docker.com/r/rocm/jax/>`_
images with ROCm backends on Docker Hub. The following Docker image tags and
associated inventories are validated for
`ROCm 6.3.1 <https://repo.radeon.com/rocm/apt/6.3.1/>`_. Click the |docker-icon|
icon to view the image on Docker Hub.
.. list-table:: JAX Docker image components
:header-rows: 1
* - Docker image
- JAX
- Linux
- Python
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/jax/rocm6.3.1-jax0.4.31-py3.12/images/sha256-085a0cd5207110922f1fca684933a9359c66d42db6c5aba4760ed5214fdabde0"><i class="fab fa-docker fa-lg"></i> rocm/jax</a>
- `0.4.31 <https://github.com/ROCm/jax/releases/tag/rocm-jax-v0.4.31>`_
- Ubuntu 24.04
- `3.12.7 <https://www.python.org/downloads/release/python-3127/>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/jax/rocm6.3.1-jax0.4.31-py3.10/images/sha256-f88eddad8f47856d8640b694da4da347ffc1750d7363175ab7dc872e82b43324"><i class="fab fa-docker fa-lg"></i> rocm/jax</a>
- `0.4.31 <https://github.com/ROCm/jax/releases/tag/rocm-jax-v0.4.31>`_
- Ubuntu 22.04
- `3.10.14 <https://www.python.org/downloads/release/python-31014/>`_
AMD publishes community `JAX <https://hub.docker.com/r/rocm/jax-community>`_
images with ROCm backends on Docker Hub. The following Docker image tags and
associated inventories are tested for `ROCm 6.2.4 <https://repo.radeon.com/rocm/apt/6.2.4/>`_.
.. list-table:: JAX community Docker image components
:header-rows: 1
* - Docker image
- JAX
- Linux
- Python
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/jax-community/rocm6.2.4-jax0.4.35-py3.12.7/images/sha256-a6032d89c07573b84c44e42c637bf9752b1b7cd2a222d39344e603d8f4c63beb?context=explore"><i class="fab fa-docker fa-lg"></i> rocm/jax-community</a>
- `0.4.35 <https://github.com/ROCm/jax/releases/tag/rocm-jax-v0.4.35>`_
- Ubuntu 22.04
- `3.12.7 <https://www.python.org/downloads/release/python-3127/>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/jax-community/rocm6.2.4-jax0.4.35-py3.11.10/images/sha256-d462f7e445545fba2f3b92234a21beaa52fe6c5f550faabcfdcd1bf53486d991?context=explore"><i class="fab fa-docker fa-lg"></i> rocm/jax-community</a>
- `0.4.35 <https://github.com/ROCm/jax/releases/tag/rocm-jax-v0.4.35>`_
- Ubuntu 22.04
- `3.11.10 <https://www.python.org/downloads/release/python-31110/>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/jax-community/rocm6.2.4-jax0.4.35-py3.10.15/images/sha256-6f2d4d0f529378d9572f0e8cfdcbc101d1e1d335bd626bb3336fff87814e9d60?context=explore"><i class="fab fa-docker fa-lg"></i> rocm/jax-community</a>
- `0.4.35 <https://github.com/ROCm/jax/releases/tag/rocm-jax-v0.4.35>`_
- Ubuntu 22.04
- `3.10.15 <https://www.python.org/downloads/release/python-31015/>`_
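Once a container from one of these images is running, a quick sanity check
confirms that JAX sees the GPU. The following minimal sketch (assuming a
working ROCm install and the ``jax`` package shipped in the image) lists the
detected devices and runs a small computation:

.. code-block:: python

   import jax
   import jax.numpy as jnp

   # On a working ROCm setup this should list one device entry per AMD GPU.
   print(jax.devices())

   # A trivial computation to confirm the backend executes correctly.
   x = jnp.arange(8.0)
   print(jnp.sum(x * 2.0))  # Expected output: 56.0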
Critical ROCm libraries for JAX
================================================================================
The functionality of JAX with ROCm is determined by its underlying library
dependencies. These critical ROCm components affect the capabilities,
performance, and feature set available to developers.
.. list-table::
:header-rows: 1
* - ROCm library
- Version
- Purpose
- Used in
* - `hipBLAS <https://github.com/ROCm/hipBLAS>`_
- 2.3.0
- Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for
matrix and vector operations.
- Matrix multiplication in ``jax.numpy.matmul``, ``jax.lax.dot`` and
``jax.lax.dot_general``; operations like ``jax.numpy.dot`` that involve
vector and matrix computations; and batched matrix multiplications
expressed through ``jax.numpy.einsum`` with matrix-multiplication
patterns.
* - `hipBLASLt <https://github.com/ROCm/hipBLASLt>`_
- 0.10.0
- hipBLASLt is an extension of hipBLAS, providing additional
features like epilogues fused into the matrix multiplication kernel or
use of integer tensor cores.
- Matrix multiplication in ``jax.numpy.matmul`` or ``jax.lax.dot``; the
XLA (Accelerated Linear Algebra) compiler uses hipBLASLt for optimized
matrix operations, mixed-precision support, and hardware-specific
optimizations.
* - `hipCUB <https://github.com/ROCm/hipCUB>`_
- 3.3.0
- Provides a C++ template library for parallel algorithms for reduction,
scan, sort and select.
- Reduction functions (``jax.numpy.sum``, ``jax.numpy.mean``,
``jax.numpy.prod``, ``jax.numpy.max`` and ``jax.numpy.min``), prefix sum
(``jax.numpy.cumsum``, ``jax.numpy.cumprod``) and sorting
(``jax.numpy.sort``, ``jax.numpy.argsort``).
* - `hipFFT <https://github.com/ROCm/hipFFT>`_
- 1.0.17
- Provides GPU-accelerated Fast Fourier Transform (FFT) operations.
- Used in functions like ``jax.numpy.fft``.
* - `hipRAND <https://github.com/ROCm/hipRAND>`_
- 2.11.0
- Provides fast random number generation for GPUs.
- Used in ``jax.random.uniform``, ``jax.random.normal``,
``jax.random.randint`` and ``jax.random.split``.
* - `hipSOLVER <https://github.com/ROCm/hipSOLVER>`_
- 2.3.0
- Provides GPU-accelerated solvers for linear systems, eigenvalues, and
singular value decompositions (SVD).
- Solving linear systems (``jax.numpy.linalg.solve``), matrix
factorizations, SVD (``jax.numpy.linalg.svd``) and eigenvalue problems
(``jax.numpy.linalg.eig``).
* - `hipSPARSE <https://github.com/ROCm/hipSPARSE>`_
- 3.1.2
- Accelerates operations on sparse matrices, such as sparse matrix-vector
or matrix-matrix products.
- Sparse matrix multiplication (``jax.numpy.matmul``), sparse
matrix-vector and matrix-matrix products
(``jax.experimental.sparse.dot``), sparse linear system solvers and
sparse data handling.
* - `hipSPARSELt <https://github.com/ROCm/hipSPARSELt>`_
- 0.2.2
- Accelerates operations on sparse matrices, such as sparse matrix-vector
or matrix-matrix products.
- Sparse matrix multiplication (``jax.numpy.matmul``), sparse
matrix-vector and matrix-matrix products
(``jax.experimental.sparse.dot``) and sparse linear system solvers.
* - `MIOpen <https://github.com/ROCm/MIOpen>`_
- 3.3.0
- Optimized for deep learning primitives such as convolutions, pooling,
normalization, and activation functions.
- Speeds up convolutional neural networks (CNNs), recurrent neural
networks (RNNs), and other layers. Used in operations like
``jax.nn.conv``, ``jax.nn.relu``, and ``jax.nn.batch_norm``.
* - `RCCL <https://github.com/ROCm/rccl>`_
- 2.21.5
- Optimized for multi-GPU communication for operations like all-reduce,
broadcast, and scatter.
- Distributes computations across multiple GPUs with ``pmap`` and
``jax.distributed``. XLA automatically uses RCCL when executing
operations across multiple GPUs on AMD hardware.
* - `rocThrust <https://github.com/ROCm/rocThrust>`_
- 3.3.0
- Provides a C++ template library for parallel algorithms like sorting,
reduction, and scanning.
- Reduction operations like ``jax.numpy.sum``, ``jax.pmap`` for
distributed training (which involves parallel reductions), and
operations like ``jax.numpy.cumsum`` can use rocThrust.
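To make the mapping above concrete, the short sketch below exercises several
of these libraries from ordinary JAX code. Which kernel each call dispatches
to is an XLA implementation detail, so the pairings in the comments are
indicative rather than guaranteed:

.. code-block:: python

   import jax
   import jax.numpy as jnp

   key = jax.random.PRNGKey(0)
   a = jax.random.normal(key, (512, 512))  # random generation (hipRAND)
   spec = jnp.fft.fft(a)                   # FFT (hipFFT)
   gemm = a @ a.T                          # matrix multiply (hipBLAS/hipBLASLt)
   total = jnp.sum(gemm)                   # reduction (hipCUB/rocThrust)
   print(spec.dtype, total)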
Supported and unsupported features
===============================================================================
The following table maps GPU-accelerated JAX modules to their supported
ROCm and JAX versions.
.. list-table::
:header-rows: 1
* - Module
- Description
- Since JAX
- Since ROCm
* - ``jax.numpy``
- Implements the NumPy API, using the primitives in ``jax.lax``.
- 0.1.56
- 5.0.0
* - ``jax.scipy``
- Provides GPU-accelerated and differentiable implementations of many
functions from the SciPy library, leveraging JAX's transformations
(e.g., ``grad``, ``jit``, ``vmap``).
- 0.1.56
- 5.0.0
* - ``jax.lax``
- A library of primitive operations that underpins libraries such as
``jax.numpy``. Transformation rules, such as Jacobian-vector product
(JVP) and batching rules, are typically defined as transformations on
``jax.lax`` primitives.
- 0.1.57
- 5.0.0
* - ``jax.random``
- Provides a number of routines for deterministic generation of sequences
of pseudorandom numbers.
- 0.1.58
- 5.0.0
* - ``jax.sharding``
- Allows defining how arrays are partitioned and distributed across
multiple devices.
- 0.3.20
- 5.1.0
* - ``jax.dlpack``
- For exchanging tensor data between JAX and other libraries that support the
DLPack standard.
- 0.1.57
- 5.0.0
* - ``jax.distributed``
- Enables the scaling of computations across multiple devices on a single
machine or across multiple machines.
- 0.1.74
- 5.0.0
* - ``jax.dtypes``
- Provides utilities for working with and managing data types in JAX
arrays and computations.
- 0.1.66
- 5.0.0
* - ``jax.image``
- Contains image manipulation functions like resize, scale and translation.
- 0.1.57
- 5.0.0
* - ``jax.nn``
- Contains common functions for neural network libraries.
- 0.1.56
- 5.0.0
* - ``jax.ops``
- Computes the minimum, maximum, sum or product within segments of an
array.
- 0.1.57
- 5.0.0
* - ``jax.profiler``
- Contains JAX's tracing and time profiling features.
- 0.1.57
- 5.0.0
* - ``jax.stages``
- Contains interfaces to stages of the compiled execution process.
- 0.3.4
- 5.0.0
* - ``jax.tree``
- Provides utilities for working with tree-like container data structures.
- 0.4.26
- 5.6.0
* - ``jax.tree_util``
- Provides utilities for working with nested data structures, or
``pytrees``.
- 0.1.65
- 5.0.0
* - ``jax.typing``
- Provides JAX-specific static type annotations.
- 0.3.18
- 5.1.0
* - ``jax.extend``
- Provides access to JAX's internal machinery. The ``jax.extend``
module defines a library view of some of JAX's internal components.
- 0.4.15
- 5.5.0
* - ``jax.example_libraries``
- Serves as a collection of example code and libraries that demonstrate
various capabilities of JAX.
- 0.1.74
- 5.0.0
* - ``jax.experimental``
- Namespace for experimental features and APIs that are in development or
are not yet fully stable for production use.
- 0.1.56
- 5.0.0
* - ``jax.lib``
- Set of internal tools and types for bridging between JAX's Python
frontend and its XLA backend.
- 0.4.6
- 5.3.0
* - ``jax_triton``
- Library that integrates the Triton deep learning compiler with JAX.
- jax_triton 0.2.0
- 6.2.4
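The core modules in this table compose: a typical pattern wraps ``jax.numpy``
code in ``jax.jit`` and differentiates it with ``jax.grad``, both of which
lower to ``jax.lax`` primitives. A minimal sketch:

.. code-block:: python

   import jax
   import jax.numpy as jnp

   @jax.jit
   def loss(w, x, y):
       pred = jnp.dot(x, w)              # jax.numpy on the GPU backend
       return jnp.mean((pred - y) ** 2)

   grad_fn = jax.grad(loss)              # reverse-mode autodiff
   x = jnp.ones((4, 3))
   y = jnp.zeros(4)
   w = jnp.zeros(3)
   print(grad_fn(w, x, y))               # gradient of the MSE w.r.t. w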
jax.scipy module
-------------------------------------------------------------------------------
A SciPy-like API for scientific computing.
.. list-table::
:header-rows: 1
* - Module
- Since JAX
- Since ROCm
* - ``jax.scipy.cluster``
- 0.3.11
- 5.1.0
* - ``jax.scipy.fft``
- 0.1.71
- 5.0.0
* - ``jax.scipy.integrate``
- 0.4.15
- 5.5.0
* - ``jax.scipy.interpolate``
- 0.1.76
- 5.0.0
* - ``jax.scipy.linalg``
- 0.1.56
- 5.0.0
* - ``jax.scipy.ndimage``
- 0.1.56
- 5.0.0
* - ``jax.scipy.optimize``
- 0.1.57
- 5.0.0
* - ``jax.scipy.signal``
- 0.1.56
- 5.0.0
* - ``jax.scipy.spatial.transform``
- 0.4.12
- 5.4.0
* - ``jax.scipy.sparse.linalg``
- 0.1.56
- 5.0.0
* - ``jax.scipy.special``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats``
- 0.1.56
- 5.0.0
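As a quick illustration of the SciPy-like API, the sketch below solves a
small symmetric positive definite system with ``jax.scipy.linalg``; on ROCm,
dense factorizations like this are ultimately served by the solver libraries
listed earlier:

.. code-block:: python

   import jax.numpy as jnp
   from jax.scipy import linalg

   A = jnp.array([[3.0, 1.0],
                  [1.0, 2.0]])
   b = jnp.array([9.0, 8.0])
   c = linalg.cholesky(A, lower=True)   # Cholesky factorization of A
   x = linalg.cho_solve((c, True), b)   # solve A @ x = b using the factor
   print(x)                             # Expected output: [2. 3.]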
jax.scipy.stats module
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. list-table::
:header-rows: 1
* - Module
- Since JAX
- Since ROCm
* - ``jax.scipy.stats.bernoulli``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.beta``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.betabinom``
- 0.1.61
- 5.0.0
* - ``jax.scipy.stats.binom``
- 0.4.14
- 5.4.0
* - ``jax.scipy.stats.cauchy``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.chi2``
- 0.1.61
- 5.0.0
* - ``jax.scipy.stats.dirichlet``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.expon``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.gamma``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.gennorm``
- 0.3.15
- 5.2.0
* - ``jax.scipy.stats.geom``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.laplace``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.logistic``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.multinomial``
- 0.3.18
- 5.1.0
* - ``jax.scipy.stats.multivariate_normal``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.nbinom``
- 0.1.72
- 5.0.0
* - ``jax.scipy.stats.norm``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.pareto``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.poisson``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.t``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.truncnorm``
- 0.4.0
- 5.3.0
* - ``jax.scipy.stats.uniform``
- 0.1.56
- 5.0.0
* - ``jax.scipy.stats.vonmises``
- 0.4.2
- 5.3.0
* - ``jax.scipy.stats.wrapcauchy``
- 0.4.20
- 5.6.0
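Most of these distributions expose ``pdf``/``logpdf`` (or ``pmf``) functions
that are differentiable like any other JAX code. A small sketch with the
normal distribution:

.. code-block:: python

   import jax
   import jax.numpy as jnp
   from jax.scipy.stats import norm

   x = jnp.linspace(-2.0, 2.0, 5)
   print(norm.pdf(x, loc=0.0, scale=1.0))          # densities, GPU-backed
   print(jax.grad(lambda v: norm.logpdf(v))(0.5))  # d/dv log N(v; 0, 1) = -v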
jax.extend module
-------------------------------------------------------------------------------
Modules for JAX extensions.
.. list-table::
:header-rows: 1
* - Module
- Since JAX
- Since ROCm
* - ``jax.extend.ffi``
- 0.4.30
- 6.0.0
* - ``jax.extend.linear_util``
- 0.4.17
- 5.6.0
* - ``jax.extend.mlir``
- 0.4.26
- 5.6.0
* - ``jax.extend.random``
- 0.4.15
- 5.5.0
jax.experimental module
-------------------------------------------------------------------------------
Experimental modules and APIs.
.. list-table::
:header-rows: 1
* - Module
- Since JAX
- Since ROCm
* - ``jax.experimental.checkify``
- 0.1.75
- 5.0.0
* - ``jax.experimental.compilation_cache.compilation_cache``
- 0.1.68
- 5.0.0
* - ``jax.experimental.custom_partitioning``
- 0.4.0
- 5.3.0
* - ``jax.experimental.jet``
- 0.1.56
- 5.0.0
* - ``jax.experimental.key_reuse``
- 0.4.26
- 5.6.0
* - ``jax.experimental.mesh_utils``
- 0.1.76
- 5.0.0
* - ``jax.experimental.multihost_utils``
- 0.3.2
- 5.0.0
* - ``jax.experimental.pallas``
- 0.4.15
- 5.5.0
* - ``jax.experimental.pjit``
- 0.1.61
- 5.0.0
* - ``jax.experimental.serialize_executable``
- 0.4.0
- 5.3.0
* - ``jax.experimental.shard_map``
- 0.4.3
- 5.3.0
* - ``jax.experimental.sparse``
- 0.1.75
- 5.0.0
.. list-table::
:header-rows: 1
* - API
- Since JAX
- Since ROCm
* - ``jax.experimental.enable_x64``
- 0.1.60
- 5.0.0
* - ``jax.experimental.disable_x64``
- 0.1.60
- 5.0.0
jax.experimental.pallas module
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Module for Pallas, a JAX extension for custom kernels.
.. list-table::
:header-rows: 1
* - Module
- Since JAX
- Since ROCm
* - ``jax.experimental.pallas.mosaic_gpu``
- 0.4.31
- 6.1.3
* - ``jax.experimental.pallas.tpu``
- 0.4.15
- 5.5.0
* - ``jax.experimental.pallas.triton``
- 0.4.32
- 6.1.3
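Pallas lets you write custom kernels in Python that are lowered through
Triton on GPUs. The following element-wise addition kernel is a minimal
sketch; the Pallas API is experimental and can shift between JAX versions:

.. code-block:: python

   import jax
   import jax.numpy as jnp
   from jax.experimental import pallas as pl

   def add_kernel(x_ref, y_ref, o_ref):
       # Refs are mutable views of the operands inside the kernel.
       o_ref[...] = x_ref[...] + y_ref[...]

   @jax.jit
   def add(x, y):
       return pl.pallas_call(
           add_kernel,
           out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
       )(x, y)

   x = jnp.arange(8.0)
   print(add(x, x))  # [ 0.  2.  4.  6.  8. 10. 12. 14.]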
jax.experimental.sparse module
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Experimental support for sparse matrix operations.
.. list-table::
:header-rows: 1
* - Module
- Since JAX
- Since ROCm
* - ``jax.experimental.sparse.linalg``
- 0.3.15
- 5.2.0
* - ``jax.experimental.sparse.sparsify``
- 0.3.25
- ❌
.. list-table::
:header-rows: 1
* - ``sparse`` data structure API
- Since JAX
- Since ROCm
* - ``jax.experimental.sparse.BCOO``
- 0.1.72
- 5.0.0
* - ``jax.experimental.sparse.BCSR``
- 0.3.20
- 5.1.0
* - ``jax.experimental.sparse.CSR``
- 0.1.75
- 5.0.0
* - ``jax.experimental.sparse.NM``
- 0.4.27
- 5.6.0
* - ``jax.experimental.sparse.COO``
- 0.1.75
- 5.0.0
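``BCOO`` is the most broadly supported of these structures. The sketch below
builds a sparse matrix from a dense one and applies it to a vector:

.. code-block:: python

   import jax.numpy as jnp
   from jax.experimental import sparse

   dense = jnp.array([[0.0, 1.0, 0.0],
                      [2.0, 0.0, 3.0]])
   M = sparse.BCOO.fromdense(dense)  # batched-COO sparse representation
   v = jnp.array([1.0, 1.0, 1.0])
   print(M @ v)                      # sparse matvec; expected [1. 5.]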
Unsupported JAX features
------------------------
The following are GPU-accelerated JAX features not currently supported by
ROCm.
.. list-table::
:header-rows: 1
* - Feature
- Description
- Since JAX
* - Mixed Precision with TF32
- Mixed precision with TF32 is used for matrix multiplications,
convolutions, and other linear algebra operations, particularly in
deep learning workloads like CNNs and transformers.
- 0.2.25
* - RNN support
- Currently only LSTM with double bias is supported with float32 input
and weight.
- 0.3.25
* - XLA int4 support
- 4-bit integer (int4) precision in the XLA compiler.
- 0.4.0
* - ``jax.experimental.sparsify``
- Converts a dense matrix to a sparse matrix representation.
- Experimental
Use cases and recommendations
================================================================================
* The `nanoGPT in JAX <https://rocm.blogs.amd.com/artificial-intelligence/nanoGPT-JAX/README.html>`_
blog explores the implementation and training of a Generative Pre-trained
Transformer (GPT) model in JAX, inspired by Andrej Karpathy's PyTorch-based
nanoGPT. By comparing how essential GPT components, such as self-attention
mechanisms and optimizers, are realized in PyTorch and JAX, the post also
highlights JAX's unique features.
* The `Optimize GPT Training: Enabling Mixed Precision Training in JAX using
ROCm on AMD GPUs <https://rocm.blogs.amd.com/artificial-intelligence/jax-mixed-precision/README.html>`_
blog post provides a comprehensive guide on enhancing the training efficiency
of GPT models by implementing mixed precision techniques in JAX, specifically
tailored for AMD GPUs utilizing the ROCm platform.
* The `Supercharging JAX with Triton Kernels on AMD GPUs <https://rocm.blogs.amd.com/artificial-intelligence/jax-triton/README.html>`_
blog demonstrates how to develop a custom fused dropout-activation kernel for
matrices using Triton, integrate it with JAX, and benchmark its performance
using ROCm.
* The `Distributed fine-tuning with JAX on AMD GPUs <https://rocm.blogs.amd.com/artificial-intelligence/distributed-sft-jax/README.html>`_
blog post outlines the process of fine-tuning a Bidirectional Encoder
Representations from Transformers (BERT)-based large language model (LLM)
using JAX for a text classification task. It discusses techniques for
parallelizing the fine-tuning across multiple AMD GPUs and assesses the
model's performance on a holdout dataset. The fine-tuning used a
BERT-base-cased transformer model and the General Language Understanding
Evaluation (GLUE) benchmark dataset on a multi-GPU setup.
* The `MI300X workload optimization guide <https://rocm.docs.amd.com/en/latest/how-to/tuning-guides/mi300x/workload.html>`_
provides detailed guidance on optimizing workloads for the AMD Instinct MI300X
accelerator using ROCm. The page is aimed at helping users achieve optimal
performance for deep learning and other high-performance computing tasks on
the MI300X GPU.
For more use cases and recommendations, see `ROCm JAX blog posts <https://rocm.blogs.amd.com/blog/tag/jax.html>`_.


@@ -0,0 +1,922 @@
.. meta::
:description: PyTorch compatibility
:keywords: GPU, PyTorch compatibility
********************************************************************************
PyTorch compatibility
********************************************************************************
`PyTorch <https://pytorch.org/>`_ is an open-source tensor library designed for
deep learning. PyTorch on ROCm provides mixed-precision and large-scale training
using `MIOpen <https://github.com/ROCm/MIOpen>`_ and
`RCCL <https://github.com/ROCm/rccl>`_ libraries.
ROCm support for PyTorch is upstreamed into the official PyTorch repository. Due
to independent compatibility considerations, this results in two distinct
release cycles for PyTorch on ROCm:
- ROCm PyTorch release:
- Provides the latest version of ROCm but doesn't immediately support the latest stable PyTorch
version.
- Offers :ref:`Docker images <pytorch-docker-compat>` with ROCm and PyTorch
pre-installed.
- ROCm PyTorch repository: `<https://github.com/ROCm/pytorch>`__
- See the :doc:`ROCm PyTorch installation guide <rocm-install-on-linux:install/3rd-party/pytorch-install>` to get started.
- Official PyTorch release:
- Provides the latest stable version of PyTorch but doesn't immediately support the latest ROCm version.
- Official PyTorch repository: `<https://github.com/pytorch/pytorch>`__
- See the `Nightly and latest stable version installation guide <https://pytorch.org/get-started/locally/>`_
or `Previous versions <https://pytorch.org/get-started/previous-versions/>`_ to get started.
Upstream PyTorch includes an automatic HIPification solution that generates HIP
source code from the CUDA backend. This approach allows PyTorch to support ROCm
without requiring manual code modifications.
Development of ROCm is aligned with the stable release of PyTorch while upstream PyTorch testing uses
the stable release of ROCm to maintain consistency.
.. _pytorch-docker-compat:
Docker image compatibility
================================================================================
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes ready-made `PyTorch <https://hub.docker.com/r/rocm/pytorch>`_
images with ROCm backends on Docker Hub. The following Docker image tags and
associated inventories are validated for `ROCm 6.3.0 <https://repo.radeon.com/rocm/apt/6.3/>`_.
Click the |docker-icon| icon to view the image on Docker Hub.
.. list-table:: PyTorch Docker image components
:header-rows: 1
:class: docker-image-compatibility
* - Docker
- PyTorch
- Ubuntu
- Python
- Apex
- torchvision
- TensorBoard
- MAGMA
- UCX
- OMPI
- OFED
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/pytorch/rocm6.3_ubuntu24.04_py3.12_pytorch_release_2.4.0/images/sha256-98ddf20333bd01ff749b8092b1190ee369a75d3b8c71c2fac80ffdcb1a98d529?context=explore"><i class="fab fa-docker fa-lg"></i></a>
- `2.4.0 <https://github.com/ROCm/pytorch/tree/release/2.4>`_
- 24.04
- `3.12 <https://www.python.org/downloads/release/python-3128/>`_
- `1.4.0 <https://github.com/ROCm/apex/tree/release/1.4.0>`_
- `0.19.0 <https://github.com/pytorch/vision/tree/v0.19.0>`_
- `2.13.0 <https://github.com/tensorflow/tensorboard/tree/2.13>`_
- `master <https://bitbucket.org/icl/magma/src/master/>`_
- `1.10.0 <https://github.com/openucx/ucx/tree/v1.10.0>`_
- `4.0.7 <https://github.com/open-mpi/ompi/tree/v4.0.7>`_
- `5.3-1.0.5.0 <https://content.mellanox.com/ofed/MLNX_OFED-5.3-1.0.5.0/MLNX_OFED_LINUX-5.3-1.0.5.0-ubuntu20.04-x86_64.tgz>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/pytorch/rocm6.3_ubuntu22.04_py3.10_pytorch_release_2.4.0/images/sha256-402c9b4f1a6b5a81c634a1932b56cbe01abb699cfcc7463d226276997c6cf8ea?context=explore"><i class="fab fa-docker fa-lg"></i></a>
- `2.4.0 <https://github.com/ROCm/pytorch/tree/release/2.4>`_
- 22.04
- `3.10 <https://www.python.org/downloads/release/python-31016/>`_
- `1.4.0 <https://github.com/ROCm/apex/tree/release/1.4.0>`_
- `0.19.0 <https://github.com/pytorch/vision/tree/v0.19.0>`_
- `2.13.0 <https://github.com/tensorflow/tensorboard/tree/2.13>`_
- `master <https://bitbucket.org/icl/magma/src/master/>`_
- `1.10.0 <https://github.com/openucx/ucx/tree/v1.10.0>`_
- `4.0.7 <https://github.com/open-mpi/ompi/tree/v4.0.7>`_
- `5.3-1.0.5.0 <https://content.mellanox.com/ofed/MLNX_OFED-5.3-1.0.5.0/MLNX_OFED_LINUX-5.3-1.0.5.0-ubuntu20.04-x86_64.tgz>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/pytorch/rocm6.3_ubuntu22.04_py3.9_pytorch_release_2.4.0/images/sha256-e0608b55d408c3bfe5c19fdd57a4ced3e0eb3a495b74c309980b60b156c526dd?context=explore"><i class="fab fa-docker fa-lg"></i></a>
- `2.4.0 <https://github.com/ROCm/pytorch/tree/release/2.4>`_
- 22.04
- `3.9 <https://www.python.org/downloads/release/python-3918/>`_
- `1.4.0 <https://github.com/ROCm/apex/tree/release/1.4.0>`_
- `0.19.0 <https://github.com/pytorch/vision/tree/v0.19.0>`_
- `2.13.0 <https://github.com/tensorflow/tensorboard/tree/2.13>`_
- `master <https://bitbucket.org/icl/magma/src/master/>`_
- `1.10.0 <https://github.com/openucx/ucx/tree/v1.10.0>`_
- `4.0.7 <https://github.com/open-mpi/ompi/tree/v4.0.7>`_
- `5.3-1.0.5.0 <https://content.mellanox.com/ofed/MLNX_OFED-5.3-1.0.5.0/MLNX_OFED_LINUX-5.3-1.0.5.0-ubuntu20.04-x86_64.tgz>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/pytorch/rocm6.3_ubuntu22.04_py3.10_pytorch_release_2.3.0/images/sha256-652cf25263d05b1de548222970aeb76e60b12de101de66751264709c0d0ff9d8?context=explore"><i class="fab fa-docker fa-lg"></i></a>
- `2.3.0 <https://github.com/ROCm/pytorch/tree/release/2.3>`_
- 22.04
- `3.10 <https://www.python.org/downloads/release/python-31016/>`_
- `1.3.0 <https://github.com/ROCm/apex/tree/release/1.3.0>`_
- `0.18.0 <https://github.com/pytorch/vision/tree/v0.18.0>`_
- `2.13.0 <https://github.com/tensorflow/tensorboard/tree/2.13>`_
- `master <https://bitbucket.org/icl/magma/src/master/>`_
- `1.14.1 <https://github.com/openucx/ucx/tree/v1.14.1>`_
- `4.1.5 <https://github.com/open-mpi/ompi/tree/v4.1.5>`_
- `5.3-1.0.5.0 <https://content.mellanox.com/ofed/MLNX_OFED-5.3-1.0.5.0/MLNX_OFED_LINUX-5.3-1.0.5.0-ubuntu20.04-x86_64.tgz>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/pytorch/rocm6.3_ubuntu22.04_py3.10_pytorch_release_2.2.1/images/sha256-051976f26beab8f9aa65d999e3ad546c027b39240a0cc3ee81b114a9024f2912?context=explore"><i class="fab fa-docker fa-lg"></i></a>
- `2.2.1 <https://github.com/ROCm/pytorch/tree/release/2.2>`_
- 22.04
- `3.10 <https://www.python.org/downloads/release/python-31016/>`_
- `1.2.0 <https://github.com/ROCm/apex/tree/release/1.2.0>`_
- `0.17.1 <https://github.com/pytorch/vision/tree/v0.17.1>`_
- `2.13.0 <https://github.com/tensorflow/tensorboard/tree/2.13>`_
- `master <https://bitbucket.org/icl/magma/src/master/>`_
- `1.14.1 <https://github.com/openucx/ucx/tree/v1.14.1>`_
- `4.1.5 <https://github.com/open-mpi/ompi/tree/v4.1.5>`_
- `5.3-1.0.5.0 <https://content.mellanox.com/ofed/MLNX_OFED-5.3-1.0.5.0/MLNX_OFED_LINUX-5.3-1.0.5.0-ubuntu20.04-x86_64.tgz>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/pytorch/rocm6.3_ubuntu20.04_py3.9_pytorch_release_2.2.1/images/sha256-88c839a364d109d3748c100385bfa100d28090d25118cc723fd0406390ab2f7e?context=explore"><i class="fab fa-docker fa-lg"></i></a>
- `2.2.1 <https://github.com/ROCm/pytorch/tree/release/2.2>`_
- 20.04
- `3.9 <https://www.python.org/downloads/release/python-3921/>`_
- `1.2.0 <https://github.com/ROCm/apex/tree/release/1.2.0>`_
- `0.17.1 <https://github.com/pytorch/vision/tree/v0.17.1>`_
- `2.13.0 <https://github.com/tensorflow/tensorboard/tree/2.13.0>`_
- `master <https://bitbucket.org/icl/magma/src/master/>`_
- `1.10.0 <https://github.com/openucx/ucx/tree/v1.10.0>`_
- `4.0.3 <https://github.com/open-mpi/ompi/tree/v4.0.3>`_
- `5.3-1.0.5.0 <https://content.mellanox.com/ofed/MLNX_OFED-5.3-1.0.5.0/MLNX_OFED_LINUX-5.3-1.0.5.0-ubuntu20.04-x86_64.tgz>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/pytorch/rocm6.3_ubuntu22.04_py3.9_pytorch_release_1.13.1/images/sha256-994424ed07a63113f79dd9aa72159124c00f5fbfe18127151e6658f7d0b6f821?context=explore"><i class="fab fa-docker fa-lg"></i></a>
- `1.13.1 <https://github.com/ROCm/pytorch/tree/release/1.13>`_
- 22.04
- `3.9 <https://www.python.org/downloads/release/python-3921/>`_
- `1.0.0 <https://github.com/ROCm/apex/tree/release/1.0.0>`_
- `0.14.0 <https://github.com/pytorch/vision/tree/v0.14.0>`_
- `2.18.0 <https://github.com/tensorflow/tensorboard/tree/2.18>`_
- `master <https://bitbucket.org/icl/magma/src/master/>`_
- `1.14.1 <https://github.com/openucx/ucx/tree/v1.14.1>`_
- `4.1.5 <https://github.com/open-mpi/ompi/tree/v4.1.5>`_
- `5.3-1.0.5.0 <https://content.mellanox.com/ofed/MLNX_OFED-5.3-1.0.5.0/MLNX_OFED_LINUX-5.3-1.0.5.0-ubuntu20.04-x86_64.tgz>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/pytorch/rocm6.3_ubuntu20.04_py3.9_pytorch_release_1.13.1/images/sha256-7b8139fe40a9aeb4bca3aecd15c22c1fa96e867d93479fa3a24fdeeeeafa1219?context=explore"><i class="fab fa-docker fa-lg"></i></a>
- `1.13.1 <https://github.com/ROCm/pytorch/tree/release/1.13>`_
- 20.04
- `3.9 <https://www.python.org/downloads/release/python-3921/>`_
- `1.0.0 <https://github.com/ROCm/apex/tree/release/1.0.0>`_
- `0.14.0 <https://github.com/pytorch/vision/tree/v0.14.0>`_
- `2.18.0 <https://github.com/tensorflow/tensorboard/tree/2.18>`_
- `master <https://bitbucket.org/icl/magma/src/master/>`_
- `1.10.0 <https://github.com/openucx/ucx/tree/v1.10.0>`_
- `4.0.3 <https://github.com/open-mpi/ompi/tree/v4.0.3>`_
- `5.3-1.0.5.0 <https://content.mellanox.com/ofed/MLNX_OFED-5.3-1.0.5.0/MLNX_OFED_LINUX-5.3-1.0.5.0-ubuntu20.04-x86_64.tgz>`_
Critical ROCm libraries for PyTorch
================================================================================
The functionality of PyTorch with ROCm is determined by its underlying library
dependencies. These critical ROCm components affect the capabilities,
performance, and feature set available to developers.
.. list-table::
:header-rows: 1
* - ROCm library
- Version
- Purpose
- Used in
* - `Composable Kernel <https://github.com/ROCm/composable_kernel>`_
- 1.1.0
- Enables faster execution of core operations like matrix multiplication
(GEMM), convolutions and transformations.
- Speeds up ``torch.permute``, ``torch.view``, ``torch.matmul``,
``torch.mm``, ``torch.bmm``, ``torch.nn.Conv2d``, ``torch.nn.Conv3d``
and ``torch.nn.MultiheadAttention``.
* - `hipBLAS <https://github.com/ROCm/hipBLAS>`_
- 2.3.0
- Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for
matrix and vector operations.
- Supports operations like matrix multiplication, matrix-vector products,
and tensor contractions. Utilized in both dense and batched linear
algebra operations.
* - `hipBLASLt <https://github.com/ROCm/hipBLASLt>`_
- 0.10.0
- hipBLASLt is an extension of the hipBLAS library, providing additional
features like epilogues fused into the matrix multiplication kernel or
use of integer tensor cores.
- It accelerates operations like ``torch.matmul``, ``torch.mm``, and the
matrix multiplications used in convolutional and linear layers.
* - `hipCUB <https://github.com/ROCm/hipCUB>`_
- 3.3.0
- Provides a C++ template library for parallel algorithms for reduction,
scan, sort and select.
- Supports operations like ``torch.sum``, ``torch.cumsum``, ``torch.sort``
and ``torch.topk``. Operations on sparse tensors or tensors with
irregular shapes often involve scanning, sorting, and filtering, which
hipCUB handles efficiently.
* - `hipFFT <https://github.com/ROCm/hipFFT>`_
- 1.0.17
- Provides GPU-accelerated Fast Fourier Transform (FFT) operations.
- Used in functions like the ``torch.fft`` module.
* - `hipRAND <https://github.com/ROCm/hipRAND>`_
- 2.11.0
- Provides fast random number generation for GPUs.
- Used in ``torch.rand``, ``torch.randn`` and stochastic layers like
``torch.nn.Dropout``.
* - `hipSOLVER <https://github.com/ROCm/hipSOLVER>`_
- 2.3.0
- Provides GPU-accelerated solvers for linear systems, eigenvalues, and
singular value decompositions (SVD).
- Supports functions like ``torch.linalg.solve``,
``torch.linalg.eig``, and ``torch.linalg.svd``.
* - `hipSPARSE <https://github.com/ROCm/hipSPARSE>`_
- 3.1.2
- Accelerates operations on sparse matrices, such as sparse matrix-vector
or matrix-matrix products.
- Sparse tensor operations ``torch.sparse``.
* - `hipSPARSELt <https://github.com/ROCm/hipSPARSELt>`_
- 0.2.2
- Accelerates operations on sparse matrices, such as sparse matrix-vector
or matrix-matrix products.
- Sparse tensor operations ``torch.sparse``.
* - `hipTensor <https://github.com/ROCm/hipTensor>`_
- 1.4.0
- Optimizes for high-performance tensor operations, such as contractions.
- Accelerates tensor algebra, especially in deep learning and scientific
computing.
* - `MIOpen <https://github.com/ROCm/MIOpen>`_
- 3.3.0
- Optimizes deep learning primitives such as convolutions, pooling,
normalization, and activation functions.
- Speeds up convolutional neural networks (CNNs), recurrent neural
networks (RNNs), and other layers. Used in operations like
``torch.nn.Conv2d``, ``torch.nn.ReLU``, and ``torch.nn.LSTM``.
* - `MIGraphX <https://github.com/ROCm/AMDMIGraphX>`_
- 2.11.0
- Adds graph-level optimizations, ONNX model support and mixed-precision
support, and enables Ahead-of-Time (AOT) compilation.
- Speeds up inference models and executes ONNX models for compatibility
with other frameworks, including operations like ``torch.nn.Conv2d``,
``torch.nn.ReLU``, and ``torch.nn.LSTM``.
* - `MIVisionX <https://github.com/ROCm/MIVisionX>`_
- 3.1.0
- Optimizes acceleration for computer vision and AI workloads like
preprocessing, augmentation, and inferencing.
- Faster data preprocessing and augmentation pipelines for datasets like
ImageNet or COCO and easy to integrate into PyTorch's ``torch.utils.data``
and ``torchvision`` workflows.
* - `rocAL <https://github.com/ROCm/rocAL>`_
- 2.1.0
- Accelerates the data pipeline by offloading intensive preprocessing and
augmentation tasks. rocAL is part of MIVisionX.
- Easy to integrate into PyTorch's ``torch.utils.data`` and
``torchvision`` data-loading workflows.
* - `RCCL <https://github.com/ROCm/rccl>`_
- 2.21.5
- Optimizes for multi-GPU communication for operations like AllReduce and
Broadcast.
- Distributed data parallel training (``torch.nn.parallel.DistributedDataParallel``).
Handles communication in multi-GPU setups.
* - `rocDecode <https://github.com/ROCm/rocDecode>`_
- 0.8.0
- Provides hardware-accelerated data decoding capabilities, particularly
for image, video, and other dataset formats.
- Can be integrated in ``torch.utils.data``, ``torchvision.transforms``
and ``torch.distributed``.
* - `rocJPEG <https://github.com/ROCm/rocJPEG>`_
- 0.6.0
- Provides hardware-accelerated JPEG image decoding and encoding.
- GPU accelerated ``torchvision.io.decode_jpeg`` and
``torchvision.io.encode_jpeg`` and can be integrated in
``torch.utils.data`` and ``torchvision``.
* - `RPP <https://github.com/ROCm/RPP>`_
- 1.9.1
- Speeds up data augmentation, transformation, and other preprocessing steps.
- Easy to integrate into PyTorch's ``torch.utils.data`` and
``torchvision`` data-loading workflows.
* - `rocThrust <https://github.com/ROCm/rocThrust>`_
- 3.3.0
- Provides a C++ template library for parallel algorithms like sorting,
reduction, and scanning.
- Utilized in backend operations for tensor computations requiring
parallel processing.
* - `rocWMMA <https://github.com/ROCm/rocWMMA>`_
- 1.6.0
- Accelerates warp-level matrix-multiply and matrix-accumulate to speed up matrix
multiplication (GEMM) and accumulation operations with mixed precision
support.
- Linear layers (``torch.nn.Linear``), convolutional layers
(``torch.nn.Conv2d``), attention layers, general tensor operations that
involve matrix products, such as ``torch.matmul``, ``torch.bmm``, and
more.
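As with JAX, these libraries are reached indirectly through ordinary PyTorch
calls. The sketch below touches several of them; the exact dispatch is an
implementation detail of the ROCm backend, so the pairings in the comments
are indicative:

.. code-block:: python

   import torch

   device = "cuda"  # ROCm GPUs are exposed through the CUDA device API
   x = torch.randn(256, 256, device=device)    # random generation (hipRAND)
   y = x @ x.T                                 # GEMM (hipBLAS/hipBLASLt)
   conv = torch.nn.Conv2d(3, 8, 3).to(device)  # convolution (MIOpen)
   img = torch.randn(1, 3, 32, 32, device=device)
   out = conv(img)
   print(y.sum().item(), out.shape)            # reduction (hipCUB/rocThrust)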
Supported and unsupported features
================================================================================
The following section maps GPU-accelerated PyTorch features to their supported
ROCm and PyTorch versions.
torch
--------------------------------------------------------------------------------
`torch <https://pytorch.org/docs/stable/index.html>`_ is the central module of
PyTorch, providing data structures for multi-dimensional tensors and
implementing mathematical operations on them. It also includes utilities for
efficient serialization of tensors and arbitrary data types, along with various
other tools.
Tensor data types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The data type of a tensor is specified using the ``dtype`` attribute or argument, and PyTorch supports a wide range of data types for different use cases.
The following table lists `torch.Tensor <https://pytorch.org/docs/stable/tensors.html>`_'s single data types:
.. list-table::
:header-rows: 1
* - Data type
- Description
- Since PyTorch
- Since ROCm
* - ``torch.float8_e4m3fn``
- 8-bit floating point, e4m3
- 2.3
- 5.5
* - ``torch.float8_e5m2``
- 8-bit floating point, e5m2
- 2.3
- 5.5
* - ``torch.float16`` or ``torch.half``
- 16-bit floating point
- 0.1.6
- 2.0
* - ``torch.bfloat16``
- 16-bit floating point
- 1.6
- 2.6
* - ``torch.float32`` or ``torch.float``
- 32-bit floating point
- 0.1.12_2
- 2.0
* - ``torch.float64`` or ``torch.double``
- 64-bit floating point
- 0.1.12_2
- 2.0
* - ``torch.complex32`` or ``torch.chalf``
- PyTorch provides native support for 32-bit complex numbers
- 1.6
- 2.0
* - ``torch.complex64`` or ``torch.cfloat``
- PyTorch provides native support for 64-bit complex numbers
- 1.6
- 2.0
* - ``torch.complex128`` or ``torch.cdouble``
- PyTorch provides native support for 128-bit complex numbers
- 1.6
- 2.0
* - ``torch.uint8``
- 8-bit integer (unsigned)
- 0.1.12_2
- 2.0
* - ``torch.uint16``
- 16-bit integer (unsigned)
- 2.3
- Not natively supported
* - ``torch.uint32``
- 32-bit integer (unsigned)
- 2.3
- Not natively supported
* - ``torch.uint64``
- 64-bit integer (unsigned)
- 2.3
- Not natively supported
* - ``torch.int8``
- 8-bit integer (signed)
- 1.12
- 5.0
* - ``torch.int16`` or ``torch.short``
- 16-bit integer (signed)
- 0.1.12_2
- 2.0
* - ``torch.int32`` or ``torch.int``
- 32-bit integer (signed)
- 0.1.12_2
- 2.0
* - ``torch.int64`` or ``torch.long``
- 64-bit integer (signed)
- 0.1.12_2
- 2.0
* - ``torch.bool``
- Boolean
- 1.2
- 2.0
* - ``torch.quint8``
- Quantized 8-bit integer (unsigned)
- 1.8
- 5.0
* - ``torch.qint8``
- Quantized 8-bit integer (signed)
- 1.8
- 5.0
* - ``torch.qint32``
- Quantized 32-bit integer (signed)
- 1.8
- 5.0
* - ``torch.quint4x2``
- Quantized 4-bit integer (unsigned)
- 1.8
- 5.0
.. note::
Unsigned types aside from ``uint8`` currently have only limited support in
eager mode (they primarily exist to assist usage with ``torch.compile``).
The :doc:`ROCm precision support page <rocm:reference/precision-support>`
lists the native hardware support for each data type.
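Data types are selected through the ``dtype`` argument or by casting. A short
sketch, including a cast to ``bfloat16``, which the table above lists as
supported on ROCm:

.. code-block:: python

   import torch

   a = torch.ones(4, dtype=torch.float32, device="cuda")
   b = a.to(torch.bfloat16)   # cast to 16-bit brain floating point
   c = torch.zeros(4, dtype=torch.int8, device="cuda")
   print(b.dtype, c.dtype)    # torch.bfloat16 torch.int8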
torch.cuda
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``torch.cuda`` in PyTorch is a module that provides utilities and functions for
managing and utilizing AMD and NVIDIA GPUs. It enables GPU-accelerated
computations, memory management, and efficient execution of tensor operations,
leveraging ROCm and CUDA as the underlying frameworks.
.. list-table::
:header-rows: 1
* - Feature
- Description
- Since PyTorch
- Since ROCm
* - Device management
- Utilities for managing and interacting with GPUs.
- 0.4.0
- 3.8
* - Tensor operations on GPU
- Performs tensor operations such as addition and matrix multiplications on
the GPU.
- 0.4.0
- 3.8
* - Streams and events
- Streams allow overlapping computation and communication for optimized
performance. Events enable synchronization.
- 1.6.0
- 3.8
* - Memory management
- Functions to manage and inspect memory usage like
``torch.cuda.memory_allocated()``, ``torch.cuda.max_memory_allocated()``,
``torch.cuda.memory_reserved()`` and ``torch.cuda.empty_cache()``.
- 0.3.0
- 1.9.2
* - Running process lists of memory management
- Returns a human-readable printout of the running processes and their GPU
memory use for a given device with functions like
``torch.cuda.memory_stats()`` and ``torch.cuda.memory_summary()``.
- 1.8.0
- 4.0
* - Communication collectives
- Set of APIs that enable efficient communication between multiple GPUs,
allowing for distributed computing and data parallelism.
- 1.9.0
- 5.0
* - ``torch.cuda.CUDAGraph``
- Graphs capture sequences of GPU operations to minimize kernel launch
overhead and improve performance.
- 1.10.0
- 5.3
* - TunableOp
- A mechanism that allows certain operations to be more flexible and
optimized for performance. It enables automatic tuning of kernel
configurations and other settings to achieve the best possible
performance based on the specific hardware (GPU) and workload.
- 2.0
- 5.4
* - NVIDIA Tools Extension (NVTX)
- Integration with NVTX for profiling and debugging GPU performance using
NVIDIA's Nsight tools.
- 1.8.0
- ❌
* - Lazy loading NVRTC
- Delays JIT compilation with NVRTC until the code is explicitly needed.
- 1.13.0
- ❌
* - Jiterator (beta)
- Jiterator allows just-in-time compilation of element-wise CUDA/HIP
kernels from Python code strings.
- 1.13.0
- 5.2
.. Need to validate and extend.
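The device-management and memory utilities in this table behave the same on
ROCm as on CUDA. A brief sketch:

.. code-block:: python

   import torch

   print(torch.cuda.is_available())      # True on a working ROCm install
   print(torch.cuda.device_count())
   print(torch.cuda.get_device_name(0))  # reports the AMD GPU name

   x = torch.randn(1024, 1024, device="cuda")
   print(torch.cuda.memory_allocated())  # bytes currently allocated
   del x
   torch.cuda.empty_cache()              # return cached blocks to the driver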
torch.backends.cuda
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``torch.backends.cuda`` is a PyTorch module that provides configuration options
and flags to control the behavior of ROCm or CUDA operations. It is part of the
PyTorch backend configuration system, which allows users to fine-tune how
PyTorch interacts with the ROCm or CUDA environment.
.. list-table::
:header-rows: 1
* - Feature
- Description
- Since PyTorch
- Since ROCm
* - ``cufft_plan_cache``
- Manages caching of GPU FFT plans to optimize repeated FFT computations.
- 1.7.0
- 5.0
* - ``matmul.allow_tf32``
- Enables or disables the use of TensorFloat-32 (TF32) precision for
faster matrix multiplications on GPUs with Tensor Cores.
- 1.10.0
- ❌
* - ``matmul.allow_fp16_reduced_precision_reduction``
- Reduced precision reductions (e.g., with fp16 accumulation type) are
allowed with fp16 GEMMs.
- 2.0
- ❌
* - ``matmul.allow_bf16_reduced_precision_reduction``
- Reduced precision reductions are allowed with bf16 GEMMs.
- 2.0
- ❌
* - ``enable_cudnn_sdp``
- Globally enables cuDNN SDPA's kernels within SDPA.
- 2.0
- ❌
* - ``enable_flash_sdp``
- Globally enables or disables FlashAttention for SDPA.
- 2.1
- ❌
* - ``enable_mem_efficient_sdp``
- Globally enables or disables Memory-Efficient Attention for SDPA.
- 2.1
- ❌
* - ``enable_math_sdp``
- Globally enables or disables the PyTorch C++ implementation within SDPA.
- 2.1
- ❌
.. Need to validate and extend.
torch.backends.cudnn
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Supported ``torch`` options include:
.. list-table::
:header-rows: 1
* - Option
- Description
- Since PyTorch
- Since ROCm
* - ``allow_tf32``
- TensorFloat-32 tensor cores may be used in cuDNN convolutions on NVIDIA
Ampere or newer GPUs.
- 1.12.0
- ❌
* - ``deterministic``
- A bool that, if True, causes cuDNN to only use deterministic
convolution algorithms.
- 1.12.0
- 6.0
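These backend options are plain module attributes. The sketch below enables
deterministic convolution algorithms, listed above as supported on ROCm since
6.0; the TF32 flag is listed as unsupported on ROCm, so it is only read here:

.. code-block:: python

   import torch

   # Restrict the convolution backend to deterministic algorithms.
   torch.backends.cudnn.deterministic = True
   print(torch.backends.cudnn.deterministic)

   # TF32 is not supported on ROCm per the tables above; reading is harmless.
   print(torch.backends.cuda.matmul.allow_tf32)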
Automatic mixed precision: torch.amp
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``torch.amp`` is the PyTorch module that automates the process of using both
16-bit (half-precision, float16) and 32-bit (single-precision, float32)
floating-point types in model training and inference.
.. list-table::
:header-rows: 1
* - Feature
- Description
- Since PyTorch
- Since ROCm
* - Autocasting
- Instances of autocast serve as context managers or decorators that allow
regions of your script to run in mixed precision.
- 1.9
- 2.5
* - Gradient scaling
- To prevent underflow, “gradient scaling” multiplies the network's
loss(es) by a scale factor and invokes a backward pass on the scaled
loss(es). Gradients flowing backward through the network are then
scaled by the same factor. In other words, gradient values have a
larger magnitude, so they don't flush to zero.
- 1.9
- 2.5
* - CUDA op-specific behavior
- These ops always go through autocasting whether they are invoked as part
of a ``torch.nn.Module``, as a function, or as a ``torch.Tensor`` method. If
functions are exposed in multiple namespaces, they go through
autocasting regardless of the namespace.
- 1.9
- 2.5
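A minimal training-step sketch combining autocasting and gradient scaling
follows; the model, optimizer, and data here are placeholders:

.. code-block:: python

   import torch

   model = torch.nn.Linear(64, 8).cuda()
   optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
   scaler = torch.cuda.amp.GradScaler()

   inputs = torch.randn(32, 64, device="cuda")
   targets = torch.randn(32, 8, device="cuda")

   # Run the forward pass in mixed precision.
   with torch.autocast(device_type="cuda", dtype=torch.float16):
       loss = torch.nn.functional.mse_loss(model(inputs), targets)

   scaler.scale(loss).backward()  # scale the loss to avoid fp16 underflow
   scaler.step(optimizer)         # unscales gradients, then steps
   scaler.update()
   optimizer.zero_grad()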
Distributed library features
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The PyTorch distributed library includes a collective of parallelism modules, a
communications layer, and infrastructure for launching and debugging large
training jobs. See :ref:`rocm-for-ai-pytorch-distributed` for more information.
The Distributed Library feature in PyTorch provides tools and APIs for building
and running distributed machine learning workflows. It allows training models
across multiple processes, GPUs, or nodes in a cluster, enabling efficient use
of computational resources and scalability for large-scale tasks.
.. list-table::
:header-rows: 1
* - Feature
- Description
- Since PyTorch
- Since ROCm
* - TensorPipe
- A point-to-point communication library integrated into
PyTorch for distributed training. It is designed to handle tensor data
transfers efficiently between different processes or devices, including
those on separate machines.
- 1.8
- 5.4
* - Gloo
- Designed for multi-machine and multi-GPU setups, enabling
efficient communication and synchronization between processes. Gloo is
one of the default backends for PyTorch's Distributed Data Parallel
(DDP) and RPC frameworks, alongside other backends like NCCL and MPI.
- 1.0
- 2.0
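A minimal sketch of initializing the distributed package follows. It assumes
the process is launched with a tool such as ``torchrun``, which sets the
``RANK``, ``WORLD_SIZE``, and ``MASTER_ADDR``/``MASTER_PORT`` environment
variables; on ROCm, the ``nccl`` backend is provided by RCCL.

.. code-block:: python

   import torch
   import torch.distributed as dist

   dist.init_process_group(backend="nccl")  # RCCL on ROCm; "gloo" also works
   rank = dist.get_rank()
   device = torch.device(f"cuda:{rank % torch.cuda.device_count()}")

   t = torch.ones(1, device=device)
   dist.all_reduce(t)  # sums the tensor across all ranks
   dist.destroy_process_group()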
torch.compiler
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. list-table::
:header-rows: 1
* - Feature
- Description
- Since PyTorch
- Since ROCm
* - ``torch.compiler`` (AOT Autograd)
- Autograd captures not only the user-level code, but also backpropagation,
which results in capturing the backwards pass “ahead-of-time”. This
enables acceleration of both forwards and backwards pass using
``TorchInductor``.
- 2.0
- 5.3
* - ``torch.compiler`` (TorchInductor)
- The default ``torch.compile`` deep learning compiler that generates fast
code for multiple accelerators and backends. You need to use a backend
compiler to make speedups through ``torch.compile`` possible. For AMD,
NVIDIA, and Intel GPUs, it leverages OpenAI Triton as the key building block.
- 2.0
- 5.3
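For example, a minimal sketch of compiling a model with the default
TorchInductor backend, which emits Triton kernels on AMD GPUs; the model and
input shapes are illustrative:

.. code-block:: python

   import torch

   model = torch.nn.Sequential(torch.nn.Linear(128, 128), torch.nn.ReLU()).cuda()
   compiled_model = torch.compile(model)  # TorchInductor is the default backend

   x = torch.randn(16, 128, device="cuda")
   y = compiled_model(x)  # the first call triggers compilation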
torchaudio
--------------------------------------------------------------------------------
The `torchaudio <https://pytorch.org/audio/stable/index.html>`_ library provides
utilities for processing audio data in PyTorch, such as audio loading,
transformations, and feature extraction.
To ensure GPU acceleration with ``torchaudio.transforms``, you need to move the
audio data (waveform tensor) explicitly to the GPU using ``.to('cuda')``.
The following ``torchaudio`` features are GPU-accelerated.
.. list-table::
:header-rows: 1
* - Feature
- Description
- Since torchaudio version
- Since ROCm
* - ``torchaudio.transforms.Spectrogram``
- Generates spectrogram of an input waveform using STFT.
- 0.6.0
- 4.5
* - ``torchaudio.transforms.MelSpectrogram``
- Generates the mel-scale spectrogram of raw audio signals.
- 0.9.0
- 4.5
* - ``torchaudio.transforms.MFCC``
- Extracts MFCC features from raw audio signals.
- 0.9.0
- 4.5
* - ``torchaudio.transforms.Resample``
- Resamples a signal from one frequency to another.
- 0.9.0
- 4.5
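For example, a sketch of running a GPU-accelerated transform; ``audio.wav`` is
a placeholder path:

.. code-block:: python

   import torchaudio

   waveform, sample_rate = torchaudio.load("audio.wav")  # placeholder file

   # Move both the transform and the waveform to the GPU before computing.
   transform = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate).to("cuda")
   mel = transform(waveform.to("cuda"))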
torchvision
--------------------------------------------------------------------------------
The `torchvision <https://pytorch.org/vision/stable/index.html>`_ library
provides datasets, model architectures, and common image transformations for
computer vision.
The following ``torchvision`` features are GPU-accelerated.
.. list-table::
:header-rows: 1
* - Feature
- Description
- Since torchvision version
- Since ROCm
* - ``torchvision.transforms.functional``
- Provides GPU-compatible transformations for image preprocessing like
resize, normalize, rotate and crop.
- 0.2.0
- 4.0
* - ``torchvision.ops``
- GPU-accelerated operations for object detection and segmentation tasks,
  such as ``torchvision.ops.roi_align``, ``torchvision.ops.nms``, and
  ``torchvision.ops.box_convert``.
- 0.6.0
- 3.3
* - ``torchvision.models`` with ``.to('cuda')``
- ``torchvision`` provides several pre-trained models (ResNet, Faster
R-CNN, Mask R-CNN, ...) that can run on CUDA for faster inference and
training.
- 0.1.6
- 2.x
* - ``torchvision.io``
- Enables video decoding and frame extraction using GPU acceleration with NVIDIA's
NVDEC and nvJPEG (rocJPEG) on CUDA-enabled GPUs.
- 0.4.0
- 6.3
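For example, a minimal inference sketch; ``weights=None`` keeps the example
self-contained by skipping the pretrained-weight download:

.. code-block:: python

   import torch
   import torchvision

   model = torchvision.models.resnet50(weights=None).to("cuda").eval()

   batch = torch.rand(8, 3, 224, 224, device="cuda")
   with torch.inference_mode():
       logits = model(batch)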
torchtext
--------------------------------------------------------------------------------
The `torchtext <https://pytorch.org/text/stable/index.html>`_ library provides
utilities for processing and working with text data in PyTorch, including
tokenization, vocabulary management, and text embeddings. torchtext supports
preprocessing pipelines and integration with PyTorch models, simplifying the
implementation of natural language processing (NLP) tasks.
To leverage GPU acceleration in torchtext, you need to move tensors
explicitly to the GPU using ``.to('cuda')``.
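For example, a sketch of tokenizing text and moving the resulting tensor to
the GPU; the toy vocabulary here is illustrative only:

.. code-block:: python

   import torch
   from torchtext.data.utils import get_tokenizer

   tokenizer = get_tokenizer("basic_english")
   tokens = tokenizer("ROCm accelerates PyTorch workloads")

   # A toy token-to-id mapping for illustration; real pipelines use a vocabulary.
   vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
   ids = torch.tensor([vocab[t] for t in tokens]).to("cuda")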
* torchtext does not implement its own kernels. ROCm support is enabled by linking against ROCm libraries.
* Only the official release exists.
torchtune
--------------------------------------------------------------------------------
The `torchtune <https://pytorch.org/torchtune/stable/index.html>`_ library is
used for authoring, fine-tuning, and experimenting with LLMs.
* Usage: It works out of the box, enabling developers to fine-tune LLMs with ROCm-enabled PyTorch.
* Only the official release exists.
torchserve
--------------------------------------------------------------------------------
`torchserve <https://pytorch.org/torchserve/>`_ is a flexible and easy-to-use
tool for serving and scaling PyTorch models in production.
* torchserve does not implement its own kernels. ROCm support is enabled by linking against ROCm libraries.
* Only the official release exists.
torchrec
--------------------------------------------------------------------------------
`torchrec <https://pytorch.org/torchrec/>`_ is a PyTorch domain library for
common sparsity and parallelism primitives needed for large-scale recommender
systems.
* torchrec does not implement its own kernels. ROCm support is enabled by linking against ROCm libraries.
* Only the official release exists.
Unsupported PyTorch features
----------------------------
The following are GPU-accelerated PyTorch features not currently supported by ROCm.
.. list-table::
:widths: 30, 60, 10
:header-rows: 1
* - Feature
- Description
- Since PyTorch
* - APEX batch norm
- Use APEX batch norm instead of PyTorch batch norm.
- 1.6.0
* - ``torch.backends.cuda`` / ``matmul.allow_tf32``
- A bool that controls whether TensorFloat-32 tensor cores may be used in
matrix multiplications.
- 1.7
* - ``torch.cuda`` / NVIDIA Tools Extension (NVTX)
- Integration with NVTX for profiling and debugging GPU performance using
NVIDIA's Nsight tools.
- 1.7.0
* - ``torch.cuda`` / Lazy loading NVRTC
- Delays JIT compilation with NVRTC until the code is explicitly needed.
- 1.8.0
* - ``torch-tensorrt``
- Integrates the TensorRT library for optimizing and deploying PyTorch models.
  ROCm does not have an equivalent library to TensorRT.
- 1.9.0
* - ``torch.backends`` / ``cudnn.allow_tf32``
- TensorFloat-32 tensor cores may be used in cuDNN convolutions.
- 1.10.0
* - ``torch.backends.cuda`` / ``matmul.allow_fp16_reduced_precision_reduction``
- Reduced precision reductions with fp16 accumulation type are
allowed with fp16 GEMMs.
- 2.0
* - ``torch.backends.cuda`` / ``matmul.allow_bf16_reduced_precision_reduction``
- Reduced precision reductions are allowed with bf16 GEMMs.
- 2.0
* - ``torch.nn.functional`` / ``scaled_dot_product_attention``
- Flash attention backend for SDPA to accelerate attention computation in
transformer-based models.
- 2.0
* - ``torch.backends.cuda`` / ``enable_cudnn_sdp``
- Globally enables cuDNN SDPA's kernels within SDPA.
- 2.0
* - ``torch.backends.cuda`` / ``enable_flash_sdp``
- Globally enables or disables FlashAttention for SDPA.
- 2.1
* - ``torch.backends.cuda`` / ``enable_mem_efficient_sdp``
- Globally enables or disables Memory-Efficient Attention for SDPA.
- 2.1
* - ``torch.backends.cuda`` / ``enable_math_sdp``
- Globally enables or disables the PyTorch C++ implementation within SDPA.
- 2.1
* - Dynamic parallelism
- PyTorch itself does not directly expose dynamic parallelism as a core
  feature. Dynamic parallelism allows GPU threads to launch additional
  threads, which can be accessed using custom operations via the
  ``torch.utils.cpp_extension`` module.
- Not a core feature
* - Unified memory support in PyTorch
- Unified Memory is not directly exposed in PyTorch's core API, but it can be
  used effectively through custom CUDA extensions or advanced workflows.
- Not a core feature
Use cases and recommendations
================================================================================
* :doc:`Using ROCm for AI: training a model </how-to/rocm-for-ai/training/train-a-model>` provides
guidance on how to leverage the ROCm platform for training AI models. It covers the steps, tools, and best practices
for optimizing training workflows on AMD GPUs using PyTorch features.
* :doc:`Single-GPU fine-tuning and inference </how-to/rocm-for-ai/fine-tuning/single-gpu-fine-tuning-and-inference>`
describes and demonstrates how to use the ROCm platform for the fine-tuning and inference of
machine learning models, particularly large language models (LLMs), on systems with a single AMD
Instinct MI300X accelerator. This page provides a detailed guide for setting up, optimizing, and
executing fine-tuning and inference workflows in such environments.
* :doc:`Multi-GPU fine-tuning and inference optimization </how-to/rocm-for-ai/fine-tuning/multi-gpu-fine-tuning-and-inference>`
describes and demonstrates the fine-tuning and inference of machine learning models on systems
with multiple MI300X accelerators.
* The :doc:`Instinct MI300X workload optimization guide </how-to/rocm-for-ai/inference-optimization/workload>` provides detailed
guidance on optimizing workloads for the AMD Instinct MI300X accelerator using ROCm. This guide is aimed at helping
users achieve optimal performance for deep learning and other high-performance computing tasks on the MI300X
accelerator.
* The :doc:`Inception with PyTorch documentation </conceptual/ai-pytorch-inception>`
describes how PyTorch integrates with ROCm for AI workloads. It outlines the use of PyTorch on the ROCm platform and
focuses on how to efficiently leverage AMD GPU hardware for training and inference tasks in AI applications.
For more use cases and recommendations, see `ROCm PyTorch blog posts <https://rocm.blogs.amd.com/blog/tag/pytorch.html>`_.


@@ -0,0 +1,489 @@
.. meta::
:description: TensorFlow compatibility
:keywords: GPU, TensorFlow compatibility
*******************************************************************************
TensorFlow compatibility
*******************************************************************************
`TensorFlow <https://www.tensorflow.org/>`_ is an open-source library for
solving machine learning, deep learning, and AI problems. It can solve many
problems across different sectors and industries but primarily focuses on
neural network training and inference. It is one of the most popular and
in-demand frameworks and is very active in open-source contribution and
development.
The `official TensorFlow repository <http://github.com/tensorflow/tensorflow>`_
includes full ROCm support. AMD maintains a TensorFlow `ROCm repository
<http://github.com/rocm/tensorflow-upstream>`_ in order to quickly add bug
fixes, updates, and support for the latest ROCm versions.
- ROCm TensorFlow release:
- Offers :ref:`Docker images <tensorflow-docker-compat>` with
ROCm and TensorFlow pre-installed.
- ROCm TensorFlow repository: `<https://github.com/ROCm/tensorflow-upstream>`_
- See the :doc:`ROCm TensorFlow installation guide <rocm-install-on-linux:install/3rd-party/tensorflow-install>`
to get started.
- Official TensorFlow release:
- Official TensorFlow repository: `<https://github.com/tensorflow/tensorflow>`_
- See the `TensorFlow API versions <https://www.tensorflow.org/versions>`_ list.
.. note::
The official TensorFlow documentation does not cover ROCm support. Use the
ROCm documentation for installation instructions for TensorFlow on ROCm.
See :doc:`rocm-install-on-linux:install/3rd-party/tensorflow-install`.
.. _tensorflow-docker-compat:
Docker image compatibility
===============================================================================
.. |docker-icon| raw:: html
<i class="fab fa-docker"></i>
AMD validates and publishes ready-made `TensorFlow
<https://hub.docker.com/r/rocm/tensorflow>`_ images with ROCm backends on
Docker Hub. The following Docker image tags and associated inventories are
validated for `ROCm 6.3.1 <https://repo.radeon.com/rocm/apt/6.3.1/>`_. Click
the |docker-icon| icon to view the image on Docker Hub.
.. list-table:: TensorFlow Docker image components
:header-rows: 1
* - Docker image
- TensorFlow
- Dev
- Python
- TensorBoard
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/tensorflow/rocm6.3.1-py3.12-tf2.17.0-dev/images/sha256-804121ee4985718277ba7dcec53c57bdade130a1ef42f544b6c48090ad379c17"><i class="fab fa-docker fa-lg"></i> rocm/tensorflow</a>
- `tensorflow-rocm 2.17.0 <https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3/tensorflow_rocm-2.17.0-cp312-cp312-manylinux_2_28_x86_64.whl>`_
- dev
- `Python 3.12 <https://www.python.org/downloads/release/python-3124/>`_
- `TensorBoard 2.17.1 <https://github.com/tensorflow/tensorboard/tree/2.17.1>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/tensorflow/rocm6.3.1-py3.10-tf2.17.0-dev/images/sha256-776837ffa945913f6c466bfe477810a11453d21d5b6afb200be1c36e48fbc08e"><i class="fab fa-docker fa-lg"></i> rocm/tensorflow</a>
- `tensorflow-rocm 2.17.0 <https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3/tensorflow_rocm-2.17.0-cp310-cp310-manylinux_2_28_x86_64.whl>`_
- dev
- `Python 3.10 <https://www.python.org/downloads/release/python-31012/>`_
- `TensorBoard 2.17.0 <https://github.com/tensorflow/tensorboard/tree/2.17.0>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/tensorflow/rocm6.3.1-py3.12-tf2.16.2-dev/images/sha256-c793e1483e30809c3c28fc5d7805bedc033c73da224f839fff370717cb100944"><i class="fab fa-docker fa-lg"></i> rocm/tensorflow</a>
- `tensorflow-rocm 2.16.2 <https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3/tensorflow_rocm-2.16.2-cp312-cp312-manylinux_2_28_x86_64.whl>`_
- dev
- `Python 3.12 <https://www.python.org/downloads/release/python-3124/>`_
- `TensorBoard 2.16.2 <https://github.com/tensorflow/tensorboard/tree/2.16.2>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/tensorflow/rocm6.3.1-py3.10-tf2.16.0-dev/images/sha256-263e78414ae85d7bcd52a025a94131d0a279872a45ed632b9165336dfdcd4443"><i class="fab fa-docker fa-lg"></i> rocm/tensorflow</a>
- `tensorflow-rocm 2.16.2 <https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3/tensorflow_rocm-2.16.2-cp310-cp310-manylinux_2_28_x86_64.whl>`_
- dev
- `Python 3.10 <https://www.python.org/downloads/release/python-31012/>`_
- `TensorBoard 2.16.2 <https://github.com/tensorflow/tensorboard/tree/2.16.2>`_
* - .. raw:: html
<a href="https://hub.docker.com/layers/rocm/tensorflow/rocm6.3.1-py3.10-tf2.15.0-dev/images/sha256-479046a8477ca701a9494a813ab17e8ab4f6baa54641e65dc8d07629f1e6a880"><i class="fab fa-docker fa-lg"></i> rocm/tensorflow</a>
- `tensorflow-rocm 2.15.1 <https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3/tensorflow_rocm-2.15.1-cp310-cp310-manylinux_2_28_x86_64.whl>`_
- dev
- `Python 3.10 <https://www.python.org/downloads/release/python-31012/>`_
- `TensorBoard 2.15.2 <https://github.com/tensorflow/tensorboard/tree/2.15.2>`_
Critical ROCm libraries for TensorFlow
===============================================================================
TensorFlow depends on multiple components, and the supported features of those
components can affect the TensorFlow ROCm supported feature set. The versions
in the following table refer to the version of each ROCm library at which it
was first introduced as a TensorFlow dependency.
.. list-table::
:widths: 25, 10, 35, 30
:header-rows: 1
* - ROCm library
- Version
- Purpose
- Used in
* - `hipBLAS <https://github.com/ROCm/hipBLAS>`_
- 2.3.0
- Provides GPU-accelerated Basic Linear Algebra Subprograms (BLAS) for
matrix and vector operations.
- Accelerates operations like ``tf.matmul``, ``tf.linalg.matmul``, and
other matrix multiplications commonly used in neural network layers.
* - `hipBLASLt <https://github.com/ROCm/hipBLASLt>`_
- 0.10.0
- Extends hipBLAS with additional optimizations like fused kernels and
integer tensor cores.
- Optimizes matrix multiplications and linear algebra operations used in
layers like dense, convolutional, and RNNs in TensorFlow.
* - `hipCUB <https://github.com/ROCm/hipCUB>`_
- 3.3.0
- Provides a C++ template library for parallel algorithms for reduction,
scan, sort and select.
- Supports operations like ``tf.reduce_sum``, ``tf.cumsum``, ``tf.sort``
and other tensor operations in TensorFlow, especially those involving
scanning, sorting, and filtering.
* - `hipFFT <https://github.com/ROCm/hipFFT>`_
- 1.0.17
- Accelerates Fast Fourier Transforms (FFT) for signal processing tasks.
- Used for operations like signal processing, image filtering, and
certain types of neural networks requiring FFT-based transformations.
* - `hipSOLVER <https://github.com/ROCm/hipSOLVER>`_
- 2.3.0
- Provides GPU-accelerated direct linear solvers for dense and sparse
systems.
- Optimizes linear algebra functions such as solving systems of linear
equations, often used in optimization and training tasks.
* - `hipSPARSE <https://github.com/ROCm/hipSPARSE>`_
- 3.1.2
- Optimizes sparse matrix operations for efficient computations on sparse
data.
- Accelerates sparse matrix operations in models with sparse weight
matrices or activations, commonly used in neural networks.
* - `MIOpen <https://github.com/ROCm/MIOpen>`_
- 3.3.0
- Provides optimized deep learning primitives such as convolutions, pooling,
  normalization, and activation functions.
- Speeds up convolutional neural networks (CNNs) and other layers. Used
in TensorFlow for layers like ``tf.nn.conv2d``, ``tf.nn.relu``, and
``tf.nn.lstm_cell``.
* - `RCCL <https://github.com/ROCm/rccl>`_
- 2.21.5
- Optimizes multi-GPU communication for operations like AllReduce and
Broadcast.
- Distributed data parallel training (``tf.distribute.MirroredStrategy``).
Handles communication in multi-GPU setups.
* - `rocThrust <https://github.com/ROCm/rocThrust>`_
- 3.3.0
- Provides a C++ template library for parallel algorithms like sorting,
reduction, and scanning.
- Reduction operations like ``tf.reduce_sum``, ``tf.cumsum`` (computing the
  cumulative sum of elements along a given axis), or ``tf.unique`` (finding
  the unique elements in a tensor) can use rocThrust.
Supported and unsupported features
===============================================================================
The following section maps supported data types and GPU-accelerated TensorFlow
features to their minimum supported ROCm and TensorFlow versions.
Data types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The data type of a tensor is specified using the ``dtype`` attribute or
argument, and TensorFlow supports a wide range of data types for different use
cases.
The basic, single data types of `tf.dtypes <https://www.tensorflow.org/api_docs/python/tf/dtypes>`_
are as follows:
.. list-table::
:header-rows: 1
* - Data type
- Description
- Since TensorFlow
- Since ROCm
* - ``bfloat16``
- 16-bit bfloat (brain floating point).
- 1.0.0
- 1.7
* - ``bool``
- Boolean.
- 1.0.0
- 1.7
* - ``complex128``
- 128-bit complex.
- 1.0.0
- 1.7
* - ``complex64``
- 64-bit complex.
- 1.0.0
- 1.7
* - ``double``
- 64-bit (double precision) floating-point.
- 1.0.0
- 1.7
* - ``float16``
- 16-bit (half precision) floating-point.
- 1.0.0
- 1.7
* - ``float32``
- 32-bit (single precision) floating-point.
- 1.0.0
- 1.7
* - ``float64``
- 64-bit (double precision) floating-point.
- 1.0.0
- 1.7
* - ``half``
- 16-bit (half precision) floating-point.
- 2.0.0
- 2.0
* - ``int16``
- Signed 16-bit integer.
- 1.0.0
- 1.7
* - ``int32``
- Signed 32-bit integer.
- 1.0.0
- 1.7
* - ``int64``
- Signed 64-bit integer.
- 1.0.0
- 1.7
* - ``int8``
- Signed 8-bit integer.
- 1.0.0
- 1.7
* - ``qint16``
- Signed quantized 16-bit integer.
- 1.0.0
- 1.7
* - ``qint32``
- Signed quantized 32-bit integer.
- 1.0.0
- 1.7
* - ``qint8``
- Signed quantized 8-bit integer.
- 1.0.0
- 1.7
* - ``quint16``
- Unsigned quantized 16-bit integer.
- 1.0.0
- 1.7
* - ``quint8``
- Unsigned quantized 8-bit integer.
- 1.0.0
- 1.7
* - ``resource``
- Handle to a mutable, dynamically allocated resource.
- 1.0.0
- 1.7
* - ``string``
- Variable-length string, represented as byte array.
- 1.0.0
- 1.7
* - ``uint16``
- Unsigned 16-bit (word) integer.
- 1.0.0
- 1.7
* - ``uint32``
- Unsigned 32-bit (dword) integer.
- 1.5.0
- 1.7
* - ``uint64``
- Unsigned 64-bit (qword) integer.
- 1.5.0
- 1.7
* - ``uint8``
- Unsigned 8-bit (byte) integer.
- 1.0.0
- 1.7
* - ``variant``
- Data of arbitrary type (known at runtime).
- 1.4.0
- 1.7
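For example, a sketch of constructing tensors with explicit ``dtype`` values
and converting between them:

.. code-block:: python

   import tensorflow as tf

   x = tf.constant([1.0, 2.0, 3.0], dtype=tf.bfloat16)
   y = tf.cast(x, tf.float32)  # convert between supported dtypes
   print(y.dtype)  # <dtype: 'float32'>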
Features
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This table provides an overview of key features in TensorFlow and their
availability in ROCm.
.. list-table::
:header-rows: 1
* - Module
- Description
- Since TensorFlow
- Since ROCm
* - ``tf.linalg`` (Linear Algebra)
- Operations for matrix and tensor computations, such as
``tf.linalg.matmul`` (matrix multiplication), ``tf.linalg.inv``
(matrix inversion) and ``tf.linalg.cholesky`` (Cholesky decomposition).
These leverage GPUs for high-performance linear algebra operations.
- 1.4
- 1.8.2
* - ``tf.nn`` (Neural Network Operations)
- GPU-accelerated building blocks for deep learning models, such as 2D
convolutions with ``tf.nn.conv2d``, max pooling operations with
``tf.nn.max_pool``, activation functions like ``tf.nn.relu`` or softmax
for output layers with ``tf.nn.softmax``.
- 1.0
- 1.8.2
* - ``tf.image`` (Image Processing)
- GPU-accelerated functions for image preprocessing and augmentation,
  such as resizing images with ``tf.image.resize``, flipping images
  horizontally with ``tf.image.flip_left_right``, and randomly adjusting
  image brightness with ``tf.image.random_brightness``.
- 1.1
- 1.8.2
* - ``tf.keras`` (High-Level API)
- GPU acceleration for Keras layers and models, including dense layers
(``tf.keras.layers.Dense``), convolutional layers
(``tf.keras.layers.Conv2D``) and recurrent layers
(``tf.keras.layers.LSTM``).
- 1.4
- 1.8.2
* - ``tf.math`` (Mathematical Operations)
- GPU-accelerated mathematical operations, such as sum across dimensions
with ``tf.math.reduce_sum``, elementwise exponentiation with
``tf.math.exp`` and sigmoid activation (``tf.math.sigmoid``).
- 1.5
- 1.8.2
* - ``tf.signal`` (Signal Processing)
- Functions for spectral analysis and signal transformations.
- 1.13
- 2.1
* - ``tf.data`` (Data Input Pipeline)
- GPU-accelerated data preprocessing for efficient input pipelines, such as
  prefetching with ``tf.data.experimental.AUTOTUNE`` and GPU-enabled
  transformations like ``map`` and ``batch``.
- 1.4
- 1.8.2
* - ``tf.distribute`` (Distributed Training)
- Enables scaling computations across multiple devices on a single
  machine or across multiple machines.
- 1.13
- 2.1
* - ``tf.random`` (Random Number Generation)
- GPU-accelerated random number generation.
- 1.12
- 1.9.2
* - ``tf.TensorArray`` (Dynamic Array Operations)
- Enables dynamic tensor manipulation on GPUs.
- 1.0
- 1.8.2
* - ``tf.sparse`` (Sparse Tensor Operations)
- GPU-accelerated sparse matrix manipulations.
- 1.9
- 1.9.0
* - ``tf.experimental.numpy``
- GPU-accelerated NumPy-like API for numerical computations.
- 2.4
- 4.1.1
* - ``tf.RaggedTensor``
- Handling of variable-length sequences and ragged tensors with GPU
support.
- 1.13
- 2.1
* - ``tf.function`` with XLA (Accelerated Linear Algebra)
- Compiles functions with XLA (for example, ``tf.function(jit_compile=True)``)
  for GPU-accelerated execution.
- 1.14
- 2.4
* - ``tf.quantization``
- Quantized operations for inference, accelerated on GPUs.
- 1.12
- 1.9.2
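For example, a minimal sketch that verifies GPU visibility and runs an
accelerated matrix multiplication; the tensor shapes are illustrative:

.. code-block:: python

   import tensorflow as tf

   print(tf.config.list_physical_devices("GPU"))  # ROCm GPUs appear here

   a = tf.random.normal([1024, 1024])
   b = tf.random.normal([1024, 1024])
   c = tf.linalg.matmul(a, b)  # placed on the GPU when one is available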
Distributed library features
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Enables developers to scale computations across multiple devices on a single machine or
across multiple machines.
.. list-table::
:header-rows: 1
* - Feature
- Description
- Since TensorFlow
- Since ROCm
* - ``MultiWorkerMirroredStrategy``
- Synchronous training across multiple workers using mirrored variables.
- 2.0
- 3.0
* - ``MirroredStrategy``
- Synchronous training across multiple GPUs on one machine.
- 1.5
- 2.5
* - ``TPUStrategy``
- Efficiently trains models on Google TPUs.
- 1.9
- ❌
* - ``ParameterServerStrategy``
- Asynchronous training using parameter servers for variable management.
- 2.1
- 4.0
* - ``CentralStorageStrategy``
- Keeps variables on a single device and performs computation on multiple
devices.
- 2.3
- 4.1
* - ``CollectiveAllReduceStrategy``
- Synchronous training across multiple devices and hosts.
- 1.14
- 3.5
* - Distribution Strategies API
- High-level API to simplify distributed training configuration and
execution.
- 1.10
- 3.0
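For example, a minimal sketch using ``MirroredStrategy`` to replicate a Keras
model across the visible GPUs; the model definition is a placeholder:

.. code-block:: python

   import tensorflow as tf

   strategy = tf.distribute.MirroredStrategy()  # one replica per visible GPU
   print("Replicas:", strategy.num_replicas_in_sync)

   with strategy.scope():
       model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
       model.compile(optimizer="adam", loss="mse")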
Unsupported TensorFlow features
===============================================================================
The following are GPU-accelerated TensorFlow features not currently supported by
ROCm.
.. list-table::
:header-rows: 1
* - Feature
- Description
- Since TensorFlow
* - Mixed Precision with TF32
- Mixed precision with TF32 is used for matrix multiplications,
convolutions, and other linear algebra operations, particularly in
deep learning workloads like CNNs and transformers.
- 2.4
* - ``tf.distribute.TPUStrategy``
- Efficiently trains models on Google TPUs.
- 1.9
Use cases and recommendations
===============================================================================
* The `Training a Neural Collaborative Filtering (NCF) Recommender on an AMD
GPU <https://rocm.blogs.amd.com/artificial-intelligence/ncf/README.html>`_
blog post discusses training an NCF recommender system using TensorFlow. It
explains how NCF improves traditional collaborative filtering methods by
leveraging neural networks to model non-linear user-item interactions. The
post outlines the implementation using the recommenders library, focusing on
the use of implicit data (for example, user interactions like viewing or
purchasing) and how it addresses challenges like the lack of negative values.
* The `Creating a PyTorch/TensorFlow code environment on AMD GPUs
<https://rocm.blogs.amd.com/software-tools-optimization/pytorch-tensorflow-env/README.html>`_
blog post provides instructions for creating a machine learning environment
for PyTorch and TensorFlow on AMD GPUs using ROCm. It covers steps like
installing the libraries, cloning code repositories, installing dependencies,
and troubleshooting potential issues with CUDA-based code. Additionally, it
explains how to HIPify code (port CUDA code to HIP) and manage Docker images
for a better experience on AMD GPUs. This guide aims to help data scientists
and ML practitioners adapt their code for AMD GPUs.
For more use cases and recommendations, see the `ROCm TensorFlow blog posts <https://rocm.blogs.amd.com/blog/tag/tensorflow.html>`_.


@@ -1,156 +0,0 @@
.. meta::
:description: How ROCm uses PCIe atomics
:keywords: PCIe, PCIe atomics, atomics, BAR memory, AMD, ROCm
*****************************************************************************
How ROCm uses PCIe atomics
*****************************************************************************
ROCm PCIe feature and overview of BAR memory
================================================================
ROCm is an extension of HSA platform architecture, so it shares the queuing model, memory model,
signaling and synchronization protocols. Platform atomics are integral to perform queuing and
signaling memory operations where there may be multiple-writers across CPU and GPU agents.
The full list of HSA system architecture platform requirements are here:
`HSA Sys Arch Features <http://hsafoundation.com/wp-content/uploads/2021/02/HSA-SysArch-1.2.pdf>`_.
AMD ROCm Software uses the new PCI Express 3.0 (Peripheral Component Interconnect Express [PCIe]
3.0) features for atomic read-modify-write transactions which extends inter-processor synchronization
mechanisms to IO to support the defined set of HSA capabilities needed for queuing and signaling
memory operations.
The new PCIe atomic operations operate as completers for ``CAS`` (Compare and Swap), ``FetchADD``,
and ``SWAP`` atomics. The atomic operations are initiated by I/O devices that support 32-bit, 64-bit,
and 128-bit operands; the target addresses have to be naturally aligned to the operation sizes.
Platform atomics are used in ROCm in the following ways:
* Update HSA queue's read_dispatch_id: 64 bit atomic add used by the command processor on the
GPU agent to update the packet ID it processed.
* Update HSA queue's write_dispatch_id: 64 bit atomic add used by the CPU and GPU agent to
support multi-writer queue insertions.
* Update HSA Signals -- 64bit atomic ops are used for CPU & GPU synchronization.
The PCIe 3.0 atomic operations feature allows atomic transactions to be requested by, routed through
and completed by PCIe components. Routing and completion does not require software support.
Component support for each is detectable via the Device Capabilities 2 (DevCap2) register. Upstream
bridges need to have atomic operations routing enabled, or the atomic operations will fail even though
the PCIe endpoint and PCIe I/O devices have the capability to perform atomic operations.
To enable atomic operations routing between two or more Root Ports, each associated Root Port
must indicate that capability via the atomic operations routing supported bit in the DevCap2 register.
If your system has a PCI Express switch, it needs to support atomic operations routing. Atomic
operations requests are permitted only if a component's ``DEVCTL2.ATOMICOP_REQUESTER_ENABLE``
field is set. These requests can only be serviced if the upstream components support atomic operation
completion and/or route them to a component which does. A routing supported bit value of 1 means
routing is supported; a value of 0 means routing is not supported.
An atomic operation is a non-posted transaction supporting 32-bit and 64-bit address formats; there
must be a Completion response containing the result of the operation. Errors associated with the
operation (an uncorrectable error accessing the target location or carrying out the atomic operation)
are signaled to the requester by setting the Completion Status field in the completion descriptor to
Completer Abort (CA) or Unsupported Request (UR).
To understand more about how PCIe atomic operations work, see
`PCIe atomics <https://pcisig.com/specifications/pciexpress/specifications/ECN_Atomic_Ops_080417.pdf>`_
`Linux Kernel Patch to pci_enable_atomic_request <https://patchwork.kernel.org/project/linux-pci/patch/1443110390-4080-1-git-send-email-jay@jcornwall.me/>`_
There are also a number of papers which talk about these new capabilities:
* `Atomic Read Modify Write Primitives by Intel <https://www.intel.es/content/dam/doc/white-paper/atomic-read-modify-write-primitives-i-o-devices-paper.pdf>`_
* `PCI express 3 Accelerator White paper by Intel <https://www.intel.sg/content/dam/doc/white-paper/pci-express3-accelerator-white-paper.pdf>`_
* `PCIe Generation 4 Base Specification includes atomic operations <https://astralvx.com/storage/2020/11/PCI_Express_Base_4.0_Rev0.3_February19-2014.pdf>`_
* `Xilinx PCIe Ultrascale White paper <https://docs.xilinx.com/v/u/8OZSA2V1b1LLU2rRCDVGQw>`_
Other I/O devices with PCIe atomics support:
* Mellanox ConnectX-5 InfiniBand Card
* Cray Aries Interconnect
* Xilinx 7 Series Devices
Future bus technologies with richer I/O atomic operation support:
* GenZ
New PCIe endpoints with atomics support, beyond AMD Ryzen and EPYC CPUs and Intel Haswell or
newer CPUs with PCIe Generation 3.0 support:
* Mellanox Bluefield SOC
* Cavium Thunder X2
In ROCm, we also take advantage of PCIe ID based ordering technology for P2P when the GPU
originates two writes to two different targets:
* Write to another GPU memory
* Write to system memory to indicate transfer complete
They are routed to different ends of the computer, but we want to make sure the write to system
memory indicating transfer completion occurs AFTER the P2P write to the GPU has completed.
BAR memory overview
----------------------------------------------------------------------------------------------------
On a Xeon E5 based system, you can turn on above-4GB PCIe addressing in the BIOS; if so, you need to
set the memory-mapped input/output (MMIO) base address (MMIOH base) and range (MMIO high size) in the BIOS.
On a Supermicro system, you need to set the following in the system BIOS:
* Advanced->PCIe/PCI/PnP configuration-\> Above 4G Decoding = Enabled
* Advanced->PCIe/PCI/PnP Configuration-\>MMIOH Base = 512G
* Advanced->PCIe/PCI/PnP Configuration-\>MMIO High Size = 256G
When we support Large BAR capability, there is a Large BAR VBIOS that also disables the IO BAR.
GFX9 and Vega10 have a physical address width of up to 44 bits and a 48-bit virtual address width.
* BAR0-1 registers: 64bit, prefetchable, GPU memory. 8GB or 16GB depending on Vega10 SKU. Must
be placed < 2^44 to support P2P access from other Vega10.
* BAR2-3 registers: 64bit, prefetchable, Doorbell. Must be placed \< 2^44 to support P2P access from
other Vega10.
* BAR4 register: Optional, not a boot device.
* BAR5 register: 32bit, non-prefetchable, MMIO. Must be placed \< 4GB.
Here is how our base address register (BAR) works on GFX 8 GPUs with 40 bit Physical Address Limit ::
11:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Fiji [Radeon R9 FURY / NANO
Series] (rev c1)
Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Device 0b35
Flags: bus master, fast devsel, latency 0, IRQ 119
Memory at bf40000000 (64-bit, prefetchable) [size=256M]
Memory at bf50000000 (64-bit, prefetchable) [size=2M]
I/O ports at 3000 [size=256]
Memory at c7400000 (32-bit, non-prefetchable) [size=256K]
Expansion ROM at c7440000 [disabled] [size=128K]
Legend:
1 : GPU Frame Buffer BAR -- In this example it happens to be 256M, but typically this will be the size of the
GPU memory (typically 4GB+). This BAR has to be placed \< 2^40 to allow peer-to-peer access from
other GFX8 AMD GPUs. For GFX9 (Vega GPU) the BAR has to be placed \< 2^44 to allow peer-to-peer
access from other GFX9 AMD GPUs.
2 : Doorbell BAR -- The size of this BAR is typically \< 10MB (currently fixed at 2MB) for this
generation GPUs. This BAR has to be placed \< 2^40 to allow peer-to-peer access from other current
generation AMD GPUs.
3 : IO BAR -- This is for legacy VGA and boot device support, but since the GPUs in this project are
not VGA devices (headless), this is not a concern even if the SBIOS does not set it up.
4 : MMIO BAR -- This is required for the AMD driver software to access the configuration registers. Since the
remainder of the available BAR is only 1 DWORD (32-bit), this is placed \< 4GB. This is fixed at 256KB.
5 : Expansion ROM -- This is required for the AMD driver software to access the GPU video BIOS. This is
currently fixed at 128KB.
For more information, you can review
`Overview of Changes to PCI Express 3.0 <https://www.mindshare.com/files/resources/PCIe%203-0.pdf>`_.


@@ -615,7 +615,6 @@ The following table shows the hardware counters *by* all texture addressing unit
"``TA_FLAT_READ_WAVEFRONTS_sum``", "Sum of flat opcode reads processed"
"``TA_FLAT_WRITE_WAVEFRONTS_sum``", "Sum of flat opcode writes processed"
"``TA_FLAT_WAVEFRONTS_sum``", "Total number of flat opcode wavefronts processed"
"``TA_FLAT_READ_WAVEFRONTS_sum``", "Total number of flat opcode read wavefronts processed"
"``TA_FLAT_ATOMIC_WAVEFRONTS_sum``", "Total number of flat opcode atomic wavefronts processed"
"``TA_TOTAL_WAVEFRONTS_sum``", "Total number of wavefronts processed"


@@ -1,241 +0,0 @@
<head>
<meta charset="UTF-8">
<meta name="description" content="GPU memory">
<meta name="keywords" content="GPU memory, VRAM, video random access memory, pageable
memory, pinned memory, managed memory, AMD, ROCm">
</head>
# GPU memory
For the HIP reference documentation, see:
* {doc}`hip:doxygen/html/group___memory`
* {doc}`hip:doxygen/html/group___memory_m`
Host memory exists on the host (e.g. CPU) of the machine in random access memory (RAM).
Device memory exists on the device (e.g. GPU) of the machine in video random access memory (VRAM).
Recent architectures use graphics double data rate (GDDR) synchronous dynamic random-access memory (SDRAM), such as GDDR6, or high-bandwidth memory (HBM), such as HBM2e.
## Memory allocation
Memory can be allocated in two ways: pageable memory and pinned memory.
The following API calls will result in these allocations:
| API | Data location | Allocation |
|--------------------|---------------|------------|
| System allocated | Host | Pageable |
| `hipMallocManaged` | Host | Managed |
| `hipHostMalloc` | Host | Pinned |
| `hipMalloc` | Device | Pinned |
:::{tip}
`hipMalloc` and `hipFree` are blocking calls, however, HIP recently added non-blocking versions `hipMallocAsync` and `hipFreeAsync` which take in a stream as an additional argument.
:::
### Pageable memory
Pageable memory is usually obtained when calling `malloc` or `new` in a C++ application.
It is unique in that it exists on "pages" (blocks of memory), which can be migrated to other memory storage.
For example, migrating memory between CPU sockets on a motherboard, or a system that runs out of space in RAM and starts dumping pages of RAM into the swap partition of your hard drive.
### Pinned memory
Pinned memory (or page-locked memory, or non-pageable memory) is host memory that is mapped into the address space of all GPUs, meaning that the pointer can be used on both host and device.
Accessing host-resident pinned memory in device kernels is generally not recommended for performance, as it can force the data to traverse the host-device interconnect (e.g. PCIe), which is much slower than the on-device bandwidth (>40x on MI200).
Pinned host memory can be allocated with one of two types of coherence support:
:::{note}
In HIP, pinned memory allocations are coherent by default (`hipHostMallocDefault`).
There are additional pinned memory flags (e.g. `hipHostMallocMapped` and `hipHostMallocPortable`).
On MI200 these options do not impact performance.
<!-- TODO: link to programming_manual#memory-allocation-flags -->
For more information, see the section *memory allocation flags* in the HIP Programming Guide: {doc}`hip:how-to/programming_manual`.
:::
Much like how a process can be locked to a CPU core by setting affinity, a pinned memory allocator does this with the memory storage system.
On multi-socket systems it is important to ensure that pinned memory is located on the same socket as the owning process, or else each cache line will be moved through the CPU-CPU interconnect, thereby increasing latency and potentially decreasing bandwidth.
In practice, pinned memory is used to improve transfer times between host and device.
For transfer operations, such as `hipMemcpy` or `hipMemcpyAsync`, using pinned memory instead of pageable memory on host can lead to a ~3x improvement in bandwidth.
:::{tip}
If the application needs to move data back and forth between device and host (separate allocations), use pinned memory on the host side.
:::
### Managed memory
Managed memory refers to universally addressable, or unified memory available on the MI200 series of GPUs.
Much like pinned memory, managed memory shares a pointer between host and device and (by default) supports fine-grained coherence, however, managed memory can also automatically migrate pages between host and device.
The allocation will be managed by the AMD GPU driver using the Linux HMM (Heterogeneous Memory Management) mechanism.
If heterogeneous memory management (HMM) is not available, then `hipMallocManaged` will default back to using system memory and will act like pinned host memory.
Other managed memory API calls will have undefined behavior.
It is therefore recommended to check for managed memory capability with: `hipDeviceGetAttribute` and `hipDeviceAttributeManagedMemory`.
HIP supports additional calls that work with page migration:
* `hipMemAdvise`
* `hipMemPrefetchAsync`
:::{tip}
If the application needs to use data on both host and device regularly, does not want to deal with separate allocations, and is not worried about maxing out the VRAM on MI200 GPUs (64 GB per GCD), use managed memory.
:::
:::{tip}
If managed memory performance is poor, check to see if managed memory is supported on your system and if page migration (XNACK) is enabled.
:::
## Access behavior
Memory allocations for GPUs behave as follows:
| API | Data location | Host access | Device access |
|--------------------|---------------|--------------|----------------------|
| System allocated | Host | Local access | Unhandled page fault |
| `hipMallocManaged` | Host | Local access | Zero-copy |
| `hipHostMalloc` | Host | Local access | Zero-copy* |
| `hipMalloc` | Device | Zero-copy | Local access |
Zero-copy accesses happen over the Infinity Fabric interconnect or PCI-E lanes on discrete GPUs.
:::{note}
While `hipHostMalloc` allocated memory is accessible by a device, the host pointer must be converted to a device pointer with `hipHostGetDevicePointer`.
Memory allocated through standard system allocators, such as `malloc`, can be accessed by a device by registering the memory via `hipHostRegister`.
The device pointer to be used in kernels can be retrieved with `hipHostGetDevicePointer`.
Registered memory is treated like `hipHostMalloc` and will have similar performance.
On devices that support and have [](#xnack) enabled, such as the MI250X, `hipHostRegister` is not required as memory accesses are handled via automatic page migration.
:::
### XNACK
Normally, host and device memory are separate and data has to be transferred manually via `hipMemcpy`.
On a subset of GPUs, such as the MI200, there is an option to automatically migrate pages of memory between host and device.
This is important for managed memory, where the locality of the data is important for performance.
Depending on the system, page migration may be disabled by default in which case managed memory will act like pinned host memory and suffer degraded performance.
*XNACK* describes the GPU's ability to retry memory accesses that failed due to a page fault (which normally would lead to a memory access error), and instead retrieve the missing page.
This also affects memory allocated by the system as indicated by the following table:
| API | Data location | Host after device access | Device after host access |
|--------------------|---------------|--------------------------|--------------------------|
| System allocated | Host | Migrate page to host | Migrate page to device |
| `hipMallocManaged` | Host | Migrate page to host | Migrate page to device |
| `hipHostMalloc` | Host | Local access | Zero-copy |
| `hipMalloc` | Device | Zero-copy | Local access |
To check if page migration is available on a platform, use `rocminfo`:
```sh
$ rocminfo | grep xnack
Name: amdgcn-amd-amdhsa--gfx90a:sramecc+:xnack-
```
Here, `xnack-` means that XNACK is available but is disabled by default.
Turning on XNACK by setting the environment variable `HSA_XNACK=1` gives the expected result, `xnack+`:
```sh
$ HSA_XNACK=1 rocminfo | grep xnack
Name: amdgcn-amd-amdhsa--gfx90a:sramecc+:xnack+
```
`hipcc` by default will generate code that runs correctly with XNACK either enabled or disabled.
Setting the `--offload-arch=` option with `xnack+` or `xnack-` forces code to run only with XNACK enabled or disabled, respectively.
```sh
# Compiled kernels will run regardless if XNACK is enabled or is disabled.
hipcc --offload-arch=gfx90a
# Compiled kernels will only be run if XNACK is enabled with XNACK=1.
hipcc --offload-arch=gfx90a:xnack+
# Compiled kernels will only be run if XNACK is disabled with XNACK=0.
hipcc --offload-arch=gfx90a:xnack-
```
:::{tip}
If you want to make use of page migration, use managed memory. While pageable memory will migrate correctly, it is not a portable solution and can have performance issues if the accessed data isn't page aligned.
:::
### Coherence
* *Coarse-grained coherence* means that memory is only considered up to date at kernel boundaries, which can be enforced through `hipDeviceSynchronize`, `hipStreamSynchronize`, or any blocking operation that acts on the null stream (e.g. `hipMemcpy`).
For example, cacheable memory is a type of coarse-grained memory where an up-to-date copy of the data can be stored elsewhere (e.g. in an L2 cache).
* *Fine-grained coherence* means the coherence is supported while a CPU/GPU kernel is running.
This can be useful if both host and device are operating on the same dataspace using system-scope atomic operations (e.g. updating an error code or flag to a buffer).
Fine-grained memory implies that up-to-date data may be made visible to others regardless of kernel boundaries as discussed above.
| API | Flag | Coherence |
|-------------------------|------------------------------|----------------|
| `hipHostMalloc` | `hipHostMallocDefault` | Fine-grained |
| `hipHostMalloc` | `hipHostMallocNonCoherent` | Coarse-grained |
| API | Flag | Coherence |
|-------------------------|------------------------------|----------------|
| `hipExtMallocWithFlags` | `hipDeviceMallocDefault` | Coarse-grained |
| `hipExtMallocWithFlags` | `hipDeviceMallocFinegrained` | Fine-grained |
| API | `hipMemAdvise` argument | Coherence |
|-------------------------|------------------------------|----------------|
| `hipMallocManaged` | | Fine-grained |
| `hipMallocManaged` | `hipMemAdviseSetCoarseGrain` | Coarse-grained |
| `malloc` | | Fine-grained |
| `malloc` | `hipMemAdviseSetCoarseGrain` | Coarse-grained |
:::{tip}
Try to design your algorithms to avoid host-device memory coherence (e.g. system scope atomics). While it can be a useful feature in very specific cases, it is not supported on all systems, and can negatively impact performance by introducing the host-device interconnect bottleneck.
:::
The availability of fine- and coarse-grained memory pools can be checked with `rocminfo`:
```sh
$ rocminfo
...
*******
Agent 1
*******
Name: AMD EPYC 7742 64-Core Processor
...
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
...
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
...
*******
Agent 9
*******
Name: gfx90a
...
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
...
```
## System direct memory access
In most cases, the default behavior for HIP in transferring data from a pinned host allocation to device will run at the limit of the interconnect.
However, there are certain cases where the interconnect is not the bottleneck.
The primary way to transfer data onto and off of a GPU, such as the MI200, is to use the onboard System Direct Memory Access engine, which is used to feed blocks of memory to the off-device interconnect (either GPU-CPU or GPU-GPU).
Each GCD has a separate SDMA engine for host-to-device and device-to-host memory transfers.
Importantly, SDMA engines are separate from the computing infrastructure, meaning that memory transfers to and from a device will not impact kernel compute performance, though they do impact memory bandwidth to a limited extent.
The SDMA engines are mainly tuned for PCIe-4.0 x16, which means they are designed to operate at bandwidths up to 32 GB/s.
:::{note}
An important feature of the MI250X platform is the Infinity Fabric™ interconnect between host and device.
The Infinity Fabric interconnect supports improved performance over standard PCIe-4.0 (usually ~50% more bandwidth); however, since the SDMA engine does not run at this speed, it will not max out the bandwidth of the faster interconnect.
:::
The bandwidth limitation can be countered by bypassing the SDMA engine and replacing it with a type of copy kernel known as a "blit" kernel.
Blit kernels will use the compute units on the GPU, thereby consuming compute resources, which may not always be beneficial.
The easiest way to enable blit kernels is to set an environment variable `HSA_ENABLE_SDMA=0`, which will disable the SDMA engine.
On systems where the GPU uses a PCIe interconnect instead of an Infinity Fabric interconnect, blit kernels will not impact bandwidth, but will still consume compute resources.
The use of SDMA vs blit kernels also applies to MPI data transfers and GPU-GPU transfers.


@@ -0,0 +1,57 @@
.. meta::
:description: How ROCm uses PCIe atomics
:keywords: PCIe, PCIe atomics, atomics, Atomic operations, AMD, ROCm
*****************************************************************************
How ROCm uses PCIe atomics
*****************************************************************************
AMD ROCm is an extension of the Heterogeneous System Architecture (HSA). To meet the requirements of an HSA-compliant system, ROCm supports queuing models, memory models, and signaling and synchronization protocols. ROCm can perform atomic Read-Modify-Write (RMW) transactions that extend inter-processor synchronization mechanisms to Input/Output (I/O) devices starting from Peripheral Component Interconnect Express 3.0 (PCIe™ 3.0). It supports the defined HSA capabilities for queuing and signaling memory operations. To learn more about the requirements of an HSA-compliant system, see the
`HSA Platform System Architecture Specification <http://hsafoundation.com/wp-content/uploads/2021/02/HSA-SysArch-1.2.pdf>`_.
ROCm uses platform atomics to perform memory operations like queuing, signaling, and synchronization across multiple CPU, GPU agents, and I/O devices. Platform atomics ensure that atomic operations run synchronously, without interruptions or conflicts, across multiple shared resources.
Platform atomics in ROCm
==============================
Platform atomics enable the set of atomic operations that perform RMW actions across multiple processors, devices, and memory locations so that they run synchronously without interruption. An atomic operation is a sequence of computing instructions run as a single, indivisible unit. These instructions are completed in their entirety without any interruptions. If the instructions can't be completed as a unit without interruption, none of the instructions are run. These operations support 32-bit and 64-bit address formats.
Some of the operations for which ROCm uses platform atomics are:
* Update the HSA queue's ``read_dispatch_id``. The command processor on the GPU agent uses a 64-bit atomic add operation. It updates the packet ID it processed.
* Update the HSA queue's ``write_dispatch_id``. The CPU and GPU agents use a 64-bit atomic add operation. It supports multi-writer queue insertions.
* Update HSA Signals. A 64-bit atomic operation is used for CPU and GPU synchronization.
PCIe for atomic operations
----------------------------
ROCm requires CPUs that support PCIe atomics. Similarly, all connected I/O devices should also support PCIe atomics for optimum compatibility. PCIe supports the ``CAS`` (Compare and Swap), ``FetchADD``, and ``SWAP`` atomic operations across multiple resources. These atomic operations are initiated by the I/O devices that support 32-bit, 64-bit, and 128-bit operands. Likewise, the target memory address where these atomic operations are performed should also be aligned to the size of the operand. This alignment ensures that the operations are performed efficiently and correctly without failure.
When an atomic operation is successful, the requester receives a response of completion along with the operation result. However, any errors associated with the operation are signaled to the requester by updating the Completion Status field. Issues accessing the target location or running the atomic operation are common errors. Depending upon the error, the Completion Status field is updated to Completer Abort (CA) or Unsupported Request (UR). The field is present in the Completion Descriptor.
To learn more about the industry standards and specifications of PCIe, see `PCI-SIG Specification <https://pcisig.com/specifications>`_.
To learn more about PCIe and its capabilities, consult the following white papers:
* `Atomic Read Modify Write Primitives by Intel <https://www.intel.es/content/dam/doc/white-paper/atomic-read-modify-write-primitives-i-o-devices-paper.pdf>`_
* `PCI Express 3 Accelerator White paper by Intel <https://www.intel.sg/content/dam/doc/white-paper/pci-express3-accelerator-white-paper.pdf>`_
* `PCIe Generation 4 Base Specification includes atomic operations <https://astralvx.com/storage/2020/11/PCI_Express_Base_4.0_Rev0.3_February19-2014.pdf>`_
* `Xilinx PCIe Ultrascale White paper <https://docs.xilinx.com/v/u/8OZSA2V1b1LLU2rRCDVGQw>`_
Working with PCIe 3.0 in ROCm
-------------------------------
Starting with PCIe 3.0, atomic operations can be requested, routed through, and completed by PCIe components. Routing and completion do not require software support. Component support for each can be identified by the Device Capabilities 2 (DevCap2) register. Upstream
bridges need to have atomic operations routing enabled. If not enabled, the atomic operations will fail even if the
PCIe endpoint and PCIe I/O devices can perform atomic operations.
If your system uses PCIe switches to connect and enable communication between multiple PCIe components, the switches must also support atomic operations routing.
To enable atomic operations routing between multiple root ports, each root port must support atomic operation routing. This capability can be identified from the atomic operations routing support bit in the DevCap2 register. If the bit has a value of 1, routing is supported. Atomic operation requests are permitted only if a component's ``DEVCTL2.ATOMICOP_REQUESTER_ENABLE``
field is set. These requests can only be serviced if the upstream components also support atomic operation completion or if the requests can be routed to a component that supports atomic operation completion.
ROCm uses the PCIe-ID-based ordering technology for peer-to-peer (P2P) data transmission. PCIe-ID-based ordering technology is used when the GPU initiates multiple write operations to different memory locations.
For more information on changes implemented in PCIe 3.0, see `Overview of Changes to PCI Express 3.0 <https://www.mindshare.com/files/resources/PCIe%203-0.pdf>`_.


@@ -29,59 +29,48 @@ if os.environ.get("READTHEDOCS", "") == "True":
# configurations for PDF output by Read the Docs
project = "ROCm Documentation"
author = "Advanced Micro Devices, Inc."
copyright = "Copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved."
version = "6.3.0"
release = "6.3.0"
copyright = "Copyright (c) 2025 Advanced Micro Devices, Inc. All rights reserved."
version = "6.3.1"
release = "6.3.1"
setting_all_article_info = True
all_article_info_os = ["linux", "windows"]
all_article_info_author = ""
# pages with specific settings
article_pages = [
{"file": "about/release-notes", "os": ["linux", "windows"], "date": "2024-12-03"},
{"file": "about/release-notes", "os": ["linux", "windows"], "date": "2024-12-20"},
{"file": "compatibility/ml-compatibility/pytorch-compatibility", "os": ["linux"]},
{"file": "compatibility/ml-compatibility/tensorflow-compatibility", "os": ["linux"]},
{"file": "compatibility/ml-compatibility/jax-compatibility", "os": ["linux"]},
{"file": "how-to/deep-learning-rocm", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/index", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/install", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/train-a-model", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/deploy-your-model", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/hugging-face-models", "os": ["linux"]},
{"file": "how-to/rocm-for-hpc/index", "os": ["linux"]},
{"file": "how-to/llm-fine-tuning-optimization/index", "os": ["linux"]},
{"file": "how-to/llm-fine-tuning-optimization/overview", "os": ["linux"]},
{
"file": "how-to/llm-fine-tuning-optimization/fine-tuning-and-inference",
"os": ["linux"],
},
{
"file": "how-to/llm-fine-tuning-optimization/single-gpu-fine-tuning-and-inference",
"os": ["linux"],
},
{
"file": "how-to/llm-fine-tuning-optimization/multi-gpu-fine-tuning-and-inference",
"os": ["linux"],
},
{
"file": "how-to/llm-fine-tuning-optimization/llm-inference-frameworks",
"os": ["linux"],
},
{
"file": "how-to/llm-fine-tuning-optimization/model-acceleration-libraries",
"os": ["linux"],
},
{"file": "how-to/llm-fine-tuning-optimization/model-quantization", "os": ["linux"]},
{
"file": "how-to/llm-fine-tuning-optimization/optimizing-with-composable-kernel",
"os": ["linux"],
},
{
"file": "how-to/llm-fine-tuning-optimization/optimizing-triton-kernel",
"os": ["linux"],
},
{
"file": "how-to/llm-fine-tuning-optimization/profiling-and-debugging",
"os": ["linux"],
},
{"file": "how-to/performance-validation/mi300x/vllm-benchmark", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/index", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/train-a-model", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/training/scale-model-training", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/fine-tuning/index", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/fine-tuning/overview", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/fine-tuning/fine-tuning-and-inference", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/fine-tuning/single-gpu-fine-tuning-and-inference", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/fine-tuning/multi-gpu-fine-tuning-and-inference", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/index", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/install", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/hugging-face-models", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/llm-inference-frameworks", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/vllm-benchmark", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference/deploy-your-model", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference-optimization/index", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference-optimization/model-quantization", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference-optimization/model-acceleration-libraries", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference-optimization/optimizing-with-composable-kernel", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference-optimization/optimizing-triton-kernel", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference-optimization/profiling-and-debugging", "os": ["linux"]},
{"file": "how-to/rocm-for-ai/inference-optimization/workload", "os": ["linux"]},
{"file": "how-to/system-optimization/index", "os": ["linux"]},
{"file": "how-to/system-optimization/mi300x", "os": ["linux"]},
{"file": "how-to/system-optimization/mi200", "os": ["linux"]},
@@ -100,6 +89,9 @@ extensions = ["rocm_docs", "sphinx_reredirects", "sphinx_sitemap"]
external_projects_current_project = "rocm"
# Uncomment if facing rate limit exceed issue with local build
# external_projects_remote_repository = ""
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "https://rocm-stg.amd.com/")
html_context = {}
if os.environ.get("READTHEDOCS", "") == "True":

docs/contribute/building.md (new file, 150 lines)

@@ -0,0 +1,150 @@
<head>
<meta charset="UTF-8">
<meta name="description" content="Building ROCm documentation">
<meta name="keywords" content="documentation, Visual Studio Code, GitHub, command line,
AMD, ROCm">
</head>
# Building documentation
## GitHub
If you open a pull request and scroll down to the summary panel,
there is a commit status section. Next to the line
`docs/readthedocs.com:advanced-micro-devices-demo`, there is a `Details` link.
If you click this, it takes you to the Read the Docs build for your pull request.
![GitHub PR commit status](../data/contribute/commit-status.png)
If you don't see this line, click `Show all checks` to get an itemized view.
## Command line
You can build our documentation via the command line using Python.
See the `build.tools.python` setting in the [Read the Docs configuration file](https://github.com/ROCm/ROCm/blob/develop/.readthedocs.yaml) for the Python version used by Read the Docs to build documentation.
See the [Python requirements file](https://github.com/ROCm/ROCm/blob/develop/docs/sphinx/requirements.txt) for Python packages needed to build the documentation.
Use the Python Virtual Environment (`venv`) and run the following commands from the project root:
```sh
python3 -m venv .venv
.venv/bin/python -m pip install -r docs/sphinx/requirements.txt
.venv/bin/python -m sphinx -T -E -b html -d _build/doctrees -D language=en docs _build/html
```
Navigate to `_build/html/index.html` and open this file in a web browser.
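If you build on a remote machine or prefer browsing over HTTP, Python's built-in web server can serve the output. This is an optional convenience rather than part of the documented workflow:

```sh
# Serve the built site at http://localhost:8000 using the same virtual environment.
.venv/bin/python -m http.server 8000 --directory _build/html
```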
## Visual Studio Code
With the help of a few extensions, you can create a productive environment to author and test
documentation locally using Visual Studio (VS) Code. Follow these steps to configure VS Code:
1. Install the required extensions:
* Python: `(ms-python.python)`
* Live Server: `(ritwickdey.LiveServer)`
2. Add the following entries to `.vscode/settings.json`.
```json
{
"liveServer.settings.root": "/.vscode/build/html",
"liveServer.settings.wait": 1000,
"python.terminal.activateEnvInCurrentTerminal": true
}
```
* `liveServer.settings.root`: Sets the root of the output website for live previews. Must be changed
alongside the `tasks.json` command.
* `liveServer.settings.wait`: Tells the live server to wait with the update in order to give Sphinx time to
regenerate the site contents and not refresh before the build is complete.
* `python.terminal.activateEnvInCurrentTerminal`: Activates the automatic virtual environment, so you
can build the site from the integrated terminal.
3. Add the following tasks to `.vscode/tasks.json`.
```json
{
"version": "2.0.0",
"tasks": [
{
"label": "Build Docs",
"type": "process",
"windows": {
"command": "${workspaceFolder}/.venv/Scripts/python.exe"
},
"command": "${workspaceFolder}/.venv/bin/python3",
"args": [
"-m",
"sphinx",
"-j",
"auto",
"-T",
"-b",
"html",
"-d",
"${workspaceFolder}/.vscode/build/doctrees",
"-D",
"language=en",
"${workspaceFolder}/docs",
"${workspaceFolder}/.vscode/build/html"
],
"problemMatcher": [
{
"owner": "sphinx",
"fileLocation": "absolute",
"pattern": {
"regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):(\\d+):\\s+(WARNING|ERROR):\\s+(.*)$",
"file": 1,
"line": 2,
"severity": 3,
"message": 4
}
},
{
"owner": "sphinx",
"fileLocation": "absolute",
"pattern": {
"regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):{1,2}\\s+(WARNING|ERROR):\\s+(.*)$",
"file": 1,
"severity": 2,
"message": 3
}
}
],
"group": {
"kind": "build",
"isDefault": true
}
}
]
}
```
> Implementation detail: two problem matchers had to be defined because
> VS Code doesn't tolerate potentially absent problem information. While a
> single regex could match all types of errors, if a capture group that the
> `pattern` references remains empty (the line number doesn't show up in all
> warning/error messages), VS Code discards the message completely.
4. Configure the Python virtual environment (`venv`).
From the Command Palette, run `Python: Create Environment`. Select `venv` environment and
`docs/sphinx/requirements.txt`.
5. Build the docs.
Launch the default build task using one of the following options:
* A hotkey (the default is `Ctrl+Shift+B`)
* Running `Tasks: Run Build Task` from the Command Palette
6. Open the live preview.
Navigate to the site output within VS Code: right-click on `.vscode/build/html/index.html` and
select `Open with Live Server`. The contents should update on every rebuild without having to
refresh the browser.
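As an editor-agnostic alternative to the Live Server workflow above, the third-party `sphinx-autobuild` package can watch the sources and rebuild the preview on every change. Note that it isn't listed in `docs/sphinx/requirements.txt`, so treat this as an optional extra:

```sh
# Install the watcher into the virtual environment, then serve with automatic
# rebuilds; open the printed localhost URL in a browser.
.venv/bin/python -m pip install sphinx-autobuild
.venv/bin/sphinx-autobuild docs _build/html
```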

@@ -0,0 +1,99 @@
.. meta::
:description: Learn about BAR configuration in AMD GPUs and ways to troubleshoot physical addressing limit
:keywords: BAR memory, MMIO, GPU memory, Physical Addressing Limit, AMD, ROCm
**************************************
Troubleshoot BAR access limitation
**************************************
Direct Memory Access (DMA) to PCIe devices using Base Address Registers (BARs) can be restricted due to physical addressing limits. These restrictions can result in data access failures between the system components. Peer-to-peer (P2P) DMA is used to access resources such as registers and memory between devices. PCIe devices need memory-mapped input/output (MMIO) space for DMA, and these MMIO spaces are defined in the PCIe BARs.
These BARs are a set of 32-bit or 64-bit registers that define the resources a PCIe device provides. The CPU and other system devices also use them to access the resources of the PCIe devices. P2P DMA only works when one device can directly access the local BAR memory of another. If the memory address of a BAR exceeds the physical addressing limit of a device, the device will not be able to access that BAR, whether it is the device's own BAR or the BAR of another device in the system.
To handle any BAR access issues that might occur, you need to be aware of the physical address limitations of the devices and understand the :ref:`BAR configuration of AMD GPUs <bar-configuration>`. This information is important when setting up additional MMIO apertures for PCIe devices in the system's physical address space.
Handling physical address limitation
=============================================
When a system boots, the system BIOS allocates the physical address space for the components in the system, including system memory and MMIO apertures. On modern 64-bit platforms, there are generally two or more MMIO apertures: one located below 4 GB of physical address space for 32-bit compatibility, and one or more above 4 GB for devices needing more space.
You can control the memory address of the high MMIO aperture from the system BIOS configuration options. This lets you configure the additional MMIO space to align with the physical addressing limit and allows P2P DMA between the devices. For example, if a PCIe device is limited to 44 bits of physical addressing, you should ensure that the MMIO aperture sits below the 44-bit boundary in the system physical address space.
There are two ways to handle this:
* Ensure that the high MMIO aperture is within the physical addressing limits of the devices in the system. For example, if the devices have a 44-bit physical addressing limit, set the ``MMIO High Base`` and ``MMIO High size`` options in the BIOS such that the aperture is within the 44-bit address range, and ensure that the ``Above 4G Decoding`` option is Enabled.
* Enable the Input-Output Memory Management Unit (IOMMU). When the IOMMU is enabled in non-passthrough mode, it will create a virtual I/O address space for each device on the system. It also ensures that all virtual addresses created in that space are within the physical addressing limits of the device. For more information on IOMMU, see :doc:`../conceptual/iommu`.
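Whichever option you choose, it helps to confirm the current IOMMU state before and after changing BIOS settings. A minimal check follows; exact messages vary by kernel version and platform:

.. code-block:: shell

   # Look for AMD-Vi or IOMMU initialization messages from the kernel.
   sudo dmesg | grep -i -e iommu -e amd-vi
   # Kernel parameters such as amd_iommu=on or iommu=pt appear here if set.
   cat /proc/cmdline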
.. _bar-configuration:
BAR configuration for AMD GPUs
================================================
The following table shows how the BARs are configured for AMD GPUs.
.. list-table::
:widths: 25 25 50
:header-rows: 1
* - BAR Type
- Value
- Description
* - BAR0-1 registers
- 64-bit, Prefetchable, GPU memory
- 8 GB or 16 GB depending on the GPU. Set to less than 2^44 to support P2P access from other GPUs with a 44-bit physical address limit. Prefetchable memory enables faster reads for high-performance computing (HPC) by fetching contiguous data from the same source before it is requested, in anticipation of future requests.
* - BAR2-3 registers
- 64-bit, Prefetchable, Doorbell
- Set to less than 2^44 to support P2P access from other GPUs with a 44-bit physical address limit. As a Doorbell BAR, it indicates to the GPU that a new operation is in its queue to be processed.
* - BAR4 register
- Optional
- Not a boot device
* - BAR5 register
- 32-bit, Non-prefetchable, MMIO
- Is set to less than 4 GB.
Example of BAR usage on AMD GPUs
-------------------------------------
Following is an example configuration of BARs set by the system BIOS on a GFX8 GPU with a 40-bit physical addressing limit:
.. code:: shell
11:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Fiji [Radeon R9 FURY / NANO
Series] (rev c1)
Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Device 0b35
Flags: bus master, fast devsel, latency 0, IRQ 119
Memory at bf40000000 (64-bit, prefetchable) [size=256M]
Memory at bf50000000 (64-bit, prefetchable) [size=2M]
I/O ports at 3000 [size=256]
Memory at c7400000 (32-bit, non-prefetchable) [size=256K]
Expansion ROM at c7440000 [disabled] [size=128K]
Details of the BARs configured in the example are:
**GPU Frame Buffer BAR:** ``Memory at bf40000000 (64-bit, prefetchable) [size=256M]``
The size of the BAR in the example is 256 MB. Generally, it will be the size of the GPU memory (typically 4 GB+). Depending upon the physical address limit and generation of AMD GPUs, the BAR can be set below 2^40, 2^44, or 2^48.
**Doorbell BAR:** ``Memory at bf50000000 (64-bit, prefetchable) [size=2M]``
The size of this BAR should typically be less than 10 MB for this generation of GPUs; it is set to 2 MB in this example. The BAR is placed below 2^40 to allow peer-to-peer access from other generations of AMD GPUs.
**I/O BAR:** ``I/O ports at 3000 [size=256]``
This is for legacy VGA and boot device support. Because the GPUs used are not connected to a display (VGA devices), this is not a concern, even if it isn't set up in the system BIOS.
**MMIO BAR:** ``Memory at c7400000 (32-bit, non-prefetchable) [size=256K]``
The AMD driver requires this BAR to access the configuration registers. Because the remaining available BAR is only one DWORD (32-bit), it is set below 4 GB. In the example, it is fixed at 256 KB.
**Expansion ROM:** ``Expansion ROM at c7440000 [disabled] [size=128K]``
This is required by the AMD Driver to access the GPU video-BIOS. In the example, it is fixed at 128 KB.
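To inspect the BAR layout on your own system, you can list AMD devices with ``lspci``. This is a general sketch: ``1002`` is AMD's PCI vendor ID, and the output follows the same format as the example above.

.. code-block:: shell

   # Show BAR assignments for all AMD devices; -d 1002: filters by AMD's
   # PCI vendor ID and -vv prints each device's memory regions.
   sudo lspci -d 1002: -vv | grep -E "controller|Memory at|I/O ports|Expansion ROM"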

@@ -11,20 +11,24 @@ ROCm provides a comprehensive ecosystem for deep learning development, including
deep learning frameworks and libraries such as PyTorch, TensorFlow, and JAX. ROCm works closely with these
frameworks to ensure that framework-specific optimizations take advantage of AMD accelerator and GPU architectures.
The following guides cover installation processes for ROCm-aware deep learning frameworks.
The following guides provide information on compatibility and supported
features for these ROCm-enabled deep learning frameworks.
* :doc:`PyTorch for ROCm <rocm-install-on-linux:install/3rd-party/pytorch-install>`
* :doc:`TensorFlow for ROCm <rocm-install-on-linux:install/3rd-party/tensorflow-install>`
* :doc:`JAX for ROCm <rocm-install-on-linux:install/3rd-party/jax-install>`
* :doc:`PyTorch compatibility <../compatibility/ml-compatibility/pytorch-compatibility>`
* :doc:`TensorFlow compatibility <../compatibility/ml-compatibility/tensorflow-compatibility>`
* :doc:`JAX compatibility <../compatibility/ml-compatibility/jax-compatibility>`
The following chart steps through typical installation workflows for installing deep learning frameworks for ROCm.
This chart steps through typical installation workflows for installing deep learning frameworks for ROCm.
.. image:: ../data/how-to/framework_install_2024_07_04.png
:alt: Flowchart for installing ROCm-aware machine learning frameworks
:align: center
Find information on version compatibility and framework release notes in :doc:`Third-party support matrix
<rocm-install-on-linux:reference/3rd-party-support-matrix>`.
See the installation instructions to get started.
* :doc:`PyTorch for ROCm <rocm-install-on-linux:install/3rd-party/pytorch-install>`
* :doc:`TensorFlow for ROCm <rocm-install-on-linux:install/3rd-party/tensorflow-install>`
* :doc:`JAX for ROCm <rocm-install-on-linux:install/3rd-party/jax-install>`
.. note::
@@ -35,4 +39,11 @@ through the following guides.
* :doc:`rocm-for-ai/index`
* :doc:`llm-fine-tuning-optimization/index`
* :doc:`Training <rocm-for-ai/training/index>`
* :doc:`Fine-tuning LLMs <rocm-for-ai/fine-tuning/index>`
* :doc:`Inference <rocm-for-ai/inference/index>`
* :doc:`Inference optimization <rocm-for-ai/inference-optimization/index>`

@@ -1,264 +0,0 @@
.. meta::
:description: GPU-enabled Message Passing Interface
:keywords: Message Passing Interface, MPI, AMD, ROCm
***************************************************************************************************
GPU-enabled Message Passing Interface
***************************************************************************************************
The Message Passing Interface (`MPI <https://www.mpi-forum.org>`_) is a standard API for distributed
and parallel application development that can scale to multi-node clusters. To facilitate the porting of
applications to clusters with GPUs, ROCm enables various technologies. You can use these
technologies to add GPU pointers to MPI calls, enabling ROCm-aware MPI libraries to deliver optimal
performance for both intra-node and inter-node GPU-to-GPU communication.
The AMD kernel driver exposes remote direct memory access (RDMA) through *PeerDirect* interfaces.
This allows network interface cards (NICs) to directly read and write to RDMA-capable GPU device
memory, resulting in high-speed direct memory access (DMA) transfers between GPU and NIC. These
interfaces are used to optimize inter-node MPI message communication.
The Open MPI project is an open source implementation of the MPI standard. It's developed and maintained by
a consortium of academic, research, and industry partners. To compile Open MPI with ROCm support,
refer to the following sections:
* :ref:`open-mpi-ucx`
* :ref:`open-mpi-libfabric`
.. _open-mpi-ucx:
ROCm-aware Open MPI on InfiniBand and RoCE networks using UCX
================================================================
The `Unified Communication Framework <https://www.openucx.org/documentation>`_ (UCX), is an
open source, cross-platform framework designed to provide a common set of communication
interfaces for various network programming models and interfaces. UCX uses ROCm technologies to
implement various network operation primitives. UCX is the standard communication library for
InfiniBand and RDMA over Converged Ethernet (RoCE) network interconnect. To optimize data
transfer operations, many MPI libraries, including Open MPI, can leverage UCX internally.
UCX and Open MPI have a compile option to enable ROCm support. To install and configure UCX to compile Open MPI for ROCm, use the following instructions.
1. Set environment variables to install all software components in the same base directory. We use the
home directory in our example, but you can specify a different location if you want.
.. code-block:: shell
export INSTALL_DIR=$HOME/ompi_for_gpu
export BUILD_DIR=/tmp/ompi_for_gpu_build
mkdir -p $BUILD_DIR
2. Install UCX. To view UCX and ROCm version compatibility, refer to the
`communication libraries tables <https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/3rd-party-support-matrix.html>`_
.. code-block:: shell
export UCX_DIR=$INSTALL_DIR/ucx
cd $BUILD_DIR
git clone https://github.com/openucx/ucx.git -b v1.15.x
cd ucx
./autogen.sh
mkdir build
cd build
../configure --prefix=$UCX_DIR \
--with-rocm=/opt/rocm
make -j $(nproc)
make -j $(nproc) install
3. Install Open MPI.
.. code-block:: shell
export OMPI_DIR=$INSTALL_DIR/ompi
cd $BUILD_DIR
git clone --recursive https://github.com/open-mpi/ompi.git \
-b v5.0.x
cd ompi
./autogen.pl
mkdir build
cd build
../configure --prefix=$OMPI_DIR --with-ucx=$UCX_DIR \
--with-rocm=/opt/rocm
make -j $(nproc)
make install
.. _rocm-enabled-osu:
ROCm-enabled OSU benchmarks
---------------------------------------------------------------------------------------------------------------
You can use OSU Micro Benchmarks (OMB) to evaluate the performance of various primitives on
ROCm-supported AMD GPUs. The ``--enable-rocm`` option exposes this functionality.
.. code-block:: shell
export OSU_DIR=$INSTALL_DIR/osu
cd $BUILD_DIR
wget http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-7.2.tar.gz
tar xfz osu-micro-benchmarks-7.2.tar.gz
cd osu-micro-benchmarks-7.2
./configure --enable-rocm \
--with-rocm=/opt/rocm \
CC=$OMPI_DIR/bin/mpicc CXX=$OMPI_DIR/bin/mpicxx \
LDFLAGS="-L$OMPI_DIR/lib/ -lmpi -L/opt/rocm/lib/ \
$(hipconfig -C) -lamdhip64" CXXFLAGS="-std=c++11"
make -j $(nproc)
Intra-node run
----------------------------------------------------------------------------------------------------------------
Before running an Open MPI job, you must set the following environment variables to ensure that
you're using the correct versions of Open MPI and UCX.
.. code-block:: shell
export LD_LIBRARY_PATH=$OMPI_DIR/lib:$UCX_DIR/lib:/opt/rocm/lib
export PATH=$OMPI_DIR/bin:$PATH
To run the OSU bandwidth benchmark between the first two GPU devices (``GPU 0`` and ``GPU 1``)
inside the same node, use the following code.
.. code-block:: shell
$OMPI_DIR/bin/mpirun -np 2 \
-x UCX_TLS=sm,self,rocm \
--mca pml ucx \
./c/mpi/pt2pt/standard/osu_bw D D
This measures the unidirectional bandwidth from the first device (``GPU 0``) to the second device
(``GPU 1``). To select specific devices, for example ``GPU 2`` and ``GPU 3``, include the following
command:
.. code-block:: shell
export HIP_VISIBLE_DEVICES=2,3
To force using a copy kernel instead of a DMA engine for the data transfer, use the following
command:
.. code-block:: shell
export HSA_ENABLE_SDMA=0
The following output shows the effective transfer bandwidth measured for inter-die data transfer
between ``GPU 2`` and ``GPU 3`` on a system with MI250 GPUs. For messages larger than 67 MB, an effective
utilization of about 150 GB/sec is achieved:
.. image:: ../data/how-to/gpu-enabled-mpi-1.png
:width: 400
:alt: Inter-GPU bandwidth for various payload sizes
Collective operations
----------------------------------------------------------------------------------------------------------------
Collective operations on GPU buffers are best handled through the Unified Collective Communication
(UCC) library component in Open MPI. To accomplish this, you must configure and compile the UCC
library with ROCm support.
.. note::
You can verify UCC and ROCm version compatibility using the
`communication libraries tables <https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/3rd-party-support-matrix.html>`_
.. code-block:: shell
export UCC_DIR=$INSTALL_DIR/ucc
git clone https://github.com/openucx/ucc.git -b v1.2.x
cd ucc
./autogen.sh
./configure --with-rocm=/opt/rocm \
--with-ucx=$UCX_DIR \
--prefix=$UCC_DIR
make -j && make install
# Configure and compile Open MPI with UCX, UCC, and ROCm support
cd ompi
./configure --with-rocm=/opt/rocm \
--with-ucx=$UCX_DIR \
--with-ucc=$UCC_DIR \
--prefix=$OMPI_DIR
To use the UCC component with an MPI application, you must set additional parameters:
.. code-block:: shell
mpirun --mca pml ucx --mca osc ucx \
--mca coll_ucc_enable 1 \
--mca coll_ucc_priority 100 -np 64 ./my_mpi_app
.. _open-mpi-libfabric:
ROCm-aware Open MPI using libfabric
================================================================
For network interconnects that are not covered in the previous category, such as HPE Slingshot,
ROCm-aware communication can often be achieved through the libfabric library. For more information,
refer to the `libfabric documentation <https://github.com/ofiwg/libfabric/wiki>`_.
.. note::
When using Open MPI v5.0.x with libfabric support, shared memory communication between
processes on the same node goes through the *ob1/sm* component. This component has only
fundamental support for GPU memory, accomplished by staging transfers through a host buffer.
Consequently, the performance of device-to-device shared memory communication is lower than
the theoretical peak performance allowed by the GPU-to-GPU interconnect.
1. Install libfabric. Note that libfabric is often pre-installed. To determine if it's already installed, run:
.. code-block:: shell
module avail libfabric
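If a libfabric installation is present, running ``fi_info`` (a diagnostic utility that ships with libfabric) is a quick, ROCm-agnostic way to confirm which fabric providers are available:

.. code-block:: shell

   # List the available libfabric providers (for example verbs, shm, or cxi).
   fi_info -l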
Alternatively, you can download and compile libfabric with ROCm support. Note that not all
components required to support some networks (e.g., HPE Slingshot) are available in the open source
repository. Therefore, using a pre-installed libfabric library is strongly recommended over compiling
libfabric manually.
If a pre-compiled libfabric library is available on your system, you can skip the following step.
2. Compile libfabric with ROCm support.
.. code-block:: shell
export OFI_DIR=$INSTALL_DIR/ofi
cd $BUILD_DIR
git clone https://github.com/ofiwg/libfabric.git -b v1.19.x
cd libfabric
./autogen.sh
./configure --prefix=$OFI_DIR \
--with-rocr=/opt/rocm
make -j $(nproc)
make install
Installing Open MPI with libfabric support
----------------------------------------------------------------------------------------------------------------
To build Open MPI with libfabric, use the following code:
.. code-block:: shell
export OMPI_DIR=$INSTALL_DIR/ompi
cd $BUILD_DIR
git clone --recursive https://github.com/open-mpi/ompi.git \
-b v5.0.x
cd ompi
./autogen.pl
mkdir build
cd build
../configure --prefix=$OMPI_DIR --with-ofi=$OFI_DIR \
--with-rocm=/opt/rocm
make -j $(nproc)
make install
ROCm-aware OSU with Open MPI and libfabric
----------------------------------------------------------------------------------------------------------------
Compiling a ROCm-aware version of OSU benchmarks with Open MPI and libfabric uses the same
process described in :ref:`rocm-enabled-osu`.
To run an OSU benchmark using multiple nodes, use the following code:
.. code-block:: shell
export LD_LIBRARY_PATH=$OMPI_DIR/lib:$OFI_DIR/lib64:/opt/rocm/lib
$OMPI_DIR/bin/mpirun --mca pml ob1 --mca btl_ofi_mode 2 -np 2 \
./c/mpi/pt2pt/standard/osu_bw D D

@@ -1,37 +0,0 @@
.. meta::
:description: How to fine-tune LLMs with ROCm
:keywords: ROCm, LLM, fine-tuning, usage, tutorial
*******************************************
Fine-tuning LLMs and inference optimization
*******************************************
ROCm empowers the fine-tuning and optimization of large language models, making them accessible and efficient for
specialized tasks. ROCm supports the broader AI ecosystem to ensure seamless integration with open frameworks,
models, and tools.
For more information, see `What is ROCm? <https://rocm.docs.amd.com/en/latest/what-is-rocm.html>`_
Throughout the following topics, this guide discusses the goals and :ref:`challenges of fine-tuning a large language
model <fine-tuning-llms-concept-challenge>` like Llama 2. Then, it introduces :ref:`common methods of optimizing your
fine-tuning <fine-tuning-llms-concept-optimizations>` using techniques like LoRA with libraries like PEFT. In the
sections that follow, you'll find practical guides on libraries and tools to accelerate your fine-tuning.
- :doc:`Conceptual overview of fine-tuning LLMs <overview>`
- :doc:`Fine-tuning and inference <fine-tuning-and-inference>` using a
:doc:`single-accelerator <single-gpu-fine-tuning-and-inference>` or
:doc:`multi-accelerator <multi-gpu-fine-tuning-and-inference>` system.
- :doc:`Model quantization <model-quantization>`
- :doc:`Model acceleration libraries <model-acceleration-libraries>`
- :doc:`LLM inference frameworks <llm-inference-frameworks>`
- :doc:`Optimizing with Composable Kernel <optimizing-with-composable-kernel>`
- :doc:`Optimizing Triton kernels <optimizing-triton-kernel>`
- :doc:`Profiling and debugging <profiling-and-debugging>`

@@ -1,407 +0,0 @@
.. meta::
:description: Learn how to validate LLM inference performance on MI300X accelerators using AMD MAD and the unified
ROCm Docker image.
:keywords: model, MAD, automation, dashboarding, validate
***********************************************************
LLM inference performance validation on AMD Instinct MI300X
***********************************************************
.. _vllm-benchmark-unified-docker:
The `ROCm vLLM Docker <https://hub.docker.com/r/rocm/vllm/tags>`_ image offers
a prebuilt, optimized environment designed for validating large language model
(LLM) inference performance on the AMD Instinct™ MI300X accelerator. This
ROCm vLLM Docker image integrates vLLM and PyTorch tailored specifically for the
MI300X accelerator and includes the following components:
* `ROCm 6.2.1 <https://github.com/ROCm/ROCm>`_
* `vLLM 0.6.4 <https://docs.vllm.ai/en/latest>`_
* `PyTorch 2.5.0 <https://github.com/pytorch/pytorch>`_
* Tuning files (in CSV format)
With this Docker image, you can quickly validate the expected inference
performance numbers on the MI300X accelerator. This topic also provides tips on
optimizing performance with popular AI models.
.. hlist::
:columns: 6
* Llama 3.1 8B
* Llama 3.1 70B
* Llama 3.1 405B
* Llama 2 7B
* Llama 2 70B
* Mixtral 8x7B
* Mixtral 8x22B
* Mistral 7B
* Qwen2 7B
* Qwen2 72B
* JAIS 13B
* JAIS 30B
.. _vllm-benchmark-vllm:
.. note::
vLLM is a toolkit and library for LLM inference and serving. AMD implements
high-performance custom kernels and modules in vLLM to enhance performance.
See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for
more information.
Getting started
===============
Use the following procedures to reproduce the benchmark results on an
MI300X accelerator with the prebuilt vLLM Docker image.
.. _vllm-benchmark-get-started:
1. Disable NUMA auto-balancing.
To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU
might hang until the periodic balancing is finalized. For more information,
see :ref:`AMD Instinct MI300X system optimization <mi300x-disable-numa>`.
.. code-block:: shell
# disable automatic NUMA balancing
sh -c 'echo 0 > /proc/sys/kernel/numa_balancing'
# check if NUMA balancing is disabled (returns 0 if disabled)
cat /proc/sys/kernel/numa_balancing
0
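Writing to ``/proc`` does not persist across reboots. The same setting is exposed through ``sysctl`` as ``kernel.numa_balancing``, so an equivalent (and persistable) sketch is:

.. code-block:: shell

   # Equivalent sysctl invocation; add kernel.numa_balancing=0 to a file under
   # /etc/sysctl.d/ to make the setting persistent across reboots.
   sudo sysctl kernel.numa_balancing=0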
2. Download the :ref:`ROCm vLLM Docker image <vllm-benchmark-unified-docker>`.
Use the following command to pull the Docker image from Docker Hub.
.. code-block:: shell
docker pull rocm/vllm:rocm6.2_mi300_ubuntu20.04_py3.9_vllm_0.6.4
Once setup is complete, you can choose between two options to reproduce the
benchmark results:
- :ref:`MAD-integrated benchmarking <vllm-benchmark-mad>`
- :ref:`Standalone benchmarking <vllm-benchmark-standalone>`
.. _vllm-benchmark-mad:
MAD-integrated benchmarking
===========================
Clone the ROCm Model Automation and Dashboarding (`MAD <https://github.com/ROCm/MAD>`__) repository to a local
directory and install the required packages on the host machine.
.. code-block:: shell
git clone https://github.com/ROCm/MAD
cd MAD
pip install -r requirements.txt
Use this command to run a performance benchmark test of the Llama 3.1 8B model
on one GPU with ``float16`` data type in the host machine.
.. code-block:: shell
export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models"
python3 tools/run_models.py --tags pyt_vllm_llama-3.1-8b --keep-model-dir --live-output --timeout 28800
ROCm MAD launches a Docker container with the name
``container_ci-pyt_vllm_llama-3.1-8b``. The latency and throughput reports of the
model are collected in the following path: ``~/MAD/reports_float16/``.
Although the following models are preconfigured to collect latency and
throughput performance data, you can also change the benchmarking parameters.
Refer to the :ref:`Standalone benchmarking <vllm-benchmark-standalone>` section.
Available models
----------------
.. hlist::
:columns: 3
* ``pyt_vllm_llama-3.1-8b``
* ``pyt_vllm_llama-3.1-70b``
* ``pyt_vllm_llama-3.1-405b``
* ``pyt_vllm_llama-2-7b``
* ``pyt_vllm_llama-2-70b``
* ``pyt_vllm_mixtral-8x7b``
* ``pyt_vllm_mixtral-8x22b``
* ``pyt_vllm_mistral-7b``
* ``pyt_vllm_qwen2-7b``
* ``pyt_vllm_qwen2-72b``
* ``pyt_vllm_jais-13b``
* ``pyt_vllm_jais-30b``
* ``pyt_vllm_llama-3.1-8b_fp8``
* ``pyt_vllm_llama-3.1-70b_fp8``
* ``pyt_vllm_llama-3.1-405b_fp8``
* ``pyt_vllm_mixtral-8x7b_fp8``
* ``pyt_vllm_mixtral-8x22b_fp8``
.. _vllm-benchmark-standalone:
Standalone benchmarking
=======================
You can run the vLLM benchmark tool independently by starting the
:ref:`Docker container <vllm-benchmark-get-started>` as shown in the following
snippet.
.. code-block::
docker pull rocm/vllm:rocm6.2_mi300_ubuntu20.04_py3.9_vllm_0.6.4
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128G --security-opt seccomp=unconfined --security-opt apparmor=unconfined --cap-add=SYS_PTRACE -v $(pwd):/workspace --env HUGGINGFACE_HUB_CACHE=/workspace --name vllm_v0.6.4 rocm/vllm:rocm6.2_mi300_ubuntu20.04_py3.9_vllm_0.6.4
In the Docker container, clone the ROCm MAD repository and navigate to the
benchmark scripts directory at ``~/MAD/scripts/vllm``.
.. code-block::
git clone https://github.com/ROCm/MAD
cd MAD/scripts/vllm
Command
-------
To start the benchmark, use the following command with the appropriate options.
See :ref:`Options <vllm-benchmark-standalone-options>` for the list of
options and their descriptions.
.. code-block:: shell
./vllm_benchmark_report.sh -s $test_option -m $model_repo -g $num_gpu -d $datatype
See the :ref:`examples <vllm-benchmark-run-benchmark>` for more information.
.. note::
The input sequence length, output sequence length, and tensor parallel (TP) size are
already configured. You don't need to specify them with this script.
.. note::
If you encounter the following error, provide a Hugging Face token that is
authorized to access the gated models.
.. code-block:: shell
OSError: You are trying to access a gated repo.
# pass your HF_TOKEN
export HF_TOKEN=$your_personal_hf_token
.. _vllm-benchmark-standalone-options:
Options
-------
.. list-table::
:header-rows: 1
:align: center
* - Name
- Options
- Description
* - ``$test_option``
- latency
- Measure decoding token latency
* -
- throughput
- Measure token generation throughput
* -
- all
- Measure both throughput and latency
* - ``$model_repo``
- ``meta-llama/Meta-Llama-3.1-8B-Instruct``
- Llama 3.1 8B
* - (``float16``)
- ``meta-llama/Meta-Llama-3.1-70B-Instruct``
- Llama 3.1 70B
* -
- ``meta-llama/Meta-Llama-3.1-405B-Instruct``
- Llama 3.1 405B
* -
- ``meta-llama/Llama-2-7b-chat-hf``
- Llama 2 7B
* -
- ``meta-llama/Llama-2-70b-chat-hf``
- Llama 2 70B
* -
- ``mistralai/Mixtral-8x7B-Instruct-v0.1``
- Mixtral 8x7B
* -
- ``mistralai/Mixtral-8x22B-Instruct-v0.1``
- Mixtral 8x22B
* -
- ``mistralai/Mistral-7B-Instruct-v0.3``
- Mistral 7B
* -
- ``Qwen/Qwen2-7B-Instruct``
- Qwen2 7B
* -
- ``Qwen/Qwen2-72B-Instruct``
- Qwen2 72B
* -
- ``core42/jais-13b-chat``
- JAIS 13B
* -
- ``core42/jais-30b-chat-v3``
- JAIS 30B
* - ``$model_repo``
- ``amd/Meta-Llama-3.1-8B-Instruct-FP8-KV``
- Llama 3.1 8B
* - (``float8``)
- ``amd/Meta-Llama-3.1-70B-Instruct-FP8-KV``
- Llama 3.1 70B
* -
- ``amd/Meta-Llama-3.1-405B-Instruct-FP8-KV``
- Llama 3.1 405B
* -
- ``amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV``
- Mixtral 8x7B
* -
- ``amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV``
- Mixtral 8x22B
* - ``$num_gpu``
- 1 or 8
- Number of GPUs
* - ``$datatype``
- ``float16`` or ``float8``
- Data type
.. _vllm-benchmark-run-benchmark:
Running the benchmark on the MI300X accelerator
-----------------------------------------------
Here are some examples of running the benchmark with various options.
See :ref:`Options <vllm-benchmark-standalone-options>` for the list of
options and their descriptions.
Example 1: latency benchmark
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use this command to benchmark the latency of the Llama 3.1 8B model on one GPU with the ``float16`` and ``float8`` data types.
.. code-block::
./vllm_benchmark_report.sh -s latency -m meta-llama/Meta-Llama-3.1-8B-Instruct -g 1 -d float16
./vllm_benchmark_report.sh -s latency -m amd/Meta-Llama-3.1-8B-Instruct-FP8-KV -g 1 -d float8
Find the latency reports at:
- ``./reports_float16/summary/Meta-Llama-3.1-8B-Instruct_latency_report.csv``
- ``./reports_float8/summary/Meta-Llama-3.1-8B-Instruct-FP8-KV_latency_report.csv``
Example 2: throughput benchmark
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use this command to benchmark the throughput of the Llama 3.1 8B model on one GPU with the ``float16`` and ``float8`` data types.
.. code-block:: shell
./vllm_benchmark_report.sh -s throughput -m meta-llama/Meta-Llama-3.1-8B-Instruct -g 1 -d float16
./vllm_benchmark_report.sh -s throughput -m amd/Meta-Llama-3.1-8B-Instruct-FP8-KV -g 1 -d float8
Find the throughput reports at:
- ``./reports_float16/summary/Meta-Llama-3.1-8B-Instruct_throughput_report.csv``
- ``./reports_float8/summary/Meta-Llama-3.1-8B-Instruct-FP8-KV_throughput_report.csv``
.. raw:: html
<style>
mjx-container[jax="CHTML"][display="true"] {
text-align: left;
margin: 0;
}
</style>
.. note::
Throughput is calculated as:
- .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time
- .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time
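As a worked example with illustrative numbers only: 32 requests with input and output lengths of 128 tokens each, completing in 10 seconds, yield a total throughput of 32 × 256 / 10 = 819.2 tokens/s and a generation throughput of 32 × 128 / 10 = 409.6 tokens/s.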
Further reading
===============
- For application performance optimization strategies for HPC and AI workloads,
including inference with vLLM, see :doc:`/how-to/tuning-guides/mi300x/workload`.
- To learn more about the options for latency and throughput benchmark scripts,
see `<https://github.com/ROCm/vllm/tree/main/benchmarks>`_.
- To learn more about system settings and management practices to configure your system for
MI300X accelerators, see :doc:`/how-to/system-optimization/mi300x`.
- To learn how to run LLM models from Hugging Face or your own model, see
:doc:`Using ROCm for AI </how-to/rocm-for-ai/index>`.
- To learn how to optimize inference on LLMs, see
:doc:`Fine-tuning LLMs and inference optimization </how-to/llm-fine-tuning-optimization/index>`.
- For a list of other ready-made Docker images for ROCm, see the
:doc:`Docker image support matrix <rocm-install-on-linux:reference/docker-image-support-matrix>`.
- To compare with the previous version of the ROCm vLLM Docker image for performance validation, refer to
`LLM inference performance validation on AMD Instinct MI300X (ROCm 6.2.0) <https://rocm.docs.amd.com/en/docs-6.2.0/how-to/performance-validation/mi300x/vllm-benchmark.html>`_.

@@ -1,6 +1,6 @@
.. meta::
:description: How to fine-tune LLMs with ROCm
:keywords: ROCm, LLM, fine-tuning, inference, usage, tutorial
:description: How to fine-tune models with ROCm
:keywords: ROCm, LLM, fine-tuning, inference, usage, tutorial, deep learning, PyTorch, TensorFlow, JAX
*************************
Fine-tuning and inference
@@ -9,7 +9,7 @@ Fine-tuning and inference
Fine-tuning using ROCm involves leveraging AMD's GPU-accelerated :doc:`libraries <rocm:reference/api-libraries>` and
:doc:`tools <rocm:reference/rocm-tools>` to optimize and train deep learning models. ROCm provides a comprehensive
ecosystem for deep learning development, including open-source libraries for optimized deep learning operations and
ROCm-aware versions of :doc:`deep learning frameworks <../deep-learning-rocm>` such as PyTorch, TensorFlow, and JAX.
ROCm-aware versions of :doc:`deep learning frameworks <../../deep-learning-rocm>` such as PyTorch, TensorFlow, and JAX.
Single-accelerator systems, such as a machine equipped with a single accelerator or GPU, are commonly used for
smaller-scale deep learning tasks, including fine-tuning pre-trained models and running inference on moderately

@@ -0,0 +1,25 @@
.. meta::
:description: How to fine-tune LLMs with ROCm
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, GPUs, Llama, accelerators
*******************************************
Use ROCm for fine-tuning LLMs
*******************************************
Fine-tuning is an essential technique in machine learning, where a pre-trained model, typically trained on a large-scale dataset, is further refined to achieve better performance and adapt to a particular task or dataset of interest.
With AMD GPUs, the fine-tuning process benefits from the parallel processing capabilities and efficient resource management, ultimately leading to improved performance and faster model adaptation to the target domain.
The ROCm™ software platform helps you optimize this fine-tuning process by supporting various optimization techniques tailored for AMD GPUs. It empowers the fine-tuning of large language models, making them accessible and efficient for specialized tasks. ROCm supports the broader AI ecosystem to ensure seamless integration with open frameworks, models, and tools.
Throughout the following topics, this guide discusses the goals and :ref:`challenges of fine-tuning a large language
model <fine-tuning-llms-concept-challenge>` like Llama 2. In the
sections that follow, you'll find practical guides on libraries and tools to accelerate your fine-tuning.
- :doc:`Conceptual overview of fine-tuning LLMs <overview>`
- :doc:`Fine-tuning and inference <fine-tuning-and-inference>` using a
:doc:`single-accelerator <single-gpu-fine-tuning-and-inference>` or
:doc:`multi-accelerator <multi-gpu-fine-tuning-and-inference>` system.

@@ -1,6 +1,6 @@
.. meta::
:description: Model fine-tuning and inference on a multi-GPU system
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, multi-GPU, distributed, inference
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, multi-GPU, distributed, inference, accelerators, PyTorch, HuggingFace, torchtune
*****************************************************
Fine-tuning and inference using multiple accelerators
@@ -233,4 +233,4 @@ GPU model fine-tuning and inference with LLMs.
INFO:torchtune.utils.logging:Learning rate scheduler is initialized.
1|111|Loss: 1.5790324211120605: 7%|█ | 114/1618
Read more about inference frameworks in :doc:`LLM inference frameworks <llm-inference-frameworks>`.
Read more about inference frameworks in :doc:`LLM inference frameworks <../inference/llm-inference-frameworks>`.

@@ -1,6 +1,6 @@
.. meta::
:description: How to fine-tune LLMs with ROCm
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, optimzation, LoRA, walkthrough
:description: Conceptual overview of fine-tuning LLMs
:keywords: ROCm, LLM, Llama, fine-tuning, usage, tutorial, optimzation, LoRA, walkthrough, PEFT, Reinforcement
***************************************
Conceptual overview of fine-tuning LLMs
@@ -41,7 +41,7 @@ The weight update is as follows: :math:`W_{updated} = W + ΔW`.
If the weight matrix :math:`W` contains 7B parameters, then the weight update matrix :math:`ΔW` should also
contain 7B parameters. Therefore, the :math:`ΔW` calculation is computationally and memory intensive.
.. figure:: ../../data/how-to/llm-fine-tuning-optimization/weight-update.png
.. figure:: ../../../data/how-to/llm-fine-tuning-optimization/weight-update.png
:alt: Weight update diagram
(a) Weight update in regular fine-tuning. (b) Weight update in LoRA where the product of matrix A (:math:`M\times K`)

@@ -1,6 +1,6 @@
.. meta::
:description: Model fine-tuning and inference on a single-GPU system
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, single-GPU, LoRA, PEFT, inference
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, single-GPU, LoRA, PEFT, inference, SFTTrainer
****************************************************
Fine-tuning and inference using a single accelerator
@@ -80,7 +80,7 @@ Setting up the base implementation environment
#. Install the required dependencies.
bitsandbytes is a library that facilitates quantization to improve the efficiency of deep learning models. Learn more
about its use in :doc:`model-quantization`.
about its use in :doc:`../inference-optimization/model-quantization`.
See the :ref:`Optimizations for model fine-tuning <fine-tuning-llms-concept-optimizations>` for a brief discussion on
PEFT and TRL.
@@ -507,4 +507,4 @@ If using multiple accelerators, see
popular libraries that simplify fine-tuning and inference in a multi-accelerator system.
Read more about inference frameworks like vLLM and Hugging Face TGI in
:doc:`LLM inference frameworks <llm-inference-frameworks>`.
:doc:`LLM inference frameworks <../inference/llm-inference-frameworks>`.

@@ -1,26 +1,27 @@
.. meta::
:description: How to use ROCm for AI
:description: Learn how to use ROCm for AI.
:keywords: ROCm, AI, machine learning, LLM, usage, tutorial
*****************
Using ROCm for AI
*****************
**************************
Use ROCm for AI
**************************
ROCm offers a suite of optimizations for AI workloads from large language models (LLMs) to image and video detection and
recognition, life sciences and drug discovery, autonomous driving, robotics, and more. ROCm proudly supports the broader
AI software ecosystem, including open frameworks, models, and tools.
ROCm™ is an open-source software platform that enables high-performance computing and machine learning applications. It accelerates training, fine-tuning, and inference for AI application development. With ROCm, you can access the full power of AMD GPUs, which can significantly improve the performance and efficiency of AI workloads.
For more information, see `What is ROCm? <https://rocm.docs.amd.com/en/latest/what-is-rocm.html>`_
You can use ROCm to perform distributed training, which enables you to train models across multiple GPUs or nodes simultaneously. ROCm also supports mixed-precision training, which can reduce the memory and compute requirements of training workloads. For fine-tuning, ROCm provides access to various algorithms and optimization techniques. For inference, ROCm provides several techniques to optimize your models for deployment, such as quantization, GEMM tuning, and optimization with Composable Kernel.
Overall, ROCm can improve the performance and efficiency of your AI applications. With its training, fine-tuning, and inference support, ROCm provides a complete solution for optimizing AI workflows and achieving the best possible results on AMD GPUs.
In this guide, you'll learn about:
In this guide, you'll learn how to use ROCm for AI:
- :doc:`Installing ROCm and machine learning frameworks <install>`
- :doc:`Training <training/index>`
- :doc:`Training a model <train-a-model>`
- :doc:`Fine-tuning LLMs <fine-tuning/index>`
- :doc:`Running models from Hugging Face <hugging-face-models>`
- :doc:`Inference <inference/index>`
- :doc:`Inference optimization <inference-optimization/index>`
- :doc:`Deploying your model <deploy-your-model>`
To learn about ROCm for HPC applications and scientific computing, see
:doc:`../rocm-for-hpc/index`.

@@ -0,0 +1,36 @@
.. meta::
:description: How to Use ROCm for AI inference optimization
:keywords: ROCm, LLM, AI inference, Optimization, GPUs, usage, tutorial
*******************************************
Use ROCm for AI inference optimization
*******************************************
AI inference optimization is the process of improving the performance of machine learning models and speeding up the inference process. It includes:
- **Quantization**: This involves reducing the precision of model weights and activations while maintaining acceptable accuracy levels. Reduced precision improves inference efficiency because lower precision data requires less storage and better utilizes the hardware's computation power.
- **Kernel optimization**: This technique involves optimizing computation kernels to exploit the underlying hardware capabilities. For example, the kernels can be optimized to use multiple GPU cores or utilize specialized hardware like tensor cores to accelerate the computations.
- **Libraries**: Libraries such as Flash Attention, xFormers, and PyTorch TunableOp are used to accelerate deep learning models and improve the performance of inference workloads.
- **Hardware acceleration**: Hardware acceleration techniques, like GPUs for AI inference, can significantly improve performance due to their parallel processing capabilities.
- **Pruning**: This involves removing unnecessary connections, layers, or weights from a pre-trained model while maintaining acceptable accuracy levels, resulting in a smaller model that requires fewer computational resources to run inference.
Utilizing these optimization techniques with the ROCm™ software platform can significantly reduce inference time, improve performance, and reduce the cost of your AI applications.
Throughout the following topics, this guide discusses optimization techniques for inference workloads.
- :doc:`Model quantization <model-quantization>`
- :doc:`Model acceleration libraries <model-acceleration-libraries>`
- :doc:`Optimizing with Composable Kernel <optimizing-with-composable-kernel>`
- :doc:`Optimizing Triton kernels <optimizing-triton-kernel>`
- :doc:`Profiling and debugging <profiling-and-debugging>`
- :doc:`Workload tuning <workload>`

@@ -1,5 +1,5 @@
.. meta::
:description: How to fine-tune LLMs with ROCm
:description: How to use model acceleration techniques and libraries to improve memory efficiency and performance.
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, Flash Attention, Hugging Face, xFormers, vLLM, PyTorch
****************************
@@ -20,7 +20,7 @@ Attention (GQA), and Multi-Query Attention (MQA). This reduction in memory movem
time-to-first-token (TTFT) latency for large batch sizes and long prompt sequences, thereby enhancing overall
performance.
.. image:: ../../data/how-to/llm-fine-tuning-optimization/attention-module.png
.. image:: ../../../data/how-to/llm-fine-tuning-optimization/attention-module.png
:alt: Attention module of a large language module utilizing tiling
:align: center
@@ -245,7 +245,7 @@ page describes the options.
Validator,ROCBLAS_VERSION,4.1.0-cefa4a9b-dirty
GemmTunableOp_float_TN,tn_200_100_20,Gemm_Rocblas_32323,0.00669595
.. image:: ../../data/how-to/llm-fine-tuning-optimization/tunableop.png
.. image:: ../../../data/how-to/llm-fine-tuning-optimization/tunableop.png
:alt: GEMM and TunableOp
:align: center
@@ -277,7 +277,7 @@ Installing FBGEMM_GPU
Installing FBGEMM_GPU consists of the following steps:
* Set up an isolated Miniconda environment
* Install ROCm using Docker or the :doc:`package manager <rocm-install-on-linux:install/native-install/index>`
* Install ROCm using Docker or the :doc:`package manager <rocm-install-on-linux:install/install-methods/package-manager-index>`
* Install the nightly `PyTorch <https://pytorch.org/>`_ build
* Complete the pre-build and build tasks

@@ -1,5 +1,5 @@
.. meta::
:description: How to fine-tune LLMs with ROCm
:description: How to use model quantization techniques to speed up inference.
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, quantization, GPTQ, transformers, bitsandbytes
*****************************

@@ -1,6 +1,6 @@
.. meta::
:description: How to fine-tune LLMs with ROCm
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, Triton, kernel, performance, optimization
:description: How to optimize Triton kernels for ROCm.
:keywords: ROCm, LLM, fine-tuning, usage, MI300X, tutorial, Triton, kernel, performance, optimization
*************************
Optimizing Triton kernels
@@ -13,7 +13,7 @@ and CUDA kernel optimization.
Refer to the
:ref:`Triton kernel performance optimization <mi300x-triton-kernel-performance-optimization>`
section of the :doc:`/how-to/tuning-guides/mi300x/workload` guide
section of the :doc:`workload` guide
for detailed information.
Triton kernel performance optimization includes the following topics.

@@ -1,8 +1,9 @@
<head>
<meta charset="UTF-8">
<meta name="description" content="SmoothQuant model inference on AMD Instinct MI300X using Composable Kernel">
<meta name="keywords" content="Mixed Precision, Kernel, Inference, Linear Algebra">
</head>
---
myst:
html_meta:
"description": "How to optimize machine learning workloads with Composable Kernel (CK)."
"keywords": "mixed, precision, kernel, inference, linear, algebra, ck, GEMM"
---
# Optimizing with Composable Kernel
@@ -32,7 +33,7 @@ The template parameters of the instance are grouped into four parameter types:
================
### Figure 2
================ -->
```{figure} ../../data/how-to/llm-fine-tuning-optimization/ck-template_parameters.jpg
```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-template_parameters.jpg
The template parameters of the selected GEMM kernel are classified into four groups. These template parameter groups should be defined properly before running the instance.
```
@@ -126,7 +127,7 @@ The row and column, and stride information of input matrices are also passed to
================
### Figure 3
================ -->
```{figure} ../../data/how-to/llm-fine-tuning-optimization/ck-kernel_launch.jpg
```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-kernel_launch.jpg
Templated kernel launching consists of kernel instantiation, making arguments by passing in actual application parameters, creating an invoker, and running the instance through the invoker.
```
@@ -155,7 +156,7 @@ The first operation in the process is to perform the multiplication of input mat
================
### Figure 4
================ -->
```{figure} ../../data/how-to/llm-fine-tuning-optimization/ck-operation_flow.jpg
```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-operation_flow.jpg
Operation flow.
```
@@ -171,7 +172,7 @@ Here, we use [DeviceBatchedGemmMultiD_Xdl](https://github.com/ROCm/composable_ke
================
### Figure 5
================ -->
```{figure} ../../data/how-to/llm-fine-tuning-optimization/ck-root_instance.jpg
```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-root_instance.jpg
Use the DeviceBatchedGemmMultiD_Xdl instance as a root.
```
@@ -421,7 +422,7 @@ Run `python setup.py install` to build and install the extension. It should look
================
### Figure 6
================ -->
```{figure} ../../data/how-to/llm-fine-tuning-optimization/ck-compilation.jpg
```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-compilation.jpg
Compilation and installation of the INT8 kernels.
```
@@ -433,7 +434,7 @@ The implementation architecture of running SmoothQuant models on MI300X GPUs is
================
### Figure 7
================ -->
```{figure} ../../data/how-to/llm-fine-tuning-optimization/ck-inference_flow.jpg
```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-inference_flow.jpg
The implementation architecture of running SmoothQuant models on AMD MI300X accelerators.
```
@@ -459,7 +460,7 @@ Figure 8 shows the performance comparisons between the original FP16 and the Smo
================
### Figure 8
================ -->
```{figure} ../../data/how-to/llm-fine-tuning-optimization/ck-comparisons.jpg
```{figure} ../../../data/how-to/llm-fine-tuning-optimization/ck-comparisons.jpg
Performance comparisons between the original FP16 and the SmoothQuant-quantized INT8 models on a single MI300X accelerator.
```

@@ -1,12 +1,12 @@
.. meta::
:description: How to fine-tune LLMs with ROCm
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, profiling, debugging, performance, Triton
:description: How to use ROCm profiling and debugging tools.
:keywords: ROCm, LLM, fine-tuning, usage, MI300X, tutorial, profiling, debugging, performance, Triton
***********************
Profiling and debugging
***********************
This section provides an index for further documentation on profiling and
This section provides an index for further documentation on profiling and
debugging tools and their common usage patterns.
See :ref:`AMD Instinct MI300X™ workload optimization <mi300x-profiling-start>`

@@ -92,7 +92,7 @@ involves configuring tensor parallelism, leveraging advanced features, and
ensuring efficient execution. Here's how to optimize vLLM performance:
* Tensor parallelism: Configure the
:ref:`tensor-parallel-size parameter <mi300x-vllm-optimize-tp-gemm>` to distribute
:ref:`tensor-parallel-size parameter <mi300x-vllm-multiple-gpus>` to distribute
tensor computations across multiple GPUs. Adjust parameters such as
``batch-size``, ``input-len``, and ``output-len`` based on your workload.
@@ -152,7 +152,7 @@ address any new bottlenecks that may emerge.
ROCm provides a prebuilt optimized Docker image that has everything required to implement
the tips in this section. It includes ROCm, vLLM, PyTorch, and tuning files in the CSV
format. For more information, see :doc:`/how-to/performance-validation/mi300x/vllm-benchmark`.
format. For more information, see :doc:`../inference/vllm-benchmark`.
.. _mi300x-profiling-tools:
@@ -173,7 +173,7 @@ tools available depending on their specific profiling needs.
For more information, see
:doc:`ROCm Compute Profiler documentation <rocprofiler-compute:index>`.
Refer to :doc:`/how-to/llm-fine-tuning-optimization/profiling-and-debugging`
Refer to :doc:`profiling-and-debugging`
to explore commonly used profiling tools and their usage patterns.
Once performance bottlenecks are identified, you can implement an informed workload
@@ -412,7 +412,7 @@ usage with ROCm.
ROCm provides a prebuilt optimized Docker image for validating the performance
of LLM inference with vLLM on the MI300X accelerator. The Docker image includes
ROCm, vLLM, PyTorch, and tuning files in the CSV format. For more information,
see :doc:`/how-to/performance-validation/mi300x/vllm-benchmark`.
see :doc:`../inference/vllm-benchmark`.
.. _mi300x-vllm-throughput-measurement:
@@ -1304,7 +1304,7 @@ performance (reduce latency) and improve benchmarking stability.
CK provides a rich set of template parameters for generating flexible accelerated
computing kernels for different application scenarios.
See :doc:`/how-to/llm-fine-tuning-optimization/optimizing-with-composable-kernel`
See :doc:`optimizing-with-composable-kernel`
for an overview of Composable Kernel GEMM kernels, information on tunable
parameters, and examples.
@@ -2062,11 +2062,10 @@ collectives.
Multi-node FSDP and RCCL settings
---------------------------------
It's recommended to use high-priority HIP streams with RCCL.
The simplest way to enable this is by using the nightly PyTorch wheels, as the required changes from
`PR #122830 <https://github.com/pytorch/pytorch/pull/122830>`_ were not included in the PyTorch 2.3
release but are available in the nightly builds.
When using PyTorch's FSDP (Fully Sharded Data Parallel) feature, the HIP
streams used by RCCL and the HIP streams used for compute kernels do not
always overlap well. As a workaround, it's recommended to use
high-priority HIP streams with RCCL.
To configure high-priority streams:
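A minimal sketch, assuming the ``TORCH_NCCL_HIGH_PRIORITY`` environment variable used by the training environments described later in this guide is honored by your PyTorch build, and that the script is launched with ``torchrun``:
.. code-block:: python
# Hypothetical minimal script; the variable must be set before the process
# group is created, whether exported in the shell or set here in Python.
import os
import torch.distributed as dist
os.environ["TORCH_NCCL_HIGH_PRIORITY"] = "1"  # force RCCL streams to be high priority
dist.init_process_group("nccl")  # the NCCL backend maps to RCCL on ROCm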

View File

@@ -1,5 +1,5 @@
.. meta::
:description: How to use ROCm for AI
:description: How to deploy your model for AI inference using vLLM and Hugging Face TGI.
:keywords: ROCm, AI, LLM, train, fine-tune, deploy, FSDP, DeepSpeed, LLaMA, tutorial
********************
@@ -119,4 +119,4 @@ TGI walkthrough
vLLM and Hugging Face TGI are robust solutions for anyone looking to deploy LLMs for applications that demand high
performance, low latency, and scalability.
Visit the topics in :doc:`Using ROCm for AI <index>` to learn about other ROCm-aware solutions for AI development.
Visit the topics in :doc:`Using ROCm for AI <../index>` to learn about other ROCm-aware solutions for AI development.

View File

@@ -1,5 +1,5 @@
.. meta::
:description: How to use ROCm for AI
:description: How to run models from Hugging Face on AMD GPUs.
:keywords: ROCm, AI, LLM, Hugging Face, Optimum, Flash Attention, GPTQ, ONNX, tutorial
********************************

View File

@@ -0,0 +1,22 @@
.. meta::
:description: How to use ROCm for AI inference workloads.
:keywords: ROCm, AI, machine learning, LLM, AI inference, NLP, GPUs, usage, tutorial
****************************
Use ROCm for AI inference
****************************
AI inference is the process of deploying a trained machine learning model to make predictions or classifications on new data. This commonly involves running the model on real-time data and making quick decisions based on its predictions.
Understanding the ROCm™ software platform's architecture and capabilities is vital for running AI inference. By leveraging the ROCm platform's high-performance computing and efficient resource management, you can run inference workloads that deliver faster predictions and classifications on real-time data.
Throughout the following topics, this section provides a comprehensive guide to setting up and deploying AI inference on AMD GPUs. This includes instructions on how to install ROCm, how to use Hugging Face Transformers to manage pre-trained models for natural language processing (NLP) tasks, how to validate vLLM on AMD Instinct™ MI300X accelerators, and how to deploy trained models in production environments.
- :doc:`Installing ROCm and machine learning frameworks <install>`
- :doc:`Running models from Hugging Face <hugging-face-models>`
- :doc:`LLM inference frameworks <llm-inference-frameworks>`
- :doc:`Performance validation <vllm-benchmark>`
- :doc:`Deploying your model <deploy-your-model>`

View File

@@ -1,5 +1,5 @@
.. meta::
:description: How to use ROCm for AI
:description: How to install ROCm and popular machine learning frameworks.
:keywords: ROCm, AI, LLM, train, fine-tune, FSDP, DeepSpeed, LLaMA, tutorial
.. _rocm-for-ai-install:
@@ -26,7 +26,7 @@ If you're using a Radeon GPU for graphics-accelerated applications, refer to t
ROCm supports multiple :doc:`installation methods <rocm-install-on-linux:install/install-overview>`:
* :doc:`Using your Linux distribution's package manager <rocm-install-on-linux:install/native-install/index>`
* :doc:`Using your Linux distribution's package manager <rocm-install-on-linux:install/install-methods/package-manager-index>`
* :doc:`Using the AMDGPU installer <rocm-install-on-linux:install/amdgpu-install>`
@@ -59,4 +59,4 @@ images with the framework pre-installed.
* :doc:`JAX for ROCm <rocm-install-on-linux:install/3rd-party/jax-install>`
The sections that follow in :doc:`Training a model <train-a-model>` are geared for a ROCm with PyTorch installation.
The sections that follow in :doc:`Training a model <../training/train-a-model>` are geared for a ROCm with PyTorch installation.

View File

@@ -1,5 +1,5 @@
.. meta::
:description: How to fine-tune LLMs with ROCm
:description: How to implement the LLM inference frameworks with ROCm acceleration.
:keywords: ROCm, LLM, fine-tuning, usage, tutorial, inference, vLLM, TGI, text generation inference
************************
@@ -8,8 +8,8 @@ LLM inference frameworks
This section discusses how to implement `vLLM <https://docs.vllm.ai/en/latest>`_ and `Hugging Face TGI
<https://huggingface.co/docs/text-generation-inference/en/index>`_ using
:doc:`single-accelerator <single-gpu-fine-tuning-and-inference>` and
:doc:`multi-accelerator <multi-gpu-fine-tuning-and-inference>` systems.
:doc:`single-accelerator <../fine-tuning/single-gpu-fine-tuning-and-inference>` and
:doc:`multi-accelerator <../fine-tuning/multi-gpu-fine-tuning-and-inference>` systems.
.. _fine-tuning-llms-vllm:
@@ -68,7 +68,7 @@ Installing vLLM
The following log message displayed in your command line indicates that the server is listening for requests.
.. image:: ../../data/how-to/llm-fine-tuning-optimization/vllm-single-gpu-log.png
.. image:: ../../../data/how-to/llm-fine-tuning-optimization/vllm-single-gpu-log.png
:alt: vLLM API server log message
:align: center
@@ -141,7 +141,7 @@ Installing vLLM
ROCm provides a prebuilt optimized Docker image for validating the performance of LLM inference with vLLM
on the MI300X accelerator. The Docker image includes ROCm, vLLM, PyTorch, and tuning files in CSV
format. For more information, see :doc:`/how-to/performance-validation/mi300x/vllm-benchmark`.
format. For more information, see :doc:`vllm-benchmark`.
.. _fine-tuning-llms-tgi:

View File

@@ -0,0 +1,478 @@
.. meta::
:description: Learn how to validate LLM inference performance on MI300X accelerators using AMD MAD and the
ROCm vLLM Docker image.
:keywords: model, MAD, automation, dashboarding, validate
***********************************************************
LLM inference performance validation on AMD Instinct MI300X
***********************************************************
.. _vllm-benchmark-unified-docker:
The `ROCm vLLM Docker <https://hub.docker.com/r/rocm/vllm/tags>`_ image offers
a prebuilt, optimized environment for validating large language model (LLM)
inference performance on the AMD Instinct™ MI300X accelerator. This ROCm vLLM
Docker image integrates vLLM and PyTorch tailored specifically for the MI300X
accelerator and includes the following components:
* `ROCm 6.3.1 <https://github.com/ROCm/ROCm>`_
* `vLLM 0.6.6 <https://docs.vllm.ai/en/latest>`_
* `PyTorch 2.7.0 (2.7.0a0+git3a58512) <https://github.com/pytorch/pytorch>`_
With this Docker image, you can quickly validate the expected inference
performance numbers for the MI300X accelerator. This topic also provides tips on
optimizing performance with popular AI models. For more information, see the lists of
:ref:`available models for MAD-integrated benchmarking <vllm-benchmark-mad-models>`
and :ref:`standalone benchmarking <vllm-benchmark-standalone-options>`.
.. _vllm-benchmark-vllm:
.. note::
vLLM is a toolkit and library for LLM inference and serving. AMD implements
high-performance custom kernels and modules in vLLM to enhance performance.
See :ref:`fine-tuning-llms-vllm` and :ref:`mi300x-vllm-optimization` for
more information.
Getting started
===============
Use the following procedures to reproduce the benchmark results on an
MI300X accelerator with the prebuilt vLLM Docker image.
.. _vllm-benchmark-get-started:
1. Disable NUMA auto-balancing.
To optimize performance, disable automatic NUMA balancing. Otherwise, the GPU
might hang until the periodic balancing is finalized. For more information,
see :ref:`AMD Instinct MI300X system optimization <mi300x-disable-numa>`.
.. code-block:: shell
# disable automatic NUMA balancing
sudo sh -c 'echo 0 > /proc/sys/kernel/numa_balancing'
# check if NUMA balancing is disabled (returns 0 if disabled)
cat /proc/sys/kernel/numa_balancing
0
2. Download the :ref:`ROCm vLLM Docker image <vllm-benchmark-unified-docker>`.
Use the following command to pull the Docker image from Docker Hub.
.. code-block:: shell
docker pull rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6
Once the setup is complete, choose between two options to reproduce the
benchmark results:
- :ref:`MAD-integrated benchmarking <vllm-benchmark-mad>`
- :ref:`Standalone benchmarking <vllm-benchmark-standalone>`
.. _vllm-benchmark-mad:
MAD-integrated benchmarking
===========================
Clone the ROCm Model Automation and Dashboarding (`<https://github.com/ROCm/MAD>`__) repository to a local
directory and install the required packages on the host machine.
.. code-block:: shell
git clone https://github.com/ROCm/MAD
cd MAD
pip install -r requirements.txt
Use this command to run a performance benchmark test of the Llama 3.1 8B model
on one GPU with the ``float16`` data type on the host machine.
.. code-block:: shell
export MAD_SECRETS_HFTOKEN="your personal Hugging Face token to access gated models"
python3 tools/run_models.py --tags pyt_vllm_llama-3.1-8b --keep-model-dir --live-output --timeout 28800
ROCm MAD launches a Docker container with the name
``container_ci-pyt_vllm_llama-3.1-8b``. The latency and throughput reports of the
model are collected in the following path: ``~/MAD/reports_float16/``.
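As a hedged sketch, you can list the collected reports with standard-library Python; the exact CSV file names depend on the model tag and the run:
.. code-block:: python
# Hypothetical inspection helper, run from the MAD directory.
import csv
import glob
for path in glob.glob("reports_float16/**/*.csv", recursive=True):
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    print(path, rows[:2])  # header row plus the first data row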
Although the following models are preconfigured to collect latency and
throughput performance data, you can also change the benchmarking parameters.
Refer to the :ref:`Standalone benchmarking <vllm-benchmark-standalone>` section.
.. _vllm-benchmark-mad-models:
Available models
----------------
.. list-table::
:header-rows: 1
:widths: 2, 3
* - Model name
- Tag
* - `Llama 3.1 8B <https://huggingface.co/meta-llama/Llama-3.1-8B>`_
- ``pyt_vllm_llama-3.1-8b``
* - `Llama 3.1 70B <https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct>`_
- ``pyt_vllm_llama-3.1-70b``
* - `Llama 3.1 405B <https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct>`_
- ``pyt_vllm_llama-3.1-405b``
* - `Llama 3.2 11B Vision <https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct>`_
- ``pyt_vllm_llama-3.2-11b-vision-instruct``
* - `Llama 2 7B <https://huggingface.co/meta-llama/Llama-2-7b-chat-hf>`_
- ``pyt_vllm_llama-2-7b``
* - `Llama 2 70B <https://huggingface.co/meta-llama/Llama-2-70b-chat-hf>`_
- ``pyt_vllm_llama-2-70b``
* - `Mixtral MoE 8x7B <https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1>`_
- ``pyt_vllm_mixtral-8x7b``
* - `Mixtral MoE 8x22B <https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1>`_
- ``pyt_vllm_mixtral-8x22b``
* - `Mistral 7B <https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3>`_
- ``pyt_vllm_mistral-7b``
* - `Qwen2 7B <https://huggingface.co/Qwen/Qwen2-7B-Instruct>`_
- ``pyt_vllm_qwen2-7b``
* - `Qwen2 72B <https://huggingface.co/Qwen/Qwen2-72B-Instruct>`_
- ``pyt_vllm_qwen2-72b``
* - `JAIS 13B <https://huggingface.co/core42/jais-13b-chat>`_
- ``pyt_vllm_jais-13b``
* - `JAIS 30B <https://huggingface.co/core42/jais-30b-chat-v3>`_
- ``pyt_vllm_jais-30b``
* - `DBRX Instruct <https://huggingface.co/databricks/dbrx-instruct>`_
- ``pyt_vllm_dbrx-instruct``
* - `Gemma 2 27B <https://huggingface.co/google/gemma-2-27b>`_
- ``pyt_vllm_gemma-2-27b``
* - `C4AI Command R+ 08-2024 <https://huggingface.co/CohereForAI/c4ai-command-r-plus-08-2024>`_
- ``pyt_vllm_c4ai-command-r-plus-08-2024``
* - `DeepSeek MoE 16B <https://huggingface.co/deepseek-ai/deepseek-moe-16b-chat>`_
- ``pyt_vllm_deepseek-moe-16b-chat``
* - `Llama 3.1 70B FP8 <https://huggingface.co/amd/Llama-3.1-70B-Instruct-FP8-KV>`_
- ``pyt_vllm_llama-3.1-70b_fp8``
* - `Llama 3.1 405B FP8 <https://huggingface.co/amd/Llama-3.1-405B-Instruct-FP8-KV>`_
- ``pyt_vllm_llama-3.1-405b_fp8``
* - `Mixtral MoE 8x7B FP8 <https://huggingface.co/amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV>`_
- ``pyt_vllm_mixtral-8x7b_fp8``
* - `Mixtral MoE 8x22B FP8 <https://huggingface.co/amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV>`_
- ``pyt_vllm_mixtral-8x22b_fp8``
* - `Mistral 7B FP8 <https://huggingface.co/amd/Mistral-7B-v0.1-FP8-KV>`_
- ``pyt_vllm_mistral-7b_fp8``
* - `DBRX Instruct FP8 <https://huggingface.co/amd/dbrx-instruct-FP8-KV>`_
- ``pyt_vllm_dbrx_fp8``
* - `C4AI Command R+ 08-2024 FP8 <https://huggingface.co/amd/c4ai-command-r-plus-FP8-KV>`_
- ``pyt_vllm_command-r-plus_fp8``
.. _vllm-benchmark-standalone:
Standalone benchmarking
=======================
You can run the vLLM benchmark tool independently by starting the
:ref:`Docker container <vllm-benchmark-get-started>` as shown in the following
snippet.
.. code-block:: shell
docker pull rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6
docker run -it \
    --device=/dev/kfd \
    --device=/dev/dri \
    --group-add video \
    --shm-size 16G \
    --security-opt seccomp=unconfined \
    --security-opt apparmor=unconfined \
    --cap-add=SYS_PTRACE \
    -v $(pwd):/workspace \
    --env HUGGINGFACE_HUB_CACHE=/workspace \
    --name vllm_v0.6.6 \
    rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6
In the Docker container, clone the ROCm MAD repository and navigate to the
benchmark scripts directory at ``~/MAD/scripts/vllm``.
.. code-block::
git clone https://github.com/ROCm/MAD
cd MAD/scripts/vllm
Command
-------
To start the benchmark, use the following command with the appropriate options.
See :ref:`Options <vllm-benchmark-standalone-options>` for the list of
options and their descriptions.
.. code-block:: shell
./vllm_benchmark_report.sh -s $test_option -m $model_repo -g $num_gpu -d $datatype
See the :ref:`examples <vllm-benchmark-run-benchmark>` for more information.
.. note::
The input sequence length, output sequence length, and tensor parallel (TP) size are
already configured. You don't need to specify them with this script.
.. note::
If you encounter the following error, set the ``HF_TOKEN`` environment variable
to a Hugging Face token that has access to the gated models.
.. code-block:: shell
OSError: You are trying to access a gated repo.
# pass your HF_TOKEN
export HF_TOKEN=$your_personal_hf_token
.. _vllm-benchmark-standalone-options:
Options and available models
----------------------------
.. list-table::
:header-rows: 1
:align: center
* - Name
- Options
- Description
* - ``$test_option``
- latency
- Measure decoding token latency
* -
- throughput
- Measure token generation throughput
* -
- all
- Measure both throughput and latency
* - ``$model_repo``
- ``meta-llama/Llama-3.1-8B-Instruct``
- `Llama 3.1 8B <https://huggingface.co/meta-llama/Llama-3.1-8B>`_
* - (``float16``)
- ``meta-llama/Llama-3.1-70B-Instruct``
- `Llama 3.1 70B <https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct>`_
* -
- ``meta-llama/Llama-3.1-405B-Instruct``
- `Llama 3.1 405B <https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct>`_
* -
- ``meta-llama/Llama-3.2-11B-Vision-Instruct``
- `Llama 3.2 11B Vision <https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct>`_
* -
- ``meta-llama/Llama-2-7b-chat-hf``
- `Llama 2 7B <https://huggingface.co/meta-llama/Llama-2-7b-chat-hf>`_
* -
- ``meta-llama/Llama-2-70b-chat-hf``
- `Llama 2 70B <https://huggingface.co/meta-llama/Llama-2-70b-chat-hf>`_
* -
- ``mistralai/Mixtral-8x7B-Instruct-v0.1``
- `Mixtral MoE 8x7B <https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1>`_
* -
- ``mistralai/Mixtral-8x22B-Instruct-v0.1``
- `Mixtral MoE 8x22B <https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1>`_
* -
- ``mistralai/Mistral-7B-Instruct-v0.3``
- `Mistral 7B <https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3>`_
* -
- ``Qwen/Qwen2-7B-Instruct``
- `Qwen2 7B <https://huggingface.co/Qwen/Qwen2-7B-Instruct>`_
* -
- ``Qwen/Qwen2-72B-Instruct``
- `Qwen2 72B <https://huggingface.co/Qwen/Qwen2-72B-Instruct>`_
* -
- ``core42/jais-13b-chat``
- `JAIS 13B <https://huggingface.co/core42/jais-13b-chat>`_
* -
- ``core42/jais-30b-chat-v3``
- `JAIS 30B <https://huggingface.co/core42/jais-30b-chat-v3>`_
* -
- ``databricks/dbrx-instruct``
- `DBRX Instruct <https://huggingface.co/databricks/dbrx-instruct>`_
* -
- ``google/gemma-2-27b``
- `Gemma 2 27B <https://huggingface.co/google/gemma-2-27b>`_
* -
- ``CohereForAI/c4ai-command-r-plus-08-2024``
- `C4AI Command R+ 08-2024 <https://huggingface.co/CohereForAI/c4ai-command-r-plus-08-2024>`_
* -
- ``deepseek-ai/deepseek-moe-16b-chat``
- `DeepSeek MoE 16B <https://huggingface.co/deepseek-ai/deepseek-moe-16b-chat>`_
* - ``$model_repo``
- ``amd/Llama-3.1-70B-Instruct-FP8-KV``
- `Llama 3.1 70B FP8 <https://huggingface.co/amd/Llama-3.1-70B-Instruct-FP8-KV>`_
* - (``float8``)
- ``amd/Llama-3.1-405B-Instruct-FP8-KV``
- `Llama 3.1 405B FP8 <https://huggingface.co/amd/Llama-3.1-405B-Instruct-FP8-KV>`_
* -
- ``amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV``
- `Mixtral MoE 8x7B FP8 <https://huggingface.co/amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV>`_
* -
- ``amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV``
- `Mixtral MoE 8x22B FP8 <https://huggingface.co/amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV>`_
* -
- ``amd/Mistral-7B-v0.1-FP8-KV``
- `Mistral 7B FP8 <https://huggingface.co/amd/Mistral-7B-v0.1-FP8-KV>`_
* -
- ``amd/dbrx-instruct-FP8-KV``
- `DBRX Instruct FP8 <https://huggingface.co/amd/dbrx-instruct-FP8-KV>`_
* -
- ``amd/c4ai-command-r-plus-FP8-KV``
- `C4AI Command R+ 08-2024 FP8 <https://huggingface.co/amd/c4ai-command-r-plus-FP8-KV>`_
* - ``$num_gpu``
- 1 or 8
- Number of GPUs
* - ``$datatype``
- ``float16`` or ``float8``
- Data type
.. _vllm-benchmark-run-benchmark:
Running the benchmark on the MI300X accelerator
-----------------------------------------------
Here are some examples of running the benchmark with various options.
See :ref:`Options <vllm-benchmark-standalone-options>` for the list of
options and their descriptions.
Example 1: latency benchmark
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use these commands to benchmark the latency of the Llama 3.1 70B model on eight GPUs with the ``float16`` and ``float8`` data types.
.. code-block::
./vllm_benchmark_report.sh -s latency -m meta-llama/Llama-3.1-70B-Instruct -g 8 -d float16
./vllm_benchmark_report.sh -s latency -m amd/Llama-3.1-70B-Instruct-FP8-KV -g 8 -d float8
Find the latency reports at:
- ``./reports_float16/summary/Llama-3.1-70B-Instruct_latency_report.csv``
- ``./reports_float8/summary/Llama-3.1-70B-Instruct-FP8-KV_latency_report.csv``
Example 2: throughput benchmark
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use these commands to benchmark the throughput of the Llama 3.1 70B model on eight GPUs with the ``float16`` and ``float8`` data types.
.. code-block:: shell
./vllm_benchmark_report.sh -s throughput -m meta-llama/Llama-3.1-70B-Instruct -g 8 -d float16
./vllm_benchmark_report.sh -s throughput -m amd/Llama-3.1-70B-Instruct-FP8-KV -g 8 -d float8
Find the throughput reports at:
- ``./reports_float16/summary/Llama-3.1-70B-Instruct_throughput_report.csv``
- ``./reports_float8/summary/Llama-3.1-70B-Instruct-FP8-KV_throughput_report.csv``
.. raw:: html
<style>
mjx-container[jax="CHTML"][display="true"] {
text-align: left;
margin: 0;
}
</style>
.. note::
Throughput is calculated as:
- .. math:: throughput\_tot = requests \times (\mathsf{\text{input lengths}} + \mathsf{\text{output lengths}}) / elapsed\_time
- .. math:: throughput\_gen = requests \times \mathsf{\text{output lengths}} / elapsed\_time
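As a worked example with illustrative numbers (not measured results), 1000 requests of 128 input and 128 output tokens completed in 50 seconds give:
.. code-block:: python
# Illustrative values only; substitute numbers from your own benchmark run.
requests, input_len, output_len, elapsed_s = 1000, 128, 128, 50.0
throughput_tot = requests * (input_len + output_len) / elapsed_s  # 5120 tokens/s
throughput_gen = requests * output_len / elapsed_s                # 2560 tokens/s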
Further reading
===============
- For application performance optimization strategies for HPC and AI workloads,
including inference with vLLM, see :doc:`../inference-optimization/workload`.
- To learn more about the options for latency and throughput benchmark scripts,
see `<https://github.com/ROCm/vllm/tree/main/benchmarks>`_.
- To learn more about system settings and management practices to configure your system for
MI300X accelerators, see :doc:`../../system-optimization/mi300x`.
- To learn how to run LLM models from Hugging Face or your own model, see
:doc:`Running models from Hugging Face <hugging-face-models>`.
- To learn how to optimize inference on LLMs, see
:doc:`Inference optimization <../inference-optimization/index>`.
- To learn how to fine-tune LLMs, see
:doc:`Fine-tuning LLMs <../fine-tuning/index>`.
Previous versions
=================
This table lists previous versions of the ROCm vLLM Docker image for inference
performance validation. For detailed information about available models for
benchmarking, see the version-specific documentation.
.. list-table::
:header-rows: 1
:stub-columns: 1
* - ROCm version
- vLLM version
- PyTorch version
- Resources
* - 6.2.1
- 0.6.4
- 2.5.0
-
* `Documentation <https://rocm.docs.amd.com/en/docs-6.3.0/how-to/performance-validation/mi300x/vllm-benchmark.html>`_
* `Docker Hub <https://hub.docker.com/layers/rocm/vllm/rocm6.2_mi300_ubuntu20.04_py3.9_vllm_0.6.4/images/sha256-ccbb74cc9e7adecb8f7bdab9555f7ac6fc73adb580836c2a35ca96ff471890d8>`_
* - 6.2.0
- 0.4.3
- 2.4.0
-
* `Documentation <https://rocm.docs.amd.com/en/docs-6.2.0/how-to/performance-validation/mi300x/vllm-benchmark.html>`_
* `Docker Hub <https://hub.docker.com/layers/rocm/vllm/rocm6.2_mi300_ubuntu22.04_py3.9_vllm_7c5fd50/images/sha256-9e4dd4788a794c3d346d7d0ba452ae5e92d39b8dfac438b2af8efdc7f15d22c0>`_

View File

@@ -0,0 +1,21 @@
.. meta::
:description: How to use ROCm for training models
:keywords: ROCm, LLM, training, GPUs, training model, scaling model, usage, tutorial
=======================
Use ROCm for training
=======================
Training models is the process of teaching a computer program to recognize patterns in data. This involves providing the computer with large amounts of labeled data and allowing it to learn from that data, adjusting the model's parameters.
The process of training models is computationally intensive, requiring specialized hardware like GPUs to accelerate computations and reduce training time.
Training models on AMD GPUs with the ROCm™ software platform lets you leverage their powerful parallel processing capabilities and efficient compute resource management, significantly improving training time and overall performance in machine learning and deep learning tasks.
The ROCm software platform makes it easier to train models on AMD GPUs while maintaining compatibility with existing code and tools. The platform also provides features like multi-GPU support, allowing for scaling and parallelization of model training across multiple GPUs to enhance performance.
In this guide, you'll learn about:
- :doc:`Training a model <train-a-model>`
- :doc:`Scale model training <scale-model-training>`

View File

@@ -1,27 +1,22 @@
.. meta::
:description: How to use ROCm for AI
:keywords: ROCm, AI, LLM, train, fine-tune, FSDP, DeepSpeed, LLaMA, tutorial
:description: How to scale and accelerate model training
:keywords: ROCm, AI, LLM, train, fine-tune, deploy, FSDP, DeepSpeed, LLaMA, tutorial
****************
Training a model
****************
**********************
Scaling model training
**********************
The following is a brief overview of popular component paths per AI development use-case, such as training, LLMs,
and inferencing.
Accelerating model training
===========================
To train a large model like GPT2 or Llama 2 70B, a single accelerator or GPU cannot store all the model parameters
required for training. What if you could convert the single-GPU training code to run on multiple accelerators or GPUs?
PyTorch offers distributed training solutions to facilitate this.
Large-scale models like OpenAI GPT-2 or Meta Llama 2 70B present a fundamental
challenge: no single GPU or accelerator can store and process all of the model's
parameters during training. PyTorch answers this computational constraint through
its distributed training frameworks.
.. _rocm-for-ai-pytorch-distributed:
PyTorch distributed
-------------------
===================
As of PyTorch 1.6.0, features in ``torch.distributed`` are categorized into three main components:
Features in ``torch.distributed`` are categorized into three main components:
- `Distributed data-parallel training
<https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`_ (DDP)
@@ -30,15 +25,15 @@ As of PyTorch 1.6.0, features in ``torch.distributed`` are categorized into thre
- `Collective communication <https://pytorch.org/docs/stable/distributed.html>`_
In this guide, the focus is on the distributed data-parallelism strategy as it's the most popular. To get started with DDP,
let's first understand how to coordinate the model and its training data across multiple accelerators or GPUs.
In this topic, the focus is on the distributed data-parallelism strategy as it's the most popular. To get started with DDP,
you need to first understand how to coordinate the model and its training data across multiple accelerators or GPUs.
The DDP workflow on multiple accelerators or GPUs is as follows:
#. Split the current global training batch into small local batches on each GPU. For instance, if you have 8 GPUs and
the global batch is set at 32 samples, each of the 8 GPUs will have a local batch size of 4 samples.
#. Copy the model to every device so each device can process its local batches independently.
#. Copy the model to every device so each can process its local batches independently.
#. Run a forward pass, then a backward pass, and output the gradient of the weights with respect to the loss of the
model for that local batch. This happens in parallel on multiple devices.
@@ -46,7 +41,7 @@ The DDP workflow on multiple accelerators or GPUs is as follows:
#. Synchronize the local gradients computed by each device and combine them to update the model weights. The updated
weights are then redistributed to each device.
In DDP training, each process or worker owns a replica of the model and processes a batch of data, then the reducer uses
In DDP training, each process or worker owns a replica of the model and processes a batch of data, and then the reducer uses
``allreduce`` to sum up gradients over different workers.
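A minimal sketch of this workflow, assuming a ROCm build of PyTorch and a launch via ``torchrun --nproc_per_node=8``; the toy model and batch sizes are illustrative placeholders:
.. code-block:: python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
dist.init_process_group("nccl")  # the NCCL backend maps to RCCL on ROCm
local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
torch.cuda.set_device(local_rank)
model = torch.nn.Linear(1024, 1024).cuda()
ddp_model = DDP(model, device_ids=[local_rank])  # each rank holds a full replica
local_batch = torch.randn(4, 1024, device="cuda")  # global batch 32 split over 8 GPUs
loss = ddp_model(local_batch).sum()
loss.backward()  # local gradients are averaged across ranks via allreduce
torch.optim.SGD(ddp_model.parameters(), lr=0.01).step()
dist.destroy_process_group()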
See the following developer blogs for more in-depth explanations and examples.
@@ -61,19 +56,19 @@ See the following developer blogs for more in-depth explanations and examples.
PyTorch FSDP
------------
As noted in :ref:`PyTorch distributed <rocm-for-ai-pytorch-distributed>`, in DDP model weights and optimizer states
As noted in :ref:`PyTorch distributed <rocm-for-ai-pytorch-distributed>`, in DDP, model weights and optimizer states
are evenly replicated across all workers. Fully Sharded Data Parallel (FSDP) is a type of data parallelism that shards
model parameters, optimizer states, and gradients across DDP ranks.
When training with FSDP, the GPU memory footprint is smaller than when training with DDP across all workers. This makes
the training of some very large models feasible by allowing larger models or batch sizes to fit on-device. However, this
training some very large models feasible by allowing larger models or batch sizes to fit on-device. However, this
comes with the cost of increased communication volume. The communication overhead is reduced by internal optimizations
like overlapping communication and computation.
For a high-level overview of how FSDP works, review `Getting started with Fully Sharded Data Parallel
<https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html#how-fsdp-works>`_.
For detailed training steps, refer to the `PyTorch FSDP examples
For detailed training steps, see `PyTorch FSDP examples
<https://github.com/pytorch/examples/tree/main/distributed/FSDP>`_.
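A minimal sketch under the same assumptions as the DDP example above (a ROCm build of PyTorch, launched via ``torchrun``); the toy model is an illustrative placeholder:
.. code-block:: python
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
model = torch.nn.Linear(4096, 4096).cuda()
model = FSDP(model)  # shards parameters, gradients, and optimizer state across ranks
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(torch.randn(8, 4096, device="cuda")).sum()
loss.backward()
optimizer.step()
dist.destroy_process_group()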
.. _rocm-for-ai-deepspeed:
@@ -85,7 +80,7 @@ DeepSpeed
efficient, and easy to use. Innovations such as ZeRO, 3D-Parallelism, DeepSpeed-MoE, ZeRO-Infinity, and so on fall under
the training pillar.
See `Pre-training a large language model with Megatron-DeepSpeed on multiple AMD GPUs — ROCm Blogs
See `Pre-training a large language model with Megatron-DeepSpeed on multiple AMD GPUs
<https://rocm.blogs.amd.com/artificial-intelligence/megatron-deepspeed-pretrain/README.html>`_ for a detailed example of
training with DeepSpeed on an AMD accelerator or GPU.
@@ -94,7 +89,7 @@ training with DeepSpeed on an AMD accelerator or GPU.
Automatic mixed precision (AMP)
-------------------------------
As models increase in size, the time and memory needed to train them; that is, their cost also increases. Any measure we
As models increase in size, so do the time and memory needed to train them; their cost also increases. Any measure we
can take to reduce training time and memory usage through `automatic mixed precision
<https://pytorch.org/docs/stable/amp.html>`_ (AMP) is highly beneficial for most use cases.
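A minimal AMP sketch, assuming a ROCm build of PyTorch (the ``cuda`` device APIs map to HIP); the model and shapes are illustrative placeholders:
.. code-block:: python
import torch
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()
for _ in range(3):
    optimizer.zero_grad()
    # run eligible ops in FP16 while keeping numerically sensitive ops in FP32
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(torch.randn(8, 1024, device="cuda")).sum()
    scaler.scale(loss).backward()  # scale the loss to avoid FP16 gradient underflow
    scaler.step(optimizer)
    scaler.update()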
@@ -110,31 +105,31 @@ Fine-tuning your model
ROCm supports multiple techniques for :ref:`optimizing fine-tuning <fine-tuning-llms-concept-optimizations>`, for
example, LoRA, QLoRA, PEFT, and FSDP.
Learn more about challenges and solutions for model fine-tuning in :doc:`../llm-fine-tuning-optimization/index`.
Learn more about challenges and solutions for model fine-tuning in :doc:`../fine-tuning/index`.
The following developer blogs showcase examples of how to fine-tune a model on an AMD accelerator or GPU.
The following developer blogs showcase examples of fine-tuning a model on an AMD accelerator or GPU.
* Fine-tuning Llama2 with LoRA
* `Fine-tune Llama 2 with LoRA: Customizing a large language model for question-answering — ROCm Blogs
* `Fine-tune Llama 2 with LoRA: Customizing a large language model for question-answering
<https://rocm.blogs.amd.com/artificial-intelligence/llama2-lora/README.html>`_
* Fine-tuning Llama2 with QLoRA
* `Enhancing LLM accessibility: A deep dive into QLoRA through fine-tuning Llama 2 on a single AMD GPU — ROCm Blogs
* `Enhancing LLM accessibility: A deep dive into QLoRA through fine-tuning Llama 2 on a single AMD GPU
<https://rocm.blogs.amd.com/artificial-intelligence/llama2-Qlora/README.html>`_
* Fine-tuning a BERT-based LLM for a text classification task using JAX
* `LLM distributed supervised fine-tuning with JAX — ROCm Blogs
* `LLM distributed supervised fine-tuning with JAX
<https://rocm.blogs.amd.com/artificial-intelligence/distributed-sft-jax/README.html>`_
* Fine-tuning StarCoder using PEFT
* `Instruction fine-tuning of StarCoder with PEFT on multiple AMD GPUs — ROCm Blogs
* `Instruction fine-tuning of StarCoder with PEFT on multiple AMD GPUs
<https://rocm.blogs.amd.com/artificial-intelligence/starcoder-fine-tune/README.html>`_
* Recipes for fine-tuning Llama2 and 3 with ``llama-recipes``
* `meta-llama/llama-recipes: Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover
single/multi-node GPUs <https://github.com/meta-llama/llama-recipes/tree/main/recipes/quickstart/finetuning>`_
single/multi-node GPUs <https://github.com/meta-llama/llama-cookbook/tree/main/getting-started/finetuning>`_

View File

@@ -0,0 +1,503 @@
.. meta::
:description: How to train a model using ROCm Megatron-LM
:keywords: ROCm, AI, LLM, train, Megatron-LM, megatron, Llama, tutorial, docker, torch
**************************************
Training a model with ROCm Megatron-LM
**************************************
.. _amd-megatron-lm:
The ROCm Megatron-LM framework is a specialized fork of Megatron-LM, designed to
enable efficient training of large-scale language models on AMD GPUs. By leveraging AMD Instinct™ MI300X
accelerators, AMD Megatron-LM delivers enhanced scalability, performance, and resource utilization for AI
workloads. It is purpose-built to :ref:`support models <amd-megatron-lm-model-support>`
like Meta's Llama 2, Llama 3, and Llama 3.1, enabling developers to train next-generation AI models with greater
efficiency. See the GitHub repository at `<https://github.com/ROCm/Megatron-LM>`__.
For ease of use, AMD provides a ready-to-use Docker image for MI300X accelerators containing essential
components, including PyTorch, PyTorch Lightning, ROCm libraries, and Megatron-LM utilities. It contains the
following software to accelerate training workloads:
+--------------------------+--------------------------------+
| Software component | Version |
+==========================+================================+
| ROCm | 6.1 |
+--------------------------+--------------------------------+
| PyTorch | 2.4.0 |
+--------------------------+--------------------------------+
| PyTorch Lightning | 2.4.0 |
+--------------------------+--------------------------------+
| Megatron Core | 0.9.0 |
+--------------------------+--------------------------------+
| Transformer Engine | 1.5.0 |
+--------------------------+--------------------------------+
| Flash Attention | v2.6 |
+--------------------------+--------------------------------+
| Transformers | 4.44.0 |
+--------------------------+--------------------------------+
Supported features and models
=============================
Megatron-LM provides the following key features to train large language models efficiently:
- Transformer Engine (TE)
- APEX
- GEMM tuning
- Torch.compile
- 3D parallelism: TP + SP + CP
- Distributed optimizer
- Flash Attention (FA) 2
- Fused kernels
- Pre-training
.. _amd-megatron-lm-model-support:
The following models are pre-optimized for performance on the AMD Instinct MI300X accelerator.
* Llama 2 7B
* Llama 2 70B
* Llama 3 8B
* Llama 3 70B
* Llama 3.1 8B
* Llama 3.1 70B
Prerequisite system validation steps
====================================
Complete the following system validation and optimization steps to set up your system before starting training.
Disable NUMA auto-balancing
---------------------------
Generally, application performance can benefit from disabling NUMA auto-balancing. However,
it might be detrimental to performance with certain types of workloads.
Run the command ``cat /proc/sys/kernel/numa_balancing`` to check your current NUMA (Non-Uniform
Memory Access) settings. Output ``0`` indicates this setting is disabled. If there is no output or
the output is ``1``, run the following command to disable NUMA auto-balancing.
.. code-block:: shell
sudo sh -c 'echo 0 > /proc/sys/kernel/numa_balancing'
See :ref:`mi300x-disable-numa` for more information.
Hardware verification with ROCm
-------------------------------
Use the command ``rocm-smi --setperfdeterminism 1900`` to set the max clock speed up to 1900 MHz
instead of the default 2100 MHz. This can reduce the chance of a PCC event lowering the attainable
GPU clocks. This setting will not be required for new IFWI releases with the production PRC feature.
You can restore this setting to its default value with the ``rocm-smi -r`` command.
Run the command:
.. code-block:: shell
rocm-smi --setperfdeterminism 1900
See :ref:`mi300x-hardware-verification-with-rocm` for more information.
RCCL Bandwidth Test
-------------------
ROCm Collective Communications Library (RCCL) is a standalone library of standard collective communication
routines for GPUs. See the :doc:`RCCL documentation <rccl:index>` for more information. Before starting
pre-training, running a RCCL bandwidth test helps ensure that the multi-GPU or multi-node setup is optimized
for efficient distributed training.
Running the RCCL bandwidth test helps verify that:
- The GPUs can communicate across nodes or within a single node.
- The interconnect (such as InfiniBand, Ethernet, or Infinity Fabric) is functioning as expected and
provides adequate bandwidth for communication.
- There are no hardware setup or cabling issues that could affect communication between GPUs.
Tuning and optimizing hyperparameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In distributed training, specific hyperparameters related to distributed communication can be tuned based on
the results of the RCCL bandwidth test. These variables are already set in the Docker image:
.. code-block:: shell
# force all RCCL streams to be high priority
export TORCH_NCCL_HIGH_PRIORITY=1
# specify which RDMA interfaces to use for communication
export NCCL_IB_HCA=rdma0,rdma1,rdma2,rdma3,rdma4,rdma5,rdma6,rdma7
# define the Global ID index used in RoCE mode
export NCCL_IB_GID_INDEX=3
# avoid data corruption/mismatch issue that existed in past releases
export RCCL_MSCCL_ENABLE=0
Running the RCCL Bandwidth Test
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It's recommended that you run the RCCL bandwidth test before launching training to ensure system
performance is sufficient. The RCCL tests are not included in the AMD Megatron-LM Docker
image; follow the instructions in `<https://github.com/ROCm/rccl-tests>`__ to get started.
See :ref:`mi300x-rccl` for more information.
Run on 8 GPUs (``-g 8``), scanning from 8 bytes to 10 GB:
.. code-block:: shell
./build/all_reduce_perf -b 8 -e 10G -f 2 -g 8
.. image:: ../../../data/how-to/rocm-for-ai/rccl-tests-8-gpu.png
:width: 800
Using one MPI process per GPU and ``-g 1`` for performance-oriented runs on both single-node and multi-node is
recommended. So, a run on 8 GPUs looks something like:
.. code-block:: shell
mpirun -np 8 --bind-to numa ./build/all_reduce_perf -b 8 -e 10G -f 2 -g 1
.. image:: ../../../data/how-to/rocm-for-ai/rccl-tests-1-mpi-process-per-gpu.png
:width: 800
Running with one MPI process per GPU ensures a one-to-one mapping for CPUs and GPUs, which can be beneficial
for smaller message sizes. This better represents the real-world use of RCCL in deep learning frameworks like
PyTorch and TensorFlow.
Use the following script to run the RCCL test for four MI300X GPU nodes. Modify paths and node addresses as needed.
.. code-block::
/home/$USER/ompi_for_gpu/ompi/bin/mpirun -np 32 -H tw022:8,tw024:8,tw010:8,tw015:8 \
--mca pml ucx \
--mca btl ^openib \
-x NCCL_SOCKET_IFNAME=ens50f0np0 \
-x NCCL_IB_HCA=rdma0:1,rdma1:1,rdma2:1,rdma3:1,rdma4:1,rdma5:1,rdma6:1,rdma7:1 \
-x NCCL_IB_GID_INDEX=3 \
-x NCCL_MIN_NCHANNELS=40 \
-x NCCL_DEBUG=version \
$HOME/rccl-tests/build/all_reduce_perf -b 8 -e 8g -f 2 -g 1
.. image:: ../../../data/how-to/rocm-for-ai/rccl-tests-4-mi300x-gpu-nodes.png
:width: 800
.. _mi300x-amd-megatron-lm-training:
Start training on MI300X accelerators
=====================================
The pre-built ROCm Megatron-LM environment allows users to quickly validate system performance, conduct
training benchmarks, and achieve superior performance for models like Llama 2 and Llama 3.1.
Use the following instructions to set up the environment, configure the script to train models, and
reproduce the benchmark results on the MI300X accelerators with the AMD Megatron-LM Docker
image.
.. _amd-megatron-lm-requirements:
Download the Docker image and required packages
-----------------------------------------------
1. Use the following command to pull the Docker image from Docker Hub.
.. code-block:: shell
docker pull rocm/megatron-lm:24.12-dev
2. Launch the Docker container.
.. code-block:: shell
docker run -it \
    --device /dev/dri \
    --device /dev/kfd \
    --network host \
    --ipc host \
    --group-add video \
    --cap-add SYS_PTRACE \
    --security-opt seccomp=unconfined \
    --privileged \
    -v $CACHE_DIR:/root/.cache \
    --name megatron-dev-env \
    rocm/megatron-lm:24.12-dev /bin/bash
3. Clone the ROCm Megatron-LM repository to a local directory and install the required packages on the host machine.
.. code-block:: shell
git clone https://github.com/ROCm/Megatron-LM
cd Megatron-LM
.. note::
This release is validated with ``ROCm/Megatron-LM`` commit `bb93ccb <https://github.com/ROCm/Megatron-LM/tree/bb93ccbfeae6363c67b361a97a27c74ab86e7e92>`_.
Checking out this specific commit is recommended for a stable and reproducible environment.
.. code-block:: shell
git checkout bb93ccbfeae6363c67b361a97a27c74ab86e7e92
Prepare training datasets
-------------------------
If you already have the preprocessed data, you can skip this section.
Use the following command to preprocess datasets. GPT data is used as an example. You can adjust options such as the merge
table, the end-of-document token, sentence splitting, and the tokenizer type.
.. code-block:: shell
python tools/preprocess_data.py \
--input my-corpus.json \
--output-prefix my-gpt2 \
--vocab-file gpt2-vocab.json \
--tokenizer-type GPT2BPETokenizer \
--merge-file gpt2-merges.txt \
--append-eod
In this case, the automatically generated output files are named ``my-gpt2_text_document.bin`` and
``my-gpt2_text_document.idx``.
.. image:: ../../../data/how-to/rocm-for-ai/prep-training-datasets-my-gpt2-text-document.png
:width: 800
.. _amd-megatron-lm-environment-setup:
Environment setup
-----------------
In the ``examples/llama`` directory of Megatron-LM, if you're working with Llama 2 7B or Llama 2 70B, use the
``train_llama2.sh`` configuration script. Likewise, if you're working with Llama 3 or Llama 3.1, then use
``train_llama3.sh`` and update the configuration script accordingly.
Network interface
^^^^^^^^^^^^^^^^^
To avoid connectivity issues, ensure the correct network interface is set in your training scripts.
1. Run the following command to find the active network interface on your system.
.. code-block:: shell
ip a
2. Update the ``NCCL_SOCKET_IFNAME`` and ``GLOO_SOCKET_IFNAME`` variables with your system's network interface. For
example:
.. code-block:: shell
export NCCL_SOCKET_IFNAME=ens50f0np0
export GLOO_SOCKET_IFNAME=ens50f0np0
Dataset options
^^^^^^^^^^^^^^^
You can use either mock data or real data for training.
* If you're using a real dataset, update the ``DATA_PATH`` variable to point to the location of your dataset.
.. code-block:: shell
DATA_DIR="/root/.cache/data" # Change to where your dataset is stored
DATA_PATH=${DATA_DIR}/bookcorpus_text_sentence
.. code-block:: shell
--data-path $DATA_PATH
Ensure that the files are accessible inside the Docker container.
* Mock data can be useful for testing and validation. If you're using mock data, replace ``--data-path $DATA_PATH`` with the ``--mock-data`` option.
.. code-block:: shell
--mock-data
Tokenizer
^^^^^^^^^
Tokenization is the process of converting raw text into tokens that can be processed by the model. For Llama
models, this typically involves sub-word tokenization, where words are broken down into smaller units based on
a fixed vocabulary. The tokenizer is trained along with the model on a large corpus of text, and it learns a
fixed vocabulary that can represent a wide range of text from different domains. This allows Llama models to
handle a variety of input sequences, including unseen words or domain-specific terms.
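As a quick illustration of this sub-word behavior, assuming the Hugging Face ``transformers`` package and access to the gated ``meta-llama/Llama-3.1-8B`` repository:
.. code-block:: python
# Illustrative only; any Hugging Face tokenizer shows the same mechanism.
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
print(tokenizer.tokenize("Tokenization splits unseen words into sub-word units."))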
To train any of the Llama 2 models that this Docker image supports, use the ``Llama2Tokenizer``.
To train any of the Llama 3 and Llama 3.1 models that this Docker image supports, use the ``HuggingFaceTokenizer``.
Set the Hugging Face model link in the ``TOKENIZER_MODEL`` variable.
For example, if you're using the Llama 3.1 8B model:
.. code-block:: shell
TOKENIZER_MODEL=meta-llama/Llama-3.1-8B
Run benchmark tests
-------------------
.. note::
If you're running **multi-node training**, update the following environment variables. They can
also be passed as command line arguments.
* Change ``localhost`` to the master node's hostname:
.. code-block:: shell
MASTER_ADDR="${MASTER_ADDR:-localhost}"
* Set the number of nodes you want to train on (for instance, ``2``, ``4``, ``8``):
.. code-block:: shell
NNODES="${NNODES:-1}"
* Set the rank of each node (0 for master, 1 for the first worker node, and so on):
.. code-block:: shell
NODE_RANK="${NODE_RANK:-0}"
* Use this command to run a performance benchmark test of any of the Llama 2 models that this Docker image supports (see :ref:`variables <amd-megatron-lm-benchmark-test-vars>`).
.. code-block:: shell
{variables} bash examples/llama/train_llama2.sh
* Use this command to run a performance benchmark test of any of the Llama 3 and Llama 3.1 models that this Docker image supports (see :ref:`variables <amd-megatron-lm-benchmark-test-vars>`).
.. code-block:: shell
{variables} bash examples/llama/train_llama3.sh
.. _amd-megatron-lm-benchmark-test-vars:
The benchmark tests support the same set of variables:
+--------------------------+-----------------------+-----------------------+
| Name | Options | Description |
+==========================+=======================+=======================+
| ``TEE_OUTPUT`` | 0 or 1 | 0: disable training |
| | | log |
| | | |
| | | 1: enable training |
| | | log |
+--------------------------+-----------------------+-----------------------+
| ``MBS`` | | Micro batch size |
+--------------------------+-----------------------+-----------------------+
| ``BS`` | | Batch size |
+--------------------------+-----------------------+-----------------------+
| ``TP`` | 1, 2, 4, 8 | Tensor parallel |
+--------------------------+-----------------------+-----------------------+
| ``TE_FP8`` | 0 or 1 | Datatype. |
| | | If it is set to 1, |
| | | FP8. |
| | | |
|                          |                       | If it is set to 0,    |
|                          |                       | BF16.                 |
+--------------------------+-----------------------+-----------------------+
| ``NO_TORCH_COMPILE`` | 0 or 1 | If it is set to 1, |
| | | enable torch.compile. |
| | | |
|                          |                       | If it is set to 0,    |
|                          |                       | disable torch.compile |
|                          |                       | (default).            |
+--------------------------+-----------------------+-----------------------+
| ``SEQ_LENGTH`` | | Input sequence length |
+--------------------------+-----------------------+-----------------------+
| ``GEMM_TUNING`` | 0 or 1 | If it is set to 1, |
| | | enable gemm tuning. |
| | | |
| | | If it is set to 0, |
|                          |                       | disable gemm tuning.  |
+--------------------------+-----------------------+-----------------------+
| ``USE_FLASH_ATTN`` | 0 or 1 | 0: disable flash |
| | | attention |
| | | |
| | | 1: enable flash |
| | | attention |
+--------------------------+-----------------------+-----------------------+
| ``ENABLE_PROFILING`` | 0 or 1 | 0: disable torch |
| | | profiling |
| | | |
| | | 1: enable torch |
| | | profiling |
+--------------------------+-----------------------+-----------------------+
| ``MODEL_SIZE``           |                       | The size of the model |
|                          |                       | (7B, 70B, and so on). |
+--------------------------+-----------------------+-----------------------+
| ``TOTAL_ITERS`` | | Total number of |
| | | iterations |
+--------------------------+-----------------------+-----------------------+
| ``transformer-impl`` | transformer_engine or | Enable transformer |
| | local | engine by default |
+--------------------------+-----------------------+-----------------------+
Benchmarking examples
^^^^^^^^^^^^^^^^^^^^^
.. tab-set::
.. tab-item:: Single node training
:sync: single
Use this command to run training with the Llama 2 7B model on a single node. You can specify MBS, BS, FP,
datatype, and so on.
.. code-block:: bash
TEE_OUTPUT=1 MBS=5 BS=120 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 \
SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh
You can find the training logs at the location defined in ``$TRAIN_LOG`` in the :ref:`configuration script <amd-megatron-lm-environment-setup>`.
See the sample output:
.. image:: ../../../data/how-to/rocm-for-ai/llama2-7b-training-log-sample.png
:width: 800
.. tab-item:: Multi node training
:sync: multi
Launch the Docker container on each node.
In this example, run training with the Llama 2 7B model on 2 nodes with specific MBS, BS, FP, datatype, and
so on.
On the master node:
.. code-block:: bash
TEE_OUTPUT=1 MBS=4 BS=64 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 \
SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh
On the worker node:
.. code-block:: bash
TEE_OUTPUT=1 MBS=4 BS=64 TP=8 TE_FP8=0 NO_TORCH_COMPILE=1 \
SEQ_LENGTH=4096 bash examples/llama/train_llama2.sh
You can find the training logs at the location defined in ``$TRAIN_LOG`` in the :ref:`configuration script <amd-megatron-lm-environment-setup>`.
Sample output for 2-node training:
Master node:
.. image:: ../../../data/how-to/rocm-for-ai/2-node-training-master.png
:width: 800
Worker node:
.. image:: ../../../data/how-to/rocm-for-ai/2-node-training-worker.png
:width: 800

View File

@@ -1,6 +1,6 @@
.. meta::
:description: How to use ROCm for HPC
:keywords: ROCm, AI, high performance computing, HPC
:description: How to use ROCm for high-performance computing (HPC).
:keywords: ROCm, AI, high performance computing, HPC, science, scientific
******************
Using ROCm for HPC
@@ -115,6 +115,12 @@ Ubuntu versions.
for non-destructive testing or for ocean acoustics.
* - Molecular dynamics
- `Amber <https://github.com/amd/InfinityHub-CI/tree/main/amber>`_
- Amber is a suite of biomolecular simulation programs built around a set of
molecular mechanical force fields for simulating biomolecules. The package
includes source code and demos.
* -
- `GROMACS with HIP (AMD implementation) <https://github.com/amd/InfinityHub-CI/tree/main/gromacs>`_
- GROMACS is a versatile package to perform molecular dynamics, i.e.
simulate the Newtonian equations of motion for systems with hundreds
@@ -129,6 +135,13 @@ Ubuntu versions.
Parallel Simulator.
* - Computational fluid dynamics
- `Ansys Fluent <https://github.com/amd/InfinityHub-CI/tree/main/ansys-fluent>`_
- Ansys Fluent is an advanced computational fluid dynamics (CFD) tool for
simulating and analyzing fluid flow, heat transfer, and related phenomena in complex systems.
It offers a range of powerful features for detailed and accurate modeling of various physical
processes, including turbulence, chemical reactions, and multiphase flows.
* -
- `NEKO <https://github.com/amd/InfinityHub-CI/tree/main/neko>`_
- Neko is a portable framework for high-order spectral element flow simulations.
Written in modern Fortran, Neko adopts an object-oriented approach, allowing
@@ -141,6 +154,26 @@ Ubuntu versions.
- nekRS is an open-source Navier Stokes solver based on the spectral element
method targeting classical processors and accelerators like GPUs.
* -
- `OpenFOAM <https://github.com/amd/InfinityHub-CI/tree/main/openfoam>`_
- OpenFOAM is a free, open-source computational fluid dynamics (CFD)
tool developed primarily by OpenCFD Ltd. It has a large user
base across most areas of engineering and science, from both commercial and
academic organizations. OpenFOAM has extensive features to solve
anything from complex fluid flows involving chemical reactions, turbulence, and
heat transfer, to acoustics, solid mechanics, and electromagnetics.
* -
- `PeleC <https://github.com/amd/InfinityHub-CI/tree/main/pelec>`_
- PeleC is an adaptive mesh refinement (AMR) solver for compressible reacting flows.
* -
- `Simcenter Star-CCM+ <https://github.com/amd/InfinityHub-CI/tree/main/siemens-star-ccm>`_
- Simcenter Star-CCM+ is a comprehensive computational fluid dynamics (CFD) and multiphysics
simulation tool developed by Siemens Digital Industries Software. It is designed to
help engineers and researchers analyze and optimize the performance of products and
systems across various industries.
* - Computational chemistry
- `QUDA <https://github.com/amd/InfinityHub-CI/tree/main/quda>`_
- Library designed for efficient lattice QCD computations on
@@ -170,12 +203,30 @@ Ubuntu versions.
developing atmosphere, ocean, and other earth-system simulation components
for use in climate, regional climate, and weather studies.
* - Energy, Oil, and Gas
- `DevitoPRO <https://github.com/amd/InfinityHub-CI/tree/main/devitopro>`_
- DevitoPRO is an advanced extension of the open-source Devito platform with added
features tailored for high-demand production workflows. It supports
high-performance computing (HPC) needs, especially in seismic imaging and inversion,
and is used to perform optimized finite difference (FD) computations
from high-level symbolic problem definitions. DevitoPRO performs automated
code generation and just-in-time (JIT) compilation based on symbolic equations
defined in SymPy to create and execute highly optimized FD stencil
kernels on multiple computer platforms.
* -
- `ECHELON <https://github.com/amd/InfinityHub-CI/tree/main/srt-echelon>`_
- ECHELON by Stone Ridge Technology is a reservoir simulation tool. With
fast processing, it retains precise accuracy and preserves legacy simulator results.
Faster reservoir simulation enables reservoir engineers to produce many realizations,
address larger models, and use advanced physics. It opens new workflows based on
ensemble methodologies for history matching and forecasting that yield
increased accuracy and more predictive results.
* - Benchmark
- `rocHPL <https://github.com/amd/InfinityHub-CI/tree/main/rochpl>`_
- HPL, or High-Performance Linpack, is a benchmark which solves a uniformly
random system of linear equations and reports floating-point execution rate.
This documentation supports the implementation of the HPL benchmark on
top of AMD's ROCm platform.
- HPL, or High-Performance Linpack, is a benchmark which solves a uniformly
random system of linear equations and reports floating-point execution rate.
* -
- `rocHPL-MxP <https://github.com/amd/InfinityHub-CI/tree/main/hpl-mxp>`_
@@ -216,6 +267,14 @@ Ubuntu versions.
range of hardware platforms via use of an in-built domain specific language derived
from the Mako templating engine.
* -
- `PETSc <https://github.com/amd/InfinityHub-CI/tree/main/petsc>`_
- Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures
and routines for the scalable (parallel) solution of scientific applications modeled by partial
differential equations. It supports MPI, GPUs through CUDA, HIP, and OpenCL,
as well as hybrid MPI-GPU parallelism. It also supports the NEC-SX Tsubasa Vector Engine.
PETSc also includes the Toolkit for Advanced Optimization (TAO) library.
* -
- `RAJA <https://github.com/amd/InfinityHub-CI/tree/main/raja>`_
- RAJA is a library of C++ software abstractions, primarily developed at Lawrence

View File

@@ -1,5 +1,5 @@
.. meta::
:description: AMD hardware optimization for specific workloads
:description: Learn about AMD hardware optimization for HPC-specific and workstation workloads.
:keywords: high-performance computing, HPC, Instinct accelerators, Radeon,
tuning, tuning guide, AMD, ROCm

View File

@@ -1,9 +1,9 @@
<head>
<meta charset="UTF-8">
<meta name="description" content="MI100 high-performance computing and tuning guide">
<meta name="keywords" content="MI100, high-performance computing, HPC, BIOS
settings, NBIO, AMD, ROCm">
</head>
---
myst:
html_meta:
"description": "AMD Instinct MI100 system settings optimization guide."
"keywords": "Instinct, MI100, microarchitecture, AMD, ROCm"
---
# AMD Instinct MI100 system optimization

View File

@@ -1,9 +1,9 @@
<head>
<meta charset="UTF-8">
<meta name="description" content="MI200 high-performance computing and tuning guide">
<meta name="keywords" content="MI200, high-performance computing, HPC, BIOS
settings, NBIO, AMD, ROCm">
</head>
---
myst:
html_meta:
"description": "Learn about AMD Instinct MI200 system settings and performance tuning."
"keywords": "Instinct, MI200, microarchitecture, AMD, ROCm"
---
# AMD Instinct MI200 system optimization

View File

@@ -1,5 +1,5 @@
.. meta::
:description: AMD Instinct MI300A system settings
:description: Learn about AMD Instinct MI300A system settings and performance tuning.
:keywords: AMD, Instinct, MI300A, HPC, tuning, BIOS settings, NBIO, ROCm,
environment variable, performance, accelerator, GPU, EPYC, GRUB,
operating system

View File

@@ -1,5 +1,5 @@
.. meta::
:description: AMD Instinct MI300X system settings
:description: Learn about AMD Instinct MI300X system settings and performance tuning.
:keywords: AMD, Instinct, MI300X, HPC, tuning, BIOS settings, NBIO, ROCm,
environment variable, performance, accelerator, GPU, EPYC, GRUB,
operating system
@@ -35,7 +35,7 @@ functioning correctly before trying to improve its overall performance. In this
section, the settings discussed mostly ensure proper functionality of your
Instinct-based system. Some settings discussed are known to improve performance
for most applications running on a MI300X system. See
:doc:`/how-to/tuning-guides/mi300x/workload` for how to improve performance for
:doc:`../rocm-for-ai/inference-optimization/workload` for how to improve performance for
specific applications or workloads.
.. _mi300x-bios-settings:
@@ -537,6 +537,8 @@ installation was successful, refer to the
:doc:`rocm-install-on-linux:install/post-install`.
Should verification fail, consult :doc:`/how-to/system-debugging`.
.. _mi300x-hardware-verification-with-rocm:
Hardware verification with ROCm
-------------------------------

View File

@@ -1,9 +1,9 @@
<head>
<meta charset="UTF-8">
<meta name="description" content="RDNA2 workstation tuning guide">
<meta name="keywords" content="RDNA2, workstation, BIOS settings, installation, AMD,
ROCm">
</head>
---
myst:
html_meta:
"description": "Learn about system settings and performance tuning for RDNA2-based GPUs."
"keywords": "RDNA2, workstation, desktop, BIOS, installation, Radeon, pro, v620, w6000"
---
# AMD RDNA2 system optimization
@@ -12,7 +12,7 @@
This chapter reviews system settings that are required to configure the system
for ROCm virtualization on RDNA2-based AMD Radeon™ PRO GPUs. Installing ROCm on
Bare Metal follows the routine ROCm
{doc}`installation procedure<rocm-install-on-linux:install/native-install/index>`.
{doc}`installation procedure<rocm-install-on-linux:install/install-methods/package-manager-index>`.
To enable ROCm virtualization on V620, one has to set up Single Root I/O
Virtualization (SR-IOV) in the BIOS via settings found in the following
@@ -166,4 +166,4 @@ First, assign GPU virtual function (VF) to VM using the following steps.
Then start the VM.
Finally install ROCm on the virtual machine (VM). For detailed instructions,
refer to the {doc}`Linux install guide<rocm-install-on-linux:install/native-install/index>`.
refer to the {doc}`Linux install guide<rocm-install-on-linux:install/install-methods/package-manager-index>`.

View File

@@ -1,3 +1,7 @@
.. meta::
:description: How to configure MI300X accelerators to fully leverage their capabilities and achieve optimal performance.
:keywords: ROCm, AI, machine learning, MI300X, LLM, usage, tutorial, optimization, tuning
************************
AMD MI300X tuning guides
************************
@@ -8,8 +12,8 @@ accelerators. They include detailed instructions on system settings and
application tuning suggestions to help you fully leverage the capabilities of
these accelerators, thereby achieving optimal performance.
* :doc:`/how-to/performance-validation/mi300x/vllm-benchmark`
* :doc:`../../rocm-for-ai/inference/vllm-benchmark`
* :doc:`/how-to/tuning-guides/mi300x/system`
* :doc:`../../system-optimization/mi300x`
* :doc:`/how-to/tuning-guides/mi300x/workload`
* :doc:`../../rocm-for-ai/inference-optimization/workload`

View File

@@ -37,17 +37,14 @@ ROCm documentation is organized into the following categories:
:::{grid-item-card} How to
:class-body: rocm-card-banner rocm-hue-12
* [Programming guide](./how-to/hip_programming_guide.rst)
* [Use ROCm for AI](./how-to/rocm-for-ai/index.rst)
* [Use ROCm for HPC](./how-to/rocm-for-hpc/index.rst)
* [Fine-tune LLMs and inference optimization](./how-to/llm-fine-tuning-optimization/index.rst)
* [System optimization](./how-to/system-optimization/index.rst)
* [AMD Instinct MI300X performance validation and tuning](./how-to/tuning-guides/mi300x/index.rst)
* [GPU cluster networking](https://rocm.docs.amd.com/projects/gpu-cluster-networking/en/latest/index.html)
* [System debugging](./how-to/system-debugging.md)
* [Use MPI](./how-to/gpu-enabled-mpi.rst)
* [Use advanced compiler features](./conceptual/compiler-topics.md)
* [Set the number of CUs](./how-to/setting-cus)
* [Troubleshoot BAR access limitation](./how-to/Bar-Memory.rst)
* [ROCm examples](https://github.com/amd/rocm-examples)
:::
@@ -55,12 +52,11 @@ ROCm documentation is organized into the following categories:
:class-body: rocm-card-banner rocm-hue-8
* [GPU architecture overview](./conceptual/gpu-arch.md)
* [GPU memory](./conceptual/gpu-memory.md)
* [Input-Output Memory Management Unit (IOMMU)](./conceptual/iommu.rst)
* [File structure (Linux FHS)](./conceptual/file-reorg.md)
* [GPU isolation techniques](./conceptual/gpu-isolation.md)
* [Using CMake](./conceptual/cmake-packages.rst)
* [ROCm & PCIe atomics](./conceptual/More-about-how-ROCm-uses-PCIe-Atomics.rst)
* [PCIe atomics in ROCm](./conceptual/pcie-atomics.rst)
* [Inception v3 with PyTorch](./conceptual/ai-pytorch-inception.md)
* [Oversubscription of hardware resources](./conceptual/oversubscription.rst)
:::
@@ -73,6 +69,7 @@ ROCm documentation is organized into the following categories:
* [ROCm tools, compilers, and runtimes](./reference/rocm-tools.md)
* [Accelerator and GPU hardware specifications](./reference/gpu-arch-specs.rst)
* [Precision support](./reference/precision-support.rst)
* [Graph safe support](./reference/graph-safe-support.rst)
:::
<!-- markdownlint-enable MD051 -->

View File

@@ -32,6 +32,21 @@ For more information about ROCm hardware compatibility, see the ROCm `Compatibil
- L1 Instruction Cache (KiB)
- VGPR File (KiB)
- SGPR File (KiB)
*
- MI325X
- CDNA3
- gfx942
- 256
- 304 (38 per XCD)
- 64
- 64
- 256
- 32 (4 per XCD)
- 32
- 16 per 2 CUs
- 64 per 2 CUs
- 512
- 12.5
*
- MI300X
- CDNA3

View File

@@ -0,0 +1,111 @@
.. meta::
:description: This page lists graph-safe ROCm libraries.
:keywords: AMD, ROCm, HIP, hipGRAPH
********************************************************************************
Graph-safe support for ROCm libraries
********************************************************************************
HIP graph-safe libraries operate safely in HIP execution graphs.
:ref:`hip:how_to_HIP_graph` are an alternative way of executing tasks on a GPU
that can provide performance benefits over launching kernels using the standard
method via streams.
Functions and routines from graph-safe libraries shouldn't result in issues like
race conditions, deadlocks, or unintended dependencies.
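For illustration, the following is a minimal sketch of capturing stream work into
a HIP graph and replaying it. The ``scale`` kernel, problem size, and launch count
are hypothetical, and error checking is omitted for brevity:

.. code-block:: cpp

   #include <hip/hip_runtime.h>

   __global__ void scale(float* data, float factor, int n)
   {
       int i = blockIdx.x * blockDim.x + threadIdx.x;
       if (i < n) data[i] *= factor;
   }

   int main()
   {
       constexpr int n = 1 << 20;
       float* d = nullptr;
       hipMalloc(&d, n * sizeof(float));

       hipStream_t stream;
       hipStreamCreate(&stream);

       // Record the work issued to the stream into a graph instead of
       // executing it immediately.
       hipGraph_t graph;
       hipStreamBeginCapture(stream, hipStreamCaptureModeGlobal);
       hipLaunchKernelGGL(scale, dim3(n / 256), dim3(256), 0, stream,
                          d, 2.0f, n);
       hipStreamEndCapture(stream, &graph);

       // Instantiate once, then replay the whole graph with lower
       // per-launch overhead than issuing the kernels individually.
       hipGraphExec_t exec;
       hipGraphInstantiate(&exec, graph, nullptr, nullptr, 0);
       for (int iter = 0; iter < 100; ++iter)
           hipGraphLaunch(exec, stream);
       hipStreamSynchronize(stream);

       hipGraphExecDestroy(exec);
       hipGraphDestroy(graph);
       hipStreamDestroy(stream);
       hipFree(d);
       return 0;
   }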
The following table shows whether a ROCm library is graph-safe.
.. list-table::
:header-rows: 1
*
- ROCm library
- Graph safe support
*
- `Composable Kernel <https://github.com/ROCm/composable_kernel>`_
-
*
- `hipBLAS <https://github.com/ROCm/hipBLAS>`_
-
*
- `hipBLASLt <https://github.com/ROCm/hipBLASLt>`_
- ⚠️
*
- `hipCUB <https://github.com/ROCm/hipCUB>`_
-
*
- `hipFFT <https://github.com/ROCm/hipFFT>`_
- ✅ (see :ref:`details <hipfft:hip-graph-support-for-hipfft>`)
*
- `hipRAND <https://github.com/ROCm/hipRAND>`_
-
*
- `hipSOLVER <https://github.com/ROCm/hipSOLVER>`_
- ⚠️ (experimental)
*
- `hipSPARSE <https://github.com/ROCm/hipSPARSE>`_
-
*
- `hipSPARSELt <https://github.com/ROCm/hipSPARSELt>`_
- ⚠️ (experimental)
*
- `hipTensor <https://github.com/ROCm/hipTensor>`_
-
*
- `MIOpen <https://github.com/ROCm/MIOpen>`_
-
*
- `RCCL <https://github.com/ROCm/rccl>`_
-
*
- `rocAL <https://github.com/ROCm/rocAL>`_
-
*
- `rocALUTION <https://github.com/ROCm/rocALUTION>`_
-
*
- `rocBLAS <https://github.com/ROCm/rocBLAS>`_
- ✅ (see :doc:`details <rocblas:reference/beta-features>`)
*
- `rocDecode <https://github.com/ROCm/rocDecode>`_
-
*
- `rocFFT <https://github.com/ROCm/rocFFT>`_
- ✅ (see :ref:`details <rocfft:hip-graph-support-for-rocfft>`)
*
- `rocHPCG <https://github.com/ROCm/rocHPCG>`_
-
*
- `rocJPEG <https://github.com/ROCm/rocJPEG>`_
-
*
- `rocPRIM <https://github.com/ROCm/rocPRIM>`_
-
*
- `rocRAND <https://github.com/ROCm/rocRAND>`_
-
*
- `rocSOLVER <https://github.com/ROCm/rocSOLVER>`_
- ⚠️ (experimental)
*
- `rocSPARSE <https://github.com/ROCm/rocSPARSE>`_
- ⚠️ (experimental)
*
- `rocThrust <https://github.com/ROCm/rocThrust>`_
- ❌ (see :doc:`details <rocthrust:hipgraph-support>`)
*
- `rocWMMA <https://github.com/ROCm/rocWMMA>`_
-
*
- `RPP <https://github.com/ROCm/rpp>`_
- ⚠️
*
- `Tensile <https://github.com/ROCm/Tensile>`_
-
✅: full support
⚠️: partial support
❌: not supported

View File

@@ -1,7 +1,7 @@
<head>
<meta charset="UTF-8">
<meta name="description" content="ROCm API libraries & tools">
<meta name="keywords" content="ROCm, API, libraries, tools, artificial intelligence, development,
<meta name="keywords" content="ROCm, API, libraries, tools, AI, artificial intelligence, development,
Communications, C++ primitives, Fast Fourier transforms, FFTs, random number generators, linear
algebra, AMD">
</head>

View File

@@ -8,6 +8,7 @@
| Version | Release date |
| ------- | ------------ |
| [6.3.1](https://rocm.docs.amd.com/en/docs-6.3.1/) | December 20, 2024 |
| [6.3.0](https://rocm.docs.amd.com/en/docs-6.3.0/) | December 3, 2024 |
| [6.2.4](https://rocm.docs.amd.com/en/docs-6.2.4/) | November 6, 2024 |
| [6.2.2](https://rocm.docs.amd.com/en/docs-6.2.2/) | September 27, 2024 |

View File

@@ -36,38 +36,62 @@ subtrees:
title: Use ROCm for AI
subtrees:
- entries:
- file: how-to/rocm-for-ai/install.rst
title: Installation
- file: how-to/rocm-for-ai/train-a-model.rst
title: Train a model
- file: how-to/rocm-for-ai/hugging-face-models.rst
title: Run models from Hugging Face
- file: how-to/rocm-for-ai/deploy-your-model.rst
title: Deploy your model
- file: how-to/rocm-for-hpc/index.rst
title: Use ROCm for HPC
- file: how-to/llm-fine-tuning-optimization/index.rst
title: Fine-tune LLMs and inference optimization
subtrees:
- entries:
- file: how-to/llm-fine-tuning-optimization/overview.rst
title: Conceptual overview
- file: how-to/llm-fine-tuning-optimization/fine-tuning-and-inference.rst
- file: how-to/rocm-for-ai/training/index.rst
title: Training
subtrees:
- entries:
- file: how-to/llm-fine-tuning-optimization/single-gpu-fine-tuning-and-inference.rst
title: Use a single accelerator
- file: how-to/llm-fine-tuning-optimization/multi-gpu-fine-tuning-and-inference.rst
title: Use multiple accelerators
- file: how-to/llm-fine-tuning-optimization/model-quantization.rst
- file: how-to/llm-fine-tuning-optimization/model-acceleration-libraries.rst
- file: how-to/llm-fine-tuning-optimization/llm-inference-frameworks.rst
- file: how-to/llm-fine-tuning-optimization/optimizing-with-composable-kernel.md
title: Optimize with Composable Kernel
- file: how-to/llm-fine-tuning-optimization/optimizing-triton-kernel.rst
title: Optimize Triton kernels
- file: how-to/llm-fine-tuning-optimization/profiling-and-debugging.rst
title: Profile and debug
- file: how-to/rocm-for-ai/training/train-a-model.rst
title: Train a model
- file: how-to/rocm-for-ai/training/scale-model-training.rst
title: Scale model training
- file: how-to/rocm-for-ai/fine-tuning/index.rst
title: Fine-tuning LLMs
subtrees:
- entries:
- file: how-to/rocm-for-ai/fine-tuning/overview.rst
title: Conceptual overview
- file: how-to/rocm-for-ai/fine-tuning/fine-tuning-and-inference.rst
title: Fine-tuning
subtrees:
- entries:
- file: how-to/rocm-for-ai/fine-tuning/single-gpu-fine-tuning-and-inference.rst
title: Use a single accelerator
- file: how-to/rocm-for-ai/fine-tuning/multi-gpu-fine-tuning-and-inference.rst
title: Use multiple accelerators
- file: how-to/rocm-for-ai/inference/index.rst
title: Inference
subtrees:
- entries:
- file: how-to/rocm-for-ai/inference/install.rst
title: Installation
- file: how-to/rocm-for-ai/inference/hugging-face-models.rst
title: Run models from Hugging Face
- file: how-to/rocm-for-ai/inference/llm-inference-frameworks.rst
title: LLM inference frameworks
- file: how-to/rocm-for-ai/inference/vllm-benchmark.rst
title: Performance validation
- file: how-to/rocm-for-ai/inference/deploy-your-model.rst
title: Deploy your model
- file: how-to/rocm-for-ai/inference-optimization/index.rst
title: Inference optimization
subtrees:
- entries:
- file: how-to/rocm-for-ai/inference-optimization/model-quantization.rst
- file: how-to/rocm-for-ai/inference-optimization/model-acceleration-libraries.rst
- file: how-to/rocm-for-ai/inference-optimization/optimizing-with-composable-kernel.md
title: Optimize with Composable Kernel
- file: how-to/rocm-for-ai/inference-optimization/optimizing-triton-kernel.rst
title: Optimize Triton kernels
- file: how-to/rocm-for-ai/inference-optimization/profiling-and-debugging.rst
title: Profile and debug
- file: how-to/rocm-for-ai/inference-optimization/workload.rst
title: Workload tuning
- file: how-to/rocm-for-hpc/index.rst
title: Use ROCm for HPC
- file: how-to/system-optimization/index.rst
title: System optimization
subtrees:
@@ -84,18 +108,6 @@ subtrees:
title: AMD RDNA 2
- file: how-to/tuning-guides/mi300x/index.rst
title: AMD MI300X performance validation and tuning
subtrees:
- entries:
- file: how-to/performance-validation/mi300x/vllm-benchmark.rst
title: Performance validation
- file: how-to/tuning-guides/mi300x/system.rst
title: System tuning
- file: how-to/tuning-guides/mi300x/workload.rst
title: Workload tuning
- url: https://rocm.docs.amd.com/projects/gpu-cluster-networking/en/${branch}/index.html
title: GPU cluster networking
- file: how-to/gpu-enabled-mpi.rst
title: Use MPI
- file: how-to/system-debugging.md
- file: conceptual/compiler-topics.md
title: Use advanced compiler features
@@ -108,7 +120,9 @@ subtrees:
- url: https://rocm.docs.amd.com/projects/llvm-project/en/latest/conceptual/openmp.html
title: OpenMP support
- file: how-to/setting-cus
title: Set the number of CUs
- file: how-to/Bar-Memory.rst
title: Troubleshoot BAR access limitation
- url: https://github.com/amd/rocm-examples
title: ROCm examples
@@ -144,8 +158,6 @@ subtrees:
title: AMD Instinct MI100/CDNA1 ISA
- url: https://www.amd.com/system/files/documents/amd-cdna-whitepaper.pdf
title: White paper
- file: conceptual/gpu-memory.md
title: GPU memory
- file: conceptual/iommu.rst
title: Input-Output Memory Management Unit (IOMMU)
- file: conceptual/file-reorg.md
@@ -154,8 +166,8 @@ subtrees:
title: GPU isolation techniques
- file: conceptual/cmake-packages.rst
title: Using CMake
- file: conceptual/More-about-how-ROCm-uses-PCIe-Atomics.rst
title: ROCm & PCIe atomics
- file: conceptual/pcie-atomics.rst
title: PCIe atomics in ROCm
- file: conceptual/ai-pytorch-inception.md
title: Inception v3 with PyTorch
- file: conceptual/oversubscription.rst
@@ -171,6 +183,8 @@ subtrees:
title: Hardware specifications
- file: reference/precision-support.rst
title: Precision support
- file: reference/graph-safe-support.rst
title: Graph safe support
- caption: Contribute
entries:

View File

@@ -1,3 +1,3 @@
rocm-docs-core==1.11.0
rocm-docs-core==1.13.0
sphinx-reredirects
sphinx-sitemap

View File

@@ -16,17 +16,17 @@ beautifulsoup4==4.12.3
# via pydata-sphinx-theme
breathe==4.35.0
# via rocm-docs-core
certifi==2024.8.30
certifi==2024.12.14
# via requests
cffi==1.17.1
# via
# cryptography
# pynacl
charset-normalizer==3.4.0
charset-normalizer==3.4.1
# via requests
click==8.1.7
click==8.1.8
# via sphinx-external-toc
cryptography==43.0.3
cryptography==44.0.0
# via pyjwt
deprecated==1.2.15
# via pygithub
@@ -36,17 +36,17 @@ docutils==0.21.2
# myst-parser
# pydata-sphinx-theme
# sphinx
fastjsonschema==2.20.0
fastjsonschema==2.21.1
# via rocm-docs-core
gitdb==4.0.11
gitdb==4.0.12
# via gitpython
gitpython==3.1.43
gitpython==3.1.44
# via rocm-docs-core
idna==3.10
# via requests
imagesize==1.4.1
# via sphinx
jinja2==3.1.4
jinja2==3.1.5
# via
# myst-parser
# sphinx
@@ -66,18 +66,18 @@ packaging==24.2
# via sphinx
pycparser==2.22
# via cffi
pydata-sphinx-theme==0.16.0
pydata-sphinx-theme==0.16.1
# via
# rocm-docs-core
# sphinx-book-theme
pygithub==2.5.0
# via rocm-docs-core
pygments==2.18.0
pygments==2.19.1
# via
# accessible-pygments
# pydata-sphinx-theme
# sphinx
pyjwt[crypto]==2.10.0
pyjwt[crypto]==2.10.1
# via pygithub
pynacl==1.5.0
# via pygithub
@@ -90,9 +90,9 @@ requests==2.32.3
# via
# pygithub
# sphinx
rocm-docs-core==1.11.0
rocm-docs-core==1.13.0
# via -r requirements.in
smmap==5.0.1
smmap==5.0.2
# via gitdb
snowballstemmer==2.2.0
# via sphinx
@@ -137,15 +137,15 @@ sphinxcontrib-qthelp==2.0.0
# via sphinx
sphinxcontrib-serializinghtml==2.0.0
# via sphinx
tomli==2.1.0
tomli==2.2.1
# via sphinx
typing-extensions==4.12.2
# via
# pydata-sphinx-theme
# pygithub
urllib3==2.2.3
urllib3==2.3.0
# via
# pygithub
# requests
wrapt==1.17.0
wrapt==1.17.1
# via deprecated

View File

@@ -10,9 +10,9 @@ ROCm is a software stack, composed primarily of open-source software, that
provides the tools for programming AMD Graphics Processing Units (GPUs), from
low-level kernels to high-level end-user applications.
.. image:: data/rocm-software-stack-6_3_0.jpg
.. image:: data/rocm-software-stack-6_3_2.jpg
:width: 800
:alt: AMD's ROCm software stack and neighboring technologies.
:alt: AMD's ROCm software stack and enabling technologies.
:align: center
Specifically, ROCm provides the tools for

View File

@@ -1,7 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<remote name="rocm-org" fetch="https://github.com/ROCm/" />
<default revision="refs/tags/rocm-6.3.0"
<default revision="refs/tags/rocm-6.3.1"
remote="rocm-org"
sync-c="true"
sync-j="4" />

View File

@@ -0,0 +1,61 @@
# ROCm 6.3.1 release notes
The release notes provide a summary of notable changes since the previous ROCm release.
- [Release highlights](#release-highlights)
- [Operating system and hardware support changes](#operating-system-and-hardware-support-changes)
- [ROCm components versioning](#rocm-components)
- [Detailed component changes](#detailed-component-changes)
- [ROCm known issues](#rocm-known-issues)
- [ROCm resolved issues](#rocm-resolved-issues)
- [ROCm upcoming changes](#rocm-upcoming-changes)
```{note}
If you're using Radeon™ PRO or Radeon GPUs in a workstation setting with a
display connected, continue to use ROCm 6.2.3. See the [Use ROCm on Radeon GPUs](https://rocm.docs.amd.com/projects/radeon/en/latest/index.html)
documentation to verify compatibility and system requirements.
```
## Release highlights
The following are notable new features and improvements in ROCm 6.3.1. For changes to individual components, see
[Detailed component changes](#detailed-component-changes).
### Per-queue resiliency for Instinct MI300 accelerators
The AMDGPU driver now includes enhanced resiliency for misbehaving applications on AMD Instinct MI300 accelerators. This helps isolate the impact of misbehaving applications, ensuring other workloads running on the same accelerator are unaffected.
### ROCm Runfile Installer
ROCm 6.3.1 introduces the ROCm Runfile Installer, with initial support for Ubuntu 22.04. The ROCm Runfile Installer facilitates ROCm installation without using a native Linux package management system, with or without network or internet access. For more information, see the [ROCm Runfile Installer documentation](https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.3.1/install/rocm-runfile-installer.html).
### ROCm documentation updates
ROCm documentation continues to be updated to provide clearer and more comprehensive guidance for a wider variety of user needs and use cases.
* Added documentation on training a model with ROCm Megatron-LM. AMD offers a Docker image for MI300X accelerators
containing essential components to get started, including ROCm libraries, PyTorch, and Megatron-LM utilities. See
[Training a model using ROCm Megatron-LM](https://rocm.docs.amd.com/en/latest/how-to/rocm-for-ai/train-a-model.html)
to get started.
The new ROCm Megatron-LM training Docker accompanies the [ROCm vLLM inference
Docker](https://rocm.docs.amd.com/en/latest/how-to/performance-validation/mi300x/vllm-benchmark.html)
as a set of ready-to-use containerized solutions to get started with using ROCm
for AI.
* Updated the [Instinct MI300X workload tuning
guide](https://rocm.docs.amd.com/en/latest/how-to/tuning-guides/mi300x/workload.html) with more current optimization
strategies. The updated sections include guidance on vLLM optimization, PyTorch TunableOp, and hipBLASLt tuning.
* HIP graph-safe libraries operate safely in HIP execution graphs. [HIP graphs](https://rocm.docs.amd.com/projects/HIP/en/latest/how-to/hip_runtime_api/hipgraph.html#how-to-hip-graph) are an alternative way of executing tasks on a GPU that can provide performance benefits over launching kernels using the standard method via streams. A new topic listing whether each [ROCm library is graph-safe](https://rocm.docs.amd.com/en/latest/reference/graph-safe-support.html) has been added.
* The [Device memory](https://rocm.docs.amd.com/projects/HIP/en/latest/how-to/hip_runtime_api/memory_management/device_memory.html) topic in the HIP memory management section has been updated.
* The HIP documentation has expanded with new resources for developers:
* [Multi device management](https://rocm.docs.amd.com/projects/HIP/en/latest/how-to/hip_runtime_api/multi_device.html)
* [OpenGL interoperability](https://rocm.docs.amd.com/projects/HIP/en/latest/how-to/hip_runtime_api/opengl_interop.html)

View File

@@ -0,0 +1,8 @@
## ROCm known issues
ROCm known issues are noted on {fab}`github` [GitHub](https://github.com/ROCm/ROCm/labels/Verified%20Issue). For known
issues related to individual components, review the [Detailed component changes](#detailed-component-changes).
### PCI Express Qualification Tool failure on Debian 12
The PCI Express Qualification Tool (PEQT) module in the ROCm Validation Suite (RVS) might fail due to a segmentation issue on Debian 12 (bookworm). As a result, it can fail to determine the characteristics of the PCIe interconnect between the host platform and the GPU, such as support for Gen 3 atomic completers, DMA transfer statistics, link speed, and link width. The standard `lspci` command can be used as an alternative to view the characteristics of the PCIe bus interconnect with the GPU. This issue is under investigation and will be addressed in a future release. See [GitHub issue #4175](https://github.com/ROCm/ROCm/issues/4175).

View File

@@ -0,0 +1,23 @@
## ROCm resolved issues
The following are previously known issues resolved in this release. For resolved issues related to
individual components, review the [Detailed component changes](#detailed-component-changes).
### Instinct MI300 series: backward weights convolution performance issue
Fixed a performance issue affecting certain tensor shapes during backward weights convolution when using FP16 or FP32 data types on Instinct MI300 series accelerators. See [GitHub issue #4080](https://github.com/ROCm/ROCm/issues/4080).
### ROCm Compute Profiler and ROCm Systems Profiler post-upgrade issues
Packaging metadata for ROCm Compute Profiler (`rocprofiler-compute`) and ROCm Systems Profiler
(`rocprofiler-systems`) has been updated to handle the renaming from Omniperf and Omnitrace,
respectively. This fixes minor issues when upgrading from ROCm 6.2 to 6.3. For more information, see the GitHub issues
[#4082](https://github.com/ROCm/ROCm/issues/4082) and
[#4083](https://github.com/ROCm/ROCm/issues/4083).
### Stale file due to OpenCL ICD loader deprecation
When upgrading from ROCm 6.2.x to ROCm 6.3.0, the issue of removal of the `rocm-icd-loader` package
leaving a stale file in the old `rocm-6.2.x` directory has been resolved. The stale files left during
the upgrade from ROCm 6.2.x to ROCm 6.3.0 will be removed when upgrading to ROCm 6.3.1. For more
information, see [GitHub issue #4084](https://github.com/ROCm/ROCm/issues/4084).

View File

@@ -0,0 +1,9 @@
## Operating system and hardware support changes
ROCm 6.3.1 adds support for Debian 12 (kernel: 6.1). Debian is supported only on AMD Instinct accelerators. See the installation instructions at [Debian native installation](https://rocm.docs.amd.com/projects/install-on-linux/en/docs-6.3.1/install/native-install/debian.html).
ROCm 6.3.1 enables support for the AMD Instinct MI325X accelerator. For more information, see [AMD Instinct™ MI325X Accelerators](https://www.amd.com/en/products/accelerators/instinct/mi300/mi325x.html).
See the [Compatibility
matrix](https://rocm.docs.amd.com/en/docs-6.3.1/compatibility/compatibility-matrix.html)
for more information about operating system and hardware compatibility.

View File

@@ -0,0 +1,13 @@
## ROCm upcoming changes
The following changes to the ROCm software stack are anticipated for future releases.
### AMDGPU wavefront size compiler macro deprecation
The `__AMDGCN_WAVEFRONT_SIZE__` macro will be deprecated in an upcoming
release. It is recommended to remove any use of this macro. For more information, see [AMDGPU
support](https://rocm.docs.amd.com/projects/llvm-project/en/docs-6.3.1/LLVM/clang/html/AMDGPUSupport.html).
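As a hedged alternative, host code can query the wavefront size at runtime through the HIP device properties rather than baking it in at compile time. The following is a minimal sketch with error handling omitted; in device code, the built-in `warpSize` variable serves the same purpose:

```cpp
// Minimal sketch: query the wavefront size at runtime rather than
// relying on the __AMDGCN_WAVEFRONT_SIZE__ compiler macro.
#include <hip/hip_runtime.h>
#include <cstdio>

int main()
{
    hipDeviceProp_t prop;
    // Device 0 is assumed; error handling omitted for brevity.
    hipGetDeviceProperties(&prop, 0);
    // warpSize reports the wavefront size (64 on CDNA, 32 on RDNA).
    std::printf("wavefront size: %d\n", prop.warpSize);
    return 0;
}
```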
### HIPCC Perl scripts deprecation
The HIPCC Perl scripts (`hipcc.pl` and `hipconfig.pl`) will be removed in an upcoming release.

View File

@@ -66,46 +66,44 @@ endef
# It is a space-separated list with zero or more elements.
$(call adddep,amd_smi_lib,${ASAN_DEP})
$(call adddep,aqlprofile,${ASAN_DEP} hsa)
$(call adddep,aqlprofile,${ASAN_DEP} rocr)
$(call adddep,comgr,lightning devicelibs)
$(call adddep,dbgapi,hsa comgr)
$(call adddep,dbgapi,rocr comgr)
$(call adddep,devicelibs,lightning)
$(call adddep,hip_on_rocclr,${ASAN_DEP} hsa comgr hipcc rocprofiler-register)
$(call adddep,hip_on_rocclr,${ASAN_DEP} rocr comgr hipcc rocprofiler-register)
$(call adddep,hipcc,)
$(call adddep,hipify_clang,hip_on_rocclr lightning)
$(call adddep,hsa,${ASAN_DEP} thunk lightning devicelibs rocprofiler-register)
$(call adddep,lightning,)
$(call adddep,omniperf,${ASAN_DEP})
$(call adddep,omnitrace,hipcc hsa hip_on_rocclr rocm_smi_lib rocprofiler roctracer)
$(call adddep,opencl_icd_loader,)
$(call adddep,opencl_on_rocclr,${ASAN_DEP} hsa comgr opencl_icd_loader)
$(call adddep,openmp_extras,thunk lightning devicelibs hsa)
$(call adddep,rdc,${ASAN_DEP} rocm_smi_lib hsa rocprofiler)
$(call adddep,rocclr,${ASAN_DEP} hsa comgr hipcc rocprofiler-register)
$(call adddep,rocm_bandwidth_test,${ASAN_DEP} hsa)
$(call adddep,opencl_on_rocclr,${ASAN_DEP} rocr comgr)
$(call adddep,openmp_extras,lightning devicelibs rocr)
$(call adddep,rocm_bandwidth_test,${ASAN_DEP} rocr)
$(call adddep,rocm_smi_lib,${ASAN_DEP})
$(call adddep,rocm-cmake,${ASAN_DEP})
$(call adddep,rocm-core,${ASAN_DEP})
$(call adddep,rocm-gdb,dbgapi)
$(call adddep,rocminfo,${ASAN_DEP} hsa)
$(call adddep,rocminfo,${ASAN_DEP} rocr)
$(call adddep,rocprofiler-register,${ASAN_DEP})
$(call adddep,rocprofiler-sdk,${ASAN_DEP} hsa aqlprofile opencl_on_rocclr hip_on_rocclr comgr)
$(call adddep,rocprofiler,${ASAN_DEP} hsa roctracer aqlprofile opencl_on_rocclr hip_on_rocclr comgr)
$(call adddep,rocr_debug_agent,${ASAN_DEP} hip_on_rocclr hsa dbgapi)
$(call adddep,roctracer,${ASAN_DEP} hsa hip_on_rocclr)
$(call adddep,thunk,${ASAN_DEP})
$(call adddep,rocprofiler-sdk,${ASAN_DEP} rocr aqlprofile opencl_on_rocclr hip_on_rocclr comgr)
$(call adddep,rocprofiler-systems,${ASAN_DEP} hipcc rocr hip_on_rocclr rocm_smi_lib rocprofiler roctracer rocprofiler-sdk)
$(call adddep,rocprofiler,${ASAN_DEP} rocr roctracer aqlprofile opencl_on_rocclr hip_on_rocclr comgr)
$(call adddep,rocprofiler-compute,${ASAN_DEP})
$(call adddep,rocr,${ASAN_DEP} lightning rocm_smi_lib devicelibs rocprofiler-register)
$(call adddep,rocr_debug_agent,${ASAN_DEP} hip_on_rocclr rocr dbgapi)
$(call adddep,roctracer,${ASAN_DEP} rocr hip_on_rocclr)
# rocm-dev points to all possible last finish components of Stage1 build.
rocm-dev-components :=rdc hipify_clang openmp_extras \
omniperf omnitrace rocm-core amd_smi_lib hipcc \
rocm_bandwidth_test rocr_debug_agent rocm-gdb
$(call adddep,rocm-dev,$(filter-out ${NOBUILD},${rocm-dev-components}))
rocm-dev-components :=amd_smi_lib aqlprofile comgr dbgapi devicelibs hip_on_rocclr hipcc hipify_clang \
lightning rocprofiler-compute opencl_on_rocclr openmp_extras rocm_bandwidth_test rocm_smi_lib \
rocm-cmake rocm-core rocm-gdb rocminfo rocprofiler-register rocprofiler-sdk rocprofiler-systems \
rocprofiler rocr rocr_debug_agent roctracer
$(call adddep,rocm-dev,$(filter-out ${NOBUILD} kernel_ubuntu,${rocm-dev-components}))
$(call adddep,amdmigraphx,hip_on_rocclr half rocblas miopen-hip lightning hipcc)
$(call adddep,amdmigraphx,hip_on_rocclr half rocblas miopen-hip lightning hipcc hiptensor)
$(call adddep,composable_kernel,lightning hipcc hip_on_rocclr rocm-cmake)
$(call adddep,half,rocm-cmake)
$(call adddep,hipblas-common,lightning)
$(call adddep,hipblas,hip_on_rocclr rocblas rocsolver lightning hipcc)
$(call adddep,hipblaslt,hip_on_rocclr openmp_extras hipblas lightning hipcc)
$(call adddep,hipblaslt,hip_on_rocclr openmp_extras lightning hipcc hipblas-common rocm-dev)
$(call adddep,hipcub,hip_on_rocclr rocprim lightning hipcc)
$(call adddep,hipfft,hip_on_rocclr openmp_extras rocfft rocrand hiprand lightning hipcc)
$(call adddep,hipfort,rocblas hipblas rocsparse hipsparse rocfft hipfft rocrand hiprand rocsolver hipsolver lightning hipcc)
@@ -115,22 +113,25 @@ $(call adddep,hipsparse,hip_on_rocclr rocsparse lightning hipcc)
$(call adddep,hipsparselt,hip_on_rocclr hipsparse lightning hipcc openmp_extras)
$(call adddep,hiptensor,hip_on_rocclr composable_kernel lightning hipcc)
$(call adddep,miopen-deps,lightning hipcc)
$(call adddep,miopen-hip,composable_kernel half hip_on_rocclr miopen-deps rocblas roctracer lightning hipcc)
$(call adddep,miopen-hip,composable_kernel half hip_on_rocclr miopen-deps hipblas hipblaslt rocrand roctracer lightning hipcc)
$(call adddep,mivisionx,amdmigraphx miopen-hip rpp lightning hipcc)
$(call adddep,rccl,hip_on_rocclr hsa lightning hipcc rocm_smi_lib hipify_clang)
$(call adddep,rccl,rocm-core hip_on_rocclr rocr lightning hipcc rocm_smi_lib hipify_clang)
$(call adddep,rdc,rocm_smi_lib rocprofiler rocmvalidationsuite)
$(call adddep,rocalution,rocblas rocsparse rocrand lightning hipcc)
$(call adddep,rocblas,hip_on_rocclr openmp_extras lightning hipcc)
$(call adddep,rocblas,hip_on_rocclr openmp_extras lightning hipcc hipblaslt)
$(call adddep,rocal,mivisionx)
$(call adddep,rocdecode,hip_on_rocclr lightning hipcc)
$(call adddep,rocdecode,hip_on_rocclr lightning hipcc amdmigraphx)
$(call adddep,rocfft,hip_on_rocclr rocrand hiprand lightning hipcc openmp_extras)
$(call adddep,rocmvalidationsuite,hip_on_rocclr hsa rocblas rocm-core lightning hipcc rocm_smi_lib)
$(call adddep,rocjpeg,hip_on_rocclr lightning hipcc rocm-dev)
$(call adddep,rocmvalidationsuite,hip_on_rocclr rocr hipblas hiprand hipblaslt rocm-core lightning hipcc rocm_smi_lib)
$(call adddep,rocprim,hip_on_rocclr lightning hipcc)
$(call adddep,rocrand,hip_on_rocclr lightning hipcc)
$(call adddep,rocsolver,hip_on_rocclr rocblas rocsparse lightning hipcc)
$(call adddep,rocsolver,hip_on_rocclr rocblas rocsparse rocprim lightning hipcc)
$(call adddep,rocsparse,hip_on_rocclr rocprim lightning hipcc)
$(call adddep,rocthrust,hip_on_rocclr rocprim lightning hipcc)
$(call adddep,rocwmma,hip_on_rocclr rocblas lightning hipcc rocm-cmake rocm_smi_lib)
$(call adddep,rpp,half lightning hipcc openmp_extras)
$(call adddep,transferbench,hip_on_rocclr lightning hipcc)
# -------------------------------------------------------------------------
@@ -189,7 +190,7 @@ else # } {
# Pass in jobserver info using the RMAKE variable
${RMAKE}@( if set -x && source $${INFRA_REPO}/envsetup.sh && \
rm -f $$@.errors $$@ $$@.repackaged && \
$${INFRA_REPO}/build_$1.sh -c && source $${INFRA_REPO}/ccache-env-mathlib.sh && \
$${INFRA_REPO}/build_$1.sh -c && \
time bash -x $${INFRA_REPO}/build_$1.sh $${RELEASE_FLAG} $${SANITIZER_FLAG} && $${INFRA_REPO}/post_inst_pkg.sh "$1" ; \
then mv $$@.inprogress $$@ ; \
else mv $$@.inprogress $$@.errors ; echo Error in $1 >&2 ; exit 1 ;\
@@ -216,11 +217,14 @@ $(call peval,$(foreach dep,$(strip ${components}),$(call toplevel,${dep})))
all: $(addprefix T_,$(filter-out ${NOBUILD},${components}))
@echo All ROCm components built
# Do not document this target
upload: $(addprefix U_,${components})
upload: $(addprefix U_,$(filter-out ${NOBUILD},${components}))
@echo All ROCm components built and uploaded
upload-rocm-dev: $(addprefix U_,$(filter-out ${NOBUILD},${components}))
@echo All rocm-dev components built and uploaded
##help rocm-dev: Build a subset of ROCm
rocm-dev: T_rocm-dev
rocm-dev: $(addprefix T_,$(filter-out ${NOBUILD},${components}))
@echo rocm-dev built
${OUT_DIR}/logs:

View File

@@ -22,15 +22,15 @@ printUsage() {
return 0
}
PROJ_NAME="amdsmi"
PACKAGE_ROOT="$(getPackageRoot)"
TARGET="build"
PACKAGE_LIB=$(getLibPath)
PACKAGE_INCLUDE="$(getIncludePath)"
AMDSMI_BUILD_DIR=$(getBuildPath amdsmi)
AMDSMI_PACKAGE_DEB_DIR="$(getPackageRoot)/deb/amdsmi"
AMDSMI_PACKAGE_RPM_DIR="$(getPackageRoot)/rpm/amdsmi"
AMDSMI_BUILD_DIR=$(getBuildPath $PROJ_NAME)
AMDSMI_PACKAGE_DEB_DIR="$PACKAGE_ROOT/deb/$PROJ_NAME"
AMDSMI_PACKAGE_RPM_DIR="$PACKAGE_ROOT/rpm/$PROJ_NAME"
AMDSMI_BUILD_TYPE="debug"
BUILD_TYPE="Debug"
@@ -57,10 +57,9 @@ do
(-a | --address_sanitizer)
set_asan_env_vars
set_address_sanitizer_on
# TODO - support standard option of passing cmake environment vars - CFLAGS,CXXFLAGS etc., to enable address sanitizer
ADDRESS_SANITIZER=true ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
ack_and_skip_static ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
(-p | --package)

View File

@@ -10,7 +10,9 @@ build_amdmigraphx() {
cd $COMPONENT_SRC
pip3 install https://github.com/RadeonOpenCompute/rbuild/archive/master.tar.gz
if ! command -v rbuild &> /dev/null; then
pip3 install https://github.com/RadeonOpenCompute/rbuild/archive/master.tar.gz
fi
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
@@ -20,7 +22,7 @@ build_amdmigraphx() {
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="$GPU_ARCHS"
else
GPU_TARGETS="gfx908;gfx90a;gfx940;gfx941;gfx942;gfx1030;gfx1100;gfx1101"
GPU_TARGETS="gfx900;gfx906;gfx908;gfx90a;gfx1030;gfx1100;gfx1101;gfx1102;gfx942;gfx1200;gfx1201"
fi
init_rocm_common_cmake_params
@@ -29,7 +31,7 @@ build_amdmigraphx() {
--cxx="${ROCM_PATH}/llvm/bin/clang++" \
--cc="${ROCM_PATH}/llvm/bin/clang" \
"${rocm_math_common_cmake_params[@]}" \
-DCMAKE_MODULE_LINKER_FLAGS="-Wl,--enable-new-dtags -Wl,--rpath,$ROCM_LIB_RPATH" \
-DCMAKE_MODULE_LINKER_FLAGS="-Wl,--enable-new-dtags,--build-id=sha1,--rpath,$ROCM_LIB_RPATH" \
-DGPU_TARGETS="${GPU_TARGETS}" \
-DCMAKE_INSTALL_RPATH=""

View File

@@ -11,7 +11,9 @@ printUsage() {
echo " -p, --package <type> Specify packaging format"
echo " -r, --release Make a release build instead of a debug build"
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of
type referred to by pkg_type"
echo " -s, --static Component/Build does not support static builds just accepting this param & ignore.
No effect of the param on this build"
echo " -h, --help Prints this help"
echo
echo "Possible values for <type>:"

View File

@@ -1,136 +0,0 @@
#!/bin/bash
source "$(dirname "${BASH_SOURCE}")/compute_utils.sh"
printUsage() {
echo
echo "Usage: $(basename "${BASH_SOURCE}") [-c|-r|-h] [makeopts]"
echo
echo "Options:"
echo " -c, --clean Removes all clang-ocl build artifacts"
echo " -r, --release Build non-debug version clang-ocl (default is debug)"
echo " -a, --address_sanitizer Enable address sanitizer"
echo " -o, --outdir <pkg_type> Print path of output directory containing packages of
type referred to by pkg_type"
echo " -h, --help Prints this help"
echo " -s, --static Supports static CI by accepting this param & not bailing out. No effect of the param though"
echo
return 0
}
TARGET="build"
CLANG_OCL_DEST="$(getBinPath)"
CLANG_OCL_SRC_ROOT="$CLANG_OCL_ROOT"
CLANG_OCL_BUILD_DIR="$(getBuildPath clang-ocl)"
MAKEARG="$DASH_JAY"
PACKAGE_ROOT="$(getPackageRoot)"
PACKAGE_UTILS="$(getUtilsPath)"
CLANG_OCL_PACKAGE_DEB="$PACKAGE_ROOT/deb/clang-ocl"
CLANG_OCL_PACKAGE_RPM="$PACKAGE_ROOT/rpm/clang-ocl"
BUILD_TYPE="Debug"
SHARED_LIBS="ON"
CLEAN_OR_OUT=0;
MAKETARGET="deb"
PKGTYPE="deb"
VALID_STR=`getopt -o hcraso:g: --long help,clean,release,clean,static,address_sanitizer,outdir:,gpu_list: -- "$@"`
eval set -- "$VALID_STR"
while true ;
do
case "$1" in
(-h | --help)
printUsage ; exit 0;;
(-c | --clean)
TARGET="clean" ; ((CLEAN_OR_OUT|=1)) ; shift ;;
(-r | --release)
MAKEARG="$MAKEARG BUILD_TYPE=rel" ; BUILD_TYPE="Release" ; shift ;;
(-a | --address_sanitizer)
set_asan_env_vars
set_address_sanitizer_on ; shift ;;
(-s | --static)
SHARED_LIBS="OFF" ; shift ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; OUT_DIR_SPECIFIED=1 ; ((CLEAN_OR_OUT|=2)) ; shift 2 ;;
(-g | --gpu_list )
GPU_LIST=$2; shift 2 ;;
--) shift; break;;
(*)
echo " This should never come but just incase : UNEXPECTED ERROR Parm : [$1] ">&2 ; exit 20;;
esac
done
RET_CONFLICT=1
check_conflicting_options $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
if [ $RET_CONFLICT -ge 30 ]; then
print_vars $API_NAME $TARGET $BUILD_TYPE $SHARED_LIBS $CLEAN_OR_OUT $PKGTYPE $MAKETARGET
exit $RET_CONFLICT
fi
clean_clang-ocl() {
echo "Removing clang-ocl"
rm -rf $CLANG_OCL_DEST/clang-ocl
rm -rf $CLANG_OCL_BUILD_DIR
rm -rf $CLANG_OCL_PACKAGE_DEB
rm -rf $CLANG_OCL_PACKAGE_RPM
}
build_clang-ocl() {
if [ ! -d "$CLANG_OCL_BUILD_DIR" ]; then
mkdir -p $CLANG_OCL_BUILD_DIR
pushd $CLANG_OCL_BUILD_DIR
if [ -e $PACKAGE_ROOT/lib/bitcode/opencl.amdgcn.bc ]; then
BC_DIR="$ROCM_INSTALL_PATH/lib"
else
BC_DIR="$ROCM_INSTALL_PATH/amdgcn/bitcode"
fi
cmake \
$(rocm_cmake_params) \
-DDISABLE_CHECKS="ON" \
-DCLANG_BIN="$ROCM_INSTALL_PATH/llvm/bin" \
-DBITCODE_DIR="$BC_DIR" \
$(rocm_common_cmake_params) \
-DCPACK_SET_DESTDIR="OFF" \
$CLANG_OCL_SRC_ROOT
echo "Making clang-ocl:"
cmake --build . -- $MAKEARG
cmake --build . -- $MAKEARG install
cmake --build . -- $MAKEARG package
popd
fi
copy_if DEB "${CPACKGEN:-"DEB;RPM"}" "$CLANG_OCL_PACKAGE_DEB" $CLANG_OCL_BUILD_DIR/rocm-clang-ocl*.deb
copy_if RPM "${CPACKGEN:-"DEB;RPM"}" "$CLANG_OCL_PACKAGE_RPM" $CLANG_OCL_BUILD_DIR/rocm-clang-ocl*.rpm
}
print_output_directory() {
case ${PKGTYPE} in
("deb")
echo ${CLANG_OCL_PACKAGE_DEB};;
("rpm")
echo ${CLANG_OCL_PACKAGE_RPM};;
(*)
echo "Invalid package type \"${PKGTYPE}\" provided for -o" >&2; exit 1;;
esac
exit
}
case $TARGET in
(clean) clean_clang-ocl ;;
(build) build_clang-ocl ;;
(outdir) print_output_directory ;;
(*) die "Invalid target $TARGET" ;;
esac
echo "Operation complete"
exit 0

View File

@@ -6,73 +6,53 @@ source "$(dirname "${BASH_SOURCE[0]}")/compute_helper.sh"
set_component_src composable_kernel
GPU_ARCH_LIST="gfx908;gfx90a;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
build_miopen_ck() {
echo "Start Building Composable Kernel"
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
GPU_ARCH_LIST="gfx908:xnack+;gfx90a:xnack+;gfx942:xnack+"
else
unset_asan_env_vars
set_address_sanitizer_off
fi
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
GPU_ARCH_LIST="gfx942"
ack_and_skip_static
fi
PYTHON_VERSION_WORKAROUND=''
echo "DISTRO_ID: ${DISTRO_ID}"
if [ "$DISTRO_ID" = "rhel-8.8" ] || [ "$DISTRO_ID" = "sles-15.5" ] ; then
EXTRA_PYTHON_PATH=/opt/Python-3.8.13
PYTHON_VERSION_WORKAROUND="-DCK_USE_ALTERNATIVE_PYTHON=${EXTRA_PYTHON_PATH}/bin/python3.8"
# For the python interpreter we need to export LD_LIBRARY_PATH.
export LD_LIBRARY_PATH=${EXTRA_PYTHON_PATH}/lib:$LD_LIBRARY_PATH
fi
cd $COMPONENT_SRC
mkdir "$BUILD_DIR" && cd "$BUILD_DIR"
init_rocm_common_cmake_params
if [ -n "$GPU_ARCHS" ]; then
GPU_TARGETS="-DAMDGPU_TARGETS=${GPU_ARCHS}"
fi
if [ "${ASAN_CMAKE_PARAMS}" == "true" ] ; then
cmake -DBUILD_DEV=OFF \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE:-'RelWithDebInfo'} \
-DCMAKE_CXX_COMPILER=${ROCM_PATH}/llvm/bin/clang++ \
-DCMAKE_CXX_FLAGS=" -O3 " \
-DCMAKE_PREFIX_PATH="${ROCM_PATH%-*}/lib/cmake;${ROCM_PATH%-*}/$ASAN_LIBDIR;${ROCM_PATH%-*}/llvm;${ROCM_PATH%-*}" \
-DCMAKE_SHARED_LINKER_FLAGS_INIT="-Wl,--enable-new-dtags,--rpath,$ROCM_ASAN_LIB_RPATH" \
-DCMAKE_EXE_LINKER_FLAGS_INIT="-Wl,--enable-new-dtags,--rpath,$ROCM_ASAN_EXE_RPATH" \
-DCMAKE_VERBOSE_MAKEFILE=1 \
-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=FALSE \
-DCMAKE_INSTALL_PREFIX=${ROCM_PATH} \
-DCMAKE_PACKAGING_INSTALL_PREFIX=${ROCM_PATH} \
-DBUILD_FILE_REORG_BACKWARD_COMPATIBILITY=OFF \
-DROCM_SYMLINK_LIBS=OFF \
-DCPACK_PACKAGING_INSTALL_PREFIX=${ROCM_PATH} \
-DROCM_DISABLE_LDCONFIG=ON \
-DROCM_PATH=${ROCM_PATH} \
-DCPACK_GENERATOR="${PKGTYPE^^}" \
${LAUNCHER_FLAGS} \
-DINSTANCES_ONLY=ON \
-DENABLE_ASAN_PACKAGING=true \
"${GPU_TARGETS}" \
"$COMPONENT_SRC"
else
cmake -DBUILD_DEV=OFF \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CXX_COMPILER=${ROCM_PATH}/llvm/bin/clang++ \
-DCMAKE_CXX_FLAGS=" -O3 " \
-DCMAKE_PREFIX_PATH=${ROCM_PATH%-*} \
-DCMAKE_SHARED_LINKER_FLAGS_INIT='-Wl,--enable-new-dtags,--rpath,$ORIGIN' \
-DCMAKE_EXE_LINKER_FLAGS_INIT='-Wl,--enable-new-dtags,--rpath,$ORIGIN/../lib' \
-DCMAKE_VERBOSE_MAKEFILE=1 \
-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=FALSE \
-DCMAKE_INSTALL_PREFIX=${ROCM_PATH} \
-DCMAKE_PACKAGING_INSTALL_PREFIX=${ROCM_PATH} \
-DBUILD_FILE_REORG_BACKWARD_COMPATIBILITY=OFF \
-DROCM_SYMLINK_LIBS=OFF \
-DCPACK_PACKAGING_INSTALL_PREFIX=${ROCM_PATH} \
-DROCM_DISABLE_LDCONFIG=ON \
-DROCM_PATH=${ROCM_PATH} \
-DCPACK_GENERATOR="${PKGTYPE^^}" \
-DCMAKE_CXX_COMPILER="${ROCM_PATH}/llvm/bin/clang++" \
-DCMAKE_C_COMPILER="${ROCM_PATH}/llvm/bin/clang" \
${LAUNCHER_FLAGS} \
-DINSTANCES_ONLY=ON \
"${GPU_TARGETS}" \
"$COMPONENT_SRC"
fi
cmake \
-DBUILD_DEV=OFF \
"${rocm_math_common_cmake_params[@]}" \
${PYTHON_VERSION_WORKAROUND} \
-DCPACK_GENERATOR="${PKGTYPE^^}" \
-DCMAKE_CXX_COMPILER="${ROCM_PATH}/llvm/bin/clang++" \
-DCMAKE_C_COMPILER="${ROCM_PATH}/llvm/bin/clang" \
${LAUNCHER_FLAGS} \
-DGPU_ARCHS="${GPU_ARCH_LIST}" \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_CXX_FLAGS=" -O3 " \
"$COMPONENT_SRC"
cmake --build . -- -j${PROC} package
cmake --build "$BUILD_DIR" -- install
mkdir -p $PACKAGE_DIR && cp ./*.${PKGTYPE} $PACKAGE_DIR
rm -rf *
}
unset_asan_env_vars() {
@@ -88,85 +68,6 @@ set_address_sanitizer_off() {
export LDFLAGS=""
}
build_miopen_ckProf() {
ENABLE_ADDRESS_SANITIZER=false
echo "Start Building Composable Kernel Profiler"
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
set_address_sanitizer_on
else
unset_asan_env_vars
set_address_sanitizer_off
fi
cd $COMPONENT_SRC
cd "$BUILD_DIR"
rm -rf *
architectures='gfx10 gfx11 gfx90 gfx94'
if [ -n "$GPU_ARCHS" ]; then
architectures=$(echo ${GPU_ARCHS} | awk -F';' '{for(i=1;i<=NF;i++) a[substr($i,1,5)]} END{for(i in a) printf i" "}')
fi
for arch in ${architectures}
do
if [ "${ASAN_CMAKE_PARAMS}" == "true" ] ; then
cmake -DBUILD_DEV=OFF \
-DCMAKE_PREFIX_PATH="${ROCM_PATH%-*}/lib/cmake;${ROCM_PATH%-*}/$ASAN_LIBDIR;${ROCM_PATH%-*}/llvm;${ROCM_PATH%-*}" \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE:-'RelWithDebInfo'} \
-DCMAKE_SHARED_LINKER_FLAGS_INIT="-Wl,--enable-new-dtags,--rpath,$ROCM_ASAN_LIB_RPATH" \
-DCMAKE_EXE_LINKER_FLAGS_INIT="-Wl,--enable-new-dtags,--rpath,$ROCM_ASAN_EXE_RPATH" \
-DCMAKE_VERBOSE_MAKEFILE=1 \
-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=FALSE \
-DCMAKE_INSTALL_PREFIX="${ROCM_PATH}" \
-DCMAKE_PACKAGING_INSTALL_PREFIX="${ROCM_PATH}" \
-DBUILD_FILE_REORG_BACKWARD_COMPATIBILITY=OFF \
-DROCM_SYMLINK_LIBS=OFF \
-DCPACK_PACKAGING_INSTALL_PREFIX="${ROCM_PATH}" \
-DROCM_DISABLE_LDCONFIG=ON \
-DROCM_PATH="${ROCM_PATH}" \
-DCPACK_GENERATOR="${PKGTYPE^^}" \
-DCMAKE_CXX_COMPILER="${ROCM_PATH}/llvm/bin/clang++" \
-DCMAKE_C_COMPILER="${ROCM_PATH}/llvm/bin/clang" \
${LAUNCHER_FLAGS} \
-DPROFILER_ONLY=ON \
-DENABLE_ASAN_PACKAGING=true \
-DGPU_ARCH="${arch}" \
"$COMPONENT_SRC"
else
cmake -DBUILD_DEV=OFF \
-DCMAKE_PREFIX_PATH="${ROCM_PATH%-*}" \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_SHARED_LINKER_FLAGS_INIT='-Wl,--enable-new-dtags,--rpath,$ORIGIN' \
-DCMAKE_EXE_LINKER_FLAGS_INIT='-Wl,--enable-new-dtags,--rpath,$ORIGIN/../lib' \
-DCMAKE_VERBOSE_MAKEFILE=1 \
-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=FALSE \
-DCMAKE_INSTALL_PREFIX="${ROCM_PATH}" \
-DCMAKE_PACKAGING_INSTALL_PREFIX="${ROCM_PATH}" \
-DBUILD_FILE_REORG_BACKWARD_COMPATIBILITY=OFF \
-DROCM_SYMLINK_LIBS=OFF \
-DCPACK_PACKAGING_INSTALL_PREFIX="${ROCM_PATH}" \
-DROCM_DISABLE_LDCONFIG=ON \
-DROCM_PATH="${ROCM_PATH}" \
-DCPACK_GENERATOR="${PKGTYPE^^}" \
-DCMAKE_CXX_COMPILER="${ROCM_PATH}/llvm/bin/clang++" \
-DCMAKE_C_COMPILER="${ROCM_PATH}/llvm/bin/clang" \
${LAUNCHER_FLAGS} \
-DPROFILER_ONLY=ON \
-DGPU_ARCH="${arch}" \
"$COMPONENT_SRC"
fi
cmake --build . -- -j${PROC} package
cp ./*ckprofiler*.${PKGTYPE} $PACKAGE_DIR
rm -rf *
done
rm -rf _CPack_Packages/ && find -name '*.o' -delete
echo "Finished building Composable Kernel"
show_build_cache_stats
}
clean_miopen_ck() {
echo "Cleaning MIOpen-CK build directory: ${BUILD_DIR} ${PACKAGE_DIR}"
rm -rf "$BUILD_DIR" "$PACKAGE_DIR"
@@ -176,7 +77,7 @@ clean_miopen_ck() {
stage2_command_args "$@"
case $TARGET in
build) build_miopen_ck; build_miopen_ckProf;;
build) build_miopen_ck ;;
outdir) print_output_directory ;;
clean) clean_miopen_ck ;;
*) die "Invalid target $TARGET" ;;

View File

@@ -15,7 +15,7 @@ printUsage() {
type referred to by pkg_type"
echo " -h, --help Prints this help"
echo " -M, --skip_man_pages Do not build the 'docs' target"
echo " -s, --static Supports static CI by accepting this param & not bailing out. No effect of the param though"
echo " -s, --static Component/Build does not support static builds just accepting this param & ignore. No effect of the param on this build"
echo
echo "Possible values for <type>:"
echo " deb -> Debian format (default)"
@@ -65,7 +65,7 @@ do
set_asan_env_vars
set_address_sanitizer_on ;;
(-s | --static)
SHARED_LIBS="OFF" ;;
ack_and_skip_static ;;
(-o | --outdir)
TARGET="outdir"; PKGTYPE=$2 ; ((CLEAN_OR_OUT|=2)) ; shift 1 ;;
(-M | --skip_man_pages) DODOCSBUILD=false;;

View File

@@ -96,6 +96,7 @@ build_devicelibs() {
if [ ! -e Makefile ]; then
cmake $(rocm_cmake_params) \
$(rocm_common_cmake_params) \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DROCM_DEVICE_LIBS_BITCODE_INSTALL_LOC_NEW="$bitcodeInstallLoc/amdgcn" \
-DROCM_DEVICE_LIBS_BITCODE_INSTALL_LOC_OLD="amdgcn" \
"$DEVICELIBS_ROOT"

View File

@@ -24,14 +24,15 @@ printUsage() {
source "$(dirname "${BASH_SOURCE}")/compute_utils.sh"
MAKEOPTS="$DASH_JAY"
PROJ_NAME="hip-on-rocclr"
BUILD_PATH="$(getBuildPath hip-on-rocclr)"
BUILD_PATH="$(getBuildPath $PROJ_NAME)"
TARGET="build"
PACKAGE_ROOT="$(getPackageRoot)"
PACKAGE_SRC="$(getSrcPath)"
PACKAGE_DEB="$PACKAGE_ROOT/deb/hip-on-rocclr"
PACKAGE_RPM="$PACKAGE_ROOT/rpm/hip-on-rocclr"
PACKAGE_DEB="$PACKAGE_ROOT/deb/$PROJ_NAME"
PACKAGE_RPM="$PACKAGE_ROOT/rpm/$PROJ_NAME"
PREFIX_PATH="$PACKAGE_ROOT"
CORE_BUILD_DIR="$(getBuildPath hsa-core)"
ROCclr_BUILD_DIR="$(getBuildPath rocclr)"
@@ -52,7 +53,7 @@ MAKETARGET="deb"
PKGTYPE="deb"
OFFLOAD_ARCH=()
DEFAULT_OFFLOAD_ARCH=(gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942 gfx1030 gfx1031 gfx1033 gfx1034 gfx1035 gfx1100 gfx1101 gfx1102 gfx1103)
DEFAULT_OFFLOAD_ARCH=(gfx900 gfx906 gfx908 gfx90a gfx940 gfx941 gfx942 gfx1030 gfx1031 gfx1033 gfx1034 gfx1035 gfx1100 gfx1101 gfx1102 gfx1200 gfx1201)
VALID_STR=`getopt -o hcrast:o: --long help,clean,release,address_sanitizer,static,offload-arch=:,outdir: -- "$@"`
eval set -- "$VALID_STR"
@@ -168,9 +169,11 @@ build_catch_tests() {
export ROCM_PATH="$ROCM_INSTALL_PATH"
cmake \
-DCMAKE_BUILD_TYPE="${BUILD_TYPE}" \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DHIP_PLATFORM=amd \
-DROCM_PATH="$ROCM_INSTALL_PATH" \
-DOFFLOAD_ARCH_STR="$OFFLOAD_ARCH_STR" \
$(rocm_cmake_params) \
$(rocm_common_cmake_params) \
-DCPACK_RPM_DEBUGINFO_PACKAGE=FALSE \
-DCPACK_DEBIAN_DEBUGINFO_PACKAGE=FALSE \
@@ -206,6 +209,8 @@ package_samples() {
export ROCM_PATH="$ROCM_INSTALL_PATH"
cmake \
-DROCM_PATH="$ROCM_INSTALL_PATH" \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
$(rocm_cmake_params) \
$(rocm_common_cmake_params) \
-DCMAKE_MODULE_PATH="$CMAKE_PATH/hip" \
-DCPACK_INSTALL_PREFIX="$ROCM_INSTALL_PATH" \

View File

@@ -0,0 +1,41 @@
#!/bin/bash
set -ex
source "$(dirname "${BASH_SOURCE[0]}")/compute_helper.sh"
set_component_src hipBLAS-common
build_hipblas-common() {
echo "Start build"
cd $COMPONENT_SRC
mkdir -p "$BUILD_DIR" && cd "$BUILD_DIR"
init_rocm_common_cmake_params
cmake \
"${rocm_math_common_cmake_params[@]}" \
"$COMPONENT_SRC"
cmake --build "$BUILD_DIR" -- install
cmake --build "$BUILD_DIR" -- package
rm -rf _CPack_Packages/ && find -name '*.o' -delete
mkdir -p $PACKAGE_DIR && cp ${BUILD_DIR}/*.${PKGTYPE} $PACKAGE_DIR
show_build_cache_stats
}
clean_hipblas-common() {
echo "Cleaning hipBLAS-common build directory: ${BUILD_DIR} ${PACKAGE_DIR}"
rm -rf "$BUILD_DIR" "$PACKAGE_DIR"
echo "Done!"
}
stage2_command_args "$@"
case $TARGET in
build) build_hipblas-common ;;
outdir) print_output_directory ;;
clean) clean_hipblas-common ;;
*) die "Invalid target $TARGET" ;;
esac

View File

@@ -10,6 +10,12 @@ build_hipblas() {
echo "Start build"
CXX="g++"
CXX_FLAG=
if [ "${ENABLE_STATIC_BUILDS}" == "true" ]; then
CXX="amdclang++"
CXX_FLAG="-DCMAKE_CXX_COMPILER=${ROCM_PATH}/llvm/bin/clang++"
fi
CLIENTS_SAMPLES="ON"
if [ "${ENABLE_ADDRESS_SANITIZER}" == "true" ]; then
set_asan_env_vars
@@ -17,6 +23,8 @@ build_hipblas() {
CLIENTS_SAMPLES="OFF"
fi
SHARED_LIBS="ON"
echo "C compiler: $CC"
echo "CXX compiler: $CXX"
echo "FC compiler: $FC"
@@ -33,11 +41,12 @@ build_hipblas() {
${LAUNCHER_FLAGS} \
"${rocm_math_common_cmake_params[@]}" \
-DUSE_CUDA=OFF \
-DBUILD_SHARED_LIBS=$SHARED_LIBS \
-DBUILD_CLIENTS_TESTS=ON \
-DBUILD_CLIENTS_BENCHMARKS=ON \
-DBUILD_CLIENTS_SAMPLES="${CLIENTS_SAMPLES}" \
-DCPACK_SET_DESTDIR=OFF \
-DBUILD_ADDRESS_SANITIZER="${ADDRESS_SANITIZER}" \
${CXX_FLAG} \
"$COMPONENT_SRC"
cmake --build "$BUILD_DIR" -- -j${PROC}

Some files were not shown because too many files have changed in this diff.