# Compare commits: rocm-3.10...docs/5.0.0

474 commits
| Author | SHA1 | Date |
|---|---|---|
|  | 815a7aa81d |  |
|  | 966d100108 |  |
|  | ee0084a97d |  |
|  | 110f9c37d6 |  |
|  | 0bf18fc59a |  |
|  | 5ef1b40475 |  |
|  | 1e9227e254 |  |
|  | b1e441451e |  |
|  | d3a0e85c21 |  |
|  | 1f39f027fc |  |
|  | eec431a98f |  |
|  | e651705e91 |  |
|  | 34e9bb3c2e |  |
|  | 0ace5e9db2 |  |
|  | 7062df1c67 |  |
|  | 24db72b8a8 |  |
|  | 4668632fa2 |  |
|  | 270bc73661 |  |
|  | a7ce874940 |  |
|  | e78e6a9a23 |  |
|  | 4bf9dc9560 |  |
|  | 0dd3fc9eb4 |  |
|  | 0908ed22b1 |  |
|  | 683b940a89 |  |
|  | 3f60f1df3b |  |
|  | d9543485ce |  |
|  | 65180d309c |  |
|  | 07901f4a38 |  |
|  | a6e350fbe1 |  |
|  | e7c7e1ea3b |  |
|  | 6c541a111a |  |
|  | 9841218253 |  |
|  | 2915e7a484 |  |
|  | 0795eadc2d |  |
|  | 92c0940012 |  |
|  | 4a069ba039 |  |
|  | 6da407a83c |  |
|  | 27fb7070fd |  |
|  | 1f58c354e5 |  |
|  | a9cda12159 |  |
|  | de5e5c2b84 |  |
|  | 0e8714e498 |  |
|  | 2193408ae8 |  |
|  | f445672f4b |  |
|  | c5b4b70f11 |  |
|  | d2cd9e2141 |  |
|  | 3b7d8b102a |  |
|  | 18d5dd866c |  |
|  | 9829be899e |  |
|  | 5cf7fff338 |  |
|  | 7a16e52d52 |  |
|  | d4dcd6b9e5 |  |
|  | 71a132b926 |  |
|  | 9b8603d58c |  |
|  | 4cb5510bf0 |  |
|  | 676953fe8c |  |
|  | 15292ddebe |  |
|  | 89986d332d |  |
|  | de6fc1634a |  |
|  | 52986c3635 |  |
|  | b86717e454 |  |
|  | 7dbd277203 |  |
|  | 31ee8e712c |  |
|  | 55eda666d5 |  |
|  | 01e24da121 |  |
|  | f68c47d748 |  |
|  | 2c0a351bbd |  |
|  | b6509809d3 |  |
|  | a29205cc5c |  |
|  | 16c4d22099 |  |
|  | ed3335c3a5 |  |
|  | 30f27c4644 |  |
|  | f4be54f896 |  |
|  | 9c04aef6a5 |  |
|  | c7d4e75e95 |  |
|  | aabbea88f2 |  |
|  | 7747e130b9 |  |
|  | a471e8debe |  |
|  | 8c86526f98 |  |
|  | a42fae5140 |  |
|  | bcb3dd3b4a |  |
|  | 8784fe3fba |  |
|  | 6e79d204b8 |  |
|  | 7076bc18ca |  |
|  | 519df7a51f |  |
|  | 90c697b6d3 |  |
|  | 125cc37981 |  |
|  | 5752b5986c |  |
|  | 2829c088c2 |  |
|  | 3b9fb62600 |  |
|  | b7222caed2 |  |
|  | c285dd729f |  |
|  | 0c93636d23 |  |
|  | 3fa5f1fddc |  |
|  | 17b029b885 |  |
|  | 460f46c3be |  |
|  | 6feca81dd0 |  |
|  | ec8496041a |  |
|  | c7350c08ab |  |
|  | c1809766e6 |  |
|  | 61df1ec8c6 |  |
|  | 983987aab5 |  |
|  | 914b62e219 |  |
|  | faac45772c |  |
|  | d206494272 |  |
|  | 26c73a3986 |  |
|  | dc74008ac6 |  |
|  | 108287dcd7 |  |
|  | 38440915ef |  |
|  | d9c434881a |  |
|  | 4c795d45f6 |  |
|  | ef0a88ea0e |  |
|  | 34578f0193 |  |
|  | 6d32125543 |  |
|  | f4a481e58b |  |
|  | 081a2948ff |  |
|  | 6c1fff6692 |  |
|  | 0b249ff088 |  |
|  | 49d4d1b6bc |  |
|  | f953a99298 |  |
|  | 4096b867d8 |  |
|  | 494ba37d87 |  |
|  | df32eed823 |  |
|  | b173f6b226 |  |
|  | 09423f1e4e |  |
|  | d9f272a505 |  |
|  | ba14589a9a |  |
|  | f8fe609302 |  |
|  | fd9ae73706 |  |
|  | 58481f3b83 |  |
|  | 012e4c542b |  |
|  | 55b5b66901 |  |
|  | 62ed404058 |  |
|  | 66ed6adf6c |  |
|  | e04c646088 |  |
|  | fcc6283748 |  |
|  | 28a4b8d477 |  |
|  | 2aec75e201 |  |
|  | 2072f82761 |  |
|  | 5c4ab7d675 |  |
|  | d5eb2b25f2 |  |
|  | bcc1432d83 |  |
|  | 776605266c |  |
|  | 4c62bb74ff |  |
|  | 57c601262b |  |
|  | b897bddf38 |  |
|  | 48db1eea8d |  |
|  | 08821f1098 |  |
|  | 3a93ce8fc9 |  |
|  | a167088d41 |  |
|  | 85dd6e4234 |  |
|  | 507530aeb5 |  |
|  | 2de2059feb |  |
|  | b81a27c2a2 |  |
|  | 19c0ba1150 |  |
|  | 043427989f |  |
|  | 21033eb98b |  |
|  | c3298b5944 |  |
|  | 7bbd5bc79d |  |
|  | b1a971b432 |  |
|  | 41dc33d95d |  |
|  | 97339ffe33 |  |
|  | 47688609af |  |
|  | 1533f5edb6 |  |
|  | 1ec7e1c933 |  |
|  | 64a243fc29 |  |
|  | fa298efcbb |  |
|  | 08d8d2612a |  |
|  | fc3f2ccb38 |  |
|  | 9683d6f776 |  |
|  | 9833748ff0 |  |
|  | e83512605d |  |
|  | e7ed560520 |  |
|  | 110e2444e9 |  |
|  | 71c16c4b96 |  |
|  | 2e7266c829 |  |
|  | 80778f173f |  |
|  | 415f3b93ad |  |
|  | 63b3b55ed5 |  |
|  | 286f120d9a |  |
|  | 519707db4f |  |
|  | b213d94dd6 |  |
|  | 875e07b801 |  |
|  | ac42cbc97b |  |
|  | 20f8185e0d |  |
|  | 934cc718b1 |  |
|  | 5534e47b16 |  |
|  | ca10bba2c3 |  |
|  | 8702d500ad |  |
|  | e9ee6b9874 |  |
|  | 2f51e147f2 |  |
|  | 01422a3cc4 |  |
|  | 903aae3321 |  |
|  | d76b9b2fbf |  |
|  | 7f4b69c3a0 |  |
|  | e65c857ad2 |  |
|  | b951a2bef8 |  |
|  | 1a570efb48 |  |
|  | 75f4c018cc |  |
|  | f1a46ae86b |  |
|  | 8bc40f4649 |  |
|  | d614c6e500 |  |
|  | 3b4c592c53 |  |
|  | bcba7ed752 |  |
|  | 9144ac6238 |  |
|  | b65adbd159 |  |
|  | 4ce8372761 |  |
|  | 5c80077b67 |  |
|  | 5787b613f6 |  |
|  | 5ce34c593a |  |
|  | 3db2cff387 |  |
|  | 555e4f078b |  |
|  | b19681711c |  |
|  | 67cd4c3789 |  |
|  | a2790438b5 |  |
|  | e6646b2f38 |  |
|  | 9126c010d4 |  |
|  | 52876c050b |  |
|  | 81722b3451 |  |
|  | e464db856c |  |
|  | 8b49837f76 |  |
|  | 0e2b33f904 |  |
|  | 4eb9653b68 |  |
|  | a1884e46fe |  |
|  | 419f1a9560 |  |
|  | a9c87c8b13 |  |
|  | 002cca3756 |  |
|  | 48ded5bc01 |  |
|  | ee989c21f9 |  |
|  | b638a620ac |  |
|  | 36a57f1389 |  |
|  | c92f5af561 |  |
|  | 09001c933b |  |
|  | b7c9943ff7 |  |
|  | 25a52ec827 |  |
|  | b14834e5a1 |  |
|  | 368178d758 |  |
|  | a047d37bfe |  |
|  | 7536ef0196 |  |
|  | 5241caf779 |  |
|  | 1ae99c5e4b |  |
|  | f034733da2 |  |
|  | d4879fdec4 |  |
|  | 60957c84b7 |  |
|  | 3859eef2a9 |  |
|  | 4915438362 |  |
|  | c4ce059e12 |  |
|  | ca4d4597ba |  |
|  | 418e8bfda6 |  |
|  | 82477df454 |  |
|  | 075562b1f2 |  |
|  | 74d067032e |  |
|  | 526846dc7e |  |
|  | a47030ca10 |  |
|  | fac29ca466 |  |
|  | 986ba19e80 |  |
|  | e00f7f6d59 |  |
|  | cac8ecf2bc |  |
|  | 2653e081e2 |  |
|  | 34eb2a85f3 |  |
|  | 164129954e |  |
|  | eaf8e74802 |  |
|  | 403c81a83e |  |
|  | ced195c62c |  |
|  | 3486206b09 |  |
|  | c379917e1c |  |
|  | 0a60a3b256 |  |
|  | 99a3476a5e |  |
|  | ad3a774274 |  |
|  | 5bb9c86fb6 |  |
|  | 0a0b750e0e |  |
|  | c6ec9d7b55 |  |
|  | a1eac48dea |  |
|  | 94f4488904 |  |
|  | afc1a33ad7 |  |
|  | 9b6fb663c9 |  |
|  | 7d78a111b4 |  |
|  | f04316efdb |  |
|  | 0083f955a7 |  |
|  | 237e662486 |  |
|  | 475711bb7d |  |
|  | dc2b00f43d |  |
|  | c0cd1b72ce |  |
|  | 95493f625c |  |
|  | c3f91afb26 |  |
|  | d827b836b2 |  |
|  | 99d5fb03e0 |  |
|  | 1f6c308006 |  |
|  | bb3aa02a86 |  |
|  | 9b82c422d0 |  |
|  | 8eed074e8a |  |
|  | 53db303dd3 |  |
|  | 36ec27d9a4 |  |
|  | d78bb0121b |  |
|  | f72c130e06 |  |
|  | c058e7a1c9 |  |
|  | 0d12925fe9 |  |
|  | f088317e44 |  |
|  | ca8f60e96f |  |
|  | ba8c56abdc |  |
|  | 18410afcd7 |  |
|  | c637c2a964 |  |
|  | 5a56a31fac |  |
|  | 82b35be1ee |  |
|  | 03fb0f863c |  |
|  | c730ade1e3 |  |
|  | 164a386ed6 |  |
|  | db517138f6 |  |
|  | bc63e35725 |  |
|  | c9a8556171 |  |
|  | 91f193a510 |  |
|  | b2fac149b5 |  |
|  | 1d23bb0ec6 |  |
|  | fedfa50634 |  |
|  | 51ea894667 |  |
|  | 63b0e6d273 |  |
|  | f1383c5d16 |  |
|  | f3ec7b4720 |  |
|  | 9492fc9b0d |  |
|  | c103fe233f |  |
|  | 63c16a229e |  |
|  | 18aa89804f |  |
|  | 65a4524834 |  |
|  | b04ab30e81 |  |
|  | 4c8787087a |  |
|  | 7cd85779c4 |  |
|  | c676ff480e |  |
|  | 6d19f5b6c1 |  |
|  | 4679e8ac87 |  |
|  | 8a3209f985 |  |
|  | 79d0d00b2a |  |
|  | db5121cdfe |  |
|  | 035f4995bb |  |
|  | f63e3f9ce1 |  |
|  | 4e56ed7dc3 |  |
|  | 2faf5b6ab7 |  |
|  | e69b7e6f71 |  |
|  | d53ffd1c89 |  |
|  | e177599de1 |  |
|  | 9fc1ba3970 |  |
|  | 520764faa3 |  |
|  | 7d0b53c87f |  |
|  | c3a8ecd0c5 |  |
|  | 21cf37b2df |  |
|  | f4419a3d1c |  |
|  | 5ffdcf84ab |  |
|  | 085295daea |  |
|  | cf5cec2580 |  |
|  | e7a93ae3f5 |  |
|  | e3b7d2f39d |  |
|  | 0c4565d913 |  |
|  | 313a589132 |  |
|  | 1caf5514e8 |  |
|  | d029ad24cf |  |
|  | ca6638d917 |  |
|  | 5cba920022 |  |
|  | cefc8ef1d7 |  |
|  | b71c5705a2 |  |
|  | 977a1d14cd |  |
|  | 3ab60d1326 |  |
|  | 4b5b13294e |  |
|  | ce66b14d9e |  |
|  | 01f63f546f |  |
|  | 72eab2779e |  |
|  | 8a366db3d7 |  |
|  | 8267a84345 |  |
|  | f7b3a38d49 |  |
|  | 12e3bb376b |  |
|  | a44e82f263 |  |
|  | 9af988ffc8 |  |
|  | 5fed386cf1 |  |
|  | d729428302 |  |
|  | 8611c5f450 |  |
|  | ae0b56d029 |  |
|  | 3862c69b09 |  |
|  | be34f32307 |  |
|  | 08c9cce749 |  |
|  | a83a7c9206 |  |
|  | 71faa9c81f |  |
|  | 6b021edb23 |  |
|  | 3936d236e6 |  |
|  | dbcb26756d |  |
|  | 96de448de6 |  |
|  | ee0bc562e6 |  |
|  | 376b8673b7 |  |
|  | e9147a9103 |  |
|  | fab1a697f0 |  |
|  | a369e642b8 |  |
|  | 9101972654 |  |
|  | f3ba8df53d |  |
|  | ba7a87a2dc |  |
|  | df6d746d50 |  |
|  | 2b2bab5bf3 |  |
|  | 5ec9b12f99 |  |
|  | 803148affd |  |
|  | 9275fb6298 |  |
|  | b6ae3f145e |  |
|  | f80eefc965 |  |
|  | c5d91843a7 |  |
|  | 733a9c097c |  |
|  | ff2b3f8a23 |  |
|  | 5a4cf1cee1 |  |
|  | dccf5ca356 |  |
|  | 8b20bd56a6 |  |
|  | 65cb10e5e8 |  |
|  | ac2625dd26 |  |
|  | 3716310e93 |  |
|  | 2dee17f7d6 |  |
|  | 61e8b0d70e |  |
|  | 8a3304a8d9 |  |
|  | 55488a9424 |  |
|  | ff4a1d4059 |  |
|  | 4b2d93fb7e |  |
|  | 061ccd21b8 |  |
|  | 0ed1bd9f8e |  |
|  | 856c74de55 |  |
|  | 12c6f60e45 |  |
|  | 897b1e8e2d |  |
|  | 382ea7553f |  |
|  | 2014b47dcb |  |
|  | b9f9bafd9b |  |
|  | ff15f420c6 |  |
|  | f51c9be952 |  |
|  | 64e254dc99 |  |
|  | af7f921474 |  |
|  | 8b3377749f |  |
|  | c3a3ce55d1 |  |
|  | 64c727449b |  |
|  | 182dfc65cf |  |
|  | d529d5c585 |  |
|  | cca6bc4921 |  |
|  | e3dbbb6bbf |  |
|  | 6e39c80762 |  |
|  | f96f5df625 |  |
|  | 0639a312c8 |  |
|  | a2878b1460 |  |
|  | 1daf261d25 |  |
|  | 5848bc3d7e |  |
|  | d9692359ad |  |
|  | 25110784cf |  |
|  | 9ff31d316f |  |
|  | b072119ad6 |  |
|  | 095544032c |  |
|  | 26a39a637a |  |
|  | 6fb55e6f45 |  |
|  | 290091946f |  |
|  | 2874a8ae6c |  |
|  | f62f2b24da |  |
|  | 790567e3bd |  |
|  | 57d7a202d4 |  |
|  | 80d2aa739b |  |
|  | b18851f804 |  |
|  | 0f0dbf0c92 |  |
|  | 224a45379f |  |
|  | f521943747 |  |
|  | 2b7f806b10 |  |
|  | cd55ef67c9 |  |
|  | 9320669eee |  |
|  | c1211c66e3 |  |
|  | c8fcff6488 |  |
|  | 7118076ab4 |  |
|  | ec5523395a |  |
|  | 41d8f6a235 |  |
|  | c69eef858a |  |
|  | 5b902ca38c |  |
|  | 761ed4e70f |  |
|  | 67bd7501c1 |  |
|  | d62f1c4247 |  |
|  | c3d5bc6406 |  |
|  | db45731729 |  |
|  | 34552e95e0 |  |
|  | 8d0c516c5c |  |
|  | 5cba919767 |  |
|  | bb0022e972 |  |
**.github/CODEOWNERS** (new file, +1 line)

```
* @saadrahim @Rmalavally @amd-aakash @zhang2amd @jlgreathouse @samjwu @MathiasMagnus
```
**.github/dependabot.yml** (new file, +12 lines)

```yaml
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates

version: 2
updates:
  - package-ecosystem: "pip" # See documentation for possible values
    directory: "/docs/sphinx" # Location of package manifests
    open-pull-requests-limit: 10
    schedule:
      interval: "daily"
```
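The dependabot.yml in this change set only watches the pip manifests under `docs/sphinx`. As a hedged sketch (an assumption for illustration, not part of this compare view), the same file could also track the action versions used by the workflows by appending a second `updates` entry:

```yaml
  # Hypothetical addition: keep the actions used in .github/workflows fresh.
  - package-ecosystem: "github-actions"
    directory: "/" # Dependabot finds workflow files under .github/workflows
    schedule:
      interval: "weekly"
```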
**.github/workflows/linting.yml** (new file, +56 lines)

```yaml
name: Linting

on:
  push:
    branches:
      - develop
      - main
  pull_request:
    branches:
      - develop
      - main

concurrency:
  group: ${{ github.ref }}-${{ github.workflow }}
  cancel-in-progress: true

jobs:
  lint-rest:
    name: "RestructuredText"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Install rst-lint
        run: pip install restructuredtext-lint
      - name: Lint ResT files
        run: rst-lint ${{ join(github.workspace, '/docs') }}

  lint-md:
    name: "Markdown"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Use markdownlint-cli2
        uses: DavidAnson/markdownlint-cli2-action@v10.0.1
        with:
          globs: '**/*.md'

  spelling:
    name: "Spelling"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Fetch config
        shell: sh
        run: |
          curl --silent --show-error --fail --location https://raw.github.com/RadeonOpenCompute/rocm-docs-core/develop/.spellcheck.yaml -O
          curl --silent --show-error --fail --location https://raw.github.com/RadeonOpenCompute/rocm-docs-core/develop/.wordlist.txt >> .wordlist.txt
      - name: Run spellcheck
        uses: rojopolis/spellcheck-github-actions@0.30.0
      - name: On fail
        if: failure()
        run: |
          echo "Please check for spelling mistakes or add them to '.wordlist.txt' in either the root of this project or in rocm-docs-core."
```
**.gitignore** (new file, +18 lines)

```
.venv
.vscode
build

# documentation artifacts
_build/
_images/
_static/
_templates/
_toc.yml
docBin/
_doxygen/
_readthedocs/

# avoid duplicating contributing.md due to conf.py
docs/contributing.md
docs/release.md
docs/CHANGELOG.md
```
**.markdownlint-cli2.yaml** (new file, +14 lines)

```yaml
config:
  default: true
  MD013: false
  MD026:
    punctuation: '.,;:!'
  MD029:
    style: ordered
  MD033: false
  MD034: false
  MD041: false
ignores:
  - CHANGELOG.md
  - "{,docs/}{RELEASE,release}.md"
  - tools/autotag/templates/**/*.md
```
**.readthedocs.yaml** (new file, +22 lines)

```yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

version: 2

sphinx:
  configuration: docs/conf.py

formats: [htmlzip]

python:
  install:
    - requirements: docs/sphinx/requirements.txt

build:
  os: ubuntu-22.04
  tools:
    python: "3.10"
  apt_packages:
    - "doxygen"
    - "gfortran" # For pre-processing fortran sources
    - "graphviz" # For dot graphs in doxygen
```
**.wordlist.txt** (new file, +29 lines)

```
# isv_deployment_win
ABI
# gpu_aware_mpi
DMA
GDR
HCA
MPI
MVAPICH
Mellanox's
NIC
OFED
OSU
OpenFabrics
PeerDirect
RDMA
UCX
ib_core
# linear algebra
LAPACK
MMA
backends
cuSOLVER
cuSPARSE
# tuning_guides
BMC
DGEMM
HPCG
HPL
IOPM
```
**CHANGELOG.md** (new file, +735 lines)

# Release Notes
<!-- Do not edit this file! This file is autogenerated with -->
<!-- tools/autotag/tag_script.py -->

<!-- Disable lints since this is an auto-generated file. -->
<!-- markdownlint-disable blanks-around-headers -->
<!-- markdownlint-disable no-duplicate-header -->
<!-- markdownlint-disable no-blanks-blockquote -->
<!-- markdownlint-disable ul-indent -->
<!-- markdownlint-disable no-trailing-spaces -->

<!-- spellcheck-disable -->

The release notes for the ROCm platform.

-------------------

## ROCm 5.0.0
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable no-blanks-blockquote -->
### What's New in This Release

#### HIP Enhancements

The ROCm v5.0 release consists of the following HIP enhancements.

##### HIP Installation Guide Updates

The HIP Installation Guide is updated to include building HIP from source on the NVIDIA platform.

Refer to the HIP Installation Guide v5.0 for more details.

##### Managed Memory Allocation

Managed memory, including the `__managed__` keyword, is now supported in the HIP combined host/device compilation. Through unified memory allocation, managed memory allows data to be shared and accessed by both the CPU and GPU through a single pointer. The allocation is managed by the AMD GPU driver using the Linux Heterogeneous Memory Management (HMM) mechanism. The user can call the managed memory API `hipMallocManaged` to allocate a large chunk of HMM memory, execute kernels on a device, and fetch data between the host and device as needed.

> **Note**
>
> In a HIP application, it is recommended to do a capability check before calling the managed memory APIs. For example:
>
> ```cpp
> int managed_memory = 0;
> HIPCHECK(hipDeviceGetAttribute(&managed_memory,
>                                hipDeviceAttributeManagedMemory, p_gpuDevice));
> if (!managed_memory) {
>   printf("info: managed memory access not supported on the device %d\n Skipped\n", p_gpuDevice);
> }
> else {
>   HIPCHECK(hipSetDevice(p_gpuDevice));
>   HIPCHECK(hipMallocManaged(&Hmm, N * sizeof(T)));
>   . . .
> }
> ```

> **Note**
>
> The managed memory capability check may not be necessary; however, if HMM is not supported, managed malloc will fall back to using system memory. Other managed memory API calls will, then, have undefined behavior.

Refer to the HIP API documentation for more details on managed memory APIs.

For the application, see

<https://github.com/ROCm-Developer-Tools/HIP/blob/rocm-4.5.x/tests/src/runtimeApi/memory/hipMallocManaged.cpp>
#### New Environment Variable

The following new environment variable is added in this release:

| Environment Variable | Value | Description |
|----------------------|-----------------------|-------------|
| HSA_COOP_CU_COUNT | 0 or 1 (default is 0) | Some processors support more CUs than can reliably be used in a cooperative dispatch. Setting the environment variable HSA_COOP_CU_COUNT to 1 will cause ROCr to return the correct CU count for cooperative groups through the HSA_AMD_AGENT_INFO_COOPERATIVE_COMPUTE_UNIT_COUNT attribute of hsa_agent_get_info(). Setting HSA_COOP_CU_COUNT to other values, or leaving it unset, will cause ROCr to return the same CU count for the attributes HSA_AMD_AGENT_INFO_COOPERATIVE_COMPUTE_UNIT_COUNT and HSA_AMD_AGENT_INFO_COMPUTE_UNIT_COUNT. Future ROCm releases will make HSA_COOP_CU_COUNT=1 the default. |
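Since HSA_COOP_CU_COUNT is a plain environment variable, opting in is just a matter of exporting it before launching the application. A minimal sketch (the application name is hypothetical; only the variable itself comes from the release notes):

```shell
# Opt in to the corrected cooperative-group CU count described above.
export HSA_COOP_CU_COUNT=1
echo "HSA_COOP_CU_COUNT=${HSA_COOP_CU_COUNT}"
# A real run would then launch the HIP application in this environment,
# e.g. `./my_coop_app` (hypothetical binary name).
```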
### Breaking Changes

#### Runtime Breaking Change

Re-ordering of the enumerated type in hip_runtime_api.h to better match NV. See below for the difference in enumerated types.

ROCm software will be affected if any of the defined enums listed below are used in the code. Applications built with ROCm v5.0 enumerated types will work with a ROCm 4.5.2 driver. However, an undefined behavior error will occur with a ROCm v4.5.2 application that uses these enumerated types with a ROCm 5.0 runtime.
```diff
 typedef enum hipDeviceAttribute_t {
-    hipDeviceAttributeMaxThreadsPerBlock,       ///< Maximum number of threads per block.
-    hipDeviceAttributeMaxBlockDimX,             ///< Maximum x-dimension of a block.
-    hipDeviceAttributeMaxBlockDimY,             ///< Maximum y-dimension of a block.
-    hipDeviceAttributeMaxBlockDimZ,             ///< Maximum z-dimension of a block.
-    hipDeviceAttributeMaxGridDimX,              ///< Maximum x-dimension of a grid.
-    hipDeviceAttributeMaxGridDimY,              ///< Maximum y-dimension of a grid.
-    hipDeviceAttributeMaxGridDimZ,              ///< Maximum z-dimension of a grid.
-    hipDeviceAttributeMaxSharedMemoryPerBlock,  ///< Maximum shared memory available per block in
-                                                ///< bytes.
-    hipDeviceAttributeTotalConstantMemory,      ///< Constant memory size in bytes.
-    hipDeviceAttributeWarpSize,                 ///< Warp size in threads.
-    hipDeviceAttributeMaxRegistersPerBlock,  ///< Maximum number of 32-bit registers available to a
-                                             ///< thread block. This number is shared by all thread
-                                             ///< blocks simultaneously resident on a
-                                             ///< multiprocessor.
-    hipDeviceAttributeClockRate,             ///< Peak clock frequency in kilohertz.
-    hipDeviceAttributeMemoryClockRate,       ///< Peak memory clock frequency in kilohertz.
-    hipDeviceAttributeMemoryBusWidth,        ///< Global memory bus width in bits.
-    hipDeviceAttributeMultiprocessorCount,   ///< Number of multiprocessors on the device.
-    hipDeviceAttributeComputeMode,           ///< Compute mode that device is currently in.
-    hipDeviceAttributeL2CacheSize,  ///< Size of L2 cache in bytes. 0 if the device doesn't have L2
-                                    ///< cache.
-    hipDeviceAttributeMaxThreadsPerMultiProcessor,  ///< Maximum resident threads per
-                                                    ///< multiprocessor.
-    hipDeviceAttributeComputeCapabilityMajor,       ///< Major compute capability version number.
-    hipDeviceAttributeComputeCapabilityMinor,       ///< Minor compute capability version number.
-    hipDeviceAttributeConcurrentKernels,  ///< Device can possibly execute multiple kernels
-                                          ///< concurrently.
-    hipDeviceAttributePciBusId,           ///< PCI Bus ID.
-    hipDeviceAttributePciDeviceId,        ///< PCI Device ID.
-    hipDeviceAttributeMaxSharedMemoryPerMultiprocessor,  ///< Maximum Shared Memory Per
-                                                         ///< Multiprocessor.
-    hipDeviceAttributeIsMultiGpuBoard,                   ///< Multiple GPU devices.
-    hipDeviceAttributeIntegrated,                        ///< iGPU
-    hipDeviceAttributeCooperativeLaunch,                 ///< Support cooperative launch
-    hipDeviceAttributeCooperativeMultiDeviceLaunch,      ///< Support cooperative launch on multiple devices
-    hipDeviceAttributeMaxTexture1DWidth,    ///< Maximum number of elements in 1D images
-    hipDeviceAttributeMaxTexture2DWidth,    ///< Maximum dimension width of 2D images in image elements
-    hipDeviceAttributeMaxTexture2DHeight,   ///< Maximum dimension height of 2D images in image elements
-    hipDeviceAttributeMaxTexture3DWidth,    ///< Maximum dimension width of 3D images in image elements
-    hipDeviceAttributeMaxTexture3DHeight,   ///< Maximum dimensions height of 3D images in image elements
-    hipDeviceAttributeMaxTexture3DDepth,    ///< Maximum dimensions depth of 3D images in image elements
+    hipDeviceAttributeCudaCompatibleBegin = 0,

-    hipDeviceAttributeHdpMemFlushCntl,      ///< Address of the HDP_MEM_COHERENCY_FLUSH_CNTL register
-    hipDeviceAttributeHdpRegFlushCntl,      ///< Address of the HDP_REG_COHERENCY_FLUSH_CNTL register
+    hipDeviceAttributeEccEnabled = hipDeviceAttributeCudaCompatibleBegin, ///< Whether ECC support is enabled.
+    hipDeviceAttributeAccessPolicyMaxWindowSize,        ///< Cuda only. The maximum size of the window policy in bytes.
+    hipDeviceAttributeAsyncEngineCount,                 ///< Cuda only. Asynchronous engines number.
+    hipDeviceAttributeCanMapHostMemory,                 ///< Whether host memory can be mapped into device address space
+    hipDeviceAttributeCanUseHostPointerForRegisteredMem,///< Cuda only. Device can access host registered memory
+                                                        ///< at the same virtual address as the CPU
+    hipDeviceAttributeClockRate,                        ///< Peak clock frequency in kilohertz.
+    hipDeviceAttributeComputeMode,                      ///< Compute mode that device is currently in.
+    hipDeviceAttributeComputePreemptionSupported,       ///< Cuda only. Device supports Compute Preemption.
+    hipDeviceAttributeConcurrentKernels,                ///< Device can possibly execute multiple kernels concurrently.
+    hipDeviceAttributeConcurrentManagedAccess,          ///< Device can coherently access managed memory concurrently with the CPU
+    hipDeviceAttributeCooperativeLaunch,                ///< Support cooperative launch
+    hipDeviceAttributeCooperativeMultiDeviceLaunch,     ///< Support cooperative launch on multiple devices
+    hipDeviceAttributeDeviceOverlap,                    ///< Cuda only. Device can concurrently copy memory and execute a kernel.
+                                                        ///< Deprecated. Use instead asyncEngineCount.
+    hipDeviceAttributeDirectManagedMemAccessFromHost,   ///< Host can directly access managed memory on
+                                                        ///< the device without migration
+    hipDeviceAttributeGlobalL1CacheSupported,           ///< Cuda only. Device supports caching globals in L1
+    hipDeviceAttributeHostNativeAtomicSupported,        ///< Cuda only. Link between the device and the host supports native atomic operations
+    hipDeviceAttributeIntegrated,                       ///< Device is integrated GPU
+    hipDeviceAttributeIsMultiGpuBoard,                  ///< Multiple GPU devices.
+    hipDeviceAttributeKernelExecTimeout,                ///< Run time limit for kernels executed on the device
+    hipDeviceAttributeL2CacheSize,                      ///< Size of L2 cache in bytes. 0 if the device doesn't have L2 cache.
+    hipDeviceAttributeLocalL1CacheSupported,            ///< caching locals in L1 is supported
+    hipDeviceAttributeLuid,                             ///< Cuda only. 8-byte locally unique identifier in 8 bytes. Undefined on TCC and non-Windows platforms
+    hipDeviceAttributeLuidDeviceNodeMask,               ///< Cuda only. Luid device node mask. Undefined on TCC and non-Windows platforms
+    hipDeviceAttributeComputeCapabilityMajor,           ///< Major compute capability version number.
+    hipDeviceAttributeManagedMemory,                    ///< Device supports allocating managed memory on this system
+    hipDeviceAttributeMaxBlocksPerMultiProcessor,       ///< Cuda only. Max block size per multiprocessor
+    hipDeviceAttributeMaxBlockDimX,                     ///< Max block size in width.
+    hipDeviceAttributeMaxBlockDimY,                     ///< Max block size in height.
+    hipDeviceAttributeMaxBlockDimZ,                     ///< Max block size in depth.
+    hipDeviceAttributeMaxGridDimX,                      ///< Max grid size in width.
+    hipDeviceAttributeMaxGridDimY,                      ///< Max grid size in height.
+    hipDeviceAttributeMaxGridDimZ,                      ///< Max grid size in depth.
+    hipDeviceAttributeMaxSurface1D,                     ///< Maximum size of 1D surface.
+    hipDeviceAttributeMaxSurface1DLayered,              ///< Cuda only. Maximum dimensions of 1D layered surface.
+    hipDeviceAttributeMaxSurface2D,                     ///< Maximum dimension (width, height) of 2D surface.
+    hipDeviceAttributeMaxSurface2DLayered,              ///< Cuda only. Maximum dimensions of 2D layered surface.
+    hipDeviceAttributeMaxSurface3D,                     ///< Maximum dimension (width, height, depth) of 3D surface.
+    hipDeviceAttributeMaxSurfaceCubemap,                ///< Cuda only. Maximum dimensions of Cubemap surface.
+    hipDeviceAttributeMaxSurfaceCubemapLayered,         ///< Cuda only. Maximum dimension of Cubemap layered surface.
+    hipDeviceAttributeMaxTexture1DWidth,                ///< Maximum size of 1D texture.
+    hipDeviceAttributeMaxTexture1DLayered,              ///< Cuda only. Maximum dimensions of 1D layered texture.
+    hipDeviceAttributeMaxTexture1DLinear,               ///< Maximum number of elements allocatable in a 1D linear texture.
+                                                        ///< Use cudaDeviceGetTexture1DLinearMaxWidth() instead on Cuda.
+    hipDeviceAttributeMaxTexture1DMipmap,               ///< Cuda only. Maximum size of 1D mipmapped texture.
+    hipDeviceAttributeMaxTexture2DWidth,                ///< Maximum dimension width of 2D texture.
+    hipDeviceAttributeMaxTexture2DHeight,               ///< Maximum dimension hight of 2D texture.
+    hipDeviceAttributeMaxTexture2DGather,               ///< Cuda only. Maximum dimensions of 2D texture if gather operations performed.
+    hipDeviceAttributeMaxTexture2DLayered,              ///< Cuda only. Maximum dimensions of 2D layered texture.
+    hipDeviceAttributeMaxTexture2DLinear,               ///< Cuda only. Maximum dimensions (width, height, pitch) of 2D textures bound to pitched memory.
+    hipDeviceAttributeMaxTexture2DMipmap,               ///< Cuda only. Maximum dimensions of 2D mipmapped texture.
+    hipDeviceAttributeMaxTexture3DWidth,                ///< Maximum dimension width of 3D texture.
+    hipDeviceAttributeMaxTexture3DHeight,               ///< Maximum dimension height of 3D texture.
+    hipDeviceAttributeMaxTexture3DDepth,                ///< Maximum dimension depth of 3D texture.
+    hipDeviceAttributeMaxTexture3DAlt,                  ///< Cuda only. Maximum dimensions of alternate 3D texture.
+    hipDeviceAttributeMaxTextureCubemap,                ///< Cuda only. Maximum dimensions of Cubemap texture
+    hipDeviceAttributeMaxTextureCubemapLayered,         ///< Cuda only. Maximum dimensions of Cubemap layered texture.
+    hipDeviceAttributeMaxThreadsDim,                    ///< Maximum dimension of a block
+    hipDeviceAttributeMaxThreadsPerBlock,               ///< Maximum number of threads per block.
+    hipDeviceAttributeMaxThreadsPerMultiProcessor,      ///< Maximum resident threads per multiprocessor.
+    hipDeviceAttributeMaxPitch,                         ///< Maximum pitch in bytes allowed by memory copies
+    hipDeviceAttributeMemoryBusWidth,                   ///< Global memory bus width in bits.
+    hipDeviceAttributeMemoryClockRate,                  ///< Peak memory clock frequency in kilohertz.
+    hipDeviceAttributeComputeCapabilityMinor,           ///< Minor compute capability version number.
+    hipDeviceAttributeMultiGpuBoardGroupID,             ///< Cuda only. Unique ID of device group on the same multi-GPU board
+    hipDeviceAttributeMultiprocessorCount,              ///< Number of multiprocessors on the device.
+    hipDeviceAttributeName,                             ///< Device name.
+    hipDeviceAttributePageableMemoryAccess,             ///< Device supports coherently accessing pageable memory
+                                                        ///< without calling hipHostRegister on it
+    hipDeviceAttributePageableMemoryAccessUsesHostPageTables, ///< Device accesses pageable memory via the host's page tables
+    hipDeviceAttributePciBusId,                         ///< PCI Bus ID.
+    hipDeviceAttributePciDeviceId,                      ///< PCI Device ID.
+    hipDeviceAttributePciDomainID,                      ///< PCI Domain ID.
+    hipDeviceAttributePersistingL2CacheMaxSize,         ///< Cuda11 only. Maximum l2 persisting lines capacity in bytes
+    hipDeviceAttributeMaxRegistersPerBlock,             ///< 32-bit registers available to a thread block. This number is shared
+                                                        ///< by all thread blocks simultaneously resident on a multiprocessor.
+    hipDeviceAttributeMaxRegistersPerMultiprocessor,    ///< 32-bit registers available per block.
+    hipDeviceAttributeReservedSharedMemPerBlock,        ///< Cuda11 only. Shared memory reserved by CUDA driver per block.
+    hipDeviceAttributeMaxSharedMemoryPerBlock,          ///< Maximum shared memory available per block in bytes.
+    hipDeviceAttributeSharedMemPerBlockOptin,           ///< Cuda only. Maximum shared memory per block usable by special opt in.
+    hipDeviceAttributeSharedMemPerMultiprocessor,       ///< Cuda only. Shared memory available per multiprocessor.
+    hipDeviceAttributeSingleToDoublePrecisionPerfRatio, ///< Cuda only. Performance ratio of single precision to double precision.
+    hipDeviceAttributeStreamPrioritiesSupported,        ///< Cuda only. Whether to support stream priorities.
+    hipDeviceAttributeSurfaceAlignment,                 ///< Cuda only. Alignment requirement for surfaces
+    hipDeviceAttributeTccDriver,                        ///< Cuda only. Whether device is a Tesla device using TCC driver
+    hipDeviceAttributeTextureAlignment,                 ///< Alignment requirement for textures
+    hipDeviceAttributeTexturePitchAlignment,            ///< Pitch alignment requirement for 2D texture references bound to pitched memory;
+    hipDeviceAttributeTotalConstantMemory,              ///< Constant memory size in bytes.
+    hipDeviceAttributeTotalGlobalMem,                   ///< Global memory available on devicice.
+    hipDeviceAttributeUnifiedAddressing,                ///< Cuda only. An unified address space shared with the host.
+    hipDeviceAttributeUuid,                             ///< Cuda only. Unique ID in 16 byte.
+    hipDeviceAttributeWarpSize,                         ///< Warp size in threads.

-    hipDeviceAttributeMaxPitch,              ///< Maximum pitch in bytes allowed by memory copies
-    hipDeviceAttributeTextureAlignment,      ///<Alignment requirement for textures
-    hipDeviceAttributeTexturePitchAlignment, ///<Pitch alignment requirement for 2D texture references bound to pitched memory;
-    hipDeviceAttributeKernelExecTimeout,     ///<Run time limit for kernels executed on the device
-    hipDeviceAttributeCanMapHostMemory,      ///<Device can map host memory into device address space
-    hipDeviceAttributeEccEnabled,            ///<Device has ECC support enabled
+    hipDeviceAttributeCudaCompatibleEnd = 9999,
+    hipDeviceAttributeAmdSpecificBegin = 10000,

-    hipDeviceAttributeCooperativeMultiDeviceUnmatchedFunc,        ///< Supports cooperative launch on multiple
-                                                                  ///devices with unmatched functions
-    hipDeviceAttributeCooperativeMultiDeviceUnmatchedGridDim,     ///< Supports cooperative launch on multiple
-                                                                  ///devices with unmatched grid dimensions
-    hipDeviceAttributeCooperativeMultiDeviceUnmatchedBlockDim,    ///< Supports cooperative launch on multiple
-                                                                  ///devices with unmatched block dimensions
-    hipDeviceAttributeCooperativeMultiDeviceUnmatchedSharedMem,   ///< Supports cooperative launch on multiple
-                                                                  ///devices with unmatched shared memories
-    hipDeviceAttributeAsicRevision,          ///< Revision of the GPU in this device
-    hipDeviceAttributeManagedMemory,         ///< Device supports allocating managed memory on this system
-    hipDeviceAttributeDirectManagedMemAccessFromHost, ///< Host can directly access managed memory on
-                                                      /// the device without migration
-    hipDeviceAttributeConcurrentManagedAccess,  ///< Device can coherently access managed memory
-                                                /// concurrently with the CPU
-    hipDeviceAttributePageableMemoryAccess,     ///< Device supports coherently accessing pageable memory
-                                                /// without calling hipHostRegister on it
-    hipDeviceAttributePageableMemoryAccessUsesHostPageTables, ///< Device accesses pageable memory via
-                                                              /// the host's page tables
-    hipDeviceAttributeCanUseStreamWaitValue, ///< '1' if Device supports hipStreamWaitValue32() and
-                                             ///< hipStreamWaitValue64() , '0' otherwise.
+    hipDeviceAttributeClockInstructionRate = hipDeviceAttributeAmdSpecificBegin, ///< Frequency in khz of the timer used by the device-side "clock*"
+    hipDeviceAttributeArch,                             ///< Device architecture
+    hipDeviceAttributeMaxSharedMemoryPerMultiprocessor, ///< Maximum Shared Memory PerMultiprocessor.
+    hipDeviceAttributeGcnArch,                          ///< Device gcn architecture
+    hipDeviceAttributeGcnArchName,                      ///< Device gcnArch name in 256 bytes
+    hipDeviceAttributeHdpMemFlushCntl,                  ///< Address of the HDP_MEM_COHERENCY_FLUSH_CNTL register
+    hipDeviceAttributeHdpRegFlushCntl,                  ///< Address of the HDP_REG_COHERENCY_FLUSH_CNTL register
+    hipDeviceAttributeCooperativeMultiDeviceUnmatchedFunc, ///< Supports cooperative launch on multiple
```
|
||||
+ ///< devices with unmatched functions
|
||||
+ hipDeviceAttributeCooperativeMultiDeviceUnmatchedGridDim, ///< Supports cooperative launch on multiple
|
||||
+ ///< devices with unmatched grid dimensions
|
||||
+ hipDeviceAttributeCooperativeMultiDeviceUnmatchedBlockDim, ///< Supports cooperative launch on multiple
|
||||
+ ///< devices with unmatched block dimensions
|
||||
+ hipDeviceAttributeCooperativeMultiDeviceUnmatchedSharedMem, ///< Supports cooperative launch on multiple
|
||||
+ ///< devices with unmatched shared memories
|
||||
+ hipDeviceAttributeIsLargeBar, ///< Whether it is LargeBar
|
||||
+ hipDeviceAttributeAsicRevision, ///< Revision of the GPU in this device
|
||||
+ hipDeviceAttributeCanUseStreamWaitValue, ///< '1' if Device supports hipStreamWaitValue32() and
|
||||
+ ///< hipStreamWaitValue64() , '0' otherwise.
|
||||
|
||||
+ hipDeviceAttributeAmdSpecificEnd = 19999,
|
||||
+ hipDeviceAttributeVendorSpecificBegin = 20000,
|
||||
+ // Extended attributes for vendors
|
||||
} hipDeviceAttribute_t;
|
||||
|
||||
enum hipComputeMode {
|
||||
```
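
The `Begin`/`End` sentinels partition the attribute value space into CUDA-compatible, AMD-specific, and vendor-specific ranges. As a hedged sketch (not HIP API code; only the numeric bounds are copied from the enum above), a range check might look like:

```cpp
#include <cassert>

// Illustration only: classify an attribute value by the sentinel ranges of
// hipDeviceAttribute_t. Bounds mirror hipDeviceAttributeCudaCompatibleEnd
// (9999), hipDeviceAttributeAmdSpecificEnd (19999), and
// hipDeviceAttributeVendorSpecificBegin (20000).
enum class AttrRange { CudaCompatible, AmdSpecific, VendorSpecific };

AttrRange classify(int attr) {
    if (attr <= 9999)  return AttrRange::CudaCompatible;
    if (attr <= 19999) return AttrRange::AmdSpecific;
    return AttrRange::VendorSpecific;
}
```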

### Known Issues

#### Incorrect dGPU Behavior When Using AMDVBFlash Tool

The AMDVBFlash tool, used to flash the VBIOS image to a dGPU, does not communicate with the ROM controller when the driver is present. This is because the driver, as part of its runtime power management feature, puts the dGPU into a sleep state.

As a workaround, users can set the kernel module parameter amdgpu.runpm=0, which temporarily disables the runtime power management feature of the driver and dynamically changes some power-control-related sysfs files.

#### Issue with START Timestamp in ROCProfiler

Users may encounter an issue with the enabled timestamp functionality when monitoring one or multiple counters. ROCProfiler outputs the following four timestamps for each kernel:

- Dispatch
- Start
- End
- Complete

##### Issue

This defect is related to the Start timestamp functionality, which incorrectly shows an earlier time than the Dispatch timestamp.

To reproduce the issue:

1. Enable timing using the `--timestamp on` flag.
2. Use the `-i` option with the input filename that contains the name of the counter(s) to monitor.
3. Run the program.
4. Check the output result file.

##### Current behavior

BeginNS is lower than DispatchNS, which is incorrect.

##### Expected behavior

The correct order is:

Dispatch < Start < End < Complete

Users cannot use ROCProfiler to measure the time spent on each kernel because of the incorrect timestamp with counter collection enabled.

##### Recommended Workaround

Users are recommended to collect kernel execution timestamps without monitoring counters, as follows:

1. Enable timing using the `--timestamp on` flag, and run the application.
2. Rerun the application using the `-i` option with the input filename that contains the name of the counter(s) to monitor, and save this to a different output file using the `-o` flag.
3. Check the output result file from step 1.
4. The order of timestamps correctly displays as:
   DispatchNS < BeginNS < EndNS < CompleteNS
5. Users can find the values of the collected counters in the output file generated in step 2.
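
A hedged sketch of the two runs described above (the application and file names are placeholders; the flags are the ones named in this section):

```sh
# Run 1: timestamps only -- use this output for kernel timing.
rocprof --timestamp on -o timing_results.csv ./my_app

# Run 2: counter collection -- use this output for counter values only.
rocprof --timestamp on -i counters_input.txt -o counter_results.csv ./my_app
```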

#### Radeon Pro V620 and W6800 Workstation GPUs

##### No Support for SMI and ROCDebugger on SRIOV

System Management Interface (SMI) and ROCDebugger are not supported in the SRIOV environment on any GPU. For more information, refer to the Systems Management Interface documentation.

### Deprecations and Warnings

#### ROCm Libraries Changes – Deprecations and Deprecation Removal

- The hipFFT.h header is now provided only by the hipFFT package. Up to ROCm 5.0, users would also get hipFFT.h from the rocFFT package.

- The GlobalPairwiseAMG class is now entirely removed; users should use the PairwiseAMG class instead.

- The rocsparse_spmm signature in 5.0 was changed to match that of rocsparse_spmm_ex. In 5.0, rocsparse_spmm_ex is still present but deprecated. Signature diff for rocsparse_spmm:

rocsparse_spmm in 5.0

```h
rocsparse_status rocsparse_spmm(rocsparse_handle            handle,
                                rocsparse_operation         trans_A,
                                rocsparse_operation         trans_B,
                                const void*                 alpha,
                                const rocsparse_spmat_descr mat_A,
                                const rocsparse_dnmat_descr mat_B,
                                const void*                 beta,
                                const rocsparse_dnmat_descr mat_C,
                                rocsparse_datatype          compute_type,
                                rocsparse_spmm_alg          alg,
                                rocsparse_spmm_stage        stage,
                                size_t*                     buffer_size,
                                void*                       temp_buffer);
```

rocsparse_spmm in 4.0

```h
rocsparse_status rocsparse_spmm(rocsparse_handle            handle,
                                rocsparse_operation         trans_A,
                                rocsparse_operation         trans_B,
                                const void*                 alpha,
                                const rocsparse_spmat_descr mat_A,
                                const rocsparse_dnmat_descr mat_B,
                                const void*                 beta,
                                const rocsparse_dnmat_descr mat_C,
                                rocsparse_datatype          compute_type,
                                rocsparse_spmm_alg          alg,
                                size_t*                     buffer_size,
                                void*                       temp_buffer);
```

#### HIP API Deprecations and Warnings

##### Warning - Arithmetic Operators of HIP Complex and Vector Types

In this release, arithmetic operators of HIP complex and vector types are deprecated.

- As alternatives to arithmetic operators of HIP complex types, users can use arithmetic operators of `std::complex` types.

- As alternatives to arithmetic operators of HIP vector types, users can use the operators of the native clang vector type associated with the data member of HIP vector types.
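
A minimal host-side sketch of the first alternative, assuming only the C++ standard library (the function name `axpy` is ours, not a HIP API):

```cpp
#include <complex>

// Host-side arithmetic with std::complex<float> as an alternative to the
// deprecated operators on hipFloatComplex; the operators come from <complex>.
std::complex<float> axpy(std::complex<float> a,
                         std::complex<float> x,
                         std::complex<float> y) {
    return a * x + y;
}
```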

During the deprecation, two macros `_HIP_ENABLE_COMPLEX_OPERATORS` and `_HIP_ENABLE_VECTOR_OPERATORS` are provided to allow users to conditionally enable arithmetic operators of HIP complex or vector types.

Note that the two macros are mutually exclusive and, by default, set to off.

The arithmetic operators of HIP complex and vector types will be removed in a future release.

Refer to the HIP API Guide for more information.

#### Warning - Compiler-Generated Code Object Version 4 Deprecation

Support for loading compiler-generated code object version 4 will be deprecated in a future release, without further announcement, and replaced with code object version 5 as the default.

The current default is code object version 4.

#### Warning - MIOpenTensile Deprecation

MIOpenTensile will be deprecated in a future release.

### Library Changes in ROCm 5.0.0

| Library | Version |
|---------|---------|
| hipBLAS | ⇒ [0.49.0](https://github.com/ROCmSoftwarePlatform/hipBLAS/releases/tag/rocm-5.0.0) |
| hipCUB | ⇒ [2.10.13](https://github.com/ROCmSoftwarePlatform/hipCUB/releases/tag/rocm-5.0.0) |
| hipFFT | ⇒ [1.0.4](https://github.com/ROCmSoftwarePlatform/hipFFT/releases/tag/rocm-5.0.0) |
| hipSOLVER | ⇒ [1.2.0](https://github.com/ROCmSoftwarePlatform/hipSOLVER/releases/tag/rocm-5.0.0) |
| hipSPARSE | ⇒ [2.0.0](https://github.com/ROCmSoftwarePlatform/hipSPARSE/releases/tag/rocm-5.0.0) |
| rccl | ⇒ [2.10.3](https://github.com/ROCmSoftwarePlatform/rccl/releases/tag/rocm-5.0.0) |
| rocALUTION | ⇒ [2.0.1](https://github.com/ROCmSoftwarePlatform/rocALUTION/releases/tag/rocm-5.0.0) |
| rocBLAS | ⇒ [2.42.0](https://github.com/ROCmSoftwarePlatform/rocBLAS/releases/tag/rocm-5.0.0) |
| rocFFT | ⇒ [1.0.13](https://github.com/ROCmSoftwarePlatform/rocFFT/releases/tag/rocm-5.0.0) |
| rocPRIM | ⇒ [2.10.12](https://github.com/ROCmSoftwarePlatform/rocPRIM/releases/tag/rocm-5.0.0) |
| rocRAND | ⇒ [2.10.12](https://github.com/ROCmSoftwarePlatform/rocRAND/releases/tag/rocm-5.0.0) |
| rocSOLVER | ⇒ [3.16.0](https://github.com/ROCmSoftwarePlatform/rocSOLVER/releases/tag/rocm-5.0.0) |
| rocSPARSE | ⇒ [2.0.0](https://github.com/ROCmSoftwarePlatform/rocSPARSE/releases/tag/rocm-5.0.0) |
| rocThrust | ⇒ [2.13.0](https://github.com/ROCmSoftwarePlatform/rocThrust/releases/tag/rocm-5.0.0) |
| Tensile | ⇒ [4.31.0](https://github.com/ROCmSoftwarePlatform/Tensile/releases/tag/rocm-5.0.0) |

#### hipBLAS 0.49.0

hipBLAS 0.49.0 for ROCm 5.0.0

##### Added

- Added rocSOLVER functions to hipblas-bench
- Added option ROCM_MATHLIBS_API_USE_HIP_COMPLEX to opt in to using hipFloatComplex and hipDoubleComplex
- Added compilation warning for future trmm changes
- Added documentation to hipblas.h
- Added option to forgo pivoting for getrf and getri when ipiv is nullptr
- Added code coverage option

##### Fixed

- Fixed use of incorrect `HIP_PATH` when building from source
- Fixed Windows packaging
- Allowed negative increments in hipblas-bench
- Removed Boost dependency

#### hipCUB 2.10.13

hipCUB 2.10.13 for ROCm 5.0.0

##### Fixed

- Added missing includes to hipcub.hpp

##### Added

- Bfloat16 support to test cases (device_reduce & device_radix_sort)
- Device merge sort
- Block merge sort
- API update to CUB 1.14.0

##### Changed

- The SetupNVCC.cmake automatic target selector selects all of the capabilities of all available cards for the NVIDIA backend.

#### hipFFT 1.0.4

hipFFT 1.0.4 for ROCm 5.0.0

##### Fixed

- Added calls to rocFFT setup/cleanup.
- CMake fixes for clients and backend support.

##### Added

- Added support for Windows 10 as a build target.

#### hipSOLVER 1.2.0

hipSOLVER 1.2.0 for ROCm 5.0.0

##### Added

- Added functions
  - sytrf
    - hipsolverSsytrf_bufferSize, hipsolverDsytrf_bufferSize, hipsolverCsytrf_bufferSize, hipsolverZsytrf_bufferSize
    - hipsolverSsytrf, hipsolverDsytrf, hipsolverCsytrf, hipsolverZsytrf

##### Fixed

- Fixed use of incorrect `HIP_PATH` when building from source (#40).
  Thanks [@jakub329homola](https://github.com/jakub329homola)!
#### hipSPARSE 2.0.0

hipSPARSE 2.0.0 for ROCm 5.0.0

##### Added

- Added (conjugate) transpose support for csrmv, hybmv and spmv routines

#### rccl 2.10.3

RCCL 2.10.3 for ROCm 5.0.0

##### Added

- Compatibility with NCCL 2.10.3

##### Known Issues

- Managed memory is not currently supported for clique-based kernels

#### rocALUTION 2.0.1

rocALUTION 2.0.1 for ROCm 5.0.0

##### Changed

- Removed the deprecated GlobalPairwiseAMG class; use PairwiseAMG instead.
- Changed to the C++14 standard

##### Improved

- Added sanitizer option
- Improved documentation

#### rocBLAS 2.42.0

rocBLAS 2.42.0 for ROCm 5.0.0

##### Added

- Added rocblas_get_version_string_size convenience function
- Added rocblas_xtrmm_outofplace, an out-of-place version of rocblas_xtrmm
- Added hpl and trig initialization for gemm_ex to rocblas-bench
- Added source code gemm, which can be used as an alternative to Tensile for debugging and development
- Added option ROCM_MATHLIBS_API_USE_HIP_COMPLEX to opt in to using hipFloatComplex and hipDoubleComplex

##### Optimizations

- Improved performance of non-batched and batched single-precision GER for size m > 1024. Performance enhanced by 5-10%, measured on an MI100 (gfx908) GPU.
- Improved performance of non-batched and batched HER for all sizes and data types. Performance enhanced by 2-17%, measured on an MI100 (gfx908) GPU.

##### Changed

- Instantiated templated rocBLAS functions to reduce the size of librocblas.so
- Removed static library dependency on msgpack
- Removed Boost dependencies for clients

##### Fixed

- Install-script option to build only the rocBLAS clients with a pre-built rocBLAS library
- Correctly set output of nrm2_batched_ex and nrm2_strided_batched_ex when given bad input
- Fix for dgmm with side == rocblas_side_left and a negative incx
- Fixed out-of-bounds read for small trsm
- Fixed numerical checking for tbmv_strided_batched

#### rocFFT 1.0.13

rocFFT 1.0.13 for ROCm 5.0.0

##### Optimizations

- Improved many plans by removing unnecessary transpose steps.
- Optimized scheme selection for 3D problems.
- Imposed fewer restrictions on 3D_BLOCK_RC selection; more problems can now use 3D_BLOCK_RC and gain some performance.
- Enabled 3D_RC. Some 3D problems with an SBCC-supported z-dimension can use fewer kernels and benefit.
- Forced --length 336 336 56 (double precision) to use the faster 3D_RC, to keep it from being skipped by the conservative threshold test.
- Optimized some even-length R2C/C2R cases by doing more operations in-place and combining pre/post processing into Stockham kernels.
- Added radix-17.

##### Added

- Added new kernel generator for select fused-2D transforms.

##### Fixed

- Improved large 1D transform decompositions.

#### rocPRIM 2.10.12

rocPRIM 2.10.12 for ROCm 5.0.0

##### Fixed

- Enabled bfloat16 tests and reduced the threshold for bfloat16
- Fixed the device scan limit_size feature
- Non-optimized builds no longer trigger local memory limit errors

##### Added

- Added scan size limit feature
- Added reduce size limit feature
- Added transform size limit feature
- Added block_load_striped and block_store_striped
- Added gather_to_blocked to gather values from other threads into a blocked arrangement
- The block sizes for the device merge sort's initial block sort and its merge steps are now separate in its kernel config
  - The block sort step supports multiple items per thread

##### Changed

- size_limit for scan, reduce and transform can now be set in the config struct instead of a parameter
- device_scan and device_segmented_scan: `inclusive_scan` now uses the input type as the accumulator type; `exclusive_scan` uses the initial-value type.
  - This particularly changes the behaviour of small-size input types with large-size output types (e.g. `short` input, `int` output)
  - and of low-resolution input with high-resolution output (e.g. `float` input, `double` output)
- Reverted the old Fiji workaround, because the issue was solved on the compiler side
- Updated the README CMake minimum version number
- Block sort supports multiple items per thread
  - Currently, only power-of-two block sizes and items per thread are supported, and only for full blocks
- Bumped the minimum required version of CMake to 3.16

##### Known Issues

- Unit tests may soft hang on MI200 when running in hipMallocManaged mode.
- device_segmented_radix_sort and device_scan unit tests fail for HIP on Windows
- ReduceEmptyInput causes random failures with bfloat16
#### rocRAND 2.10.12

rocRAND 2.10.12 for ROCm 5.0.0

##### Changed

- No updates or changes for ROCm 5.0.0.

#### rocSOLVER 3.16.0

rocSOLVER 3.16.0 for ROCm 5.0.0

##### Added

- Symmetric matrix factorizations:
  - LASYF
  - SYTF2, SYTRF (with batched and strided\_batched versions)
- Added `rocsolver_get_version_string_size` to help with version string queries
- Added `rocblas_layer_mode_ex` and the ability to print kernel calls in the trace and profile logs
- Expanded batched and strided\_batched sample programs.

##### Optimized

- Improved general performance of LU factorization
- Increased parallelism of specialized kernels when compiling from source, reducing build times on multi-core systems.

##### Changed

- The rocsolver-test client now prints the rocSOLVER version used to run the tests, rather than the version used to build them
- The rocsolver-bench client now prints the rocSOLVER version used in the benchmark

##### Fixed

- Added missing stdint.h include to rocsolver.h

#### rocSPARSE 2.0.0

rocSPARSE 2.0.0 for ROCm 5.0.0

##### Added

- csrmv, coomv, ellmv, hybmv for (conjugate) transposed matrices
- csrmv for symmetric matrices

##### Changed

- spmm\_ex is now deprecated and will be removed in the next major release

##### Improved

- Optimization for gtsv

#### rocThrust 2.13.0

rocThrust 2.13.0 for ROCm 5.0.0

##### Added

- Updated to match upstream Thrust 1.13.0
- Updated to match upstream Thrust 1.14.0
- Added async scan

##### Changed

- Scan algorithms: `inclusive_scan` now uses the input type as the accumulator type; `exclusive_scan` uses the initial-value type.
  - This particularly changes the behaviour of small-size input types with large-size output types (e.g. `short` input, `int` output)
  - and of low-resolution input with high-resolution output (e.g. `float` input, `double` output)
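
A hedged host-side illustration of why the accumulator type matters, using NumPy on the CPU as a stand-in for the scan (the device libraries are not involved here):

```python
import numpy as np

# Accumulating in the input type (int16) wraps around once the running sum
# exceeds 32767, even if the results are later stored as a wider type.
vals = np.full(4, 20000, dtype=np.int16)

# Accumulate in the input type, then widen (analogous to the new behaviour).
acc_input_type = np.cumsum(vals, dtype=np.int16).astype(np.int32)

# Accumulate directly in the wider output type (analogous to the old behaviour).
acc_output_type = np.cumsum(vals, dtype=np.int32)

print(acc_input_type[1], acc_output_type[1])  # second partial sums differ
```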

#### Tensile 4.31.0

Tensile 4.31.0 for ROCm 5.0.0

##### Added

- DirectToLds support (x2/x4)
- DirectToVgpr support for DGEMM
- Parameter to control the number of files kernels are merged into, to better parallelize kernel compilation
- FP16 alternate implementation for HPA HGEMM on aldebaran

##### Optimized

- Added DGEMM NN custom kernel for HPL on aldebaran

##### Changed

- Updated tensile_client executable to std=c++14

##### Removed

- Removed unused old Tensile client code

##### Fixed

- Fixed hipErrorInvalidHandle during benchmarks
- Fixed addrVgpr for atomic GSU
- Fix for Python 3.8: added a case for the Constant nodeType
- Fixed architecture mapping for gfx1011 and gfx1012
- Fixed PrintSolutionRejectionReason verbiage in KernelWriter.py
- Fixed vgpr alignment problem when enabling flat buffer load

# Contributing to ROCm Docs

AMD values and encourages the ROCm community to contribute to our code and documentation. This repository is focused on ROCm documentation, and this contribution guide describes the recommended method for creating and modifying our documentation.

While interacting with ROCm documentation, we encourage you to be polite and respectful in your contributions, content or otherwise. The authors and maintainers of these docs act with good intentions and to the best of their knowledge. Keep that in mind while you engage. Should you have issues with the contribution process itself, refer to the [discussions](https://github.com/RadeonOpenCompute/ROCm/discussions) on the GitHub repository.

## Supported Formats

Our documentation includes both Markdown and RST files. Markdown is encouraged over RST due to its lower barrier to participation. GitHub-flavored Markdown is preferred for all submissions, as it renders accurately on our GitHub repositories. For existing documentation, [MyST](https://myst-parser.readthedocs.io/en/latest/intro.html) Markdown is used to implement certain features unsupported in GitHub Markdown. This is not encouraged for new documentation. AMD will transition to stricter use of GitHub-flavored Markdown with a few caveats. ROCm documentation also uses [sphinx-design](https://sphinx-design.readthedocs.io/en/latest/index.html) in our Markdown and RST files, and Breathe syntax for Doxygen documentation in our Markdown files. Other design elements for effective HTML rendering of the documents may be added to our Markdown files. Please see [GitHub](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github)'s guide on writing and formatting on GitHub as a starting point.

ROCm documentation adds the following requirements to Markdown- and RST-based files:

- Level-one headers are only used for page titles. There must be only one level-one header per file, for both Markdown and reStructuredText.
- Files must pass the [markdownlint](https://github.com/markdownlint/markdownlint) check via our automated GitHub Action on a pull request (PR).

## Filenames and folder structure

Please use snake case for file names. Our documentation follows the Pitchfork convention for folder structure. All documentation is in /docs, except for special files like the contributing guide in the / folder. All images used in the documentation are placed in the /docs/data folder.

## How to provide feedback for ROCm documentation

There are three standard ways to provide feedback for this repository.

### Pull Request

All contributions to ROCm documentation should arrive via the [GitHub Flow](https://docs.github.com/en/get-started/quickstart/github-flow) targeting the develop branch of the repository. If you are unable to contribute via the GitHub Flow, feel free to email us. TODO, confirm email address.

### GitHub Issue

Issues on existing or absent docs can be filed as [GitHub issues](https://github.com/RadeonOpenCompute/ROCm/issues).

### Email Feedback

## Language and Style

We adopt the Microsoft CPP-Docs guidelines for [Voice and Tone](https://github.com/MicrosoftDocs/cpp-docs/blob/main/styleguide/voice-tone.md).

ROCm documentation templates will be made public shortly. ROCm templates dictate the recommended structure and flow of the documentation. Guidelines on how to integrate figures, equations, and tables are all based on [MyST](https://myst-parser.readthedocs.io/en/latest/intro.html).

Font size and selection, page layout, white-space control, and other formatting details are controlled via rocm-docs-core, a Sphinx extension. Please raise issues in rocm-docs-core for any formatting concerns and requested changes.

## Building Documentation

While contributing, one may build the documentation locally on the command line or rely on Continuous Integration for previewing the resulting HTML pages in a browser.

### Command line documentation builds

Python versions known to build the documentation:

- 3.8

To build the docs locally using a Python virtual environment (`venv`), execute the following commands from the project root:

```sh
python3 -m venv .venv
# Windows
.venv/Scripts/python -m pip install -r docs/sphinx/requirements.txt
.venv/Scripts/python -m sphinx -T -E -b html -d _build/doctrees -D language=en docs _build/html
# Linux
.venv/bin/python -m pip install -r docs/sphinx/requirements.txt
.venv/bin/python -m sphinx -T -E -b html -d _build/doctrees -D language=en docs _build/html
```

Then open up `_build/html/index.html` in your favorite browser.

### Pull Request documentation builds

When opening a PR to the `develop` branch on GitHub, the page corresponding to the PR (`https://github.com/RadeonOpenCompute/ROCm/pull/<pr_number>`) will have a summary at the bottom. This requires the user to be logged in to GitHub.

- There, click `Show all checks` and `Details` of the Read the Docs pipeline. It will take you to `https://readthedocs.com/projects/advanced-micro-devices-rocm/builds/<some_build_num>/`
- The list of commands shown there is the exact one used by CI to produce a render of the documentation.
- There, click on the small blue link `View docs` (which is not the same as the bigger button with the same text). It will take you to the built HTML site with a URL of the form `https://advanced-micro-devices-demo--<pr_number>.com.readthedocs.build/projects/alpha/en/<pr_number>/`.

### Build the docs using VS Code

One can put together a productive environment to author documentation and also test it locally using VS Code with only a handful of extensions. Even though the extension landscape of VS Code is ever-changing, here is one example setup that proved useful at the time of writing. In it, one can change or add content, build a new version of the docs using a single VS Code Task (or hotkey), see all errors and warnings emitted by Sphinx in the Problems pane, and immediately see the resulting website show up on a locally serving web server.

#### Configuring VS Code

1. Install the following extensions:

   - Python (ms-python.python)
   - Live Server (ritwickdey.LiveServer)

2. Add the following entries in `.vscode/settings.json`

   ```json
   {
     "liveServer.settings.root": "/.vscode/build/html",
     "liveServer.settings.wait": 1000,
     "python.terminal.activateEnvInCurrentTerminal": true
   }
   ```

   The settings, in order, are set for the following reasons:

   - Sets the root of the output website for live previews. Must be changed alongside the `tasks.json` command.
   - Tells Live Server to wait with the update, to give Sphinx time to regenerate site contents and not refresh before all is done. (Empirical value.)
   - Automatic virtual environment activation is a nice touch, should you want to build the site from the integrated terminal.

3. Add the following tasks in `.vscode/tasks.json`

   ```json
   {
     "version": "2.0.0",
     "tasks": [
       {
         "label": "Build Docs",
         "type": "process",
         "windows": {
           "command": "${workspaceFolder}/.venv/Scripts/python.exe"
         },
         "command": "${workspaceFolder}/.venv/bin/python3",
         "args": [
           "-m", "sphinx",
           "-j", "auto",
           "-T",
           "-b", "html",
           "-d", "${workspaceFolder}/.vscode/build/doctrees",
           "-D", "language=en",
           "${workspaceFolder}/docs",
           "${workspaceFolder}/.vscode/build/html"
         ],
         "problemMatcher": [
           {
             "owner": "sphinx",
             "fileLocation": "absolute",
             "pattern": {
               "regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):(\\d+):\\s+(WARNING|ERROR):\\s+(.*)$",
               "file": 1,
               "line": 2,
               "severity": 3,
               "message": 4
             }
           },
           {
             "owner": "sphinx",
             "fileLocation": "absolute",
             "pattern": {
               "regexp": "^(?:.*\\.{3}\\s+)?(\\/[^:]*|[a-zA-Z]:\\\\[^:]*):{1,2}\\s+(WARNING|ERROR):\\s+(.*)$",
               "file": 1,
               "severity": 2,
               "message": 3
             }
           }
         ],
         "group": {
           "kind": "build",
           "isDefault": true
         }
       }
     ]
   }
   ```

   > (Implementation detail: two problem matchers needed to be defined, because
   > VS Code doesn't tolerate some problem information being potentially absent.
   > While a single regex could match all types of errors, if a capture group
   > remains empty (the line number doesn't show up in all warning/error
   > messages) but the `pattern` references said empty capture group, VS Code
   > discards the message completely.)
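
As a quick sanity check of the first matcher (a hedged illustration; the sample warning line and path are made up), the regex can be exercised with Python:

```python
import re

# The first problem-matcher regex from tasks.json, as a raw Python string
# (JSON double-escaping removed).
PATTERN = re.compile(
    r"^(?:.*\.{3}\s+)?(\/[^:]*|[a-zA-Z]:\\[^:]*):(\d+):\s+(WARNING|ERROR):\s+(.*)$"
)

m = PATTERN.match("/home/user/docs/index.md:12: WARNING: Title underline too short.")
print(m.groups())  # file, line, severity, message
```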

4. Configure a Python virtual environment (venv)

   - From the Command Palette, run `Python: Create Environment`
   - Select the `venv` environment and the `docs/sphinx/requirements.txt` file.
     _(Simply pressing Enter while hovering over the file in the dropdown is
     insufficient; if using the keyboard, select the radio button with the
     'Space' key.)_

5. Build the docs

   - Launch the default build task using either:
     - a hotkey _(the default is 'Ctrl+Shift+B')_ or
     - the `Tasks: Run Build Task` command from the Command Palette.

6. Open the live preview

   - Navigate to the output of the site within VS Code, right-click
     `.vscode/build/html/index.html` and select `Open with Live Server`. The
     contents should update on every rebuild without having to refresh the
     browser.

<!-- markdownlint-restore -->

21 LICENSE Normal file @@ -0,0 +1,21 @@

MIT License

Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

572 README.md @@ -1,518 +1,54 @@

# AMD ROCm™ Release Notes v3.10.0

This page describes the features, fixed issues, and information about downloading and installing the ROCm software.
It also covers known issues in this release.

- [Supported Operating Systems and Documentation Updates](#Supported-Operating-Systems-and-Documentation-Updates)
  * [Supported Operating Systems](#Supported-Operating-Systems)
  * [ROCm Installation Updates](#ROCm-Installation-Updates)
  * [AMD ROCm Documentation Updates](#AMD-ROCm-Documentation-Updates)

- [What's New in This Release](#Whats-New-in-This-Release)
  * [ROCm Data Center Tool](#ROCm-Data-Center-Tool)
  * [ROCm System Management Information](#ROCm-System-Management-Information)
  * [ROCm Math and Communication Libraries](#ROCm-Math-and-Communication-Libraries)
  * [ROCm AOMP Enhancements](#ROCm-AOMP-Enhancements)

- [Fixed Defects](#Fixed-Defects)

- [Known Issues](#Known-Issues)

- [Deprecations](#Deprecations)

- [Deploying ROCm](#Deploying-ROCm)

- [Hardware and Software Support](#Hardware-and-Software-Support)

- [Machine Learning and High Performance Computing Software Stack for AMD GPU](#Machine-Learning-and-High-Performance-Computing-Software-Stack-for-AMD-GPU)
  * [ROCm Binary Package Structure](#ROCm-Binary-Package-Structure)
  * [ROCm Platform Packages](#ROCm-Platform-Packages)

# Supported Operating Systems

## List of Supported Operating Systems

The AMD ROCm platform is designed to support the following operating systems:

* Ubuntu 20.04.1 (5.4 and 5.6-oem) and 18.04.5 (Kernel 5.4)

* CentOS 7.8 & RHEL 7.8 (Kernel 3.10.0-1127) (using devtoolset-7 runtime support)

* CentOS 8.2 & RHEL 8.2 (Kernel 4.18.0) (devtoolset is not required)

* SLES 15 SP2

**Note**: The ROCm Data Center Tool is supported only on Ubuntu v18.04.5 and Ubuntu v20.04.1 in the AMD ROCm v3.10.0 release. The CentOS/RHEL and SLES environments are not supported at this time.

# ROCm Installation Updates

## Fresh Installation of AMD ROCm v3.10 Recommended

A fresh and clean installation of AMD ROCm v3.10 is recommended. An upgrade from previous releases to AMD ROCm v3.10 is not supported.

For more information, refer to the AMD ROCm Installation Guide at:
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html

**Note**: AMD ROCm release v3.3 or prior releases are not fully compatible with AMD ROCm v3.5 and higher versions. You must perform a fresh ROCm installation if you want to upgrade from AMD ROCm v3.3 or older to v3.5 or higher, and vice versa.

**Note**: The *render group* is required only for Ubuntu v20.04. For all other ROCm-supported operating systems, continue to use the *video group*.

* For ROCm v3.5 and later releases, the *clinfo* path has changed to */opt/rocm/opencl/bin/clinfo*.

* For ROCm v3.3 and older releases, the *clinfo* path remains unchanged: */opt/rocm/opencl/bin/x86_64/clinfo*.

**Note**: After an operating system upgrade, AMD ROCm may upgrade automatically and result in an error. This is because AMD ROCm does not currently support upgrades. You must uninstall and reinstall AMD ROCm after an operating system upgrade.

## ROCm Multi-Version Installation Update

With the AMD ROCm v3.10 release, the following ROCm multi-version installation changes apply:

* The meta packages rocm-dkms<version> are now deprecated for multi-version ROCm installs (for example, rocm-dkms3.7.0, rocm-dkms3.8.0).

* Multi-version installation of ROCm should be performed by installing rocm-dev<version> for each of the desired ROCm versions. For example, rocm-dev3.7.0, rocm-dev3.8.0, rocm-dev3.9.0.

* Version files must be created for each multi-version rocm <= 3.10.0:

  * command: echo <version> | sudo tee /opt/rocm-<version>/.info/version

  * example: echo 3.9.0 | sudo tee /opt/rocm-3.9.0/.info/version

* The rock-dkms loadable kernel modules should be installed using a single rock-dkms package.

* ROCm v3.10 and above will not set any *ldconfig* entries for ROCm libraries for multi-version installation. Users must set *LD_LIBRARY_PATH* to load the ROCm library version of choice.
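
The per-version layout the steps above produce can be sketched as follows. This is only a plain-Python illustration using a temporary directory in place of /opt, and the 3.9.0 version string is just an example:

```python
from pathlib import Path
import os
import tempfile

version = "3.9.0"  # example version; one such tree exists per installed ROCm
root = Path(tempfile.mkdtemp())  # stand-in for /opt in this sketch
prefix = root / f"rocm-{version}"

# Equivalent of: echo <version> | sudo tee /opt/rocm-<version>/.info/version
(prefix / ".info").mkdir(parents=True)
(prefix / ".info" / "version").write_text(version + "\n")

# Since v3.10 sets no ldconfig entries for multi-version installs, the
# desired library version must be selected explicitly, e.g.:
os.environ["LD_LIBRARY_PATH"] = str(prefix / "lib")

print((prefix / ".info" / "version").read_text().strip())
```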

**Note**: The single-version installation of the ROCm stack remains the same. The rocm-dkms package can be used for single-version installs and is not deprecated at this time.

# AMD ROCm Documentation Updates

## AMD ROCm Installation Guide

The AMD ROCm Installation Guide in this release includes:

* Updated Supported Environments
* Installation Instructions for v3.10
* HIP Installation Instructions

https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html

## ROCm SMI API Documentation Updates

* System DMA (SDMA) Utilization API

* ROCm-SMI Command Line Interface

* Enhanced ROCm SMI Library for Events

For the updated ROCm SMI API Guide, see

https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_SMI_API_Guide_v3.10.pdf

## ROCm Data Center Tool User Guide

The ROCm Data Center Tool User Guide includes the following enhancements:

* ROCm Data Center Tool Python Binding

* Prometheus plugin integration

For more information, refer to the ROCm Data Center Tool User Guide at:

https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide.pdf

For ROCm Data Center APIs, see

https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_Data_Center_API_Guide.pdf

## AMD ROCm - HIP Documentation Updates

* HIP FAQ

For more information, refer to

https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-FAQ.html#hip-faq

## General AMD ROCm Documentation Links

Access the following links for more information:

* For AMD ROCm documentation, see

  https://rocmdocs.amd.com/en/latest/

* For installation instructions on supported platforms, see

  https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html

* For the AMD ROCm binary structure, see

  https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#software-stack-for-amd-gpu

* For the AMD ROCm release history, see

  https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#amd-rocm-version-history

# What's New in This Release

## ROCm DATA CENTER TOOL

The following enhancements are made to the ROCm Data Center Tool.

### Prometheus Plugin for ROCm Data Center Tool

The ROCm Data Center (RDC) Tool now provides a Prometheus plugin, a Python client that collects the telemetry data of the GPU. The RDC uses the Python binding for the Prometheus and collectd plugins.

For installation instructions, refer to the ROCm Data Center Tool User Guide at

https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide.pdf

### Python Binding

The ROCm Data Center (RDC) Tool now uses a Python binding for the Prometheus and collectd plugins. The Python binding maps the RDC C APIs to Python using ctypes. All functions supported by the C APIs can also be used in the Python binding. A generic Python class, RdcReader, is created to simplify the usage of the RDC:

* Users only need to specify the fields they want to monitor. RdcReader creates the groups and fieldgroups, watches the fields, and fetches the fields.

* RdcReader supports both the Embedded and Standalone modes. Standalone mode can be used with and without authentication.

* In Standalone mode, the RdcReader can automatically reconnect to rdcd when the connection is lost. When rdcd is restarted, the previously created group and fieldgroup may be lost; the RdcReader re-creates them and watches the fields after a reconnect.

* If the client is restarted, RdcReader can detect the groups and fieldgroups created previously and therefore avoid recreating them.

* Users can pass a unit converter if they do not want to use the RDC default units.

See the following sample program to monitor the power and GPU utilization using the RdcReader:

```python
import time

from RdcReader import RdcReader
from RdcUtil import RdcUtil
from rdc_bootstrap import *

default_field_ids = [
    rdc_field_t.RDC_FI_POWER_USAGE,
    rdc_field_t.RDC_FI_GPU_UTIL
]

class SimpleRdcReader(RdcReader):
    def __init__(self):
        RdcReader.__init__(self, ip_port=None, field_ids=default_field_ids, update_freq=1000000)

    def handle_field(self, gpu_index, value):
        field_name = self.rdc_util.field_id_string(value.field_id).lower()
        print("%d %d:%s %d" % (value.ts, gpu_index, field_name, value.value.l_int))

if __name__ == '__main__':
    reader = SimpleRdcReader()
    while True:
        time.sleep(1)
        reader.process()
```

For more information about the RDC Python binding and the Prometheus plugin integration, refer to the ROCm Data Center Tool User Guide at

https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide.pdf

## ROCm SYSTEM MANAGEMENT INFORMATION

### System DMA (SDMA) Utilization

The per-process SDMA usage is exposed via the ROCm SMI library. The structure rsmi_process_info_t is extended to include sdma_usage, a 64-bit value that counts the duration (in microseconds) for which the SDMA engine was active during that process's lifetime.

For example, see the rsmi_compute_process_info_by_pid_get() API below.

```c
/**
 * @brief This structure contains information specific to a process.
 */
typedef struct {
    /* ... */
    uint64_t sdma_usage;  // SDMA usage in microseconds
} rsmi_process_info_t;

rsmi_status_t
rsmi_compute_process_info_by_pid_get(uint32_t pid,
                                     rsmi_process_info_t *proc);
```
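
Because `sdma_usage` is a running count of microseconds of SDMA activity, two samples taken over a known interval yield a utilization percentage. A hypothetical sketch in plain Python (the sample values are made up):

```python
def sdma_utilization_pct(usage_start_us, usage_end_us, interval_us):
    """Percentage of the sampling interval the SDMA engine was active."""
    if interval_us <= 0:
        raise ValueError("interval must be positive")
    return 100.0 * (usage_end_us - usage_start_us) / interval_us

# Example: 250 ms of SDMA activity observed over a 1 s sampling window.
print(sdma_utilization_pct(5_000_000, 5_250_000, 1_000_000))  # 25.0
```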

### ROCm-SMI Command Line Interface

The per-process SDMA usage is available using the following command:

```
$ rocm-smi --showpids
```

For more information, see the ROCm SMI API Guide at

https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_SMI_API_Guide_v3.10.pdf

### Enhanced ROCm SMI Library for Events

ROCm-SMI library clients can now register to receive the following events:

* GPU PRE RESET: This event is sent to the client just before a GPU is reset.

* GPU POST RESET: This event is sent to the client after a successful GPU reset.

* GPU THERMAL THROTTLE: This event is sent if the GPU clocks are throttled.

For more information, refer to the ROCm SMI API Guide at:

https://github.com/RadeonOpenCompute/ROCm/blob/master/ROCm_SMI_API_Guide_v3.10.pdf

### ROCm SMI – Command Line Interface Hardware Topology

This feature provides a matrix representation of the GPUs present in a system, describing how the nodes are connected in terms of weights, hops, and link types between two given GPUs. It also provides the NUMA node and the CPU affinity associated with every GPU.

## ROCm MATH and COMMUNICATION LIBRARIES

### New rocSOLVER APIs

New rocSOLVER APIs are added in this release. For more information, refer to

https://rocsolver.readthedocs.io/en/latest/userguide_api.html

### RCCL Alltoallv Support in PyTorch

The AMD ROCm v3.10 release includes a new API for the ROCm Communication Collectives Library (RCCL). This API sends data from all ranks to all ranks, with each rank providing arrays of input/output data counts and offsets.
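
The alltoallv exchange pattern (each rank supplying per-peer counts and offsets) can be illustrated with a small pure-Python simulation. This is only a sketch of the semantics, not the RCCL API itself, and the buffer contents are made up:

```python
def alltoallv(send_bufs, send_counts, send_displs):
    """Simulate an alltoallv exchange across len(send_bufs) ranks.

    send_counts[src][dst] elements, starting at offset send_displs[src][dst]
    in rank src's buffer, are delivered to rank dst.
    """
    nranks = len(send_bufs)
    recv_bufs = [[] for _ in range(nranks)]
    for src in range(nranks):
        for dst in range(nranks):
            start = send_displs[src][dst]
            count = send_counts[src][dst]
            recv_bufs[dst].extend(send_bufs[src][start:start + count])
    return recv_bufs

# Two ranks with uneven per-peer counts and offsets.
send_bufs = [[0, 1, 2, 3], [10, 11, 12]]
send_counts = [[1, 3], [2, 1]]
send_displs = [[0, 1], [0, 2]]
print(alltoallv(send_bufs, send_counts, send_displs))
# [[0, 10, 11], [1, 2, 3, 12]]
```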

For details about the functions and parameters, see

https://rccl.readthedocs.io/en/master/allapi.html

## ROCm AOMP ENHANCEMENTS

### AOMP Release 11.11-0

The source code base for this release is the upstream LLVM 11 monorepo release/11.x sources with the hash value

*176249bd6732a8044d457092ed932768724a6f06*

This release includes fixes to the internal Clang math headers:

* This set of changes applies to the Clang internal headers to support OpenMP C, C++, and FORTRAN and HIP C. It establishes consistency between NVPTX and AMDGCN offloading and between OpenMP, HIP, and CUDA. OpenMP uses function variants and header overlays to define device versions of functions. This causes Clang LLVM IR codegen to mangle the names of variants in both the definitions and call sites of functions defined in the internal Clang headers. These changes apply to headers found in the installation subdirectory lib/clang/11.0.0/include.

* These changes temporarily eliminate the use of the libm bitcode libraries for C and C++. Although math functions are now defined with internal Clang headers, a bitcode library of the C functions defined in the headers is still built for FORTRAN toolchain linking, because FORTRAN cannot use C math headers. This bitcode library is installed in lib/libdevice/libm-.bc. The source build of this bitcode library is done with the aomp-extras repository and the component build script build_extras.sh. In the future, we will introduce across-the-board changes to eliminate massive header files for math libraries and replace them with linking to bitcode libraries.

* Added support for -gpubnames in the Flang driver.

* Added an example category for Kokkos. The Kokkos example makefile detects whether Kokkos is installed and, if not, builds Kokkos from the web. Refer to the script kokkos_build.sh in the bin directory for how to build Kokkos. Kokkos now builds cleanly with the OpenMP backend for simple test cases.

* Fixed a hostrpc cmake race condition in the build of OpenMP.

* Added a fatal error if the -Xopenmp-target or -march options are missing when -fopenmp-targets is specified. However, this requirement is waived for offloading to the host when there is only a single target and that target is the host.

* Fixed a bug in the InstructionSimplify pass involving a comparison of two constants of different sizes found during the optimization pass. This fixes issue #182, which was causing a Kokkos build failure.

* Fixed the OpenMP error message output for no_rocm_device_lib, which was previously asserting.

* Changed linkage on constant per-kernel symbols from external to weaklinkageonly to prevent duplicate symbols when building Kokkos.

# Fixed Defects

The following defects are fixed in this release:

* HIPfort failed to install

* rocm-smi did not work as-is in v3.9 and instead printed a reference to documentation

* *--showtopo*: weight and hop count showed wrong data

* Unable to install RDC on CentOS/RHEL 7.8/8.2 & SLES

* Unable to install MIVisionX, with the error "Problem: nothing provides opencv needed"

# Known Issues

The following are the known issues in this release.

## Upgrade to AMD ROCm v3.10 Not Supported

An upgrade from previous releases to AMD ROCm v3.10 is not supported. A fresh and clean installation of AMD ROCm v3.10 is recommended.

# Deprecations

This section describes deprecations and removals in AMD ROCm.

## WARNING: COMPILER-GENERATED CODE OBJECT VERSION 2 DEPRECATION

Compiler-generated code object version 2 is no longer supported and will be removed shortly. AMD ROCm users must plan for the code object version 2 deprecation immediately.

Support for loading code object version 2 is also being deprecated, with no announced removal release.

# Deploying ROCm

AMD hosts both Debian and RPM repositories for the ROCm v3.10.x packages.

For more information on ROCm installation on all platforms, see

https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html

## Machine Learning and High Performance Computing Software Stack for AMD GPU

For an updated version of the software stack for AMD GPU, see

https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#software-stack-for-amd-gpu

# Hardware and Software Support

ROCm is focused on using AMD GPUs to accelerate computational tasks such as machine learning, engineering workloads, and scientific computing.
In order to focus our development efforts on these domains of interest, ROCm supports a targeted set of hardware configurations, which are detailed further in this section.

#### Supported GPUs

Because the ROCm Platform has a focus on particular computational domains, we offer official support for a selection of AMD GPUs that are designed to offer good performance and price in these domains.

**Note:** The integrated GPUs of Ryzen are not officially supported targets for ROCm.

ROCm officially supports AMD GPUs that use the following chips:

* GFX8 GPUs
  * "Fiji" chips, such as on the AMD Radeon R9 Fury X and Radeon Instinct MI8
  * "Polaris 10" chips, such as on the AMD Radeon RX 580 and Radeon Instinct MI6
* GFX9 GPUs
  * "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
  * "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60, or AMD Radeon VII

ROCm is a collection of software ranging from drivers and runtimes to libraries and developer tools.
Some of this software may work with more GPUs than the "officially supported" list above, though AMD does not make any official claims of support for these devices on the ROCm software platform.
The following GPUs are enabled in the ROCm software, though full support is not guaranteed:

* GFX8 GPUs
  * "Polaris 11" chips, such as on the AMD Radeon RX 570 and Radeon Pro WX 4100
  * "Polaris 12" chips, such as on the AMD Radeon RX 550 and Radeon RX 540
* GFX7 GPUs
  * "Hawaii" chips, such as the AMD Radeon R9 390X and FirePro W9100

As described in the next section, GFX8 GPUs require PCI Express 3.0 (PCIe 3.0) with support for PCIe atomics. This requires both CPU and motherboard support. GFX9 GPUs require PCIe 3.0 with support for PCIe atomics by default, but they can operate in most cases without this capability.

The integrated GPUs in AMD APUs are not officially supported targets for ROCm.
As described [below](#limited-support), "Carrizo", "Bristol Ridge", and "Raven Ridge" APUs are enabled in our upstream drivers and the ROCm OpenCL runtime.
However, they are not enabled in the HIP runtime, and may not work due to motherboard or OEM hardware limitations.
As such, they are not yet officially supported targets for ROCm.

For a more detailed list of hardware support, please see [the following documentation](https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units).

#### Supported CPUs

As described above, GFX8 GPUs require PCIe 3.0 with PCIe atomics in order to run ROCm.
In particular, the CPU and every active PCIe point between the CPU and GPU require support for PCIe 3.0 and PCIe atomics.
The CPU root must indicate PCIe AtomicOp Completion capability, and any intermediate switch must indicate PCIe AtomicOp Routing capability.

Current CPUs which support PCIe Gen3 + PCIe atomics are:

* AMD Ryzen CPUs
* The CPUs in AMD Ryzen APUs
* AMD Ryzen Threadripper CPUs
* AMD EPYC CPUs
* Intel Xeon E7 v3 or newer CPUs
* Intel Xeon E5 v3 or newer CPUs
* Intel Xeon E3 v3 or newer CPUs
* Intel Core i7 v4, Core i5 v4, Core i3 v4 or newer CPUs (i.e. Haswell family or newer)
* Some Ivy Bridge-E systems

Beginning with ROCm 1.8, GFX9 GPUs (such as Vega 10) no longer require PCIe atomics.
We have similarly opened up more options for the number of PCIe lanes.
GFX9 GPUs can now be run on CPUs without PCIe atomics and on older PCIe generations, such as PCIe 2.0.
This is not supported on GPUs below GFX9, e.g. GFX8 cards in the Fiji and Polaris families.

If you are using any PCIe switches in your system, please note that PCIe atomics are only supported on some switches, such as the Broadcom PLX.
When you install your GPUs, make sure you install them in a PCIe 3.1.0 x16, x8, x4, or x1 slot attached either directly to the CPU's root I/O controller or via a PCIe switch directly attached to the CPU's root I/O controller.

In our experience, many issues stem from trying to use consumer motherboards which provide physical x16 connectors that are electrically connected as e.g. PCIe 2.0 x4, PCIe slots connected via the Southbridge PCIe I/O controller, or PCIe slots connected through a PCIe switch that does not support PCIe atomics.

If you attempt to run ROCm on a system without proper PCIe atomic support, you may see an error in the kernel log (`dmesg`):

```
kfd: skipped device 1002:7300, PCI rejects atomics
```

Experimental support for our Hawaii (GFX7) GPUs (Radeon R9 290, R9 390, FirePro W9100, S9150, S9170) does not require or take advantage of PCIe atomics. However, we still recommend that you use a CPU from the list provided above for compatibility purposes.

#### Not supported or limited support under ROCm

##### Limited support

* ROCm 2.9.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II, and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to the CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
* Thunderbolt 1, 2, and 3 enabled breakout boxes should now be able to work with ROCm. Thunderbolt 1 and 2 are PCIe 2.0 based, and thus are only supported with GPUs that do not require PCIe 3.1.0 atomics (e.g. Vega 10). However, we have done no testing on this configuration and would need community support due to limited access to this type of equipment.
* AMD "Carrizo" and "Bristol Ridge" APUs are enabled to run OpenCL, but do not yet support HIP or our libraries built on top of these compilers and runtimes.
  * As of ROCm 2.1, "Carrizo" and "Bristol Ridge" require the use of upstream kernel drivers.
  * In addition, various "Carrizo" and "Bristol Ridge" platforms may not work due to OEM and ODM choices when it comes to key configuration parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
  * Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.
* AMD "Raven Ridge" APUs are enabled to run OpenCL, but do not yet support HIP or our libraries built on top of these compilers and runtimes.
  * As of ROCm 2.1, "Raven Ridge" requires the use of upstream kernel drivers.
  * In addition, various "Raven Ridge" platforms may not work due to OEM and ODM choices when it comes to key configuration parameters such as inclusion of the required CRAT tables and IOMMU configuration parameters in the system BIOS.
  * Before purchasing such a system for ROCm, please verify that the BIOS provides an option for enabling IOMMUv2 and that the system BIOS properly exposes the correct CRAT table. Inquire with your vendor about the latter.

##### Not supported

* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported in ROCm 2.9.x.
* We do not support GFX8-class GPUs (Fiji, Polaris, etc.) on CPUs that do not have PCIe 3.0 with PCIe atomics.
  * As such, we do not support AMD Carrizo and Kaveri APUs as hosts for such GPUs.
* GFX8 GPUs attached via Thunderbolt 1 or 2 are not supported on ROCm, as Thunderbolt 1 and 2 are based on PCIe 2.0.

#### ROCm support in upstream Linux kernels

As of ROCm 1.9.0, the ROCm user-level software is compatible with the AMD drivers in certain upstream Linux kernels.
As such, users have the option of either using the ROCK kernel driver that is part of AMD's ROCm repositories, or using the upstream driver and only installing ROCm user-level utilities from AMD's ROCm repositories.

These releases of the upstream Linux kernel support the following GPUs in ROCm:

* 4.17: Fiji, Polaris 10, Polaris 11
* 4.18: Fiji, Polaris 10, Polaris 11, Vega 10
* 4.20: Fiji, Polaris 10, Polaris 11, Vega 10, Vega 7nm

The upstream driver may be useful for running ROCm software on systems that are not compatible with the kernel driver available in AMD's repositories.
For users that have the option of using either AMD's or the upstream driver, there are various tradeoffs to take into consideration:

| | Using AMD's `rock-dkms` package | Using the upstream kernel driver |
| ---- | ------------------------------------------------------------ | ----- |
| Pros | More GPU features, and they are enabled earlier | Includes the latest Linux kernel features |
| | Tested by AMD on supported distributions | May work on other distributions and with custom kernels |
| | Supported GPUs enabled regardless of kernel version | |
| | Includes the latest GPU firmware | |
| Cons | May not work on all Linux distributions or versions | Features and hardware support vary depending on kernel version |
| | Not currently supported on kernels newer than 5.4 | Limits the GPU's usage of system memory to 3/8 of system memory (before 5.6). For 5.6 and beyond, both DKMS and upstream kernels allow use of 15/16 of system memory. |
| | | IPC and RDMA capabilities are not yet enabled |
| | | Not tested by AMD to the same level as the `rock-dkms` package |
| | | Does not include the most up-to-date firmware |

# AMD ROCm™ Platform

ROCm™ is an open-source stack for GPU computation. ROCm is primarily Open-Source
Software (OSS) that allows developers the freedom to customize and tailor their
GPU software for their own needs while collaborating with a community of other
developers, and helping each other find solutions in an agile, flexible, rapid,
and secure manner.

ROCm is a collection of drivers, development tools, and APIs enabling GPU
programming from the low-level kernel to end-user applications. ROCm is powered
by AMD's Heterogeneous-computing Interface for Portability (HIP), an OSS C++ GPU
programming environment and its corresponding runtime. HIP allows ROCm
developers to create portable applications on different platforms by deploying
code on a range of platforms, from dedicated gaming GPUs to exascale HPC
clusters. ROCm supports programming models such as OpenMP and OpenCL, and
includes all the necessary OSS compilers, debuggers, and libraries. ROCm is
fully integrated into ML frameworks such as PyTorch and TensorFlow. ROCm can be
deployed in many ways, including through the use of containers such as Docker,
Spack, and your own build from source.

ROCm's goal is to allow our users to maximize their GPU hardware investment.
ROCm is designed to help develop, test, and deploy GPU accelerated HPC, AI,
scientific computing, CAD, and other applications in a free, open-source,
integrated, and secure software ecosystem.

This repository contains the manifest file for ROCm™ releases, changelogs, and
release information. The file default.xml contains information for all
repositories and the associated commit used to build the current ROCm release.

The default.xml file uses the repo Manifest format.

The develop branch of this repository contains content for the next
ROCm release.

## ROCm Documentation

ROCm documentation is available online at
[rocm.docs.amd.com](https://rocm.docs.amd.com). Source code for the
documentation is located in the docs folder of most repositories that are part
of ROCm.

### How to build documentation via Sphinx

```bash
cd docs

pip3 install -r sphinx/requirements.txt

python3 -m sphinx -T -E -b html -d _build/doctrees -D language=en . _build/html
```

## Older ROCm™ Releases

For release information for older ROCm™ releases, refer to
[CHANGELOG](./CHANGELOG.md).

RELEASE.md (new file, +418 lines)

# Release Notes

<!-- Do not edit this file! This file is autogenerated with -->
<!-- tools/autotag/tag_script.py -->

<!-- Disable lints since this is an auto-generated file. -->
<!-- markdownlint-disable blanks-around-headers -->
<!-- markdownlint-disable no-duplicate-header -->
<!-- markdownlint-disable no-blanks-blockquote -->
<!-- markdownlint-disable ul-indent -->
<!-- markdownlint-disable no-trailing-spaces -->

<!-- spellcheck-disable -->

The release notes for the ROCm platform.

-------------------

## ROCm 5.0.0
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable no-duplicate-header -->
### What's New in This Release

#### HIP Enhancements

The ROCm v5.0 release consists of the following HIP enhancements.

##### HIP Installation Guide Updates

The HIP Installation Guide is updated to include instructions for building HIP
from source on the NVIDIA platform.

Refer to the HIP Installation Guide v5.0 for more details.

##### Managed Memory Allocation

Managed memory, including the `__managed__` keyword, is now supported in HIP
combined host/device compilation. Through unified memory allocation, managed
memory allows data to be shared by and accessible to both the CPU and GPU
using a single pointer. The allocation is managed by the AMD GPU driver using
the Linux Heterogeneous Memory Management (HMM) mechanism. The user can call
the managed memory API `hipMallocManaged` to allocate a large chunk of HMM
memory, execute kernels on a device, and fetch data between the host and
device as needed.

> **Note**
>
> In a HIP application, it is recommended to perform a capability check before
> calling the managed memory APIs. For example,
>
> ```cpp
> int managed_memory = 0;
> HIPCHECK(hipDeviceGetAttribute(&managed_memory,
>                                hipDeviceAttributeManagedMemory, p_gpuDevice));
> if (!managed_memory) {
>     printf("info: managed memory access not supported on device %d\n Skipped\n", p_gpuDevice);
> }
> else {
>     HIPCHECK(hipSetDevice(p_gpuDevice));
>     HIPCHECK(hipMallocManaged(&Hmm, N * sizeof(T)));
>     . . .
> }
> ```

> **Note**
>
> The managed memory capability check may not be necessary; however, if HMM is
> not supported, managed malloc will fall back to using system memory, and
> other managed memory API calls may then have undefined behavior.

Refer to the HIP API documentation for more details on managed memory APIs.

For the application, see

<https://github.com/ROCm-Developer-Tools/HIP/blob/rocm-4.5.x/tests/src/runtimeApi/memory/hipMallocManaged.cpp>

#### New Environment Variable

The following new environment variable is added in this release:

| Environment Variable | Value | Description |
|----------------------|-----------------------|-------------|
| HSA_COOP_CU_COUNT | 0 or 1 (default is 0) | Some processors support more CUs than can reliably be used in a cooperative dispatch. Setting the environment variable HSA_COOP_CU_COUNT to 1 will cause ROCr to return the correct CU count for cooperative groups through the HSA_AMD_AGENT_INFO_COOPERATIVE_COMPUTE_UNIT_COUNT attribute of hsa_agent_get_info(). Setting HSA_COOP_CU_COUNT to other values, or leaving it unset, will cause ROCr to return the same CU count for the attributes HSA_AMD_AGENT_INFO_COOPERATIVE_COMPUTE_UNIT_COUNT and HSA_AMD_AGENT_INFO_COMPUTE_UNIT_COUNT. Future ROCm releases will make HSA_COOP_CU_COUNT=1 the default. |
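
For example, the variable can be exported in the shell before launching an
application that uses cooperative groups (the application name below is a
placeholder, not part of ROCm):

```shell
# Ask ROCr for the CU count that is reliable for cooperative dispatch.
export HSA_COOP_CU_COUNT=1

# Launch the HIP application in the same environment, e.g.:
# ./my_cooperative_app    (placeholder name)
echo "HSA_COOP_CU_COUNT=$HSA_COOP_CU_COUNT"
```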
### Breaking Changes

#### Runtime Breaking Change

The enumerated types in hip_runtime_api.h have been reordered to better match
the NVIDIA CUDA equivalents. See below for the difference in enumerated types.

ROCm software will be affected if any of the defined enums listed below are
used in the code. Applications built with the ROCm v5.0 enumerated types will
work with a ROCm v4.5.2 driver. However, undefined behavior will occur when a
ROCm v4.5.2 application that uses these enumerated types runs against a ROCm
v5.0 runtime.

```diff
typedef enum hipDeviceAttribute_t {
-     hipDeviceAttributeMaxThreadsPerBlock,       ///< Maximum number of threads per block.
-     hipDeviceAttributeMaxBlockDimX,             ///< Maximum x-dimension of a block.
-     hipDeviceAttributeMaxBlockDimY,             ///< Maximum y-dimension of a block.
-     hipDeviceAttributeMaxBlockDimZ,             ///< Maximum z-dimension of a block.
-     hipDeviceAttributeMaxGridDimX,              ///< Maximum x-dimension of a grid.
-     hipDeviceAttributeMaxGridDimY,              ///< Maximum y-dimension of a grid.
-     hipDeviceAttributeMaxGridDimZ,              ///< Maximum z-dimension of a grid.
-     hipDeviceAttributeMaxSharedMemoryPerBlock,  ///< Maximum shared memory available per block in
-                                                 ///< bytes.
-     hipDeviceAttributeTotalConstantMemory,      ///< Constant memory size in bytes.
-     hipDeviceAttributeWarpSize,                 ///< Warp size in threads.
-     hipDeviceAttributeMaxRegistersPerBlock,     ///< Maximum number of 32-bit registers available to a
-                                                 ///< thread block. This number is shared by all thread
-                                                 ///< blocks simultaneously resident on a
-                                                 ///< multiprocessor.
-     hipDeviceAttributeClockRate,                ///< Peak clock frequency in kilohertz.
-     hipDeviceAttributeMemoryClockRate,          ///< Peak memory clock frequency in kilohertz.
-     hipDeviceAttributeMemoryBusWidth,           ///< Global memory bus width in bits.
-     hipDeviceAttributeMultiprocessorCount,      ///< Number of multiprocessors on the device.
-     hipDeviceAttributeComputeMode,              ///< Compute mode that device is currently in.
-     hipDeviceAttributeL2CacheSize,  ///< Size of L2 cache in bytes. 0 if the device doesn't have L2
-                                     ///< cache.
-     hipDeviceAttributeMaxThreadsPerMultiProcessor,  ///< Maximum resident threads per
-                                                     ///< multiprocessor.
-     hipDeviceAttributeComputeCapabilityMajor,       ///< Major compute capability version number.
-     hipDeviceAttributeComputeCapabilityMinor,       ///< Minor compute capability version number.
-     hipDeviceAttributeConcurrentKernels,  ///< Device can possibly execute multiple kernels
-                                           ///< concurrently.
-     hipDeviceAttributePciBusId,           ///< PCI Bus ID.
-     hipDeviceAttributePciDeviceId,        ///< PCI Device ID.
-     hipDeviceAttributeMaxSharedMemoryPerMultiprocessor,  ///< Maximum Shared Memory Per
-                                                          ///< Multiprocessor.
-     hipDeviceAttributeIsMultiGpuBoard,                   ///< Multiple GPU devices.
-     hipDeviceAttributeIntegrated,                        ///< iGPU
-     hipDeviceAttributeCooperativeLaunch,                 ///< Support cooperative launch
-     hipDeviceAttributeCooperativeMultiDeviceLaunch,      ///< Support cooperative launch on multiple devices
-     hipDeviceAttributeMaxTexture1DWidth,    ///< Maximum number of elements in 1D images
-     hipDeviceAttributeMaxTexture2DWidth,    ///< Maximum dimension width of 2D images in image elements
-     hipDeviceAttributeMaxTexture2DHeight,   ///< Maximum dimension height of 2D images in image elements
-     hipDeviceAttributeMaxTexture3DWidth,    ///< Maximum dimension width of 3D images in image elements
-     hipDeviceAttributeMaxTexture3DHeight,   ///< Maximum dimensions height of 3D images in image elements
-     hipDeviceAttributeMaxTexture3DDepth,    ///< Maximum dimensions depth of 3D images in image elements
+     hipDeviceAttributeCudaCompatibleBegin = 0,

-     hipDeviceAttributeHdpMemFlushCntl,      ///< Address of the HDP_MEM_COHERENCY_FLUSH_CNTL register
-     hipDeviceAttributeHdpRegFlushCntl,      ///< Address of the HDP_REG_COHERENCY_FLUSH_CNTL register
+     hipDeviceAttributeEccEnabled = hipDeviceAttributeCudaCompatibleBegin, ///< Whether ECC support is enabled.
+     hipDeviceAttributeAccessPolicyMaxWindowSize,        ///< Cuda only. The maximum size of the window policy in bytes.
+     hipDeviceAttributeAsyncEngineCount,                 ///< Cuda only. Asynchronous engines number.
+     hipDeviceAttributeCanMapHostMemory,                 ///< Whether host memory can be mapped into device address space
+     hipDeviceAttributeCanUseHostPointerForRegisteredMem,///< Cuda only. Device can access host registered memory
+                                                         ///< at the same virtual address as the CPU
+     hipDeviceAttributeClockRate,                        ///< Peak clock frequency in kilohertz.
+     hipDeviceAttributeComputeMode,                      ///< Compute mode that device is currently in.
+     hipDeviceAttributeComputePreemptionSupported,       ///< Cuda only. Device supports Compute Preemption.
+     hipDeviceAttributeConcurrentKernels,                ///< Device can possibly execute multiple kernels concurrently.
+     hipDeviceAttributeConcurrentManagedAccess,          ///< Device can coherently access managed memory concurrently with the CPU
+     hipDeviceAttributeCooperativeLaunch,                ///< Support cooperative launch
+     hipDeviceAttributeCooperativeMultiDeviceLaunch,     ///< Support cooperative launch on multiple devices
+     hipDeviceAttributeDeviceOverlap,                    ///< Cuda only. Device can concurrently copy memory and execute a kernel.
+                                                         ///< Deprecated. Use instead asyncEngineCount.
+     hipDeviceAttributeDirectManagedMemAccessFromHost,   ///< Host can directly access managed memory on
+                                                         ///< the device without migration
+     hipDeviceAttributeGlobalL1CacheSupported,           ///< Cuda only. Device supports caching globals in L1
+     hipDeviceAttributeHostNativeAtomicSupported,        ///< Cuda only. Link between the device and the host supports native atomic operations
+     hipDeviceAttributeIntegrated,                       ///< Device is integrated GPU
+     hipDeviceAttributeIsMultiGpuBoard,                  ///< Multiple GPU devices.
+     hipDeviceAttributeKernelExecTimeout,                ///< Run time limit for kernels executed on the device
+     hipDeviceAttributeL2CacheSize,                      ///< Size of L2 cache in bytes. 0 if the device doesn't have L2 cache.
+     hipDeviceAttributeLocalL1CacheSupported,            ///< caching locals in L1 is supported
+     hipDeviceAttributeLuid,                             ///< Cuda only. 8-byte locally unique identifier in 8 bytes. Undefined on TCC and non-Windows platforms
+     hipDeviceAttributeLuidDeviceNodeMask,               ///< Cuda only. Luid device node mask. Undefined on TCC and non-Windows platforms
+     hipDeviceAttributeComputeCapabilityMajor,           ///< Major compute capability version number.
+     hipDeviceAttributeManagedMemory,                    ///< Device supports allocating managed memory on this system
+     hipDeviceAttributeMaxBlocksPerMultiProcessor,       ///< Cuda only. Max block size per multiprocessor
+     hipDeviceAttributeMaxBlockDimX,                     ///< Max block size in width.
+     hipDeviceAttributeMaxBlockDimY,                     ///< Max block size in height.
+     hipDeviceAttributeMaxBlockDimZ,                     ///< Max block size in depth.
+     hipDeviceAttributeMaxGridDimX,                      ///< Max grid size in width.
+     hipDeviceAttributeMaxGridDimY,                      ///< Max grid size in height.
+     hipDeviceAttributeMaxGridDimZ,                      ///< Max grid size in depth.
+     hipDeviceAttributeMaxSurface1D,                     ///< Maximum size of 1D surface.
+     hipDeviceAttributeMaxSurface1DLayered,              ///< Cuda only. Maximum dimensions of 1D layered surface.
+     hipDeviceAttributeMaxSurface2D,                     ///< Maximum dimension (width, height) of 2D surface.
+     hipDeviceAttributeMaxSurface2DLayered,              ///< Cuda only. Maximum dimensions of 2D layered surface.
+     hipDeviceAttributeMaxSurface3D,                     ///< Maximum dimension (width, height, depth) of 3D surface.
+     hipDeviceAttributeMaxSurfaceCubemap,                ///< Cuda only. Maximum dimensions of Cubemap surface.
+     hipDeviceAttributeMaxSurfaceCubemapLayered,         ///< Cuda only. Maximum dimension of Cubemap layered surface.
+     hipDeviceAttributeMaxTexture1DWidth,                ///< Maximum size of 1D texture.
+     hipDeviceAttributeMaxTexture1DLayered,              ///< Cuda only. Maximum dimensions of 1D layered texture.
+     hipDeviceAttributeMaxTexture1DLinear,               ///< Maximum number of elements allocatable in a 1D linear texture.
+                                                         ///< Use cudaDeviceGetTexture1DLinearMaxWidth() instead on Cuda.
+     hipDeviceAttributeMaxTexture1DMipmap,               ///< Cuda only. Maximum size of 1D mipmapped texture.
+     hipDeviceAttributeMaxTexture2DWidth,                ///< Maximum dimension width of 2D texture.
+     hipDeviceAttributeMaxTexture2DHeight,               ///< Maximum dimension hight of 2D texture.
+     hipDeviceAttributeMaxTexture2DGather,               ///< Cuda only. Maximum dimensions of 2D texture if gather operations performed.
+     hipDeviceAttributeMaxTexture2DLayered,              ///< Cuda only. Maximum dimensions of 2D layered texture.
+     hipDeviceAttributeMaxTexture2DLinear,               ///< Cuda only. Maximum dimensions (width, height, pitch) of 2D textures bound to pitched memory.
+     hipDeviceAttributeMaxTexture2DMipmap,               ///< Cuda only. Maximum dimensions of 2D mipmapped texture.
+     hipDeviceAttributeMaxTexture3DWidth,                ///< Maximum dimension width of 3D texture.
+     hipDeviceAttributeMaxTexture3DHeight,               ///< Maximum dimension height of 3D texture.
+     hipDeviceAttributeMaxTexture3DDepth,                ///< Maximum dimension depth of 3D texture.
+     hipDeviceAttributeMaxTexture3DAlt,                  ///< Cuda only. Maximum dimensions of alternate 3D texture.
+     hipDeviceAttributeMaxTextureCubemap,                ///< Cuda only. Maximum dimensions of Cubemap texture
+     hipDeviceAttributeMaxTextureCubemapLayered,         ///< Cuda only. Maximum dimensions of Cubemap layered texture.
+     hipDeviceAttributeMaxThreadsDim,                    ///< Maximum dimension of a block
+     hipDeviceAttributeMaxThreadsPerBlock,               ///< Maximum number of threads per block.
+     hipDeviceAttributeMaxThreadsPerMultiProcessor,      ///< Maximum resident threads per multiprocessor.
+     hipDeviceAttributeMaxPitch,                         ///< Maximum pitch in bytes allowed by memory copies
+     hipDeviceAttributeMemoryBusWidth,                   ///< Global memory bus width in bits.
+     hipDeviceAttributeMemoryClockRate,                  ///< Peak memory clock frequency in kilohertz.
+     hipDeviceAttributeComputeCapabilityMinor,           ///< Minor compute capability version number.
+     hipDeviceAttributeMultiGpuBoardGroupID,             ///< Cuda only. Unique ID of device group on the same multi-GPU board
+     hipDeviceAttributeMultiprocessorCount,              ///< Number of multiprocessors on the device.
+     hipDeviceAttributeName,                             ///< Device name.
+     hipDeviceAttributePageableMemoryAccess,             ///< Device supports coherently accessing pageable memory
+                                                         ///< without calling hipHostRegister on it
+     hipDeviceAttributePageableMemoryAccessUsesHostPageTables, ///< Device accesses pageable memory via the host's page tables
+     hipDeviceAttributePciBusId,                         ///< PCI Bus ID.
+     hipDeviceAttributePciDeviceId,                      ///< PCI Device ID.
+     hipDeviceAttributePciDomainID,                      ///< PCI Domain ID.
+     hipDeviceAttributePersistingL2CacheMaxSize,         ///< Cuda11 only. Maximum l2 persisting lines capacity in bytes
+     hipDeviceAttributeMaxRegistersPerBlock,             ///< 32-bit registers available to a thread block. This number is shared
+                                                         ///< by all thread blocks simultaneously resident on a multiprocessor.
+     hipDeviceAttributeMaxRegistersPerMultiprocessor,    ///< 32-bit registers available per block.
+     hipDeviceAttributeReservedSharedMemPerBlock,        ///< Cuda11 only. Shared memory reserved by CUDA driver per block.
+     hipDeviceAttributeMaxSharedMemoryPerBlock,          ///< Maximum shared memory available per block in bytes.
+     hipDeviceAttributeSharedMemPerBlockOptin,           ///< Cuda only. Maximum shared memory per block usable by special opt in.
+     hipDeviceAttributeSharedMemPerMultiprocessor,       ///< Cuda only. Shared memory available per multiprocessor.
+     hipDeviceAttributeSingleToDoublePrecisionPerfRatio, ///< Cuda only. Performance ratio of single precision to double precision.
+     hipDeviceAttributeStreamPrioritiesSupported,        ///< Cuda only. Whether to support stream priorities.
+     hipDeviceAttributeSurfaceAlignment,                 ///< Cuda only. Alignment requirement for surfaces
+     hipDeviceAttributeTccDriver,                        ///< Cuda only. Whether device is a Tesla device using TCC driver
+     hipDeviceAttributeTextureAlignment,                 ///< Alignment requirement for textures
+     hipDeviceAttributeTexturePitchAlignment,            ///< Pitch alignment requirement for 2D texture references bound to pitched memory;
+     hipDeviceAttributeTotalConstantMemory,              ///< Constant memory size in bytes.
+     hipDeviceAttributeTotalGlobalMem,                   ///< Global memory available on devicice.
+     hipDeviceAttributeUnifiedAddressing,                ///< Cuda only. An unified address space shared with the host.
+     hipDeviceAttributeUuid,                             ///< Cuda only. Unique ID in 16 byte.
+     hipDeviceAttributeWarpSize,                         ///< Warp size in threads.

-     hipDeviceAttributeMaxPitch,              ///< Maximum pitch in bytes allowed by memory copies
-     hipDeviceAttributeTextureAlignment,      ///< Alignment requirement for textures
-     hipDeviceAttributeTexturePitchAlignment, ///< Pitch alignment requirement for 2D texture references bound to pitched memory;
-     hipDeviceAttributeKernelExecTimeout,     ///< Run time limit for kernels executed on the device
-     hipDeviceAttributeCanMapHostMemory,      ///< Device can map host memory into device address space
-     hipDeviceAttributeEccEnabled,            ///< Device has ECC support enabled
+     hipDeviceAttributeCudaCompatibleEnd = 9999,
+     hipDeviceAttributeAmdSpecificBegin = 10000,

-     hipDeviceAttributeCooperativeMultiDeviceUnmatchedFunc,      ///< Supports cooperative launch on multiple
-                                                                 ///< devices with unmatched functions
-     hipDeviceAttributeCooperativeMultiDeviceUnmatchedGridDim,   ///< Supports cooperative launch on multiple
-                                                                 ///< devices with unmatched grid dimensions
-     hipDeviceAttributeCooperativeMultiDeviceUnmatchedBlockDim,  ///< Supports cooperative launch on multiple
-                                                                 ///< devices with unmatched block dimensions
-     hipDeviceAttributeCooperativeMultiDeviceUnmatchedSharedMem, ///< Supports cooperative launch on multiple
-                                                                 ///< devices with unmatched shared memories
-     hipDeviceAttributeAsicRevision,                             ///< Revision of the GPU in this device
-     hipDeviceAttributeManagedMemory,                            ///< Device supports allocating managed memory on this system
-     hipDeviceAttributeDirectManagedMemAccessFromHost,           ///< Host can directly access managed memory on
-                                                                 ///< the device without migration
-     hipDeviceAttributeConcurrentManagedAccess,                  ///< Device can coherently access managed memory
-                                                                 ///< concurrently with the CPU
-     hipDeviceAttributePageableMemoryAccess,                     ///< Device supports coherently accessing pageable memory
-                                                                 ///< without calling hipHostRegister on it
-     hipDeviceAttributePageableMemoryAccessUsesHostPageTables,   ///< Device accesses pageable memory via
-                                                                 ///< the host's page tables
-     hipDeviceAttributeCanUseStreamWaitValue                     ///< '1' if Device supports hipStreamWaitValue32() and
-                                                                 ///< hipStreamWaitValue64() , '0' otherwise.
+     hipDeviceAttributeClockInstructionRate = hipDeviceAttributeAmdSpecificBegin, ///< Frequency in khz of the timer used by the device-side "clock*"
+     hipDeviceAttributeArch,                                     ///< Device architecture
+     hipDeviceAttributeMaxSharedMemoryPerMultiprocessor,         ///< Maximum Shared Memory PerMultiprocessor.
+     hipDeviceAttributeGcnArch,                                  ///< Device gcn architecture
+     hipDeviceAttributeGcnArchName,                              ///< Device gcnArch name in 256 bytes
+     hipDeviceAttributeHdpMemFlushCntl,                          ///< Address of the HDP_MEM_COHERENCY_FLUSH_CNTL register
+     hipDeviceAttributeHdpRegFlushCntl,                          ///< Address of the HDP_REG_COHERENCY_FLUSH_CNTL register
+     hipDeviceAttributeCooperativeMultiDeviceUnmatchedFunc,      ///< Supports cooperative launch on multiple
+                                                                 ///< devices with unmatched functions
+     hipDeviceAttributeCooperativeMultiDeviceUnmatchedGridDim,   ///< Supports cooperative launch on multiple
+                                                                 ///< devices with unmatched grid dimensions
+     hipDeviceAttributeCooperativeMultiDeviceUnmatchedBlockDim,  ///< Supports cooperative launch on multiple
+                                                                 ///< devices with unmatched block dimensions
+     hipDeviceAttributeCooperativeMultiDeviceUnmatchedSharedMem, ///< Supports cooperative launch on multiple
+                                                                 ///< devices with unmatched shared memories
+     hipDeviceAttributeIsLargeBar,                               ///< Whether it is LargeBar
+     hipDeviceAttributeAsicRevision,                             ///< Revision of the GPU in this device
+     hipDeviceAttributeCanUseStreamWaitValue,                    ///< '1' if Device supports hipStreamWaitValue32() and
+                                                                 ///< hipStreamWaitValue64() , '0' otherwise.

+     hipDeviceAttributeAmdSpecificEnd = 19999,
+     hipDeviceAttributeVendorSpecificBegin = 20000,
+     // Extended attributes for vendors
} hipDeviceAttribute_t;

enum hipComputeMode {
```

### Known Issues

#### Incorrect dGPU Behavior When Using AMDVBFlash Tool

The AMDVBFlash tool, used for flashing the VBIOS image to a dGPU, does not
communicate with the ROM Controller when the driver is present. This is
because the driver, as part of its runtime power management feature, puts the
dGPU into a sleep state.

As a workaround, users can set the kernel parameter amdgpu.runpm=0, which
temporarily disables the runtime power management feature of the driver and
dynamically changes some power control-related sysfs files.
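
As an illustration of applying this workaround (the file path is an example;
the exact mechanism depends on your distribution), the parameter can be set
through a modprobe configuration fragment and takes effect on the next driver
load:

```
# /etc/modprobe.d/amdgpu.conf  (example path)
options amdgpu runpm=0
```

Remove the entry to restore runtime power management after flashing.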
#### Issue with START Timestamp in ROCProfiler

Users may encounter an issue with the enabled timestamp functionality for
monitoring one or multiple counters. ROCProfiler outputs the following four
timestamps for each kernel:

- Dispatch
- Start
- End
- Complete

##### Issue

This defect is related to the Start timestamp functionality, which incorrectly
shows an earlier time than the Dispatch timestamp.

To reproduce the issue,

1. Enable timing using the --timestamp on flag.
2. Use the -i option with the input filename that contains the name of the
   counter(s) to monitor.
3. Run the program.
4. Check the output result file.

##### Current behavior

BeginNS is lower than DispatchNS, which is incorrect.

##### Expected behavior

The correct order is:

Dispatch < Start < End < Complete

Users cannot use ROCProfiler to measure the time spent on each kernel because
of the incorrect timestamp with counter collection enabled.

##### Recommended Workaround

Users are recommended to collect kernel execution timestamps without
monitoring counters, as follows:

1. Enable timing using the --timestamp on flag, and run the application.
2. Rerun the application using the -i option with the input filename that
   contains the name of the counter(s) to monitor, and save this to a
   different output file using the -o flag.
3. Check the output result file from step 1.
4. The order of timestamps correctly displays as:
   DispatchNS < BeginNS < EndNS < CompleteNS
5. Users can find the values of the collected counters in the output file
   generated in step 2.

#### Radeon Pro V620 and W6800 Workstation GPUs

##### No Support for SMI and ROCDebugger on SRIOV

System Management Interface (SMI) and ROCDebugger are not supported in the
SRIOV environment on any GPU. For more information, refer to the Systems
Management Interface documentation.

### Deprecations and Warnings

#### ROCm Libraries Changes – Deprecations and Deprecation Removal

- The hipFFT.h header is now provided only by the hipFFT package. Up to ROCm
  5.0, users would get hipFFT.h from the rocFFT package too.

- The GlobalPairwiseAMG class is now entirely removed; users should use the
  PairwiseAMG class instead.

- The rocsparse_spmm signature in 5.0 was changed to match that of
  rocsparse_spmm_ex. In 5.0, rocsparse_spmm_ex is still present, but
  deprecated. Signature diff for rocsparse_spmm:

rocsparse_spmm in 5.0

```h
rocsparse_status rocsparse_spmm(rocsparse_handle            handle,
                                rocsparse_operation         trans_A,
                                rocsparse_operation         trans_B,
                                const void*                 alpha,
                                const rocsparse_spmat_descr mat_A,
                                const rocsparse_dnmat_descr mat_B,
                                const void*                 beta,
                                const rocsparse_dnmat_descr mat_C,
                                rocsparse_datatype          compute_type,
                                rocsparse_spmm_alg          alg,
                                rocsparse_spmm_stage        stage,
                                size_t*                     buffer_size,
                                void*                       temp_buffer);
```

rocsparse_spmm in 4.0

```h
rocsparse_status rocsparse_spmm(rocsparse_handle            handle,
                                rocsparse_operation         trans_A,
                                rocsparse_operation         trans_B,
                                const void*                 alpha,
                                const rocsparse_spmat_descr mat_A,
                                const rocsparse_dnmat_descr mat_B,
                                const void*                 beta,
                                const rocsparse_dnmat_descr mat_C,
                                rocsparse_datatype          compute_type,
                                rocsparse_spmm_alg          alg,
                                size_t*                     buffer_size,
                                void*                       temp_buffer);
```

#### HIP API Deprecations and Warnings

##### Warning - Arithmetic Operators of HIP Complex and Vector Types

In this release, arithmetic operators of HIP complex and vector types are
deprecated.

- As alternatives to arithmetic operators of HIP complex types, users can use
  arithmetic operators of `std::complex` types.

- As alternatives to arithmetic operators of HIP vector types, users can use
  the operators of the native clang vector type associated with the data
  member of HIP vector types.

During the deprecation, two macros, `_HIP_ENABLE_COMPLEX_OPERATORS` and
`_HIP_ENABLE_VECTOR_OPERATORS`, are provided to allow users to conditionally
enable arithmetic operators of HIP complex or vector types.

Note that the two macros are mutually exclusive and, by default, set to Off.

The arithmetic operators of HIP complex and vector types will be removed in a
future release.

Refer to the HIP API Guide for more information.

#### Warning - Compiler-Generated Code Object Version 4 Deprecation

Support for loading compiler-generated code object version 4 will be
deprecated in a future release, with no release announcement, and replaced
with code object version 5 as the default.

The current default is code object version 4.

#### Warning - MIOpenTensile Deprecation

MIOpenTensile will be deprecated in a future release.

default.xml (63 lines changed)

```diff
@@ -1,7 +1,7 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <manifest>
 <remote name="roc-github"
-fetch="http://github.com/RadeonOpenCompute/" />
+fetch="https://github.com/RadeonOpenCompute/" />
 <remote name="rocm-devtools"
 fetch="https://github.com/ROCm-Developer-Tools/" />
 <remote name="rocm-swplat"
@@ -12,16 +12,16 @@ fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
 fetch="https://github.com/GPUOpen-Tools/" />
 <remote name="KhronosGroup"
 fetch="https://github.com/KhronosGroup/" />
-<default revision="refs/tags/rocm-3.10.0"
-remote="roc-github"
-sync-c="true"
-sync-j="4" />
+<default revision="refs/tags/rocm-5.5.1"
+remote="roc-github"
+sync-c="true"
+sync-j="4" />
 <!--list of projects for ROCM-->
 <project name="ROCK-Kernel-Driver" />
 <project name="ROCT-Thunk-Interface" />
 <project name="ROCR-Runtime" />
-<project name="ROC-smi" />
 <project name="rocm_smi_lib" />
+<project name="rocm-core" />
 <project name="rocm-cmake" />
 <project name="rocminfo" />
 <project name="rocprofiler" remote="rocm-devtools" />
@@ -31,9 +31,11 @@ sync-j="4" />
 <project name="clang-ocl" />
 <!--HIP Projects-->
 <project name="HIP" remote="rocm-devtools" />
+<project name="hipamd" remote="rocm-devtools" />
 <project name="HIP-Examples" remote="rocm-devtools" />
 <project name="ROCclr" remote="rocm-devtools" />
 <project name="HIPIFY" remote="rocm-devtools" />
+<project name="HIPCC" remote="rocm-devtools" />
 <!-- The following projects are all associated with the AMDGPU LLVM compiler -->
 <project name="llvm-project" />
 <project name="ROCm-Device-Libs" />
@@ -47,40 +49,31 @@ sync-j="4" />
 <project name="ROCgdb" remote="rocm-devtools" />
 <project name="ROCdbgapi" remote="rocm-devtools" />
 <!-- ROCm Libraries -->
-<project name="rdc" remote="roc-github" />
-<project name="rocBLAS" remote="rocm-swplat" />
-<project name="hipBLAS" remote="rocm-swplat" />
-<project name="rocFFT" remote="rocm-swplat" />
-<project name="rocRAND" remote="rocm-swplat" />
-<project name="rocSPARSE" remote="rocm-swplat" />
-<project name="rocSOLVER" remote="rocm-swplat" />
-<project name="hipSPARSE" remote="rocm-swplat" />
-<project name="rocALUTION" remote="rocm-swplat" />
+<project name="rdc" />
+<project groups="mathlibs" name="rocBLAS" remote="rocm-swplat" />
+<project groups="mathlibs" name="Tensile" remote="rocm-swplat" />
+<project groups="mathlibs" name="hipBLAS" remote="rocm-swplat" />
+<project groups="mathlibs" name="rocFFT" remote="rocm-swplat" />
+<project groups="mathlibs" name="hipFFT" remote="rocm-swplat" />
+<project groups="mathlibs" name="rocRAND" remote="rocm-swplat" />
+<project groups="mathlibs" name="rocSPARSE" remote="rocm-swplat" />
+<project groups="mathlibs" name="rocSOLVER" remote="rocm-swplat" />
+<project groups="mathlibs" name="hipSOLVER" remote="rocm-swplat" />
+<project groups="mathlibs" name="hipSPARSE" remote="rocm-swplat" />
+<project groups="mathlibs" name="rocALUTION" remote="rocm-swplat" />
 <project name="MIOpenGEMM" remote="rocm-swplat" />
 <project name="MIOpen" remote="rocm-swplat" />
-<project name="rccl" remote="rocm-swplat" />
+<project groups="mathlibs" name="rccl" remote="rocm-swplat" />
 <project name="MIVisionX" remote="gpuopen-libs" />
-<project name="rocThrust" remote="rocm-swplat" />
-<project name="hipCUB" remote="rocm-swplat" />
-<project name="rocPRIM" remote="rocm-swplat" />
+<project groups="mathlibs" name="rocThrust" remote="rocm-swplat" />
+<project groups="mathlibs" name="hipCUB" remote="rocm-swplat" />
+<project groups="mathlibs" name="rocPRIM" remote="rocm-swplat" />
+<project groups="mathlibs" name="rocWMMA" remote="rocm-swplat" />
 <project name="hipfort" remote="rocm-swplat" />
 <project name="AMDMIGraphX" remote="rocm-swplat" />
 <project name="ROCmValidationSuite" remote="rocm-devtools" />
 <!-- Projects for AOMP -->
 <project name="ROCT-Thunk-Interface" path="aomp/roct-thunk-interface" />
 <project name="ROCR-Runtime" path="aomp/rocr-runtime" />
 <project name="ROCm-Device-Libs" path="aomp/rocm-device-libs" />
 <project name="ROCm-CompilerSupport" path="aomp/rocm-compilersupport" />
 <project name="rocminfo" path="aomp/rocminfo" />
 <project name="HIP" path="aomp/hip-on-vdi" remote="rocm-devtools" />
 <project name="aomp" path="aomp/aomp" remote="rocm-devtools" />
 <project name="aomp-extras" path="aomp/aomp-extras" remote="rocm-devtools" />
 <project name="flang" path="aomp/flang" remote="rocm-devtools" />
 <project name="amd-llvm-project" path="aomp/amd-llvm-project" remote="rocm-devtools" />
 <project name="ROCclr" path="aomp/vdi" remote="rocm-devtools" />
 <project name="ROCm-OpenCL-Runtime" path="aomp/opencl-on-vdi" />
 <!-- Projects for OpenMP-Extras -->
-<project name="aomp" path="openmp-extras/aomp" remote="rocm-devtools" revision="refs/tags/rocm-uc-3.10.0" />
-<project name="aomp-extras" path="openmp-extras/aomp-extras" remote="rocm-devtools" revision="refs/tags/rocm-uc-3.10.0" />
-<project name="flang" path="openmp-extras/flang" remote="rocm-devtools" revision="refs/tags/rocm-uc-3.10.0" />
+<project name="aomp" path="openmp-extras/aomp" remote="rocm-devtools" />
+<project name="aomp-extras" path="openmp-extras/aomp-extras" remote="rocm-devtools" />
+<project name="flang" path="openmp-extras/flang" remote="rocm-devtools" />
 </manifest>
```

6
docs/404.md
Normal file
@@ -0,0 +1,6 @@
|
||||
# 404 Page Not Found
|
||||
|
||||
Page could not be found.
|
||||
|
||||
Return to [home](./index) or use the links in the sidebar to find what you are
looking for.
|
||||
74
docs/about.md
Normal file
@@ -0,0 +1,74 @@
|
||||
# About ROCm Documentation
|
||||
|
||||
ROCm documentation is made available under open source [licenses](licensing.md).
|
||||
Documentation is built using open source toolchains. Contributions to our
|
||||
documentation are encouraged and welcome. As a contributor, please familiarize
|
||||
yourself with our documentation toolchain.
|
||||
|
||||
## ReadTheDocs
|
||||
|
||||
[ReadTheDocs](https://docs.readthedocs.io/en/stable/) is the front end for our
documentation, that is, the service that builds and serves our HTML-based
documentation to end users.
|
||||
|
||||
## Doxygen
|
||||
|
||||
[Doxygen](https://www.doxygen.nl/) is the most common inline code documentation
standard. ROCm projects use Doxygen for public API documentation (unless the
upstream project uses a different tool).
|
||||
|
||||
## Sphinx
|
||||
|
||||
[Sphinx](https://www.sphinx-doc.org/en/master/) is a documentation generator
originally created for Python. It is now widely used in the open source
community. Sphinx originally supported RST-based documentation; Markdown
support is now available. ROCm documentation plans to default to Markdown for
new projects. Existing projects using RST are under no obligation to convert to
Markdown. New projects that believe Markdown is not suitable should contact the
documentation team before selecting RST.
|
||||
|
||||
### MyST
|
||||
|
||||
[Markedly Structured Text (MyST)](https://myst-tools.org/docs/spec) is an extended
|
||||
flavor of Markdown ([CommonMark](https://commonmark.org/)) influenced by reStructuredText (RST) and Sphinx.
|
||||
It is integrated via [`myst-parser`](https://myst-parser.readthedocs.io/en/latest/).
|
||||
A cheat sheet that showcases how to use the MyST syntax is available over at [the Jupyter
|
||||
reference](https://jupyterbook.org/en/stable/reference/cheatsheet.html).
|
||||
|
||||
### Sphinx Theme
|
||||
|
||||
ROCm uses the
[Sphinx Book Theme](https://sphinx-book-theme.readthedocs.io/en/latest/), the
theme used by Jupyter Book. ROCm documentation applies some customization on
top of it, including a custom header and footer. A custom ROCm theme is a
future documentation goal.
|
||||
|
||||
### Sphinx Design
|
||||
|
||||
[Sphinx Design](https://sphinx-design.readthedocs.io/en/latest/index.html) is
an extension for Sphinx-based websites that adds design functionality. ROCm
documentation uses Sphinx Design for grids, cards, and synchronized tabs.
Other features may be used in the future.
|
||||
|
||||
### Sphinx External TOC
|
||||
|
||||
ROCm uses
[sphinx-external-toc](https://sphinx-external-toc.readthedocs.io/en/latest/intro.html)
for its navigation. This extension defines the left navigation menu in a YAML
file, which makes it easy for scripts to operate on the navigation. Please
transition to this file for your project's navigation. See the `_toc.yml.in`
file in the `docs/sphinx` folder of this repository for an example.
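A minimal navigation file of the kind `sphinx-external-toc` consumes might look
like the following sketch (the entry names here are illustrative placeholders,
not the actual ROCm navigation):

```shell
# Write a minimal sphinx-external-toc navigation file.
# Entry names below are illustrative placeholders, not the real ROCm TOC.
cat > _toc.yml <<'EOF'
root: index
subtrees:
  - caption: Deploy
    entries:
      - file: deploy/linux/index
      - file: deploy/docker
EOF
cat _toc.yml
```

Because the menu is plain YAML, scripts (such as the one generating
`_toc.yml.in`) can rewrite it programmatically.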
|
||||
|
||||
### Breathe
|
||||
|
||||
Sphinx uses [Breathe](https://www.breathe-doc.org/) to integrate Doxygen
|
||||
content.
|
||||
|
||||
## `rocm-docs-core` pip package
|
||||
|
||||
[rocm-docs-core](https://github.com/RadeonOpenCompute/rocm-docs-core) is an AMD
|
||||
maintained project that applies customization for our documentation. This
|
||||
project is the tool most ROCm repositories will use as part of the documentation
|
||||
build.
|
||||
76
docs/conf.py
Normal file
@@ -0,0 +1,76 @@
|
||||
# Configuration file for the Sphinx documentation builder.
|
||||
#
|
||||
# This file only contains a selection of the most common options. For a full
|
||||
# list see the documentation:
|
||||
# https://www.sphinx-doc.org/en/master/usage/configuration.html
|
||||
|
||||
import shutil
|
||||
|
||||
from rocm_docs import ROCmDocs
|
||||
|
||||
|
||||
shutil.copy2('../CONTRIBUTING.md','./contributing.md')
|
||||
shutil.copy2('../RELEASE.md','./release.md')
|
||||
# Keep capitalization due to similar linking on GitHub's markdown preview.
|
||||
shutil.copy2('../CHANGELOG.md','./CHANGELOG.md')
|
||||
|
||||
# configurations for PDF output by Read the Docs
|
||||
project = "ROCm Documentation"
|
||||
author = "Advanced Micro Devices, Inc."
|
||||
copyright = "Copyright (c) 2023 Advanced Micro Devices, Inc. All rights reserved."
|
||||
version = "5.0.0"
|
||||
release = "5.0.0"
|
||||
|
||||
setting_all_article_info = True
|
||||
all_article_info_os = ["linux"]
|
||||
all_article_info_author = ""
|
||||
|
||||
# pages with specific settings
|
||||
article_pages = [
|
||||
{"file":"deploy/linux/index", "os":["linux"]},
|
||||
{"file":"deploy/linux/install_overview", "os":["linux"]},
|
||||
{"file":"deploy/linux/prerequisites", "os":["linux"]},
|
||||
{"file":"deploy/linux/quick_start", "os":["linux"]},
|
||||
{"file":"deploy/linux/install", "os":["linux"]},
|
||||
{"file":"deploy/linux/upgrade", "os":["linux"]},
|
||||
{"file":"deploy/linux/uninstall", "os":["linux"]},
|
||||
{"file":"deploy/linux/package_manager_integration", "os":["linux"]},
|
||||
{"file":"deploy/docker", "os":["linux"]},
|
||||
|
||||
{"file":"release/gpu_os_support", "os":["linux"]},
|
||||
{"file":"release/docker_support_matrix", "os":["linux"]},
|
||||
|
||||
{"file":"reference/gpu_libraries/communication", "os":["linux"]},
|
||||
{"file":"reference/ai_tools", "os":["linux"]},
|
||||
{"file":"reference/management_tools", "os":["linux"]},
|
||||
{"file":"reference/validation_tools", "os":["linux"]},
|
||||
{"file":"reference/framework_compatibility/framework_compatibility", "os":["linux"]},
|
||||
{"file":"reference/computer_vision", "os":["linux"]},
|
||||
|
||||
{"file":"how_to/deep_learning_rocm", "os":["linux"]},
|
||||
{"file":"how_to/gpu_aware_mpi", "os":["linux"]},
|
||||
{"file":"how_to/magma_install/magma_install", "os":["linux"]},
|
||||
{"file":"how_to/pytorch_install/pytorch_install", "os":["linux"]},
|
||||
{"file":"how_to/system_debugging", "os":["linux"]},
|
||||
{"file":"how_to/tensorflow_install/tensorflow_install", "os":["linux"]},
|
||||
|
||||
{"file":"examples/machine_learning", "os":["linux"]},
|
||||
{"file":"examples/inception_casestudy/inception_casestudy", "os":["linux"]},
|
||||
|
||||
{"file":"understand/file_reorg", "os":["linux"]},
|
||||
|
||||
{"file":"understand/isv_deployment_win", "os":["windows"]},
|
||||
]
|
||||
|
||||
external_toc_path = "./sphinx/_toc.yml"
|
||||
|
||||
docs_core = ROCmDocs("ROCm 5.0.0 Documentation Home")
|
||||
docs_core.setup()
|
||||
|
||||
external_projects_current_project = "rocm"
|
||||
|
||||
for sphinx_var in ROCmDocs.SPHINX_VARS:
|
||||
globals()[sphinx_var] = getattr(docs_core, sphinx_var)
|
||||
html_theme_options = {
|
||||
"link_main_doc": False
|
||||
}
|
||||
BIN
docs/data/deploy/linux/image.001.png
Normal file
|
After Width: | Height: | Size: 939 KiB |
BIN
docs/data/deploy/linux/image.002.png
Normal file
|
After Width: | Height: | Size: 537 KiB |
BIN
docs/data/deploy/linux/image.003.png
Normal file
|
After Width: | Height: | Size: 292 KiB |
BIN
docs/data/deploy/linux/image.004.png
Normal file
|
After Width: | Height: | Size: 1.3 MiB |
BIN
docs/data/deploy/quick_start_windows/AMD-Display-Driver.png
Normal file
|
After Width: | Height: | Size: 163 KiB |
BIN
docs/data/deploy/quick_start_windows/AMD-Logo.png
Normal file
|
After Width: | Height: | Size: 2.1 KiB |
BIN
docs/data/deploy/quick_start_windows/BitCode-Profiler.png
Normal file
|
After Width: | Height: | Size: 34 KiB |
BIN
docs/data/deploy/quick_start_windows/DeSelectAll.png
Normal file
|
After Width: | Height: | Size: 183 KiB |
BIN
docs/data/deploy/quick_start_windows/HIP-Libraries.png
Normal file
|
After Width: | Height: | Size: 40 KiB |
BIN
docs/data/deploy/quick_start_windows/HIP-Ray-Tracing.png
Normal file
|
After Width: | Height: | Size: 40 KiB |
BIN
docs/data/deploy/quick_start_windows/HIP-Runtime-Compiler.png
Normal file
|
After Width: | Height: | Size: 36 KiB |
BIN
docs/data/deploy/quick_start_windows/HIP-SDK-Core.png
Normal file
|
After Width: | Height: | Size: 38 KiB |
BIN
docs/data/deploy/quick_start_windows/Installation-Complete.png
Normal file
|
After Width: | Height: | Size: 407 KiB |
BIN
docs/data/deploy/quick_start_windows/Installation.png
Normal file
|
After Width: | Height: | Size: 465 KiB |
BIN
docs/data/deploy/quick_start_windows/Installer-Window.png
Normal file
|
After Width: | Height: | Size: 207 KiB |
BIN
docs/data/deploy/quick_start_windows/Loading-Window.png
Normal file
|
After Width: | Height: | Size: 461 KiB |
BIN
docs/data/deploy/quick_start_windows/LoadingWindow.png
Normal file
|
After Width: | Height: | Size: 461 KiB |
BIN
docs/data/deploy/quick_start_windows/Setup-Icon.png
Normal file
|
After Width: | Height: | Size: 3.5 KiB |
BIN
docs/data/deploy/quick_start_windows/Uninstallation.png
Normal file
|
After Width: | Height: | Size: 412 KiB |
BIN
docs/data/deploy/quick_start_windows/Windows-Security.png
Normal file
|
After Width: | Height: | Size: 68 KiB |
1
docs/data/deploy/quick_start_windows/image planning
Normal file
@@ -0,0 +1 @@
|
||||
|
||||
BIN
docs/data/framework_compatibility/with_pytorch.png
Normal file
|
After Width: | Height: | Size: 10 KiB |
BIN
docs/data/framework_compatibility/with_tensorflow.png
Normal file
|
After Width: | Height: | Size: 13 KiB |
BIN
docs/data/how_to/gpu_enabled_mpi_1.png
Normal file
|
After Width: | Height: | Size: 13 KiB |
BIN
docs/data/how_to/magma_install/image.005.png
Normal file
|
After Width: | Height: | Size: 88 KiB |
BIN
docs/data/how_to/magma_install/image.006.png
Normal file
|
After Width: | Height: | Size: 32 KiB |
BIN
docs/data/how_to/tuning_guides/image.001.png
Normal file
|
After Width: | Height: | Size: 99 KiB |
BIN
docs/data/how_to/tuning_guides/image.002.png
Normal file
|
After Width: | Height: | Size: 130 KiB |
BIN
docs/data/how_to/tuning_guides/image.003.png
Normal file
|
After Width: | Height: | Size: 21 KiB |
BIN
docs/data/how_to/tuning_guides/image.004.png
Normal file
|
After Width: | Height: | Size: 8.8 KiB |
BIN
docs/data/how_to/tuning_guides/image.005.png
Normal file
|
After Width: | Height: | Size: 14 KiB |
BIN
docs/data/how_to/tuning_guides/image.006.png
Normal file
|
After Width: | Height: | Size: 25 KiB |
BIN
docs/data/how_to/tuning_guides/image.007.png
Normal file
|
After Width: | Height: | Size: 144 KiB |
BIN
docs/data/how_to/tuning_guides/image.008.png
Normal file
|
After Width: | Height: | Size: 17 KiB |
BIN
docs/data/how_to/tuning_guides/image.009.png
Normal file
|
After Width: | Height: | Size: 47 KiB |
BIN
docs/data/how_to/tuning_guides/image.010.png
Normal file
|
After Width: | Height: | Size: 41 KiB |
BIN
docs/data/how_to/tuning_guides/image.011.png
Normal file
|
After Width: | Height: | Size: 14 KiB |
BIN
docs/data/how_to/tuning_guides/image.012.png
Normal file
|
After Width: | Height: | Size: 19 KiB |
BIN
docs/data/how_to/tuning_guides/image.013.png
Normal file
|
After Width: | Height: | Size: 57 KiB |
BIN
docs/data/how_to/tuning_guides/image.014.png
Normal file
|
After Width: | Height: | Size: 36 KiB |
BIN
docs/data/how_to/tuning_guides/image.015.png
Normal file
|
After Width: | Height: | Size: 102 KiB |
BIN
docs/data/how_to/tuning_guides/image.016.png
Normal file
|
After Width: | Height: | Size: 114 KiB |
BIN
docs/data/reference/gpu_arch/image.001.png
Normal file
|
After Width: | Height: | Size: 103 KiB |
BIN
docs/data/reference/gpu_arch/image.002.png
Normal file
|
After Width: | Height: | Size: 59 KiB |
BIN
docs/data/reference/gpu_arch/image.003.png
Normal file
|
After Width: | Height: | Size: 41 KiB |
BIN
docs/data/reference/gpu_arch/image.004.png
Normal file
|
After Width: | Height: | Size: 39 KiB |
BIN
docs/data/reference/gpu_arch/image.005.png
Normal file
|
After Width: | Height: | Size: 47 KiB |
BIN
docs/data/reference/gpu_arch/image.006.png
Normal file
|
After Width: | Height: | Size: 33 KiB |
BIN
docs/data/understand/deep_learning/Deep Learning Image 1.png
Normal file
|
After Width: | Height: | Size: 58 KiB |
|
After Width: | Height: | Size: 46 KiB |
BIN
docs/data/understand/deep_learning/Machine Learning.png
Normal file
|
After Width: | Height: | Size: 64 KiB |
BIN
docs/data/understand/deep_learning/Matrix-1.png
Normal file
|
After Width: | Height: | Size: 28 KiB |
BIN
docs/data/understand/deep_learning/Matrix-2.png
Normal file
|
After Width: | Height: | Size: 18 KiB |
BIN
docs/data/understand/deep_learning/Matrix-3.png
Normal file
|
After Width: | Height: | Size: 21 KiB |
BIN
docs/data/understand/deep_learning/Model In.png
Normal file
|
After Width: | Height: | Size: 91 KiB |
BIN
docs/data/understand/deep_learning/Pytorch 11.png
Normal file
|
After Width: | Height: | Size: 88 KiB |
BIN
docs/data/understand/deep_learning/Text Classification 1.png
Normal file
|
After Width: | Height: | Size: 30 KiB |
BIN
docs/data/understand/deep_learning/Text Classification 2.png
Normal file
|
After Width: | Height: | Size: 14 KiB |
BIN
docs/data/understand/deep_learning/Text Classification 3.png
Normal file
|
After Width: | Height: | Size: 66 KiB |
BIN
docs/data/understand/deep_learning/Text Classification 4.png
Normal file
|
After Width: | Height: | Size: 36 KiB |
BIN
docs/data/understand/deep_learning/Text Classification 5.png
Normal file
|
After Width: | Height: | Size: 87 KiB |
BIN
docs/data/understand/deep_learning/TextClassification_3.png
Normal file
|
After Width: | Height: | Size: 66 KiB |
BIN
docs/data/understand/deep_learning/TextClassification_4.png
Normal file
|
After Width: | Height: | Size: 36 KiB |
BIN
docs/data/understand/deep_learning/TextClassification_5.png
Normal file
|
After Width: | Height: | Size: 87 KiB |
BIN
docs/data/understand/deep_learning/TextClassification_6.png
Normal file
|
After Width: | Height: | Size: 20 KiB |
BIN
docs/data/understand/deep_learning/TextClassification_7.png
Normal file
|
After Width: | Height: | Size: 18 KiB |
BIN
docs/data/understand/deep_learning/amd_logo.png
Normal file
|
After Width: | Height: | Size: 3.3 KiB |
BIN
docs/data/understand/deep_learning/image.018.png
Normal file
|
After Width: | Height: | Size: 42 KiB |
BIN
docs/data/understand/deep_learning/inception_v3.png
Normal file
|
After Width: | Height: | Size: 64 KiB |
BIN
docs/data/understand/deep_learning/mnist 4.png
Normal file
|
After Width: | Height: | Size: 9.1 KiB |
BIN
docs/data/understand/deep_learning/mnist 5.png
Normal file
|
After Width: | Height: | Size: 4.8 KiB |
BIN
docs/data/understand/deep_learning/mnist_1.png
Normal file
|
After Width: | Height: | Size: 22 KiB |
BIN
docs/data/understand/deep_learning/mnist_2.png
Normal file
|
After Width: | Height: | Size: 69 KiB |
BIN
docs/data/understand/deep_learning/mnist_3.png
Normal file
|
After Width: | Height: | Size: 9.8 KiB |
BIN
docs/data/understand/deep_learning/mnist_4.png
Normal file
|
After Width: | Height: | Size: 9.1 KiB |
BIN
docs/data/understand/deep_learning/mnist_5.png
Normal file
|
After Width: | Height: | Size: 4.8 KiB |
90
docs/deploy/docker.md
Normal file
@@ -0,0 +1,90 @@
|
||||
# Deploy ROCm Docker containers
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Docker containers share the kernel with the host operating system; therefore,
the ROCm kernel-mode driver must be installed on the host. Please refer to
{ref}`using-the-package-manager` for installing `amdgpu-dkms`. The other
user-space parts of the ROCm stack (such as the HIP runtime or math libraries)
are loaded from the container image and don't need to be installed on the host.
|
||||
|
||||
(docker-access-gpus-in-container)=
|
||||
|
||||
## Accessing GPUs in containers
|
||||
|
||||
To run applications using HIP, OpenCL, or OpenMP offloading in a container,
explicit access to the GPUs must be granted.
|
||||
|
||||
The ROCm runtimes make use of multiple device files:
|
||||
|
||||
- `/dev/kfd`: the main compute interface shared by all GPUs
|
||||
- `/dev/dri/renderD<node>`: direct rendering interface (DRI) devices for each
|
||||
GPU. **`<node>`** is a number for each card in the system starting from 128.
|
||||
|
||||
Expose these devices to a container using the
[`--device`](https://docs.docker.com/engine/reference/commandline/run/#device)
option. For example, to allow access to all GPUs, expose `/dev/kfd` and all
`/dev/dri/renderD` devices:
|
||||
|
||||
```shell
|
||||
docker run --device /dev/kfd --device /dev/dri/renderD128 --device /dev/dri/renderD129 ...
|
||||
```
|
||||
|
||||
More conveniently, instead of listing all devices, the entire `/dev/dri` folder
|
||||
can be exposed to the new container:
|
||||
|
||||
```shell
|
||||
docker run --device /dev/kfd --device /dev/dri
|
||||
```
|
||||
|
||||
Note that this gives more access than strictly required, as it also exposes the
|
||||
other device files found in that folder to the container.
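To see which render nodes exist on a given host before deciding what to
expose, a quick check such as the following can help (this assumes the
`amdgpu` driver is loaded; the fallback message is only for machines without
GPUs):

```shell
# List the render nodes on the host; there is one per GPU, numbered from 128.
ls /dev/dri/renderD* 2>/dev/null || echo "no render nodes found"

# Map render nodes to PCI devices to identify individual GPUs (if available).
ls -l /dev/dri/by-path/ 2>/dev/null || true
```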
|
||||
|
||||
(docker-restrict-gpus)=
|
||||
|
||||
### Restricting a container to a subset of the GPUs
|
||||
|
||||
If a `/dev/dri/renderD` device is not exposed to a container, then the
container cannot use the GPU associated with it; this makes it possible to
restrict a container to any subset of devices.
|
||||
|
||||
For example, to allow the container to access the first and third GPU, start
it as follows:
|
||||
|
||||
```shell
|
||||
docker run --device /dev/kfd --device /dev/dri/renderD128 --device /dev/dri/renderD130 <image>
|
||||
```
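Since render nodes are numbered consecutively from 128, the `--device` flags
for an arbitrary subset of GPUs can be generated with a small helper
(hypothetical, shown only for illustration):

```shell
# Build --device flags for a chosen subset of GPUs (indices counted from 0).
# Hypothetical helper for illustration; render nodes start at renderD128.
gpus="0 2"                     # e.g. the first and third GPU
flags="--device /dev/kfd"
for i in $gpus; do
  flags="$flags --device /dev/dri/renderD$((128 + i))"
done
echo "$flags"
# Then: docker run $flags <image>
```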
|
||||
|
||||
### Additional Options
|
||||
|
||||
The performance of an application can vary depending on the assignment of GPUs
and CPUs to the task. Typically, `numactl` is installed as part of many HPC
applications to provide GPU/CPU mappings. The following Docker runtime option
supports memory mapping and can improve performance:
|
||||
|
||||
```shell
|
||||
--security-opt seccomp=unconfined
|
||||
```
|
||||
|
||||
This option is recommended for Docker Containers running HPC applications.
|
||||
|
||||
```shell
|
||||
docker run --device /dev/kfd --device /dev/dri --security-opt seccomp=unconfined ...
|
||||
```
|
||||
|
||||
## Docker images in the ROCm ecosystem
|
||||
|
||||
### Base images
|
||||
|
||||
<https://github.com/RadeonOpenCompute/ROCm-docker> hosts images useful for users
|
||||
wishing to build their own containers leveraging ROCm. The built images are
|
||||
available from [Docker Hub](https://hub.docker.com/u/rocm). In particular,
|
||||
`rocm/rocm-terminal` is a small image with the prerequisites to build HIP
|
||||
applications, but does not include any libraries.
|
||||
|
||||
### Applications
|
||||
|
||||
AMD provides pre-built images for various GPU-ready applications through its
|
||||
Infinity Hub at <https://www.amd.com/en/technologies/infinity-hub>.
|
||||
Examples for invoking each application and suggested parameters used for
|
||||
benchmarking are also provided there.
|
||||
53
docs/deploy/linux/index.md
Normal file
@@ -0,0 +1,53 @@
|
||||
# Deploy ROCm on Linux
|
||||
|
||||
Start with {doc}`/deploy/linux/quick_start` or follow the detailed
|
||||
instructions below.
|
||||
|
||||
## Prepare to Install
|
||||
|
||||
::::{grid} 1 1 2 2
|
||||
:gutter: 1
|
||||
|
||||
:::{grid-item-card} Prerequisites
|
||||
:link: prerequisites
|
||||
:link-type: doc
|
||||
|
||||
The prerequisites page lists the required steps *before* installation.
|
||||
:::
|
||||
|
||||
:::{grid-item-card} Install Choices
|
||||
:link: install_overview
|
||||
:link-type: doc
|
||||
|
||||
Package manager vs AMDGPU Installer
|
||||
|
||||
Standard Packages vs Multi-Version Packages
|
||||
:::
|
||||
|
||||
::::
|
||||
|
||||
## Choose your install method
|
||||
|
||||
::::{grid} 1 1 2 2
|
||||
:gutter: 1
|
||||
|
||||
:::{grid-item-card} Package Manager
|
||||
:link: os-native/index
|
||||
:link-type: doc
|
||||
|
||||
Directly use your distribution's package manager to install ROCm.
|
||||
:::
|
||||
|
||||
:::{grid-item-card} AMDGPU Installer
|
||||
:link: installer/index
|
||||
:link-type: doc
|
||||
|
||||
Use an installer tool that orchestrates changes via the package
|
||||
manager.
|
||||
:::
|
||||
|
||||
::::
|
||||
|
||||
## See Also
|
||||
|
||||
- {doc}`/release/gpu_os_support`
|
||||
71
docs/deploy/linux/install_overview.md
Normal file
@@ -0,0 +1,71 @@
|
||||
# ROCm Installation Options (Linux)
|
||||
|
||||
Users installing ROCm must choose between various installation options. A new
|
||||
user should follow the [Quick Start guide](./quick_start).
|
||||
|
||||
## Package Manager versus AMDGPU Installer?
|
||||
|
||||
ROCm supports two methods for installation:
|
||||
|
||||
- Directly using the Linux distribution's package manager
|
||||
- The `amdgpu-install` script
|
||||
|
||||
There is no difference in the final installation state when choosing either
|
||||
option.
|
||||
|
||||
Using the distribution's package manager lets the user install, upgrade, and
uninstall using familiar commands and workflows. Third-party ecosystem support
is the same as with your OS package manager.
|
||||
|
||||
The `amdgpu-install` script is a wrapper around the package manager. The
script installs the same packages as the package manager method.
|
||||
|
||||
The installer automates the installation process for the AMDGPU
|
||||
and ROCm stack. It handles the complete installation process
|
||||
for ROCm, including setting up the repository, cleaning the system, updating,
|
||||
and installing the desired drivers and meta-packages. Users who are
|
||||
less familiar with the package manager can choose this method for ROCm
|
||||
installation.
|
||||
|
||||
(installation-types)=
|
||||
|
||||
## Single Version ROCm install versus Multi-Version
|
||||
|
||||
ROCm packages carry both a package-specific semantic version and a ROCm
release version.
|
||||
|
||||
### Single-version Installation
|
||||
|
||||
The single-version ROCm installation refers to the following:
|
||||
|
||||
- Installation of a single instance of the ROCm release on a system
|
||||
- Use of non-versioned ROCm meta-packages
|
||||
|
||||
### Multi-version Installation
|
||||
|
||||
The multi-version installation refers to the following:
|
||||
|
||||
- Installation of multiple instances of the ROCm stack on a system. Extending
|
||||
the package name and its dependencies with the release version adds the
|
||||
ability to support multiple versions of packages simultaneously.
|
||||
- Use of versioned ROCm meta-packages.
|
||||
|
||||
```{attention}
|
||||
ROCm packages that were previously installed from a single-version installation
|
||||
must be removed before proceeding with the multi-version installation to avoid
|
||||
conflicts.
|
||||
```
|
||||
|
||||
```{note}
|
||||
Multi-version installation is not available for the kernel-mode driver module, also referred to as AMDGPU.
|
||||
```
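As a sketch of the practical difference on a Debian-based system (the
meta-package names below are illustrative; exact names vary by release), a
multi-version setup installs each release under its own versioned prefix:

```shell
# Single-version: a non-versioned meta-package, upgraded in place, e.g.
#   sudo apt install rocm-dev
# Multi-version: release-suffixed meta-packages that can coexist, e.g.
#   sudo apt install rocm-dev5.0.0
# Each multi-version release lands under its own prefix such as /opt/rocm-5.0.0:
ls -d /opt/rocm-* 2>/dev/null || echo "no versioned ROCm installs found"
```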
|
||||
|
||||
The following image demonstrates the difference between single-version and
|
||||
multi-version ROCm installation types:
|
||||
|
||||
```{figure-md} install-types
|
||||
|
||||
<img src="/data/deploy/linux/image.001.png" alt="Diagram of single-version and multi-version ROCm installation types">
|
||||
|
||||
ROCm Installation Types
|
||||
```
|
||||
31
docs/deploy/linux/installer/index.md
Normal file
@@ -0,0 +1,31 @@
|
||||
# AMDGPU Install Script
|
||||
|
||||
::::{grid} 2 3 3 3
|
||||
:gutter: 1
|
||||
|
||||
:::{grid-item-card} Install
|
||||
:link: install
|
||||
:link-type: doc
|
||||
|
||||
How to install ROCm using the installer script.
|
||||
:::
|
||||
|
||||
:::{grid-item-card} Upgrade
|
||||
:link: upgrade
|
||||
:link-type: doc
|
||||
|
||||
Instructions for upgrading an existing ROCm installation.
|
||||
:::
|
||||
|
||||
:::{grid-item-card} Uninstall
|
||||
:link: uninstall
|
||||
:link-type: doc
|
||||
|
||||
Steps for removing ROCm packages, libraries, and tools.
|
||||
:::
|
||||
|
||||
::::
|
||||
|
||||
## See Also
|
||||
|
||||
- {doc}`/release/gpu_os_support`
|
||||