Compare commits

...

187 Commits

Author SHA1 Message Date
Roopa Malavally
4c8787087a Update README.md 2021-08-27 15:37:37 -07:00
Roopa Malavally
7cd85779c4 Update README.md 2021-08-27 15:31:42 -07:00
Aakash Sudhanwa
c676ff480e Update default.xml (#1567) 2021-08-27 15:26:48 -07:00
Roopa Malavally
6d19f5b6c1 Add files via upload 2021-08-27 15:24:56 -07:00
Roopa Malavally
4679e8ac87 Update README.md 2021-08-27 15:24:20 -07:00
Roopa Malavally
8a3209f985 Update README.md 2021-08-27 15:23:58 -07:00
Roopa Malavally
79d0d00b2a Update README.md 2021-08-27 15:23:18 -07:00
Roopa Malavally
db5121cdfe Update README.md 2021-08-27 15:22:30 -07:00
Aakash Sudhanwa
035f4995bb Merge branch 'master' into master 2021-08-27 15:08:41 -07:00
Roopa Malavally
f63e3f9ce1 Add files via upload 2021-08-27 15:02:49 -07:00
Roopa Malavally
4e56ed7dc3 Update README.md 2021-08-13 11:49:38 -07:00
Roopa Malavally
2faf5b6ab7 Update README.md 2021-08-13 11:48:18 -07:00
Roopa Malavally
e69b7e6f71 Delete OSKernel.PNG 2021-08-13 11:48:00 -07:00
Roopa Malavally
d53ffd1c89 Add files via upload 2021-08-13 11:47:48 -07:00
Roopa Malavally
e177599de1 Add files via upload 2021-08-09 12:55:19 -07:00
Roopa Malavally
9fc1ba3970 Add files via upload 2021-08-09 12:47:17 -07:00
Nick Curtis
520764faa3 Fix missing links in rocprof docs (#1550) 2021-08-07 08:42:25 -07:00
Roopa Malavally
7d0b53c87f Add files via upload 2021-08-03 10:53:16 -07:00
Roopa Malavally
c3a8ecd0c5 Delete AMD_Compiler_Reference_Guide_v4.3.pdf 2021-08-03 10:49:28 -07:00
Roopa Malavally
21cf37b2df Add files via upload 2021-08-02 21:37:19 -07:00
Roopa Malavally
f4419a3d1c Delete AMD_HIP_Programming_Guide_v4.3.pdf 2021-08-02 21:37:00 -07:00
zhozha
5ffdcf84ab Update to ROCm 4.3 manifest 2021-08-02 17:33:25 -07:00
Roopa Malavally
085295daea Update README.md 2021-08-02 16:51:39 -07:00
Roopa Malavally
cf5cec2580 ROCm v4.3 Release Notes (#1540)
* Delete AMD HIP Programming Guide_v4.2.pdf

* Delete AMD_HIP_API_Guide_4.2.pdf

* Delete AMD_ROCm_DataCenter_Tool_User_Guide_v4.2.pdf

* Delete AMD_ROCm_Release_Notes_v4.2.pdf

* Delete HIP_Supported_CUDA_API_Reference_Guide_v4.2.pdf

* Delete ROCm_Data_Center_Tool_API_Guide_v4.2.pdf

* Delete ROCm_Debugger_API_Guide_v4.2.pdf

* Delete ROCm_Debugger_User_Guide_v4.2.pdf

* Delete ROCm_SMI_Manual_4.2.pdf

* Update README.md

* Update README.md

* Delete CG1.PNG

* Delete CG2.PNG

* Delete CG3.PNG

* Delete CGMain.PNG

* Delete CLI1.PNG

* Delete CLI2.PNG

* Delete SMI.PNG

* Delete keyfeatures.PNG

* Delete latestGPU.PNG

* Delete rocsolverAPI.PNG

* Create test.rst

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md
2021-08-02 16:39:54 -07:00
Roopa Malavally
e7a93ae3f5 Add files via upload 2021-08-01 18:53:14 -07:00
Roopa Malavally
e3b7d2f39d Delete AMD_ROCDebugger_API.pdf.pdf 2021-08-01 18:52:58 -07:00
Roopa Malavally
0c4565d913 Delete AMD_ROCDebugger_User_Guide.pdf.pdf 2021-08-01 18:52:30 -07:00
Roopa Malavally
313a589132 Add files via upload 2021-08-01 18:52:03 -07:00
Roopa Malavally
1caf5514e8 Add files via upload 2021-08-01 18:33:33 -07:00
Roopa Malavally
d029ad24cf Add files via upload 2021-08-01 18:09:17 -07:00
Roopa Malavally
ca6638d917 Add files via upload 2021-08-01 17:42:39 -07:00
Roopa Malavally
5cba920022 Add files via upload 2021-08-01 16:21:37 -07:00
Roopa Malavally
cefc8ef1d7 Add files via upload 2021-08-01 16:17:54 -07:00
Roopa Malavally
b71c5705a2 Delete ROCm_SMI_Manual_4.2.pdf 2021-08-01 16:13:32 -07:00
Roopa Malavally
977a1d14cd Delete ROCm_Debugger_User_Guide_v4.2.pdf 2021-08-01 16:13:17 -07:00
Roopa Malavally
3ab60d1326 Delete ROCm_Debugger_API_Guide_v4.2.pdf 2021-08-01 16:13:04 -07:00
Roopa Malavally
4b5b13294e Delete ROCm_Data_Center_Tool_API_Guide_v4.2.pdf 2021-08-01 16:12:50 -07:00
Roopa Malavally
ce66b14d9e Delete HIP_Supported_CUDA_API_Reference_Guide_v4.2.pdf 2021-08-01 16:12:32 -07:00
Roopa Malavally
01f63f546f Delete AMD_ROCm_Release_Notes_v4.2.pdf 2021-08-01 16:12:20 -07:00
Roopa Malavally
72eab2779e Delete AMD_ROCm_DataCenter_Tool_User_Guide_v4.2.pdf 2021-08-01 16:12:05 -07:00
Roopa Malavally
8a366db3d7 Delete AMD_HIP_API_Guide_4.2.pdf 2021-08-01 16:11:50 -07:00
Roopa Malavally
8267a84345 Delete AMD HIP Programming Guide_v4.2.pdf 2021-08-01 16:11:30 -07:00
zhang2amd
f7b3a38d49 Merge pull request #1470 from RadeonOpenCompute/roc-4.2.x
4.2 : Manifest Files
2021-05-11 14:58:43 -07:00
Lad, Aditya
12e3bb376b 4.2 : Manifest Files 2021-05-11 14:37:52 -07:00
Roopa Malavally
a44e82f263 ROCm v4.2 Release Notes (#1469)
* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md
2021-05-11 14:15:34 -07:00
Roopa Malavally
9af988ffc8 Add files via upload 2021-05-11 14:09:17 -07:00
Roopa Malavally
5fed386cf1 Delete AMD_ROCm_Release_Notes_v4.1.pdf 2021-05-11 14:08:41 -07:00
Roopa Malavally
d729428302 Add files via upload 2021-05-10 17:47:35 -07:00
Roopa Malavally
8611c5f450 Delete ROCm_SMI_API_GUIDE_v4.1.pdf 2021-05-10 17:47:20 -07:00
Roopa Malavally
ae0b56d029 Delete AMD_HIP_Programming_Guide_v4.1.pdf 2021-05-10 17:28:58 -07:00
Roopa Malavally
3862c69b09 Add files via upload 2021-05-10 16:30:37 -07:00
Roopa Malavally
be34f32307 Add files via upload 2021-05-10 15:18:46 -07:00
Roopa Malavally
08c9cce749 Add files via upload 2021-05-10 15:18:03 -07:00
Roopa Malavally
a83a7c9206 Delete Debugging with ROCGDB User Guide v4.1.pdf 2021-05-10 15:16:45 -07:00
Roopa Malavally
71faa9c81f Delete AMD-Debugger API Guide v4.1.pdf 2021-05-10 15:16:35 -07:00
Roopa Malavally
6b021edb23 Add files via upload 2021-05-10 13:37:23 -07:00
Roopa Malavally
3936d236e6 Delete AMD_ROCm_DataCenter_Tool_User_Guide_v4.1.pdf 2021-05-10 13:37:12 -07:00
Roopa Malavally
dbcb26756d Add files via upload 2021-05-10 13:13:55 -07:00
Roopa Malavally
96de448de6 Delete HIP_Supported_CUDA_API_Reference_Guide_v4.1.pdf 2021-05-10 13:13:39 -07:00
Roopa Malavally
ee0bc562e6 Add files via upload 2021-05-10 12:01:17 -07:00
Roopa Malavally
376b8673b7 Delete ROCm_Data_Center_Tool_API_Manual_4.1.pdf 2021-05-10 12:00:11 -07:00
Roopa Malavally
e9147a9103 Add files via upload 2021-05-10 11:58:50 -07:00
Roopa Malavally
fab1a697f0 Delete AMD_HIP_API_Guide_4.2.pdf.pdf 2021-05-10 11:58:28 -07:00
Roopa Malavally
a369e642b8 Delete AMD_HIP_API_Guide_v4.1.pdf 2021-05-10 11:58:16 -07:00
Roopa Malavally
9101972654 Add files via upload 2021-05-10 11:57:52 -07:00
Roopa Malavally
f3ba8df53d Update README.md 2021-04-21 08:28:44 -07:00
Roopa Malavally
ba7a87a2dc Update README.md 2021-04-19 13:43:39 -07:00
zhang2amd
df6d746d50 Merge pull request #1443 from RadeonOpenCompute/roc-4.1.1
ROCm 4.1.1 default.xml
2021-04-08 10:06:17 -07:00
Lad, Aditya
2b2bab5bf3 ROCm 4.1.1 default.xml 2021-04-08 09:59:11 -07:00
Roopa Malavally
5ec9b12f99 Update README.md 2021-04-08 09:27:07 -07:00
Roopa Malavally
803148affd Update README.md 2021-04-08 09:21:27 -07:00
Roopa Malavally
9275fb6298 Update README.md 2021-04-08 09:19:52 -07:00
Roopa Malavally
b6ae3f145e Update README.md 2021-04-07 11:06:04 -07:00
Roopa Malavally
f80eefc965 Update README.md 2021-04-07 11:04:51 -07:00
Roopa Malavally
c5d91843a7 Update README.md 2021-04-07 11:03:31 -07:00
Roopa Malavally
733a9c097c Update README.md 2021-04-07 07:15:49 -07:00
Roopa Malavally
ff2b3f8a23 Add files via upload 2021-03-26 12:14:59 -07:00
Roopa Malavally
5a4cf1cee1 Delete AMD_ROCm_Release_Notes_v4.1.docx 2021-03-26 12:14:46 -07:00
Roopa Malavally
dccf5ca356 Update README.md 2021-03-26 12:01:54 -07:00
Roopa Malavally
8b20bd56a6 Update README.md 2021-03-26 10:00:07 -07:00
zhang2amd
65cb10e5e8 Merge pull request #1427 from xuhuisheng/patch-1
add hipFFT to default.xml
2021-03-25 23:03:26 -07:00
Roopa Malavally
ac2625dd26 Delete AMD_ROCm_Release_Notes_v4.1.pdf 2021-03-25 15:55:22 -07:00
Roopa Malavally
3716310e93 Add files via upload 2021-03-25 15:55:04 -07:00
Roopa Malavally
2dee17f7d6 Add files via upload 2021-03-25 13:03:33 -07:00
Roopa Malavally
61e8b0d70e Delete AMD_ROCm_Release_Notes_v4.1.pdf 2021-03-25 13:03:20 -07:00
Roopa Malavally
8a3304a8d9 Update README.md 2021-03-25 11:45:08 -07:00
Roopa Malavally
55488a9424 Update README.md 2021-03-25 11:03:19 -07:00
Roopa Malavally
ff4a1d4059 Update README.md 2021-03-25 10:03:46 -07:00
Xu Huisheng
4b2d93fb7e add hipFFT to default.xml
There is hipFFT on <http://repo.radeon.com/rocm/apt/4.1/pool/main/h/hipfft/>.
Please add related repository in default.xml.
Thank you.
2021-03-25 19:41:05 +08:00
Roopa Malavally
061ccd21b8 Update README.md 2021-03-24 10:26:07 -07:00
Roopa Malavally
0ed1bd9f8e Add files via upload 2021-03-24 10:25:24 -07:00
Roopa Malavally
856c74de55 Update README.md 2021-03-24 07:59:03 -07:00
Roopa Malavally
12c6f60e45 Update README.md 2021-03-24 07:58:30 -07:00
Aditya Lad
897b1e8e2d Merge pull request #1422 from RadeonOpenCompute/roc-4.1.x
Roc 4.1.x
2021-03-23 17:59:19 -07:00
Lad, Aditya
382ea7553f Remove inaccessible repos 2021-03-23 17:56:10 -07:00
Aditya Lad
2014b47dcb Merge pull request #1420 from RadeonOpenCompute/master
Addition of ROCm release notes
2021-03-23 17:29:17 -07:00
zhang2amd
b9f9bafd9b Merge pull request #1419 from RadeonOpenCompute/roc-4.1.x
ROCm 4.1 Release
2021-03-23 17:17:00 -07:00
Lad, Aditya
ff15f420c6 ROCm 4.1 default.xml edit 2021-03-23 17:10:44 -07:00
Lad, Aditya
f51c9be952 Release ROCm 4.1 Readme.md and default.xml 2021-03-23 17:03:00 -07:00
Lad, Aditya
64e254dc99 Release ROCm 4.1 Readme.md and default.xml 2021-03-23 17:01:33 -07:00
Roopa Malavally
af7f921474 Add files via upload 2021-03-23 17:00:17 -07:00
Roopa Malavally
8b3377749f Add files via upload 2021-03-23 14:16:46 -07:00
Roopa Malavally
c3a3ce55d1 Delete gdb.pdf 2021-03-22 17:29:33 -07:00
Roopa Malavally
64c727449b Delete amd-dbgapi.pdf 2021-03-22 17:29:24 -07:00
Roopa Malavally
182dfc65cf Add files via upload 2021-03-22 16:43:36 -07:00
Roopa Malavally
d529d5c585 Delete AMD_ROCm_Release_Notes_v4.0.pdf 2021-03-22 16:29:11 -07:00
Roopa Malavally
cca6bc4921 Delete HIP_Programming_Guide_v4.0.pdf 2021-03-22 16:28:56 -07:00
Roopa Malavally
e3dbbb6bbf Add files via upload 2021-03-22 16:27:41 -07:00
Roopa Malavally
6e39c80762 Add files via upload 2021-03-22 16:17:38 -07:00
Roopa Malavally
f96f5df625 Add files via upload 2021-03-22 16:07:44 -07:00
Roopa Malavally
0639a312c8 Delete ROCm_Data_Center_Too_API_Manual_4.1.pdf 2021-03-22 16:07:03 -07:00
Roopa Malavally
a2878b1460 Add files via upload 2021-03-22 15:38:16 -07:00
Roopa Malavally
1daf261d25 Delete ROCm_SMI_API_Guide_v4.0.pdf 2021-03-22 15:37:54 -07:00
Roopa Malavally
5848bc3d7e Add files via upload 2021-03-22 15:37:15 -07:00
Roopa Malavally
d9692359ad Delete HIP-API_Guide_v4.0.pdf 2021-03-22 15:36:42 -07:00
Roopa Malavally
25110784cf Add files via upload 2021-03-22 14:53:33 -07:00
Roopa Malavally
9ff31d316f Update README.md 2021-03-10 07:53:11 -08:00
Roopa Malavally
b072119ad6 Update README.md 2021-03-09 09:03:05 -08:00
Roopa Malavally
095544032c Update README.md 2021-02-25 07:28:52 -08:00
Roopa Malavally
26a39a637a Update README.md 2021-02-25 07:24:46 -08:00
Roopa Malavally
6fb55e6f45 Update README.md 2021-02-24 13:16:33 -08:00
Lad, Aditya
290091946f ROCm 4.0.1 Manifest file 2021-01-25 15:11:55 -08:00
Roopa Malavally
2874a8ae6c Update README.md 2021-01-25 15:02:27 -08:00
Roopa Malavally
f62f2b24da Add files via upload 2021-01-20 18:10:40 -08:00
Roopa Malavally
790567e3bd Update README.md 2020-12-18 15:08:54 -08:00
Roopa Malavally
57d7a202d4 Update README.md 2020-12-18 15:08:24 -08:00
Aditya Lad
80d2aa739b Merge pull request #1343 from RadeonOpenCompute/roc-4.0.x
ROCm 4.0 Release
2020-12-18 14:30:27 -08:00
Roopa Malavally
b18851f804 Update README.md 2020-12-18 13:12:20 -08:00
Roopa Malavally
0f0dbf0c92 Update README.md 2020-12-18 13:11:59 -08:00
Lad, Aditya
224a45379f ROCm 4.0 Release 2020-12-18 12:53:33 -08:00
Roopa Malavally
f521943747 Update README.md 2020-12-18 12:52:04 -08:00
Roopa Malavally
2b7f806b10 AMD ROCm Release Notes v4.0 (#1342)
* Update README.md

* Update README.md

* Add files via upload

* Delete AMD_ROCm_Release_Notes_v3.10.pdf

* Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf

* Delete ROCm_Data_Center_API_Guide.pdf

* Delete ROCm_SMI_API_Guide_v3.10.pdf

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md
2020-12-18 12:46:40 -08:00
Roopa Malavally
cd55ef67c9 Add files via upload 2020-12-18 12:32:43 -08:00
Roopa Malavally
9320669eee Delete AMD_ROCm_Release_Notes_v3.10.pdf 2020-12-18 08:19:51 -08:00
Roopa Malavally
c1211c66e3 Delete ROCm_SMI_API_Guide_v3.10.pdf 2020-12-18 08:19:36 -08:00
Roopa Malavally
c8fcff6488 Delete ROCm_Data_Center_API_Guide.pdf 2020-12-18 08:19:18 -08:00
Roopa Malavally
7118076ab4 Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf 2020-12-18 08:18:58 -08:00
Roopa Malavally
ec5523395a Add files via upload 2020-12-17 21:00:59 -08:00
Roopa Malavally
41d8f6a235 Add files via upload 2020-12-17 14:00:59 -08:00
Roopa Malavally
c69eef858a Update README.md 2020-12-10 13:38:07 -08:00
Aditya Lad
5b902ca38c Merge pull request #1316 from RadeonOpenCompute/roc-3.10.x
add rdc and half
2020-12-02 16:11:11 -08:00
Aditya Lad
68c5c198df add rdc and half 2020-12-02 16:07:15 -08:00
Aditya Lad
761ed4e70f Merge pull request #1314 from RadeonOpenCompute/roc-3.10.x
3.10 : Manifest Files
2020-12-01 16:31:55 -08:00
Lad, Aditya
8d5a160f0a 3.10 : Manifest Files 2020-12-01 16:24:12 -08:00
Roopa Malavally
f61c2ad155 Add files via upload 2020-12-01 15:45:33 -08:00
Roopa Malavally
3e2e30cc9a Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf 2020-12-01 15:44:56 -08:00
Roopa Malavally
a1f3b4e6b8 Update README.md 2020-12-01 15:08:53 -08:00
Roopa Malavally
7a3a012e6a Update README.md 2020-11-30 15:45:42 -08:00
Roopa Malavally
5b6ab31db3 Update README.md 2020-11-30 14:12:01 -08:00
Roopa Malavally
acabe2c532 Update README.md 2020-11-30 14:10:06 -08:00
Roopa Malavally
39d8bcd504 Release notes for v3.10 (#1312)
* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Delete matrix.png

* Delete ROCMCLI3.PNG

* Delete ROCMCLI2.PNG

* Delete ROCMCLI1.PNG

* Delete GEMM2.PNG

* Add files via upload

* Delete ROCm_SMI_Manual_v3.9.pdf

* Delete AMD_ROCm_Release_Notes_v3.9.pdf

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md
2020-11-30 14:07:52 -08:00
Roopa Malavally
af6d1e9b26 Add files via upload 2020-11-30 14:01:36 -08:00
Roopa Malavally
1fa1d4a935 Add files via upload 2020-11-30 09:53:49 -08:00
Roopa Malavally
03d93c1948 Delete AMD_ROCm_Release_Notes_v3.9.pdf 2020-11-30 08:55:35 -08:00
Roopa Malavally
93984b0956 Add files via upload 2020-11-30 08:54:52 -08:00
Roopa Malavally
6ccb1cfc0f Add files via upload 2020-11-30 07:29:29 -08:00
Roopa Malavally
f054f82173 Delete ROCm_SMI_Manual_v3.9.pdf 2020-11-30 07:28:11 -08:00
Xu Huisheng
bb6756b58d remove dumplicated remote=roc-github (#1248) 2020-11-18 08:19:23 -08:00
Roopa Malavally
d957b8a17c Update README.md 2020-11-12 13:47:48 -08:00
Roopa Malavally
37ece61861 Update README.md 2020-11-11 14:16:48 -08:00
Roopa Malavally
434023f31b Update README.md 2020-11-03 07:45:53 -08:00
Aditya Lad
a555260687 Merge pull request #1268 from RadeonOpenCompute/roc-3.9.x
Roc 3.9.x
2020-10-28 17:39:17 -07:00
Lad, Aditya
bf89c6bbf1 3.9 documentation 2020-10-28 15:32:49 -07:00
Lad, Aditya
bd4b772255 ROCm 3.9 default.xml 2020-10-28 15:22:02 -07:00
Lad, Aditya
e99027c39c ROCm 3.9 : Manifest files 2020-10-28 15:14:41 -07:00
Roopa Malavally
93c69afb5b Add files via upload 2020-10-28 14:54:54 -07:00
Roopa Malavally
bc2ce5c35b Delete staticlinkinglib.PNG 2020-10-28 14:52:02 -07:00
Roopa Malavally
bf633aec6b Delete forweb.PNG 2020-10-28 14:51:49 -07:00
Roopa Malavally
8608a9a1c9 Delete RDCComponentsrevised.png 2020-10-28 14:51:33 -07:00
Roopa Malavally
76afb05b6c Delete AMD_ROCm_DataCenter_Tool_User_Guide.pdf 2020-10-28 14:51:19 -07:00
Roopa Malavally
8bc67a21ea Update README.md 2020-10-19 20:23:07 -07:00
Roopa Malavally
1ce148edb1 Update README.md 2020-10-19 20:21:08 -07:00
Roopa Malavally
cc6147c25b Update README.md 2020-10-19 20:20:20 -07:00
Roopa Malavally
aadd9e68e1 Update README.md 2020-10-19 20:17:34 -07:00
Roopa Malavally
dce5aee2dc Add files via upload 2020-10-19 19:34:27 -07:00
Aditya Lad
0bcae510a3 Merge pull request #1244 from RadeonOpenCompute/roc-3.8.x
Remove MiGraphX from 3.8
2020-09-25 10:06:57 -07:00
Roopa Malavally
506cdcf6db Update README.md 2020-09-25 08:06:49 -07:00
Roopa Malavally
a919ba64c9 Update README.md 2020-09-25 08:00:10 -07:00
Roopa Malavally
fae25ccf9b Update README.md 2020-09-22 16:52:31 -07:00
Aakash Sudhanwa
67bd7501c1 Update README.md 2019-12-18 14:10:38 -08:00
Aakash Sudhanwa
d62f1c4247 Merge pull request #12 from RadeonOpenCompute/master
Rebase
2019-12-18 14:09:40 -08:00
Aakash Sudhanwa
c3d5bc6406 Rename Release nodes pdf 2019-11-25 20:54:25 -08:00
Aakash Sudhanwa
db45731729 Merge pull request #11 from RadeonOpenCompute/master
ROCm Release 2.10 (#947)
2019-11-25 20:12:36 -08:00
Aakash Sudhanwa
34552e95e0 Release Notes 2019-11-25 19:23:24 -08:00
Aakash Sudhanwa
8d0c516c5c Merge pull request #10 from RadeonOpenCompute/master
Update to 2.10
2019-11-25 19:20:50 -08:00
Aakash Sudhanwa
5cba919767 default.xml: ROCm Rel 2.10 2019-11-25 14:38:06 -08:00
Aakash Sudhanwa
bb0022e972 Merge pull request #9 from RadeonOpenCompute/master
Updating to latest
2019-11-25 13:04:27 -08:00
22 changed files with 690 additions and 127 deletions

Binary file changes (content not shown), including the new files AMD_HIP_API_Guide_v4.3.pdf, AMD_RDC_API_Guide_v4.3.pdf, AMD_ROCDebugger_API.pdf, and AMD_ROCm_SMI_Guide_v4.3.pdf, plus several other binary and image files.

README.md (759 changed lines)

@@ -1,25 +1,91 @@
# AMD ROCm Release Notes v3.8.0
# AMD ROCm™ v4.3.1 Point Release Notes
This document describes the features, fixed issues, and information about downloading and installing the AMD ROCm™ software.
This page describes the features, fixed issues, and information about downloading and installing the ROCm software.
It also covers known issues in this release.
## List of Supported Operating Systems
The AMD ROCm platform supports the following operating systems:
![Screenshot](https://github.com/RadeonOpenCompute/ROCm/blob/master/images/SuppEnv.PNG)
## What's New in This Release
The ROCm v4.3.1 release consists of the following enhancements:
### Support for RHEL V8.4
This release extends support for RHEL v8.4.
### Support for SLES V15 Service Pack 3
This release extends support for SLES v15 SP3.
### Pass Manager Update
In the AMD ROCm 4.3.1 release, the ROCm compiler uses the legacy pass manager, by default, to provide a better performance experience with some workloads.
Previously, in ROCm v4.3, the default choice for the ROCm compiler was the new pass manager.
For more information about legacy and new pass managers, see http://llvm.org.
## Known Issues in This Release
### General Userspace and Application Freeze on MI25
For some workloads on MI25, general user space and application freeze are observed, and the GPU resets intermittently. Note, the freeze may take hours to reproduce.
This issue is under active investigation, and no workarounds are available currently.
### hipRTC - File Not Found Error
hipRTC may fail, and users may encounter the following error:
```
<built-in>:1:10: fatal error: '__clang_hip_runtime_wrapper.h' file not found
#include "__clang_hip_runtime_wrapper.h"
```
#### Suggested Workarounds
* Set LLVM_PATH in the environment to <path to ROCm llvm>/llvm. Note, if ROCm is installed at the default location, then LLVM_PATH must be set to /opt/rocm/llvm.
* Add “-I <path to ROCm>/llvm/lib/clang/13.0.0/include/” to compiler options in the call to hiprtcCompileProgram(). Note, this workaround requires the following changes in the code:
```
// set NUM_OPTIONS to one more than the number of options that was previously required
const char* options[NUM_OPTIONS];
// fill other options[] here
std::string sarg = "-I/opt/rocm/llvm/lib/clang/13.0.0/include/";
options[NUM_OPTIONS - 1] = sarg.c_str();
hiprtcResult compileResult{hiprtcCompileProgram(prog, NUM_OPTIONS, options)};
```
# AMD ROCm™ v4.3 Release Notes
This document describes the features, fixed issues, and information about downloading and installing the AMD ROCm™ software. It also covers known issues and deprecations in this release.
- [Supported Operating Systems and Documentation Updates](#Supported-Operating-Systems-and-Documentation-Updates)
* [Supported Operating Systems](#Supported-Operating-Systems)
* [ROCm Installation Updates](#ROCm-Installation-Updates)
* [AMD ROCm Documentation Updates](#AMD-ROCm-Documentation-Updates)
- [What's New in This Release](#Whats-New-in-This-Release)
* [Hipfort-Interface for GPU Kernel Libraries](#Hipfort-Interface-for-GPU-Kernel-Libraries)
* [HIP Enhancements](#HIP-Enhancements)
* [ROCm Data Center Tool](#ROCm-Data-Center-Tool)
* [Error-Correcting Code Fields in ROCm Data Center Tool](#Error-Correcting-Code-Fields-in-ROCm-Data-Center-Tool)
* [Static Linking Libraries](#Static-Linking-Libraries)
- [Fixed Defects](#Fixed-Defects)
* [ROCm Math and Communication Libraries](#ROCm-Math-and-Communication-Libraries)
* [ROCProfiler Enhancements](#ROCProfiler-Enhancements)
- [Known Issues](#Known-Issues)
- [Known Issues in This Release](#Known-Issues-in-This-Release)
- [Deploying ROCm](#Deploying-ROCm)
- [Hardware and Software Support](#Hardware-and-Software-Support)
- [Machine Learning and High Performance Computing Software Stack for AMD GPU](#Machine-Learning-and-High-Performance-Computing-Software-Stack-for-AMD-GPU)
@@ -28,34 +94,53 @@ It also covers known issues in this release.
# Supported Operating Systems
## Support for Vega 7nm Workstation
## ROCm Installation Updates
This release extends support to the Vega 7nm Workstation (Vega20 GL-XE) version.
## List of Supported Operating Systems
### Supported Operating Systems
The AMD ROCm platform is designed to support the following operating systems:
* Ubuntu 20.04 (5.4 and 5.6-oem) and 18.04.5 (Kernel 5.4)
* CentOS 7.8 & RHEL 7.8 (Kernel 3.10.0-1127) (Using devtoolset-7 runtime support)
* CentOS 8.2 & RHEL 8.2 (Kernel 4.18.0 ) (devtoolset is not required)
* SLES 15 SP1
![Screenshot](https://github.com/RadeonOpenCompute/ROCm/blob/master/images/OSKernelupdated.PNG)
## Fresh Installation of AMD ROCm v3.8 Recommended
A fresh and clean installation of AMD ROCm v3.8 is recommended. An upgrade from previous releases to AMD ROCm v3.8 is not supported.
For more information, refer to the AMD ROCm Installation Guide at:
### Fresh Installation of AMD ROCm v4.3 Recommended
Complete uninstallation of previous ROCm versions is required before installing a new version of ROCm. **An upgrade from previous releases to AMD ROCm v4.3 is not supported**. For more information, refer to the AMD ROCm Installation Guide at
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
**Note**: AMD ROCm release v3.3 or prior releases are not fully compatible with AMD ROCm v3.5 and higher versions. You must perform a fresh ROCm installation if you want to upgrade from AMD ROCm v3.3 or older to 3.5 or higher versions and vice-versa.
**Note**: *render group* is required only for Ubuntu v20.04. For all other ROCm supported operating systems, continue to use *video group*.
**Note**: *render* group is required only for Ubuntu v20.04. For all other ROCm supported operating systems, continue to use video group.
* For ROCm v3.5 and releases thereafter, the *clinfo* path is changed to - */opt/rocm/opencl/bin/clinfo*.
* For ROCm v3.5 and releases thereafter, the clinfo path is changed to /opt/rocm/opencl/bin/clinfo.
* For ROCm v3.3 and older releases, the *clinfo* path remains unchanged - */opt/rocm/opencl/bin/x86_64/clinfo*.
* For ROCm v3.3 and older releases, the clinfo path remains /opt/rocm/opencl/bin/x86_64/clinfo.
## ROCm Multi-Version Installation Update
With the AMD ROCm v4.3 release, the following ROCm multi-version installation changes apply:
The meta packages rocm-dkms<version> are now deprecated for multi-version ROCm installs. For example, rocm-dkms3.7.0, rocm-dkms3.8.0.
* Multi-version installation of ROCm should be performed by installing rocm-dev<version> using each of the desired ROCm versions. For example, rocm-dev3.7.0, rocm-dev3.8.0, rocm-dev3.9.0.
* The rock-dkms loadable kernel modules should be installed using a single rock-dkms package.
* ROCm v3.9 and above will not set any ldconfig entries for ROCm libraries for multi-version installation. Users must set LD_LIBRARY_PATH to load the ROCm library version of choice.
**NOTE**: The single version installation of the ROCm stack remains the same. The rocm-dkms package can be used for single version installs and is not deprecated at this time.
## Support for Environment Modules
Environment modules are now supported. This enhancement in the ROCm v4.3 release enables users to switch between ROCm v4.2 and ROCm v4.3 easily and efficiently.
For more information about installing environment modules, refer to
https://modules.readthedocs.io/en/latest/
# AMD ROCm Documentation Updates
@@ -64,27 +149,71 @@ https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
The AMD ROCm Installation Guide in this release includes:
* Updated Supported Environments
* HIP Installation Instructions
* Tensorflow ROCm Port: Basic Installations on RHEL v8.2
* Supported Environments
* Installation Instructions
* HIP Installation Instructions
For more information, refer to the ROCm documentation website at:
https://rocmdocs.amd.com/en/latest/
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
## AMD ROCm - HIP Documentation Updates
* HIP Repository Information
* HIP Programming Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.3.pdf
* HIP API Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_API_Guide_v4.3.pdf
* HIP-Supported CUDA API Reference Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Supported_CUDA_API_Reference_Guide_v4.3.pdf
For more information, see
* **NEW** - AMD ROCm Compiler Reference Guide v4.3
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_Compiler_Reference_Guide_v4.3.pdf
* HIP FAQ
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-FAQ.html#hip-faq
https://rocmdocs.amd.com/en/latest/Programming_Guides/Programming-Guides.html#hip-repository-information
## ROCm Data Center Tool User Guide
## ROCm Data Center User and API Guide
* Error-Correction Codes Field and Output Documentation
* ROCm Data Center Tool User Guide
For more information, refer to the AMD ROCm Data Center User Guide at
- Prometheus (Grafana) Integration with Automatic Node Detection
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.3.pdf
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide.pdf
* ROCm Data Center Tool API Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_RDC_API_Guide_v4.3.pdf
## ROCm SMI API Documentation Updates
* ROCm SMI API Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_SMI_Guide_v4.3.pdf
## ROC Debugger User and API Guide
* ROC Debugger User Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCDebugger_User_Guide.pdf
* Debugger API Guide
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCDebugger_API.pdf
## General AMD ROCm Documentation Links
@@ -100,129 +229,557 @@ Access the following links for more information:
* For AMD ROCm binary structure, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#build-amd-rocm
https://rocmdocs.amd.com/en/latest/Installation_Guide/Software-Stack-for-AMD-GPU.html
* For AMD ROCm Release History, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#amd-rocm-version-history
https://rocmdocs.amd.com/en/latest/Current_Release_Notes/ROCm-Version-History.html
# What's New in This Release
## Hipfort-Interface for GPU Kernel Libraries
## HIP Enhancements
Hipfort is an interface library for accessing GPU Kernels. It provides support to the AMD ROCm architecture from within the Fortran programming language. Currently, the gfortran and HIP-Clang compilers support hipfort. Note, the gfortran compiler belongs to the GNU Compiler Collection (GCC). While hipfc wrapper calls hipcc for the non-fortran kernel source, gfortran is used for FORTRAN applications that call GPU kernels.
### HIP Versioning Update
The hipfort interface library is meant for Fortran developers with a focus on gfortran users.
The HIP version definition is updated from the ROCm v4.2 release as follows:
```
HIP_VERSION = HIP_VERSION_MAJOR * 10000000 + HIP_VERSION_MINOR * 100000 + HIP_VERSION_PATCH
```
The HIP version can be queried from a HIP API call
```
hipRuntimeGetVersion(&runtimeVersion);
```
**Note**: The version returned will be greater than the version in previous ROCm releases.
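As an illustration, the packed value returned by hipRuntimeGetVersion can be decoded with the formula above; a minimal sketch (the printed wording is illustrative only):
```
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int v = 0;
    if (hipRuntimeGetVersion(&v) != hipSuccess) return 1;

    // Decode using the v4.3 packing shown above:
    // HIP_VERSION = MAJOR * 10000000 + MINOR * 100000 + PATCH
    int major = v / 10000000;
    int minor = (v % 10000000) / 100000;
    int patch = v % 100000;
    std::printf("HIP runtime version %d.%d, patch %d (raw %d)\n", major, minor, patch, v);
    return 0;
}
```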
### Support for Managed Memory Allocation
HIP now supports and automatically manages Heterogeneous Memory Management (HMM) allocation. The HIP application performs a capability check before making the managed memory API call hipMallocManaged.
**Note**: The _managed_ keyword is unsupported currently.
```
int managed_memory = 0;
HIPCHECK(hipDeviceGetAttribute(&managed_memory,
hipDeviceAttributeManagedMemory,p_gpuDevice));
if (!managed_memory ) {
printf ("info: managed memory access not supported on the device %d\n Skipped\n", p_gpuDevice);
}
else {
HIPCHECK(hipSetDevice(p_gpuDevice));
HIPCHECK(hipMallocManaged(&Hmm, N * sizeof(T)));
. . .
}
```
### Kernel Enqueue Serialization
Developers can control kernel command serialization from the host using the environment variable AMD_SERIALIZE_KERNEL:
* AMD_SERIALIZE_KERNEL = 1, Wait for completion before enqueue,
* AMD_SERIALIZE_KERNEL = 2, Wait for completion after enqueue,
* AMD_SERIALIZE_KERNEL = 3, Both.
This environment variable setting enables HIP runtime to wait for GPU idle before/after any GPU command.
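A minimal sketch of enabling this from inside the application, assuming the variable is read when the HIP runtime first initializes; exporting AMD_SERIALIZE_KERNEL in the shell before launching the application achieves the same effect:
```
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void work() {}

int main() {
    // Assumption: the HIP runtime reads AMD_SERIALIZE_KERNEL at initialization,
    // so it must be set before the first HIP API call.
    setenv("AMD_SERIALIZE_KERNEL", "3", 1);  // 3 = wait for completion before and after enqueue

    work<<<1, 64>>>();
    std::printf("launch status: %s\n", hipGetErrorString(hipGetLastError()));
    return hipDeviceSynchronize() == hipSuccess ? 0 : 1;
}
```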
### NUMA-aware Host Memory Allocation
The Non-Uniform Memory Architecture (NUMA) policy determines how memory is allocated and selects a CPU closest to each GPU.
NUMA also measures the distance between the GPU and CPU devices. By default, each GPU selects a NUMA CPU node that has the least NUMA distance between them; the host memory is automatically allocated closest to the memory pool of the NUMA node of the current GPU device.
Note, using the *hipSetDevice* API with a different GPU provides access to the host allocation. However, it may have a longer NUMA distance.
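A minimal sketch of the behavior described above: selecting a device first so that the subsequent pinned host allocation lands on the NUMA node closest to it (buffer size and device index are arbitrary):
```
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int deviceCount = 0;
    if (hipGetDeviceCount(&deviceCount) != hipSuccess || deviceCount == 0) return 1;

    // Select the GPU first; host memory allocated afterwards is placed
    // closest to this device's NUMA node by default.
    if (hipSetDevice(0) != hipSuccess) return 1;

    float* hostBuf = nullptr;
    if (hipHostMalloc(reinterpret_cast<void**>(&hostBuf), 1 << 20,
                      hipHostMallocDefault) != hipSuccess) return 1;

    // Other GPUs selected later via hipSetDevice can still access hostBuf,
    // but possibly at a larger NUMA distance.
    std::printf("pinned host buffer allocated near device 0: %p\n",
                static_cast<void*>(hostBuf));
    (void)hipHostFree(hostBuf);
    return 0;
}
```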
### New Atomic System Scope Atomic Operations
HIP now provides new APIs with _system as a suffix to support system scope atomic operations. For example, atomicAnd atomic is dedicated to the GPU device, and atomicAnd_system allows developers to extend the atomic operation to system scope from the GPU device to other CPUs and GPU devices in the system.
For more information, refer to the HIP Programming Guide at
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_HIP_Programming_Guide_v4.3.pdf
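A minimal sketch contrasting the device-scoped and system-scoped forms, using atomicAnd_system on a host-visible flag word (the flag layout is arbitrary):
```
#include <hip/hip_runtime.h>
#include <cstdio>

// Clears the requested bits with system-wide visibility; atomicAnd (without
// the _system suffix) would scope the atomic to the current GPU only.
__global__ void clearBits(unsigned int* flags, unsigned int mask) {
    atomicAnd_system(flags, ~mask);
}

int main() {
    unsigned int* flags = nullptr;
    // Pinned host allocation that both the CPU and the GPU can access.
    if (hipHostMalloc(reinterpret_cast<void**>(&flags), sizeof(unsigned int),
                      hipHostMallocDefault) != hipSuccess) return 1;
    *flags = 0xFFu;

    clearBits<<<1, 1>>>(flags, 0x2u);
    (void)hipDeviceSynchronize();

    std::printf("flags = 0x%X\n", *flags);  // expected: 0xFD
    (void)hipHostFree(flags);
    return 0;
}
```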
### Indirect Function Call and C++ Virtual Functions
While the new release of the ROCm compiler supports indirect function calls and C++ virtual functions on a device, there are some known limitations and issues.
**Limitations**
* An address to a function is device specific. Note, a function address taken on the host cannot be used on a device, and a function address taken on a device cannot be used on the host. On a system with multiple devices, an address taken on one device cannot be used on a different device.
* C++ virtual functions only work on the device where the object was constructed.
* Indirect call to a device function with function scope shared memory allocation is not supported. For example, LDS.
* Indirect call to a device function defined in a source file different than the calling function/kernel is only supported when compiling the entire program with -fgpu-rdc.
**Known Issues in This Release**
* Programs containing kernels with different launch bounds may crash when making an indirect function call. This issue is due to the compiler miscalculating the register budget for the callee function.
* Programs may not work correctly when making an indirect call to a function that uses more resources (for example, scratch memory, shared memory, or registers) than the caller makes available.
* Compiling a program with objects with pure or deleted virtual functions on the device will result in a linker error. This issue is due to the missing implementation of some C++ runtime functions on the device.
* Constructing an object with virtual functions in private or shared memory may crash the program due to a compiler issue when generating code for the constructor.
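A minimal sketch of an indirect device call within the constraints listed above: the function address is taken on the device, and both callees live in the same source file as the kernel (callees defined in other files would require compiling with -fgpu-rdc):
```
#include <hip/hip_runtime.h>
#include <cstdio>

__device__ int addOne(int x)   { return x + 1; }
__device__ int timesTwo(int x) { return x * 2; }

using DeviceFn = int (*)(int);

__global__ void applyIndirect(int* out, int selector, int value) {
    // The address is taken on the device; a host-side address of a __device__
    // function would not be valid here (see the limitations above).
    DeviceFn fn = (selector == 0) ? addOne : timesTwo;
    *out = fn(value);  // indirect call
}

int main() {
    int* out = nullptr;
    if (hipMalloc(reinterpret_cast<void**>(&out), sizeof(int)) != hipSuccess) return 1;

    applyIndirect<<<1, 1>>>(out, 1, 21);
    int result = 0;
    (void)hipMemcpy(&result, out, sizeof(int), hipMemcpyDeviceToHost);
    std::printf("indirect call result: %d\n", result);  // expected: 42

    (void)hipFree(out);
    return 0;
}
```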
For information on HIPFort installation and examples, see
https://github.com/ROCmSoftwarePlatform/hipfort
## ROCm Data Center Tool
The ROCm™ Data Center Tool™ simplifies the administration and addresses key infrastructure challenges in AMD GPUs in cluster and datacenter environments. The important features of this tool are:
### Prometheus (Grafana) Integration with Automatic Node Detection
* GPU telemetry
The ROCm Data Center (RDC) tool enables you to use Consul to discover the rdc_prometheus service automatically. Consul is “a service mesh solution providing a full-featured control plane with service discovery, configuration, and segmentation functionality.” For more information, refer to their website at https://www.consul.io/docs/intro.
The ROCm Data Center Tool uses Consul for health checks of RDC's integration with the Prometheus plug-in (rdc_prometheus), and these checks provide information on its efficiency.
Previously, when a new compute node was added, users had to change prometheus_targets.json to use Consul manually. Now, with the Consul agent integration, a new compute node can be discovered automatically.
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide_v4.3.pdf
* GPU statistics for jobs
### Coarse Grain Utilization
This feature provides a counter that displays the coarse grain GPU usage information, as shown below.
Sample output
* Integration with third-party tools
```
$ rocm_smi.py --showuse
============================== % time GPU is busy =============================
GPU[0] : GPU use (%): 0
GPU[0] : GFX Activity: 3401
```
* Open source
### Add 64-bit Energy Accumulator In-band
This feature provides an average value of energy consumed over time in a free-flowing RAPL counter, a 64-bit Energy Accumulator.
Sample output
```
$ rocm_smi.py --showenergycounter
=============================== Consumed Energy ================================
GPU[0] : Energy counter: 2424868
GPU[0] : Accumulated Energy (uJ): 0.0
```
The ROCm Data Center Tool can be used in the standalone mode if all components are installed. The same set of features is also available in a library format that can be used by existing management tools.
### Support for Continuous Clock Values
ROCm SMI will support continuous clock values instead of the previous discrete levels. Moving forward, the updated sysfs file will consist of only MIN and MAX values, and the user can set the clock value in the given range.
Sample output:
![ScreenShot](https://github.com/Rmalavally/ROCm/blob/master/RDCComponentsrevised.png)
```
$ rocm_smi.py --setsrange 551 1270
Do you accept these terms? [y/N] y
============================= Set Valid sclk Range=======
GPU[0] : Successfully set sclk from 551(MHz) to 1270(MHz)
GPU[1] : Successfully set sclk from 551(MHz) to 1270(MHz)
=========================================================================
$ rocm_smi.py --showsclkrange
============================ Show Valid sclk Range======
GPU[0] : Valid sclk range: 551Mhz - 1270Mhz
GPU[1] : Valid sclk range: 551Mhz - 1270Mhz
```
### Memory Utilization Counters
This feature provides counters that display memory utilization information, as shown below.
Sample output
```
$ rocm_smi.py --showmemuse
========================== Current Memory Use ==============================
GPU[0] : GPU memory use (%): 0
GPU[0] : Memory Activity: 0
```
### Performance Determinism
ROCm SMI supports performance determinism as a unique mode of operation. Performance variations are minimal, as this enhancement allows users to control entry to and exit from the mode and to set a soft maximum (ceiling) for the GFX clock.
Sample output
```
$ rocm_smi.py --setperfdeterminism 650
cat pp_od_clk_voltage
GFXCLK:
0: 500Mhz
1: 650Mhz *
2: 1200Mhz
$ rocm_smi.py --resetperfdeterminism
```
**Note**: The idle clock will not take up higher clock values if no workload is running. After enabling determinism, users can run a GFX workload to set performance determinism to the desired clock value in the valid range.
* GFX clock could either be less than or equal to the max value set in this mode. GFX clock will be at the max clock set in this mode only when required by the running workload.
* VDDGFX will be higher by an offset (75mv or so based on PPTable) in the determinism mode.
### HBM Temperature Metric Per Stack
This feature will enable ROCm SMI to report all HBM temperature values as shown below.
Sample output
```
$ rocm_smi.py --showtemp
================================= Temperature =================================
GPU[0] : Temperature (Sensor edge) (C): 29.0
GPU[0] : Temperature (Sensor junction) (C): 36.0
GPU[0] : Temperature (Sensor memory) (C): 45.0
GPU[0] : Temperature (Sensor HBM 0) (C): 43.0
GPU[0] : Temperature (Sensor HBM 1) (C): 42.0
GPU[0] : Temperature (Sensor HBM 2) (C): 44.0
GPU[0] : Temperature (Sensor HBM 3) (C): 45.0
```
## ROCm Math and Communication Libraries
### rocBLAS
**Optimizations**
* Improved performance of non-batched and batched rocblas_Xgemv for gfx908 when m <= 15000 and n <= 15000
* Improved performance of non-batched and batched rocblas_sgemv and rocblas_dgemv for gfx906 when m <= 6000 and n <= 6000
* Improved the overall performance of non-batched and batched rocblas_cgemv for gfx906
* Improved the overall performance of rocblas_Xtrsv
For more information, refer to
https://rocblas.readthedocs.io/en/master/
Refer to the ROCm Data Center Tool™ User Guide for more details on the different modes of operation.
### rocRAND
NOTE: The ROCm Data Center User Guide is intended to provide an overview of ROCm Data Center Tool features and how system administrators and Data Center (or HPC) users can administer and configure AMD GPUs. The guide also provides an overview of its components and open source developer handbook.
**Enhancements**
* gfx90a support added
* gfx1030 support added
For installation information on different distributions, refer to the ROCm Data Center User Guide at
* gfx803 support re-enabled
https://github.com/RadeonOpenCompute/ROCm/blob/master/AMD_ROCm_DataCenter_Tool_User_Guide.pdf
**Fixed**
* Fixed memory leaks in Poisson tests.
* Fixed memory leaks that occurred when a generator was created but setting the seed/offset/dimensions threw an exception.
For more information, refer to
https://rocrand.readthedocs.io/en/latest/
### Error Correcting Code Fields in ROCm Data Center Tool
### rocSOLVER
The ROCm Data Center (RDC) tool is enhanced to provide counters to track correctable and uncorrectable errors. While a single bit per word error can be corrected, double bit per word errors cannot be corrected.
**Enhancements**
Linear solvers for general non-square systems:
* GELS now supports underdetermined and transposed cases
* Inverse of triangular matrices
* TRTRI (with batched and strided_batched versions)
* Out-of-place general matrix inversion
* GETRI_OUTOFPLACE (with batched and strided_batched versions)
* Argument names for the benchmark client now match argument names from the public API
**Fixed Issues**
* Known issues with Thin-SVD. The problem was identified in the test specification, not in the thin-SVD implementation or the rocBLAS gemm_batched routines.
The RDC tool now helps monitor and protect against undetected memory data corruption. If the system is using ECC-enabled memory, the ROCm Data Center tool can report the error counters to monitor the status of the memory.
* Benchmark client no longer crashes as a result of leading dimension or stride arguments not being provided on the command line.
![ScreenShot](https://github.com/Rmalavally/ROCm/blob/master/forweb.PNG)
**Optimizations**
* Improved general performance of matrix inversion (GETRI)
## Static Linking Libraries
For more information, refer to
The underlying libraries of AMD ROCm are dynamic and are called shared objects (.so) in Linux.
The AMD ROCm v3.8 release includes the capability to build static ROCm libraries and link to the applications statically. CMake target files enable linking an application statically to ROCm libraries and each component exports the required dependencies for linking. The static libraries are called Archives (.a) in Linux.
https://rocsolver.readthedocs.io/en/latest/
This release also comprises the requisite changes required for all the components to work in a static environment. The components have been successfully tested for basic functionalities like *rocminfo*/*rocm_bandwidth_test* and archives.
In the AMD ROCm v3.8 release, the following libraries support static linking:
### rocSPARSE
**Enhancements**
* (batched) tridiagonal solver with and without pivoting
* dense matrix sparse vector multiplication (gemvi)
* support for gfx90a
* sampled dense-dense matrix multiplication (sddmm)
**Improvements**
* client matrix download mechanism
* boost dependency in clients removed
![ScreenShot](https://github.com/Rmalavally/ROCm/blob/master/staticlinkinglib.PNG)
# Fixed Defects
The following defects are fixed in this release:
For more information, refer to
* GPU Kernel C++ Names Not Demangled
* MIGraphX Fails for fp16 Datatype
* Issue with Peer-to-Peer Transfers
* rocprof option --parallel-kernels Not Supported in this Release
https://rocsparse.readthedocs.io/en/latest/usermanual.html#rocsparse-gebsrmv
# Known Issues
## Undefined Reference Issue in Statically Linked Libraries
### hipBLAS
Libraries and applications statically linked using flags -rtlib=compiler-rt, such as rocBLAS, have an implicit dependency on gcc_s not captured in their CMAKE configuration.
**Enhancements**
* Added *hipblasStatusToString*
**Fixed**
* Added catch() blocks around API calls to prevent the leak of C++ exceptions
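A minimal usage sketch of the new hipblasStatusToString mentioned above, assuming the conventional hipBLAS handle-creation entry points:
```
#include <hipblas.h>
#include <cstdio>

int main() {
    hipblasHandle_t handle;
    hipblasStatus_t status = hipblasCreate(&handle);

    // Print a human-readable form of the returned status code.
    std::printf("hipblasCreate: %s\n", hipblasStatusToString(status));

    if (status == HIPBLAS_STATUS_SUCCESS) {
        (void)hipblasDestroy(handle);
    }
    return status == HIPBLAS_STATUS_SUCCESS ? 0 : 1;
}
```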
Client applications may require linking with an additional library -lgcc_s to resolve the undefined reference to symbol '_Unwind_Resume@@GCC_3.0'.
### rocFFT
## MIGraphX Pooling Operation Fails for Some Models
**Changes**
* Re-split device code into single-precision, double-precision, and miscellaneous kernels.
**Fixed Issues**
* double-precision planar->planar transpose.
* 3D transforms with unusual strides, for SBCC-optimized sizes.
* Improved buffer placement logic.
MIGraphX does not work for some models with pooling operations and the following error appears:
For more information, refer to
*test_gpu_ops_test FAILED*
https://rocfft.readthedocs.io/en/rocm-4.3.0/
This issue is under active investigation, and there is currently no known workaround.
### hipFFT
## MIVisionX Installation Error on CentOS/RHEL8.2 and SLES 15
**Fixed Issues**
* CMAKE updates
* Added callback API in hipfftXt.h header.
Installing ROCm on MIVisionX results in the following error on CentOS/RHEL8.2 and SLES 15:
*"Problem: nothing provides opencv needed"*
### rocALUTION
**Enhancements**
* Support for gfx90a target
* Support for gfx1030 target
**Improvements**
* Install script
For more information, refer to
### rocTHRUST
As a workaround, install opencv before installing MIVisionX.
**Enhancements**
* Updated to match upstream Thrust 1.11
* gfx90a support added
* gfx803 support re-enabled
### hipCUB
**Enhancements**
* Added DiscardOutputIterator to backend header
## ROCProfiler Enhancements
### Tracing Multiple MPI Ranks
When tracing multiple MPI ranks in ROCm v4.3, users must use the form:
```
mpirun ... <mpi args> ... rocprof ... <rocprof args> ... application ... <application args>
```
**NOTE**: This feature differs from ROCm v4.2 (and lower), which used "rocprof ... mpirun ... application".
This change was made to enable ROCProfiler to better handle process forking and launching via mpirun (and related) executables.
From a user perspective, this new execution mode requires:
1. Generation of trace data per MPI (or process) rank.
2. Use of a new ["merge_traces.sh" utility script](https://github.com/ROCm-Developer-Tools/rocprofiler/blob/rocm-4.3.x/bin/merge_traces.sh) to combine traces from multiple processes into a unified trace for profiling.
For example, to accomplish step #1, ROCm provides a simple bash wrapper that demonstrates how to generate a unique output directory per process:
```
$ cat wrapper.sh
#! /usr/bin/env bash
if [[ -n ${OMPI_COMM_WORLD_RANK+z} ]]; then
# Open MPI
export MPI_RANK=${OMPI_COMM_WORLD_RANK}
elif [[ -n ${MV2_COMM_WORLD_RANK+z} ]]; then
# MVAPICH2
export MPI_RANK=${MV2_COMM_WORLD_RANK}
fi
args="$*"
pid="$$"
outdir="rank_${pid}_${MPI_RANK}"
outfile="results_${pid}_${MPI_RANK}.csv"
eval "rocprof -d ${outdir} -o ${outdir}/${outfile} $*"
```
This script:
* Determines the global MPI rank (implemented here for Open MPI and MVAPICH2 only)
* Determines the process id of the MPI rank
* Generates a unique output directory name from the two values
To invoke this wrapper, use the following command:
```
mpirun <mpi args> ./wrapper.sh --hip-trace <application> <args>
```
This generates an output directory for each used MPI rank. For example,
```
$ ls -ld rank_* | awk {'print $5" "$9'}
4096 rank_513555_0
4096 rank_513556_1
```
Finally, these traces may be combined using the [merge traces script](https://github.com/ROCm-Developer-Tools/rocprofiler/blob/rocm-4.3.x/bin/merge_traces.sh). For example,
```
$ ./merge_traces.sh -h
Script for aggregating results from multiple rocprofiler out directries.
Full path: /opt/rocm/bin/merge_traces.sh
Usage:
merge_traces.sh -o <outputdir> [<inputdir>...]
```
Use the following input arguments to the merge_traces.sh script to control which traces are merged and where the resulting merged trace is saved.
* -o <*outputdir*> - output directory where the results are aggregated.
* <*inputdir*>... - space-separated list of rocprofiler directories. If not specified, CWD is used.
For example, if an output directory named "unified" is supplied to the `merge_traces.sh` script, the file 'unified/results.json' is generated, and it contains trace data from both MPI ranks.
**Known issue for ROCProfiler**
Collecting several counter collection passes (multiple "pmc:" lines in a counter input file) is not supported in a single run.
The workaround is to break the multi-line counter input file into multiple single-line counter input files and execute separate runs.
# Known Issues in This Release
The following are the known issues in this release.
## Upgrade to AMD ROCm v4.3 Not Supported
An upgrade from previous releases to AMD ROCm v4.3 is not supported. Complete uninstallation of previous ROCm versions is required before installing a new version of ROCm.
## _LAUNCH BOUNDS_ Ignored During Kernel Launch
The HIP runtime returns the hipErrorLaunchFailure error code when an application tries to launch a kernel with a block size larger than the launch bounds specified at compile time. If no launch bounds were specified at compile time, the default value of 1024 is assumed. Refer to the HIP trace for more information about the failing kernel. A sample error in the trace is shown below:
Snippet of the HIP trace
```
:3:devprogram.cpp :2504: 2227377746776 us: Using Code Object V4.
:3:hip_module.cpp :361 : 2227377768546 us: 7670 : [7f7c6eddd180] ihipModuleLaunchKernel ( 0x0x16fe080, 2048, 1, 1, 1024, 1, 1, 0, stream:<null>, 0x7ffded8ad260, char array:<null>, event:0, event:0, 0, 0 )
:1:hip_module.cpp :254 : 2227377768572 us: Launch params (1024, 1, 1) are larger than launch bounds (64) for kernel _Z8MyKerneliPd
:3:hip_platform.cpp :667 : 2227377768577 us: 7670 : [7f7c6eddd180] ihipLaunchKernel: Returned hipErrorLaunchFailure :
:3:hip_module.cpp :493 : 2227377768581 us: 7670 : [7f7c6eddd180] hipLaunchKernel: Returned hipErrorLaunchFailure :
```
There is no known workaround at this time.
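A minimal sketch that reproduces the behavior described above, with a compile-time bound of 64 threads per block and a deliberately oversized launch (kernel name and sizes are arbitrary):
```
#include <hip/hip_runtime.h>
#include <cstdio>

// The kernel is compiled with an upper bound of 64 threads per block.
__global__ void __launch_bounds__(64) myKernel(double* out) {
    out[threadIdx.x % 64] = threadIdx.x;
}

int main() {
    double* buf = nullptr;
    if (hipMalloc(reinterpret_cast<void**>(&buf), 64 * sizeof(double)) != hipSuccess) return 1;

    // 1024 threads per block exceeds the compile-time bound of 64, so the
    // runtime is expected to report hipErrorLaunchFailure (see the trace above).
    myKernel<<<1, 1024>>>(buf);
    std::printf("launch status: %s\n", hipGetErrorString(hipGetLastError()));

    (void)hipFree(buf);
    return 0;
}
```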
## PYCACHE Folder Exists After ROCm SMI Library Uninstallation
Users may observe that the /opt/rocm-x/bin/__pycache__ folder continues to exist even after the rocm_smi_lib uninstallation.
Workaround: Delete the /opt/rocm-x/bin/__pycache__ folder manually before uninstalling rocm_smi_lib.
# Deploying ROCm
AMD hosts both Debian and RPM repositories for the ROCm v3.8.x packages.
AMD hosts both Debian and RPM repositories for the ROCm packages.
For more information on ROCM installation on all platforms, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html
# Machine Learning and High Performance Computing Software Stack for AMD GPU
For an updated version of the software stack for AMD GPU, see
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#software-stack-for-amd-gpu
# Hardware and Software Support
ROCm is focused on using AMD GPUs to accelerate computational tasks such as machine learning, engineering workloads, and scientific computing.
In order to focus our development efforts on these domains of interest, ROCm supports a targeted set of hardware configurations which are detailed further in this section.
**Note:** The AMD ROCm™ open software platform is a compute stack for headless system deployments. GUI-based software applications are currently not supported.
#### Supported GPUs
Because the ROCm Platform has a focus on particular computational domains, we offer official support for a selection of AMD GPUs that are designed to offer good performance and price in these domains.
**Note:** The integrated GPUs of Ryzen are not officially supported targets for ROCm.
ROCm officially supports AMD GPUs that use following chips:
* GFX8 GPUs
* "Fiji" chips, such as on the AMD Radeon R9 Fury X and Radeon Instinct MI8
* "Polaris 10" chips, such as on the AMD Radeon RX 580 and Radeon Instinct MI6
* GFX9 GPUs
* "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
* "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII
* GFX9 GPUs
- "Vega 10" chips, such as on the AMD Radeon RX Vega 64 and Radeon Instinct MI25
- "Vega 7nm" chips, such as on the Radeon Instinct MI50, Radeon Instinct MI60 or AMD Radeon VII, Radeon Pro VII
* CDNA GPUs
- MI100 chips such as on the AMD Instinct™ MI100
ROCm is a collection of software ranging from drivers and runtimes to libraries and developer tools.
Some of this software may work with more GPUs than the "officially supported" list above, though AMD does not make any official claims of support for these devices on the ROCm software platform.
The following list of GPUs are enabled in the ROCm software, though full support is not guaranteed:
* GFX8 GPUs
@@ -238,7 +795,7 @@ As described [below](#limited-support), "Carrizo", "Bristol Ridge", and "Raven R
However, they are not enabled in the HIP runtime, and may not work due to motherboard or OEM hardware limitations.
As such, they are not yet officially supported targets for ROCm.
For a more detailed list of hardware support, please see [the following documentation](https://rocm.github.io/hardware.html).
For a more detailed list of hardware support, please see [the following documentation](https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units).
#### Supported CPUs
As described above, GFX8 GPUs require PCIe 3.0 with PCIe atomics in order to run ROCm.
@@ -281,7 +838,7 @@ from the list provided above for compatibility purposes.
##### Limited support
* ROCm 2.9.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
* ROCm 4.x should support PCIe 2.0 enabled CPUs such as the AMD Opteron, Phenom, Phenom II, Athlon, Athlon X2, Athlon II and older Intel Xeon and Intel Core Architecture and Pentium CPUs. However, we have done very limited testing on these configurations, since our test farm has been catering to CPUs listed above. This is where we need community support. _If you find problems on such setups, please report these issues_.
* Thunderbolt 1, 2, and 3 enabled breakout boxes should now be able to work with ROCm. Thunderbolt 1 and 2 are PCIe 2.0 based, and thus are only supported with GPUs that do not require PCIe 3.1.0 atomics (e.g. Vega 10). However, we have done no testing on this configuration and would need community support due to limited access to this type of equipment.
* AMD "Carrizo" and "Bristol Ridge" APUs are enabled to run OpenCL, but do not yet support HIP or our libraries built on top of these compilers and runtimes.
* As of ROCm 2.1, "Carrizo" and "Bristol Ridge" require the use of upstream kernel drivers.
@@ -294,11 +851,13 @@ from the list provided above for compatibility purposes.
##### Not supported
* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported in ROCm 2.9.x
* "Tonga", "Iceland", "Vega M", and "Vega 12" GPUs are not supported.
* We do not support GFX8-class GPUs (Fiji, Polaris, etc.) on CPUs that do not have PCIe 3.0 with PCIe atomics.
* As such, we do not support AMD Carrizo and Kaveri APUs as hosts for such GPUs.
* Thunderbolt 1 and 2 enabled GPUs are not supported by GFX8 GPUs on ROCm. Thunderbolt 1 & 2 are based on PCIe 2.0.
In the default ROCm configuration, GFX8 and GFX9 GPUs require PCI Express 3.0 with PCIe atomics. The ROCm platform leverages these advanced capabilities to allow features such as user-level submission of work from the host to the GPU. This includes PCIe atomic Fetch and Add, Compare and Swap, Unconditional Swap, and AtomicOp Completion.
#### ROCm support in upstream Linux kernels
As of ROCm 1.9.0, the ROCm user-level software is compatible with the AMD drivers in certain upstream Linux kernels.
@@ -325,9 +884,17 @@ For users that have the option of using either AMD's or the upstreamed driver, t
| | | Does not include most up-to-date firmware |
# Disclaimer
## Machine Learning and High Performance Computing Software Stack for AMD GPU
AMD®, the AMD Arrow logo, AMD Instinct™, Radeon™, ROCm® and combinations thereof are trademarks of Advanced Micro Devices, Inc.
For an updated version of the software stack for AMD GPU, see
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
PCIe® is a registered trademark of PCI-SIG Corporation. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
Google® is a registered trademark of Google LLC.
Ubuntu and the Ubuntu logo are registered trademarks of Canonical Ltd.
Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html#software-stack-for-amd-gpu

default.xml

@@ -1,27 +1,26 @@
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
<remote name="roc-github"
fetch="http://github.com/RadeonOpenCompute/" />
fetch="http://github.com/RadeonOpenCompute/" />
<remote name="rocm-devtools"
fetch="https://github.com/ROCm-Developer-Tools/" />
fetch="https://github.com/ROCm-Developer-Tools/" />
<remote name="rocm-swplat"
fetch="https://github.com/ROCmSoftwarePlatform/" />
fetch="https://github.com/ROCmSoftwarePlatform/" />
<remote name="gpuopen-libs"
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
fetch="https://github.com/GPUOpen-ProfessionalCompute-Libraries/" />
<remote name="gpuopen-tools"
fetch="https://github.com/GPUOpen-Tools/" />
fetch="https://github.com/GPUOpen-Tools/" />
<remote name="KhronosGroup"
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-3.8.0"
remote="roc-github"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCM-->
fetch="https://github.com/KhronosGroup/" />
<default revision="refs/tags/rocm-4.3.1"
remote="roc-github"
sync-c="true"
sync-j="4" />
<!--list of projects for ROCM-->
<project name="ROCK-Kernel-Driver" />
<project name="ROCT-Thunk-Interface" />
<project name="ROCR-Runtime" />
<project name="ROC-smi" />
<project name="rocm_smi_lib" remote="roc-github" />
<project name="rocm_smi_lib" />
<project name="rocm-cmake" />
<project name="rocminfo" />
<project name="rocprofiler" remote="rocm-devtools" />
@@ -29,26 +28,30 @@
<project name="ROCm-OpenCL-Runtime" />
<project path="ROCm-OpenCL-Runtime/api/opencl/khronos/icd" name="OpenCL-ICD-Loader" remote="KhronosGroup" revision="6c03f8b58fafd9dd693eaac826749a5cfad515f8" />
<project name="clang-ocl" />
<!--HIP Projects-->
<!--HIP Projects-->
<project name="HIP" remote="rocm-devtools" />
<project name="HIP-Examples" remote="rocm-devtools" />
<project name="ROCclr" remote="rocm-devtools" />
<project name="HIPIFY" remote="rocm-devtools" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm-project" path="llvm_amd-stg-open" />
<!-- The following projects are all associated with the AMDGPU LLVM compiler -->
<project name="llvm-project" />
<project name="ROCm-Device-Libs" />
<project name="atmi" />
<project name="ROCm-CompilerSupport" />
<project name="rocr_debug_agent" remote="rocm-devtools" />
<project name="rocm_bandwidth_test" />
<project name="half" remote="rocm-swplat" revision="37742ce15b76b44e4b271c1e66d13d2fa7bd003e" />
<project name="RCP" remote="gpuopen-tools" revision="3a49405a1500067c49d181844ec90aea606055bb" />
<!-- gdb projects -->
<!-- gdb projects -->
<project name="ROCgdb" remote="rocm-devtools" />
<project name="ROCdbgapi" remote="rocm-devtools" />
<!-- ROCm Libraries -->
<!-- ROCm Libraries -->
<project name="rdc" remote="roc-github" />
<project name="rocBLAS" remote="rocm-swplat" />
<project name="Tensile" remote="rocm-swplat" />
<project name="hipBLAS" remote="rocm-swplat" />
<project name="rocFFT" remote="rocm-swplat" />
<project name="hipFFT" remote="rocm-swplat" />
<project name="rocRAND" remote="rocm-swplat" />
<project name="rocSPARSE" remote="rocm-swplat" />
<project name="rocSOLVER" remote="rocm-swplat" />
@@ -62,18 +65,10 @@
<project name="hipCUB" remote="rocm-swplat" />
<project name="rocPRIM" remote="rocm-swplat" />
<project name="hipfort" remote="rocm-swplat" />
<project name="AMDMIGraphX" remote="rocm-swplat" />
<project name="ROCmValidationSuite" remote="rocm-devtools" />
<!-- Projects for AOMP -->
<project name="ROCT-Thunk-Interface" path="aomp/roct-thunk-interface" remote="roc-github" />
<project name="ROCR-Runtime" path="aomp/rocr-runtime" remote="roc-github" />
<project name="ROCm-Device-Libs" path="aomp/rocm-device-libs" remote="roc-github" />
<project name="ROCm-CompilerSupport" path="aomp/rocm-compilersupport" remote="roc-github" />
<project name="rocminfo" path="aomp/rocminfo" remote="roc-github" />
<project name="HIP" path="aomp/hip-on-vdi" remote="rocm-devtools" />
<project name="aomp" path="aomp/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="aomp/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="aomp/flang" remote="rocm-devtools" />
<project name="amd-llvm-project" path="aomp/amd-llvm-project" remote="rocm-devtools" />
<project name="ROCclr" path="aomp/vdi" remote="rocm-devtools" />
<project name="ROCm-OpenCL-Runtime" path="aomp/opencl-on-vdi" remote="roc-github" />
<!-- Projects for OpenMP-Extras -->
<project name="aomp" path="openmp-extras/aomp" remote="rocm-devtools" />
<project name="aomp-extras" path="openmp-extras/aomp-extras" remote="rocm-devtools" />
<project name="flang" path="openmp-extras/flang" remote="rocm-devtools" />
</manifest>

Image and other binary file changes (content not shown): images/OSKernelupdated.PNG (13 KiB) and images/SuppEnv.PNG (7.8 KiB) added, images/test.rst added as a new single-line file, and several existing images removed.