anisha-amd | a98236a4e3 | 2025-10-16 11:22:10 -04:00
Main Docs: references of accelerator removal and change to GPU (#5495)
* Docs: references of accelerator removal and change to GPU
Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>
Co-authored-by: Pratik Basyal <pratik.basyal@amd.com>

Peter Park | c3faa9670b | 2025-04-23 17:35:52 -04:00
Add PyTorch inference benchmark Docker guide (+ CLIP and Chai-1) (#4654)
* update vLLM links in deploy-your-model.rst
* add pytorch inference benchmark doc
* update toc and vLLM title
* remove previous versions
* update
* wording
* fix link and "applies to"
* add pytorch to wordlist
* add tunableop note to clip
* make tunableop note appear to all models
* Update docs/how-to/rocm-for-ai/inference/pytorch-inference-benchmark.rst (review suggestions, applied four times)
* fix incorrect links
* wording
* fix wrong docker pull tag
Co-authored-by: Leo Paoletti <164940351+lpaoletti@users.noreply.github.com>

Peter Park | 9b2ce2b634 | 2025-03-13 10:04:21 -04:00
Update vLLM performance Docker docs (#4491)
* add links to performance results
* change "performance validation" to "performance testing"
* update vLLM docker 3/11
* add previous versions
* fix llama 3.1 8b model repo name
* words

Pratik Basyal | 353d2fe1c1 | 2025-01-27 15:49:21 -05:00
2nd POC for How to Use ROCm for AI (#282) (#4299)
* New TOC for ROCm for AI developed
Co-authored-by: Peter Park <peter.park@amd.com>