.. meta::
   :description: How to fine-tune models with ROCm
   :keywords: ROCm, LLM, fine-tuning, inference, usage, tutorial, deep learning, PyTorch, TensorFlow, JAX
*************************
Fine-tuning and inference
*************************
Fine-tuning using ROCm involves leveraging AMD's GPU-accelerated :doc:`libraries <rocm:reference/api-libraries>` and
:doc:`tools <rocm:reference/rocm-tools>` to optimize and train deep learning models. ROCm provides a comprehensive
ecosystem for deep learning development, including open-source libraries for optimized deep learning operations and
ROCm-aware versions of :doc:`deep learning frameworks <../../deep-learning-rocm>` such as PyTorch, TensorFlow, and JAX.
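As a hedged illustration of what "ROCm-aware" means in practice: ROCm builds of PyTorch expose AMD GPUs through the familiar ``torch.cuda`` namespace, so device-selection code is unchanged across vendors. The snippet below is a minimal sketch, not part of the official docs, and it falls back to CPU when no GPU is visible:

```python
# Minimal device check for a ROCm-enabled PyTorch build (illustrative sketch).
# ROCm builds of PyTorch reuse the torch.cuda namespace for HIP devices, so
# the same code runs unmodified on AMD and NVIDIA hardware.
import torch

# Fall back to CPU when no accelerator is visible, so this is safe anywhere.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

if torch.cuda.is_available():
    # On ROCm builds, torch.version.hip is set (torch.version.cuda is None).
    print(f"HIP version: {torch.version.hip}")
    print(f"Device name: {torch.cuda.get_device_name(0)}")
```

Because the CUDA namespace is reused, framework tutorials written for NVIDIA hardware generally apply to ROCm installations without modification.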
Single-GPU systems, such as a workstation equipped with one GPU, are commonly used for
smaller-scale deep learning tasks, including fine-tuning pre-trained models and running inference on moderately
sized datasets. See :doc:`single-gpu-fine-tuning-and-inference`.
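A single-GPU fine-tuning step can be sketched as follows. This is a toy stand-in (the model and batch here are placeholders, not from the ROCm docs), but the structure of the loop is the same one used when fine-tuning a real pre-trained model on one device:

```python
# Toy single-device fine-tuning step (illustrative sketch; the model and
# data are placeholders). The same loop runs on an AMD GPU under a ROCm
# build of PyTorch, or on CPU when no device is found.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(16, 2).to(device)              # stand-in for a pre-trained model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(8, 16, device=device)       # toy fine-tuning batch
labels = torch.randint(0, 2, (8,), device=device)

model.train()
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)            # forward pass
loss.backward()                                  # backward pass
optimizer.step()                                 # parameter update
print(f"fine-tuning step loss: {loss.item():.4f}")
```

In a real workload, the model would come from a checkpoint and the batch from a ``DataLoader``, but the forward/backward/step pattern is unchanged.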
Multi-GPU systems, on the other hand, use multiple GPUs working in parallel. These systems are
typically used for LLMs and other large-scale deep learning tasks where performance, scalability, and the handling of
massive datasets are crucial. See :doc:`multi-gpu-fine-tuning-and-inference`.
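The multi-GPU case typically uses standard PyTorch data parallelism. The sketch below follows common ``DistributedDataParallel`` usage rather than any ROCm-specific API; on ROCm installations the ``nccl`` backend is provided by RCCL, AMD's collective communication library. Script name and launch command are illustrative:

```python
# Sketch of multi-GPU data parallelism with DistributedDataParallel (DDP).
# torchrun starts one process per GPU; each process selects its device from
# the LOCAL_RANK environment variable. This follows standard PyTorch DDP
# usage; on ROCm, the "nccl" backend is backed by RCCL.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # One process group spanning all launched processes.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Wrap the model so gradients are averaged across all GPUs each step.
    model = torch.nn.Linear(16, 2).to(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])

    # ... standard training loop over a DistributedSampler-backed DataLoader ...

    dist.destroy_process_group()

# Launch with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
```

Each process sees only its own shard of the data, and DDP synchronizes gradients during the backward pass, so the per-device training loop is identical to the single-GPU version above.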