From 0d17c96f7fef6a970d328c35228eda59fdfc94d5 Mon Sep 17 00:00:00 2001
From: Matt Williams
Date: Wed, 10 Dec 2025 11:17:59 -0500
Subject: [PATCH] Fixing link redirects (#5758)

* Update multi-gpu-fine-tuning-and-inference.rst
* Update pytorch-training-v25.6.rst
* Update pytorch-compatibility.rst
---
 docs/compatibility/ml-compatibility/pytorch-compatibility.rst | 4 ++--
 .../fine-tuning/multi-gpu-fine-tuning-and-inference.rst       | 2 +-
 .../previous-versions/pytorch-training-v25.6.rst              | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/compatibility/ml-compatibility/pytorch-compatibility.rst b/docs/compatibility/ml-compatibility/pytorch-compatibility.rst
index a51f726d3..5cb2454c3 100644
--- a/docs/compatibility/ml-compatibility/pytorch-compatibility.rst
+++ b/docs/compatibility/ml-compatibility/pytorch-compatibility.rst
@@ -349,7 +349,7 @@ with ROCm.
        you need to explicitly move audio data (waveform tensor) to GPU using
        ``.to('cuda')``.
 
-   * - `torchtune `_
+   * - `torchtune `_
      - PyTorch-native library designed for fine-tuning large language models
        (LLMs). Provides supports the full fine-tuning workflow and offers
        compatibility with popular production inference systems.
@@ -366,7 +366,7 @@ with ROCm.
        constructing flexible and performant data pipelines, with features still
        in prototype stage.
 
-   * - `torchrec `_
+   * - `torchrec `_
      - PyTorch domain library for common sparsity and parallelism primitives
        needed for large-scale recommender systems, enabling authors to train
        models with large embedding tables shared across many GPUs.
diff --git a/docs/how-to/rocm-for-ai/fine-tuning/multi-gpu-fine-tuning-and-inference.rst b/docs/how-to/rocm-for-ai/fine-tuning/multi-gpu-fine-tuning-and-inference.rst
index 83ec3927b..0e2b879e8 100644
--- a/docs/how-to/rocm-for-ai/fine-tuning/multi-gpu-fine-tuning-and-inference.rst
+++ b/docs/how-to/rocm-for-ai/fine-tuning/multi-gpu-fine-tuning-and-inference.rst
@@ -130,7 +130,7 @@ After loading the model in this way, the model is fully ready to use the resourc
 torchtune for fine-tuning and inference
 =============================================
 
-`torchtune `_ is a PyTorch-native library for easy single and multi-GPU
+`torchtune `_ is a PyTorch-native library for easy single and multi-GPU
 model fine-tuning and inference with LLMs.
 
 #. Install torchtune using pip.
diff --git a/docs/how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.6.rst b/docs/how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.6.rst
index 8499a4b47..958b07575 100644
--- a/docs/how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.6.rst
+++ b/docs/how-to/rocm-for-ai/training/benchmark-docker/previous-versions/pytorch-training-v25.6.rst
@@ -240,7 +240,7 @@ The following models are pre-optimized for performance on the AMD Instinct MI325
      - `Hugging Face Datasets `_
        3.2.0
    * - ``torchdata``
-     - `TorchData `_
+     - `TorchData `_
    * - ``tomli``
      - `Tomli `_
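All three hunks in this patch make the same kind of change: rewriting the target of a reStructuredText external link of the form `` `label <url>`_ ``. As an illustrative aid for auditing such links in bulk before fixing redirects (this helper is not part of the PR; the function name and regex are assumptions), the link targets can be extracted from an ``.rst`` source like so:

```python
import re

# Matches reStructuredText external links of the form `label <https://example.com>`_
# Double-backtick inline literals (e.g. ``.to('cuda')``) are skipped because the
# label character class excludes backticks.
RST_LINK_RE = re.compile(r"`([^`<]+?)\s*<(https?://[^>]+)>`_")

def extract_rst_links(text):
    """Return (label, url) pairs for every external link found in the RST text."""
    return [(label.strip(), url) for label, url in RST_LINK_RE.findall(text)]

# Hypothetical example URL, not a target from this patch:
sample = "See `torchtune <https://example.com/torchtune>`_ for details."
print(extract_rst_links(sample))
# → [('torchtune', 'https://example.com/torchtune')]
```

Each extracted URL could then be checked for 3xx responses (for instance with ``curl -I``) to find the redirects this patch is fixing by hand.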