From e5cebe7b4e74b1e6a194bbbfc408f9f02dbf7bb8 Mon Sep 17 00:00:00 2001
From: Pratik Basyal
Date: Tue, 16 Dec 2025 13:24:12 -0500
Subject: [PATCH] Taichi removed from ROCm docs [Develop] (#5779) (#5781)

* Taichi removed from ROCm docs

* Warnings fixed
---
 .wordlist.txt                                 |  2 -
 CHANGELOG.md                                  |  4 +-
 .../compatibility-matrix-historical-6.0.csv   |  1 -
 docs/compatibility/compatibility-matrix.rst   |  5 +-
 .../ml-compatibility/taichi-compatibility.rst | 99 -------------------
 docs/conf.py                                  |  1 -
 .../precision-support/precision-support.yaml  |  4 +-
 docs/how-to/deep-learning-rocm.rst            | 12 ---
 .../previous-versions/xdit-25.10.rst          |  4 +-
 docs/sphinx/_toc.yml.in                       |  2 -
 10 files changed, 8 insertions(+), 126 deletions(-)
 delete mode 100644 docs/compatibility/ml-compatibility/taichi-compatibility.rst

diff --git a/.wordlist.txt b/.wordlist.txt
index f98bf4e02..da7a23833 100644
--- a/.wordlist.txt
+++ b/.wordlist.txt
@@ -524,8 +524,6 @@ TPS
 TPU
 TPUs
 TSME
-Taichi
-Taichi's
 Tagram
 TensileLite
 TensorBoard
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 66d15e40b..11101599d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1734,8 +1734,8 @@ HIP runtime has the following functional improvements which improves runtime per

 #### Upcoming changes

-* `__AMDGCN_WAVEFRONT_SIZE__` macro and HIP’s `warpSize` variable as `constexpr` are deprecated and will be disabled in a future release. Users are encouraged to update their code if needed to ensure future compatibility. For more information, see [AMDGCN_WAVEFRONT_SIZE deprecation](#amdgpu-wavefront-size-compiler-macro-deprecation).
-* The `roc-obj-ls` and `roc-obj-extract` tools are deprecated. To extract all Clang offload bundles into separate code objects use `llvm-objdump --offloading `. For more information, see [Changes to ROCm Object Tooling](#changes-to-rocm-object-tooling).
+* `__AMDGCN_WAVEFRONT_SIZE__` macro and HIP’s `warpSize` variable as `constexpr` are deprecated and will be disabled in a future release. Users are encouraged to update their code if needed to ensure future compatibility. For more information, see [AMDGCN_WAVEFRONT_SIZE deprecation](https://rocm.docs.amd.com/en/docs-7.0.0/about/release-notes.html#amdgpu-wavefront-size-compiler-macro-deprecation).
+* The `roc-obj-ls` and `roc-obj-extract` tools are deprecated. To extract all Clang offload bundles into separate code objects use `llvm-objdump --offloading `. For more information, see [Changes to ROCm Object Tooling](https://rocm.docs.amd.com/en/docs-7.0.0/about/release-notes.html#changes-to-rocm-object-tooling).

 ### **MIGraphX** (2.13.0)

diff --git a/docs/compatibility/compatibility-matrix-historical-6.0.csv b/docs/compatibility/compatibility-matrix-historical-6.0.csv
index 627072d40..7ee4de0f8 100644
--- a/docs/compatibility/compatibility-matrix-historical-6.0.csv
+++ b/docs/compatibility/compatibility-matrix-historical-6.0.csv
@@ -37,7 +37,6 @@ ROCm Version,7.1.1,7.1.0,7.0.2,7.0.1/7.0.0,6.4.3,6.4.2,6.4.1,6.4.0,6.3.3,6.3.2,6
 :doc:`Stanford Megatron-LM <../compatibility/ml-compatibility/stanford-megatron-lm-compatibility>` [#stanford-megatron-lm_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,85f95ae,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
 :doc:`DGL <../compatibility/ml-compatibility/dgl-compatibility>` [#dgl_compat-past-60]_,N/A,N/A,N/A,2.4.0,2.4.0,N/A,N/A,2.4.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
 :doc:`Megablocks <../compatibility/ml-compatibility/megablocks-compatibility>` [#megablocks_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,0.7.0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
-:doc:`Taichi <../compatibility/ml-compatibility/taichi-compatibility>` [#taichi_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,1.8.0b1,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
 :doc:`Ray <../compatibility/ml-compatibility/ray-compatibility>` [#ray_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,2.48.0.post0,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
 :doc:`llama.cpp <../compatibility/ml-compatibility/llama-cpp-compatibility>` [#llama-cpp_compat-past-60]_,N/A,N/A,N/A,b6652,b6356,b6356,b6356,b5997,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
 :doc:`FlashInfer <../compatibility/ml-compatibility/flashinfer-compatibility>` [#flashinfer_compat-past-60]_,N/A,N/A,N/A,N/A,N/A,N/A,v0.2.5,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
diff --git a/docs/compatibility/compatibility-matrix.rst b/docs/compatibility/compatibility-matrix.rst
index 709fdd5f0..2cac5c0af 100644
--- a/docs/compatibility/compatibility-matrix.rst
+++ b/docs/compatibility/compatibility-matrix.rst
@@ -155,8 +155,8 @@ compatibility and system requirements.

 .. rubric:: Footnotes

-.. [#os-compatibility] Some operating systems are supported on limited GPUs. For detailed information, see the latest :ref:`supported_distributions`. For version specific information, see `ROCm 7.1.1 `_, `ROCm 7.1.0 `_, and `ROCm 6.4.0 `_.
-.. [#gpu-compatibility] Some GPUs have limited operating system support. For detailed information, see the latest :ref:`supported_GPUs`. For version specific information, see `ROCm 7.1.1 `_, `ROCm 7.1.0 `_, and `ROCm 6.4.0 `_.
+.. [#os-compatibility] Some operating systems are supported on limited GPUs. For detailed information, see the latest :ref:`supported_distributions`. For version specific information, see `ROCm 7.1.1 `__, `ROCm 7.1.0 `__, and `ROCm 6.4.0 `__.
+.. [#gpu-compatibility] Some GPUs have limited operating system support. For detailed information, see the latest :ref:`supported_GPUs`. For version specific information, see `ROCm 7.1.1 `__, `ROCm 7.1.0 `__, and `ROCm 6.4.0 `__.
 .. [#dgl_compat] DGL is supported only on ROCm 7.0.0, ROCm 6.4.3 and ROCm 6.4.0.
 .. [#llama-cpp_compat] llama.cpp is supported only on ROCm 7.0.0 and ROCm 6.4.x.
 .. [#mi325x_KVM] For AMD Instinct MI325X KVM SR-IOV users, do not use AMD GPU Driver (amdgpu) 30.20.0.
@@ -245,7 +245,6 @@ Expand for full historical view of:
 .. [#stanford-megatron-lm_compat-past-60] Stanford Megatron-LM is supported only on ROCm 6.3.0.
 .. [#dgl_compat-past-60] DGL is supported only on ROCm 7.0.0, ROCm 6.4.3 and ROCm 6.4.0.
 .. [#megablocks_compat-past-60] Megablocks is supported only on ROCm 6.3.0.
-.. [#taichi_compat-past-60] Taichi is supported only on ROCm 6.3.2.
 .. [#ray_compat-past-60] Ray is supported only on ROCm 6.4.1.
 .. [#llama-cpp_compat-past-60] llama.cpp is supported only on ROCm 7.0.0 and 6.4.x.
 .. [#flashinfer_compat-past-60] FlashInfer is supported only on ROCm 6.4.1.
diff --git a/docs/compatibility/ml-compatibility/taichi-compatibility.rst b/docs/compatibility/ml-compatibility/taichi-compatibility.rst
deleted file mode 100644
index 3da4a3776..000000000
--- a/docs/compatibility/ml-compatibility/taichi-compatibility.rst
+++ /dev/null
@@ -1,99 +0,0 @@
-:orphan:
-
-.. meta::
-   :description: Taichi compatibility
-   :keywords: GPU, Taichi, deep learning, framework compatibility
-
-.. version-set:: rocm_version latest
-
-*******************************************************************************
-Taichi compatibility
-*******************************************************************************
-
-`Taichi `_ is an open-source, imperative, and parallel
-programming language designed for high-performance numerical computation.
-Embedded in Python, it leverages just-in-time (JIT) compilation frameworks such as LLVM to accelerate
-compute-intensive Python code by compiling it to native GPU or CPU instructions.
-
-Taichi is widely used across various domains, including real-time physical simulation,
-numerical computing, augmented reality, artificial intelligence, computer vision, robotics,
-visual effects in film and gaming, and general-purpose computing.
-
-Support overview
-================================================================================
-
-- The ROCm-supported version of Taichi is maintained in the official `https://github.com/ROCm/taichi
-  `__ repository, which differs from the
-  `https://github.com/taichi-dev/taichi `__ upstream repository.
-
-- To get started and install Taichi on ROCm, use the prebuilt :ref:`Docker image `,
-  which includes ROCm, Taichi, and all required dependencies.
-
-  - See the :doc:`ROCm Taichi installation guide `
-    for installation and setup instructions.
-
-  - You can also consult the upstream `Installation guide `__
-    for additional context.
-
-Version support
---------------------------------------------------------------------------------
-
-Taichi is supported on `ROCm 6.3.2 `__.
-
-Supported devices
---------------------------------------------------------------------------------
-
-- **Officially Supported**: AMD Instinct™ MI250X, MI210X (with the exception of Taichi’s GPU rendering system, CGUI)
-- **Upcoming Support**: AMD Instinct™ MI300X
-
-.. _taichi-recommendations:
-
-Use cases and recommendations
-================================================================================
-
-* The `Accelerating Parallel Programming in Python with Taichi Lang on AMD GPUs
-  `__
-  blog highlights Taichi as an open-source programming language designed for high-performance
-  numerical computation, particularly in domains like real-time physical simulation,
-  artificial intelligence, computer vision, robotics, and visual effects. Taichi
-  is embedded in Python and uses just-in-time (JIT) compilation frameworks like
-  LLVM to optimize execution on GPUs and CPUs. The blog emphasizes the versatility
-  of Taichi in enabling complex simulations and numerical algorithms, making
-  it ideal for developers working on compute-intensive tasks. Developers are
-  encouraged to follow recommended coding patterns and utilize Taichi decorators
-  for performance optimization, with examples available in the `https://github.com/ROCm/taichi_examples
-  `_ repository. Prebuilt Docker images
-  integrating ROCm, PyTorch, and Taichi are provided for simplified installation
-  and deployment, making it easier to leverage Taichi for advanced computational workloads.
-
-.. _taichi-docker-compat:
-
-Docker image compatibility
-================================================================================
-
-.. |docker-icon| raw:: html
-
-
-AMD validates and publishes ready-made `ROCm Taichi Docker images `_
-with ROCm backends on Docker Hub. The following Docker image tag and associated inventories
-represent the latest Taichi version from the official Docker Hub.
-Click |docker-icon| to view the image on Docker Hub.
-
-.. list-table::
-   :header-rows: 1
-   :class: docker-image-compatibility
-
-   * - Docker image
-     - ROCm
-     - Taichi
-     - Ubuntu
-     - Python
-
-   * - .. raw:: html
-
-          rocm/taichi
-     - `6.3.2 `_
-     - `1.8.0b1 `_
-     - 22.04
-     - `3.10.12 `_
\ No newline at end of file
diff --git a/docs/conf.py b/docs/conf.py
index 9ab46b1a5..f624ef828 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -111,7 +111,6 @@ article_pages = [
     {"file": "compatibility/ml-compatibility/stanford-megatron-lm-compatibility", "os": ["linux"]},
     {"file": "compatibility/ml-compatibility/dgl-compatibility", "os": ["linux"]},
     {"file": "compatibility/ml-compatibility/megablocks-compatibility", "os": ["linux"]},
-    {"file": "compatibility/ml-compatibility/taichi-compatibility", "os": ["linux"]},
     {"file": "compatibility/ml-compatibility/ray-compatibility", "os": ["linux"]},
     {"file": "compatibility/ml-compatibility/llama-cpp-compatibility", "os": ["linux"]},
     {"file": "compatibility/ml-compatibility/flashinfer-compatibility", "os": ["linux"]},
diff --git a/docs/data/reference/precision-support/precision-support.yaml b/docs/data/reference/precision-support/precision-support.yaml
index 6f4773319..3ec7f57a4 100644
--- a/docs/data/reference/precision-support/precision-support.yaml
+++ b/docs/data/reference/precision-support/precision-support.yaml
@@ -32,7 +32,7 @@ library_groups:

   - name: "MIGraphX"
     tag: "migraphx"
-    doc_link: "amdmigraphx:reference/cpp"
+    doc_link: "amdmigraphx:reference/MIGraphX-cpp"
     data_types:
       - type: "int8"
         support: "⚠️"
@@ -290,7 +290,7 @@ library_groups:

   - name: "Tensile"
     tag: "tensile"
-    doc_link: "tensile:reference/precision-support"
+    doc_link: "tensile:src/reference/precision-support"
     data_types:
       - type: "int8"
         support: "✅"
diff --git a/docs/how-to/deep-learning-rocm.rst b/docs/how-to/deep-learning-rocm.rst
index b19b459df..f0b5bd248 100644
--- a/docs/how-to/deep-learning-rocm.rst
+++ b/docs/how-to/deep-learning-rocm.rst
@@ -100,18 +100,6 @@ The table below summarizes information about ROCm-enabled deep learning framewor

-   * - `Taichi `__
-     - .. raw:: html
-
-
-
-       - `Docker image `__
-       - `Wheels package `__
-
-     - .. raw:: html
-
-
-
    * - `Ray `__
      - .. raw:: html
diff --git a/docs/how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/xdit-25.10.rst b/docs/how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/xdit-25.10.rst
index aa38b65fa..92c2e908a 100644
--- a/docs/how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/xdit-25.10.rst
+++ b/docs/how-to/rocm-for-ai/inference/benchmark-docker/previous-versions/xdit-25.10.rst
@@ -47,7 +47,7 @@ What's new
 - Supported architectures: gfx942 and gfx950 (AMD Instinct™ MI300X, MI325X, MI350X, and MI355X).
 - Supported workloads: Wan 2.1, Wan 2.2, HunyuanVideo, and Flux models.

-.. _xdit-video-diffusion-supported-models:
+.. _xdit-video-diffusion-supported-models-2510:

 Supported models
 ================
@@ -145,7 +145,7 @@ run benchmarks and generate outputs.
    .. container:: model-doc {{model.mad_tag}}

      The following commands are written for {{ model.model }}.
-     See :ref:`xdit-video-diffusion-supported-models` to switch to another available model.
+     See :ref:`xdit-video-diffusion-supported-models-2510` to switch to another available model.

 {% endfor %}
 {% endfor %}
diff --git a/docs/sphinx/_toc.yml.in b/docs/sphinx/_toc.yml.in
index d46a111b6..592ce9dc1 100644
--- a/docs/sphinx/_toc.yml.in
+++ b/docs/sphinx/_toc.yml.in
@@ -43,8 +43,6 @@ subtrees:
           title: DGL compatibility
         - file: compatibility/ml-compatibility/megablocks-compatibility.rst
           title: Megablocks compatibility
-        - file: compatibility/ml-compatibility/taichi-compatibility.rst
-          title: Taichi compatibility
         - file: compatibility/ml-compatibility/ray-compatibility.rst
           title: Ray compatibility
         - file: compatibility/ml-compatibility/llama-cpp-compatibility.rst