Add Hunyuan Video to PyTorch inference benchmark models doc (#5094)

Author: Peter Park
Date: 2025-08-12 11:54:59 -04:00
Committed by: GitHub
Parent: 231aa0bfc6
Commit: 80f7dc79b9
2 changed files with 10 additions and 2 deletions

@@ -39,7 +39,7 @@ pytorch_inference_benchmark:
         model_repo: Wan-AI/Wan2.1-T2V-14B
         url: https://huggingface.co/Wan-AI/Wan2.1-T2V-14B
         precision: bfloat16
-  - group: Janus-Pro
+  - group: Janus Pro
     tag: janus-pro
     models:
       - model: Janus Pro 7B
@@ -47,3 +47,11 @@ pytorch_inference_benchmark:
         model_repo: deepseek-ai/Janus-Pro-7B
         url: https://huggingface.co/deepseek-ai/Janus-Pro-7B
         precision: bfloat16
+  - group: Hunyuan Video
+    tag: hunyuan
+    models:
+      - model: Hunyuan Video
+        mad_tag: pyt_hy_video
+        model_repo: tencent/HunyuanVideo
+        url: https://huggingface.co/tencent/HunyuanVideo
+        precision: float16
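For reference, the mad_tag value added here (pyt_hy_video) is the identifier the benchmarking workflow hands to ROCm's MAD harness to select this model. A rough sketch of how the new entry could be exercised, assuming the usual MAD workflow (the script name and flags are assumptions based on the MAD project, not part of this commit):

    # Assumed workflow: clone the MAD harness and run the model selected by its MAD tag.
    git clone https://github.com/ROCm/MAD
    cd MAD
    pip install -r requirements.txt
    # pyt_hy_video is the mad_tag introduced by this commit; additional flags
    # (for example timeouts or live output) may apply depending on the MAD version.
    python3 tools/run_models.py --tags pyt_hy_video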

@@ -103,7 +103,7 @@ PyTorch inference performance testing
    The Chai-1 benchmark uses a specifically selected Docker image using ROCm 6.2.3 and PyTorch 2.3.0 to address an accuracy issue.

-.. container:: model-doc pyt_clip_inference pyt_mochi_video_inference pyt_wan2.1_inference pyt_janus_pro_inference
+.. container:: model-doc pyt_clip_inference pyt_mochi_video_inference pyt_wan2.1_inference pyt_janus_pro_inference pyt_hy_video

    Use the following command to pull the `ROCm PyTorch Docker image <https://hub.docker.com/layers/rocm/pytorch/latest/images/sha256-05b55983e5154f46e7441897d0908d79877370adca4d1fff4899d9539d6c4969>`__ from Docker Hub.
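The pull command itself falls outside this excerpt; based on the linked Docker Hub image (rocm/pytorch, latest tag), it is presumably of the form:

    # Pull the ROCm PyTorch image referenced above; the tag is assumed from the Docker Hub link.
    docker pull rocm/pytorch:latest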