Mirror of https://github.com/ROCm/ROCm.git, synced 2026-04-05 03:01:17 -04:00
Add QwQ 32B to vllm-benchmark.rst (#4685)
* Add Qwen2 MoE 2.7B to vllm-benchmark-models.yaml
* Add QwQ-32B-Preview to vllm-benchmark-models.yaml
* add links to performance results
* change "performance validation" to "performance testing"
* remove "-Preview" from QwQ-32B
* move Qwen2 MoE after Qwen2
* add TunableOp section
* fix formatting
* add link to TunableOp doc
* add TunableOp note
* fix vllm-benchmark template
* remove cmdline option for --tunableop on
* update Docker details
* remove "training"
* remove Qwen2
@@ -102,6 +102,12 @@ vllm_benchmark:
       model_repo: Qwen/Qwen2-72B-Instruct
       url: https://huggingface.co/Qwen/Qwen2-72B-Instruct
       precision: float16
+    - model: QwQ-32B
+      mad_tag: pyt_vllm_qwq-32b
+      model_repo: Qwen/QwQ-32B
+      url: https://huggingface.co/Qwen/QwQ-32B
+      precision: float16
+      tunableop: true
   - group: DBRX
     tag: dbrx
     models:
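For context, a minimal sketch of how a harness might consume entries like the one added above, such as selecting the models that opt in to TunableOp. The top-level ``vllm_benchmark`` key and the group/model nesting are inferred from the hunk; PyYAML and the filename are assumptions, not part of this commit.

    # Hedged sketch: read the model list and report models that request
    # TunableOp. Schema is inferred from the diff hunk above.
    import yaml  # PyYAML, assumed available

    with open("vllm-benchmark-models.yaml") as f:
        config = yaml.safe_load(f)

    for group in config["vllm_benchmark"]:
        for model in group.get("models", []):
            if model.get("tunableop"):
                # After this commit, QwQ-32B (pyt_vllm_qwq-32b) matches here.
                print(f"{model['model']} ({model['mad_tag']}): TunableOp note enabled")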
@@ -34,7 +34,7 @@ vLLM inference performance testing
 
 .. _vllm-benchmark-available-models:
 
-Available models
+Supported models
 ================
 
 .. raw:: html
@@ -183,6 +183,25 @@ vLLM inference performance testing
          to collect latency and throughput performance data, you can also change the benchmarking
          parameters. See the standalone benchmarking tab for more information.
 
+         {% if model.tunableop %}
+
+         .. note::
+
+            For improved performance, consider enabling :ref:`PyTorch TunableOp <mi300x-tunableop>`.
+            TunableOp automatically explores different implementations and configurations of certain PyTorch
+            operators to find the fastest one for your hardware.
+
+            By default, ``{{model.mad_tag}}`` runs with TunableOp disabled
+            (see `<https://github.com/ROCm/MAD/blob/develop/models.json>`__). To
+            enable it, edit the default run behavior in the ``models.json``
+            configuration before running inference -- update the model's run
+            ``args`` by changing ``--tunableop off`` to ``--tunableop on``.
+
+            Enabling TunableOp triggers a two-pass run -- a warm-up followed by the performance-collection run.
+
+         {% endif %}
 
       .. tab-item:: Standalone benchmarking
 
         Run the vLLM benchmark tool independently by starting the
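To make the two-pass behavior described in the note concrete, here is a minimal sketch of what ``--tunableop on`` amounts to at the PyTorch level, using TunableOp's documented environment variables. ``benchmark.py`` is a placeholder for the actual MAD/vLLM entrypoint, which this commit does not show.

    # Hedged sketch: TunableOp is controlled via environment variables that
    # must be set before the workload starts. Variable names are from
    # PyTorch's TunableOp documentation.
    import os
    import subprocess

    env = dict(os.environ)
    env["PYTORCH_TUNABLEOP_ENABLED"] = "1"    # turn TunableOp on
    env["PYTORCH_TUNABLEOP_TUNING"] = "1"     # pass 1: explore kernels, record results
    env["PYTORCH_TUNABLEOP_FILENAME"] = "tunableop_results.csv"

    # Warm-up pass: tune the relevant operators and write the results file.
    subprocess.run(["python", "benchmark.py"], env=env, check=True)

    # Performance-collection pass: reuse the tuned kernels without re-tuning.
    env["PYTORCH_TUNABLEOP_TUNING"] = "0"
    subprocess.run(["python", "benchmark.py"], env=env, check=True)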