101 Commits
v0.9.0 ... 310

Author SHA1 Message Date
Mahmoud Ashraf
ba812f55a2 Fix quotes for Python version in CI workflow 2025-10-30 21:14:30 +03:00
Mahmoud Ashraf
44466c7535 Upgrade Python version from 3.9 to 3.10 in CI 2025-10-30 21:12:36 +03:00
Mahmoud Ashraf
e3e46675b2 Update Python version requirements to 3.10 and 3.12 2025-10-30 21:11:50 +03:00
Mahmoud Ashraf
14ad587c98 Update Python version requirement to 3.10 or greater 2025-10-30 21:11:07 +03:00
Purfview
9090997d25 Fix a typo (#1377) 2025-10-22 15:51:56 +03:00
Mahmoud Ashraf
dea24cbcc6 Upgrade to Silero-VAD V6 (#1373)
Co-authored-by: sssshhhhhh <193317444+sssshhhhhh@users.noreply.github.com>
2025-10-14 15:29:56 +03:00
Mario
14ba1051f3 Fix: add <|nocaptions|> to suppressed tokens (#1338)
* Fix: Prevent <|nocaptions|> tokens in BatchedInferencePipeline

- Add nocaptions component tokens [1771, 496, 9799] to suppress_tokens list
- Add segment filtering to remove any remaining <|nocaptions|> segments
- Resolves issue where BatchedInferencePipeline would generate malformed
  special tokens during periods of silence or low-confidence transcription
- Includes comprehensive tests to verify the fix

The issue occurred because while bracket tokens ('<', '|', '>') were
already suppressed, the content tokens ('no', 'ca', 'ptions') were not,
leading to partial token generation that formed complete <|nocaptions|>
tags in the output.

Files changed:
- faster_whisper/transcribe.py: Core fix implementation
- test_nocaptions_comprehensive.py: Comprehensive test suite
- tests/test_nocaptions_fix.py: Unit tests

* removed

* Fix: Prevent <|nocaptions|> tokens in BatchedInferencePipeline

* Fix: Implement proper <|nocaptions|> token suppression using single token approach

* ci: trigger tests

* fix: remove trailing whitespace from blank lines

* Update faster_whisper/transcribe.py

Co-authored-by: Mahmoud Ashraf <hassouna97.ma@gmail.com>

* Update faster_whisper/tokenizer.py

Co-authored-by: Mahmoud Ashraf <hassouna97.ma@gmail.com>

* Update faster_whisper/tokenizer.py

Co-authored-by: Mahmoud Ashraf <hassouna97.ma@gmail.com>

* Rename no_speech to no_captions in tokenizer

* nocaptions has been renamed to nospeech

* break line

* line break

* Refactor no_speech method for improved readability by adjusting line breaks

---------

Co-authored-by: Mahmoud Ashraf <hassouna97.ma@gmail.com>
2025-10-10 21:56:54 +03:00
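A minimal sketch of the kind of suppression this fix establishes, not the PR's actual implementation: it assumes `WhisperModel` exposes the underlying Hugging Face tokenizer as `model.hf_tokenizer` and that `transcribe` accepts extra token ids via `suppress_tokens` (with `-1` keeping the default non-speech list).

```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

# The special token is spelled <|nospeech|> in newer vocabularies and
# <|nocaptions|> in older ones; fall back from one to the other.
no_speech_id = (
    model.hf_tokenizer.token_to_id("<|nospeech|>")
    or model.hf_tokenizer.token_to_id("<|nocaptions|>")
)

# -1 keeps the default suppression list; adding the id of the complete special
# token prevents it from ever being sampled during decoding.
segments, info = model.transcribe(
    "audio.mp3",
    suppress_tokens=[-1, no_speech_id],
)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```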
Mahmoud Ashraf
c26d609974 only merge when clip_timestamps are not provided (#1345)
fixes #1340 and allows for batching multiple audio files less than 30s each
2025-08-16 14:30:50 +03:00
黑墨水鱼
4bd98d5c5b Update README.md to include whisper-fastapi (#1325) 2025-08-11 13:44:48 +03:00
Mahmoud Ashraf
93001a9438 bump version to 1.2.0 2025-08-06 03:31:36 +03:00
Mahmoud Ashraf
a0c3cb9802 Remove Silence in Batched transcription (#1297) 2025-08-06 03:30:59 +03:00
Mahmoud Ashraf
fbeb1ba731 get correct index for samples (#1336) 2025-08-06 03:17:45 +03:00
Rishil
d3bfd0a305 feat: Allow loading of private HF models (#1309)
* feat: add HuggingFace auth token support to model download

* Format
2025-06-02 14:12:34 +03:00
Mahmoud Ashraf
43d4163fe0 Support distil-large-v3.5 (#1311) 2025-06-02 14:09:20 +03:00
Felix Mosheev
700584b2e6 feat: allow passing specific revision to download (#1292) 2025-04-30 00:55:48 +03:00
David Jiménez
1383fd4d37 Update README.md with speaches instead of faster-whisper-server (#1267)
The project was previously named faster-whisper-server; it was renamed to speaches as it has evolved to support more than just ASR.
2025-03-20 17:20:26 +03:00
Mahmoud Ashraf
9e657b47cb Bump version to 1.1.1 2025-01-01 17:44:54 +03:00
Purfview
11fd8ab301 Fix neg_threshold (#1191) 2024-12-29 14:38:58 +03:00
Dragoș Bălan
95164297ff Add duration of audio and VAD removed duration to BatchedInferencePipeline (#1186)
Co-authored-by: MahmoudAshraf97 <hassouna97.ma@gmail.com>
2024-12-23 17:23:40 +02:00
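A short sketch of how the added fields would be read, assuming the returned info object exposes them as `duration` and `duration_after_vad`:

```python
from faster_whisper import WhisperModel, BatchedInferencePipeline

model = WhisperModel("small")
batched_model = BatchedInferencePipeline(model=model)

segments, info = batched_model.transcribe("audio.mp3", batch_size=8, vad_filter=True)

# Total length of the input audio vs. the portion left after VAD removed silence.
print("duration: %.1fs, after VAD: %.1fs" % (info.duration, info.duration_after_vad))
```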
Purfview
1b24f284c9 Reduce VAD memory usage (#1198)
Co-authored-by: Mahmoud Ashraf <hassouna97.ma@gmail.com>
2024-12-12 15:23:30 +03:00
Jordi Mas
b568faec40 Add Open-dubbing into community projects (#1034)
* Add Open-dubbing into community projects

* Update URL
2024-12-12 13:36:04 +03:00
Purfview
f32c0e8af3 Make batched suppress_tokens behaviour same as in sequential (#1194) 2024-12-11 14:51:38 +03:00
Purfview
8327d8cc64 Brings back original VAD parameters naming (#1181) 2024-12-01 20:41:53 +03:00
Mahmoud Ashraf
22a5238b56 Upgrade CI to 3.9 and drop Python 3.8 support (#1184) 2024-12-01 20:38:27 +03:00
Mahmoud Ashraf
97a4785fa1 Bump version to 1.1.0 and update benchmarks (#1161)
* update version

* Update CPU benchmarks

* Updated GPU benchmarks

* ..

* more gpu benchmarks
2024-11-21 19:22:01 +03:00
Mahmoud Ashraf
08f6900217 remove log_prob_low_threshold (#1160) 2024-11-21 00:03:21 +03:00
Mahmoud Ashraf
9c8ef76c98 use jiwer instead of evaluate in benchmarks (#1159) 2024-11-20 23:51:55 +03:00
Mahmoud Ashraf
491852e1b9 Add new tests (#1158) 2024-11-20 14:50:57 +03:00
Mahmoud Ashraf
f830c6f241 Fix list index out of range in word timestamps (#1157) 2024-11-20 13:36:58 +03:00
Mahmoud Ashraf
bcd8ce0fc7 refactor multilingual option (#1148)
* Added test for `multilingual` option with english-german audio
* removed `output_language` argument as it is redundant; you can get the same functionality with `task="translate"`
* use the correct `encoder_output` for language detection in sequential transcription
* enabled `multilingual` functionality for batched inference
2024-11-20 00:14:59 +03:00
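A hedged sketch of the refactored options, assuming `transcribe` accepts a `multilingual` flag and that `task="translate"` now covers what `output_language` used to do:

```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

# Transcribe a mixed English-German recording, re-detecting the language per segment.
segments, info = model.transcribe("english_german.mp3", multilingual=True)

# Or translate everything to English, replacing the removed output_language="en".
segments, info = model.transcribe("english_german.mp3", task="translate")
```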
Mahmoud Ashraf
be9fb36ed3 Cleanup of BatchedInferencePipeline (#1135) 2024-11-17 16:45:32 +03:00
Mahmoud Ashraf
a6f8fbae00 Refactor of language detection functions (#1146)
* Supported new options for batched transcriptions:
  * `language_detection_threshold`
  * `language_detection_segments`
* Updated `WhisperModel.detect_language` function to include the improved language detection from #732 and added docstrings; it's now used inside the `transcribe` function.
* Removed the following functions as they are no longer needed:
  * `WhisperModel.detect_language_multi_segment` and its test
  * `BatchedInferencePipeline.get_language_and_tokenizer`
* Added tests for empty audios
2024-11-16 13:53:07 +03:00
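A hedged sketch of the new language-detection options in batched mode; the parameter names follow the bullet list above and the numeric values are illustrative only:

```python
from faster_whisper import WhisperModel, BatchedInferencePipeline

model = WhisperModel("large-v3", device="cuda", compute_type="float16")
batched_model = BatchedInferencePipeline(model=model)

segments, info = batched_model.transcribe(
    "audio.mp3",
    batch_size=16,
    language_detection_segments=4,     # inspect up to 4 windows of audio
    language_detection_threshold=0.7,  # accept a language once it reaches 70% probability
)
print(info.language, info.language_probability)
```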
黑墨水鱼
53bbe54016 fix: Use correct seek value in output, fix word timestamps when the initial timestamp is not zero (#1141)
Co-authored-by: Mahmoud Ashraf <hassouna97.ma@gmail.com>
2024-11-15 14:57:38 +03:00
Mahmoud Ashraf
85e61ea111 Add progress bar to WhisperModel.transcribe (#1138) 2024-11-14 17:12:39 +03:00
Mahmoud Ashraf
3e0ba86571 Remove torch dependency, Faster numpy Feature extraction (#1106) 2024-11-14 12:57:10 +03:00
Mahmoud Ashraf
8f01aee36b Update WhisperModel documentation to list all available models (#1137) 2024-11-13 19:26:01 +03:00
Mahmoud Ashraf
c2bf036234 change language_detection_threshold default value (#1134) 2024-11-13 17:07:46 +03:00
Mahmoud Ashraf
fb65cd387f Update cuda instructions in readme (#1125)
* Update README.md

* Update README.md

* Update version.py

* Update README.md

* Update README.md

* Update README.md
2024-11-12 15:51:26 +03:00
Mahmoud Ashraf
203dddb047 replace NamedTuple with dataclass (#1105)
* replace `NamedTuple` with `dataclass`

* add deprecation warnings
2024-11-05 12:32:20 +03:00
Mahmoud Ashraf
814472fdbf Revert CPU default threads to 0
https://github.com/SYSTRAN/faster-whisper/pull/965#issuecomment-2448208010
2024-10-30 23:00:36 +03:00
Ozan Caglayan
f978fa2979 Revert CPU default threads to 4 (#965)
Co-authored-by: Mahmoud Ashraf <hassouna97.ma@gmail.com>
2024-10-30 16:50:49 +03:00
Mahmoud Ashraf
2386843fd7 Use correct features padding for encoder input (#1101)
* pad to 3000 instead of `feature_extractor.nb_max_frames`

* correct trimming for batched features
2024-10-29 17:58:05 +03:00
黑墨水鱼
c2a1da1bd9 typo: trubo -> turbo (#1092) 2024-10-26 00:28:16 +03:00
Mahmoud Ashraf
b2da05582c Add support for turbo model (#1090) 2024-10-25 15:50:23 +03:00
Mahmoud Ashraf
2dbca5e559 Use Silero VAD in Batched Mode (#936)
Replace Pyannote VAD with Silero to reduce code duplication and requirements
2024-10-24 12:05:25 +03:00
Mahmoud Ashraf
574e2563e7 Update Dockerfile to ensure compatibility with CT2==4.5.0 2024-10-23 18:28:27 +03:00
Mahmoud Ashraf
42b8681edb revert back to using PyAV instead of torchaudio (#961)
* revert back to using PyAV instead of torch audio

* Update audio.py
2024-10-23 15:26:18 +03:00
Mahmoud Ashraf
d57c5b40b0 Remove the usage of transformers.pipeline from BatchedInferencePipeline and fix word timestamps for batched inference (#921)
* fix word timestamps for batched inference

* remove hf pipeline
2024-07-27 09:02:58 +07:00
zh-plus
83a368e98a Make vad-related parameters configurable for batched inference. (#923) 2024-07-24 09:00:32 +07:00
Jilt Sebastian
eb8390233c New PR for Faster Whisper: Batching Support, Speed Boosts, and Quality Enhancements (#856)
Batching Support, Speed Boosts, and Quality Enhancements

---------

Co-authored-by: Hargun Mujral <83234565+hargunmujral@users.noreply.github.com>
Co-authored-by: MahmoudAshraf97 <hassouna97.ma@gmail.com>
2024-07-18 16:48:52 +07:00
trungkienbkhn
fbcf58bf98 Fix language detection with non-speech audio (#895) 2024-07-05 14:43:45 +07:00
Jordi Mas
1195359984 Filter out non_speech_tokens in suppressed tokens (#898)
* Filter out non_speech_tokens in suppressed tokens
2024-07-05 14:43:11 +07:00
trungkienbkhn
c22db5125d Bump version to 1.0.3 (#887) 2024-07-01 16:36:12 +07:00
ABen
8862bee1f8 Improve language detection when using clip_timestamps (#867) 2024-07-01 16:12:45 +07:00
Ki Hoon Kim
8d400e9870 Upgrade to Silero-Vad V5 (#884)
* Fix window_size_samples to 512

* Update SileroVADModel

* Replace ONNX file with V5 version
2024-07-01 15:40:37 +07:00
Fedir Zadniprovskyi
bced5f04c0 docs: add 'faster-whisper-server' community integration (#861)
Co-authored-by: Fedir Zadniprovskyi <github.g1k56@simplelogin.com>
2024-06-05 22:27:41 +07:00
Fedir Zadniprovskyi
65551c081f Docker file improvements (#848)
Docker file improvements

Co-authored-by: Fedir Zadniprovskyi <github.g1k56@simplelogin.com>
2024-05-20 09:13:19 +07:00
Napuh
f53be1e811 Add distil models to WhisperModel init and download_model docstrings (#847)
* chore: add distil models to WhisperModel init docstring and download_model docstring
2024-05-20 08:51:22 +07:00
Natanael Tan
4acdb5c619 Fix #839 incorrect clip_timestamps being used in model (#842)
* Fix #839

Changed the code so it updates the options object instead of the TranscriptionOptions class, which was likely the cause of the unexpected behaviour
2024-05-17 16:35:07 +07:00
Peter Krantz
a1c3583c96 Update README.md (#841)
Spelling correction for copy/pasters
2024-05-17 15:24:47 +07:00
trungkienbkhn
2036d12634 Add Dockerfile example (#828) 2024-05-13 16:33:09 +07:00
trungkienbkhn
2f6913efc8 Bump version to 1.0.2 (#816) 2024-05-06 09:02:54 +07:00
ddorian
e11d58599d Allow av to include version 12. (#819) 2024-05-06 08:57:35 +07:00
Keating Reid
49a80eb8a8 Clarify documentation for hotwords (#817)
* Clarify documentation for hotwords

* Remove redundant type specifications
2024-05-06 08:52:59 +07:00
trungkienbkhn
8d5e6d56d9 Support initializing more whisper model args (#807) 2024-05-04 15:12:59 +07:00
trungkienbkhn
6eec07739e Add benchmarking logic for memory, wer and speed (#773) 2024-05-04 15:12:43 +07:00
jax
847fec4492 Feature/add hotwords (#731)
* add hotword params

---------

Co-authored-by: jax <jax_builder@gamil.com>
2024-05-04 15:11:52 +07:00
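A hedged sketch of the `hotwords` option added here: a free-form string of hint phrases passed to `transcribe` (documented as having no effect when `prefix` is set); the audio path and phrases are placeholders.

```python
from faster_whisper import WhisperModel

model = WhisperModel("small")
segments, _ = model.transcribe(
    "meeting.mp3",                                   # placeholder audio path
    hotwords="CTranslate2 faster-whisper SYSTRAN",   # placeholder hint phrases
)
for segment in segments:
    print(segment.text)
```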
Keating Reid
46080e584e Loosening tokenizers version constraint (#804) 2024-05-04 15:10:24 +07:00
Sidharth Rajaram
3d1de60ef3 CUDA version and updated installation instructions (#785)
* CUDA version note and updated instructions in README

* ctranslate2 downgrade note, cuDNN v9 consideration

* clearer note on cuDNN v9 package
2024-05-04 15:09:59 +07:00
otakutyrant
91c8307aa6 make faster_whisper.assets as a valid python package to distribute (#772) (#774) 2024-04-02 18:22:22 +02:00
Purfview
b024972a56 Foolproof: Disable VAD if clip_timestamps is in use (#769)
* Foolproof: Disable VAD if clip_timestamps is in use

Prevent silly things from happening.
2024-04-02 18:20:34 +02:00
Purfview
8ae82c8372 Bugfix: code breaks if audio is empty (#768)
* Bugfix: code breaks if audio is empty

Regression since https://github.com/SYSTRAN/faster-whisper/pull/732 PR
2024-04-02 18:18:12 +02:00
trungkienbkhn
e0c3a9ed34 Update project github link to SYSTRAN (#746) 2024-03-27 08:31:17 +01:00
Sanchit Gandhi
a67e0e47ae Add support for distil-large-v3 (#755)
* add distil-large-v3

* Update README.md

* use fp16 weights from Systran
2024-03-26 14:58:39 +01:00
trungkienbkhn
1eb9a8004c Improve language detection (#732) 2024-03-12 15:44:49 +01:00
trungkienbkhn
a342b028b7 Bump version to 1.0.1 (#725) 2024-03-01 11:32:12 +01:00
Purfview
5090cc9d0d Fix window end heuristic for hallucination_silence_threshold (#706)
Removes the wishful heuristic causing more issues than it's fixing.

Same as https://github.com/openai/whisper/pull/2043

Example of the issue: https://github.com/openai/whisper/pull/1838#issuecomment-1960041500
2024-02-29 17:59:32 +01:00
Gabriel F
09cd57e7f3 Fix typo 'ditil' (#721) 2024-02-29 17:08:58 +01:00
trungkienbkhn
16141e65d9 Add pad_or_trim function to handle segment before encoding (#705) 2024-02-29 17:08:28 +01:00
trungkienbkhn
06d32bf0c1 Bump version to 1.0.0 (#696) 2024-02-22 09:49:01 +01:00
Purfview
30d6043e90 Prevent infinite loop for out-of-bound timestamps in clip_timestamps (#697)
Same as https://github.com/openai/whisper/pull/2005
2024-02-22 09:48:35 +01:00
BBC-Esq
22c75d0cc3 Update README.md (#672)
Add Faster-Whisper-Transcriber to community integrations.
2024-02-21 10:18:11 +01:00
trungkienbkhn
092067208b Add clip_timestamps and hallucination_silence_threshold options (#646) 2024-02-20 17:34:54 +01:00
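A hedged sketch of the two options this commit adds, mirroring their openai/whisper counterparts; the clip ranges and threshold are illustrative:

```python
from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")
segments, _ = model.transcribe(
    "audio.mp3",
    clip_timestamps="10,90,120,180",      # only transcribe 10-90s and 120-180s
    word_timestamps=True,
    hallucination_silence_threshold=2.0,  # skip silent gaps longer than 2s when a hallucination is suspected
)
```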
Jordi Mas
6ffcbdfbc2 Fix typos in README.md (#668) 2024-02-20 17:33:17 +01:00
Purfview
52695567c9 Bumps up PyAV version to support Python 3.12.x (#679) 2024-02-20 17:31:07 +01:00
IlianP
c6b28ed3a0 Update README.md (#685)
I'm surprised that WhisperX hasn't made it into this list yet, as it has more stars than faster-whisper itself 🚀
2024-02-20 17:28:00 +01:00
trungkienbkhn
4ab646035f Upgrade ctranslate2 version to support CUDA 12 (#694) 2024-02-20 17:26:55 +01:00
Purfview
f144e4c83d Expands the note for distil-whisper (#659) 2024-01-28 21:48:40 +01:00
Purfview
3aec421849 Add: More clarity of what "max_new_tokens" does (#658)
* Add: More clarity of what "max_new_tokens" does
2024-01-28 21:40:33 +01:00
Dominik Macháček
64b9f244bd Whisper-Streaming mention (#656)
under community integrations
2024-01-25 18:27:27 +01:00
Purfview
00efce1696 Bugfix: Illogical "Avoid computing higher temperatures on no_speech" (#652) 2024-01-24 11:54:43 +01:00
metame
ad3c83045b support distil-whisper (#557) 2024-01-24 10:17:12 +01:00
Jürgen Fleiß
72ff979a2e Add GUI faster-whisper project README.md (#554)
Added the aTrain GUI faster-whisper transcription and diarization tool as a community project.

Co-authored-by: JuergenFleiss <118339672+Juergen-J-F@users.noreply.github.com>
2024-01-18 13:01:02 +01:00
makaveli
615de0d2d9 add WhisperLive to community integration (#647) 2024-01-18 12:54:14 +01:00
Purfview
44f7e58947 Update whisper-standalone-win description in README.md (#508)
* Update whisper-standalone-win description in README.md
2023-12-14 13:03:46 +01:00
Purfview
ebcfd6b964 Fix broken prompt_reset_on_temperature (#604)
* Fix broken prompt_reset_on_temperature

Fixing: https://github.com/SYSTRAN/faster-whisper/issues/603

Broken because `generate_with_fallback()` doesn't return final temperature.

Regression since PR356 -> https://github.com/SYSTRAN/faster-whisper/pull/356
2023-12-13 13:14:39 +01:00
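A hedged sketch of the option being repaired: once a fallback decode reaches this temperature, the previously generated text is no longer used as a prompt for the next window; the values shown are assumed to be the library defaults.

```python
from faster_whisper import WhisperModel

model = WhisperModel("small")
segments, _ = model.transcribe(
    "audio.mp3",
    condition_on_previous_text=True,
    temperature=[0.0, 0.2, 0.4, 0.6, 0.8, 1.0],
    prompt_reset_on_temperature=0.5,  # drop the prompt once fallback temperature reaches 0.5
)
```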
trungkienbkhn
19329a3611 Word timing tweaks (#616) 2023-12-13 12:38:44 +01:00
Purfview
65094b779e Update info on cuBLAS and cuDNN libs in README.md (#513) 2023-11-27 12:12:47 +01:00
Clayton Yochum
9641d5f56a Force read-mode in av.open (#566)
The `av.open` function checks input metadata to determine the mode to open with ("r" or "w"). Without this change, an input to `decode_audio` that is detected as write-mode can't be read; forcing read mode solves this.
2023-11-27 10:43:35 +01:00
Dang Chuan Nguyen
e1a218fab1 Bump version to 0.10.0 2023-11-24 23:19:47 +01:00
Oscaarjs
3084409633 Add V3 Support (#578)
* Add V3 Support

* update conversion example

---------

Co-authored-by: oscaarjs <oscar.johansson@conversy.se>
2023-11-24 23:16:12 +01:00
35 changed files with 4084 additions and 506 deletions

View File

@@ -15,12 +15,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set up Python 3.8
uses: actions/setup-python@v4
- name: Set up Python 3.10
uses: actions/setup-python@v5
with:
python-version: 3.8
python-version: '3.10'
- name: Install module
run: |
@@ -45,12 +45,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set up Python 3.8
uses: actions/setup-python@v4
- name: Set up Python 3.10
uses: actions/setup-python@v5
with:
python-version: 3.8
python-version: '3.10'
- name: Install module
run: |
@@ -67,12 +67,12 @@ jobs:
needs: [check-code-format, run-tests]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set up Python 3.8
uses: actions/setup-python@v4
- name: Set up Python 3.10
uses: actions/setup-python@v5
with:
python-version: 3.8
python-version: '3.10'
- name: Install dependencies
run: |

View File

@@ -7,7 +7,7 @@ Contributions are welcome! Here are some pointers to help you install the librar
We recommend installing the module in editable mode with the `dev` extra requirements:
```bash
git clone https://github.com/guillaumekln/faster-whisper.git
git clone https://github.com/SYSTRAN/faster-whisper.git
cd faster-whisper/
pip install -e .[dev]
```

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2023 Guillaume Klein
Copyright (c) 2023 SYSTRAN
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -1,3 +1,3 @@
include faster_whisper/assets/silero_vad.onnx
include faster_whisper/assets/silero_vad_v6.onnx
include requirements.txt
include requirements.conversion.txt

README.md (144 lines changed)

@@ -1,4 +1,4 @@
[![CI](https://github.com/guillaumekln/faster-whisper/workflows/CI/badge.svg)](https://github.com/guillaumekln/faster-whisper/actions?query=workflow%3ACI) [![PyPI version](https://badge.fury.io/py/faster-whisper.svg)](https://badge.fury.io/py/faster-whisper)
[![CI](https://github.com/SYSTRAN/faster-whisper/workflows/CI/badge.svg)](https://github.com/SYSTRAN/faster-whisper/actions?query=workflow%3ACI) [![PyPI version](https://badge.fury.io/py/faster-whisper.svg)](https://badge.fury.io/py/faster-whisper)
# Faster Whisper transcription with CTranslate2
@@ -8,37 +8,55 @@ This implementation is up to 4 times faster than [openai/whisper](https://github
## Benchmark
### Whisper
For reference, here's the time and memory usage that are required to transcribe [**13 minutes**](https://www.youtube.com/watch?v=0u7tTptBo9I) of audio using different implementations:
* [openai/whisper](https://github.com/openai/whisper)@[6dea21fd](https://github.com/openai/whisper/commit/6dea21fd7f7253bfe450f1e2512a0fe47ee2d258)
* [whisper.cpp](https://github.com/ggerganov/whisper.cpp)@[3b010f9](https://github.com/ggerganov/whisper.cpp/commit/3b010f9bed9a6068609e9faf52383aea792b0362)
* [faster-whisper](https://github.com/guillaumekln/faster-whisper)@[cce6b53e](https://github.com/guillaumekln/faster-whisper/commit/cce6b53e4554f71172dad188c45f10fb100f6e3e)
* [openai/whisper](https://github.com/openai/whisper)@[v20240930](https://github.com/openai/whisper/tree/v20240930)
* [whisper.cpp](https://github.com/ggerganov/whisper.cpp)@[v1.7.2](https://github.com/ggerganov/whisper.cpp/tree/v1.7.2)
* [transformers](https://github.com/huggingface/transformers)@[v4.46.3](https://github.com/huggingface/transformers/tree/v4.46.3)
* [faster-whisper](https://github.com/SYSTRAN/faster-whisper)@[v1.1.0](https://github.com/SYSTRAN/faster-whisper/tree/v1.1.0)
### Large-v2 model on GPU
| Implementation | Precision | Beam size | Time | Max. GPU memory | Max. CPU memory |
| --- | --- | --- | --- | --- | --- |
| openai/whisper | fp16 | 5 | 4m30s | 11325MB | 9439MB |
| faster-whisper | fp16 | 5 | 54s | 4755MB | 3244MB |
| faster-whisper | int8 | 5 | 59s | 3091MB | 3117MB |
| Implementation | Precision | Beam size | Time | VRAM Usage |
| --- | --- | --- | --- | --- |
| openai/whisper | fp16 | 5 | 2m23s | 4708MB |
| whisper.cpp (Flash Attention) | fp16 | 5 | 1m05s | 4127MB |
| transformers (SDPA)[^1] | fp16 | 5 | 1m52s | 4960MB |
| faster-whisper | fp16 | 5 | 1m03s | 4525MB |
| faster-whisper (`batch_size=8`) | fp16 | 5 | 17s | 6090MB |
| faster-whisper | int8 | 5 | 59s | 2926MB |
| faster-whisper (`batch_size=8`) | int8 | 5 | 16s | 4500MB |
*Executed with CUDA 11.7.1 on a NVIDIA Tesla V100S.*
### distil-whisper-large-v3 model on GPU
| Implementation | Precision | Beam size | Time | YT Commons WER |
| --- | --- | --- | --- | --- |
| transformers (SDPA) (`batch_size=16`) | fp16 | 5 | 46m12s | 14.801 |
| faster-whisper (`batch_size=16`) | fp16 | 5 | 25m50s | 13.527 |
*GPU Benchmarks are Executed with CUDA 12.4 on a NVIDIA RTX 3070 Ti 8GB.*
[^1]: transformers OOM for any batch size > 1
### Small model on CPU
| Implementation | Precision | Beam size | Time | Max. memory |
| Implementation | Precision | Beam size | Time | RAM Usage |
| --- | --- | --- | --- | --- |
| openai/whisper | fp32 | 5 | 10m31s | 3101MB |
| whisper.cpp | fp32 | 5 | 17m42s | 1581MB |
| whisper.cpp | fp16 | 5 | 12m39s | 873MB |
| faster-whisper | fp32 | 5 | 2m44s | 1675MB |
| faster-whisper | int8 | 5 | 2m04s | 995MB |
| openai/whisper | fp32 | 5 | 6m58s | 2335MB |
| whisper.cpp | fp32 | 5 | 2m05s | 1049MB |
| whisper.cpp (OpenVINO) | fp32 | 5 | 1m45s | 1642MB |
| faster-whisper | fp32 | 5 | 2m37s | 2257MB |
| faster-whisper (`batch_size=8`) | fp32 | 5 | 1m06s | 4230MB |
| faster-whisper | int8 | 5 | 1m42s | 1477MB |
| faster-whisper (`batch_size=8`) | int8 | 5 | 51s | 3608MB |
*Executed with 8 threads on an Intel Core i7-12700K.*
*Executed with 8 threads on a Intel(R) Xeon(R) Gold 6226R.*
## Requirements
* Python 3.8 or greater
* Python 3.10 or greater
Unlike openai-whisper, FFmpeg does **not** need to be installed on the system. The audio is decoded with the Python library [PyAV](https://github.com/PyAV-Org/PyAV) which bundles the FFmpeg libraries in its package.
@@ -46,31 +64,36 @@ Unlike openai-whisper, FFmpeg does **not** need to be installed on the system. T
GPU execution requires the following NVIDIA libraries to be installed:
* [cuBLAS for CUDA 11](https://developer.nvidia.com/cublas)
* [cuDNN 8 for CUDA 11](https://developer.nvidia.com/cudnn)
* [cuBLAS for CUDA 12](https://developer.nvidia.com/cublas)
* [cuDNN 9 for CUDA 12](https://developer.nvidia.com/cudnn)
There are multiple ways to install these libraries. The recommended way is described in the official NVIDIA documentation, but we also suggest other installation methods below.
**Note**: The latest versions of `ctranslate2` only support CUDA 12 and cuDNN 9. For CUDA 11 and cuDNN 8, the current workaround is downgrading to the `3.24.0` version of `ctranslate2`, for CUDA 12 and cuDNN 8, downgrade to the `4.4.0` version of `ctranslate2`, (This can be done with `pip install --force-reinstall ctranslate2==4.4.0` or specifying the version in a `requirements.txt`).
There are multiple ways to install the NVIDIA libraries mentioned above. The recommended way is described in the official NVIDIA documentation, but we also suggest other installation methods below.
<details>
<summary>Other installation methods (click to expand)</summary>
**Note:** For all these methods below, keep in mind the above note regarding CUDA versions. Depending on your setup, you may need to install the _CUDA 11_ versions of libraries that correspond to the CUDA 12 libraries listed in the instructions below.
#### Use Docker
The libraries are installed in this official NVIDIA Docker image: `nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04`.
The libraries (cuBLAS, cuDNN) are installed in this official NVIDIA CUDA Docker images: `nvidia/cuda:12.3.2-cudnn9-runtime-ubuntu22.04`.
#### Install with `pip` (Linux only)
On Linux these libraries can be installed with `pip`. Note that `LD_LIBRARY_PATH` must be set before launching Python.
```bash
pip install nvidia-cublas-cu11 nvidia-cudnn-cu11
pip install nvidia-cublas-cu12 nvidia-cudnn-cu12==9.*
export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
```
#### Download the libraries from Purfview's repository (Windows only)
#### Download the libraries from Purfview's repository (Windows & Linux)
Purfview's [whisper-standalone-win](https://github.com/Purfview/whisper-standalone-win) provides the required NVIDIA libraries for Windows in a [single archive](https://github.com/Purfview/whisper-standalone-win/releases/tag/libs). Decompress the archive and place the libraries in a directory included in the `PATH`.
Purfview's [whisper-standalone-win](https://github.com/Purfview/whisper-standalone-win) provides the required NVIDIA libraries for Windows & Linux in a [single archive](https://github.com/Purfview/whisper-standalone-win/releases/tag/libs). Decompress the archive and place the libraries in a directory included in the `PATH`.
</details>
@@ -88,23 +111,25 @@ pip install faster-whisper
### Install the master branch
```bash
pip install --force-reinstall "faster-whisper @ https://github.com/guillaumekln/faster-whisper/archive/refs/heads/master.tar.gz"
pip install --force-reinstall "faster-whisper @ https://github.com/SYSTRAN/faster-whisper/archive/refs/heads/master.tar.gz"
```
### Install a specific commit
```bash
pip install --force-reinstall "faster-whisper @ https://github.com/guillaumekln/faster-whisper/archive/a4f1cc8f11433e454c3934442b5e1a4ed5e865c3.tar.gz"
pip install --force-reinstall "faster-whisper @ https://github.com/SYSTRAN/faster-whisper/archive/a4f1cc8f11433e454c3934442b5e1a4ed5e865c3.tar.gz"
```
</details>
## Usage
### Faster-whisper
```python
from faster_whisper import WhisperModel
model_size = "large-v2"
model_size = "large-v3"
# Run on GPU with FP16
model = WhisperModel(model_size, device="cuda", compute_type="float16")
@@ -129,6 +154,40 @@ segments, _ = model.transcribe("audio.mp3")
segments = list(segments) # The transcription will actually run here.
```
### Batched Transcription
The following code snippet illustrates how to run batched transcription on an example audio file. `BatchedInferencePipeline.transcribe` is a drop-in replacement for `WhisperModel.transcribe`
```python
from faster_whisper import WhisperModel, BatchedInferencePipeline
model = WhisperModel("turbo", device="cuda", compute_type="float16")
batched_model = BatchedInferencePipeline(model=model)
segments, info = batched_model.transcribe("audio.mp3", batch_size=16)
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
### Faster Distil-Whisper
The Distil-Whisper checkpoints are compatible with the Faster-Whisper package. In particular, the latest [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3)
checkpoint is intrinsically designed to work with the Faster-Whisper transcription algorithm. The following code snippet
demonstrates how to run inference with distil-large-v3 on a specified audio file:
```python
from faster_whisper import WhisperModel
model_size = "distil-large-v3"
model = WhisperModel(model_size, device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5, language="en", condition_on_previous_text=False)
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
For more information about the distil-large-v3 model, refer to the original [model card](https://huggingface.co/distil-whisper/distil-large-v3).
### Word-level timestamps
```python
@@ -147,7 +206,7 @@ The library integrates the [Silero VAD](https://github.com/snakers4/silero-vad)
segments, _ = model.transcribe("audio.mp3", vad_filter=True)
```
The default behavior is conservative and only removes silence longer than 2 seconds. See the available VAD parameters and default values in the [source code](https://github.com/guillaumekln/faster-whisper/blob/master/faster_whisper/vad.py). They can be customized with the dictionary argument `vad_parameters`:
The default behavior is conservative and only removes silence longer than 2 seconds. See the available VAD parameters and default values in the [source code](https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/vad.py). They can be customized with the dictionary argument `vad_parameters`:
```python
segments, _ = model.transcribe(
@@ -156,6 +215,7 @@ segments, _ = model.transcribe(
vad_parameters=dict(min_silence_duration_ms=500),
)
```
Vad filter is enabled by default for batched transcription.
### Logging
@@ -170,32 +230,41 @@ logging.getLogger("faster_whisper").setLevel(logging.DEBUG)
### Going further
See more model and transcription options in the [`WhisperModel`](https://github.com/guillaumekln/faster-whisper/blob/master/faster_whisper/transcribe.py) class implementation.
See more model and transcription options in the [`WhisperModel`](https://github.com/SYSTRAN/faster-whisper/blob/master/faster_whisper/transcribe.py) class implementation.
## Community integrations
Here is a non exhaustive list of open-source projects using faster-whisper. Feel free to add your project to the list!
* [speaches](https://github.com/speaches-ai/speaches) is an OpenAI compatible server using `faster-whisper`. It's easily deployable with Docker, works with OpenAI SDKs/CLI, supports streaming, and live transcription.
* [WhisperX](https://github.com/m-bain/whisperX) is an award-winning Python library that offers speaker diarization and accurate word-level timestamps using wav2vec2 alignment
* [whisper-ctranslate2](https://github.com/Softcatala/whisper-ctranslate2) is a command line client based on faster-whisper and compatible with the original client from openai/whisper.
* [whisper-diarize](https://github.com/MahmoudAshraf97/whisper-diarization) is a speaker diarization tool that is based on faster-whisper and NVIDIA NeMo.
* [whisper-standalone-win](https://github.com/Purfview/whisper-standalone-win) contains the portable ready to run binaries of faster-whisper for Windows.
* [whisper-standalone-win](https://github.com/Purfview/whisper-standalone-win) Standalone CLI executables of faster-whisper for Windows, Linux & macOS.
* [asr-sd-pipeline](https://github.com/hedrergudene/asr-sd-pipeline) provides a scalable, modular, end to end multi-speaker speech to text solution implemented using AzureML pipelines.
* [Open-Lyrics](https://github.com/zh-plus/Open-Lyrics) is a Python library that transcribes voice files using faster-whisper, and translates/polishes the resulting text into `.lrc` files in the desired language using OpenAI-GPT.
* [wscribe](https://github.com/geekodour/wscribe) is a flexible transcript generation tool supporting faster-whisper, it can export word level transcript and the exported transcript then can be edited with [wscribe-editor](https://github.com/geekodour/wscribe-editor)
* [aTrain](https://github.com/BANDAS-Center/aTrain) is a graphical user interface implementation of faster-whisper developed at the BANDAS-Center at the University of Graz for transcription and diarization in Windows ([Windows Store App](https://apps.microsoft.com/detail/atrain/9N15Q44SZNS2)) and Linux.
* [Whisper-Streaming](https://github.com/ufal/whisper_streaming) implements real-time mode for offline Whisper-like speech-to-text models with faster-whisper as the most recommended back-end. It implements a streaming policy with self-adaptive latency based on the actual source complexity, and demonstrates the state of the art.
* [WhisperLive](https://github.com/collabora/WhisperLive) is a nearly-live implementation of OpenAI's Whisper which uses faster-whisper as the backend to transcribe audio in real-time.
* [Faster-Whisper-Transcriber](https://github.com/BBC-Esq/ctranslate2-faster-whisper-transcriber) is a simple but reliable voice transcriber that provides a user-friendly interface.
* [Open-dubbing](https://github.com/softcatala/open-dubbing) is an AI dubbing system which uses machine learning models to automatically translate and synchronize audio dialogue into different languages.
* [Whisper-FastAPI](https://github.com/heimoshuiyu/whisper-fastapi) whisper-fastapi is a very simple script that provides an API backend compatible with OpenAI, HomeAssistant, and Konele (Android voice typing) formats.
## Model conversion
When loading a model from its size such as `WhisperModel("large-v2")`, the correspondig CTranslate2 model is automatically downloaded from the [Hugging Face Hub](https://huggingface.co/guillaumekln).
When loading a model from its size such as `WhisperModel("large-v3")`, the corresponding CTranslate2 model is automatically downloaded from the [Hugging Face Hub](https://huggingface.co/Systran).
We also provide a script to convert any Whisper models compatible with the Transformers library. They could be the original OpenAI models or user fine-tuned models.
For example the command below converts the [original "large-v2" Whisper model](https://huggingface.co/openai/whisper-large-v2) and saves the weights in FP16:
For example the command below converts the [original "large-v3" Whisper model](https://huggingface.co/openai/whisper-large-v3) and saves the weights in FP16:
```bash
pip install transformers[torch]>=4.23
ct2-transformers-converter --model openai/whisper-large-v2 --output_dir whisper-large-v2-ct2 \
--copy_files tokenizer.json --quantization float16
ct2-transformers-converter --model openai/whisper-large-v3 --output_dir whisper-large-v3-ct2
--copy_files tokenizer.json preprocessor_config.json --quantization float16
```
* The option `--model` accepts a model name on the Hub or a path to a model directory.
@@ -207,12 +276,12 @@ Models can also be converted from the code. See the [conversion API](https://ope
1. Directly load the model from a local directory:
```python
model = faster_whisper.WhisperModel("whisper-large-v2-ct2")
model = faster_whisper.WhisperModel("whisper-large-v3-ct2")
```
2. [Upload your model to the Hugging Face Hub](https://huggingface.co/docs/transformers/model_sharing#upload-with-the-web-interface) and load it from its name:
```python
model = faster_whisper.WhisperModel("username/whisper-large-v2-ct2")
model = faster_whisper.WhisperModel("username/whisper-large-v3-ct2")
```
## Comparing performance against other implementations
@@ -220,6 +289,7 @@ model = faster_whisper.WhisperModel("username/whisper-large-v2-ct2")
If you are comparing the performance against other Whisper implementations, you should make sure to run the comparison with similar settings. In particular:
* Verify that the same transcription options are used, especially the same beam size. For example in openai/whisper, `model.transcribe` uses a default beam size of 1 but here we use a default beam size of 5.
* Transcription speed is closely affected by the number of words in the transcript, so ensure that other implementations have a similar WER (Word Error Rate) to this one.
* When running on CPU, make sure to set the same number of threads. Many frameworks will read the environment variable `OMP_NUM_THREADS`, which can be set when running your script:
```bash

benchmark/benchmark.m4a (new binary file, not shown)

View File

@@ -0,0 +1,80 @@
import argparse
import json
import os
from io import BytesIO
from datasets import load_dataset
from jiwer import wer
from pytubefix import YouTube
from pytubefix.exceptions import VideoUnavailable
from tqdm import tqdm
from transformers.models.whisper.english_normalizer import EnglishTextNormalizer
from faster_whisper import BatchedInferencePipeline, WhisperModel, decode_audio
def url_to_audio(row):
buffer = BytesIO()
yt = YouTube(row["link"])
try:
video = (
yt.streams.filter(only_audio=True, mime_type="audio/mp4")
.order_by("bitrate")
.desc()
.last()
)
video.stream_to_buffer(buffer)
buffer.seek(0)
row["audio"] = decode_audio(buffer)
except VideoUnavailable:
print(f'Failed to download: {row["link"]}')
row["audio"] = []
return row
parser = argparse.ArgumentParser(description="WER benchmark")
parser.add_argument(
"--audio_numb",
type=int,
default=None,
help="Specify the number of validation audio files in the dataset."
" Set to None to retrieve all audio files.",
)
args = parser.parse_args()
with open(os.path.join(os.path.dirname(__file__), "normalizer.json"), "r") as f:
normalizer = EnglishTextNormalizer(json.load(f))
dataset = load_dataset("mobiuslabsgmbh/youtube-commons-asr-eval", streaming=True).map(
url_to_audio
)
model = WhisperModel("large-v3", device="cuda")
pipeline = BatchedInferencePipeline(model, device="cuda")
all_transcriptions = []
all_references = []
# iterate over the dataset and run inference
for i, row in tqdm(enumerate(dataset["test"]), desc="Evaluating..."):
if not row["audio"]:
continue
result, info = pipeline.transcribe(
row["audio"][0],
batch_size=8,
word_timestamps=False,
without_timestamps=True,
)
all_transcriptions.append("".join(segment.text for segment in result))
all_references.append(row["text"][0])
if args.audio_numb and i == (args.audio_numb - 1):
break
# normalize predictions and references
all_transcriptions = [normalizer(transcription) for transcription in all_transcriptions]
all_references = [normalizer(reference) for reference in all_references]
# compute the WER metric
word_error_rate = 100 * wer(hypothesis=all_transcriptions, reference=all_references)
print("WER: %.3f" % word_error_rate)

View File

@@ -0,0 +1,94 @@
import argparse
import time
from typing import Callable
import py3nvml.py3nvml as nvml
from memory_profiler import memory_usage
from utils import MyThread, get_logger, inference
logger = get_logger("faster-whisper")
parser = argparse.ArgumentParser(description="Memory benchmark")
parser.add_argument(
"--gpu_memory", action="store_true", help="Measure GPU memory usage"
)
parser.add_argument("--device-index", type=int, default=0, help="GPU device index")
parser.add_argument(
"--interval",
type=float,
default=0.5,
help="Interval at which measurements are collected",
)
args = parser.parse_args()
device_idx = args.device_index
interval = args.interval
def measure_memory(func: Callable[[], None]):
if args.gpu_memory:
logger.info(
"Measuring maximum GPU memory usage on GPU device."
" Make sure to not have additional processes running on the same GPU."
)
# init nvml
nvml.nvmlInit()
handle = nvml.nvmlDeviceGetHandleByIndex(device_idx)
gpu_name = nvml.nvmlDeviceGetName(handle)
gpu_memory_limit = nvml.nvmlDeviceGetMemoryInfo(handle).total >> 20
gpu_power_limit = nvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000.0
info = {"gpu_memory_usage": [], "gpu_power_usage": []}
def _get_gpu_info():
while True:
info["gpu_memory_usage"].append(
nvml.nvmlDeviceGetMemoryInfo(handle).used >> 20
)
info["gpu_power_usage"].append(
nvml.nvmlDeviceGetPowerUsage(handle) / 1000
)
time.sleep(interval)
if stop:
break
return info
stop = False
thread = MyThread(_get_gpu_info, params=())
thread.start()
func()
stop = True
thread.join()
result = thread.get_result()
# shutdown nvml
nvml.nvmlShutdown()
max_memory_usage = max(result["gpu_memory_usage"])
max_power_usage = max(result["gpu_power_usage"])
print("GPU name: %s" % gpu_name)
print("GPU device index: %s" % device_idx)
print(
"Maximum GPU memory usage: %dMiB / %dMiB (%.2f%%)"
% (
max_memory_usage,
gpu_memory_limit,
(max_memory_usage / gpu_memory_limit) * 100,
)
)
print(
"Maximum GPU power usage: %dW / %dW (%.2f%%)"
% (
max_power_usage,
gpu_power_limit,
(max_power_usage / gpu_power_limit) * 100,
)
)
else:
logger.info("Measuring maximum increase of memory usage.")
max_usage = memory_usage(func, max_usage=True, interval=interval)
print("Maximum increase of RAM memory usage: %d MiB" % max_usage)
if __name__ == "__main__":
measure_memory(inference)

benchmark/normalizer.json (new file, 1742 lines; diff suppressed because it is too large)

View File

@@ -0,0 +1,6 @@
transformers
jiwer
datasets
memory_profiler
py3nvml
pytubefix

View File

@@ -0,0 +1,31 @@
import argparse
import timeit
from typing import Callable
from utils import inference
parser = argparse.ArgumentParser(description="Speed benchmark")
parser.add_argument(
"--repeat",
type=int,
default=3,
help="Times an experiment will be run.",
)
args = parser.parse_args()
def measure_speed(func: Callable[[], None]):
# as written in https://docs.python.org/3/library/timeit.html#timeit.Timer.repeat,
# min should be taken rather than the average
runtimes = timeit.repeat(
func,
repeat=args.repeat,
number=10,
)
print(runtimes)
print("Min execution time: %.3fs" % (min(runtimes) / 10.0))
if __name__ == "__main__":
measure_speed(inference)

benchmark/utils.py (new file, 39 lines)

@@ -0,0 +1,39 @@
import logging
from threading import Thread
from typing import Optional
from faster_whisper import WhisperModel
model_path = "large-v3"
model = WhisperModel(model_path, device="cuda")
def inference():
segments, info = model.transcribe("benchmark.m4a", language="fr")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
def get_logger(name: Optional[str] = None) -> logging.Logger:
formatter = logging.Formatter("%(levelname)s: %(message)s")
logger = logging.getLogger(name)
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
return logger
class MyThread(Thread):
def __init__(self, func, params):
super(MyThread, self).__init__()
self.func = func
self.params = params
self.result = None
def run(self):
self.result = self.func(*self.params)
def get_result(self):
return self.result

View File

@@ -0,0 +1,59 @@
import argparse
import json
import os
from datasets import load_dataset
from jiwer import wer
from tqdm import tqdm
from transformers.models.whisper.english_normalizer import EnglishTextNormalizer
from faster_whisper import WhisperModel
parser = argparse.ArgumentParser(description="WER benchmark")
parser.add_argument(
"--audio_numb",
type=int,
default=None,
help="Specify the number of validation audio files in the dataset."
" Set to None to retrieve all audio files.",
)
args = parser.parse_args()
model_path = "large-v3"
model = WhisperModel(model_path, device="cuda")
# load the dataset with streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
with open(os.path.join(os.path.dirname(__file__), "normalizer.json"), "r") as f:
normalizer = EnglishTextNormalizer(json.load(f))
def inference(batch):
batch["transcription"] = []
for sample in batch["audio"]:
segments, info = model.transcribe(sample["array"], language="en")
batch["transcription"].append("".join([segment.text for segment in segments]))
batch["reference"] = batch["text"]
return batch
dataset = dataset.map(function=inference, batched=True, batch_size=16)
all_transcriptions = []
all_references = []
# iterate over the dataset and run inference
for i, result in tqdm(enumerate(dataset), desc="Evaluating..."):
all_transcriptions.append(result["transcription"])
all_references.append(result["reference"])
if args.audio_numb and i == (args.audio_numb - 1):
break
# normalize predictions and references
all_transcriptions = [normalizer(transcription) for transcription in all_transcriptions]
all_references = [normalizer(reference) for reference in all_references]
# compute the WER metric
word_error_rate = 100 * wer(hypothesis=all_transcriptions, reference=all_references)
print("WER: %.3f" % word_error_rate)

docker/Dockerfile (new file, 6 lines)

@@ -0,0 +1,6 @@
FROM nvidia/cuda:12.3.2-cudnn9-runtime-ubuntu22.04
WORKDIR /root
RUN apt-get update -y && apt-get install -y python3-pip
COPY infer.py jfk.flac ./
RUN pip3 install faster-whisper
CMD ["python3", "infer.py"]

docker/infer.py (new file, 7 lines)

@@ -0,0 +1,7 @@
from faster_whisper import WhisperModel
jfk_path = "jfk.flac"
model = WhisperModel("tiny", device="cuda")
segments, info = model.transcribe(jfk_path, word_timestamps=True)
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))

docker/jfk.flac (new binary file, not shown)

View File

@@ -1,5 +1,5 @@
from faster_whisper.audio import decode_audio
from faster_whisper.transcribe import WhisperModel
from faster_whisper.transcribe import BatchedInferencePipeline, WhisperModel
from faster_whisper.utils import available_models, download_model, format_timestamp
from faster_whisper.version import __version__
@@ -7,6 +7,7 @@ __all__ = [
"available_models",
"decode_audio",
"WhisperModel",
"BatchedInferencePipeline",
"download_model",
"format_timestamp",
"__version__",

View File

Binary file not shown.

View File

@@ -43,7 +43,7 @@ def decode_audio(
raw_buffer = io.BytesIO()
dtype = None
with av.open(input_file, metadata_errors="ignore") as container:
with av.open(input_file, mode="r", metadata_errors="ignore") as container:
frames = container.decode(audio=0)
frames = _ignore_invalid_frames(frames)
frames = _group_frames(frames, 500000)
@@ -56,6 +56,10 @@ def decode_audio(
# It appears that some objects related to the resampler are not freed
# unless the garbage collector is manually run.
# https://github.com/SYSTRAN/faster-whisper/issues/390
# note that this slows down loading the audio a little bit
# if that is a concern, please use ffmpeg directly as in here:
# https://github.com/openai/whisper/blob/25639fc/whisper/audio.py#L25-L62
del resampler
gc.collect()
@@ -102,3 +106,18 @@ def _resample_frames(frames, resampler):
# Add None to flush the resampler.
for frame in itertools.chain(frames, [None]):
yield from resampler.resample(frame)
def pad_or_trim(array, length: int = 3000, *, axis: int = -1):
"""
Pad or trim the Mel features array to 3000, as expected by the encoder.
"""
if array.shape[axis] > length:
array = array.take(indices=range(length), axis=axis)
if array.shape[axis] < length:
pad_widths = [(0, 0)] * array.ndim
pad_widths[axis] = (0, length - array.shape[axis])
array = np.pad(array, pad_widths)
return array

View File

@@ -1,7 +1,6 @@
import numpy as np
# Adapted from https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/feature_extraction_whisper.py # noqa: E501
class FeatureExtractor:
def __init__(
self,
@@ -20,12 +19,12 @@ class FeatureExtractor:
self.sampling_rate = sampling_rate
self.mel_filters = self.get_mel_filters(
sampling_rate, n_fft, n_mels=feature_size
)
).astype("float32")
def get_mel_filters(self, sr, n_fft, n_mels=128, dtype=np.float32):
@staticmethod
def get_mel_filters(sr, n_fft, n_mels=128):
# Initialize the weights
n_mels = int(n_mels)
weights = np.zeros((n_mels, int(1 + n_fft // 2)), dtype=dtype)
# Center freqs of each FFT bin
fftfreqs = np.fft.rfftfreq(n=n_fft, d=1.0 / sr)
@@ -36,8 +35,6 @@ class FeatureExtractor:
mels = np.linspace(min_mel, max_mel, n_mels + 2)
mels = np.asanyarray(mels)
# Fill in the linear scale
f_min = 0.0
f_sp = 200.0 / 3
@@ -52,112 +49,179 @@ class FeatureExtractor:
log_t = mels >= min_log_mel
freqs[log_t] = min_log_hz * np.exp(logstep * (mels[log_t] - min_log_mel))
mel_f = freqs
fdiff = np.diff(freqs)
ramps = freqs.reshape(-1, 1) - fftfreqs.reshape(1, -1)
fdiff = np.diff(mel_f)
ramps = np.subtract.outer(mel_f, fftfreqs)
lower = -ramps[:-2] / np.expand_dims(fdiff[:-1], axis=1)
upper = ramps[2:] / np.expand_dims(fdiff[1:], axis=1)
for i in range(n_mels):
# lower and upper slopes for all bins
lower = -ramps[i] / fdiff[i]
upper = ramps[i + 2] / fdiff[i + 1]
# .. then intersect them with each other and zero
weights[i] = np.maximum(0, np.minimum(lower, upper))
# Intersect them with each other and zero, vectorized across all i
weights = np.maximum(np.zeros_like(lower), np.minimum(lower, upper))
# Slaney-style mel is scaled to be approx constant energy per channel
enorm = 2.0 / (mel_f[2 : n_mels + 2] - mel_f[:n_mels])
weights *= enorm[:, np.newaxis]
enorm = 2.0 / (freqs[2 : n_mels + 2] - freqs[:n_mels])
weights *= np.expand_dims(enorm, axis=1)
return weights
def fram_wave(self, waveform, center=True):
"""
Transform a raw waveform into a list of smaller waveforms.
The window length defines how much of the signal is
contain in each frame (smalle waveform), while the hope length defines the step
between the beginning of each new frame.
Centering is done by reflecting the waveform which is first centered around
`frame_idx * hop_length`.
"""
frames = []
for i in range(0, waveform.shape[0] + 1, self.hop_length):
half_window = (self.n_fft - 1) // 2 + 1
if center:
start = i - half_window if i > half_window else 0
end = (
i + half_window
if i < waveform.shape[0] - half_window
else waveform.shape[0]
@staticmethod
def stft(
input_array: np.ndarray,
n_fft: int,
hop_length: int = None,
win_length: int = None,
window: np.ndarray = None,
center: bool = True,
mode: str = "reflect",
normalized: bool = False,
onesided: bool = None,
return_complex: bool = None,
):
# Default initialization for hop_length and win_length
hop_length = hop_length if hop_length is not None else n_fft // 4
win_length = win_length if win_length is not None else n_fft
input_is_complex = np.iscomplexobj(input_array)
# Determine if the output should be complex
return_complex = (
return_complex
if return_complex is not None
else (input_is_complex or (window is not None and np.iscomplexobj(window)))
)
if not return_complex and return_complex is None:
raise ValueError(
"stft requires the return_complex parameter for real inputs."
)
# Input checks
if not np.issubdtype(input_array.dtype, np.floating) and not input_is_complex:
raise ValueError(
"stft: expected an array of floating point or complex values,"
f" got {input_array.dtype}"
)
if input_array.ndim > 2 or input_array.ndim < 1:
raise ValueError(
f"stft: expected a 1D or 2D array, but got {input_array.ndim}D array"
)
# Handle 1D input
if input_array.ndim == 1:
input_array = np.expand_dims(input_array, axis=0)
input_array_1d = True
else:
input_array_1d = False
# Center padding if required
if center:
pad_amount = n_fft // 2
input_array = np.pad(
input_array, ((0, 0), (pad_amount, pad_amount)), mode=mode
)
batch, length = input_array.shape
# Additional input checks
if n_fft <= 0 or n_fft > length:
raise ValueError(
f"stft: expected 0 < n_fft <= {length}, but got n_fft={n_fft}"
)
if hop_length <= 0:
raise ValueError(
f"stft: expected hop_length > 0, but got hop_length={hop_length}"
)
if win_length <= 0 or win_length > n_fft:
raise ValueError(
f"stft: expected 0 < win_length <= n_fft, but got win_length={win_length}"
)
if window is not None:
if window.ndim != 1 or window.shape[0] != win_length:
raise ValueError(
f"stft: expected a 1D window array of size equal to win_length={win_length}, "
f"but got window with size {window.shape}"
)
frame = waveform[start:end]
# Handle padding of the window if necessary
if win_length < n_fft:
left = (n_fft - win_length) // 2
window_ = np.zeros(n_fft, dtype=window.dtype)
window_[left : left + win_length] = window
else:
window_ = window
if start == 0:
padd_width = (-i + half_window, 0)
frame = np.pad(frame, pad_width=padd_width, mode="reflect")
# Calculate the number of frames
n_frames = 1 + (length - n_fft) // hop_length
elif end == waveform.shape[0]:
padd_width = (0, (i - waveform.shape[0] + half_window))
frame = np.pad(frame, pad_width=padd_width, mode="reflect")
# Time to columns
input_array = np.lib.stride_tricks.as_strided(
input_array,
(batch, n_frames, n_fft),
(
input_array.strides[0],
hop_length * input_array.strides[1],
input_array.strides[1],
),
)
else:
frame = waveform[i : i + self.n_fft]
frame_width = frame.shape[0]
if frame_width < waveform.shape[0]:
frame = np.lib.pad(
frame,
pad_width=(0, self.n_fft - frame_width),
mode="constant",
constant_values=0,
)
if window_ is not None:
input_array = input_array * window_
frames.append(frame)
return np.stack(frames, 0)
# FFT and transpose
complex_fft = input_is_complex
onesided = onesided if onesided is not None else not complex_fft
def stft(self, frames, window):
if normalized:
norm = "ortho"
else:
norm = None
if complex_fft:
if onesided:
raise ValueError(
"Cannot have onesided output if window or input is complex"
)
output = np.fft.fft(input_array, n=n_fft, axis=-1, norm=norm)
else:
output = np.fft.rfft(input_array, n=n_fft, axis=-1, norm=norm)
output = output.transpose((0, 2, 1))
if input_array_1d:
output = output.squeeze(0)
return output if return_complex else np.real(output)
def __call__(self, waveform: np.ndarray, padding=160, chunk_length=None):
"""
Calculates the complex Short-Time Fourier Transform (STFT) of the given framed signal.
Should give the same results as `torch.stft`.
Compute the log-Mel spectrogram of the provided audio.
"""
frame_size = frames.shape[1]
fft_size = self.n_fft
if fft_size is None:
fft_size = frame_size
if chunk_length is not None:
self.n_samples = chunk_length * self.sampling_rate
self.nb_max_frames = self.n_samples // self.hop_length
if fft_size < frame_size:
raise ValueError("FFT size must greater or equal the frame size")
# number of FFT bins to store
num_fft_bins = (fft_size >> 1) + 1
if waveform.dtype is not np.float32:
waveform = waveform.astype(np.float32)
data = np.empty((len(frames), num_fft_bins), dtype=np.complex64)
fft_signal = np.zeros(fft_size)
for f, frame in enumerate(frames):
if window is not None:
np.multiply(frame, window, out=fft_signal[:frame_size])
else:
fft_signal[:frame_size] = frame
data[f] = np.fft.fft(fft_signal, axis=0)[:num_fft_bins]
return data.T
def __call__(self, waveform, padding=True):
"""
Compute the log-Mel spectrogram of the provided audio, gives similar results
whisper's original torch implementation with 1e-5 tolerance.
"""
if padding:
waveform = np.pad(waveform, [(0, self.n_samples)])
waveform = np.pad(waveform, (0, padding))
window = np.hanning(self.n_fft + 1)[:-1]
window = np.hanning(self.n_fft + 1)[:-1].astype("float32")
frames = self.fram_wave(waveform)
stft = self.stft(frames, window=window)
magnitudes = np.abs(stft[:, :-1]) ** 2
stft = self.stft(
waveform,
self.n_fft,
self.hop_length,
window=window,
return_complex=True,
).astype("complex64")
magnitudes = np.abs(stft[..., :-1]) ** 2
filters = self.mel_filters
mel_spec = filters @ magnitudes
mel_spec = self.mel_filters @ magnitudes
log_spec = np.log10(np.clip(mel_spec, a_min=1e-10, a_max=None))
log_spec = np.maximum(log_spec, log_spec.max() - 8.0)

View File

@@ -67,6 +67,12 @@ class Tokenizer:
def no_timestamps(self) -> int:
return self.tokenizer.token_to_id("<|notimestamps|>")
@cached_property
def no_speech(self) -> int:
return self.tokenizer.token_to_id("<|nospeech|>") or self.tokenizer.token_to_id(
"<|nocaptions|>"
)
@property
def timestamp_begin(self) -> int:
return self.no_timestamps + 1
@@ -105,10 +111,46 @@ class Tokenizer:
[s if isinstance(s, str) else self.tokenizer.decode(s) for s in outputs]
)
@cached_property
def non_speech_tokens(self) -> Tuple[int]:
"""
Returns the list of tokens to suppress in order to avoid any speaker tags or non-speech
annotations, to prevent sampling texts that are not actually spoken in the audio, e.g.
- ♪♪♪
- ( SPEAKING FOREIGN LANGUAGE )
- [DAVID] Hey there,
keeping basic punctuations like commas, periods, question marks, exclamation points, etc.
"""
symbols = list('"#()*+/:;<=>@[\\]^_`{|}~「」『』')
symbols += (
"<< >> <<< >>> -- --- -( -[ (' (\" (( )) ((( ))) [[ ]] {{ }} ♪♪ ♪♪♪".split()
)
# symbols that may be a single token or multiple tokens depending on the tokenizer.
# In case they're multiple tokens, suppress the first token, which is safe because:
# These are between U+2640 and U+267F miscellaneous symbols that are okay to suppress
# in generations, and in the 3-byte UTF-8 representation they share the first two bytes.
miscellaneous = set("♩♪♫♬♭♮♯")
assert all(0x2640 <= ord(c) <= 0x267F for c in miscellaneous)
# allow hyphens "-" and single quotes "'" between words, but not at the beginning of a word
result = {self.encode(" -")[0], self.encode(" '")[0]}
for symbol in symbols + list(miscellaneous):
for tokens in [
self.encode(symbol),
self.encode(" " + symbol),
]:
if len(tokens) == 1 or symbol in miscellaneous:
result.add(tokens[0])
return tuple(sorted(result))
def split_to_word_tokens(
self, tokens: List[int]
) -> Tuple[List[str], List[List[int]]]:
if self.language_code in {"zh", "ja", "th", "lo", "my"}:
if self.language_code in {"zh", "ja", "th", "lo", "my", "yue"}:
# These languages don't typically use spaces, so it is difficult to split words
# without morpheme analysis. Here, we instead split words at any
# position where the tokens are decoded as valid unicode points
@@ -274,4 +316,5 @@ _LANGUAGE_CODES = (
"yi",
"yo",
"zh",
"yue",
)
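The new no_speech property above falls back from <|nospeech|> to <|nocaptions|>, so both vocabulary variants resolve to a valid id. A minimal sketch of the same lookup, assuming hf_tokenizer is a tokenizers.Tokenizer loaded from the model's tokenizer.json (the path is an assumption):

    from tokenizers import Tokenizer

    hf_tokenizer = Tokenizer.from_file("tokenizer.json")  # hypothetical path

    def resolve_no_speech_id(tok: Tokenizer) -> int:
        # token_to_id returns None when the token is missing from the vocabulary,
        # so older vocabularies fall through to <|nocaptions|>.
        token_id = tok.token_to_id("<|nospeech|>")
        return token_id if token_id is not None else tok.token_to_id("<|nocaptions|>")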

File diff suppressed because it is too large

faster_whisper/utils.py

@@ -2,7 +2,7 @@ import logging
import os
import re
from typing import List, Optional
from typing import List, Optional, Union
import huggingface_hub
import requests
@@ -10,17 +10,25 @@ import requests
from tqdm.auto import tqdm
_MODELS = {
"tiny.en": "guillaumekln/faster-whisper-tiny.en",
"tiny": "guillaumekln/faster-whisper-tiny",
"base.en": "guillaumekln/faster-whisper-base.en",
"base": "guillaumekln/faster-whisper-base",
"small.en": "guillaumekln/faster-whisper-small.en",
"small": "guillaumekln/faster-whisper-small",
"medium.en": "guillaumekln/faster-whisper-medium.en",
"medium": "guillaumekln/faster-whisper-medium",
"large-v1": "guillaumekln/faster-whisper-large-v1",
"large-v2": "guillaumekln/faster-whisper-large-v2",
"large": "guillaumekln/faster-whisper-large-v2",
"tiny.en": "Systran/faster-whisper-tiny.en",
"tiny": "Systran/faster-whisper-tiny",
"base.en": "Systran/faster-whisper-base.en",
"base": "Systran/faster-whisper-base",
"small.en": "Systran/faster-whisper-small.en",
"small": "Systran/faster-whisper-small",
"medium.en": "Systran/faster-whisper-medium.en",
"medium": "Systran/faster-whisper-medium",
"large-v1": "Systran/faster-whisper-large-v1",
"large-v2": "Systran/faster-whisper-large-v2",
"large-v3": "Systran/faster-whisper-large-v3",
"large": "Systran/faster-whisper-large-v3",
"distil-large-v2": "Systran/faster-distil-whisper-large-v2",
"distil-medium.en": "Systran/faster-distil-whisper-medium.en",
"distil-small.en": "Systran/faster-distil-whisper-small.en",
"distil-large-v3": "Systran/faster-distil-whisper-large-v3",
"distil-large-v3.5": "distil-whisper/distil-large-v3.5-ct2",
"large-v3-turbo": "mobiuslabsgmbh/faster-whisper-large-v3-turbo",
"turbo": "mobiuslabsgmbh/faster-whisper-large-v3-turbo",
}
@@ -44,19 +52,26 @@ def download_model(
output_dir: Optional[str] = None,
local_files_only: bool = False,
cache_dir: Optional[str] = None,
revision: Optional[str] = None,
use_auth_token: Optional[Union[str, bool]] = None,
):
"""Downloads a CTranslate2 Whisper model from the Hugging Face Hub.
Args:
size_or_id: Size of the model to download from https://huggingface.co/guillaumekln
(tiny, tiny.en, base, base.en, small, small.en medium, medium.en, large-v1, large-v2,
large), or a CTranslate2-converted model ID from the Hugging Face Hub
(e.g. guillaumekln/faster-whisper-large-v2).
size_or_id: Size of the model to download from https://huggingface.co/Systran
(tiny, tiny.en, base, base.en, small, small.en, distil-small.en, medium, medium.en,
distil-medium.en, large-v1, large-v2, large-v3, large, distil-large-v2,
distil-large-v3), or a CTranslate2-converted model ID from the Hugging Face Hub
(e.g. Systran/faster-whisper-large-v3).
output_dir: Directory where the model should be saved. If not set, the model is saved in
the cache directory.
local_files_only: If True, avoid downloading the file and return the path to the local
cached file if it exists.
cache_dir: Path to the folder where cached files are stored.
revision: An optional Git revision id which can be a branch name, a tag, or a
commit hash.
use_auth_token: Hugging Face authentication token, or True to use the
token stored in the Hugging Face config folder.
Returns:
The path to the downloaded model.
@@ -76,6 +91,7 @@ def download_model(
allow_patterns = [
"config.json",
"preprocessor_config.json",
"model.bin",
"tokenizer.json",
"vocabulary.*",
@@ -85,6 +101,7 @@ def download_model(
"local_files_only": local_files_only,
"allow_patterns": allow_patterns,
"tqdm_class": disabled_tqdm,
"revision": revision,
}
if output_dir is not None:
@@ -94,6 +111,9 @@ def download_model(
if cache_dir is not None:
kwargs["cache_dir"] = cache_dir
if use_auth_token is not None:
kwargs["token"] = use_auth_token
try:
return huggingface_hub.snapshot_download(repo_id, **kwargs)
except (
@@ -141,3 +161,10 @@ class disabled_tqdm(tqdm):
def __init__(self, *args, **kwargs):
kwargs["disable"] = True
super().__init__(*args, **kwargs)
def get_end(segments: List[dict]) -> Optional[float]:
return next(
(w["end"] for s in reversed(segments) for w in reversed(s["words"])),
segments[-1]["end"] if segments else None,
)
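Taken together, these changes let download_model resolve an alias from _MODELS or accept a full Hub repo id, pin a Git revision, and authenticate against private repositories. A short usage sketch; the private repo id is a placeholder:

    from faster_whisper.utils import download_model

    # Alias resolved through _MODELS above -> Systran/faster-whisper-large-v3
    model_dir = download_model("large-v3")

    # Hypothetical private CTranslate2 model, pinned to a revision:
    private_dir = download_model(
        "my-org/faster-whisper-finetune",  # placeholder repo id
        revision="v1.0",                   # branch name, tag, or commit hash
        use_auth_token=True,               # reuse the locally stored HF token
    )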

faster_whisper/vad.py

@@ -1,9 +1,9 @@
import bisect
import functools
import os
import warnings
from typing import List, NamedTuple, Optional
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple
import numpy as np
@@ -11,13 +11,19 @@ from faster_whisper.utils import get_assets_path
# The code below is adapted from https://github.com/snakers4/silero-vad.
class VadOptions(NamedTuple):
@dataclass
class VadOptions:
"""VAD options.
Attributes:
threshold: Speech threshold. Silero VAD outputs speech probabilities for each audio chunk,
probabilities ABOVE this value are considered as SPEECH. It is better to tune this
parameter for each dataset separately, but "lazy" 0.5 is pretty good for most datasets.
neg_threshold: Silence threshold for determining the end of speech. If a probability is lower
than neg_threshold, it is always considered silence. Values higher than neg_threshold
are only considered speech if the previous sample was classified as speech; otherwise,
they are treated as silence. This parameter helps refine the detection of speech
transitions, ensuring smoother segment boundaries.
min_speech_duration_ms: Final speech chunks shorter than min_speech_duration_ms are thrown out.
max_speech_duration_s: Maximum duration of speech chunks in seconds. Chunks longer
than max_speech_duration_s will be split at the timestamp of the last silence that
@@ -25,23 +31,21 @@ class VadOptions(NamedTuple):
split aggressively just before max_speech_duration_s.
min_silence_duration_ms: At the end of each speech chunk, wait for min_silence_duration_ms
before separating it
window_size_samples: Audio chunks of window_size_samples size are fed to the silero VAD model.
WARNING! Silero VAD models were trained using 512, 1024, 1536 samples for 16000 sample rate.
Values other than these may affect model performance!!
speech_pad_ms: Final speech chunks are padded by speech_pad_ms on each side
"""
threshold: float = 0.5
min_speech_duration_ms: int = 250
neg_threshold: float = None
min_speech_duration_ms: int = 0
max_speech_duration_s: float = float("inf")
min_silence_duration_ms: int = 2000
window_size_samples: int = 1024
speech_pad_ms: int = 400
def get_speech_timestamps(
audio: np.ndarray,
vad_options: Optional[VadOptions] = None,
sampling_rate: int = 16000,
**kwargs,
) -> List[dict]:
"""This method is used for splitting long audios into speech chunks using silero VAD.
@@ -49,6 +53,7 @@ def get_speech_timestamps(
Args:
audio: One dimensional float array.
vad_options: Options for VAD processing.
sampling_rate: Sampling rate of the audio.
kwargs: VAD options passed as keyword arguments for backward compatibility.
Returns:
@@ -58,19 +63,12 @@ def get_speech_timestamps(
vad_options = VadOptions(**kwargs)
threshold = vad_options.threshold
neg_threshold = vad_options.neg_threshold
min_speech_duration_ms = vad_options.min_speech_duration_ms
max_speech_duration_s = vad_options.max_speech_duration_s
min_silence_duration_ms = vad_options.min_silence_duration_ms
window_size_samples = vad_options.window_size_samples
window_size_samples = 512
speech_pad_ms = vad_options.speech_pad_ms
if window_size_samples not in [512, 1024, 1536]:
warnings.warn(
"Unusual window_size_samples! Supported window_size_samples:\n"
" - [512, 1024, 1536] for 16000 sampling_rate"
)
sampling_rate = 16000
min_speech_samples = sampling_rate * min_speech_duration_ms / 1000
speech_pad_samples = sampling_rate * speech_pad_ms / 1000
max_speech_samples = (
@@ -84,20 +82,17 @@ def get_speech_timestamps(
audio_length_samples = len(audio)
model = get_vad_model()
state = model.get_initial_state(batch_size=1)
speech_probs = []
for current_start_sample in range(0, audio_length_samples, window_size_samples):
chunk = audio[current_start_sample : current_start_sample + window_size_samples]
if len(chunk) < window_size_samples:
chunk = np.pad(chunk, (0, int(window_size_samples - len(chunk))))
speech_prob, state = model(chunk, state, sampling_rate)
speech_probs.append(speech_prob)
padded_audio = np.pad(
audio, (0, window_size_samples - audio.shape[0] % window_size_samples)
)
speech_probs = model(padded_audio)
triggered = False
speeches = []
current_speech = {}
neg_threshold = threshold - 0.15
if neg_threshold is None:
neg_threshold = max(threshold - 0.15, 0.01)
# to save potential segment end (and tolerate some silence)
temp_end = 0
@@ -188,12 +183,64 @@ def get_speech_timestamps(
return speeches
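A usage sketch of the simplified API above (the audio file name is a placeholder; the window size and sampling rate are now fixed at 512 samples and 16 kHz):

    from faster_whisper import decode_audio
    from faster_whisper.vad import VadOptions, get_speech_timestamps

    audio = decode_audio("speech.wav", sampling_rate=16000)  # placeholder file
    options = VadOptions(threshold=0.5, min_silence_duration_ms=2000)
    # neg_threshold is left as None, so it defaults to max(threshold - 0.15, 0.01).
    speech_chunks = get_speech_timestamps(audio, options)
    # Each entry is a dict of sample offsets, e.g. {"start": 15872, "end": 94720}.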
def collect_chunks(audio: np.ndarray, chunks: List[dict]) -> np.ndarray:
"""Collects and concatenates audio chunks."""
def collect_chunks(
audio: np.ndarray,
chunks: List[dict],
sampling_rate: int = 16000,
max_duration: float = float("inf"),
) -> Tuple[List[np.ndarray], List[Dict[str, float]]]:
"""This function merges the chunks of audio into chunks of max_duration (s) length."""
if not chunks:
return np.array([], dtype=np.float32)
chunk_metadata = {
"offset": 0,
"duration": 0,
"segments": [],
}
return [np.array([], dtype=np.float32)], [chunk_metadata]
return np.concatenate([audio[chunk["start"] : chunk["end"]] for chunk in chunks])
audio_chunks = []
chunks_metadata = []
current_segments = []
current_duration = 0
total_duration = 0
current_audio = np.array([], dtype=np.float32)
for chunk in chunks:
if (
current_duration + chunk["end"] - chunk["start"]
> max_duration * sampling_rate
):
audio_chunks.append(current_audio)
chunk_metadata = {
"offset": total_duration / sampling_rate,
"duration": current_duration / sampling_rate,
"segments": current_segments,
}
total_duration += current_duration
chunks_metadata.append(chunk_metadata)
current_segments = []
current_audio = audio[chunk["start"] : chunk["end"]]
current_duration = chunk["end"] - chunk["start"]
else:
current_segments.append(chunk)
current_audio = np.concatenate(
(current_audio, audio[chunk["start"] : chunk["end"]])
)
current_duration += chunk["end"] - chunk["start"]
audio_chunks.append(current_audio)
chunk_metadata = {
"offset": total_duration / sampling_rate,
"duration": current_duration / sampling_rate,
"segments": current_segments,
}
chunks_metadata.append(chunk_metadata)
return audio_chunks, chunks_metadata
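Continuing the sketch above, the reworked collect_chunks returns the merged audio pieces together with per-piece metadata instead of a single concatenated array:

    from faster_whisper.vad import collect_chunks

    audio_chunks, chunks_metadata = collect_chunks(
        audio, speech_chunks, sampling_rate=16000, max_duration=30.0
    )
    for piece, meta in zip(audio_chunks, chunks_metadata):
        # offset and duration are in seconds; segments lists the sample-level
        # speech chunks merged into this piece.
        print(len(piece) / 16000, meta["offset"], meta["duration"], len(meta["segments"]))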
class SpeechTimestampsMap:
@@ -219,15 +266,19 @@ class SpeechTimestampsMap:
self,
time: float,
chunk_index: Optional[int] = None,
is_end: bool = False,
) -> float:
if chunk_index is None:
chunk_index = self.get_chunk_index(time)
chunk_index = self.get_chunk_index(time, is_end)
total_silence_before = self.total_silence_before[chunk_index]
return round(total_silence_before + time, self.time_precision)
def get_chunk_index(self, time: float) -> int:
def get_chunk_index(self, time: float, is_end: bool = False) -> int:
sample = int(time * self.sampling_rate)
if sample in self.chunk_end_sample and is_end:
return self.chunk_end_sample.index(sample)
return min(
bisect.bisect(self.chunk_end_sample, sample),
len(self.chunk_end_sample) - 1,
@@ -237,7 +288,7 @@ class SpeechTimestampsMap:
@functools.lru_cache
def get_vad_model():
"""Returns the VAD model instance."""
path = os.path.join(get_assets_path(), "silero_vad.onnx")
path = os.path.join(get_assets_path(), "silero_vad_v6.onnx")
return SileroVADModel(path)
@@ -253,6 +304,7 @@ class SileroVADModel:
opts = onnxruntime.SessionOptions()
opts.inter_op_num_threads = 1
opts.intra_op_num_threads = 1
opts.enable_cpu_mem_arena = False
opts.log_severity_level = 4
self.session = onnxruntime.InferenceSession(
@@ -261,31 +313,39 @@ class SileroVADModel:
sess_options=opts,
)
def get_initial_state(self, batch_size: int):
h = np.zeros((2, batch_size, 64), dtype=np.float32)
c = np.zeros((2, batch_size, 64), dtype=np.float32)
return h, c
def __call__(
self, audio: np.ndarray, num_samples: int = 512, context_size_samples: int = 64
):
assert audio.ndim == 1, "Input should be a 1D array"
assert (
audio.shape[0] % num_samples == 0
), "Input size should be a multiple of num_samples"
def __call__(self, x, state, sr: int):
if len(x.shape) == 1:
x = np.expand_dims(x, 0)
if len(x.shape) > 2:
raise ValueError(
f"Too many dimensions for input audio chunk {len(x.shape)}"
h = np.zeros((1, 1, 128), dtype="float32")
c = np.zeros((1, 1, 128), dtype="float32")
context = np.zeros(
(1, context_size_samples),
dtype="float32",
)
batched_audio = audio.reshape(-1, num_samples)
context = batched_audio[..., -context_size_samples:]
context[-1] = 0
context = np.roll(context, 1, 0)
batched_audio = np.concatenate([context, batched_audio], 1)
batched_audio = batched_audio.reshape(-1, num_samples + context_size_samples)
encoder_batch_size = 10000
num_segments = batched_audio.shape[0]
outputs = []
for i in range(0, num_segments, encoder_batch_size):
output, h, c = self.session.run(
None,
{"input": batched_audio[i : i + encoder_batch_size], "h": h, "c": c},
)
if sr / x.shape[1] > 31.25:
raise ValueError("Input audio chunk is too short")
outputs.append(output)
h, c = state
out = np.concatenate(outputs, axis=0)
ort_inputs = {
"input": x,
"h": h,
"c": c,
"sr": np.array(sr, dtype="int64"),
}
out, h, c = self.session.run(None, ort_inputs)
state = (h, c)
return out, state
return out
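The batched V6 call above reshapes the audio into 512-sample windows and prefixes each window with the last 64 samples of the previous one. A numpy-only sketch of that preparation step (the ONNX session call itself is omitted):

    import numpy as np

    def batch_with_context(audio: np.ndarray, num_samples: int = 512, context_size: int = 64):
        # Pad to a multiple of the window size, then split into 512-sample windows.
        pad = (-len(audio)) % num_samples
        windows = np.pad(audio, (0, pad)).reshape(-1, num_samples)
        # Prefix each window with the previous window's tail (zeros for the first
        # window); copied so the windows themselves are left untouched.
        context = windows[:, -context_size:].copy()
        context[-1] = 0
        context = np.roll(context, 1, axis=0)
        return np.concatenate([context, windows], axis=1)  # shape (num_windows, 576)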

faster_whisper/version.py

@@ -1,3 +1,3 @@
"""Version information."""
__version__ = "0.9.0"
__version__ = "1.2.0"

requirements.txt

@@ -1,5 +1,6 @@
av==10.*
ctranslate2>=3.17,<4
ctranslate2>=4.0,<5
huggingface_hub>=0.13
tokenizers>=0.13,<0.15
onnxruntime>=1.14,<2
tokenizers>=0.13,<1
onnxruntime>=1.14,<2
av>=11
tqdm

setup.py

@@ -37,7 +37,7 @@ setup(
long_description=get_long_description(),
long_description_content_type="text/markdown",
author="Guillaume Klein",
url="https://github.com/guillaumekln/faster-whisper",
url="https://github.com/SYSTRAN/faster-whisper",
classifiers=[
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
@@ -45,14 +45,13 @@ setup(
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
],
keywords="openai whisper speech ctranslate2 inference quantization transformer",
python_requires=">=3.8",
python_requires=">=3.10",
install_requires=install_requires,
extras_require={
"conversion": conversion_requires,

tests/conftest.py

@@ -11,3 +11,8 @@ def data_dir():
@pytest.fixture
def jfk_path(data_dir):
return os.path.join(data_dir, "jfk.flac")
@pytest.fixture
def physcisworks_path(data_dir):
return os.path.join(data_dir, "physicsworks.wav")

tests/data/hotwords.mp3 (new binary file, not shown)

tests/data/multilingual.mp3 (new binary file, not shown)

tests/data/physicsworks.wav (new binary file, not shown)

tests/test_tokenizer.py (new file, 121 lines)

@@ -0,0 +1,121 @@
from faster_whisper import WhisperModel
from faster_whisper.tokenizer import Tokenizer
from faster_whisper.transcribe import get_suppressed_tokens
def test_suppressed_tokens_minus_1():
model = WhisperModel("tiny.en")
tokenizer = Tokenizer(model.hf_tokenizer, False)
tokens = get_suppressed_tokens(tokenizer, [-1])
assert tokens == (
1,
2,
7,
8,
9,
10,
14,
25,
26,
27,
28,
29,
31,
58,
59,
60,
61,
62,
63,
90,
91,
92,
93,
357,
366,
438,
532,
685,
705,
796,
930,
1058,
1220,
1267,
1279,
1303,
1343,
1377,
1391,
1635,
1782,
1875,
2162,
2361,
2488,
3467,
4008,
4211,
4600,
4808,
5299,
5855,
6329,
7203,
9609,
9959,
10563,
10786,
11420,
11709,
11907,
13163,
13697,
13700,
14808,
15306,
16410,
16791,
17992,
19203,
19510,
20724,
22305,
22935,
27007,
30109,
30420,
33409,
34949,
40283,
40493,
40549,
47282,
49146,
50257,
50357,
50358,
50359,
50360,
50361,
)
def test_suppressed_tokens_minus_value():
model = WhisperModel("tiny.en")
tokenizer = Tokenizer(model.hf_tokenizer, False)
tokens = get_suppressed_tokens(tokenizer, [13])
assert tokens == (13, 50257, 50357, 50358, 50359, 50360, 50361)
def test_split_on_unicode():
model = WhisperModel("tiny")
tokenizer = Tokenizer(model.hf_tokenizer, False)
tokens = [8404, 871, 287, 6, 246, 526, 3210, 20378]
words, word_tokens = tokenizer.split_tokens_on_unicode(tokens)
assert words == [" elle", " est", " l", "'", "\ufffd", "é", "rit", "oire"]
assert word_tokens == [[8404], [871], [287], [6], [246], [526], [3210], [20378]]

tests/test_transcribe.py

@@ -1,6 +1,9 @@
import inspect
import os
from faster_whisper import WhisperModel, decode_audio
import numpy as np
from faster_whisper import BatchedInferencePipeline, WhisperModel, decode_audio
def test_supported_languages():
@@ -30,13 +33,68 @@ def test_transcribe(jfk_path):
segment = segments[0]
assert segment.text == (
" And so my fellow Americans ask not what your country can do for you, "
" And so my fellow Americans, ask not what your country can do for you, "
"ask what you can do for your country."
)
assert segment.text == "".join(word.word for word in segment.words)
assert segment.start == segment.words[0].start
assert segment.end == segment.words[-1].end
batched_model = BatchedInferencePipeline(model=model)
result, info = batched_model.transcribe(
jfk_path, word_timestamps=True, vad_filter=False
)
assert info.language == "en"
assert info.language_probability > 0.7
segments = []
for segment in result:
segments.append(
{"start": segment.start, "end": segment.end, "text": segment.text}
)
assert len(segments) == 1
assert segment.text == (
" And so my fellow Americans ask not what your country can do for you, "
"ask what you can do for your country."
)
def test_batched_transcribe(physcisworks_path):
model = WhisperModel("tiny")
batched_model = BatchedInferencePipeline(model=model)
result, info = batched_model.transcribe(physcisworks_path, batch_size=16)
assert info.language == "en"
assert info.language_probability > 0.7
segments = []
for segment in result:
segments.append(
{"start": segment.start, "end": segment.end, "text": segment.text}
)
# number of roughly 30-second segments
assert len(segments) == 6
result, info = batched_model.transcribe(
physcisworks_path,
batch_size=16,
without_timestamps=False,
word_timestamps=True,
)
segments = []
for segment in result:
assert segment.words is not None
segments.append(
{"start": segment.start, "end": segment.end, "text": segment.text}
)
assert len(segments) > 7
def test_empty_audio():
audio = np.asarray([], dtype="float32")
model = WhisperModel("tiny")
pipeline = BatchedInferencePipeline(model=model)
assert list(model.transcribe(audio)[0]) == []
assert list(pipeline.transcribe(audio)[0]) == []
model.detect_language(audio)
def test_prefix_with_timestamps(jfk_path):
@@ -49,12 +107,12 @@ def test_prefix_with_timestamps(jfk_path):
segment = segments[0]
assert segment.text == (
" And so my fellow Americans ask not what your country can do for you, "
" And so my fellow Americans, ask not what your country can do for you, "
"ask what you can do for your country."
)
assert segment.start == 0
assert 10 < segment.end < 11
assert 10 < segment.end <= 11
def test_vad(jfk_path):
@@ -97,3 +155,138 @@ def test_stereo_diarization(data_dir):
segments, _ = model.transcribe(right)
transcription = "".join(segment.text for segment in segments).strip()
assert transcription == "The horizon seems extremely distant."
def test_multilingual_transcription(data_dir):
model = WhisperModel("tiny")
pipeline = BatchedInferencePipeline(model)
audio_path = os.path.join(data_dir, "multilingual.mp3")
audio = decode_audio(audio_path)
segments, info = model.transcribe(
audio,
multilingual=True,
without_timestamps=True,
condition_on_previous_text=False,
)
segments = list(segments)
assert (
segments[0].text
== " Permission is hereby granted, free of charge, to any person obtaining a copy of the"
" software and associated documentation files to deal in the software without restriction,"
" including without limitation the rights to use, copy, modify, merge, publish, distribute"
", sublicence, and or cell copies of the software, and to permit persons to whom the "
"software is furnished to do so, subject to the following conditions. The above copyright"
" notice and this permission notice, shall be included in all copies or substantial "
"portions of the software."
)
assert (
segments[1].text
== " Jedem, der dieses Software und die dazu gehöregen Dokumentationsdatein erhält, wird "
"hiermit unengeltlich die Genehmigung erteilt, wird der Software und eingeschränkt zu "
"verfahren. Dies umfasst insbesondere das Recht, die Software zu verwenden, zu "
"vervielfältigen, zu modifizieren, zu Samenzofügen, zu veröffentlichen, zu verteilen, "
"unterzulizenzieren und oder kopieren der Software zu verkaufen und diese Rechte "
"unterfolgen den Bedingungen anderen zu übertragen."
)
segments, info = pipeline.transcribe(audio, multilingual=True)
segments = list(segments)
assert (
segments[0].text
== " Permission is hereby granted, free of charge, to any person obtaining a copy of the"
" software and associated documentation files to deal in the software without restriction,"
" including without limitation the rights to use, copy, modify, merge, publish, distribute"
", sublicence, and or cell copies of the software, and to permit persons to whom the "
"software is furnished to do so, subject to the following conditions. The above copyright"
" notice and this permission notice, shall be included in all copies or substantial "
"portions of the software."
)
assert (
"Dokumentationsdatein erhält, wird hiermit unengeltlich die Genehmigung erteilt,"
" wird der Software und eingeschränkt zu verfahren. Dies umfasst insbesondere das Recht,"
" die Software zu verwenden, zu vervielfältigen, zu modifizieren"
in segments[1].text
)
def test_hotwords(data_dir):
model = WhisperModel("tiny")
pipeline = BatchedInferencePipeline(model)
audio_path = os.path.join(data_dir, "hotwords.mp3")
audio = decode_audio(audio_path)
segments, info = model.transcribe(audio, hotwords="ComfyUI")
segments = list(segments)
assert "ComfyUI" in segments[0].text
assert info.transcription_options.hotwords == "ComfyUI"
segments, info = pipeline.transcribe(audio, hotwords="ComfyUI")
segments = list(segments)
assert "ComfyUI" in segments[0].text
assert info.transcription_options.hotwords == "ComfyUI"
def test_transcribe_signature():
model_transcribe_args = set(inspect.getargs(WhisperModel.transcribe.__code__).args)
pipeline_transcribe_args = set(
inspect.getargs(BatchedInferencePipeline.transcribe.__code__).args
)
pipeline_transcribe_args.remove("batch_size")
assert model_transcribe_args == pipeline_transcribe_args
def test_monotonic_timestamps(physcisworks_path):
model = WhisperModel("tiny")
pipeline = BatchedInferencePipeline(model=model)
segments, info = model.transcribe(physcisworks_path, word_timestamps=True)
segments = list(segments)
for i in range(len(segments) - 1):
assert segments[i].start <= segments[i].end
assert segments[i].end <= segments[i + 1].start
for word in segments[i].words:
assert word.start <= word.end
assert word.end <= segments[i].end
assert segments[-1].end <= info.duration
segments, info = pipeline.transcribe(physcisworks_path, word_timestamps=True)
segments = list(segments)
for i in range(len(segments) - 1):
assert segments[i].start <= segments[i].end
assert segments[i].end <= segments[i + 1].start
for word in segments[i].words:
assert word.start <= word.end
assert word.end <= segments[i].end
assert segments[-1].end <= info.duration
def test_cliptimestamps_segments(jfk_path):
model = WhisperModel("tiny")
pipeline = BatchedInferencePipeline(model=model)
audio = decode_audio(jfk_path)
audio = np.concatenate([audio, audio])
clip_timestamps = [{"start": 0.0, "end": 11.0}, {"start": 11.0, "end": 22.0}]
segments, info = pipeline.transcribe(audio, clip_timestamps=clip_timestamps)
segments = list(segments)
assert len(segments) == 2
for segment, clip in zip(segments, clip_timestamps):
assert segment.start == clip["start"]
assert segment.end == clip["end"]
assert segment.text == (
" And so my fellow Americans ask not what your country can do for you, "
"ask what you can do for your country."
)