Fix the decoding issues #1768
base: master
Conversation
revert change
Appending an issue with a zero-filled WAV.
The file from #1881 (a zero-filled WAV) gives a hallucination in this version too:
$ ../whisper.cpp-bobqianic/main -m ./models/ggml-large-v3.bin -l ru --threads 8 -mc 0 samples/zeroes.wav
whisper_init_from_file_with_params_no_state: loading model from './models/ggml-large-v3.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51866
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1280
whisper_model_load: n_text_head = 20
whisper_model_load: n_text_layer = 32
whisper_model_load: n_mels = 128
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs = 100
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070, compute capability 8.9, VMM: yes
whisper_backend_init: using CUDA backend
whisper_model_load: CUDA0 total size = 3094,86 MB (3 buffers)
whisper_model_load: model size = 3094,36 MB
whisper_backend_init: using CUDA backend
whisper_init_state: kv self size = 220,20 MB
whisper_init_state: kv cross size = 245,76 MB
whisper_init_state: compute buffer (conv) = 35,50 MB
whisper_init_state: compute buffer (encode) = 233,50 MB
whisper_init_state: compute buffer (cross) = 10,15 MB
whisper_init_state: compute buffer (decode) = 108,99 MB
system_info: n_threads = 8 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 1 | COREML = 0 | OPENVINO = 0 |
run: processing 'samples/zeroes.wav' (19200 samples, 1,2 sec), 8 threads, 1 processors, 5 beams + best of 5, lang = ru, task = transcribe, timestamps = 1 ...
[00:00:00.000 --> 00:00:29.980] Продолжение следует... (Russian: "To be continued...")
whisper_print_timings: load time = 781,61 ms
whisper_print_timings: fallbacks = 0 p / 0 h
whisper_print_timings: mel time = 4,81 ms
whisper_print_timings: sample time = 28,10 ms / 79 runs ( 0,36 ms per run)
whisper_print_timings: encode time = 162,31 ms / 1 runs ( 162,31 ms per run)
whisper_print_timings: decode time = 0,00 ms / 1 runs ( 0,00 ms per run)
whisper_print_timings: batchd time = 482,89 ms / 77 runs ( 6,27 ms per run)
whisper_print_timings: prompt time = 0,00 ms / 1 runs ( 0,00 ms per run)
whisper_print_timings: total time = 1502,74 ms
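For reference, a zero-filled test WAV like the one above can be generated with a few lines of Python. The 16 kHz mono, 16-bit PCM format is an assumption on my side; 19200 samples matches the "19200 samples, 1,2 sec" reported in the run line.

```python
# Sketch: generate a zero-filled WAV like samples/zeroes.wav above.
# Assumptions (not stated in the log): 16 kHz, mono, 16-bit PCM.
import wave

def write_zero_wav(path, n_samples=19200, rate=16000):
    """Write a mono 16-bit PCM WAV containing only zero samples."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)        # 2 bytes per sample = 16-bit
        w.setframerate(rate)
        w.writeframes(b"\x00\x00" * n_samples)

write_zero_wav("zeroes.wav")   # 19200 / 16000 = 1.2 seconds of silence
```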
What's the status of this PR? Is it safe to use?
I'm thinking about including this pull request in the R wrapper audio.whisper. There, the current approach to handling some of the hallucinations is to use the R packages audio.vadwebrtc or audio.vadsilero to detect silences or general non-voiced signals and either
I haven't looked into the finer details of this pull request (I only skimmed the logic changed in main.cpp and whisper.cpp), but would it already make sense to incorporate this pull request into audio.whisper, or are a lot of changes still to be expected, or is this pull request going to be split into a BPE change (#1854) and a change regarding how to handle non-speech?
@bobqianic are you pursuing this at the moment?
No, at least not in May. I'm really tied up with a lot of things this month. |
The best way to include Silero voice activity detection in whisper.cpp would be to add onnxruntime 1.12.1 as a third-party DLL dependency and then call the Silero ONNX model. My branch has added it. Even with VAD, hallucinations on silent intervals still happen.
I recommend considering a previous Silero VAD version, namely v3.1. The current version, v4 (at the moment of writing), often hallucinates speech on lengthy chunks of silent or near-silent audio. But you have to add a heavyweight dependency like onnxruntime just to run a 750 KB model. The smallest I could possibly reduce onnxruntime.dll to was about 2.2 MB, which is still 3x the size of the Silero weights, and that requires a lengthy custom build of onnxruntime from source with reduced operator set configs and other size-reduction options. Prebuilt redistributables are easily 5-9 MB or more. I have a working Silero v3.1 implementation in pure C, but as much as I would like to suggest it as an option, the code is quite bad; I wrote it as a personal project for learning low-level neural nets.
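For anyone who just wants the flavor of VAD pre-filtering without pulling in onnxruntime at all: the following is not Silero, only a minimal energy-based sketch of detecting non-silent regions before handing audio to whisper.cpp. The frame length and threshold are illustrative assumptions, not tuned values.

```python
# Minimal energy-based VAD sketch (NOT Silero): mark frames whose mean
# squared amplitude exceeds a threshold, and return contiguous runs of
# such frames as (start_sample, end_sample) ranges.
def energy_vad(samples, frame_len=480, threshold=1e-4):
    """Return (start, end) sample ranges whose mean energy exceeds threshold."""
    segments = []
    start = None
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        if energy > threshold:
            if start is None:      # entering a voiced region
                start = i
        elif start is not None:    # leaving a voiced region
            segments.append((start, i))
            start = None
    if start is not None:          # voiced region runs to end of input
        segments.append((start, len(samples)))
    return segments
```

On a zero-filled input this returns no segments at all, which is exactly the case that triggers the hallucination discussed above.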
@bobqianic, could you rebase your changes? I'd like to test those improvements of yours with production data on our setup.
Fix compatibility issue
@ziegenberg I did some testing, and it looks good to me. If the CI is mostly green, you can proceed with your testing now.
I already did some testing and fixed some of the errors on my own. Looks promising. I see fewer hallucinations, but I need to do some more statistics. I will switch to your branch for the next tests. Is your PR #1854 also related to this improvement?
What data/statistics would you need from my side to consider this PR validated and get it merged?
Thank you. If you have the ground truth text, please calculate the WER.
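For the WER calculation, a minimal sketch (standard word-level Levenshtein distance; not code from this repo):

```python
# Word error rate: (substitutions + deletions + insertions) divided by
# the number of words in the reference, computed via edit distance.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

Note that WER can exceed 1.0 when the hypothesis inserts many extra words, which is typical of hallucinated output.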
I tested the output on anime using medium.en and found a problem with timestamp recognition in the middle. File: https://dropover.cloud/f7 e020
Hi @Makememo,
I tested three videos and had this problem with all of them. The common feature of these videos is a music section that begins to mess up the timeline.
@ziegenberg @bobqianic Just wanted to check in on this: is it still needed? Will it land any time soon?
We have now extensively tested this patch.
Summary: We see fewer hallucinations and an overall improvement in accuracy.
Details: Hallucinations still happen if there is a period of time with no spoken words, or music without words. If such a period is longer than 30 seconds, it completely messes up the next couple of minutes, and the model hallucinates wildly. We "fixed" this by processing the input with Silero VAD first and letting whisper.cpp analyze only the parts where speech was recognized. We have no real statistics to show, as we mostly have lecture recordings with heavy use of German and Austrian German dialects, which makes generating a validated transcription very difficult. whisper.cpp mostly corrects the grammar mistakes of the lecturers, which would result in a higher Word Error Rate, even though in reality the transcription is better.
Conclusion: We will use this patch in production from now on. In my opinion, this can be merged.
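The VAD pre-processing described above can be sketched roughly like this (the padding and gap values are assumptions for illustration, not the exact production settings): pad each speech segment reported by the VAD, merge neighbors that overlap or nearly touch, and feed only the merged regions to whisper.cpp.

```python
# Sketch: post-process VAD speech segments before cutting the audio.
# Padding keeps word onsets/offsets; merging avoids chopping mid-sentence.
def merge_speech_segments(segments, pad=0.25, max_gap=0.5):
    """segments: sorted (start, end) times in seconds. Pad each segment,
    then merge any that overlap or sit closer than max_gap seconds apart."""
    merged = []
    for start, end in segments:
        start, end = max(0.0, start - pad), end + pad
        if merged and start - merged[-1][1] <= max_gap:
            # close enough to the previous segment: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

Each merged (start, end) region can then be cut out of the original file and transcribed separately, which is how long silent stretches never reach the decoder in the first place.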
Will test on our end now too. Is there anything "special" that needs to be done in order to test this using the Swift package, @ziegenberg?
I have no experience with Swift, sorry. |
Hi @bobqianic, would you be willing to rebase this once more?
- whisper_wrap_segment: Remove — this is too tricky
- print_realtime, token_nosp: will be addressed in separate PRs
- Use compression ratio instead of entropy