How to deal with this error? #390
Comments
Hi. The error message does not contain much useful information. Is this the full stack trace?
We are running Whisper in the latest version of Docker. Just this morning, everything was working fine. The error occurs during the initialization of the model for transcription. Could you please advise how to obtain the full stack trace in Docker Compose?
Here is the full stack trace:

app-1 | Use "faster-whisper" implementation

We did not receive these messages before today; we have been working with this Whisper setup for a month.
It says that it failed to download the model, and the model does not exist in the local model directory. Since downloading the model with the huggingface package is problematic, and you may be mounting the model path locally into the Docker container, you can manually place any model in the model directory instead, following the example directory structure. If you place the model files in a subdirectory there, the subdirectory name will appear in the models dropdown.
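A small script can confirm whether a manually placed model looks complete before restarting the container. This is a sketch only: the required file names below are the ones a typical CTranslate2/faster-whisper snapshot contains, and the directory path you pass in depends on your own volume mount.

```python
import os

# Files a faster-whisper (CTranslate2) model snapshot normally contains.
# Some models also ship vocabulary.txt or vocabulary.json; this sketch
# checks only the three files that are always expected.
REQUIRED = ["model.bin", "config.json", "tokenizer.json"]

def find_models(models_root: str) -> dict:
    """Return {subdirectory_name: missing_files} for each model folder
    directly under models_root. An empty list means that model folder
    looks complete; a non-empty list names the files it lacks."""
    report = {}
    if not os.path.isdir(models_root):
        return report
    for name in sorted(os.listdir(models_root)):
        path = os.path.join(models_root, name)
        if os.path.isdir(path):
            report[name] = [
                f for f in REQUIRED
                if not os.path.isfile(os.path.join(path, f))
            ]
    return report
```

Each key of the returned dict is a subdirectory name, which is also what would show up in the models dropdown.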
Good afternoon! Thank you for your response. We followed your instructions, but another error has occurred:

An error occured while synchronizing the model Systran/faster-whisper-large-v2 from the Hugging Face Hub:
@kotovviktor Hi, can you try again with the correct model directory structure? The model files should sit directly in the model's subdirectory, not nested one level deeper. Yeah, it's a little too nested... Originally it was supposed to create the directories automatically and download the model into that path.
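When the download lands one directory too deep (e.g. `model_dir/model_dir/model.bin`), the files can be moved up with a short script. This is a hypothetical helper, not part of Whisper-WebUI; back up the directory before trying anything like it.

```python
import os
import shutil

def flatten_model_dir(model_dir: str) -> None:
    """If model files landed one level too deep (model_dir/inner/model.bin),
    move them up so they sit at model_dir/model.bin. No-op when the
    directory is already flat or the layout doesn't match."""
    entries = os.listdir(model_dir)
    if "model.bin" in entries:
        return  # already flat, nothing to do
    if len(entries) == 1:
        inner = os.path.join(model_dir, entries[0])
        if os.path.isdir(inner) and os.path.isfile(os.path.join(inner, "model.bin")):
            for f in os.listdir(inner):
                shutil.move(os.path.join(inner, f), model_dir)
            os.rmdir(inner)
```

Calling it twice is safe: the second call sees `model.bin` at the top level and returns immediately.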
I don't know why it happens. The error message just says that it could not download the model.
I have the same issue. Could you maybe document it a bit better, or point to where the code that imports the model is? (It's probably reproducible by disabling internet access.) I think the issue arises from how people download the models and the resulting file structure.
This should no longer happen with #405. If anyone is still having this issue,
Good afternoon! Could you help me understand this error? What causes it and how can it be fixed?
Error transcribing file: 'NoneType' object is not iterable
app-1 | /Whisper-WebUI/venv/lib/python3.11/site-packages/torch/cuda/memory.py:330: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
app-1 | warnings.warn(
app-1 | Traceback (most recent call last):
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/queueing.py", line 622, in process_events
app-1 | response = await route_utils.call_process_api(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/route_utils.py", line 323, in call_process_api
app-1 | output = await app.get_blocks().process_api(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 2024, in process_api
app-1 | data = await self.postprocess_data(block_fn, result["prediction"], state)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1780, in postprocess_data
app-1 | self.validate_outputs(block_fn, predictions) # type: ignore
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1739, in validate_outputs
app-1 | raise ValueError(
app-1 | ValueError: A function (transcribe_file) didn't return enough output values (needed: 2, returned: 1).
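The final `ValueError` is Gradio complaining about the shape of the return value, not about the transcription itself: a handler wired to two output components must return two values on every code path, including error paths. A minimal sketch (hypothetical code, not Whisper-WebUI's actual `transcribe_file`) showing the pattern that triggers this and the fix:

```python
# Hypothetical handlers illustrating Gradio's output-arity check.
# A function bound to two output components must always return a 2-tuple.

def transcribe_file_buggy(audio):
    try:
        # Simulate the underlying failure from the log above.
        raise TypeError("'NoneType' object is not iterable")
    except TypeError as e:
        # Bug: only ONE value is returned on the error path, so Gradio
        # raises "didn't return enough output values (needed: 2, returned: 1)".
        return f"Error transcribing file: {e}"

def transcribe_file_fixed(audio):
    try:
        raise TypeError("'NoneType' object is not iterable")
    except TypeError as e:
        # Fix: still return two values (e.g. message + empty file output).
        return f"Error transcribing file: {e}", None
```

So the root cause to chase is the earlier `'NoneType' object is not iterable` failure; the `ValueError` is just its symptom surfacing through Gradio's output validation.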