
How to deal with this error? #390

Closed
kotovviktor opened this issue Nov 12, 2024 · 11 comments

@kotovviktor

Good afternoon! Could you help me understand this error? What causes it and how can it be fixed?

Error transcribing file: 'NoneType' object is not iterable
app-1 | /Whisper-WebUI/venv/lib/python3.11/site-packages/torch/cuda/memory.py:330: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
app-1 | warnings.warn(
app-1 | Traceback (most recent call last):
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/queueing.py", line 622, in process_events
app-1 | response = await route_utils.call_process_api(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/route_utils.py", line 323, in call_process_api
app-1 | output = await app.get_blocks().process_api(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 2024, in process_api
app-1 | data = await self.postprocess_data(block_fn, result["prediction"], state)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1780, in postprocess_data
app-1 | self.validate_outputs(block_fn, predictions) # type: ignore
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1739, in validate_outputs
app-1 | raise ValueError(
app-1 | ValueError: A function (transcribe_file) didn't return enough output values (needed: 2, returned: 1).

@kotovviktor kotovviktor added the bug Something isn't working label Nov 12, 2024
@jhj0517
Owner

jhj0517 commented Nov 12, 2024

Hi. The error message doesn't contain much useful information. Is this the full stack trace?
And by any chance, are you running an older version of this WebUI?

@kotovviktor
Author

We are running the latest version in Docker. Everything was working fine just this morning. The error occurs while the model is being initialized for transcription. Could you please advise how to obtain the full stack trace in Docker Compose?

@kotovviktor
Author

Here is the full stack trace:

app-1 | Use "faster-whisper" implementation
app-1 | Device "cuda" is detected
app-1 | * Running on local URL: http://0.0.0.0:7860
app-1 |
app-1 | To create a public link, set share=True in launch().
app-1 | An error occured while synchronizing the model Systran/faster-whisper-small from the Hugging Face Hub:
app-1 | An error happened while trying to locate the files on the Hub and we cannot find the appropriate snapshot folder for the specified revision on the local disk. Please check your internet connection and try again.
app-1 | Trying to load the model directly from the local cache, if it exists.
app-1 | Error transcribing file: Cannot find an appropriate cached snapshot folder for the specified revision on the local disk and outgoing traffic has been disabled. To enable repo look-ups and downloads online, pass 'local_files_only=False' as input.
app-1 | /Whisper-WebUI/venv/lib/python3.11/site-packages/torch/cuda/memory.py:330: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
app-1 | warnings.warn(
app-1 | Traceback (most recent call last):
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/queueing.py", line 622, in process_events
app-1 | response = await route_utils.call_process_api(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/route_utils.py", line 323, in call_process_api
app-1 | output = await app.get_blocks().process_api(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 2024, in process_api
app-1 | data = await self.postprocess_data(block_fn, result["prediction"], state)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1780, in postprocess_data
app-1 | self.validate_outputs(block_fn, predictions) # type: ignore
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1739, in validate_outputs
app-1 | raise ValueError(
app-1 | ValueError: A function (transcribe_file) didn't return enough output values (needed: 2, returned: 1).
app-1 | Output components:
app-1 | [textbox, file]
app-1 | Output values returned:
app-1 | [None]

We had not seen these messages before today; we have been using this WebUI for a month.

@jhj0517
Owner

jhj0517 commented Nov 12, 2024

An error happened while trying to locate the files on the Hub and we cannot find the appropriate snapshot folder for the specified revision on the local disk. Please check your internet connection and try again.

It says that it failed to download the model. Since the model does not exist at /Whisper-WebUI/models/Whisper/faster-whisper/models--Systran--faster-whisper-small, the app tries to download it to that path automatically and fails because of your internet connection.

Since downloading the model with the huggingface package is problematic, and you may be mounting the model path locally into the Docker container, you can manually place any model in /Whisper-WebUI/models/Whisper/faster-whisper.

This is an example model directory structure for faster-whisper-large-v3-turbo-ct2:

(screenshot: example model directory structure for faster-whisper-large-v3-turbo-ct2)

If you place the model files in a subdirectory there, the subdirectory name will appear in the models dropdown.
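As a quick sanity check, the dropdown behaviour described above can be sketched in a few lines of stdlib Python. This is an illustration only, not the project's actual code: the helper name is hypothetical, and it only assumes what the comment states, namely that each subdirectory under the faster-whisper model directory becomes one dropdown entry.

```python
from pathlib import Path

def list_local_models(model_dir: str) -> list[str]:
    """Hypothetical helper: return the model names a dropdown would show,
    assuming each subdirectory under the faster-whisper model dir is one entry."""
    root = Path(model_dir)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir() if p.is_dir())

# Inside the container, the path discussed above would be:
print(list_local_models("/Whisper-WebUI/models/Whisper/faster-whisper"))
```

Running this inside the container shows at a glance whether the manually placed model directory is where the app expects it.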

@kotovviktor
Author

Good afternoon! Thank you for your response. We followed your instructions, and another error has occurred:

An error occured while synchronizing the model Systran/faster-whisper-large-v2 from the Hugging Face Hub:
app-1 | An error happened while trying to locate the files on the Hub and we cannot find the appropriate snapshot folder for the specified revision on the local disk. Please check your internet connection and try again.
app-1 | Trying to load the model directly from the local cache, if it exists.
app-1 | Error transcribing file: Cannot find an appropriate cached snapshot folder for the specified revision on the local disk and outgoing traffic has been disabled. To enable repo look-ups and downloads online, pass 'local_files_only=False' as input.
app-1 | /Whisper-WebUI/venv/lib/python3.11/site-packages/torch/cuda/memory.py:330: FutureWarning: torch.cuda.reset_max_memory_allocated now calls torch.cuda.reset_peak_memory_stats, which resets /all/ peak memory stats.
app-1 | warnings.warn(
app-1 | Traceback (most recent call last):
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/queueing.py", line 622, in process_events
app-1 | response = await route_utils.call_process_api(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/route_utils.py", line 323, in call_process_api
app-1 | output = await app.get_blocks().process_api(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 2024, in process_api
app-1 | data = await self.postprocess_data(block_fn, result["prediction"], state)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1780, in postprocess_data
app-1 | self.validate_outputs(block_fn, predictions) # type: ignore
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/Whisper-WebUI/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1739, in validate_outputs
app-1 | raise ValueError(
app-1 | ValueError: A function (transcribe_mic) didn't return enough output values (needed: 2, returned: 1).
app-1 | Output components:
app-1 | [textbox, file]
app-1 | Output values returned:
app-1 | [None]

(screenshot: photo_2024-11-13_13-19-42)

@jhj0517
Owner

jhj0517 commented Nov 13, 2024

@kotovviktor Hi, can you try again with the correct model directory structure?
There should be a "Whisper" directory after "models".

It should be: Whisper-WebUI/models/Whisper/faster-whisper/faster-whisper-large-v2

not: Whisper-WebUI/models/faster-whisper/faster-whisper-large-v2

Yeah, it's a little too nested... Originally it was supposed to create the directories and download the model into that path automatically; I don't know why Hugging Face Hub is not reachable from your network.

@kotovviktor
Author

We followed your instructions, but encountered a different problem. Could you advise why this might happen during the container build?
(screenshot: photo_2024-11-13_19-34-30)
P.S. We don't know why Hugging Face Hub is not accessible to us

@jhj0517
Owner

jhj0517 commented Nov 13, 2024

I don't know why that happens. The error message just says that the openai-whisper package could not be downloaded.
I tried to reproduce it but could not.

@MarlinMr

MarlinMr commented Nov 21, 2024

I have the same issue. Whisper-WebUI/models/Whisper/faster-whisper/models--Systran--faster-whisper-base/ is where the model is located, but it still tries to download, even after setting --faster_whisper_model_dir /Whisper-WebUI/models/Whisper/faster-whisper.

Could you document this a bit better, or point to the code that imports the model?

(It is probably reproducible by disabling internet access.)

I think the issue arises from how people download the models. The file structure produced by git clone <huggingface repo> is not the same as the one this project creates when it downloads the repo itself.
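The two layouts MarlinMr contrasts can be sketched like this. It is a simplified picture of the Hugging Face hub-cache naming convention (models--&lt;org&gt;--&lt;name&gt;/snapshots/&lt;revision&gt;/) next to a plain git clone; the resolver below is illustrative, not huggingface_hub's or this project's actual implementation, and model.bin stands in for whichever files the model actually ships.

```python
from pathlib import Path
from typing import Optional

def resolve_model_path(model_dir: str, repo_id: str) -> Optional[Path]:
    """Look for model files under either layout:

    1. plain git clone:  <model_dir>/faster-whisper-base/model.bin
    2. HF hub cache:     <model_dir>/models--Systran--faster-whisper-base/
                             snapshots/<revision>/model.bin
    """
    root = Path(model_dir)

    # Layout 1: a directory named after the repo, files at the top level.
    plain = root / repo_id.split("/")[-1]
    if (plain / "model.bin").is_file():
        return plain

    # Layout 2: hub-cache naming with one subdirectory per cached revision.
    snapshots = root / ("models--" + repo_id.replace("/", "--")) / "snapshots"
    if snapshots.is_dir():
        for revision in sorted(snapshots.iterdir()):
            if (revision / "model.bin").is_file():
                return revision
    return None
```

A loader that only checks one of these layouts will miss models placed in the other, which would explain why a correctly placed model still triggers a download attempt.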

@jhj0517
Owner

jhj0517 commented Nov 21, 2024

@MarlinMr Hi, I'm not sure if this helps, but I added local_files_only in #405 to fix this.

Can you check the latest version?
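The general idea behind a local_files_only fix can be sketched as a local-first fallback. This is a generic illustration of the pattern, not the actual code in #405; load_fn is a hypothetical stand-in for the real model loader, mirroring the local_files_only keyword that Hugging Face downloads accept.

```python
def load_with_local_fallback(load_fn):
    """Try the local cache first; only go online if nothing usable is cached.

    load_fn is a hypothetical callable accepting local_files_only=...
    (illustrative; the real loader and its errors depend on the library).
    """
    try:
        # Offline attempt: succeeds only if a cached snapshot exists on disk.
        return load_fn(local_files_only=True)
    except Exception:
        # Nothing usable on disk: allow network access and download.
        return load_fn(local_files_only=False)
```

With this ordering, a machine with a working local copy never touches the network, and only a genuinely missing model triggers a download.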

@jhj0517
Owner

jhj0517 commented Nov 23, 2024

This should no longer happen with #405. If anyone is still having issues,
please feel free to reopen.

@jhj0517 jhj0517 closed this as completed Nov 23, 2024