ModuleNotFoundError: No module named 'demo' #23

Open
Wangmmstar opened this issue Jan 22, 2024 · 5 comments

Comments

@Wangmmstar

Question

Hi, does anyone else encounter this issue when calling a tool worker like Grounding DINO?
$ python serve/grounding_dino_worker.py
Traceback (most recent call last):
File "/media/mwang34/study/mengjun/llm/LLaVA-Plus/serve/grounding_dino_worker.py", line 31, in
from demo.inference_on_a_image import get_grounding_output
ModuleNotFoundError: No module named 'demo'

@labuladon

I met this problem also.

@liwenyang-911

liwenyang-911 commented Feb 2, 2024

Is it solved? I already solved it!

@pedramaghazadeh

You need to move grounding_dino_worker.py into the same directory as the GroundingDINO repository you've installed. Alternatively, you can move the demo folder from GroundingDINO into the serve directory.
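If moving files around is inconvenient, another workaround is to put the GroundingDINO checkout on sys.path before the failing import. A rough sketch (the path below is only a placeholder for wherever you cloned GroundingDINO, not a path from this repo):

import sys
from pathlib import Path

# Placeholder: point this at your local GroundingDINO clone,
# which contains demo/inference_on_a_image.py.
GROUNDING_DINO_ROOT = Path("/path/to/GroundingDINO")
sys.path.insert(0, str(GROUNDING_DINO_ROOT))

# Now the import used by serve/grounding_dino_worker.py can resolve.
from demo.inference_on_a_image import get_grounding_output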

@Wangmmstar
Author

Wangmmstar commented Feb 13, 2024

Hey, thank you guys. I moved those files to the correct position. The old error disappears, but a new error pops up when I run the chatbot on the local server. It always shows a missing scheme, so the answer always shows an error.

Does anyone have any ideas?

File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/gradio/routes.py", line 437, in run_predict
2024-02-13 09:58:49 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/gradio/blocks.py", line 1352, in process_api
2024-02-13 09:58:49 | ERROR | stderr | result = await self.call_function(
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/gradio/blocks.py", line 1093, in call_function
2024-02-13 09:58:49 | ERROR | stderr | prediction = await utils.async_iteration(iterator)
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/gradio/utils.py", line 341, in async_iteration
2024-02-13 09:58:49 | ERROR | stderr | return await iterator.__anext__()
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/gradio/utils.py", line 334, in __anext__
2024-02-13 09:58:49 | ERROR | stderr | return await anyio.to_thread.run_sync(
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
2024-02-13 09:58:49 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
2024-02-13 09:58:49 | ERROR | stderr | return await future
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
2024-02-13 09:58:49 | ERROR | stderr | result = context.run(func, *args)
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/gradio/utils.py", line 317, in run_sync_iterator_async
2024-02-13 09:58:49 | ERROR | stderr | return next(iterator)
2024-02-13 09:58:49 | ERROR | stderr | File "/media/mwang34/study/mengjun/llm/LLaVA-Plus/llava/serve/gradio_web_server_llava_plus.py", line 552, in http_bot
2024-02-13 09:58:49 | ERROR | stderr | tool_response = requests.post(
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/requests/api.py", line 115, in post
2024-02-13 09:58:49 | ERROR | stderr | return request("post", url, data=data, json=json, **kwargs)
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/requests/api.py", line 59, in request
2024-02-13 09:58:49 | ERROR | stderr | return session.request(method=method, url=url, **kwargs)
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/requests/sessions.py", line 575, in request
2024-02-13 09:58:49 | ERROR | stderr | prep = self.prepare_request(req)
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/requests/sessions.py", line 486, in prepare_request
2024-02-13 09:58:49 | ERROR | stderr | p.prepare(
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/requests/models.py", line 368, in prepare
2024-02-13 09:58:49 | ERROR | stderr | self.prepare_url(url, params)
2024-02-13 09:58:49 | ERROR | stderr | File "/home/mwang34/anaconda3/envs/llavaplus/lib/python3.10/site-packages/requests/models.py", line 439, in prepare_url
2024-02-13 09:58:49 | ERROR | stderr | raise MissingSchema(
2024-02-13 09:58:49 | ERROR | stderr | requests.exceptions.MissingSchema: Invalid URL '/worker_generate': No scheme supplied. Perhaps you meant https:///worker_generate?
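(For context, MissingSchema just means the URL handed to requests.post has no http:// or https:// prefix. A standalone reproduction, independent of LLaVA-Plus:)

import requests

try:
    # A bare path like the one in the traceback has no scheme, so requests refuses to prepare it.
    requests.post("/worker_generate", json={})
except requests.exceptions.MissingSchema as err:
    print(err)  # Invalid URL '/worker_generate': No scheme supplied. ...

# With a full URL the request is prepared normally (host and port here are only examples):
# requests.post("http://localhost:20003/worker_generate", json={"prompt": "..."})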

@pedramaghazadeh

Make sure the ports and addresses you're specifying for each tool worker, the controller, and the model worker make sense. I also noticed that launching the model worker first and then the tool workers might help with inference.
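Concretely, the MissingSchema above usually means the tool-worker address the web server gets back from the controller is empty or lacks its http:// prefix. A minimal sketch of the kind of guard you could add before the failing requests.post (the function and variable names are illustrative, not the actual code in gradio_web_server_llava_plus.py):

import requests

def post_to_tool_worker(worker_addr: str, payload: dict, timeout: float = 60.0) -> dict:
    # An empty address means no tool worker registered itself with the controller.
    if not worker_addr:
        raise RuntimeError("Controller returned no address for this tool worker.")
    # Ensure the address carries a scheme so requests does not raise MissingSchema.
    if not worker_addr.startswith(("http://", "https://")):
        worker_addr = "http://" + worker_addr
    response = requests.post(worker_addr + "/worker_generate", json=payload, timeout=timeout)
    response.raise_for_status()
    return response.json()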
