
Mistral Support #224

Closed

ASH1998 opened this issue Nov 26, 2023 · 2 comments

ASH1998 commented Nov 26, 2023

I am not able to load a custom model fine-tuned from the Zephyr 7B model. I suppose this is not supported yet; is there any plan to support it?

~/lightllm$ python -m lightllm.server.api_server --model_dir ~/model_path/ --tokenizer_mode auto --max_total_token_num 100000 --tp 2 --max_req_input_len 5000 --max_req_total_len 8000

Error:

################
load model error: can not support mistral now can not support mistral now <class 'Exception'>
Traceback (most recent call last):
  File "lightllm/lightllm/server/router/model_infer/model_rpc.py", line 110, in exposed_init_model
    raise Exception(f"can not support {self.model_type} now")
Exception: can not support mistral now
################
load model error: can not support mistral now can not support mistral now <class 'Exception'>
Traceback (most recent call last):
  File "/lightllm/lightllm/server/router/model_infer/model_rpc.py", line 110, in exposed_init_model
    raise Exception(f"can not support {self.model_type} now")
Exception: can not support mistral now
router init state: Traceback (most recent call last):

  File "/lightllm/lightllm/server/router/manager.py", line 291, in start_router_process
    asyncio.run(router.wait_to_model_ready())

  File "/opt/conda/envs/llmserver/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)

  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete

  File "/lightllm/lightllm/server/router/manager.py", line 74, in wait_to_model_ready
    await asyncio.gather(*init_model_ret)

  File "/lightllm/lightllm/server/router/model_infer/model_rpc.py", line 267, in init_model
    await ans

  File "/lightllm/lightllm/server/router/model_infer/model_rpc.py", line 243, in _func
    return ans.value

  File "/opt/conda/envs/llmserver/lib/python3.10/site-packages/rpyc-5.3.1-py3.10.egg/rpyc/core/async_.py", line 108, in value
    raise self._obj

_get_exception_class.<locals>.Derived: can not support mistral now

========= Remote Traceback (1) =========
Traceback (most recent call last):
  File "/opt/conda/envs/llmserver/lib/python3.10/site-packages/rpyc-5.3.1-py3.10.egg/rpyc/core/protocol.py", line 359, in _dispatch_request
    res = self._HANDLERS[handler](self, *args)
  File "/opt/conda/envs/llmserver/lib/python3.10/site-packages/rpyc-5.3.1-py3.10.egg/rpyc/core/protocol.py", line 837, in _handle_call
    return obj(*args, **dict(kwargs))
  File "/lightllm/lightllm/server/router/model_infer/model_rpc.py", line 116, in exposed_init_model
    raise e
  File "/lightllm/lightllm/server/router/model_infer/model_rpc.py", line 110, in exposed_init_model
    raise Exception(f"can not support {self.model_type} now")
Exception: can not support mistral now

detoken init state: init ok
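
For context: the traceback shows `exposed_init_model` raising because the checkpoint's model type is not in the set lightllm knows how to load. A minimal sketch of that kind of gate, assuming the type is read from the checkpoint's `config.json` (the `SUPPORTED` set and the function name are illustrative, not lightllm's actual code):

```python
# Illustrative sketch only; SUPPORTED and check_model_type are assumed
# names, not lightllm's real registry or API.
import json
import os

SUPPORTED = {"llama", "llama2", "bloom"}  # note: no "mistral" entry

def check_model_type(model_dir: str) -> str:
    with open(os.path.join(model_dir, "config.json")) as f:
        model_type = json.load(f)["model_type"]
    if model_type not in SUPPORTED:
        # The branch firing at model_rpc.py line 110 in the traceback above.
        raise Exception(f"can not support {model_type} now")
    return model_type
```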
ASH1998 (Author) commented Nov 26, 2023

Also, if anyone can point me to another library I can use, other than HF TGI, to host this LLM, thanks.

hiworldwzj (Collaborator) commented

@ASH1998 We will support this model soon.
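
Zephyr 7B is a fine-tune of Mistral-7B, so its Hugging Face checkpoint declares `"model_type": "mistral"` in `config.json`; that is the value the loader rejects above. Until support lands, a quick stdlib-only way to confirm what a given checkpoint reports (a standalone helper script, not part of lightllm):

```python
# Print the model_type a checkpoint declares; a Zephyr/Mistral
# fine-tune reports "mistral".
import json
import sys

with open(f"{sys.argv[1]}/config.json") as f:
    print(json.load(f)["model_type"])
```

Run it as, e.g., `python check_type.py ~/model_path/` (the script name is arbitrary).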
