
BUG: Launching chatglm2-6b-32k fails with "got multiple values for keyword argument 'trust_remote_code'"; the model is already cached #498

Closed
tjudadc opened this issue Sep 27, 2023 · 5 comments · Fixed by #500
Labels
bug Something isn't working
Milestone

Comments

@tjudadc

tjudadc commented Sep 27, 2023

Describe the bug

A clear and concise description of what the bug is.
[screenshot of the error attached]

To Reproduce

To help us to reproduce this bug, please provide information below:

  1. Your Python version.
  2. The version of xinference you use.
  3. Versions of crucial packages.
  4. Full stack of the error.
  5. Minimized code to reproduce the error.

Expected behavior

A clear and concise description of what you expected to happen.

Additional context

Add any other context about the problem here.

@XprobeBot XprobeBot added this to the v0.5.1 milestone Sep 27, 2023
@UranusSeven
Contributor

Hi, thanks for reporting this issue! We are aware of the problem and will fix it ASAP.

A workaround for this issue is to downgrade to v0.4.4.

I will let you know once this problem has been resolved.

@UranusSeven UranusSeven changed the title BUG - Launching chatglm2-6b-32k fails with "got multiple values for keyword argument 'trust_remote_code'"; the model is already cached BUG: Launching chatglm2-6b-32k fails with "got multiple values for keyword argument 'trust_remote_code'"; the model is already cached Sep 27, 2023
@XprobeBot XprobeBot added the bug Something isn't working label Sep 27, 2023
@XprobeBot XprobeBot modified the milestones: v0.5.1, v0.5.2 Sep 27, 2023
@UranusSeven UranusSeven linked a pull request Sep 27, 2023 that will close this issue
@UranusSeven
Contributor

Hi! This issue has been resolved in v0.5.2. Please upgrade xinference by pip install -U xinference and try again!

@richzw
Contributor

richzw commented Oct 18, 2023

@UranusSeven , I got the same error under version 0.5.3.

File "/home/ec2-user/.local/lib/python3.9/site-packages/xinference/core/restful_api.py", line 419, in launch_model
    model_uid = await self._supervisor_ref.launch_builtin_model(
  File "xoscar/core.pyx", line 288, in __pyx_actor_method_wrapper
  File "xoscar/core.pyx", line 422, in _handle_actor_result
  File "xoscar/core.pyx", line 465, in _run_actor_async_generator
  File "xoscar/core.pyx", line 466, in xoscar.core._BaseActor._run_actor_async_generator
  File "xoscar/core.pyx", line 471, in xoscar.core._BaseActor._run_actor_async_generator
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xinference/core/supervisor.py", line 249, in launch_builtin_model
    yield _launch_one_model(rep_model_uid)
  File "xoscar/core.pyx", line 476, in xoscar.core._BaseActor._run_actor_async_generator
  File "xoscar/core.pyx", line 422, in _handle_actor_result
  File "xoscar/core.pyx", line 465, in _run_actor_async_generator
  File "xoscar/core.pyx", line 466, in xoscar.core._BaseActor._run_actor_async_generator
  File "xoscar/core.pyx", line 471, in xoscar.core._BaseActor._run_actor_async_generator
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xinference/core/supervisor.py", line 223, in _launch_one_model
    yield worker_ref.launch_builtin_model(
  File "xoscar/core.pyx", line 476, in xoscar.core._BaseActor._run_actor_async_generator
  File "xoscar/core.pyx", line 396, in _handle_actor_result
  File "xoscar/core.pyx", line 284, in __pyx_actor_method_wrapper
  File "xoscar/core.pyx", line 287, in xoscar.core.__pyx_actor_method_wrapper
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xinference/core/utils.py", line 27, in wrapped
    ret = await func(*args, **kwargs)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xinference/core/worker.py", line 187, in launch_builtin_model
    await model_ref.load()
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xoscar/backends/context.py", line 227, in send
    return self._process_result_message(result)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xoscar/backends/pool.py", line 657, in send
    result = await self._run_coro(message.message_id, coro)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xoscar/backends/pool.py", line 368, in _run_coro
    return await coro
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xoscar/api.py", line 306, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
  File "xoscar/core.pyx", line 558, in __on_receive__
  File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.__on_receive__
  File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.__on_receive__
  File "xoscar/core.pyx", line 524, in xoscar.core._BaseActor.__on_receive__
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xinference/core/model.py", line 117, in load
    self._model.load()
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xinference/model/llm/pytorch/core.py", line 180, in load
    self._model, self._tokenizer = self._load_model(kwargs)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/xinference/model/llm/pytorch/falcon.py", line 113, in _load_model
    model = AutoModelForCausalLM.from_pretrained(
TypeError: [address=127.0.0.1:40739, pid=8715] transformers.models.auto.auto_factory._BaseAutoModelClass.from_pretrained() got multiple values for keyword argument 'trust_remote_code'

Is there anything I am missing?

The command line is: xinference launch --model-name "falcon-instruct" --model-format pytorch --size-in-billions 7 --quantization 8-bit
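The TypeError in the traceback comes from a general Python calling pattern rather than from the model itself: a keyword argument is passed explicitly while the same key also sits in a forwarded kwargs dict. A minimal sketch of the failure mode (the `from_pretrained` stub below is a stand-in I wrote for illustration, not xinference's or transformers' actual code):

```python
# Stand-in for transformers' _BaseAutoModelClass.from_pretrained(), illustrative only.
def from_pretrained(model_path, trust_remote_code=False, **extra):
    return (model_path, trust_remote_code, extra)

# The caller's kwargs dict already carries the key that is also passed explicitly.
kwargs = {"trust_remote_code": True, "torch_dtype": "float16"}

try:
    # Explicit keyword + the same key inside **kwargs -> TypeError
    from_pretrained("falcon-7b-instruct", trust_remote_code=True, **kwargs)
except TypeError as e:
    print(e)  # ... got multiple values for keyword argument 'trust_remote_code'

# A common fix for this pattern: pass the key only once, e.g. let the
# forwarded dict own it (setdefault keeps an existing value).
kwargs.setdefault("trust_remote_code", True)
result = from_pretrained("falcon-7b-instruct", **kwargs)
```

Whether the key should be popped from the dict or dropped from the explicit call depends on which value must win; either way it has to appear exactly once in the call.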

@richzw
Contributor

richzw commented Oct 18, 2023

@UranusSeven , I tried to fix it following your previous fix. Could you please review it?

@UranusSeven
Contributor

@richzw Sure. Thanks for your contribution :)
