
user_bio is None by default, get an error when replacing the character names #5717

Closed
1 task done
Yiximail opened this issue Mar 17, 2024 · 13 comments
Labels
bug Something isn't working

Comments

@Yiximail
Contributor

Yiximail commented Mar 17, 2024

Describe the bug

Using the API /v1/chat/completions (non-stream mode) without user_bio.

The new parameter user_bio in API chat mode raises an error because its default is None:

user_bio: str | None = Field(default=None, description="The user description/personality.")

Then chat.py can't replace the character names correctly:

user_bio=replace_character_names(state['user_bio'], state['name1'], state['name2']),

This raises the AttributeError shown in the Logs section below.

In the web UI, the default is an empty string.

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

Any request to /v1/chat/completions triggers it.

Screenshot

No response

Logs

    text = text.replace('{{user}}', name1).replace('{{char}}', name2)
AttributeError: 'NoneType' object has no attribute 'replace'
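
The logged error is just Python's normal behavior when .replace() is called on None; a minimal standalone reproduction (the replacement names are illustrative):

```python
# Minimal reproduction of the logged error: calling .replace() on None,
# which is what happens when state['user_bio'] defaults to None.
text = None
try:
    text.replace('{{user}}', 'Alice')
except AttributeError as e:
    print(e)  # prints: 'NoneType' object has no attribute 'replace'
```
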

System Info

None
@Yiximail Yiximail added the bug Something isn't working label Mar 17, 2024
@Yiximail
Contributor Author

Yiximail commented Mar 17, 2024

Should we bring all of the WebUI's default values over to the API, or just fix user_bio?

No, I think it's better to keep them None.

@oldgithubman

You should do whatever doesn't break stuff.

INFO:httpx:HTTP Request: POST http://192.168.1.79:5000/v1/chat/completions "HTTP/1.1 200 OK"
Traceback (most recent call last):

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 113, in __iter__
    for part in self._httpcore_stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 367, in __iter__
    raise exc from None

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 363, in __iter__
    for part in self._stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 349, in __iter__
    raise exc

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 341, in __iter__
    for chunk in self._connection._receive_response_body(**kwargs):

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 210, in _receive_response_body
    event = self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 220, in _receive_event
    with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):

  File "/home/j/miniconda3/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc

httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)


The above exception was the direct cause of the following exception:


Traceback (most recent call last):

  File "/home/j/miniconda3/bin/gpte", line 8, in <module>
    sys.exit(app())
             ^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/applications/cli/main.py", line 191, in main
    files_dict = agent.improve(files_dict, prompt)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/applications/cli/cli_agent.py", line 132, in improve
    files_dict = self.improve_fn(
                 ^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/core/default/steps.py", line 172, in improve
    messages = ai.next(messages, step_name=curr_fn())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/core/ai.py", line 118, in next
    response = self.backoff_inference(messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/backoff/_sync.py", line 105, in retry
    ret = target(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/gpt_engineer/core/ai.py", line 162, in backoff_inference
    return self.llm.invoke(messages)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 173, in invoke
    self.generate_prompt(

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 571, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 434, in generate
    raise e

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 424, in generate
    self._generate_with_cache(

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 608, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 455, in _generate
    return generate_from_stream(stream_iter)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 62, in generate_from_stream
    for chunk in stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 419, in _stream
    for chunk in self.client.create(messages=message_dicts, **params):

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 46, in __iter__
    for item in self._iterator:

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 61, in __stream__
    for sse in iterator:

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 53, in _iter_events
    yield from self._decoder.iter(self.response.iter_lines())

  File "/home/j/miniconda3/lib/python3.12/site-packages/openai/_streaming.py", line 287, in iter
    for line in iterator:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 861, in iter_lines
    for text in self.iter_text():

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 848, in iter_text
    for byte_content in self.iter_bytes():

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 829, in iter_bytes
    for raw_bytes in self.iter_raw():

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_models.py", line 883, in iter_raw
    for raw_stream_bytes in self.stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_client.py", line 126, in __iter__
    for chunk in self._stream:

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 112, in __iter__
    with map_httpcore_exceptions():

  File "/home/j/miniconda3/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)

  File "/home/j/miniconda3/lib/python3.12/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc

httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

@ThatCoffeeGuy

Just facing the same issue.

@Nero10578

Same issue. The API no longer works with my previous formatting. So this needs a new user_bio field?

@Yiximail
Contributor Author

Currently, we can simply add an empty string as user_bio to avoid this problem.

Yiximail added a commit to Yiximail/text-generation-webui that referenced this issue Mar 18, 2024
… The default value same as in the webui.
@LoseAustyn

Adding the keys 'user_bio' and 'user_name' to the request data avoids this problem:

{
        "mode": "chat",
        "character": "Ami",
        "messages": history,
        "stop": ["\n", "<|im_end|>"],
        "temperature": 0.8,
        "user_bio": "",
        "user_name": ""
}
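
The payload above can be sent with plain HTTP; a sketch, assuming a local server at http://127.0.0.1:5000 (the endpoint from this issue) and reusing the example values ("Ami", the stop strings) from the snippet:

```python
def build_payload(history):
    """Build the /v1/chat/completions body with explicit empty-string
    defaults for the fields whose None default crashes the server."""
    return {
        "mode": "chat",
        "character": "Ami",            # example character from above
        "messages": history,
        "stop": ["\n", "<|im_end|>"],
        "temperature": 0.8,
        "user_bio": "",                # avoids the NoneType .replace crash
        "user_name": "",
    }

def send_chat(history, url="http://127.0.0.1:5000/v1/chat/completions"):
    import requests  # third-party; pip install requests
    resp = requests.post(url, json=build_payload(history))
    resp.raise_for_status()
    return resp.json()
```
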

@zhenweiding

Facing the same issue too.

@sophosympatheia

I encountered this too and worked around it by adding a sanity check to the replace_character_names function.

def replace_character_names(text, name1, name2):
    if text:
        text = text.replace('{{user}}', name1).replace('{{char}}', name2)
        return text.replace('<USER>', name1).replace('<BOT>', name2)
    else:
        return ""
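
For anyone applying this workaround, a quick standalone sanity check (the function is duplicated so the snippet runs on its own; the names are illustrative):

```python
def replace_character_names(text, name1, name2):
    # Patched version: fall back to "" when text is None or empty.
    if text:
        text = text.replace('{{user}}', name1).replace('{{char}}', name2)
        return text.replace('<USER>', name1).replace('<BOT>', name2)
    else:
        return ""

print(replace_character_names(None, "Alice", "Ami"))            # -> ""
print(replace_character_names("Hi {{user}}!", "Alice", "Ami"))  # -> "Hi Alice!"
```
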

@Yiximail
Contributor Author

(Quoting sophosympatheia's replace_character_names workaround above.)

Yes, it's perfectly handled.

@senchpimy

(Quoting LoseAustyn's request-data workaround above.)

This works when you use "requests"/plain HTTP, but you can't pass these extra keys with the openai library.
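
For what it's worth, newer versions of the openai Python client (>= 1.0) can forward non-standard fields through the extra_body parameter, which merges them into the request JSON; a sketch, assuming a local server at http://127.0.0.1:5000/v1 (the model and character names are placeholders):

```python
# Fields the OpenAI client would otherwise reject; empty strings here
# avoid the server-side NoneType crash.
EXTRA_BODY = {
    "mode": "chat",
    "character": "Ami",
    "user_bio": "",
    "user_name": "",
}

def send_chat(messages, base_url="http://127.0.0.1:5000/v1"):
    from openai import OpenAI  # third-party; pip install openai>=1.0
    client = OpenAI(base_url=base_url, api_key="not-needed")
    return client.chat.completions.create(
        model="local-model",  # placeholder; the webui serves its loaded model
        messages=messages,
        extra_body=EXTRA_BODY,
    )
```
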

@EduardoABG

I'm having this error too when I try to use the openai library with base_url set to http://127.0.0.1:5000/v1.

@Nicolas-Alexander

Same issue. Can confirm with a locally running http://127.0.0.1:5000/v1.
I'll see about implementing one or both of the above fixes as a workaround until it's fixed upstream.

@ViktorAgafonov

I don't know how it should be fixed properly, but I just edited "text-generation-webui/extensions/openai/typing.py" and changed line 110 to user_bio: str | None = Field(default='' ...)

PoetOnTheRun pushed a commit to PoetOnTheRun/text-generation-webui that referenced this issue Oct 22, 2024