Custom OpenAI provider: Handshake failed with status: 400 #76

Closed
smkrv opened this issue Oct 30, 2024 · 8 comments

@smkrv

smkrv commented Oct 30, 2024

Hello! I just wanted to express my heartfelt thanks for the incredible work you’ve done!

I am trying to connect to the OpenAI proxy, i.e., setting up the integration through “Configure Custom OpenAI provider,” but in the logs, I always see:

2024-10-31 02:04:42.021 ERROR (MainThread) [custom_components.llmvision.config_flow] Handshake failed with status: 400
2024-10-31 02:04:42.022 ERROR (MainThread) [custom_components.llmvision.config_flow] Could not connect to Custom OpenAI server.
2024-10-31 02:04:42.022 ERROR (MainThread) [custom_components.llmvision.config_flow] Validation failed: handshake_failed

I can't figure out what the error is, since a standard OpenAI-compatible authorization method is being used.

Here’s an example curl:

~ % curl https://CUSTOM_ENDPOINT/openai/v1/chat/completions \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $API-KEY" \
        -d '{
            "model": "gpt-4-turbo",
            "messages": [{"role": "user", "content": "Say this is a test!"}]
        }'

    {
      "id": "chatcmpl-AOC91bm3wA20UchF3iLiVFuZyZaEM",
      "object": "chat.completion",
      "created": 1730329919,
      "model": "gpt-4-turbo-2024-04-09",
      "choices": [
        {
          "index": 0,
          "message": {
            "role": "assistant",
            "content": "This is a test!",
            "refusal": null
          },
          "logprobs": null,
          "finish_reason": "stop"
        }
      ],
      "usage": {
        "prompt_tokens": 13,
        "completion_tokens": 5,
        "total_tokens": 18,
        "prompt_tokens_details": {
          "cached_tokens": 0
        },
        "completion_tokens_details": {
          "reasoning_tokens": 0
        }
      },
      "system_fingerprint": "fp_5db30363dd"
    }

Additionally, everything works fine with the official OpenAI library for Python:

from openai import OpenAI

client = OpenAI(
    api_key="{PROXY_API_KEY}",
    base_url="https://CUSTOM_ENDPOINT/openai/v1/chat/completions",
)

chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello world"}]
)

I would appreciate any help. Thank you in advance!

@smkrv smkrv added the bug Something isn't working label Oct 30, 2024
@valentinfrlch
Owner

valentinfrlch commented Oct 31, 2024

You're welcome!
What did you enter for CUSTOM_ENDPOINT? The problem is likely the /openai base_url. For reference, the official endpoint is:
https://api.openai.com/v1/chat/completions. You likely need to include the /openai as part of your custom_endpoint.
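
For illustration, here is a minimal sketch (the endpoint value is a placeholder, not the integration's actual configuration) of how the path prefix and the OpenAI-compatible path should combine:

# Hypothetical illustration: the custom endpoint should carry the extra
# /openai prefix so the full request URL resolves correctly on the proxy.
base = "https://CUSTOM_ENDPOINT/openai"  # endpoint entered in the integration (placeholder)
path = "/v1/chat/completions"            # OpenAI-compatible chat completions path
full_url = base.rstrip("/") + path
print(full_url)  # https://CUSTOM_ENDPOINT/openai/v1/chat/completions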

@smkrv
Author

smkrv commented Oct 31, 2024

> You're welcome!
> What did you enter for CUSTOM_ENDPOINT? The problem is likely the /openai base_url. For reference, the official endpoint is:
> https://api.openai.com/v1/chat/completions. You likely need to include the /openai as part of your custom_endpoint.

Thank you for the response! I think I was able to identify the issue: no matter what path I specify after the host, the requests always go to https://api.proxy***.net/v1/models, meaning that the "openai" segment is always stripped off. Do you have any ideas on how to fix this, or how I can set it up manually?

2024-10-31 13:23:15.025 DEBUG (MainThread) [custom_components.llmvision.config_flow] Connecting to: [protocol: https, base_url: api.proxy***.net, port: , endpoint: /v1/models]
2024-10-31 13:23:15.025 DEBUG (MainThread) [custom_components.llmvision.config_flow] Connecting to https://api.proxy***.net/v1/models
2024-10-31 16:52:33.467 DEBUG (MainThread) [custom_components.llmvision.config_flow] Connecting to: [protocol: https, base_url: api.proxy***.net, port: , endpoint: /v1/models]
2024-10-31 16:52:33.468 DEBUG (MainThread) [custom_components.llmvision.config_flow] Connecting to https://api.proxy***.net/v1/models
2024-10-31 16:56:35.023 DEBUG (MainThread) [custom_components.llmvision.config_flow] Connecting to: [protocol: https, base_url: api.proxy***.net, port: , endpoint: /v1/models]
2024-10-31 16:56:35.023 DEBUG (MainThread) [custom_components.llmvision.config_flow] Connecting to https://api.proxy***.net/v1/models
2024-10-31 16:58:53.970 DEBUG (MainThread) [custom_components.llmvision.config_flow] Connecting to: [protocol: https, base_url: api.proxy***.net, port: , endpoint: /v1/models]
2024-10-31 16:58:53.971 DEBUG (MainThread) [custom_components.llmvision.config_flow] Connecting to https://api.proxy***.net/v1/models
[Screenshot: SCR-20241031-ovme]
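
A rough sketch of what the logs suggest is happening (a guess at the mechanism, not the integration's actual code): if only the scheme and host are kept when the configured URL is parsed, the /openai prefix is dropped.

from urllib.parse import urlparse

# Guess at the behaviour seen above: keeping only scheme and host drops the
# /openai path prefix, so the handshake always targets <host>/v1/models.
url = "https://api.proxy.example/openai"  # placeholder for the real proxy URL
parts = urlparse(url)
handshake = f"{parts.scheme}://{parts.netloc}/v1/models"
print(handshake)  # https://api.proxy.example/v1/models -- prefix lost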

@valentinfrlch
Owner

Thanks for including the logs. I checked the code and you're right: the URL is split up and the endpoint you provided is ignored. For now there is nothing you can do about this, but I have already changed it and the fix will be in the next version. If you want, you can help test the beta and give feedback.

valentinfrlch added a commit that referenced this issue Oct 31, 2024
@valentinfrlch
Owner

I have pushed v1.3 beta 2. If you want to test it and provide feedback, that would be much appreciated. You can find the full changelog here: https://github.com/valentinfrlch/ha-llmvision/releases/tag/v1.3-beta.2
The path in the URL should no longer be ignored.
Let me know if this fixes your issue.
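
As a quick sanity check against the beta, a request to the models endpoint with the path prefix included should now be reachable; the URL and key below are placeholders:

import requests

# Placeholder endpoint and key; with the path preserved, the handshake target
# should be <endpoint>/v1/models rather than <host>/v1/models.
endpoint = "https://CUSTOM_ENDPOINT/openai"
response = requests.get(
    f"{endpoint}/v1/models",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=10,
)
print(response.status_code)  # expect 200 if the proxy accepts the key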

@smkrv
Author

smkrv commented Nov 1, 2024

@valentinfrlch, thank you for the prompt resolution of the problem! Yes, I will definitely install the second pre-release beta today, test it, and provide feedback.

@smkrv
Author

smkrv commented Nov 1, 2024

Thank you! Everything works great with the Blueprint from the second branch. I'll wait for new versions and test further.
P.S.: Regarding the Blueprint, it would be great to expand it with some of the functions from https://github.com/SgtBatten/HA_blueprints/blob/main/Frigate_Camera_Notifications/Stable.yaml, but without overloading it :)

@valentinfrlch
Owner

valentinfrlch commented Nov 1, 2024

That's great to hear!
There are some good ideas for new features in the forum and I'd love to add them all, but you're right: it's a fine line between useful features and an overloaded mess. I might make an 'advanced' version of the blueprint that has everything, but that's just an idea.

I have seen the blueprint you attached before and have been in contact with the developer. Maybe he'd be interested in adding LLM Vision support to his blueprint, as it is quite a bit more advanced than the llmvision blueprint currently is.

Also: Have you had a chance to test the 'remember' feature yet? It's the biggest change/addition in this release and I haven't heard from others how well it works yet. Would be great to get some feedback before the launch.

@valentinfrlch
Owner

Closing this as it appears to be fixed in v1.3. Feel free to reopen this, should you experience this issue again.
