
[FEATURE_REQUEST] Custom (OpenAI-compatible) Text Completion API #1722

Closed
tnunamak opened this issue Jan 21, 2024 · 4 comments
Labels
🦄 Feature Request [ISSUE] Suggestion for new feature, update or change

Comments

tnunamak commented Jan 21, 2024

Have you searched for similar requests?

Yes

Is your feature request related to a problem? If so, please describe.

Compiling a complete prompt from chat messages is difficult. SillyTavern already does this well for Text Completion APIs; it would be nice if I could take advantage of that by connecting SillyTavern to my custom text completion API rather than to a custom chat completion API.

Describe the solution you'd like

It is already possible to connect to a Text Generation Web UI Text Completion API, which is OpenAI-compatible. It would be great if we could reuse some of that machinery with a custom endpoint, including the ability to set extra parameters and headers (as in the custom Chat Completion API configuration).
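To make the request concrete, here is a minimal sketch of what such a custom OpenAI-compatible Text Completion call could look like. The header name, extra body parameter, and defaults below are hypothetical placeholders for illustration, not actual SillyTavern settings:

```python
# Sketch: assembling an OpenAI-compatible /v1/completions request with
# user-supplied extra headers and body parameters. The "X-Api-Key" header
# and "repetition_penalty" parameter are hypothetical examples.
import json

def build_completion_request(prompt, extra_headers=None, extra_body=None):
    """Return (headers, json_payload) for an OpenAI-compatible completion call."""
    headers = {"Content-Type": "application/json"}
    headers.update(extra_headers or {})  # custom auth headers, etc.
    body = {
        "prompt": prompt,
        "max_tokens": 300,   # illustrative defaults
        "temperature": 0.7,
    }
    body.update(extra_body or {})  # custom sampler parameters, etc.
    return headers, json.dumps(body)

headers, payload = build_completion_request(
    "Once upon a time",
    extra_headers={"X-Api-Key": "my-secret"},
    extra_body={"repetition_penalty": 1.1},
)
```

The point is that both the headers and the body are open-ended, so a custom endpoint configuration only needs a place to inject key/value pairs into each.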

Describe alternatives you've considered

I tried pretending that my custom API was a Text Generation WebUI API, and it kind of worked, but I couldn't set any additional parameters or headers.

Additional context

The details of my specific use case:

I'm connecting SillyTavern to my custom API, which acts as a kind of middleware in front of Text Generation WebUI, passing text generation requests through to it. SillyTavern can only reach it through my custom API's chat completion endpoint.

SillyTavern sends multiple system messages in its Chat Completion API requests:

[screenshot: a Chat Completion API request containing multiple system messages]

Unfortunately, Text Generation WebUI only includes one system message in the compiled prompt, and drops the others (code). This basically ruins the quality of the responses.
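A middleware-side workaround for the dropped messages could be to merge all system messages into one before forwarding the request. The sketch below assumes OpenAI-style chat message dicts; the merging strategy (joining with blank lines, keeping the merged message first) is an illustrative choice, not anything SillyTavern or Text Generation WebUI does:

```python
# Sketch: collapse multiple system messages into a single one so that a
# backend which keeps only one system message doesn't silently drop the rest.
def merge_system_messages(messages):
    """Merge all 'system' messages into one, preserving the other messages."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    others = [m for m in messages if m["role"] != "system"]
    merged = []
    if system_parts:
        merged.append({"role": "system", "content": "\n\n".join(system_parts)})
    return merged + others

msgs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "system", "content": "Stay in character."},
    {"role": "user", "content": "Hello!"},
]
merged = merge_system_messages(msgs)
# → one combined system message followed by the user message
```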

Priority

Medium (Would be very useful)

Is this something you would be keen to implement?

None

@tnunamak tnunamak added the 🦄 Feature Request [ISSUE] Suggestion for new feature, update or change label Jan 21, 2024
@Cohee1207
Member

Duplicate #1627
TL;DR: use Default (ooba) type, bypass status check and a type-in model (if needed)


@tnunamak
Author

@Cohee1207 thanks for that link. Is there a way to replicate Include Body Parameters via config.yaml?

@houmie

houmie commented Nov 11, 2024

@tnunamak have you found a way? I have the exact same issue. My local API requires an API key that I need to pass in via headers. But how? Thanks
