
[InferenceClient] Add support for adapter_id (text-generation) and response_format (chat-completion) #2383

Merged
merged 12 commits into from
Jul 16, 2024

Conversation

Wauplin
Contributor

@Wauplin Wauplin commented Jul 9, 2024

This PR updates the inference client and types following the latest TGI updates:

  • adds adapter_id to text-generation to choose which LoRA adapter should be loaded (cc @datavistics)
  • adds response_format to chat_completion to constrain the response format with either a regex or a JSON schema (cc @aymeric-roucher)
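The two additions above can be sketched as plain request payloads. This is a minimal illustration only: the dict shapes, the helper names, and the `"regex"` / `"json"` type strings are assumptions based on the PR description, not the library's verified API.

```python
# Hypothetical helpers illustrating the payload shapes the two new
# parameters map to; names and structures are assumptions, not the
# actual huggingface_hub / TGI implementation.
from typing import Any, Dict


def build_text_generation_payload(prompt: str, adapter_id: str) -> Dict[str, Any]:
    # adapter_id tells TGI which LoRA adapter to load for this request
    return {"inputs": prompt, "parameters": {"adapter_id": adapter_id}}


def build_response_format(kind: str, value: Any) -> Dict[str, Any]:
    # response_format constrains the chat-completion output to a regex
    # or a JSON schema; the accepted type names are assumed here
    if kind not in ("regex", "json"):
        raise ValueError(f"unsupported response_format type: {kind}")
    return {"type": kind, "value": value}


payload = build_text_generation_payload("Hello", adapter_id="my-user/my-lora")
fmt = build_response_format("regex", r"\d{4}")
```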

@Wauplin Wauplin requested a review from LysandreJik July 9, 2024 12:52
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

@LysandreJik LysandreJik left a comment


Thanks for the PR @Wauplin, these seem like sensible changes to me!

@Wauplin
Contributor Author

Wauplin commented Jul 16, 2024

Finally managed to fix the merge conflict + tests 😄 I'll merge now since the last failing test is unrelated.

@Wauplin Wauplin merged commit 36396f1 into main Jul 16, 2024
14 of 17 checks passed
@Wauplin Wauplin deleted the update-tgi-types branch July 16, 2024 16:00
@aymeric-roucher

aymeric-roucher commented Jul 25, 2024

@Wauplin I'm trying this with the following code:

```python
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3.1-70B-Instruct")

client.chat_completion([{"role": "user", "content": "ok"}], response_format={"type": "regex", "value": "*"})
```

and I get this error:

```
HfHubHTTPError: 424 Client Error: Failed Dependency for url: https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3.1-70B-Instruct/v1/chat/completions (Request ID: _ELWpQiWQT37RR-b07aAr)

Request failed during generation: Server error:
```
This seems to be due to the regex being invalid. Would it be possible for the server to return a more explicit error in case of an incorrect regex?
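Until the server surfaces a clearer message, one workaround is to pre-validate the pattern on the client with Python's standard `re` module before sending the request. The `validate_regex` helper below is a hypothetical sketch, not part of huggingface_hub.

```python
# Minimal client-side check (stdlib only) that catches invalid patterns
# such as "*" before the request ever reaches the inference server.
import re


def validate_regex(pattern: str) -> None:
    # re.compile raises re.error on malformed patterns; re-raise as a
    # ValueError with a human-readable message
    try:
        re.compile(pattern)
    except re.error as exc:
        raise ValueError(f"invalid regex for response_format: {exc}") from exc


validate_regex(".+?\\nCode")   # valid pattern: no exception raised
try:
    validate_regex("*")        # "*" has nothing to repeat, so it is rejected
except ValueError as err:
    print(err)
```

Note that this only catches patterns Python's `re` rejects; TGI's regex engine may differ, so a pattern passing this check can still fail server-side.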

@aymeric-roucher

aymeric-roucher commented Jul 25, 2024

Also, constrained generation does not seem to work for me on chat_completion.

Example:

```python
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3.1-70B-Instruct")
client.chat_completion([{"role": "user", "content": "ok"}], response_format={"type": "regex", "value": ".+?\nCode"}).choices[0].message.content
```

gives this output:

```
"It seems like you're ready to start a conversation. What's on your mind? Want to talk about something specific or ask a question? I'm all ears! If not, I can suggest some conversation topics. Would you like to hear some suggestions? Just let me know! 😊criptions? 🤔 Would you like me to suggest some conversation topics? I'M HERE TO HELP! 😊 Just let me know! 🤗)ns topics? 😁)ns? Boise"
```

Which does not respect the regex, while:

```python
client.text_generation("ok", grammar={"type": "regex", "value": ".+?\nCode"})
```

gives a correct output:

```
'Question and the answer is: $\\boxed{0}$.\nCode'
```
