
generate_text_func: support finish_reason=TOKEN_LIMIT #253

Open
dtrifiro opened this issue Oct 30, 2023 · 0 comments
dtrifiro (Contributor) commented Oct 30, 2023

generate_text_func currently does not return finish_reason=TOKEN_LIMIT when the model's token limit is reached:

TOKEN_LIMIT refers to the maximum number of tokens defined by the model, whereas MAX_TOKENS refers to the maximum number defined by the user. This means TOKEN_LIMIT can be reached before MAX_TOKENS.

Originally posted by @gkumbhat in #210 (comment)
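A minimal sketch of the distinction being requested, assuming hypothetical names (FinishReason, resolve_finish_reason, and the parameter names are illustrative, not the actual caikit API): the model-defined limit must be checked before the user-defined one, since the model limit can be hit first.

```python
from enum import Enum


class FinishReason(Enum):
    # Hypothetical enum mirroring the two limits discussed above.
    EOS_TOKEN = "EOS_TOKEN"      # model emitted an end-of-sequence token
    TOKEN_LIMIT = "TOKEN_LIMIT"  # model-defined maximum token limit reached
    MAX_TOKENS = "MAX_TOKENS"    # user-requested max_new_tokens reached


def resolve_finish_reason(
    generated_tokens: int,
    user_max_new_tokens: int,
    model_token_limit: int,
    hit_eos: bool,
) -> FinishReason:
    """Pick a finish reason after generation stops.

    The model limit is checked before the user limit, because
    TOKEN_LIMIT can be reached before MAX_TOKENS.
    """
    if hit_eos:
        return FinishReason.EOS_TOKEN
    if generated_tokens >= model_token_limit:
        return FinishReason.TOKEN_LIMIT
    if generated_tokens >= user_max_new_tokens:
        return FinishReason.MAX_TOKENS
    # Fallback: generation ended without hitting either limit.
    return FinishReason.EOS_TOKEN
```

For example, with a model limit of 2048 tokens and a user request for 4096 new tokens, stopping at 2048 generated tokens should report TOKEN_LIMIT, not MAX_TOKENS.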
