How to get the usage of "CompletionResponseUsage" returned by OpenAI API? #477
Comments
@GeorgeGalway have you tried the …?
Hmmm yeah, the …
I just did a quick test w/ the `stream` option. I may be looking in the wrong place, but I think we just don't get this info from OpenAI in the stream version.
It's always a …
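For what it's worth, the non-streaming path does return the usage block, so it's only the streaming path that loses it. Here is a minimal sketch against the raw OpenAI REST endpoint rather than this library's wrapper (the model name and env var are assumptions), showing where `usage` shows up:

```ts
// Sketch only: a non-streaming chat completion includes a `usage` object;
// with `stream: true` the SSE chunks carry `choices[].delta` and no usage field.
const apiKey = process.env.OPENAI_API_KEY; // assumed to be set

async function completionWithUsage(prompt: string) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
      stream: false, // usage is only populated on non-streaming responses
    }),
  });

  const json = await res.json();
  // e.g. { prompt_tokens: 12, completion_tokens: 34, total_tokens: 46 }
  console.log(json.usage);
  return json;
}
```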
Perhaps we can at least know the number of tokens that were sent?
I can calculate the number of characters from the returned result, and I can also see the sent tokens in the log, but I cannot retrieve the number of sent tokens from the …
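One way to at least approximate the sent tokens on the client is to run the messages through a GPT tokenizer before calling the API. A rough sketch assuming the `gpt-3-encoder` package (it uses the GPT-3 BPE, so counts for newer chat models are approximate; the per-message overhead follows the OpenAI cookbook's counting recipe):

```ts
import { encode } from 'gpt-3-encoder';

// Rough prompt-token estimate: tokens in each message's content plus a small
// per-message overhead for the chat formatting, plus reply-priming tokens.
function estimatePromptTokens(messages: { role: string; content: string }[]): number {
  const perMessageOverhead = 4; // approximation from the cookbook's recipe
  const replyPriming = 3;       // tokens that prime the assistant's reply
  return messages.reduce(
    (sum, m) => sum + encode(m.content).length + perMessageOverhead,
    replyPriming
  );
}

console.log(estimatePromptTokens([{ role: 'user', content: 'Hello there!' }]));
```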
Hmmm, that'd be nice, yeah. There are … It might be nice to expose this in a cleaner way, but I don't have time; PRs welcome :)
According to the OpenAI cookbook, usage info is not available when streaming.
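Following that guidance, the usual workaround for streamed completions is to build the usage numbers locally: tokenize the prompt before sending, accumulate the streamed text, then tokenize that as well. A sketch in the same spirit (the field names mirror `CompletionResponseUsage`, but the values are client-side estimates, not something returned by the API):

```ts
import { encode } from 'gpt-3-encoder';

// Locally estimated usage for a streamed chat completion.
// `completionText` is the concatenation of every streamed `delta.content` chunk.
function estimateUsage(
  messages: { role: string; content: string }[],
  completionText: string
) {
  // Rough prompt count: content tokens plus ~4 tokens of chat formatting per
  // message, plus ~3 tokens that prime the reply (cookbook approximation).
  const prompt_tokens = messages.reduce(
    (sum, m) => sum + encode(m.content).length + 4,
    3
  );
  const completion_tokens = encode(completionText).length;
  return {
    prompt_tokens,
    completion_tokens,
    total_tokens: prompt_tokens + completion_tokens,
  };
}
```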
Guys, please take a look at my proposal: #507
Describe the feature

`stream` mode currently does not include `prompt_tokens`, `completion_tokens`, and `total_tokens`. Would it be possible to add these features?