Automatically clear the conversation context #41

Open
emperorjoker opened this issue Mar 7, 2023 · 4 comments

@emperorjoker

> This model's maximum context length is 4096 tokens. However, your messages resulted in 4119 tokens. Please reduce the length of the messages. Sorry, I am not feeling well. Please try again.

After exceeding the token limit, the only way to restore function is to run /clear to wipe the conversation context.

Since that seems to be the only option, could the bot automatically execute the /clear command when a message sent after reaching the limit fails?
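
A minimal sketch of that auto-clear-and-retry behavior, assuming the pre-1.0 `openai` Python library and a hypothetical in-memory `context` list (the bot's real internals may differ):

```python
import openai

def chat_with_auto_clear(context, user_message):
    """Send a message; if the context window is exceeded, clear and retry once."""
    context.append({"role": "user", "content": user_message})
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=context,
        )
    except openai.error.InvalidRequestError as e:
        if "maximum context length" not in str(e):
            raise
        # Same effect as the user running /clear manually, then retry.
        context.clear()
        context.append({"role": "user", "content": user_message})
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=context,
        )
    reply = response["choices"][0]["message"]["content"]
    context.append({"role": "assistant", "content": reply})
    return reply
```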

@peteisacat

Or automatically delete the earliest messages in the conversation?
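
A sliding-window sketch of that idea, assuming `tiktoken` for token counting (the per-message overhead and reply headroom values here are guesses):

```python
import tiktoken

MAX_CONTEXT_TOKENS = 4096
REPLY_HEADROOM = 512  # tokens reserved for the model's answer (assumed value)

def trim_context(messages, model="gpt-3.5-turbo"):
    """Drop the oldest messages until the conversation fits the context window."""
    encoding = tiktoken.encoding_for_model(model)

    def count_tokens(msgs):
        # Rough estimate: content tokens plus a few tokens of per-message
        # overhead; the exact ChatML framing cost differs slightly by model.
        return sum(len(encoding.encode(m["content"])) + 4 for m in msgs)

    while len(messages) > 1 and count_tokens(messages) > MAX_CONTEXT_TOKENS - REPLY_HEADROOM:
        messages.pop(0)  # delete the earliest message first
    return messages
```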

@flynnoct
Owner

flynnoct commented Mar 7, 2023

We will implement this later.

@flynnoct flynnoct added the enhancement New feature or request label Mar 7, 2023
@em108

em108 commented Mar 8, 2023

I think this could also be done with embeddings. An added advantage is that embeddings can store days' worth of conversations if you need long-term memory, and they can bring conversation costs down by about 5x depending on usage. I've been reading up on ways to implement it from these sources (a sketch follows the links below):

Article:
https://towardsdatascience.com/generative-question-answering-with-long-term-memory-c280e237b144

Implementation examples:
https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb

There's also a guide on Pinecone, a site one can use for the vector database:
https://docs.pinecone.io/docs/openai
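
For concreteness, a minimal retrieval sketch along the lines of those guides, assuming the pre-1.0 `openai` library and a plain in-memory list standing in for a real vector store like Pinecone (all names here are illustrative):

```python
import numpy as np
import openai

# In-memory stand-in for a vector database such as Pinecone.
memory = []  # list of (embedding, text) pairs

def embed(text):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def remember(text):
    """Store a finished exchange for later retrieval."""
    memory.append((embed(text), text))

def recall(query, k=3):
    """Return the k stored snippets most similar to the query."""
    q = embed(query)

    def cosine(v):
        return float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q)))

    ranked = sorted(memory, key=lambda item: cosine(item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

# Only the few recalled snippets (plus the new question) are sent with each
# request instead of the whole history, which is where the cost saving comes from.
```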

@emperorjoker
Author

> I think this could also be done with embeddings. An added advantage is that embeddings can store days' worth of conversations if you need long-term memory, and they can bring conversation costs down by about 5x depending on usage. I've been reading up on ways to implement it from these sources

This seems like a much better and more elegant solution.
