
Openai embedding fix to support jina-embeddings-v2 #4642

Merged
oobabooga merged 11 commits into oobabooga:dev on Nov 18, 2023

Conversation

wizd (Contributor, Author) commented Nov 18, 2023

Tested embedding models:

jinaai/jina-embeddings-v2-base-en
BAAI/bge-large-zh-v1.5
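The models above are served through the webui's OpenAI-compatible /v1/embeddings route; a minimal client sketch follows (the local URL, port, and the exact request shape here are assumptions, not taken from the PR itself):

```python
# Hypothetical request builder for text-generation-webui's OpenAI-compatible
# embeddings endpoint. The model name matches the PR's test list; the URL
# and port below are assumed defaults.
def embeddings_payload(texts, model="jinaai/jina-embeddings-v2-base-en"):
    # The OpenAI embeddings schema accepts either a string or a list of
    # strings as "input".
    return {"input": texts, "model": model}

payload = embeddings_payload(["hello world"])
# Sending it would look roughly like (commented out; needs a running server):
# import requests
# resp = requests.post("http://127.0.0.1:5000/v1/embeddings", json=payload)
# vectors = [item["embedding"] for item in resp.json()["data"]]
```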

@@ -235,7 +235,7 @@ def chat_completions_common(body: dict, is_legacy: bool = False, stream=False) -

     max_tokens = generate_params['max_new_tokens']
     if max_tokens in [None, 0]:
-        generate_params['max_new_tokens'] = 4096
+        generate_params['max_new_tokens'] = 200
Owner

Setting this high isn't necessary, as auto_max_new_tokens already fills the remaining context. The 200 reference value is only used when the context is fully used, to decide how many old messages to remove.
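The fallback discussed above can be sketched as a small helper (the function name is hypothetical; the values and condition follow the diff hunk):

```python
# Sketch of the max_new_tokens fallback from the diff. A small default (200)
# suffices because auto_max_new_tokens fills the remaining context; 200 is
# only the reference value used when the context is full and old messages
# must be dropped.
def resolve_max_new_tokens(requested_max_tokens, default=200):
    """Return the effective max_new_tokens for a chat completion request."""
    if requested_max_tokens in [None, 0]:
        return default
    return requested_max_tokens
```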

oobabooga (Owner) commented:

Looks good, thanks

@oobabooga oobabooga changed the base branch from main to dev November 18, 2023 23:24
@oobabooga oobabooga merged commit af76fbe into oobabooga:dev Nov 18, 2023
yhyu13 (Contributor) commented Nov 20, 2023

@wizd Does the Jina embedding model run on CUDA devices? Other sentence-transformer embeddings always stay on the CPU, even when cuda is specified.
