Delete and re-index docs when embedding model changes #137
Merged
Conversation
dlqqq
requested changes
May 2, 2023
Nice! When a user changes the embedding model and re-indexing is triggered, let's make sure Jupyter AI sends a chat message once re-indexing is done.
Looks like you are already doing that, great work!
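The behavior the reviewer asks for can be sketched roughly as follows. This is a hypothetical illustration, not Jupyter AI's actual handler API; `reindex_all` and `send_chat_message` are stand-in names for the real indexing and chat-reply machinery.

```python
# Hypothetical sketch of the requested behavior: announce the re-index in chat,
# run it, then confirm completion. The callables are stand-ins for the real
# indexing actor and chat-reply mechanism, not Jupyter AI's actual API.
def on_embedding_model_change(new_model: str, reindex_all, send_chat_message) -> None:
    send_chat_message(
        f"The embedding model was changed to {new_model}. "
        "Re-indexing previously learned directories..."
    )
    reindex_all(new_model)
    send_chat_message("Re-indexing is complete; learned documents are ready to query again.")
```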
dlqqq
approved these changes
May 3, 2023
dlqqq
added a commit
that referenced
this pull request
May 5, 2023
* Refactored provider load, decompose logic, aded model provider list api
* Renamed model
* Sorted the provider names
* WIP: Embedding providers
* Added embeddings provider api
* Added missing import
* Moved providers to ray actor, added config actor
* Ability to load llm and embeddings from config
* Moved llm creation to specific actors
* Added apis for fetching, updating config. Fixed config update, error handling
* Updated as per PR feedback
* Fixes issue with cohere embeddings, api keys not working
* Added an error check when embedding change causes read error
* Delete and re-index docs when embedding model changes (#137)
* Added an error check when embedding change causes read error
* Refactored provider load, decompose logic, aded model provider list api
* Re-indexes dirs when embeddings change, learn list command
* Fixed typo, simplified adding metadata
* Moved index dir, metadata path to constants
* Chat settings UI (#141)
* remove unused div
* automatically create config if not present
* allow all-caps envvars in config
* implement basic chat settings UI
* hide API key text inputs
* limit popup size, show success banner
* show welcome message if no LM is selected
* fix buggy UI with no selected LM/EM
* exclude legacy OpenAI chat provider used in magics
* Added a button with welcome message

---------

Co-authored-by: Jain <[email protected]>

* Various chat chain enhancements and fixes (#144)
* fix /clear command
* use model IDs to compare LLMs instead
* specify stop sequence in chat chain
* add empty AI message, improve system prompt
* add RTD configuration

---------

Co-authored-by: Piyush Jain <[email protected]>
Co-authored-by: Jain <[email protected]>
dbelgrod
pushed a commit
to dbelgrod/jupyter-ai
that referenced
this pull request
Jun 10, 2024
Marchlak
pushed a commit
to Marchlak/jupyter-ai
that referenced
this pull request
Oct 28, 2024
Fixes #133
Summary
Sends a reply to the user when the embedding model changes, and updates the user when re-indexing is complete.
Adds a new command, learn -l, that lists the currently indexed directories.
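A minimal sketch of the underlying idea, assuming hypothetical names (INDEX_DIR, METADATA_PATH, build_index) rather than Jupyter AI's real module layout: the embedding model ID is stored alongside the index, and a mismatch on the next load triggers deleting the index and re-learning the saved directories.

```python
import json
import shutil
from pathlib import Path

# Hypothetical locations; the real extension keeps such paths as constants elsewhere.
INDEX_DIR = Path(".jupyter_ai_index")
METADATA_PATH = INDEX_DIR / "metadata.json"


def load_metadata() -> dict:
    """Read the saved index metadata, or return an empty record if none exists."""
    if METADATA_PATH.exists():
        return json.loads(METADATA_PATH.read_text())
    return {"embedding_model": None, "dirs": []}


def save_metadata(metadata: dict) -> None:
    INDEX_DIR.mkdir(parents=True, exist_ok=True)
    METADATA_PATH.write_text(json.dumps(metadata))


def ensure_index_matches(new_model: str, build_index) -> None:
    """Delete and rebuild the index when the configured embedding model changes.

    `build_index(dirs, model)` is a stand-in for whatever actually embeds and
    stores the documents.
    """
    metadata = load_metadata()
    if metadata["embedding_model"] == new_model:
        return  # index already matches the selected embedding model

    # Old vectors are unusable with a different embedding model, so drop them.
    if INDEX_DIR.exists():
        shutil.rmtree(INDEX_DIR)

    build_index(metadata["dirs"], new_model)
    metadata["embedding_model"] = new_model
    save_metadata(metadata)


def list_indexed_dirs() -> str:
    """Back a learn -l style command: show the currently indexed directories."""
    dirs = load_metadata()["dirs"]
    return "\n".join(dirs) if dirs else "No directories have been learned yet."
```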