Continue: Write comments for this code produces math questions #1132
Comments
@max-wittig Probably there's a very clear prompt-formatting problem somewhere. The most transparent way to debug this would be to open the Output panel next to the Terminal in VS Code and then select "Continue - LLM Prompts/Completions" in the dropdown. This will display the exact prompt sent to LiteLLM, so we can determine whether the problem arises before the request is sent. Continue should be attempting to format the prompt on its own, but if LiteLLM's README is up to date, they might not support the /completions endpoint, in which case they are probably double-wrapping the prompt. I'll try setting up LiteLLM myself tomorrow as well to investigate. Separately, I don't know if you're just trying many models, but I can share a few recommendations if you are interested.
(I'm from Max's team.) @sestinj This is the output when trying to create comments for a simple hello-world function:
Not sure if it's correctly formatted. It looks like Continue.dev is doing some templating on its own; usually this is done server-side, I think.
This seems to have broken recently. It is working with the same configuration in version 0.8.22, where the log output looks like this:
Seems to be caused by #1029
@bufferoverflow That is the PR that changed the prompt, but it's not clear that the prompt alone is the problem; both prompts take the same approach of pre-empting the model. So it seems something is amiss in the request itself. I ran a few tests, trying Mistral-7b-Instruct with each of the following pairs: [LiteLLM, TogetherAI], [LiteLLM, Ollama], [Direct, TogetherAI], [Direct, Ollama]. All of these worked fine except the first, in which LiteLLM just threw an error that seems unrelated to all of this. This leads me to think that the LiteLLM+vLLM combination is causing the problem. Since the log shared above by Max shows that vLLM got an empty prompt in the request, my guess would be that when LiteLLM is set up with vLLM like this, it does something to re-format the prompt. Still figuring out what exactly is going on, as I don't want to put blame on another tool until I've verified it's not us. I'll hopefully have more to share soon after looking through the LiteLLM code, and if it's something there I'll aim to make whatever PR is necessary.
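To make the "double wrapping" hypothesis concrete, here is a minimal illustration (the function name and prompt text are hypothetical, not from the Continue codebase): if the client applies an instruct template and a proxy then treats the result as a plain chat message and templates it again, the model receives nested instruction tags instead of a clean prompt.

```typescript
// Hypothetical sketch of double wrapping; not actual Continue or LiteLLM code.
function applyMistralInstructTemplate(userMessage: string): string {
  // Mistral-7B-Instruct style wrapping of a single user turn.
  return `<s>[INST] ${userMessage} [/INST]`;
}

// Client-side templating (what a client does when it believes it is
// talking to a raw /completions endpoint):
const clientPrompt = applyMistralInstructTemplate("Write comments for this code");

// If a proxy then templates that already-wrapped string a second time:
const doubleWrapped = applyMistralInstructTemplate(clientPrompt);

console.log(doubleWrapped);
// The nested [INST] tags are the signature of a double-wrapped request.
```

A model prompted this way tends to produce off-topic output, which would be consistent with the garbled responses reported above.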
@sestinj I found the problem 🎉 It works with the following hack, which forces the use of the

```diff
diff --git a/core/llm/index.ts b/core/llm/index.ts
index ed920eaa..a9e03095 100644
--- a/core/llm/index.ts
+++ b/core/llm/index.ts
@@ -58,7 +58,7 @@ export abstract class BaseLLM implements ILLM {
         return false;
       }
     }
-    return true;
+    return false;
   }

   supportsPrefill(): boolean {
```
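To see why flipping that return value helps, here is a hedged sketch (names simplified; not the actual Continue implementation) of how a `supportsCompletions()`-style flag would route a request: when it returns true the client templates the prompt itself and posts to the raw /completions endpoint, and when it returns false the client sends structured chat messages and lets the server apply its own template.

```typescript
// Simplified sketch of endpoint selection; the real logic lives in
// core/llm/index.ts and is more involved.
interface LLMClient {
  supportsCompletions(): boolean;
}

function chooseEndpoint(llm: LLMClient): "/completions" | "/chat/completions" {
  // true  -> client-side prompt template + raw completions endpoint
  // false -> structured messages + server-side chat template
  return llm.supportsCompletions() ? "/completions" : "/chat/completions";
}

// With the hack applied, supportsCompletions() always returns false:
const patched: LLMClient = { supportsCompletions: () => false };
console.log(chooseEndpoint(patched)); // → "/chat/completions"
```

That routes every request through the chat endpoint, so the server's template is the only one applied and the double wrapping disappears.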
@sestinj Not sure how you envision this. What about extending https://github.com/continuedev/continue/blob/preview/schema/json/ModelDescription.json with
We do indeed have a custom chat template configured for vLLM via
We have adapted this from https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/blob/main/tokenizer_config.json#L42 so that we can use system prompts.
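The adapted template itself isn't shown in the thread, but a common way to modify Mistral's stock `chat_template` (which raises an error on system messages) so that system prompts are accepted is to render the system message as its own instruction turn. Purely illustrative Jinja sketch under that assumption, not the template actually used here:

```jinja
{{ bos_token }}
{%- for message in messages -%}
  {%- if message['role'] == 'system' -%}
    {# Illustrative: treat the system prompt as a first instruction turn #}
    [INST] {{ message['content'] }} [/INST] Understood.{{ eos_token }}
  {%- elif message['role'] == 'user' -%}
    [INST] {{ message['content'] }} [/INST]
  {%- elif message['role'] == 'assistant' -%}
    {{ ' ' + message['content'] }}{{ eos_token }}
  {%- endif -%}
{%- endfor -%}
```

With a custom template like this configured, vLLM's chat endpoint applies it server-side, which is why sending an already-templated prompt there goes wrong.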
For completeness' sake: the new prompt does not work with the legacy. It does work well with the. So, as @bufferoverflow suggested above, it would probably be best to let users select which endpoint to use.
@bufferoverflow @fgreinacher this sounds like a good solution to me. I'll make an update now. Is my understanding correct that what happens here is that vLLM applies the chat template on top of whatever is sent through the /completions endpoint?
@bufferoverflow yes, this looks perfect!
PR is now merged: #1151, and the release is coming out in ~20 min. To solve, you can now set
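The option name isn't preserved in this comment, but the release changelog refers to a per-model `useLegacyCompletionsEndpoint` setting. As a hedged sketch of what a config.json entry might look like (the `title`, `model`, and `apiBase` values below are placeholders, not taken from this thread):

```json
{
  "models": [
    {
      "title": "Mistral-7B via LiteLLM",
      "provider": "openai",
      "model": "mistral-7b-instruct",
      "apiBase": "http://localhost:4000/v1",
      "useLegacyCompletionsEndpoint": false
    }
  ]
}
```

Setting the flag to false would force the chat endpoint, matching the behavior of the hack above.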
Works pretty well with
Thanks for your efforts!
"runCommand" message * 💄 polish onboarding * 💚 fix gui build errors * 📝 add /v1 to OpenAI examples in docs * 🚑 hotfix for not iterable error * ✨ add Cohere as Embeddings Provider * 💄 add llama3 to UI * 🔥 remove disable indexing * 🍱 update continue logo * 🐛 fix language undefined bug * 🐛 fix merge mistake * 📝 rename googlepalmapi.md to googlegeminiapi.md * 📝 update mistral models * Rename to geminiapi & change filename this time * ✨ global request options (#1153) * ✨ global request options * 🐛 fix jira context provider by injecting fetch * ✨ request options for embeddings providers * ✨ add Cohere as Reranker (#1159) * ♻️ use custom requestOptions with CohereEmbeddingsProvider * Update preIndexedDocs.ts (#1154) Add WordPress and WooCommerce as preIndexedDocs. * 🩹 remove example "outputDir" from default config * Fix slash command params loading (#1084) Existing slash commands expect an object named "params" so mapping to "options" here caused params to be undefined within the run scope. I renamed from 'm' to 's' just to avoid potential confusion with the model property mapping above. * 🐛 don't index if no open workspace folders * 💄 improve onboarding language * 🚸 improve onboarding * 🐛 stop loading when error * 💄 replace text in input box * Respect Retry-After header when available from 429 responses (#1182) * 🩹 remove dead code for exponential backoff This has been replaced by the withExponentialBackoff helper * 🩹 respect Retry-After header when available * 🚸 update inline tips language * ✨ input box history * 📌 update package-locks * 🔊 log errors in prepackage * 🐛 err to string * 📌 pin llama-tokenizer-js * 📌 update lockfile * 🚚 change /docs to docs. 
* 📦 package win-ca dependencies in binary * 🔥 remove unpopular models from UI * 🍱 new logo in jetbrains * 🎨 use node-fetch everywhere * 🚸 immediately select newly added models * 🚸 spell out Alt instead of using symbol * 🔥 remove config shortcut * 🐛 fix changing model bug * 🩹 de-duplicate before adding models * 🔧 add embeddingsProvider specific request options * 🎨 refactor to always use node-fetch from LLM * 🔥 remove duplicate tokens generated * 🔊 add timestamp to JetBrains logs * 🎨 maxStopWords for Groq * 🐛 fix groq provider calling /completions * 🐛 correctly adhere to LanceDB table name spec * 🐛 fix sqlite NOT NULL constraint failed error with custom model * Fix issue where Accept/Reject All only accepts/rejects a single diff hunk. (#1197) * Fix issues parsing Ollama /api/show endpoint payloads. (#1199) * ✨ model role for inlineEdit * 🩹 various small updates * 🐛 fix openai image support * 🔖 update gradle version * 🍱 update jetbrains icon * 🐛 fix autocomplete in notebook cells * 🔥 remove unused media * 🔥 remove unused files * Fix schema to allow 'AUTODETECT' sentinel for model when provider is 'groq'. (#1203) * 🐛 small improvements * Fix issue with @codebase provider when n becomes odd due to a divide by 2 during the full text search portion of the query. 
(#1204) * 🐛 add skipLines * ✨ URLContextProvider * 🥅 improved error handling for codebase indexing * 🏷️ use official Git extension types * ➕ declare vscode.git extension dependency * ⚡️ use reranker for docs context provider * 🚸 Use templating in default customCommand * 🎨 use U+23CE * 🚸 disable autocomplete in commit message box * 🩹 add gems to default ignored paths * ⚡️ format markdown blocks as comments in .ipynb completions * 🐛 don't strip port in URL * 🐛 fix "gemini" provider spacing issues * 📦 update posthog version * 🏷️ update types.ts * 🐛 fix copy/paste/cut behavior in VS Code notebooks * ✨ llama3 prompt template * 🐛 fix undefined prefix, suffix and language for `/edit` (#1216) * 🐛 add .bind to fix templating in systemMessage * 🐛 small improvements to autocomplete * Update DocsContextProvider.ts (#1217) I fixed a bug where you were sending the query variable (which holds the base URL of the doc) to the rerank method, and it made no sense to rerank the chunks based on a URL. So I changed it to extras.fullInput because it should rerank based on the user input, which should provide better results. 
* 📝 select-provider.md update * 🐛 fix merge errors * Nate/autocomplete-metrics (#1230) * ⚡️ use context.selectedCompletionInfo, deduplicate logs * ⚡️ don't reject if user keeps typing same as completion * ⚡️ vscode autocomplete edge cases * 🚧 WIP on vscode autocomplete * ⚡️ better bracket handlng * ⚡️ improved multi-line detection * Active file default context (#1231) * 🚸 include currently active file by default * 🚸 warn if non-autocomplete model being used * ✨ try hole filling template for gpt * 💄 ui for no context * ⚡️ leave out bottom of excessively large files * 🚧 experimenting with perplexity style streaming * 🐛 fix #1237 * 💚 fix type error * ⚡️ improve LSP usage in autocomplete * 🐛 fix content parsing regression in /edit * add PySide6 docs to preindexed docs (#1236) * CON-232 bring custom docs to top, alphabetize doc results, make scrol… (#1239) * CON-232 bring custom docs to top, alphabetize doc results, make scrollable * CON-232 cleanup --------- Co-authored-by: Justin Milner <[email protected]> * 🚚 [Auxiliary -> Continue] Sidebar * 🔊 log completion options in ~/.continue/sessions * ⚡️ filter out completions that are only punctuation/space * ⚡️ inject intellisense docs, no multi-line on comments * ⚡️ crawl type definitions for autocomplete * ⚡️ truncate function text * ⚡️ cache LSP calls * ⚡️ find recently edited ranges with perfect prefix match * 🐛 fix gif paths * ⚡️ bring back double new line stop words * 📌 add yarn lock files * 🐛 allow language keywords to be generated * 💄 toggle on help button * 🎨 defaultContext option * 🐛 fix lancedb bug by upgrading * 🐛 fix groq stop tokens * 🐛 preventDefault to avoid double paste * 🚸 don't repeatedly override cmd+J * 🧑💻 fix npm run test in core * 📝 change description * 🐛 silence Ollama invalid server state warning --------- Signed-off-by: inimaz <[email protected]> Co-authored-by: Tobias Jung <[email protected]> Co-authored-by: Jason Jacobs <[email protected]> Co-authored-by: Nithish <[email protected]> 
Co-authored-by: Ty Dunn <[email protected]> Co-authored-by: Riccardo Schirone <[email protected]> Co-authored-by: postmasters <[email protected]> Co-authored-by: Maxime Brunet <[email protected]> Co-authored-by: inimaz <[email protected]> Co-authored-by: SR_team <[email protected]> Co-authored-by: Roger Meier <[email protected]> Co-authored-by: Peter Zaback <[email protected]> Co-authored-by: Bertrand P <[email protected]> Co-authored-by: Bertrand Pinel <[email protected]> Co-authored-by: Jose Vega <[email protected]> Co-authored-by: Nejc Habjan <[email protected]> Co-authored-by: Chad Yates <[email protected]> Co-authored-by: 小颚虫 <[email protected]> Co-authored-by: 5eqn <[email protected]> Co-authored-by: Pixel <[email protected]> Co-authored-by: Justin Milner <[email protected]> Co-authored-by: Justin Milner <[email protected]>
commit d71f4f7 Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Aesthetic tweaks to output format commit 2565fa4 Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Ensure adjacent codeblocks separated by newline commit b82ccb7 Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Tidy regex to capture filename commit 7e0b4e4 Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Tidy regex to capture filename commit 3380620 Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Reformat markdown code blocks Currently, user-selected code blocks are formatted with range in file (rif) info on the same line as the triple backticks, which means that when exported to markdown they don't have the language info needed on that line for syntax highlighting. This update moves the rif info to the following line as a comment in the language of the file and with the language info in the correct place. 
Before: ```example.ts (3-6) function fib(n) { if (n <= 1) return n; return fib(n - 2) + fib(n - 1); } ``` After: ```ts // example.ts (3-6) function fib(n) { if (n <= 1) return n; return fib(n - 2) + fib(n - 1); } ``` commit 81103ff Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Consolidate to single replace regex commit 7d697d5 Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Change "Continue" to "Assistant" in export commit 41b9ef0 Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Create user-specified directory if necessary commit ca44dd4 Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Ensure replacement only at start of string commit 5f6077d Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Add description of outputDir param for /share commit 91d92ab Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Use `.`, `./`, or `.\` for current workspace commit e44ea2b Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Add datetimestamp to exported session filename commit 10b4701 Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Add ability to specify workspace for /share commit 618906a Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Enable basic tilde expansion for /share outputDir commit fadf70e Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Add outputDir param to /share commit faabd38 Author: Peter Zaback <[email protected]> Date: Fri Apr 19 14:29:30 2024 -0500 Fix slash command params loading Existing slash commands expect an object named "params" so mapping to "options" here caused params to be undefined within the run scope. I renamed from 'm' to 's' just to avoid potential confusion with the model property mapping above. 
commit 1cb0455 Author: Nate Sesti <[email protected]> Date: Fri Apr 19 10:39:27 2024 -0700 🩹 use title for autodetect commit 6a5c4ec Author: Nate Sesti <[email protected]> Date: Fri Apr 19 10:12:08 2024 -0700 🩹 look for bedrock credentials in homedir commit 388df20 Author: Nate Sesti <[email protected]> Date: Thu Apr 18 15:35:17 2024 -0700 🩹 set default useLegacyCompletionsEndpoint to undefined commit 245936e Author: Roger Meier <[email protected]> Date: Thu Apr 18 22:00:07 2024 +0200 📝 useLegacyCompletionsEndpoint within OpenAI docs commit 514a642 Author: Roger Meier <[email protected]> Date: Thu Apr 18 21:42:32 2024 +0200 🐛 set supportsCompletions based on useLegacyCompletionsEndpoint model setting Closes continuedev#1132
* 🩹 for now, skip onboarding in jetbrains * 🩹 temporarily don't show use codebase on jetbrains * 🐛 use system certificates in binary * 🔖 update jetbrains version * 🩹 correctly contruct set of certs * 🔖 bump intellij version to 0.0.45 * 🩹 update to support images for gpt-4-turbo * 🐛 fix image support autodetection * ⚡️ again, improve image support autodetection * 🐛 set supportsCompletions based on useLegacyCompletionsEndpoint model setting Closes #1132 * 📝 useLegacyCompletionsEndpoint within OpenAI docs * 🔧 forceCompletionsEndpointType option * Revert "🔧 forceCompletionsEndpointType option" This reverts commit dd51fcb. * 🩹 set default useLegacyCompletionsEndpoint to undefined * 🩹 look for bedrock credentials in homedir * 🩹 use title for autodetect * 🐛 set supportsCompletions based on useLegacyCompletionsEndpoint model setting Closes #1132 * 📝 useLegacyCompletionsEndpoint within OpenAI docs * 🩹 set default useLegacyCompletionsEndpoint to undefined * 🩹 look for bedrock credentials in homedir * 🩹 use title for autodetect * Fix slash command params loading Existing slash commands expect an object named "params" so mapping to "options" here caused params to be undefined within the run scope. I renamed from 'm' to 's' just to avoid potential confusion with the model property mapping above. 
* Add outputDir param to /share * Enable basic tilde expansion for /share outputDir * Add ability to specify workspace for /share * Add datetimestamp to exported session filename * Use `.`, `./`, or `.\` for current workspace * Add description of outputDir param for /share * Ensure replacement only at start of string * Create user-specified directory if necessary * Change "Continue" to "Assistant" in export * Consolidate to single replace regex * Reformat markdown code blocks Currently, user-selected code blocks are formatted with range in file (rif) info on the same line as the triple backticks, which means that when exported to markdown they don't have the language info needed on that line for syntax highlighting. This update moves the rif info to the following line as a comment in the language of the file and with the language info in the correct place. Before: ```example.ts (3-6) function fib(n) { if (n <= 1) return n; return fib(n - 2) + fib(n - 1); } ``` After: ```ts // example.ts (3-6) function fib(n) { if (n <= 1) return n; return fib(n - 2) + fib(n - 1); } ``` * Tidy regex to capture filename * Tidy regex to capture filename * Ensure adjacent codeblocks separated by newline * Aesthetic tweaks to output format * ✨ disableInFiles option for autocomplete * feat(httpContextProvider): load AC on fetch client (#1150) Co-authored-by: Bertrand Pinel <[email protected]> * ✨ global filewatcher for config.json/ts changes * 🐛 retry webview requests so that first cmd+L works * ✨ Improved onboarding experience (#1155) * 🚸 onboarding improvements * 🧑💻 keyboard shortcuts to toggle autocomplete and open config.json * ⚡️ improve detection of terminal code blocks * 🚧 onboarding improvements * 🚧 more onboarding improvements * 💄 last session button * 🚸 show more fallback options in dropdown * 💄 add sectioning to models page * 💄 clean up delete model button * 💄 make tooltip look nicer * 🚸 download Ollama button * 💄 local LLM onboarding * 🐛 select correct terminal on 
"runCommand" message * 💄 polish onboarding * 💚 fix gui build errors * 📝 add /v1 to OpenAI examples in docs * 🚑 hotfix for not iterable error * ✨ add Cohere as Embeddings Provider * 💄 add llama3 to UI * 🔥 remove disable indexing * 🍱 update continue logo * 🐛 fix language undefined bug * 🐛 fix merge mistake * 📝 rename googlepalmapi.md to googlegeminiapi.md * 📝 update mistral models * Rename to geminiapi & change filename this time * ✨ global request options (#1153) * ✨ global request options * 🐛 fix jira context provider by injecting fetch * ✨ request options for embeddings providers * ✨ add Cohere as Reranker (#1159) * ♻️ use custom requestOptions with CohereEmbeddingsProvider * Update preIndexedDocs.ts (#1154) Add WordPress and WooCommerce as preIndexedDocs. * 🩹 remove example "outputDir" from default config * Fix slash command params loading (#1084) Existing slash commands expect an object named "params" so mapping to "options" here caused params to be undefined within the run scope. I renamed from 'm' to 's' just to avoid potential confusion with the model property mapping above. * 🐛 don't index if no open workspace folders * 💄 improve onboarding language * 🚸 improve onboarding * 🐛 stop loading when error * 💄 replace text in input box * Respect Retry-After header when available from 429 responses (#1182) * 🩹 remove dead code for exponential backoff This has been replaced by the withExponentialBackoff helper * 🩹 respect Retry-After header when available * 🚸 update inline tips language * ✨ input box history * 📌 update package-locks * 🔊 log errors in prepackage * 🐛 err to string * 📌 pin llama-tokenizer-js * 📌 update lockfile * 🚚 change /docs to docs. 
* 📦 package win-ca dependencies in binary * 🔥 remove unpopular models from UI * 🍱 new logo in jetbrains * 🎨 use node-fetch everywhere * 🚸 immediately select newly added models * 🚸 spell out Alt instead of using symbol * 🔥 remove config shortcut * 🐛 fix changing model bug * 🩹 de-duplicate before adding models * 🔧 add embeddingsProvider specific request options * 🎨 refactor to always use node-fetch from LLM * 🔥 remove duplicate tokens generated * 🔊 add timestamp to JetBrains logs * 🎨 maxStopWords for Groq * 🐛 fix groq provider calling /completions * 🐛 correctly adhere to LanceDB table name spec * 🐛 fix sqlite NOT NULL constraint failed error with custom model * Fix issue where Accept/Reject All only accepts/rejects a single diff hunk. (#1197) * Fix issues parsing Ollama /api/show endpoint payloads. (#1199) * ✨ model role for inlineEdit * 🩹 various small updates * 🐛 fix openai image support * 🔖 update gradle version * 🍱 update jetbrains icon * 🐛 fix autocomplete in notebook cells * 🔥 remove unused media * 🔥 remove unused files * Fix schema to allow 'AUTODETECT' sentinel for model when provider is 'groq'. (#1203) * 🐛 small improvements * Fix issue with @codebase provider when n becomes odd due to a divide by 2 during the full text search portion of the query. 
(#1204) * 🐛 add skipLines * ✨ URLContextProvider * 🥅 improved error handling for codebase indexing * 🏷️ use official Git extension types * ➕ declare vscode.git extension dependency * ⚡️ use reranker for docs context provider * 🚸 Use templating in default customCommand * 🎨 use U+23CE * 🚸 disable autocomplete in commit message box * 🩹 add gems to default ignored paths * ⚡️ format markdown blocks as comments in .ipynb completions * 🐛 don't strip port in URL * 🐛 fix "gemini" provider spacing issues * 📦 update posthog version * CON-1067: Failed state seems to be toggling as intended * 🏷️ update types.ts * 🐛 fix copy/paste/cut behavior in VS Code notebooks * ✨ llama3 prompt template * Rework for proper initialization on start up * CON-1067 Clean-up * CON-1067 more clean-up * Add indexingNotLoaded state * CON-1067 communicate progress to frontend * 🐛 fix undefined prefix, suffix and language for `/edit` (#1216) * 🐛 add .bind to fix templating in systemMessage * CON-1067 clean up * 🐛 small improvements to autocomplete * Update DocsContextProvider.ts (#1217) I fixed a bug where you were sending the query variable (which holds the base URL of the doc) to the rerank method, and it made no sense to rerank the chunks based on a URL. So I changed it to extras.fullInput because it should rerank based on the user input, which should provide better results. 
* 📝 select-provider.md update * 🐛 fix merge errors * Nate/autocomplete-metrics (#1230) * ⚡️ use context.selectedCompletionInfo, deduplicate logs * ⚡️ don't reject if user keeps typing same as completion * ⚡️ vscode autocomplete edge cases * 🚧 WIP on vscode autocomplete * ⚡️ better bracket handlng * ⚡️ improved multi-line detection * Active file default context (#1231) * 🚸 include currently active file by default * 🚸 warn if non-autocomplete model being used * ✨ try hole filling template for gpt * 💄 ui for no context * ⚡️ leave out bottom of excessively large files * 🚧 experimenting with perplexity style streaming * 🐛 fix #1237 * 💚 fix type error * ⚡️ improve LSP usage in autocomplete * 🐛 fix content parsing regression in /edit * add PySide6 docs to preindexed docs (#1236) * CON-232 bring custom docs to top, alphabetize doc results, make scrol… (#1239) * CON-232 bring custom docs to top, alphabetize doc results, make scrollable * CON-232 cleanup --------- Co-authored-by: Justin Milner <[email protected]> * CON-1067 condense some things * 🚚 [Auxiliary -> Continue] Sidebar * 🔊 log completion options in ~/.continue/sessions * CON-1067 wrong ret val fix * CON-1067: fixes from testing * ⚡️ filter out completions that are only punctuation/space * ⚡️ inject intellisense docs, no multi-line on comments * ⚡️ crawl type definitions for autocomplete * ⚡️ truncate function text * ⚡️ cache LSP calls * ⚡️ find recently edited ranges with perfect prefix match * 🐛 fix gif paths * ⚡️ bring back double new line stop words * 📌 add yarn lock files * 🐛 allow language keywords to be generated * 💄 toggle on help button * 🎨 defaultContext option * 🐛 fix lancedb bug by upgrading * 🐛 fix groq stop tokens * 🐛 preventDefault to avoid double paste * 🚸 don't repeatedly override cmd+J * 🧑💻 fix npm run test in core * 📝 change description * 🐛 silence Ollama invalid server state warning * ⚡️ more accurate getTerminalContents * ⚡️ make getTerminalContents more accurate * 🧑💻 use yarn instead of npm * 
👷 fix yarn add --no-save in prepackge * 🐛 correctly read entire notebook file contents * ➕ import handlebars * 🔥 remove unnecessary migrations * ⚡️ improve /comment prompt * Add debug terminal context menu (#1261) * Add --no-dependencies to vsce package (#1255) This is not needed because we bundle first with esbuild, and vsce pack has issues with modern package managers. see: microsoft/vscode-vsce#421 (comment) * ui: change line decoration color to use vscode theme (#1253) * ui: change line decoration color to use vscode theme to give user a more consistent experience by letting the decoration color to user the color defined in the theme. * fix: incorrect color item should be line background not text background because the decoration is for the whole line * 🎨 refactor indexing state into a single object * CON-223 Correct diff streaming abort (#1263) Co-authored-by: Justin Milner <[email protected]> * 📦 switch to pnpm (#1265) * 🎨 use pnpm instead of yarn * ➕ make dependencies explicit for pnpm * 🐛 add powershell to extension mapping, and default to ext * 🎨 make llamatokenizer commonjs compatible * ➕ add esbuild optional deps * 🚚 rename vendor/node_modules -> modules * 🔖 update core version * 🐛 fixes for transformers.js compatibility * 🔖 update core version * 🎨 set modelPath in constructor * 🎨 fix transformers.js import * 🎨 eslint enforce import extensions * 🎨 require -> import * 🚸 notify user if not diff in /commit * 💄 Improve colors of the IntelliJ tool window icon (#1273) Without this fix, the continue icon sticks out from the other toolwindow icons, resulting in an inconsistent appearance of the whole IDE and creates a feeling that the continue plugin "doesn't fit, something must be broken". According to https://plugins.jetbrains.com/docs/intellij/icons.html#new-ui-icon-colors specific colors are needed to work nicely with dark and light modes. Bonus is that the active tool window icon color then changes automatically to white. 
Co-authored-by: Lukas Baron <[email protected]> * ✨ send Bearer token to Ollama if apiKey set * 🐛 fix slash command bug * 🧑💻 add onnxruntime (--save-dev) for types * 🐛 don't apply to file when running code block in terminal * 🐛 avoid double paste * ✨ gpt-4o * 🎨 pass uniqueId to constructor * 🚸 focus without scrolling into vie * 🎨 refactoring for continue-proxy LLM (#1277) * 🐛 continue server client fixes * 🎨 refactor OpenAI _getHeaders * 🎨 pass ideSettings to llmFromDescription * ⚡️ improve add docstring command * 💚 ci updates * 🐛 fix repeated paste bug * 🐛 fix build script * 🩹 various small improvements * 📌 pin esbuild 0.17.19 * 🧑💻 pnpm i gui in prepackage * 🐛 show all diff changes in vscode * 🩹 getMetaKeyName * 🐛 fix reading of unopened ipynb files * ⚡️ gpt-4o system prompt * 💄 make font size configurable in config.json ui.fontSize * 🩹 properly dispose of diff handler * 🐛 fix indexing status display * ⚡️ context pruning * 🎨 update free trial models * fix: remove some backup files generated by pkg if present (#1287) --------- Co-authored-by: Roger Meier <[email protected]> Co-authored-by: Peter Zaback <[email protected]> Co-authored-by: Bertrand P <[email protected]> Co-authored-by: Bertrand Pinel <[email protected]> Co-authored-by: Maxime Brunet <[email protected]> Co-authored-by: Jose Vega <[email protected]> Co-authored-by: Nejc Habjan <[email protected]> Co-authored-by: Chad Yates <[email protected]> Co-authored-by: Justin Milner <[email protected]> Co-authored-by: 小颚虫 <[email protected]> Co-authored-by: 5eqn <[email protected]> Co-authored-by: Pixel <[email protected]> Co-authored-by: Justin Milner <[email protected]> Co-authored-by: Devin Gould <[email protected]> Co-authored-by: Dipta Mahardhika <[email protected]> Co-authored-by: tnglemongrass <[email protected]> Co-authored-by: Lukas Baron <[email protected]> Co-authored-by: Fernando <[email protected]>
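The supportsCompletions fix referenced above gates the endpoint choice on a per-model setting. A minimal sketch of that kind of dispatch, with illustrative names rather than Continue's actual code:

```python
def pick_endpoint(model_config: dict) -> str:
    # Hypothetical dispatch: honor an explicit legacy flag, otherwise
    # prefer /chat/completions, which OpenAI-compatible gateways such
    # as LiteLLM template server-side.
    if model_config.get("useLegacyCompletionsEndpoint"):
        return "/v1/completions"
    return "/v1/chat/completions"

assert pick_endpoint({"useLegacyCompletionsEndpoint": True}) == "/v1/completions"
assert pick_endpoint({}) == "/v1/chat/completions"
```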
Before submitting your bug report
Relevant environment info
Description
When selecting code and asking continue.dev to comment on it, the extension produces math equations instead of comments.
Using OpenAI configured like this:
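The actual config block was not captured in this page. Based on the discussion, a Continue config.json entry pointing the OpenAI provider at a LiteLLM/vLLM gateway would look roughly like this (title, model, apiBase, and apiKey are placeholders, not the reporter's actual values):

```json
{
  "models": [
    {
      "title": "Mistral via LiteLLM",
      "provider": "openai",
      "model": "mistral-7b-instruct",
      "apiBase": "http://localhost:8000/v1",
      "apiKey": "<key>"
    }
  ]
}
```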
Using Mistral with Ollama, the extension produces the expected result, but with the OpenAI provider it seems that only empty strings get sent, which leads Mistral to dream about math.
async_llm_engine.py:436] Received request cmpl-6def2b5f4c4d4d61952dbf6a82262f4d: prompt: '<s>',
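For context, the log above shows vLLM receiving only the BOS token `<s>`, i.e. the user content was dropped before the instruct template was applied. A rough sketch of the expected Mistral-instruct format (an assumption about the template, not Continue's or LiteLLM's code):

```python
def mistral_instruct(user_message: str) -> str:
    # Mistral-7B-Instruct expects the instruction wrapped in [INST] tags,
    # preceded by the BOS token <s>.
    return f"<s>[INST] {user_message} [/INST]"

# A dropped/empty user message leaves only template scaffolding, so the
# model free-associates (here: about math) instead of commenting code.
assert mistral_instruct("") == "<s>[INST]  [/INST]"
assert mistral_instruct("Write comments for this code").startswith("<s>[INST]")
```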
To reproduce
1. CMD + P
2. Continue: Write comments for this code
Log output
async_llm_engine.py:436] Received request cmpl-6def2b5f4c4d4d61952dbf6a82262f4d: prompt: '<s>',