diff --git a/src/content/docs/general/chat.md b/src/content/docs/general/chat.md
index 5f70191..601df10 100644
--- a/src/content/docs/general/chat.md
+++ b/src/content/docs/general/chat.md
@@ -5,24 +5,26 @@ description: Chat with twinny
 ---
 
 Chat with twinny and leverage workspace embeddings for enhanced context.
 
-## Open Side Panel
+### Open Side Panel
 
 To use twinny Chat, access it from the VSCode sidebar. twinny will retain the chat history between sessions. You can find the chat history by clicking on the History icon on the top panel.
 
-## Context and Code Selection
+### Context and Code Selection
 
 When you highlight/select code in your editor, twinny will use that as the context for the chat message. If you have not selected any code, it will use the message alone and any previous messages. You can also right-click on selected code and select a twinny option to refactor, explain and perform other actions.
 
-## Workspace Embeddings
+### Workspace Embeddings
 
 twinny now supports workspace embeddings to provide more relevant context for your queries.
 
-### How it Works
+### RAG and Mentions: How it Works
 
 1. Your workspace documents are embedded and stored when you click the "Embed workspace documents" button.
 2. When you send a message, twinny looks up relevant chunks from the embeddings.
 3. These chunks are reranked and used as additional context for your query.
 4. Use the `@workspace` mention in the chat to search for relevant documents.
+5. Use the `@problems` mention to include current code issues.
+6. Use `@` to add context for specific files in the workspace.
 
 ### Embedding Settings
diff --git a/src/content/docs/general/quick-start.md b/src/content/docs/general/quick-start.md
index 626016c..933d144 100644
--- a/src/content/docs/general/quick-start.md
+++ b/src/content/docs/general/quick-start.md
@@ -16,7 +16,7 @@ The recommended way to do this is to use [Ollama](https://ollama.com/). Ollama
 ## Installing Ollama as an inference provider
 
 1. Visit [Install Ollama](https://ollama.com/) and follow the instructions to install Ollama on your machine.
-2. Choose a model from the list of models available on Ollama. The recommended models are [codellama:7b-instruct](https://ollama.com/library/codellama:instruct) for chat and [codellama:7b-code](https://ollama.com/library/codellama:code) for fill-in-middle.
+2. Choose a model from the list of models available on Ollama. Two recommended models to get started are [codellama:7b-instruct](https://ollama.com/library/codellama:instruct) for chat and [codellama:7b-code](https://ollama.com/library/codellama:code) for fill-in-middle.
 
 ```sh
 ollama run codellama:7b-instruct
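
The quick-start hunk above recommends two Ollama models. As a minimal sketch (assuming Ollama is already installed and on your `PATH`), both can be pre-fetched before configuring twinny:

```sh
# Download the chat model and the fill-in-middle model named in the docs
ollama pull codellama:7b-instruct
ollama pull codellama:7b-code
```

Unlike `ollama run`, `ollama pull` only downloads the model without starting an interactive session, which is convenient for setup.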