update docs
shlebbypops authored Sep 25, 2024
1 parent 032a5a7 commit f0f356d
Showing 2 changed files with 7 additions and 5 deletions.
10 changes: 6 additions & 4 deletions src/content/docs/general/chat.md
@@ -5,24 +5,26 @@ description: Chat with twinny

Chat with twinny and leverage workspace embeddings for enhanced context.

-## Open Side Panel
+### Open Side Panel

To use twinny Chat, access it from the VSCode sidebar. twinny will retain the chat history between sessions. You can find the chat history by clicking on the History icon on the top panel.

-## Context and Code Selection
+### Context and Code Selection

When you highlight or select code in your editor, twinny will use it as the context for the chat message. If you have not selected any code, it will use the message alone along with any previous messages. You can also right-click on selected code and choose a twinny option to refactor, explain, or perform other actions.

-## Workspace Embeddings
+### Workspace Embeddings

twinny now supports workspace embeddings to provide more relevant context for your queries.

-### How it Works
+### RAG and Mentions: How it Works

1. Your workspace documents are embedded and stored when you click the "Embed workspace documents" button.
2. When you send a message, twinny looks up relevant chunks from the embeddings.
3. These chunks are reranked and used as additional context for your query.
4. Use the `@workspace` mention in the chat to search for relevant documents.
+5. Use `@problems` to include code issues as context.
+6. Use `@` to add context for specific files in the workspace (see the examples below).
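
For illustration, hypothetical prompts using these mentions might look like the following (the file name is made up, and the exact mention behavior may differ in your version):

```
@workspace where is the retry logic for failed requests implemented?
@problems can you explain and fix these diagnostics?
@chat.md summarize this document
```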

### Embedding Settings

2 changes: 1 addition & 1 deletion src/content/docs/general/quick-start.md
@@ -16,7 +16,7 @@ The recommended way to do this is to use [Ollama](https://ollama.com/). Ollama
## Installing Ollama as an inference provider

1. Visit [Install Ollama](https://ollama.com/) and follow the instructions to install Ollama on your machine.
-2. Choose a model from the list of models available on Ollama. The recommended models are [codellama:7b-instruct](https://ollama.com/library/codellama:instruct) for chat and [codellama:7b-code](https://ollama.com/library/codellama:code) for fill-in-middle.
+2. Choose a model from the list of models available on Ollama. Two recommended models to get started are [codellama:7b-instruct](https://ollama.com/library/codellama:instruct) for chat and [codellama:7b-code](https://ollama.com/library/codellama:code) for fill-in-middle.

```sh
ollama run codellama:7b-instruct
```
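
As a minimal follow-up sketch, the fill-in-middle model named in step 2 can also be downloaded ahead of time using Ollama's standard `pull` command:

```sh
# Fetch the fill-in-middle model referenced above (assumes Ollama is installed)
ollama pull codellama:7b-code
```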
