Feat Implement lingoose threads #158

Merged: 65 commits, Feb 5, 2024

Commits (all by henomis):
- 934d340 chore: refactor (Dec 20, 2023)
- 52edba5 refactor (Dec 20, 2023)
- b458f6b fix (Dec 20, 2023)
- 6ea2ba1 fix (Dec 20, 2023)
- f113af4 refactor (Dec 21, 2023)
- 68af541 fix (Dec 21, 2023)
- e2149f5 revert legacy code (Dec 21, 2023)
- 51c54a5 linting (Dec 21, 2023)
- f0994f3 refactor: move legacy code (Dec 21, 2023)
- 5d52a23 fix (Dec 21, 2023)
- 4724622 fix (Dec 21, 2023)
- 56da378 fix (Dec 21, 2023)
- a766086 fix (Dec 21, 2023)
- 284cfd8 fix (Dec 21, 2023)
- 32b8cfb use cache (Dec 21, 2023)
- 3ed7f32 fix (Dec 21, 2023)
- 290c44c fix (Dec 21, 2023)
- f752f78 fix (Dec 21, 2023)
- cbfb6ae add assistant (Dec 27, 2023)
- afa326d implement rag fusion (Dec 28, 2023)
- 02a3b7c fix (Dec 28, 2023)
- db933de fix (Dec 28, 2023)
- 30ec2ef fix (Dec 28, 2023)
- 864c09b add cohere implementation (Dec 28, 2023)
- c0628f2 fix linting (Jan 2, 2024)
- e7c8c1c implement ollama (Jan 9, 2024)
- a35ebfe fix (Jan 9, 2024)
- 44060ef fix (Jan 9, 2024)
- ad2143a fix (Jan 23, 2024)
- b63864e chore: add new Loader Method (Jan 23, 2024)
- bd00054 chore: assistant must run a thread (Jan 23, 2024)
- 14b9be4 refactor code (Jan 23, 2024)
- f849e6b fix (Jan 23, 2024)
- 32877ee use pinecone-go v2 (Jan 25, 2024)
- 1ecbea5 fix (Jan 25, 2024)
- a740032 chore: add liglet (Jan 26, 2024)
- 900942d fix (Jan 26, 2024)
- f126cfd liglet sql (Jan 26, 2024)
- 65e3911 refactor rag (Jan 27, 2024)
- cc168cc fix (Jan 27, 2024)
- ea54263 fix (Jan 28, 2024)
- 3fc9ee5 fix (Jan 28, 2024)
- 12caa7f add cohere chat stream (Jan 28, 2024)
- 5bfb5a8 fix (Jan 28, 2024)
- c560ddb ollama embedder (Jan 30, 2024)
- ea72117 fix (Jan 30, 2024)
- dab6909 fix (Jan 30, 2024)
- 88d039f upgrade golang openai lib and models (Feb 1, 2024)
- 35a0242 update tiktoken (Feb 1, 2024)
- b65bda0 remove tiktoken (Feb 1, 2024)
- f9a564a feat: add nomic embedder (Feb 1, 2024)
- 83f69b0 add documentation (Feb 2, 2024)
- f8ed9b9 fix doc (Feb 3, 2024)
- e158691 add llms and embedding (Feb 3, 2024)
- de17647 fix (Feb 3, 2024)
- 6696d78 update loaders (Feb 3, 2024)
- ddaf3ba fix rag (Feb 3, 2024)
- b7a1e1f add loader (Feb 3, 2024)
- 5bc2d82 fix (Feb 4, 2024)
- b000718 refactor cache (Feb 4, 2024)
- d4e3412 add cache doc (Feb 5, 2024)
- 8523b16 docs: add rag (Feb 5, 2024)
- 35b4ea6 docs: add assistant and linglet (Feb 5, 2024)
- c4ff594 fix (Feb 5, 2024)
- ffbea94 add deploy website (Feb 5, 2024)

Files changed:

78 changes: 78 additions & 0 deletions .github/workflows/hugo.yml
@@ -0,0 +1,78 @@
# Sample workflow for building and deploying a Hugo site to GitHub Pages
name: Deploy Hugo site to Pages

on:
  # Runs on pushes targeting the default branch
  push:
    branches:
      - main

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: "pages"
  cancel-in-progress: false

# Default to bash
defaults:
  run:
    shell: bash

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    env:
      HUGO_VERSION: 0.122.0
    steps:
      - name: Install Hugo CLI
        run: |
          wget -O ${{ runner.temp }}/hugo.deb https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-amd64.deb \
          && sudo dpkg -i ${{ runner.temp }}/hugo.deb
      - name: Install Dart Sass
        run: sudo snap install dart-sass
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: recursive
          fetch-depth: 0
      - name: Setup Pages
        id: pages
        uses: actions/configure-pages@v4
      - name: Install Node.js dependencies
        run: "[[ -f package-lock.json || -f npm-shrinkwrap.json ]] && npm ci || true"
      - name: Build with Hugo
        env:
          # For maximum backward compatibility with Hugo modules
          HUGO_ENVIRONMENT: production
          HUGO_ENV: production
        run: |
          cd docs && hugo \
            --gc \
            --minify \
            --baseURL "${{ steps.pages.outputs.base_url }}/"
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v2
        with:
          path: ./public

  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v3
4 changes: 3 additions & 1 deletion .gitignore
@@ -17,4 +17,6 @@ bin/
# Dependency directories (remove the comment below to include it)
# vendor/
llama.cpp/
whisper.cpp/

*/.hugo_build.lock
110 changes: 49 additions & 61 deletions README.md
@@ -1,91 +1,79 @@
<p align="center"><img src="https://lingoose.io/assets/img/lingoose-small.png" alt="LINGOOSE"/></p>
![lingoose](docs/static/lingoose-small.png)

# 🪿 LinGoose

[![Build Status](https://github.com/henomis/lingoose/actions/workflows/checks.yml/badge.svg)](https://github.com/henomis/lingoose/actions/workflows/checks.yml) [![GoDoc](https://godoc.org/github.com/henomis/lingoose?status.svg)](https://godoc.org/github.com/henomis/lingoose) [![Go Report Card](https://goreportcard.com/badge/github.com/henomis/lingoose)](https://goreportcard.com/report/github.com/henomis/lingoose) [![GitHub release](https://img.shields.io/github/release/henomis/lingoose.svg)](https://github.com/henomis/lingoose/releases)
# 🪿 LinGoose [![Build Status](https://github.com/henomis/lingoose/actions/workflows/checks.yml/badge.svg)](https://github.com/henomis/lingoose/actions/workflows/checks.yml) [![GoDoc](https://godoc.org/github.com/henomis/lingoose?status.svg)](https://godoc.org/github.com/henomis/lingoose) [![Go Report Card](https://goreportcard.com/badge/github.com/henomis/lingoose)](https://goreportcard.com/report/github.com/henomis/lingoose) [![GitHub release](https://img.shields.io/github/release/henomis/lingoose.svg)](https://github.com/henomis/lingoose/releases)

**LinGoose** (_Lingo + Go + Goose_ 🪿) aims to be a complete Go framework for creating LLM apps. 🤖 ⚙️

> **Did you know?** A goose 🪿 fills its car 🚗 with goose-line ⛽!

<p align="center"><b>Connect with the Creator </b></p>
<p align="center">
<a href="https://twitter.com/simonevellei" target="blank">
<img src="https://img.shields.io/twitter/follow/simonevellei?label=Follow:%20Simone%20Vellei&style=social" alt="Follow Simone Vellei"/>
</a>
<a href='https://github.com/henomis'>
<img alt="Follow on Github" src="https://img.shields.io/badge/Follow-henomis-green?logo=github&link=https%3A%2F%2Fgithub.com%2Fhenomis">
</a>
</p>
## What is LinGoose?

### Help support this project by giving it a star! ⭐ 🪿
[LinGoose](https://github.com/henomis/lingoose) is a Go framework for building awesome AI/LLM applications.<br/>

### Start learning LinGoose on Replit [LinGoose course](https://replit.com/@henomis/Building-AI-Applications-with-LinGoose)
- **LinGoose is modular** — You can import only the modules you need to build your application.
- **LinGoose is an abstraction of features** — You can choose your preferred implementation of a feature and/or create your own.
- **LinGoose is a complete solution** — You can use LinGoose to build your AI/LLM application from the ground up.

# Overview

**LinGoose** is a powerful Go framework for developing Large Language Model (LLM) based applications using pipelines. It is designed to be a complete solution and provides multiple components, including Prompts, Templates, Chat, Output Decoders, LLM, Pipelines, and Memory. With **LinGoose**, you can interact with LLM AI through prompts and generate complex templates. Additionally, it includes a chat feature, allowing you to create chatbots. The Output Decoders component enables you to extract specific information from the output of the LLM, while the LLM interface allows you to send prompts to various AI, such as the ones provided by OpenAI. You can chain multiple LLM steps together using Pipelines and store the output of each step in Memory for later retrieval. **LinGoose** also includes a Document component, which is used to store text, and a Loader component, which is used to load Documents from various sources. Finally, it includes TextSplitters, which are used to split text or Documents into multiple parts, Embedders, which are used to embed text or Documents into embeddings, and Indexes, which are used to store embeddings and documents and to perform searches.

# Components
## Quick start
1. [Initialise a new go module](https://golang.org/doc/tutorial/create-module)

**LinGoose** is composed of multiple components, each one with its own purpose.

| Component | Package | Description |
| ----------------- | ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Prompt** | [prompt](prompt/) | Prompts are the way to interact with LLM AI. They can be simple text, or more complex templates. Supports **Prompt Templates** and **[Whisper](https://openai.com) prompt** |
| **Chat Prompt** | [chat](chat/) | Chat is the way to interact with the chat LLM AI. It can be a simple text prompt, or a more complex chatbot. |
| **Decoders** | [decoder](decoder/) | Output decoders are used to decode the output of the LLM. They can be used to extract specific information from the output. Supports **JSONDecoder** and **RegExDecoder** |
| **LLMs** | [llm](llm/) | LLM is an interface to various AI such as the ones provided by OpenAI. It is responsible for sending the prompt to the AI and retrieving the output. Supports **[OpenAI](https://openai.com)**, **[HuggingFace](https://huggingface.co)** and **[Llama.cpp](https://github.com/ggerganov/llama.cpp)**. |
| **Pipelines** | [pipeline](pipeline/) | Pipelines are used to chain multiple LLM steps together. |
| **Memory** | [memory](memory/) | Memory is used to store the output of each step. It can be used to retrieve the output of a previous step. Supports memory in **Ram** |
| **Document** | [document](document/) | Document is used to store a text |
| **Loaders** | [loader](loader/) | Loaders are used to load Documents from various sources. Supports **TextLoader**, **DirectoryLoader**, **PDFToTextLoader** and **PubMedLoader** . |
| **TextSplitters** | [textsplitter](textsplitter/) | TextSplitters are used to split text or Documents into multiple parts. Supports **RecursiveTextSplitter**. |
| **Embedders** | [embedder](embedder/) | Embedders are used to embed text or Documents into embeddings. Supports **[OpenAI](https://openai.com)** |
| **Indexes** | [index](index/) | Indexes are used to store embeddings and documents and to perform searches. Supports **SimpleVectorIndex**, **[Pinecone](https://pinecone.io)** and **[Qdrant](https://qdrant.tech)** |

# Usage

Please refer to the documentation at [lingoose.io](https://lingoose.io/docs/) to understand how to use LinGoose. If you prefer, the 👉 [examples directory](examples/) contains a lot of examples 🚀.
However, here is a **powerful** example of what **LinGoose** is capable of:
```sh
mkdir example
cd example
go mod init example
```

_Talk is cheap. Show me the [code](examples/)._ - Linus Torvalds
2. Create your first LinGoose application

```go
package main

import (
	"context"
	"fmt"

	"github.com/henomis/lingoose/llm/openai"
)

func main() {
	// Send a single prompt to the OpenAI completion API and print the reply.
	llm := openai.NewCompletion()

	response, err := llm.Completion(context.Background(), "Tell me a joke about geese")
	if err != nil {
		panic(err)
	}

	fmt.Println(response)
}
```

3. Install the Go dependencies
```sh
go mod tidy
```

4. Start the example application

```sh
export OPENAI_API_KEY=your-api-key

go run .

A goose fills its car with goose-line!
```

## Reporting Issues

If you think you've found a bug, or something isn't behaving the way you think it should, please raise an [issue](https://github.com/henomis/lingoose/issues) on GitHub.

## Contributing

We welcome contributions. Read our [Contribution Guidelines](https://github.com/henomis/lingoose/blob/master/CONTRIBUTING.md) to learn more about contributing to **LinGoose**.

## License

© Simone Vellei, 2023~`time.Now()`
Released under the [MIT License](LICENSE)
102 changes: 102 additions & 0 deletions assistant/assistant.go
@@ -0,0 +1,102 @@
package assistant

import (
	"context"
	"strings"

	"github.com/henomis/lingoose/thread"
	"github.com/henomis/lingoose/types"
)

type Assistant struct {
	llm    LLM
	rag    RAG
	thread *thread.Thread
}

type LLM interface {
	Generate(context.Context, *thread.Thread) error
}

type RAG interface {
	Retrieve(ctx context.Context, query string) ([]string, error)
}

func New(llm LLM) *Assistant {
	assistant := &Assistant{
		llm:    llm,
		thread: thread.New(),
	}

	return assistant
}

func (a *Assistant) WithThread(thread *thread.Thread) *Assistant {
	a.thread = thread
	return a
}

func (a *Assistant) WithRAG(rag RAG) *Assistant {
	a.rag = rag
	return a
}

func (a *Assistant) Run(ctx context.Context) error {
	if a.thread == nil {
		return nil
	}

	if a.rag != nil {
		err := a.generateRAGMessage(ctx)
		if err != nil {
			return err
		}
	}

	return a.llm.Generate(ctx, a.thread)
}

func (a *Assistant) RunWithThread(ctx context.Context, thread *thread.Thread) error {
	a.thread = thread
	return a.Run(ctx)
}

func (a *Assistant) Thread() *thread.Thread {
	return a.thread
}

func (a *Assistant) generateRAGMessage(ctx context.Context) error {
	lastMessage := a.thread.LastMessage()
	if lastMessage.Role != thread.RoleUser || len(lastMessage.Contents) == 0 {
		return nil
	}

	query := ""
	for _, content := range lastMessage.Contents {
		if content.Type == thread.ContentTypeText {
			query += content.Data.(string) + "\n"
		} else {
			continue
		}
	}

	searchResults, err := a.rag.Retrieve(ctx, query)
	if err != nil {
		return err
	}

	context := strings.Join(searchResults, "\n\n")

	a.thread.AddMessage(thread.NewUserMessage().AddContent(
		thread.NewTextContent(
			baseRAGPrompt,
		).Format(
			types.M{
				"question": query,
				"context":  context,
			},
		),
	))

	return nil
}
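
To illustrate how the new `Assistant` is meant to be driven, here is a minimal, self-contained usage sketch. The `echoLLM` and `staticRAG` types are hypothetical stubs written only against the `LLM` and `RAG` interfaces defined above; a real application would plug in one of LinGoose's LLM providers and an index-backed RAG instead.

```go
package main

import (
	"context"
	"fmt"

	"github.com/henomis/lingoose/assistant"
	"github.com/henomis/lingoose/thread"
)

// echoLLM is a stand-in for a real provider; it only needs to satisfy the
// assistant.LLM interface. Instead of calling a model, it prints the last
// (RAG-augmented) user message in the thread.
type echoLLM struct{}

func (echoLLM) Generate(_ context.Context, t *thread.Thread) error {
	fmt.Println(t.LastMessage().Contents[0].Data)
	return nil
}

// staticRAG returns canned context instead of querying a vector index.
type staticRAG struct{}

func (staticRAG) Retrieve(_ context.Context, _ string) ([]string, error) {
	return []string{"LinGoose is a Go framework for building AI/LLM applications."}, nil
}

func main() {
	// Build a thread with a single user question.
	t := thread.New()
	t.AddMessage(thread.NewUserMessage().AddContent(
		thread.NewTextContent("What is LinGoose?"),
	))

	// Wire the assistant: the RAG result is injected into the thread
	// via baseRAGPrompt before the LLM is asked to generate.
	a := assistant.New(echoLLM{}).WithRAG(staticRAG{}).WithThread(t)
	if err := a.Run(context.Background()); err != nil {
		panic(err)
	}
}
```
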
8 changes: 8 additions & 0 deletions assistant/prompt.go
@@ -0,0 +1,8 @@
package assistant

const (
	//nolint:lll
	baseRAGPrompt = `You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: {{.question}}
Context: {{.context}}`
)
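
The `{{.question}}` and `{{.context}}` placeholders use Go `text/template` syntax, and `generateRAGMessage` fills them via `Format(types.M{...})`. As a rough sketch of that substitution (using the standard library directly rather than LinGoose's own `Format` helper), the rendered prompt expands like this:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Same placeholder style as baseRAGPrompt, trimmed for brevity.
	const prompt = "Question: {{.question}}\nContext: {{.context}}\n"

	tmpl := template.Must(template.New("rag").Parse(prompt))

	// types.M in LinGoose is a map with string keys; a plain map behaves
	// the same way with text/template.
	_ = tmpl.Execute(os.Stdout, map[string]any{
		"question": "What is the NATO purpose?",
		"context":  "Retrieved passages would go here.",
	})
}
```
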
39 changes: 39 additions & 0 deletions docs/config.yml
@@ -0,0 +1,39 @@
baseurl: https://lingoose.io/
metadataformat: yaml
title: lingoose
enableGitInfo: true
pygmentsCodeFences: true
pygmentsUseClasses: true
canonifyURLs: true

params:
  name: lingoose
  description: Go framework for building awesome AI/LLM applications

menu:
  main:
    - name: Introduction
      url: /
      weight: -10
    - name: Reference
      identifier: reference
      weight: 5
    - name: Legacy
      identifier: legacy
      weight: 6
    - name: pkg.go.dev →
      parent: reference
      url: https://pkg.go.dev/github.com/henomis/lingoose

security:
  funcs:
    getenv:
      - '^HUGO_'
      - 'VERSIONS'
      - 'CURRENT_VERSION'

go_initialisms:
  replace_defaults: false
  initialisms:
    - 'CC'
    - 'BCC'
1 change: 1 addition & 0 deletions docs/content/_introduction.md