feat(tests): use testcontainers-go's Ollama module for running tests with Ollama #1079

Draft · wants to merge 10 commits into main
Conversation

mdelapenya (Contributor)
  • fix(llms:ollama): be more explicit in the test prompt
  • chore(llms:ollama): support running ollama tests with testcontainers-go's Ollama module
  • chore(stores:milvus): support running milvus tests with testcontainers-go's Ollama module
  • chore(tests): use require as much as possible
  • fix(stores:redis): update docs count in tests
  • chore(stores:redis): support running Redis tests with testcontainers-go's Ollama module
  • fix(stores:weaviate): use proper query attributes when creating test store
  • chore(stores:weaviate): refine assertion in test
  • chore(stores:weaviate): support running Weaviate tests with testcontainers-go's Ollama module

PR Checklist

  • Read the Contributing documentation.
  • Read the Code of conduct documentation.
  • Name your Pull Request title clearly, concisely, and prefixed with the name of the primarily affected package you changed according to Good commit messages (such as memory: add interfaces for X, Y or util: add whizzbang helpers).
  • Check that there isn't already a PR that solves the problem the same way to avoid creating a duplicate.
  • Provide a description in this PR that addresses what the PR is solving, or reference the issue that it solves (e.g. Fixes #123).
  • Describes the source of new concepts.
  • References existing implementations as appropriate.
  • Contains test coverage for new functions.
  • Passes all golangci-lint checks.

What does this PR do?

It uses Testcontainers for Go's Ollama module, enabling the execution of the integration tests that interact with a local LLM. Before this PR, tests were skipped when 1) there was no OpenAI key in the env, or 2) OLLAMA_URL/OLLAMA_MODEL were not set in the env.

This PR removes that limitation by running the Ollama module in a container with a small LLM (llama3.2:1b), which fits easily on CPUs instead of requiring GPUs.

To run the Docker images for Ollama, we are using mdelapenya's Docker Hub, where I push pre-baked Ollama images with the models already loaded, so the build time is reduced to just the pull time. Please check https://hub.docker.com/r/mdelapenya/llama3.2, and the repository that generates the Docker images: https://github.com/mdelapenya/dockerize-ollama-models/

This PR tries to honour the current approach: if the env vars exist, they will be used, so users can still test with their own local environment if needed.
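
For illustration, here is a minimal sketch of what such a test helper could look like, assuming testcontainers-go's Ollama module; the helper name and the image tag are illustrative, not the PR's actual code:

```go
package ollama_test

import (
	"context"
	"os"
	"testing"

	tcollama "github.com/testcontainers/testcontainers-go/modules/ollama"
)

// ollamaTestEnv is a hypothetical helper illustrating the fallback described
// above: honour OLLAMA_URL/OLLAMA_MODEL when set, otherwise start a
// pre-baked Ollama container via testcontainers-go's Ollama module.
func ollamaTestEnv(t *testing.T) (url, model string) {
	t.Helper()

	if u, m := os.Getenv("OLLAMA_URL"), os.Getenv("OLLAMA_MODEL"); u != "" && m != "" {
		return u, m // the user's own local environment wins
	}

	ctx := context.Background()

	// The image tag is illustrative; the PR pulls pre-baked images with the
	// model already loaded from https://hub.docker.com/r/mdelapenya/llama3.2.
	c, err := tcollama.Run(ctx, "mdelapenya/llama3.2:latest")
	if err != nil {
		t.Fatalf("starting ollama container: %v", err)
	}
	t.Cleanup(func() { _ = c.Terminate(ctx) })

	u, err := c.ConnectionString(ctx) // e.g. http://localhost:32768
	if err != nil {
		t.Fatalf("getting connection string: %v", err)
	}

	return u, "llama3.2:1b"
}
```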

Why is it important?

It adds the ability to test locally and avoids skipping lots of tests in CI.

Other concerns

Weaviate tests are failing for me with the model I chose; I'm not sure whether they would work as expected with OpenAI. They fail in the chains call: the relevant documents don't seem to be used in the chain call, so the model always responds with "I don't know", making the tests fail. Can anybody check with OpenAI?

…go's Ollama module

It uses a small model (llama3.2:1b) so that it performs well on CPUs.
…s-go's Ollama module

It uses a small model (llama3.2:1b) so that it performs well on CPUs.
Given the threshold is high, it is expected to receive fewer documents.
…go's Ollama module

It uses a small model (llama3.2:1b) so that it performs well on CPUs.
…iners-go's Ollama module

It uses a small model (llama3.2:1b) so that it performs well on CPUs.
@mdelapenya (Contributor, Author)
@codefromthecrypt @devalexandre I think this would close #994. Could you please take a look?

@codefromthecrypt (Contributor)
On the benefits of this: not only do we skip fewer tests, but we also get to understand failure cases that are common for local model-serving runtimes. This includes aspects of the service (e.g. Ollama only recently supports streaming with tool calls) and also of open models.

I use all-minilm:33m and qwen2.5:0.5B in simple tests, or the normal-size qwen2.5 if the test is a little hard to control.

Also, there's an alternative to testcontainers-go, which is running Ollama directly. While that's not the aim of this PR, it has some pros and cons, and is possibly something to note as an alternative considered.
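
For reference, a rough sketch of that alternative, assuming a locally installed `ollama` binary on PATH; the helper name is hypothetical and readiness/health checks are omitted:

```go
package ollama_test

import (
	"os/exec"
	"testing"
)

// startLocalOllama is a hypothetical sketch of the alternative mentioned
// above: managing a natively installed ollama process instead of a container.
// It assumes the `ollama` binary is on PATH and skips readiness checks.
func startLocalOllama(t *testing.T) {
	t.Helper()

	cmd := exec.Command("ollama", "serve") // serves on localhost:11434 by default
	if err := cmd.Start(); err != nil {
		t.Skipf("ollama binary not available: %v", err)
	}
	t.Cleanup(func() { _ = cmd.Process.Kill() })
}
```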

@mdelapenya (Contributor, Author)
@codefromthecrypt you are probably interested in testcontainers/testcontainers-go#2906 😉

@codefromthecrypt (Contributor)
@mdelapenya thanks, I subscribed. Not sure if they will accept it, or whether there's prior art around native process management. However, it does feel like it could work around the lifecycle issue if accepted.
