Add RAG blogpost announcement #777

Merged 4 commits on Jan 15, 2024
Binary file added docs/art/posts/rag_finetuning.png
43 changes: 43 additions & 0 deletions docs/blog/posts/2024-15-01-RAG_finetuning_with_Fondant.md
@@ -0,0 +1,43 @@
---
date:
created: 2024-01-15
authors:
- mrchtr
---

# Let's tune RAG pipelines with Fondant

Retrieval Augmented Generation (RAG) has quickly become the go-to architecture for providing large
language models (LLMs) with specific knowledge. Optimizing a custom setup, however, can take days of
searching for the right set of parameters and system configuration.

We have created an example use case to showcase how you can enhance your RAG setup by using Fondant.
Check out the resources:

- [GitHub repository](https://github.com/ml6team/fondant-usecase-RAG)
- [Blog post](https://medium.com/)

<!-- more -->

Off-the-shelf solutions might be easy to set up for a quick proof of concept, but their performance
is usually insufficient for production usage since they are not adapted to the complexities of your
specific use case.

By employing several different methods in an iterative manner, we can more than double the accuracy
of a RAG system and maximize its performance. We have built two Fondant pipelines that can assist
you in finding the optimal combination of parameters for your unique setup.
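The iterative tuning idea above boils down to a parameter sweep: index your documents under each
candidate configuration, score the resulting retrieval quality, and keep the best setup. The sketch
below illustrates this in plain Python; it is not Fondant's actual API, and `build_rag_index` and
`evaluate_accuracy` are hypothetical stand-ins for the real indexing and evaluation pipelines.

```python
# Minimal illustration of a RAG parameter sweep. This is NOT Fondant's API:
# build_rag_index and evaluate_accuracy are toy placeholders for the real
# indexing and evaluation pipelines.
from itertools import product


def build_rag_index(docs, chunk_size, overlap):
    """Split documents into overlapping character chunks (toy indexing step)."""
    index = []
    step = max(chunk_size - overlap, 1)
    for doc in docs:
        for start in range(0, len(doc), step):
            index.append(doc[start:start + chunk_size])
    return index


def evaluate_accuracy(index, top_k):
    """Placeholder metric; a real setup would score retrieval against a test set."""
    return 1.0 / (1 + abs(len(index) - 40)) + 0.01 * top_k


docs = ["lorem ipsum " * 50, "dolor sit amet " * 40]

best_score, best_config = None, None
for chunk_size, overlap, top_k in product([128, 256, 512], [0, 32], [2, 5]):
    index = build_rag_index(docs, chunk_size, overlap)
    score = evaluate_accuracy(index, top_k)
    if best_score is None or score > best_score:
        best_score = score
        best_config = {"chunk_size": chunk_size, "overlap": overlap, "top_k": top_k}

print(best_config)
```

In practice, the Fondant pipelines in the linked repository handle this loop at scale; the point of
the sketch is only the shape of the search: enumerate configurations, evaluate each one, keep the
winner.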

![RAG finetuning pipelines](../../art/posts/rag_finetuning.png)

In the [example repository](https://github.com/ml6team/fondant-usecase-RAG), you can find notebooks
that will help you customize your setup. More information can be found in
the [blog post](https://medium.com/).

6 changes: 3 additions & 3 deletions docs/overrides/main.html
Expand Up @@ -2,9 +2,9 @@

{% block announce %}
<p style="text-align: center">
-        Fondant 0.8 is out: Simplification, Sagemaker, RAG, and more!
-        <a href="https://fondant.ai/en/latest/blog/2023/12/13/fondant-08-simplification-sagemaker-rag-and-more/"
-           style="color: white; text-decoration: underline">Read more</a>
+        Let's tune RAG pipelines with Fondant.
+        <a href="[TODO MEDIUM LINK]"
+           style="color: white; text-decoration: underline">Check out our recent blog post!</a>
</p>
{% endblock %}