proposal: Moving Caching part of query-frontend to separate project. #1672
Comments
I'm actually working on something very similar: I'm extracting everything Cortex-related from the frontend package to make it reusable in Loki.
What exactly? (:
Yeah, I'm about to introduce cortex-frontend into an existing Thanos setup, and the Thanos-native "partial response" feature, which Cortex doesn't support, introduces some complexity into the configuration. I understand that cortex-frontend claims to be Prometheus API compatible and partial_response is somewhat of a Thanos extension to that API, so it isn't supposed to be supported out of the box, but it'd be nice to have.
My first step is to make the frontend package fully agnostic of backends.
The idea is to have backend-specific middleware, hooked up on startup, so Loki can have its own way of splitting queries but can still use the same retry, transport and queue mechanisms as Cortex. For the proposal here, where we want to re-use the caching middleware but create new Thanos-specific ones (which may need to use the Thanos Store API), I believe the work I'm doing should also help. However, I totally agree that having this in another project/repo would be easier for everyone. My only concern is: can we keep Loki on the table? E.g. I should be able to create middleware that is not compatible with the Prometheus API.
@homelessnessbo it seems like my work would be beneficial to you too: basically, use the frontend package with a non-compatible Prometheus API.
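The backend-agnostic middleware idea described above can be sketched roughly as follows. All names here (Handler, Middleware, Chain) are hypothetical and illustrative, not the actual Cortex or Loki API; the point is only that caching, retries and splitting can each wrap a generic handler without knowing about the backend:

```go
package main

import "fmt"

// Handler executes one query request and returns a response.
type Handler interface {
	Do(req string) string
}

// HandlerFunc lets a plain function satisfy Handler.
type HandlerFunc func(req string) string

func (f HandlerFunc) Do(req string) string { return f(req) }

// Middleware wraps a Handler with extra behavior (caching, retries, splitting).
type Middleware func(Handler) Handler

// Chain composes middlewares so the first one listed becomes the outermost.
func Chain(mws ...Middleware) Middleware {
	return func(next Handler) Handler {
		for i := len(mws) - 1; i >= 0; i-- {
			next = mws[i](next)
		}
		return next
	}
}

// tag returns a middleware that records its name around the inner response,
// standing in for real caching/retry logic.
func tag(name string) Middleware {
	return func(next Handler) Handler {
		return HandlerFunc(func(req string) string {
			return name + "(" + next.Do(req) + ")"
		})
	}
}

func main() {
	base := HandlerFunc(func(req string) string { return req })
	h := Chain(tag("retry"), tag("cache"))(base)
	fmt.Println(h.Do("up")) // retry(cache(up))
}
```

A backend such as Loki would then plug its own splitting middleware into the same chain while reusing the shared retry, transport and queue pieces.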
Sorry, I was out on holidays for a bit.
@cyriltovena that's definitely a good question: how "generic" do we want to be? For logs the characteristics are totally different: different APIs, and the format in the cache backend will probably be totally different too. The risk with being too generic is that we might end up with yet another L7 proxy (like Envoy) ;p So the question is how much we can reuse.
So we don't want to use StoreAPI directly. In the same way, in Cortex, this caching middleware does not talk (queue) to ingesters or chunk stores directly. In both projects there is something like
Yup, but for this our potential

To sum up: bringing Loki support fully in here might be difficult, but I'm not sure; maybe we can be generic enough, or maybe we can allow reusing only some key middlewares. (: @gouthamve @tomwilkie @codesome @brancz any thoughts?
This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
This would still be nice. (: We are just about to start putting more work and design into this piece from the Thanos perspective. BTW, how is query parallelization/sharding going?
OK, we ultimately bumped into a bit of an unexpected issue, which is "confusion" (: TL;DR: from the Thanos user side it's quite hard to deploy the Cortex frontend, as it's a bit inconsistent with what we have for Thanos (for example, its configuration), so it's quite confusing for the community. Still, we want to use the Cortex code for it, so we decided to create a new Thanos component. We will make sure we contribute more to the Cortex frontend; it needs some care for sure (downsampling, subqueries and more).
Context: thanos-io/thanos#2454
Hi 👋
A month ago @tomwilkie merged a PR that makes query-frontend capable of caching responses for queries against any Prometheus API. Details were presented at the Prometheus London Meetup.

Now, this is an amazing piece of work, as it allows simple and clear Cortex response caching (with day splitting!) to be used against any Prometheus-based backend. Requests against metric backends are often expensive, have small result outputs, and are simultaneous and repetitive, so it makes sense to treat such a caching component as a must-have, even for vanilla Prometheus. As the Thanos maintainers, we have been looking for exactly something like this for some time. Overall, it definitely looks like both Cortex and Thanos are trying to solve a very similar goal.
From the Thanos side, we want to make it the default caching solution that we recommend, document and maintain.
However, such caching is still heavily bound to Cortex. It has quite a complex queuing engine that was already proposed to be extracted from the caching. I believe that splitting caching into a separate project (promcache?), in some common org like https://github.com/prometheus-community, can have many advantages around contributing, clarity and adoption. I enumerated some benefits further down.

Proposal

- Move query-frontend caching logic to a separate Go module (plus a cmd to run it), e.g. https://github.com/prometheus-community/promcache
- Deprecate caching in query-frontend or just point to query-frontend (without caching)

If we agree on this, we (the Thanos team) are happy to spin this project up, prepare the repo, Go module and initial docs, and extract the caching logic from query-frontend. Then we can focus on embedded caching in existing components like Querier or Query-frontend and use promcache as a library if needed.

Benefits of moving the caching part of query-frontend into a separate project?

- Easier collaboration on promcache across both the Thanos and Cortex teams.

What could be missing in the current query-frontend caching layer?

Initial Google doc proposal.
Thanks, @gouthamve for the input so far!
cc @bboreham @tomwilkie and others (: What do you think?