Appetite for a query language? #284
Yes! This is definitely something we want to add. Thanks for opening an issue about it, as it's something we need to track.
As there are many things we don't understand yet, we want to write an in-depth design doc discussing various details of a query language in the coming months. For now, we are probably going to focus a bit more on persistent storage.
Fun fact: we already have a language and a parser; it's just very small right now. It's how autocompletion works today. I always imagined there would be an "advanced" mode that is just a plain query input à la Prometheus, one that doesn't use any of the guiding UI elements. Let's use this issue as a place to collect use cases. My top use case, which I cannot do today: I already know the name of the function I want to optimize (for example, through distributed tracing), so I want to see all data merged that includes stack traces containing that function, visualized as a flamegraph.
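To make the "merge everything containing a known function" use case concrete, here is a minimal sketch of what such a filter-then-aggregate step could do. This is not Parca's actual implementation; the data shape (stacks as tuples of function names with a sample value) and all names are invented for illustration.

```python
# Hypothetical sketch: merge all stack traces that contain a given
# function name, the way a "stack contains function" filter might work.
from collections import Counter

def merge_traces_containing(samples, function_name):
    """samples: list of (stack, value) pairs, where stack is a tuple of
    function names (leaf first). Returns cumulative value per stack,
    keeping only stacks that include function_name anywhere."""
    merged = Counter()
    for stack, value in samples:
        if function_name in stack:
            merged[stack] += value
    return merged

# Invented example data: three sampled stacks with CPU sample counts.
samples = [
    (("runtime.mallocgc", "makeSlice", "main.handler"), 10),
    (("runtime.mallocgc", "append", "main.worker"), 5),
    (("syscall.read", "io.Copy", "main.handler"), 7),
]
print(merge_traces_containing(samples, "runtime.mallocgc"))
```

The merged result keeps only the two stacks that pass through `runtime.mallocgc`; a flamegraph renderer would then consume this aggregated map.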
Raw thoughts, and it's perfectly possible I'm completely wrong (thoughts still developing): I think function selection should be a secondary filter of some sort. My thinking is that this would let us do something like:
Not saying that I necessarily like this notation, but I think it demonstrates why I think it should be a "second step" filter.
One thing I found hard to understand is the Query data model in Parca. In this case, what's the meaning of
Yeah, I think it can be confusing because query_range and query don't have the same relationship as in Prometheus, but I do think the
Makes sense. One additional use case might be release qualification / rollout qualification. This might be a bit far-fetched, but in a canary judge I would like to know whether the canary is (significantly) less efficient than before (or than the other running tasks). Questions:
I'd like to think we can get quite far knowing the duration, period, and samples, and using those for relative comparisons, but I agree that the moment the canary is not an equal participant in the system, it gets significantly harder to judge. I think the need for weighting is inevitable.
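As a sketch of the "relative comparison from duration, period, and samples" idea: raw sample counts can be normalized into CPU-seconds per wall-clock second, so a canary and a baseline observed for different lengths of time remain comparable. The field names and numbers below are assumptions for illustration, not Parca's data model.

```python
# Hedged sketch: normalize on-CPU sample counts by sampling period and
# observation duration so two targets can be compared relatively.
def cpu_seconds_per_second(sample_count, period_ns, duration_s):
    # Each sample represents period_ns nanoseconds of CPU time.
    cpu_seconds = sample_count * period_ns / 1e9
    return cpu_seconds / duration_s

# Invented numbers: baseline observed for 10 minutes, canary for 1 minute,
# both sampled every 10ms of CPU time.
baseline = cpu_seconds_per_second(sample_count=9_000, period_ns=10_000_000, duration_s=600)
canary = cpu_seconds_per_second(sample_count=1_800, period_ns=10_000_000, duration_s=60)
print(canary / baseline)  # canary burns 2x the CPU per wall-clock second
```

As the comment thread notes, this breaks down once the canary is not an equal participant (e.g. receives less traffic), which is where explicit weighting would have to come in.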
I think some things are starting to crystallize for me. Primarily: the language should revolve around selection, aggregation, and manipulation of stack traces, as opposed to treating "profiles" as the unit (stack traces with a selector attached to them are the unit instead). If we think of it that way, there is no longer "merging or no merging"; everything becomes an aggregation of stack traces, either at a specific point in time or across time. Happy little accident that this is how the selectors have happened to work so far. A couple of things, in addition to the above, that I think we need to be able to express (some of these require changes to the general querying UX, not just a query language, but I think the two go hand in hand):
Any combination of these should be diff-able against each other.
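The "stack traces as the unit" framing above can be sketched in a few lines: every query becomes an aggregation over timestamped (stack, value) samples, at a point or across a range, and any two aggregations are diff-able. All data shapes and names here are invented for illustration.

```python
# Sketch: aggregate stack-trace samples over a time window, then diff
# two aggregations (e.g. before vs. after a rollout).
from collections import Counter

def aggregate(samples, t_start, t_end):
    """samples: list of (timestamp, stack, value). Sums values per
    stack for samples falling inside [t_start, t_end]."""
    agg = Counter()
    for ts, stack, value in samples:
        if t_start <= ts <= t_end:
            agg[stack] += value
    return agg

def diff(a, b):
    # Positive: stack grew in b relative to a; negative: it shrank.
    return {s: b.get(s, 0) - a.get(s, 0) for s in set(a) | set(b)}

# Invented samples at two timestamps.
samples = [
    (100, ("f", "main"), 4),
    (100, ("g", "main"), 2),
    (200, ("f", "main"), 9),
]
before = aggregate(samples, 0, 150)
after = aggregate(samples, 151, 300)
print(diff(before, after))
```

Because both operands are just aggregations of stack traces, "any combination should be diff-able" falls out naturally: a single-timestamp aggregation and a range aggregation have the same shape.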
Agree with all of the above ^^ I would also love to see how the Parca query language can be used to- let me know if that does not make sense :)
Yes @brancz, that makes sense! I was suggesting this as something we can consider in the mid-term as the project matures. There may also be a case for seeing whether other tools like Grafana would be open to extending in this direction, to complement Parca's ability to be a great datastore (profiling could be a great add-on there from their point of view too). I have just started to use Parca, so take my suggestions with a grain of salt :) On [1], my thought was that we could look at the ability to measure things like time spent in mutex contention or locks, or ttot (in Python) spent on a function over some cycles. We could use this together with alerts to highlight regressions or some bad state the code led into. We will have more concrete ideas here as we start using this more!
@javierhonduco and I just had a conversation about the use case for parca-dev/parca-agent#1001
One use case I would like to experiment with is being able to answer questions across a larger set of deployments to drive optimization efforts. Some of the queries might be along the lines of:
The result would be a flat report rather than a flamegraph. I wondered whether to approach this by introducing a query language. This requires more thought, but at a high level something like this:
Or something more advanced, like finding the binaries that allocate the most memory in a specific function?
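To ground the fleet-wide "flat report" idea, here is a minimal sketch of the advanced variant: ranking binaries by memory attributed to one function across many deployments. The data shape (binary name, stack, allocated bytes) and all example values are invented for illustration and are not Parca's actual model.

```python
# Hypothetical sketch: which binaries allocate the most memory inside a
# given function, reported as a flat, sorted table rather than a flamegraph.
from collections import defaultdict

def bytes_in_function_by_binary(samples, function_name):
    """samples: list of (binary, stack, alloc_bytes). Returns a flat
    report of (binary, total_bytes) sorted by descending allocation,
    counting only stacks that pass through function_name."""
    totals = defaultdict(int)
    for binary, stack, alloc_bytes in samples:
        if function_name in stack:
            totals[binary] += alloc_bytes
    return sorted(totals.items(), key=lambda kv: -kv[1])

# Invented fleet data from two binaries.
samples = [
    ("api-server", ("compress", "handle"), 4096),
    ("worker", ("compress", "run"), 10240),
    ("api-server", ("parse", "handle"), 512),
]
print(bytes_in_function_by_binary(samples, "compress"))
```

A query language could express the same thing declaratively (select by function, group by binary, sum, sort), while the storage engine executes something like the loop above.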