Add "save a trace" functionality. #1093
Neat idea. One way could be to save them off to a user profile (even browser storage), since traces are so small. Ex, the result of zipkin/api/v1/trace/trace_id becomes searchable locally. Just an idea. PS this is only an issue in cassandra at the moment, since the other storage backends don't support TTLs anyway. It might be tricky to do this portably on the storage side, since we don't have a portable TTL feature. That doesn't preclude custom logic; this is just an explanation.
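To make the browser-storage idea concrete, here's a minimal sketch (the endpoint path comes from the comment above; the storage key scheme and function names are hypothetical):

```typescript
// Hypothetical sketch: fetch a trace by id from the v1 API and keep a local
// copy in browser storage, so it survives server-side TTL expiry.
async function saveTraceLocally(traceId: string): Promise<void> {
  const res = await fetch(`/zipkin/api/v1/trace/${traceId}`);
  if (!res.ok) throw new Error(`trace ${traceId} not found: ${res.status}`);
  const spans = await res.json(); // a trace is a JSON array of spans
  localStorage.setItem(`zipkin.trace.${traceId}`, JSON.stringify(spans));
}

function loadTraceLocally(traceId: string): unknown[] | null {
  const raw = localStorage.getItem(`zipkin.trace.${traceId}`);
  return raw ? JSON.parse(raw) : null;
}
```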
Since the UI has been decoupled from the server side, another possible solution is a Save As, so the user can save the trace JSON as a file on local disk and later load it back into the UI. It's especially useful considering that tracing is ultimately used to troubleshoot perf issues: one could save a trace, attach it to a ticket, and someone else could later load it and see it in the UI. We already have the JSON button, so Save As is there, but we don't have a Load function in the UI.
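For the missing Load side, a sketch of what a file-based loader might look like (the render callback is a hypothetical stand-in for whatever the UI would use to display spans):

```typescript
// Hypothetical "Load" counterpart to the JSON / Save As button: read a trace
// file the user saved earlier and hand its spans to the UI for rendering.
function loadTraceFromFile(file: File, render: (spans: unknown[]) => void): void {
  const reader = new FileReader();
  reader.onload = () => render(JSON.parse(reader.result as string));
  reader.onerror = () => console.error("could not read trace file", reader.error);
  reader.readAsText(file);
}
```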
huh, I didn't realize, thought it was in already.
PS there's now a download button on a trace.
We had a note in #1222 about making a separate non-bucketed index for Elasticsearch and moving docs to it. There's no TTL support in MySQL anyway, so it's a noop there. Cassandra might take some thinking. We can address this by making it possible to query the "infinite" index routinely, and we could also make a "request save" api, which either moves the trace there or returns a message if unsupported. cc @openzipkin/elasticsearch @openzipkin/cassandra
also in MySQL I suppose we could double the table count to provide an "infinite" index separate from the routine one (cc @jcarres-mdsol)
Cassandra technical note: this will leave some old sstables around and prevent them from being wiped off disk in one go (tombstone compactions will instead be required to clean them out). But I can't see this being a visible issue to the zipkin operator.
We'd likely want some signal to imply that the trace is special. One way is to add a binary annotation (tag) to the saved trace, like "representative" -> "fastestest". We wouldn't care what the values are, but it allows a zipkin query on "representative" to include them, and it would also allow any UI to distinguish them from something else (cc @rogeralsing). This would also permit those doing modeling or analysis research to ask folks for representative traces in some easy-to-grab fashion (cc @adrianco @rfonseca)
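For illustration, the marker could look like this as a v1 binary annotation (the key and value are the examples from above, not an agreed convention, and the endpoint is hypothetical):

```typescript
// Example shape of the proposed marker as a v1 binary annotation. Only the
// key would matter for queries; the value is arbitrary.
const savedMarker = {
  key: "representative",
  value: "fastestest",
  endpoint: { serviceName: "zipkin-ui" }, // hypothetical tagging service
};
// Appended to a span's binaryAnnotations, a query with
// annotationQuery=representative would then match the saved trace.
```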
Moving discussion from #1222 - briefly, I'm trying to design a way for generic saving and eviction of traces, and I proposed adding some methods. I could see MySQL having a ttl column in the spans table that could be used, Elasticsearch could just drop daily indexes, etc. I'm not a huge fan of this as it is backwards-incompatible with existing implementations, but I'm throwing it out there.
I thought about this a bit over the weekend. Here's what that ended up as.

I have an alternative proposal: do it all on the client. Instead of creating a second storage tier in our api, simply deploy twice. Ex, use one keyspace/DB for the transient trace depot and another for the "permanent" one: index=zipkin (transient) and index=zipkin-4ever (permanent). The second is fronted by vanilla zipkin-servers that don't run any collectors except http. The act of "saving a trace" is then just taking the json from the transient one and POSTing it to the permanent one. Someone could later change the zipkin-ui (or write some sort of plugin) to query across both, and/or create an automatic flow, such as a button which, when clicked, posts to the "permanent" zipkin. cc @rogeralsing @eirslett

This automatically solves any future needs around retention, since the same mechanics can be used. The only difference is that in the case of cassandra, the keyspace should be altered prior to use, notably to remove the TTL (or set it to a very long value). The best win is that there's no code impact on server components. They remain simple, and probably more "microservice", as a result.

thoughts?
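A minimal sketch of that flow, assuming two vanilla zipkin-servers reachable at illustrative host names, using the v1 read endpoint and the v1 http collector:

```typescript
// "Saving a trace" as a pure client-side copy between two zipkin deployments.
const TRANSIENT = "http://zipkin:9411";       // normal, TTL'd deployment
const PERMANENT = "http://zipkin-4ever:9411"; // http-collector-only deployment

async function archiveTrace(traceId: string): Promise<void> {
  // 1. take the json from the transient zipkin
  const res = await fetch(`${TRANSIENT}/api/v1/trace/${traceId}`);
  if (!res.ok) throw new Error(`fetch failed: ${res.status}`);
  const spans = await res.json();

  // 2. POST it to the permanent zipkin's http collector
  const post = await fetch(`${PERMANENT}/api/v1/spans`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(spans),
  });
  if (!post.ok) throw new Error(`archive failed: ${post.status}`);
}
```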
Sounds good to me if we take the plugin approach for zipkin-ui; that way people have the most flexibility to tie this together with their usage concerns and make their own infrastructure choices with the greatest ease of use.
@adriancole making favoriting/saving a trace part of the client, with the client copying the traces between the 2 stores, is my preferred approach. However, I'd prefer making this a feature of the current backend instead of having separate clusters; I think that way the backend would be easier to operate. Multiple clusters increase operational overhead in large organizations.
Thanks for the feedback. I have one question on your comment: if by backend you mean the zipkin servers, that would really complicate things.
FYI "permanent traces" will eventually clash, even if unlikely for some. While not a strict dependency, this is certainly related to the 128bit trace id work #1262 |
ps the original version of zipkin had a "favorite" button (trivia) |
As of Zipkin 2.21, trace archival is now supported: https://github.com/openzipkin/zipkin/tree/master/zipkin-server#trace-archival. [Screenshot: the 'Archive Trace' button appears on the trace page once everything is configured.] Note there is an ongoing discussion about whether queries should fan out to archival instances.
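For reference, a sketch of what that configuration looks like, based on the archival settings described in the linked README (URLs are illustrative; see the README for the authoritative variable descriptions):

```
# Point the UI's Archive button at a second, long-retention zipkin's
# http collector, and tell it where archived traces can be viewed.
ZIPKIN_UI_ARCHIVE_POST_URL=https://longterm-zipkin/api/v2/spans
ZIPKIN_UI_ARCHIVE_URL=https://longterm-zipkin/zipkin/trace/{traceId}
java -jar zipkin.jar
```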
It'd be really nice if individual traces could be tagged, through the UI, so they don't age out of Cassandra.