feature: multiple x axis to combine different time precisions #405
you're wrong :) you need to fill them with nulls. there is now a utility function that can do this for you, called `uPlot.join`
as i said ..
this still *feels* like a non-ideal solution. the feature would allow a space-time tradeoff
sweet, but at runtime i want to do as little work as possible. one problem: we get more snap points
the complexity of doing anything else will be significant. the overhead does become significant when aligning many completely unaligned datasets of several thousand points each, but in what i think are typical cases it is small; i've tried to make the join function as efficient as possible. i have an unlisted synthetic demo that lets you assess the alignment cost here: https://github.com/leeoniya/uPlot/blob/master/demos/align-data.html. i don't expect real-world cases to be a problem, but i'd be interested to see your actual datasets and how much it costs to align them.
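To make the discussion concrete, here is a minimal sketch of what an alignment/join step does: merge the x values of several (x, y) tables into one sorted, de-duplicated x array and null-fill the gaps. This is NOT uPlot's internal code (uPlot ships `uPlot.join` for this); `alignTables` is a hypothetical name used only for illustration.

```javascript
// Merge several [xs, ys1, ys2, ...] tables onto one shared x axis,
// filling missing samples with null. Illustrative sketch only.
function alignTables(tables) {
  // union of all x values, sorted ascending
  const xs = [...new Set(tables.flatMap(t => t[0]))].sort((a, b) => a - b);
  const idx = new Map(xs.map((x, i) => [x, i]));

  const out = [xs];
  for (const [x, ...ys] of tables) {
    for (const y of ys) {
      const col = new Array(xs.length).fill(null);
      x.forEach((xv, i) => { col[idx.get(xv)] = y[i]; });
      out.push(col);
    }
  }
  return out;
}

// annual series (x in years) and quarterly series
const annual    = [[2000, 2001], [10, 20]];
const quarterly = [[2000, 2000.5, 2001], [1, 2, 3]];
console.log(alignTables([annual, quarterly]));
// → [[2000, 2000.5, 2001], [10, null, 20], [1, 2, 3]]
```

The cost is roughly one pass per column plus one sort of the merged x array, which matches the observation above: cheap for mostly-aligned data, noticeable for many completely unaligned multi-thousand-point series.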
im happy to help .. or do you mean SIGNIFICANT?
simple, im plotting global population data over the last 500 years (or more)
you're welcome to help. but, yes, i think it will be very substantial, as the underlying data format assumptions permeate many parts of the internals. it's not gonna be as simple as tweaking just the pathbuilder, for example. the probability of not breaking many things to get this done is basically zero, imo. the ultimate question is: what gains do you expect in non-artificial cases, and can you prove that this will work robustly and generally? if someone tells me they need better perf on a 10M-pts scatter dataset, the simple answer is that this is not the right library to use - at some point, this just becomes true. so, it's important to evaluate real use-cases, costs and possible gains from this effort.
this is an arbitrary number. is that 500 datapoints? 500 * 52 datapoints? 500 * 365 * 24 * 3600 datapoints? as i said, i would be interested to see how much it costs to align your actual data.
scatter/bubble is much easier (not easy) to solve with a different underlying data structure (as described there), which is the plan. if you'd like to help with that, i think it will be a much better use of time :) i don't plan to work on that for a few more months due to other work, so i cannot promise a timely PR review either, unfortunately.
gonna close this since i don't think there is anything actionable here. feel free to follow up in this thread if you have perf issues with a specific use-case/dataset.
Hey @leeoniya, my use case: I actually use Plotly, but if possible I'd like to switch because of issues there, and this is what's holding me back. Has there been any progress on this in the last years that I just didn't find? Or do I need to update the old data that is already plotted in order to insert new data? Thanks so far & have a great cup of coffee
you will have to preprocess your data so all graphs have the same time resolution, in your case 30Hz
to get 30Hz, repeat all S values 3 times and all T values 6 times
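The repeat-to-a-common-rate step above can be sketched in a few lines. The series names and rates (a 10 Hz "S" stream repeated 3x and a 5 Hz "T" stream repeated 6x, both landing at 30 Hz) follow the comment above; `repeatEach` is an illustrative helper, not a uPlot API.

```javascript
// Repeat each sample n times so a lower-rate series matches a faster master clock.
const repeatEach = (arr, n) => arr.flatMap(v => Array(n).fill(v));

const s10hz = [1, 2]; // 10 Hz → repeat 3x to reach 30 Hz
const t5hz  = [7];    // 5 Hz  → repeat 6x to reach 30 Hz
console.log(repeatEach(s10hz, 3)); // [1, 1, 1, 2, 2, 2]
console.log(repeatEach(t5hz, 6));  // [7, 7, 7, 7, 7, 7]
```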
you can keep a separate data buffer for each device and use `uPlot.join`
Thanks for the fast responses!
Is there any sample I can quickly have a look at?
you can search the demos folder for `uPlot.join`, e.g. https://github.com/leeoniya/uPlot/blob/master/demos/nearest-non-null.html
assume we have multiple time series with different time precisions / resolutions / divisions:
some series are value per year, others are value per month, others are value per week
currently we must preprocess our data to match the highest value frequency (here: value per week)
and repeat values with lower frequencies (annual data: same value for all 52 weeks)
(please tell me im wrong ..)
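The preprocessing described above (resampling an annual series onto a weekly x axis by carrying each value forward) can be sketched as a small step-fill. This is a hypothetical helper for illustration, not something uPlot provides; x values here are plain year fractions rather than real timestamps.

```javascript
// Previous-value (step) fill: for every output x, take the latest input
// sample whose x is <= that output x; null before the first sample.
function forwardFill(xsOut, xsIn, ysIn) {
  let j = -1; // index of the latest input sample seen so far
  return xsOut.map(x => {
    while (j + 1 < xsIn.length && xsIn[j + 1] <= x) j++;
    return j >= 0 ? ysIn[j] : null;
  });
}

const years = [2020, 2021];             // annual x values
const pop   = [100, 110];               // one value per year
const weeks = [2020, 2020.25, 2020.5, 2020.75, 2021, 2021.25];
console.log(forwardFill(weeks, years, pop));
// → [100, 100, 100, 100, 110, 110]
```

This is exactly the "repeat the annual value for all 52 weeks" preprocessing the issue complains about: the repeated values are what produce the excess circles shown below.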
here is a plot of annual and monthly values:
what is ugly here are the circles in the white annual line - there are too many of them
disabling the circles completely is a bad solution
possible workarounds:
use nulls/gaps to encode missing values
show only every N-th circle
(others?)
.. but these still require merging the x values into one axis
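The "show only every N-th circle" workaround can be approximated by computing which indices actually deserve a point marker, e.g. only where the (repeated) value changes. This is an illustrative sketch, not a uPlot option; the resulting index list would then have to be fed into whatever point-drawing hook the library exposes.

```javascript
// Keep an index only where the value differs from its predecessor,
// so runs of repeated upsampled values get a single circle.
const changeIdxs = ys =>
  ys.reduce((acc, y, i) => (i === 0 || y !== ys[i - 1] ? (acc.push(i), acc) : acc), []);

const weeklyPop = [100, 100, 100, 110, 110, 120]; // annual data repeated weekly
console.log(changeIdxs(weeklyPop)); // [0, 3, 5]
```

As the issue notes, this only hides the symptom: the data itself still has to be merged onto one shared x axis first.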
possible solution:
currently, `data[0]` holds the x values, and all other `data[i]` hold y values.
we could introduce a mapping between arrays, mapping x values to y values
or more general, map input values to output values
the default mapping would be
to combine annual and weekly x values, we could then use
or plot functions with multiple inputs
or we extend `opt.series[i]` like
in the future we might need 3D plotting and MISO functions (multi input, single output)
(not sure if MIMO makes much sense)
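A very rough sketch of the proposed mapping idea: let each series carry its own x array plus a mapper from its native x domain onto the shared axis, so annual and weekly data never need to be merged into one array. Everything below (the `xs`/`ys`/`mapX` shape and `projected`) is hypothetical API invented for this sketch, not uPlot.

```javascript
// Each series keeps its own x values and a function mapping them
// onto a shared axis domain (here: fractional years).
const mySeries = [
  { xs: [2020, 2021, 2022], ys: [100, 110, 120], mapX: x => x },            // annual
  { xs: [0, 1, 2, 3],       ys: [5, 6, 7, 8],    mapX: w => 2020 + w / 52 } // weekly index
];

// project every series onto the shared axis domain before drawing
const projected = mySeries.map(s => ({ axisXs: s.xs.map(s.mapX), ys: s.ys }));
console.log(projected[0].axisXs); // [2020, 2021, 2022]
```

The space-time tradeoff mentioned earlier in the thread shows up here: no repeated values are stored, but every draw (and every cursor snap) now has to go through `mapX` instead of indexing one shared x array.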
@leeoniya please share your thoughts so i can make a better PR : )