throttling technique for data cube (space + time) requests #5
Coral Reef Watch (CRW) is a good test case because it has daily data.
Let's try this new data cube throttling approach with this salinity dataset having …
bbest added a commit that referenced this issue on Dec 19, 2023.
bbest added a commit that referenced this issue on Mar 2, 2024.
Hi Ben, I know things are already well along, but I came across Roy Mendleson's script griddap_split and I wonder if there is anything there worth copying over here, or vice versa?
ERDDAP servers have limits. Ideally, we have a simple threshold like `max_points = 1,000,000`, and then the metadata for the gridded dataset gets interrogated so we:

- get `n_points_grid` (points per grid slice) and `n_slices_time` (number of time slices)
- split `rerddap::griddap()` requests into `n_batches = (n_points_grid * n_slices_time) %/% max_points + 1` batches (assuming a remainder)

Note the parentheses around the product: in R, `%/%` binds more tightly than `*`, so the unparenthesized form would compute `n_points_grid * (n_slices_time %/% max_points) + 1` instead.
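A minimal sketch of the batching arithmetic, assuming the request is split along the time dimension. The threshold and the `n_batches` formula come from the issue; the example grid size, slice count, and the commented-out `rerddap::griddap()` loop (dataset id, dimension ranges) are hypothetical placeholders:

```r
max_points    <- 1e6     # simple server-friendly threshold (from the issue)
n_points_grid <- 100000  # hypothetical: points per time slice (lat x lon)
n_slices_time <- 365     # hypothetical: daily data for one year

# issue's formula, parenthesized; integer division plus 1 covers a remainder
n_batches <- (n_points_grid * n_slices_time) %/% max_points + 1

# time slices handled by each batch
slices_per_batch <- ceiling(n_slices_time / n_batches)

# each batched request then stays at or under max_points
stopifnot(slices_per_batch * n_points_grid <= max_points)

# One rerddap::griddap() call per time window (sketch only; a real call
# needs a dataset id, times vector, and lat/lon ranges from the metadata):
# for (b in seq_len(n_batches)) {
#   i0 <- (b - 1) * slices_per_batch + 1
#   i1 <- min(b * slices_per_batch, n_slices_time)
#   nc <- rerddap::griddap(
#     dataset_id,
#     time      = c(times[i0], times[i1]),
#     latitude  = lat_range,
#     longitude = lon_range)
# }
```

With these example numbers, 36,500,000 total points at a 1,000,000-point cap gives 37 batches of 10 daily slices each.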