Reconcile chunk_limit and single_download_limit #332
Conversation
* Also have check_chunk_limit return the chunk limit to avoid repeated calls to bcdc_single_download_limit.
* Use check_chunk_limit in paginated requests
* Don't use a default
* Return chunk_limit early if chunk_value is NULL
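The bullets above can be sketched as a single helper. This is an illustration of the behaviour described in the commit messages, not the package source; the option names follow this thread, and the 500-record fallback is an assumption based on the server limit discussed in the PR description.

```r
check_chunk_limit <- function() {
  chunk_value <- getOption("bcdc.chunk_limit")
  single_limit <- getOption("bcdc.single_download_limit", default = 500)

  # Return early when no chunk limit is set: fall back to the
  # single download limit rather than a hard-coded default.
  if (is.null(chunk_value)) {
    return(single_limit)
  }

  # A chunk larger than the single download limit would request more
  # records per page than the server allows.
  if (chunk_value > single_limit) {
    stop("bcdc.chunk_limit (", chunk_value,
         ") exceeds bcdc.single_download_limit (", single_limit, ")",
         call. = FALSE)
  }

  chunk_value
}
```

Returning the limit from the check means callers can use the value directly instead of consulting the option a second time.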
All passing now!
I honestly can't remember why I had these separate other than to trigger pagination. Thanks for doing this.
Thanks both! I guess the remaining question is - should I remove one of the options (likely
Is there ever a use case where a user would want to set a lower
Not really - as it can be done with the chunk limit.
I think that if we do remove it, we should probably properly deprecate it. I'd advocate just leaving it for now rather than risk breaking anyone's code.
* wrapper function to consult the option and warn once per session if it is set.
* Use check_chunk_limit throughout to be more efficient in checking both options
* Update documentation
* remove context()
* update expect_is() to expect_s3_class() and expect_type()
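The "warn once per session" wrapper mentioned in the first bullet could look something like the sketch below. The environment and function names are illustrative assumptions, not the package internals, and the 500-record fallback is taken from the server limit mentioned in the PR description.

```r
# Session-level state for one-time warnings (name is an assumption).
._bcdc_env <- new.env(parent = emptyenv())

single_download_limit <- function() {
  limit <- getOption("bcdc.single_download_limit")

  # Warn once per session if the user has set the option themselves,
  # since a user-set value may conflict with the server's limit.
  if (!is.null(limit) && !isTRUE(._bcdc_env$warned_sdl)) {
    warning("You have set the bcdc.single_download_limit option; ",
            "it must not exceed the server's record limit.",
            call. = FALSE)
    ._bcdc_env$warned_sdl <- TRUE
  }

  if (is.null(limit)) limit <- 500  # assumed server default
  limit
}
```

Keeping the flag in a package-local environment (rather than a global variable) means the warning resets naturally with each new session.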
Summary of new changes @boshek @stephhazlitt:
Just a couple of places where I think some comments would help.
Co-authored-by: Stephanie Hazlitt <[email protected]>
#330 surfaced the change to the single download limit on the WFS server to 500 records (down from 10,000). This exposed a potential incompatibility between the `bcdc.single_download_limit` and `bcdc.chunk_limit` options, where it was possible for the latter to be larger than the former. Since `bcdc.single_download_limit` determines whether or not we need pagination for a request, and `bcdc.chunk_limit` determines the page size (the number of records requested in a page), we could end up attempting to retrieve more records in a page than the server allows.

This PR sets the default chunk limit to `bcdc.single_download_limit`, and checks more comprehensively that the two values are compatible. This addresses part of #330, but unfortunately does not fix all of it.