
Testing permissible range when in theory there is no upper limit? #66

Closed
helske opened this issue Nov 23, 2021 · 4 comments
Labels
enhancement New feature or request

Comments

@helske
Contributor

helske commented Nov 23, 2021

This is more of a question: what would be the correct way to document a positive-integer argument that can in theory take arbitrarily large values, but where larger values also increase computation time and memory requirements, so that in practice there is some upper limit which depends not only on the computational resources but also on other parameters? For example, in my case, the number of particles in the particle filter, where the upper limit depends on how big your model is, how much memory you have, and how long you are willing to wait.

@mpadge
Member

mpadge commented Nov 24, 2021

The quick answer: I was avoiding having to think about this. One possible route would be to have some phrase like "should generally be less than X", which autotest should pick up and not test beyond. You'd then be free to note additional considerations like those you describe, without having to actually implement a hard limit yourself.

Note, however, that the current implementation of autotest does not pick that up, so I'll modify it asap to ensure it does. Would that seem acceptable to you?
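For concreteness, such a phrase could simply go in the parameter's roxygen2 documentation. This is a hypothetical sketch (the parameter name and limit are illustrative, not from any actual package):

```r
#' @param particles Number of particles used in the particle filter.
#'   Can in theory be arbitrarily large, but should generally be less
#'   than 10000; larger values increase run time and memory use, with
#'   the practical limit depending on model size and available resources.
```

The idea is that autotest would parse the "less than 10000" phrase from the rendered `.Rd` documentation and restrict its tested range accordingly.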

@mpadge mpadge added the enhancement New feature or request label Nov 24, 2021
@helske
Contributor Author

helske commented Nov 24, 2021

Yes, something like that could work, at least for the time being. The problem I see with this solution is that X can still depend on lots of things, so it is actually difficult to define even a general X. For example, in particle MCMC, the total number of stored samples depends on multiple arguments and on the model size, which in turn affects the realistic upper bound for the number of iterations. But yeah, I can see myself writing something along the lines of "This parameter affects the run time and should generally be less than 1e6 (depending on the model and other sampling parameters), at least for initial testing".

I'll think about this as well. One solution could be some kind of dontautotest tag in the docs, which would skip the parameter, but that doesn't sound optimal either, because then other suitable tests would be skipped as well.

mpadge added a commit that referenced this issue Nov 24, 2021
@mpadge
Member

mpadge commented Nov 24, 2021

@helske The above commit sets up the grepping for patterns like that, via these two lines:

autotest/R/input-int.R, lines 150 to 151 at commit 2b2c824:

ptn_lower <- "(more|greater|larger)\\sthan|lower\\slimit\\sof|above"
ptn_upper <- "(less|lower|smaller)\\sthan|upper\\slimit\\sof|below"

That should cover most cases (although it does require limits to be given as integers rather than spelled out as words). The next few commits will actually implement it.
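As a quick sketch of how those two patterns behave against some hypothetical documentation strings (the example strings are mine, not from the package):

```r
# Patterns as defined in autotest/R/input-int.R
ptn_lower <- "(more|greater|larger)\\sthan|lower\\slimit\\sof|above"
ptn_upper <- "(less|lower|smaller)\\sthan|upper\\slimit\\sof|below"

# Phrases with numeric limits are caught:
grepl(ptn_upper, "should generally be less than 1000")  # TRUE
grepl(ptn_lower, "must be greater than 0")              # TRUE

# ... but a limit spelled out as a word is not:
grepl(ptn_upper, "at most one thousand")                # FALSE
```

Note that matching the phrase is only the first step; extracting the numeric bound itself and restricting the tested range happens in the subsequent implementation.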

mpadge added a commit that referenced this issue Nov 24, 2021
That restricts the actual ranges tested to the specified values, rather than the previous default of +/- .Machine$integer.max
@mpadge mpadge closed this as completed in dbdf264 Nov 24, 2021
@mpadge
Member

mpadge commented Nov 24, 2021

@helske That should now catch your case, but I'll re-open in order to:

  1. Ensure that this is appropriately documented somewhere; and
  2. Improve the main single_int_range() function, which still contains mostly redundant code from its prior form, which tested first and then grepped; the new form greps first and then tests, so lots of code can be tidied and removed.
