You've got a lot of tests, which is good, but I'm not sure they're very comprehensive. I'm definitely guilty of this myself, but it seems most of your tests just verify things like the output type. That isn't really what tests should do - they ought to verify that the output is what you intended. For example, you're not writing a function to return a list; rather, you're writing a function to model spatial relationships in stream networks. Your tests should verify that the model output is as expected, not just that it is an ssn_lm object. You would have more confidence that your package is doing what it claims to do if you tested for specific output. How would you know something had gone awry if you were only verifying the output class? Testing for specific results can often help solve this problem, e.g.:
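For instance, a value-based test might look something like the sketch below. This is a minimal sketch, not SSN2's actual test suite: `ssn_mod` is assumed to be fitted elsewhere (say, in a testthat setup or helper file), and the coefficient names and expected values are hypothetical placeholders.

```r
library(testthat)

test_that("ssn_lm returns the expected model output, not just the right class", {
  # ssn_mod is assumed to be fitted in a setup/helper file with ssn_lm()
  # on the package's example data; the names and values below are placeholders
  expect_s3_class(ssn_mod, "ssn_lm")                                      # weak on its own
  expect_named(coef(ssn_mod), c("(Intercept)", "ELEV_DEM"))               # hypothetical names
  expect_equal(unname(coef(ssn_mod)), c(60.1, -0.004), tolerance = 1e-3)  # hypothetical values
})
```

A class check alone would still pass if an estimate silently changed; pinning specific values (with a tolerance) is what catches regressions in the numerical results.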
A simpler example: you've got a workflow in R/coef.R that returns model coefficients depending on the type argument. The last step of the if/else chain returns an error if the type argument is invalid. Does this function really return that error when an incorrect entry for type is used? An explicit test would look something like this:
```r
expect_error(
  coef(ssn_mod, type = "error"),
  'Invalid type argument. The type argument must be "fixed", "ssn", "tailup", "taildown", "euclid", "nugget", or "randcov"'
)
```
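For context, the if/else dispatch being tested might look something like the sketch below. This is illustrative only; it is not SSN2's actual coef.ssn_lm method, and the list structure accessed here is hypothetical.

```r
# Illustrative sketch of a coef() method that dispatches on `type`;
# the object structure accessed here is hypothetical.
coef.ssn_lm <- function(object, type = "fixed", ...) {
  if (type == "fixed") {
    object$coefficients$fixed
  } else if (type %in% c("ssn", "tailup", "taildown", "euclid", "nugget")) {
    object$coefficients$params_object[[type]]
  } else if (type == "randcov") {
    object$coefficients$params_object$randcov
  } else {
    # the last step of the chain: error on an invalid type argument
    stop('Invalid type argument. The type argument must be "fixed", "ssn", "tailup", "taildown", "euclid", "nugget", or "randcov"', call. = FALSE)
  }
}
```

With that final stop() in place, the expect_error() call above exercises the invalid-type branch directly and confirms the message a user would actually see.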
Thank you for the feedback @fawda123 and @k-doering-NOAA! I will respond in this post, which I will edit as I progress through adding testing functionality. Please reach out if you have any questions, comments, or concerns!
We have updated our unit testing infrastructure to include many unit tests that verify the output is as intended. The following files in tests/testthat have been updated: