
[JOSS Review] More detailed tests #13

Closed
fawda123 opened this issue Mar 11, 2024 · 2 comments
Comments

@fawda123

You've got a lot of tests, which is good, but I'm not sure they're very comprehensive. I'm definitely guilty of this myself, but most of your tests seem to verify only things like the output type. That isn't really what tests should do: they ought to verify that the output is what you intended. For example, you're not writing a function to return a list; you're writing a function to model spatial relationships in stream networks. Your tests should verify that the model output is as expected, not just that it's an ssn_lm object. You would have more confidence that your package does what it claims if you tested for specific output. How would you know something has gone awry if you're only checking the output class? Testing for specific results often solves this problem, e.g.:

result <- coef(mod)
expect_equal(result, c(`(Intercept)` = 76.195, ELEV_DEM = -0.027, AREAWTMAP = -0.009), tolerance = 0.001)

A simpler example: you've got a workflow in R/coef.R that returns model coefficients depending on the type argument. The last step of the if/else chain throws an error if the type argument is invalid. Does the function actually throw this error when an incorrect type is supplied? An explicit test would look something like this:

expect_error(coef(ssn_mod, type = 'error'), 'Invalid type argument. The type argument must be "fixed", "ssn", "tailup",  "taildown",  "euclid",  "nugget", or "randcov"')
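One caveat worth noting with expect_error(): the expected-message argument is treated as a regular expression by default, so metacharacters in the message (such as the periods and quotes above) match loosely. testthat lets you pass fixed = TRUE (forwarded to expect_match()) to match the string literally, and matching a shorter, distinctive substring keeps the test robust to small wording changes. A minimal sketch, where coef_by_type() is a hypothetical stand-in for the dispatch in R/coef.R:

```r
library(testthat)

# Hypothetical stand-in for the coef() dispatch in R/coef.R:
# errors when `type` is not a supported option.
coef_by_type <- function(mod, type = "fixed") {
  if (type == "fixed") {
    return(mod$fixed)
  }
  stop("Invalid type argument. The type argument must be \"fixed\", ...")
}

mod <- list(fixed = c(`(Intercept)` = 76.195))

test_that("coef_by_type() rejects an invalid type", {
  # Match a literal substring of the message rather than a full regex.
  expect_error(coef_by_type(mod, type = "error"),
               "Invalid type argument.", fixed = TRUE)
})
```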

openjournals/joss-reviews#6389

@k-doering-NOAA

One idea for testing for specific output is snapshot testing.
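To sketch how that might look with testthat (version 3 and later provides expect_snapshot(); the ssn_mod object here is assumed to be a model fitted earlier in the test file):

```r
library(testthat)

# Hypothetical: snapshot the printed coefficients of a fitted model.
# On the first run, testthat records the printed output under
# tests/testthat/_snaps/; later runs fail if that output changes,
# flagging behavioral drift without hard-coding values in the test.
test_that("ssn_lm coefficients are stable", {
  skip_on_cran()  # snapshots are best run locally or in CI, not on CRAN
  expect_snapshot(print(coef(ssn_mod)))
})
```

A failed snapshot can be reviewed and accepted with testthat::snapshot_accept() when the change is intentional.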

@michaeldumelle

michaeldumelle commented Jun 26, 2024

Thank you for the feedback @fawda123 and @k-doering-NOAA! I will respond in this post which I will edit as I progress through adding testing functionality. Please reach out if you have any questions/comments/concerns!

We have updated our unit testing infrastructure to include many unit tests that verify output as intended. The following files in tests/testthat have been updated:

  • helper-data.R
  • test-dist_objects.R
  • test-extras.R
  • test-predict.R
  • test-ssn_glm.R
  • test-ssn_lm.R
  • test-ssn_simulate.R
  • test-ssn-object.R
  • test-Torgegram.R
  • test-utils.R
