
JOSS Review: Improvements to testing architecture #14

Closed
emilydolson opened this issue Jan 12, 2022 · 9 comments
Comments

@emilydolson

According to the JOSS guidelines, ideally your code should have automated tests that run on new commits (e.g. via GitHub Actions or another continuous integration platform). At a minimum, it should have documentation for how to assess that the code is working as intended.

Currently it looks like Cacatoo has a unittests directory with an HTML file and a JavaScript file. That's a good start, but it's not quite sufficient. At a minimum, the expected behavior for those files should be documented somewhere so that a user can confirm that what they observe matches it. A better approach would be to include some assertions in your JavaScript so that it will throw an error if it doesn't behave as it's supposed to. A JavaScript testing framework might make this easier (I happen to like mocha, but there are many options).

(This issue is part of my review for JOSS: openjournals/joss-reviews#3948)

@bramvandijk88
Owner

Dear Emily Dolson,

I have thought about other testing possibilities (like Mocha), but decided it wasn't necessary (or even useful). I included a quasi "unit test" in the repository to make sure all the main functions work as intended, but other than that most of the programming comes from the users themselves (in other words, Cacatoo is not an independent piece of software, it's a framework!). In my experience with bioinformatic/computational tools, tests are most convenient to make sure everything is installed correctly (which isn't relevant for a JS bundle), or to test the expected input-output relations of functions (which my "unit test" actually does). So I think I've got the important stuff covered. If you disagree, let me know on what level you would implement more testing, and I can work on that.

Best,

Bram

@emilydolson
Author

I totally agree that tests to make sure that things are installed correctly don't make sense for a JavaScript framework like Cacatoo, and that the key tests to have are about the input-output relations of functions. My concern is that the current unit test doesn't seem to enforce very many input-output relations. Maybe I'm misunderstanding how it works, but it seems like lines 99 and 106 are the only places where it actually verifies that a particular behavior occurred. Obviously we can see what the other reviewer and the editor think, but my perspective is that it would be ideal to have many more points where you explicitly check for expected behavior. Currently it doesn't seem like the unit test will catch most bugs that don't directly produce errors.

Certainly when I personally think about adopting Cacatoo as a user (which I could imagine doing for some projects - it seems like a very nice tool!), the lack of thorough tests would give me pause. I currently contribute to the development of a different framework for web-based scientific software, and after seeing how many bugs our unit tests have caught, I have a hard time trusting any code (including my own!) without a solid testing suite.

My recommendation would be:

Minimally

Have a unit test for each Cacatoo-provided function that the user is expected to call and which does not have an immediate visual effect. This would include all of the neighborhood retrieval functions, the grid events, the ODE solver, this.synchronous, this.asynchronous, etc. A side benefit is that this would create examples of how to call all of those functions, which would also be helpful.

Ideally

  • Also have tests for functions that have visual effects (e.g. adding a graph). I think the cost-benefit ratio on these is lower (since they are harder to test and it will usually be obvious if they break, rather than resulting in insidious bugs that no one is aware of), but it's still nice to know that your testing framework will tell you if you break them.
  • Set up continuous integration so those tests run whenever you make a commit
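The continuous-integration suggestion can be sketched as a minimal GitHub Actions workflow; the file path, Node version, and test command below are assumptions to adjust to the repository's layout:

```yaml
# Hypothetical .github/workflows/tests.yml -- runs the test suite on
# every push and pull request. Paths and commands are illustrative.
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '16'
      - run: npm install
      - run: npx mocha unittests/
```

With a workflow like this, GitHub marks each commit and pull request with a pass/fail badge, so regressions are visible before merging.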

@bramvandijk88
Owner

bramvandijk88 commented Jan 14, 2022

Thanks for the clear suggestions, that helps.

I’ll set up a more extensive test and also see if I can integrate it so it is run upon commit. (I’m a bit more skeptical about that last one because, in my experience, such a setup is more likely to catch errors in your tests than errors in your code, which I find more than a bit frustrating, but I’ll give it a good old-fashioned try.)

@bramvandijk88
Owner

Alright, I've just pushed a new commit in which I use Mocha and Chai to test all non-visual functions of Cacatoo. This luckily didn't reveal any bugs (which is a good thing... I've spent months testing and debugging this library manually).

I've also updated the compilation script so these tests are performed every time I distribute a new version of the bundle. If I (or a user through a pull request) ever break something, I should find out automatically. I do not want to burden users with mandatory testing, though they are free to run the Mocha tests themselves if they want.
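One common way to wire tests into a distribution step is via npm scripts; the snippet below is purely illustrative (the actual make_bundle.sh wiring is not shown in this thread, and the script names and test path are assumptions):

```json
{
  "scripts": {
    "test": "mocha unittests/",
    "bundle": "npm test && ./make_bundle.sh"
  }
}
```

With a setup like this, `npm run bundle` refuses to build a new bundle if any test fails, and contributors can run `npm test` on its own.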

Finally, although I agree the cost/benefit ratio isn't very high for doing the same with the visual components, the bundle_test (which was already present in previous commits) will be able to catch most of those too.

@emilydolson
Author

This looks good, and I'm glad to hear there were no bugs! Thanks for adding the tests!

I agree that the bundle_test is a good way to know that the visual components are working. If you included a screenshot somewhere of what it is supposed to look like, that would make it even more useful in that capacity to someone who is new to the project.

@Bisaloo
Contributor

Bisaloo commented Jan 21, 2022

@bramvandijk88, could you still please mention in the CONTRIBUTING.md guide how an external contributor could run the tests? If I followed correctly, they would have to re-compile the library with make_bundle.sh, correct?

@bramvandijk88
Owner

> I agree that the bundle_test is a good way to know that the visual components are working. If you included a screenshot somewhere of what it is supposed to look like, that would make it even more useful in that capacity to someone who is new to the project.

That's a great idea actually, I've added a screenshot in the directory!

@bramvandijk88
Owner

> @bramvandijk88, could you still please mention in the CONTRIBUTING.md guide how an external contributor could run the tests? If I followed correctly, they would have to re-compile the library with make_bundle.sh, correct?

Actually, no. As mentioned in the top comments of make_bundle, it is only intended to be used by the end-stage developer (so me, currently). However, as requested, I've added the command used to run the mocha tests to CONTRIBUTING.md in the section "pull requests", so that users can report the output of the test when making a pull request.

@Bisaloo
Contributor

Bisaloo commented Jan 21, 2022

Perfect, thanks!
