JOSS review: testing #10

Closed
lheagy opened this issue Sep 27, 2018 · 5 comments

lheagy commented Sep 27, 2018

I don't see a clear indication of automated tests, or a description of manual steps, by which the functioning of the software can be verified. Are there manual tests that can be run to check that the software is performing as expected?

For example, if there are simple cases where you know what the output of a function should be, then you can test against that. The pytest framework is a good resource to check out for setting up testing: https://docs.pytest.org/en/latest/
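
A minimal pytest test could look something like the sketch below (the module name, function name, and expected value are just placeholders you would replace with your own):

```python
# test_grain_size.py -- minimal pytest sketch (placeholder names and values)
import pytest

# hypothetical import: replace with the actual module and function
from my_script import calc_mean_grain_size


def test_mean_grain_size_known_output():
    # a small input whose expected result is known in advance
    diameters = [10.0, 20.0, 30.0]
    result = calc_mean_grain_size(diameters)
    # pytest.approx tolerates floating-point rounding differences
    assert result == pytest.approx(20.0)
```

Running `pytest` from the repository root will automatically discover and run any functions named `test_*` inside files named `test_*.py`.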

In addition, you can use a continuous integration service that will run the tests every time you update the code (or whenever a contributor suggests an update). This is a really helpful thing to do if you would like to invite others to participate in the development and maintenance of the code base. For example, for TravisCI there are GitHub instructions here: https://help.github.com/enterprise/2.14/admin/guides/developer-workflow/continuous-integration-using-travis-ci/

marcoalopez (Owner) commented

Hi @lheagy, sorry for the delay in responding. There is already a manual way to test the script. The way I check the different functions is by using the example file data_set.txt and checking the outputs, as these are known. I use this approach because the script is still small and very "encapsulated" (i.e. I mostly use single-task functions), and because I introduce modifications incrementally, so each change affects only one or two functions, making it easy to check.

The file data_set.txt is always included with the code so others can run the same checks as well. Indeed, the documentation states "...you will be able to reproduce all the results shown in this tutorial using the dataset provided with the script, the attached data_set.txt file". I admit, however, that this sentence is a bit "buried" in the text and may go unnoticed, so I've decided to move it to the beginning, in the "Getting started" section.

Having said this, I had a look at the tools you mention above (pytest & TravisCI), and I realize that the way I do tests is somewhat precarious. I could prepare some automated tests, although this will take me some time.
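
For example, a first automated version of the data_set.txt check might look something like this (the loading step, header handling, and expected value below are only placeholders that I would need to replace with the real ones from the script and the tutorial):

```python
# test_data_set.py -- sketch of automating the manual data_set.txt check
# (placeholder loading step and expected value, not the script's actual API)
import numpy as np
import pytest


def test_known_result_from_data_set():
    # load the example file shipped with the script
    # (assumes a plain-text file with a one-line header)
    data = np.loadtxt("data_set.txt", skiprows=1)
    # compare a summary statistic against the value reported in the tutorial
    expected_mean = 35.0  # placeholder: take the real value from the docs
    assert np.mean(data) == pytest.approx(expected_mean, rel=1e-3)
```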

lheagy commented Sep 30, 2018

Hi @marcoalopez, thanks for your reply. As long as there is testing in place, that is fine for the submission. It is up to you whether you would like to have automated testing in place prior to publishing the paper, or whether you would prefer to draft an issue outlining the steps you plan to take and provide information in the documentation about how to run the tests (thanks for the update here - it is much clearer!).

How would you like to proceed: would you like to improve the testing before we move to publish, or outline a few more details in an issue and leave that for future work?

Thanks!

marcoalopez commented Sep 30, 2018

Hi @lheagy, I would rather leave this for future work if possible; right now I have a lot of work piled up and it would take me a while to implement the automated tests. In any case, I can provide more details on this in the documentation and open an issue, as that is not very time-consuming.

Thanks

lheagy commented Sep 30, 2018

Sounds like a plan! Feel free to close this issue when you are happy with the level of detail in the docs and have another issue started.

marcoalopez (Owner) commented

Hi @lheagy, I added a new section outlining the necessary steps to manually test the script. I also added some information on this at the end of the "Requirements & development" section.
