
[REVIEW]: BayesianNetwork: Interactive Bayesian Network Modeling and Analysis #425

Closed · 18 tasks done
whedon opened this issue Oct 6, 2017 · 29 comments
Labels: accepted, published (Papers published in JOSS), recommend-accept (Papers recommended for acceptance in JOSS)

Comments

@whedon

whedon commented Oct 6, 2017

Submitting author: @paulgovan (Paul Govan)
Repository: https://github.com/paulgovan/BayesianNetwork
Version: v0.1.3
Editor: @katyhuff
Reviewer: @rgiordan
Archive: 10.5281/zenodo.596010

Status


Status badge code:

HTML: <a href="http://joss.theoj.org/papers/b7f635bc64f1585ea24a0e9ebea2ff21"><img src="http://joss.theoj.org/papers/b7f635bc64f1585ea24a0e9ebea2ff21/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/b7f635bc64f1585ea24a0e9ebea2ff21/status.svg)](http://joss.theoj.org/papers/b7f635bc64f1585ea24a0e9ebea2ff21)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer questions

@rgiordan, please carry out your review in this issue by updating the checklist below (please make sure you're logged in to GitHub). The reviewer guidelines are available here: http://joss.theoj.org/about#reviewer_guidelines. If you have any questions or concerns, please let @katyhuff know.

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: Does the release version given match the GitHub release (v0.1.3)?
  • Authorship: Has the submitting author (@paulgovan) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) contribute to the software, 2) report issues or problems with the software, and 3) seek support?

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
@whedon
Author

whedon commented Oct 6, 2017

Hello human, I'm @whedon. I'm here to help you with some common editorial tasks for JOSS. @rgiordan it looks like you're currently assigned as the reviewer for this paper 🎉.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all JOSS reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' for https://github.com/openjournals/joss-reviews.

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands

@rgiordan

The package is missing automated tests.

paulgovan/BayesianNetwork#6

@rgiordan

@katyhuff

Sorry, this is the first time I've done this. Where do I find the software paper? I don't see it in the repo, nor in any of the issues or emails.

@katyhuff
Member

@rgiordan Thanks for the question. This varies by repository. It looks like the paper for this one is at: https://github.com/paulgovan/BayesianNetwork/blob/master/inst/paper

@rgiordan

paulgovan/BayesianNetwork#7

The author and affiliation are missing from paper.md, and I feel that bnlearn could be given clearer credit for the underlying learning algorithms.

@rgiordan

paulgovan/BayesianNetwork#8

The paper is missing references.

@rgiordan

I played around with the package, but there are many combinations of options, so I didn't evaluate them exhaustively. As such, I'm not 100% sure what the standards are for "Functionality: Have the functional claims of the software been confirmed?" I'll check it, but maybe @katyhuff can clarify what the expectations are?

I would also feel more comfortable if there were unit tests.

@rgiordan

@katyhuff @paulgovan

This is very nice work! It's a beautiful and, I imagine, usable interface to a powerful set of learning algorithms. I think it is well worth publishing. However, the reviewing standards are quite clear -- it needs automated tests, and if they are in there now, I can't see them. I'd recommend acceptance as long as good tests can be added.

Is there anything else you need me to fill out or do for the time being?

@katyhuff
Member

Thanks so much, @rgiordan, for your review. Great work!

The next step is for @paulgovan to look over the comments and issues you've created to make appropriate changes.

@rgiordan Regarding your question on the confirmation of the functional claims, we expect that the few claims of functionality in paper.md be checked by the reviewer to their satisfaction. This is where domain expertise comes in. For some packages and some reviewers, this means running an example, checking that a plot is as expected, or reviewing the test coverage. Indeed, as you point out, @rgiordan, this is usually made much easier by unit tests and examples, which is why we require unit tests.

@paulgovan

@rgiordan, @katyhuff

Please reference #6, #7, and #8. Working on unit tests, but this could take several days.

Thank you for the thorough review!

@paulgovan

@rgiordan, @katyhuff

Please see my changes referenced in the issues above. Let me know if you have any questions.

@paulgovan

@rgiordan, @katyhuff

Just checking to see if you have had a chance to review my changes. Let me know if you have any additional comments.

@katyhuff
Member

katyhuff commented Nov 20, 2017

Ah!! Sorry, I was so confused -- the issues are still open! If they have been fixed, it's appropriate to close them (only you can; we don't own this repository). I'm looking now to ensure that I can run the tests.

(cc @paulgovan )

@katyhuff
Member

katyhuff commented Nov 20, 2017

  • The issue with the paper references hasn't been fixed. Much like an ordinary paper, you must actually cite things in the text in order for them to appear in the references.
  • I strongly recommend a caption for the figure in your paper.
  • It's not clear that your unit tests are robustly testing the method itself.
  • We require either automated tests (e.g. with Travis CI) or instructions on how to run the tests (in the README, for future users, not just for us). Your Travis instance currently doesn't run your new Shiny tests. I recommend adding those commands to your .travis.yml file. Once that's done, Travis will automatically run your tests. If you'd rather not, then please at least include instructions in the README on how a user can run your tests.
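For reference, the kind of .travis.yml being suggested here could be sketched as follows. This is only an illustration using Travis CI's standard R support keys; whether it matches the package's actual test layout is an assumption, not taken from the repository:

```yaml
# Illustrative .travis.yml for an R package with testthat tests.
# `language: r` makes Travis run R CMD check, which executes the
# scripts under tests/ and therefore runs the testthat suite.
language: r
cache: packages
r_packages:
  - testthat
```

With this in place, every push triggers `R CMD check`, so no separate test command needs to be maintained in the config.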

(cc @paulgovan )

@paulgovan

paulgovan commented Dec 3, 2017

@rgiordan, @katyhuff

I believe that issues #6, #7, and #8 have been fixed and they are now closed.

@paulgovan

@katyhuff, just circling back to this. Any update?

@katyhuff
Member

katyhuff commented Jan 4, 2018

@whedon generate pdf

@whedon
Author

whedon commented Jan 4, 2018

Attempting PDF compilation. Reticulating splines etc...

@whedon
Author

whedon commented Jan 4, 2018

https://github.com/openjournals/joss-papers/blob/joss.00425/joss.00425/10.21105.joss.00425.pdf

@katyhuff
Member

katyhuff commented Jan 4, 2018

@paulgovan This looks good to me! I believe it can be accepted! At this point could you make an updated archive of the reviewed version of the software in Zenodo/figshare/other service and update this thread with the DOI of the archive? I can then move forward with accepting the submission.

@paulgovan

paulgovan commented Jan 4, 2018

Thanks @katyhuff! Here is the new DOI:
[DOI badge: 10.5281/zenodo.596010]

@katyhuff
Member

katyhuff commented Jan 4, 2018

@whedon set 10.5281/zenodo.596010 as archive

@whedon
Author

whedon commented Jan 4, 2018

OK. 10.5281/zenodo.596010 is the archive.

@katyhuff
Member

katyhuff commented Jan 4, 2018

@whedon generate pdf

@whedon
Author

whedon commented Jan 4, 2018

Attempting PDF compilation. Reticulating splines etc...

@whedon
Author

whedon commented Jan 4, 2018

https://github.com/openjournals/joss-papers/blob/joss.00425/joss.00425/10.21105.joss.00425.pdf

@katyhuff katyhuff added accepted and removed review labels Jan 4, 2018
@katyhuff
Member

katyhuff commented Jan 4, 2018

@arfon we're ready to accept this!

@arfon
Member

arfon commented Jan 4, 2018

@rgiordan - many thanks for your review here and to @katyhuff for editing this submission ✨

@paulgovan - your paper is now accepted into JOSS and your DOI is https://doi.org/10.21105/joss.00425 ⚡️ 🚀 💥

@arfon arfon closed this as completed Jan 4, 2018
@whedon
Author

whedon commented Jan 4, 2018

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippet:

[![DOI](http://joss.theoj.org/papers/10.21105/joss.00425/status.svg)](https://doi.org/10.21105/joss.00425)

This is how it will look in your documentation:

[rendered DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider volunteering to review for us sometime in the future. You can add your name to the reviewer list here: http://joss.theoj.org/reviewer-signup.html
