
[REVIEW]: GPJax #4455

Closed
editorialbot opened this issue Jun 4, 2022 · 63 comments
Assignees
Labels
accepted, Makefile, published (Papers published in JOSS), Python, recommend-accept (Papers recommended for acceptance in JOSS), review, TeX

Comments

@editorialbot
Collaborator

editorialbot commented Jun 4, 2022

Submitting author: @thomaspinder (Thomas Pinder)
Repository: https://github.com/thomaspinder/GPJax
Branch with paper.md (empty if default branch):
Version: 0.4.9
Editor: @dfm
Reviewers: @gpleiss, @theorashid
Archive: 10.5281/zenodo.6882220

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/5f3c1d4f6470b0ef3c843d7b8fe3cf27"><img src="https://joss.theoj.org/papers/5f3c1d4f6470b0ef3c843d7b8fe3cf27/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/5f3c1d4f6470b0ef3c843d7b8fe3cf27/status.svg)](https://joss.theoj.org/papers/5f3c1d4f6470b0ef3c843d7b8fe3cf27)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@gpleiss & @theorashid, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @dfm know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @theorashid

@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.88  T=0.06 s (960.7 files/s, 102313.3 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          39           1026           1377           2935
Markdown                         7            133              0            266
TeX                              3             32              0            214
reStructuredText                 4            161            186            104
YAML                             4              6              4             61
make                             2             11             14             34
DOS Batch                        1              8              1             26
INI                              1              0              0              2
XML                              1              0              0              2
-------------------------------------------------------------------------------
SUM:                            62           1377           1582           3644
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot
Collaborator Author

Wordcount for paper.md is 667

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- None

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@dfm

dfm commented Jun 4, 2022

@gpleiss, @theorashid — This is the review thread for the paper. All of our communications will happen here from now on. Thanks again for agreeing to participate!

Please read the "Reviewer instructions & questions" in the first comment above, and generate your checklists by commenting @editorialbot generate my checklist on this issue ASAP. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#4455 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4–6 weeks, but please try to make a start ahead of this: JOSS reviews are by their nature iterative, and any early feedback you can provide to the author will be very helpful in meeting this schedule.

@theorashid

theorashid commented Jun 4, 2022

Review checklist for @theorashid

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/thomaspinder/GPJax?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@thomaspinder) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
    • Looks like jejjohnson (4,286 ++ 1,779 --) committed far more than Daniel-Dodd. The former is not on the paper, the latter is. Does their contribution warrant authorship?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
    • not tested on GPU as I don't have access to one, but since it is written in pure JAX, I'm confident all is well.
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
    • dealt with using pypi
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
    • done using pytest and codecov. Impressively high coverage too (~99%)
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
    • there are clear instructions in CONTRIBUTING.md, but if you don't know GitHub, there's no clear link in the docs or in the repo README pointing to it.

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
    • should probably mention at least bayesnewton and tinygp as jax-based alternatives. Are there also no R packages for GPs? I find that difficult to believe. Perhaps people only do GPs within PPLs there
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
    • structure is fine, minor typos being dealt with in a PR
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
    • one reference was faulty and being dealt with in a PR

Nice to haves:

  • API which you can click through on the menu bar rather than one big dump at https://gpjax.readthedocs.io/en/latest/api.html
  • numpyro integration. numpyro is much easier to use than tfp. I understand that perhaps distrax works better with tfp, so it would therefore be nice to see distrax in numpyro and gpjax with numpyro – it's all jax
  • the information in CONTRIBUTING.md should be linked in the repo README and the documentation. For those who aren't into software, it's especially important to visibly link to where users can get support (looks like the Discussions section to me). I hope this will increase usage of the package beyond the ML community and into applied fields too.
  • it would be good to automate linting and formatting, ideally using a pre-commit and a workflow
  • some more likelihoods: binomial, Poisson, Student-t etc
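For the linting/formatting point above, a minimal `.pre-commit-config.yaml` along these lines could work. This is only a sketch: the hook choices and pinned revisions are illustrative, not something taken from the GPJax repo.

```yaml
# .pre-commit-config.yaml -- run `pre-commit install` once;
# the hooks then run automatically on every commit.
repos:
  - repo: https://github.com/psf/black
    rev: 22.3.0          # illustrative pin; bump as needed
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/isort
    rev: 5.10.1
    hooks:
      - id: isort
  - repo: https://github.com/PyCQA/flake8
    rev: 4.0.1
    hooks:
      - id: flake8
```

The same hooks can be enforced in CI with `pre-commit run --all-files`, which covers the "workflow" half of the suggestion.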

@theorashid

Hey @dfm, I've done the first round of reviewing. It's a good package. I've commented inline whilst working through the checklist. Could I have a bit of advice on the next steps?

  • There are a few points which I haven't ticked off. Let me know what you think about these and whether any of them should be raised as issues and dealt with.
  • I've made a list of things that would be "nice to have" in the package, but the package is in good condition and I don't think they warrant non-acceptance. Do you think any of them are worth delaying acceptance of the paper for? From your experience, are they worth filing issues on the repo, or are these all uncommon in Python packages / GP packages?

@dfm

dfm commented Jun 14, 2022

Thanks @theorashid! I think it would be better to open an issue with those comments so that we keep a record, but this is also fine; just don't delete them when they're finished (you could add a strike-through, maybe?).

@thomaspinder: heads up that @theorashid's comments are currently inline above: #4455 (comment)

@theorashid

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@thomaspinder

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@thomaspinder

Updating the PDF, as we have now made the capitalisation of paper titles consistent in our references, e.g. "Gaussian" -> "{G}aussian".

@theorashid

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@dfm

dfm commented Jun 21, 2022

@gpleiss — Just checking in here to keep this on your radar. Let me know if you have any questions or issues. Thanks!

@gpleiss

gpleiss commented Jun 27, 2022

Hi @dfm I was out on vacation for the last two weeks, but I should have my review done tomorrow or Wednesday!

@dfm

dfm commented Jun 27, 2022

@gpleiss perfect! Thanks for the update!

@gpleiss

gpleiss commented Jun 30, 2022

Review checklist for @gpleiss

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: Does the release version given match the GitHub release (v0.4.6)?
  • Authorship: Has the submitting author (@thomaspinder) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
    • Issue: GPU setup is a bit challenging. To get GPU support, I have to pip install --upgrade "jax[cuda]" before installing GPJax. However, GPJax's requirements lock the JAX version to a specific release (0.3.2). I got installation to work, but it's a bit clunky. I would recommend that the authors:
      • Create a CUDA-specific install extra (e.g. pip install "gpjax[cuda]") that would install jax[cuda]
      • Make the requirements less strict (unless there is a good reason to keep them pinned?)
    • It's possible that there's an easier way to get CUDA support (I haven't used JAX much!). @thomaspinder I'm curious to hear your thoughts.
  • Functionality: Have the functional claims of the software been confirmed?
    • Nice to have: Since the software is designed to pair well with the GPML book, it would be beneficial to include more of the kernels discussed in that book. Right now the set of kernels is limited (although the included kernels are arguably the most useful). Off the top of my head, it would be useful to include the RQ kernel, the periodic kernel, the linear kernel, and maybe something more advanced like the Gibbs kernel.
      • The authors include good documentation for writing custom kernels, and it looks like it would be easy for a user to implement any of these kernels themselves. However, I think the library could benefit from including some of these more common kernels to reduce user overhead (especially for new users).
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)
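The kernels suggested above are cheap to add. As a purely illustrative sketch, here is the textbook periodic (exponentiated sine-squared) kernel in plain NumPy; this is only the mathematical form, not GPJax's kernel API, and the function name and parameter names are my own.

```python
import numpy as np

def periodic_kernel(x1, x2, lengthscale=1.0, period=1.0, variance=1.0):
    """Exponentiated sine-squared (periodic) kernel:
    k(x, x') = sigma^2 * exp(-2 * sin^2(pi * |x - x'| / p) / l^2)
    """
    x1 = np.asarray(x1, dtype=float).reshape(-1, 1)
    x2 = np.asarray(x2, dtype=float).reshape(-1, 1)
    dists = np.abs(x1 - x2.T)  # pairwise distances |x - x'|
    return variance * np.exp(
        -2.0 * np.sin(np.pi * dists / period) ** 2 / lengthscale**2
    )

# Points exactly one period apart are perfectly correlated.
K = periodic_kernel([0.0, 0.5, 1.0], [0.0, 0.5, 1.0], period=1.0)
```

In GPJax the same maths would live inside whatever kernel base class the library exposes, with `lengthscale`, `period`, and `variance` registered as trainable parameters.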

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
    • Nice to have: It would be good to have at least one code example with higher dimensional data (e.g. a UCI dataset). I recognize that 1D data is much nicer to visualize, but having a "real dataset" would make for a more comprehensive example, especially for very new users.
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
    • Very thorough!
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
    • Issue: You should set up a GitHub action to automatically run your tests for PRs and commits to master.
    • Suggestion: It could be useful to have some larger end-to-end tests (beyond the unit tests) to make sure that all the pieces stay glued together. The pyro folks have a very clever way of doing this: they run all of their example notebooks as a smoke test.
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
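As a concrete starting point for the CI suggestion above, a minimal GitHub Actions workflow might look like the following. The file path, Python versions, and the assumption of a `dev` extra containing pytest are all placeholders to adapt, not details taken from the GPJax repo.

```yaml
# .github/workflows/tests.yml -- run the test suite on pushes and PRs
name: tests

on:
  push:
    branches: [master]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10"]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e ".[dev]"   # assumes a dev extra providing pytest
      - run: pytest
```

Running the example notebooks as an additional smoke-test job (as the pyro folks do) could be layered on top of this later.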

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
    • Nit: the "statement of need" in the paper doesn't actually address the need; that comes in the "wider software ecosystem" section. I would suggest that the authors reorganize these two sections?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
    • I didn't read too closely for typos
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

I'm really excited about this library, excellent work! Beyond my (minor) comments above, this package will be a great resource for the GP community, and it has been built to high open source standards.


Update: @thomaspinder addressed all of my questions and concerns. I'm very excited about this library, and wholeheartedly recommend it for publication in JOSS.

@gpleiss

gpleiss commented Jun 30, 2022

@dfm @thomaspinder I'll post the actionable items from my review in @theorashid's review issue.

@thomaspinder

@editorialbot generate pdf

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.7717/peerj-cs.55 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#3393, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot added the "recommend-accept" (Papers recommended for acceptance in JOSS) label Jul 23, 2022
@dfm

dfm commented Jul 23, 2022

@thomaspinder — I've now handed this off to the managing editors who may have some final edits before publication. Thanks for your submission and your responses to the reviewers!!

@gpleiss, @theorashid — Thanks again for your reviews!!

@thomaspinder

Great! Thanks for coordinating such a smooth and enjoyable review process @dfm!

@kyleniemeyer

Hi @thomaspinder, I'm doing some final checks before publishing. It looks like a few references are missing DOIs (not sure why editorialbot didn't catch them), specifically for articles from the Journal of Statistical Software. Could you add those? I would also point out that arXiv articles now have DOIs that can/probably should be included, though we are not currently able to check for those automatically.

@thomaspinder

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.18637/jss.v076.i01 is OK
- 10.7717/peerj-cs.55 is OK
- 10.18637/jss.v102.i01 is OK
- 10.2113/gsecongeo.58.8.1246 is OK
- 10.1007/3-540-07165-2_55 is OK

MISSING DOIs

- None

INVALID DOIs

- https://doi.org/10.7551/mitpress/3206.001.0001 is INVALID because of 'https://doi.org/' prefix

@thomaspinder

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.7551/mitpress/3206.001.0001 is OK
- 10.18637/jss.v076.i01 is OK
- 10.7717/peerj-cs.55 is OK
- 10.18637/jss.v102.i01 is OK
- 10.2113/gsecongeo.58.8.1246 is OK
- 10.1007/3-540-07165-2_55 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@thomaspinder

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@thomaspinder

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.7551/mitpress/3206.001.0001 is OK
- 10.48550/arxiv.1912.11554 is OK
- 10.18637/jss.v076.i01 is OK
- 10.7717/peerj-cs.55 is OK
- 10.48550/arxiv.2111.01721 is OK
- 10.18637/jss.v102.i01 is OK
- 10.48550/arxiv.2106.01982 is OK
- 10.2113/gsecongeo.58.8.1246 is OK
- 10.1007/3-540-07165-2_55 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@thomaspinder

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@thomaspinder

Thanks for catching that @kyleniemeyer. All the DOIs should be accounted for now, including arXiv references.

@kyleniemeyer

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator Author

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@editorialbot
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.04455 joss-papers#3398
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.04455
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot added the "accepted" and "published" (Papers published in JOSS) labels Jul 26, 2022
@kyleniemeyer

Congratulations @thomaspinder on your article's publication in JOSS!

Many thanks to @gpleiss and @theorashid for reviewing this, and @dfm for editing.

@editorialbot
Collaborator Author

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.04455/status.svg)](https://doi.org/10.21105/joss.04455)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.04455">
  <img src="https://joss.theoj.org/papers/10.21105/joss.04455/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.04455/status.svg
   :target: https://doi.org/10.21105/joss.04455

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
