
[REVIEW]: PyLogGrid: A Python package for fluid dynamics on logarithmic lattices #6439

Open
editorialbot opened this issue Mar 4, 2024 · 79 comments
Labels: Cython, Jupyter Notebook, Python, recommend-accept (Papers recommended for acceptance in JOSS.), review, Track: 3 (PE) Physics and Engineering

Comments

@editorialbot
Collaborator

editorialbot commented Mar 4, 2024

Submitting author: @hippalectryon-0 (Amaury Barral)
Repository: https://github.com/hippalectryon-0/pyloggrid
Branch with paper.md (empty if default branch): joss
Version: 2.3.2
Editor: @philipcardiff
Reviewers: @slaizet, @marlonsmathias, @weiwangstfc
Archive: 10.5281/zenodo.14356262


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/002028861bf61a4a81b3edde40db3d63"><img src="https://joss.theoj.org/papers/002028861bf61a4a81b3edde40db3d63/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/002028861bf61a4a81b3edde40db3d63/status.svg)](https://joss.theoj.org/papers/002028861bf61a4a81b3edde40db3d63)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@slaizet & @marlonsmathias, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @philipcardiff know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @marlonsmathias

📝 Checklist for @slaizet

@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.90  T=0.31 s (258.0 files/s, 29135.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          30           1136           1093           3533
Markdown                         9            159              0            396
reStructuredText                16            347            207            354
Jupyter Notebook                 1              0            344            320
YAML                             5             24             30            242
Cython                           1             22              9            218
C                                1             26              0            169
TeX                              1             13              0             96
PowerShell                       2             17             13             81
Bourne Shell                     5             17             15             70
TOML                             1              9              1             69
SVG                              5              1              1             31
make                             2              9              9             29
DOS Batch                        2              8              1             28
-------------------------------------------------------------------------------
SUM:                            81           1788           1723           5636
-------------------------------------------------------------------------------

Commit count by author:

    29	Hippalectryon

@editorialbot
Collaborator Author

Paper file info:

📄 Wordcount for paper.md is 499

✅ The paper includes a Statement of need section

@editorialbot
Collaborator Author

License info:

🟡 License found: GNU General Public License v3.0 (Check here for OSI approval)

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1146/annurev.fluid.35.101101.161122 is OK
- 10.1103/PhysRevLett.77.5369 is OK
- 10.48550/arXiv.2006.00047 is OK
- 10.1017/jfm.2023.204 is OK
- 10.3390/atmos14111690 is OK

MISSING DOIs

- 10.1103/physreve.107.065106 may be a valid DOI for title: Reversible Navier-Stokes equation on logarithmic lattices
- 10.1016/j.jcp.2014.09.038 may be a valid DOI for title: Exponential time-differencing with embedded Runge–Kutta adaptive step control

INVALID DOIs

- https://doi.org/10.1016/0167-2789(85)90002-8 is INVALID because of 'https://doi.org/' prefix
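
For reference, both kinds of fixes live in the paper's .bib file: the suggested DOIs, once verified, go in as bare doi fields, and the flagged entry should keep only the DOI itself, without the https://doi.org/ prefix. A minimal sketch, with hypothetical entry keys and all other fields omitted:

% hypothetical entry key; add the verified DOI in bare form
@article{reversible_ns,
  title = {Reversible {N}avier-{S}tokes equation on logarithmic lattices},
  doi   = {10.1103/PhysRevE.107.065106},
}

% hypothetical entry key; keep only the bare DOI, not https://doi.org/10.1016/0167-2789(85)90002-8
@article{flagged_entry,
  doi = {10.1016/0167-2789(85)90002-8},
}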

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@marlonsmathias

marlonsmathias commented Mar 4, 2024

Review checklist for @marlonsmathias

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/hippalectryon-0/pyloggrid?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@hippalectryon-0) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@philipcardiff

@editorialbot add @weiwangstfc as reviewer

@editorialbot
Collaborator Author

@weiwangstfc added to the reviewers list!

@marlonsmathias

@philipcardiff I have found some issues in the installation process, but I'm unsure how to proceed. Do I report them in this issue or do I open a new issue?

@philipcardiff

@philipcardiff I have found some issues in the installation process, but I'm unsure how to proceed. Do I report them in this issue or do I open a new issue?

Hi @marlonsmathias, please create a new issue in the main repository at https://github.com/hippalectryon-0/pyloggrid. Thanks.

@slaizet

slaizet commented Apr 8, 2024

Review checklist for @slaizet

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/hippalectryon-0/pyloggrid?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@hippalectryon-0) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@slaizet

slaizet commented Apr 8, 2024

A few comments about this submission:

Contribution and authorship:
It seems that only hippalectryon-0 & logseq7546515162 have contributed on GitHub. It would be good to clarify the roles of the other authors in the paper.

Data sharing & Reproducibility:
It would be good to provide comparison data for the various examples; this would help with reproducibility.

Performance:
Performance data are missing (more than just Windows versus Linux). This can be done here https://pyloggrid.readthedocs.io/en/latest/documentation/doc_performance.html or in the paper.

Example usage:
More examples would be great.

Automated tests:
Missing.

State of the field:
Missing in the paper.

@hippalectryon-0

@philipcardiff I'm not sure what the procedure is in JOSS now that we have two reviews. Should I reply to the reviews directly here and make modifications to check all the boxes? Should I wait for a signal on your end?
Thanks!

@philipcardiff

@philipcardiff I'm not sure what the procedure is in JOSS now that we have two reviews. Should I reply to the reviews directly here and make modifications to check all the boxes? Should I wait for a signal on your end? Thanks!

Hi @hippalectryon-0, the JOSS review process is (should be) interactive. You iteratively address review comments as they come up; there is no need to wait for the reviewers to finish their reviews. So, please address or respond to any issues that have been raised here or in the issues section of your repository. If anything is unclear, feel free to ask the reviewer to clarify (or for my interpretation). Once both reviewers state that they are happy that everything is in order (and they have completed their checkbox list) then the review is complete.

@philipcardiff

Hi @hippalectryon-0, let me know if any of the reviewers' comments are unclear to you.

@hippalectryon-0

Sorry for the delay, I've been busier than expected - I'll answer the comments shortly!

@hippalectryon-0

@marlonsmathias Thanks for your review!

  • I see you've left the "performance" checkbox unchecked. Could you indicate more precisely which part is lacking?
  • Likewise, what more would you have liked in "Functionality documentation"?

@hippalectryon-0

@slaizet Thanks for your review!

Regarding the points you have deemed lacking:

  • "Contribution and authorship": could you indicate what is missing?
  • "Performance": same question
  • "Example usage": could you indicate what is missing in the example that is provided in the documentation?
  • "Automated tests": I believe there are many automated tests in https://github.com/hippalectryon-0/pyloggrid/tree/joss/log-grid/tests
  • "State of the field": Given that there are no packages for simulations on shell models other than the MATLAB package by Campolina et al. that is mentioned in the docs, could you expand on what is missing?

Best,
A.B.

@slaizet

slaizet commented Jun 14, 2024

  • "Contribution and authorship": could you indicate what is missing ?
    The paper has 5 authors, I cannot see the contributions from all the authors (for instance, it seems than only hippalectryon-0 & logseq7546515162 have contributed to Github).
  • "Example usage": could you indicate what is missing in the example that is provided in the documentation ?
    More than one example would be great.
  • "State of the field": Given that there are no other packages to make simulations on shell models other than the Matlab package by Campolina&Al that is mentioned in the doc, could you expand on what is missing ?
    Shell models are not my area of expertise but I would expect that simulations of shell models have been performed in the past (which seems to be the case by a quick look at Google Scholar) and it would be nice to briefly discuss how those simulations were performed and what your solver can provide in terms of capability.

@hippalectryon-0

hippalectryon-0 commented Jul 2, 2024

@slaizet Thank you for your answer.

The paper has 5 authors, but I cannot see the contributions from all of them (for instance, it seems that only hippalectryon-0 & logseq7546515162 have contributed on GitHub).

Since the GitHub repository is a mirror of the private CEA GitLab repository, the direct contribution of each author is indeed not clearly visible. As indicated in the original submission, "The authors contributed to this work in unequal proportions, with Amaury Barral taking the lead in the majority of the research, while the remaining authors made valuable but comparatively minor contributions."

If this is still not precise enough, let me know precisely what you would like to know.

Performance data are missing (more than just Windows versus Linux). This can be done here https://pyloggrid.readthedocs.io/en/latest/documentation/doc_performance.html or in the paper.

What kind of performance data do you feel is lacking, precisely? I believe the page you mention explains all the existing tools for running your own benchmarks on your machine. Performance can vary greatly depending on CPU architecture, compiler version, etc.

More than one example would be great.

I agree with you on that point. The reason I did not provide, e.g., a Rayleigh-Bénard example is that such an example also requires a fair bit of mathematical explanation, for instance of why log-lattice equations differ from the usual RB equations (which is explained in our research paper), and I felt that overall this would be too convoluted for the documentation. Let me know if you feel this is a blocking point for the acceptance of the paper.

Are those tests automated? How often are they performed? Do they go through the different parts of the code?

The tests are 100% automated (see e.g. https://github.com/hippalectryon-0/pyloggrid/blob/joss/log-grid/.gitlab-ci.yml). They're performed on each commit and merge request. The test coverage is 85%, with all major parts 100% covered.
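
For illustration (this is not our actual configuration), the core of such a pipeline is a single test job run on every push; a minimal GitLab CI sketch of that shape, with a placeholder image and install step, looks like:

stages:
  - test

tests:
  stage: test
  image: python:3.11      # placeholder image
  script:
    - pip install -e .    # placeholder install step
    - pytest --cov        # run the automated test suite with coverage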

Shell models are not my area of expertise but I would expect that simulations of shell models have been performed in the past (which seems to be the case by a quick look at Google Scholar) and it would be nice to briefly discuss how those simulations were performed and what your solver can provide in terms of capability.

Shell model simulations have indeed been performed in the past, but: 1) there's no widely used numerical library (that I know of) for shell models, as shell models are easy to re-implement from scratch for each paper; 2) although log-lattices share some similarities with shell models, they're also fundamentally different. Therefore I can't think of a relevant package to compare this one to.

@slaizet

slaizet commented Jul 3, 2024

Thank you for your reply.

I do not know whether your answers regarding the contributions of the authors and the single usage example are satisfactory for the journal (@philipcardiff should be able to answer).

In terms of performance, it would be great to see some scalability plots, memory usage, etc. on various hardware.

Finally, I still think that a small literature review on shell model simulations would be helpful for the readers and for potential users of your software.

@philipcardiff

Thank you for your reply.

I do not know whether your answers regarding the contributions of the authors and the single usage example are satisfactory for the journal (@philipcardiff should be able to answer).

In terms of performance, it would be great to see some scalability plots, memory usage, etc. on various hardware.

Finally, I still think that a small literature review on shell model simulations would be helpful for the readers and for potential users of your software.

Regarding authorship, the JOSS documentation states (note that authoring a commit is not a requirement):

Purely financial (such as being named on an award) and organizational (such as general supervision of a research group) contributions are not considered sufficient for co-authorship of JOSS submissions, but active project direction and other forms of non-code contributions are. The authors themselves assume responsibility for deciding who should be credited with co-authorship, and co-authors must always agree to be listed. In addition, co-authors agree to be accountable for all aspects of the work, and to notify JOSS if any retraction or correction of mistakes are needed after publication.

@hippalectryon-0: based on your comments, I believe all five authors satisfy these requirements. Is this correct?

Regarding the number of example cases, the JOSS documentation states:

The authors should include examples of how to use the software (ideally to solve real-world analysis problems).

Although possibly ambiguous, note the use of the word "examples" (plural).

Also, the JOSS documentation states:

Reviewers should rely on their expert understanding of their domain to judge whether the software is of broad interest (likely to be cited by other researchers) or more narrowly focused around the needs of an individual researcher or lab.

Based on these two points, it seems reasonable that more than one test case should be included in the documentation.

@philipcardiff

@editorialbot set 10.5281/zenodo.14356262 as archive

@editorialbot
Collaborator Author

Done! archive is now 10.5281/zenodo.14356262

@philipcardiff

@editorialbot generate pdf

@philipcardiff

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.1016/0167-2789(85)90002-8 is OK
- 10.1146/annurev.fluid.35.101101.161122 is OK
- 10.1103/PhysRevLett.77.5369 is OK
- 10.48550/arXiv.2006.00047 is OK
- 10.1017/jfm.2023.204 is OK
- 10.1103/PhysRevE.107.065106 is OK
- 10.3390/atmos14111690 is OK
- 10.1016/j.jcp.2014.09.038 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Fluid Dynamics on Logarithmic Lattices and Singula...
- No DOI given, and none found for title: Fluid Flows and Boundaries on Logarithmic Lattices

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@hippalectryon-0

Thanks, @hippalectryon-0. Normally, the JOSS-specific version would follow the same naming/numbering convention that is adopted in the repository. Would you consider renaming the release as 2.3.2, for example?

Sure! https://github.com/hippalectryon-0/pyloggrid/releases/tag/2.3.2

@philipcardiff

@editorialbot set 2.3.2 as version

@editorialbot
Collaborator Author

Done! version is now 2.3.2

@philipcardiff

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@philipcardiff

Thanks, @hippalectryon-0. Can you update this line in the paper, "This corresponds to version 2.2.1.", to 2.3.2? I assume the text is still relevant for 2.3.2.

@hippalectryon-0

Done!

@philipcardiff

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@philipcardiff

@editorialbot generate pdf

@philipcardiff

@kyleniemeyer: this submission is ready for processing.

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@kyleniemeyer

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.1016/0167-2789(85)90002-8 is OK
- 10.1146/annurev.fluid.35.101101.161122 is OK
- 10.1103/PhysRevLett.77.5369 is OK
- 10.48550/arXiv.2006.00047 is OK
- 10.1017/jfm.2023.204 is OK
- 10.1103/PhysRevE.107.065106 is OK
- 10.3390/atmos14111690 is OK
- 10.1016/j.jcp.2014.09.038 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Fluid Dynamics on Logarithmic Lattices and Singula...
- No DOI given, and none found for title: Fluid Flows and Boundaries on Logarithmic Lattices

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/pe-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#6250, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

editorialbot added the recommend-accept (Papers recommended for acceptance in JOSS.) label on Dec 11, 2024
@kyleniemeyer

@hippalectryon-0 it looks like the version on the Zenodo archive (v1) does not match the version accepted (v2.3.2). Can you correct the archive metadata?

In addition, there are a number of proper names in the references that are not being capitalized properly, likely because they are missing the curly brackets that protect them in the title field (e.g., {R}unge--{K}utta). Can you correct all of these?
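
For example, a protected title field in the .bib file would look something like this (entry key hypothetical):

% hypothetical entry key; the braces protect the proper names from lowercasing
@article{etd_rk,
  title = {Exponential time-differencing with embedded {R}unge--{K}utta adaptive step control},
}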

@hippalectryon-0

Thanks for the feedback; both points should be fixed.

@hippalectryon-0

However, the release 2.3.2 now points towards the "old" version, as does the Zenodo archive (without the proper references). How should we resolve this?

@hippalectryon-0

@kyleniemeyer

@kyleniemeyer

@hippalectryon-0 looking at https://doi.org/10.5281/zenodo.14356262, it has the correct version number. I'm not sure what the problem is?

@kyleniemeyer

Changes to the JOSS paper itself do not need to be reflected in the software version release / archive.

@hippalectryon-0

Alright! Let me know if anything else is missing. @kyleniemeyer
