
[REVIEW]: A-SLOTH: Ancient Stars and Local Observables by Tracing Haloes #4417

Closed

editorialbot opened this issue May 24, 2022 · 45 comments

Labels: AAS (papers being published together with an AAS submission), accepted, Makefile, published, recommend-accept (papers recommended for acceptance in JOSS), review, Roff, TeX

@editorialbot

editorialbot commented May 24, 2022

Submitting author: @HartwigTilman (Tilman Hartwig)
Repository: https://gitlab.com/thartwig/asloth
Branch with paper.md (empty if default branch):
Version: 1.1.0
Editor: @dfm
Reviewers: @gregbryan, @kaleybrauer
Archive: 10.5281/zenodo.6683682

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/596b9be484b4145accfd7c6cdda147ba"><img src="https://joss.theoj.org/papers/596b9be484b4145accfd7c6cdda147ba/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/596b9be484b4145accfd7c6cdda147ba/status.svg)](https://joss.theoj.org/papers/596b9be484b4145accfd7c6cdda147ba)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@gregbryan & @kaleybrauer, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @dfm know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @gregbryan

📝 Checklist for @kaleybrauer

@editorialbot added the AAS, Makefile, review, Roff, and TeX labels May 24, 2022
@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.88  T=0.10 s (964.1 files/s, 146231.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Fortran 90                      52            921           1788           6302
Python                          22            889            985           2900
Markdown                        10             97              0            387
TeX                              1             27              0            230
reStructuredText                 9            108            153            125
make                             3             35             20            124
DOS Batch                        1              8              1             26
C/C++ Header                     1             10              0             20
INI                              1              1              0             11
-------------------------------------------------------------------------------
SUM:                           100           2096           2947          10125
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot

Wordcount for paper.md is 756

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1111/j.1365-2966.2007.12517.x is OK
- 10.1093/mnras/stu2740 is OK
- 10.1093/mnras/stw1775 is OK
- 10.1093/mnrasl/slw074 is OK
- 10.1093/mnras/sty1176 is OK
- 10.1093/mnras/stx2729 is OK
- 10.1093/mnras/stw1882 is OK
- 10.3847/1538-4357/ab960d is OK
- 10.1186/s40668-014-0006-2 is OK
- 10.1093/mnras/stw2687 is OK
- 10.1093/mnras/sty142 is OK
- 10.3847/1538-4357/aafafb is OK
- 10.3847/0004-637X/826/1/9 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@dfm

dfm commented May 24, 2022

@gregbryan, @kaleybrauer — This is the review thread for the paper. All of our communications will happen here from now on. Thanks again for agreeing to participate!

Please read the "Reviewer instructions & questions" in the first comment above, and generate your checklists by commenting @editorialbot generate my checklist on this issue ASAP. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4-6 weeks but please try to make a start ahead of this as JOSS reviews are by their nature iterative and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.

@HartwigTilman

Thank you very much for agreeing to review A-SLOTH, Greg and Kaley!

This is a parallel submission to JOSS and ApJ. Our methods paper, which we had submitted to ApJ, was accepted a few days ago. I have just added the ApJ paper draft to the repository under doc/paper_draft_ApJ.pdf. I hope you find these additional explanations useful for the review process.

@gregbryan

gregbryan commented Jun 6, 2022

Review checklist for @gregbryan

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository URL (https://gitlab.com/thartwig/asloth)?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@HartwigTilman) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@kaleybrauer

kaleybrauer commented Jun 9, 2022

Review checklist for @kaleybrauer

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository URL (https://gitlab.com/thartwig/asloth)?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@HartwigTilman) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@HartwigTilman

Greg still has some specific boxes unchecked, and I was wondering whether I can help the reviewers find the relevant information? If this is not appropriate, please feel free to ignore or delete my comment.

I guess all the information is available, and the question is rather whether users can easily find it, or whether there is a more intuitive place to make such information accessible.

@gregbryan

Thanks Hartwig for the pointers (apologies for the delay but I did want to take the time to look over everything carefully). Overall, it seems really great -- the code is in excellent condition and the documentation and paper are both very nice. I think this shouldn't take too long to finish up, but before signing off, I do have two thoughts/suggestions that you might want to think about.

  • Although you have a nice statement of need in the paper, the documentation itself (https://a-sloth.readthedocs.io/en/latest/index.html) does not, and really starts off without describing what the software is for. This is a minor issue (although note that the review checklist specifically asks for a statement of need in both the paper and the documentation), and easily fixed: why not just copy the statement of need from the paper into the documentation? That will help people who run into the documentation before reading the paper.
  • Automated tests: you do have a very nice set of tests that are easy to run, and it is easy to check that the plots match (which I did, on two systems!), but they are not really automated in the sense that they are not run automatically when changes to the code (i.e. pull requests) are accepted. This falls under the "OK" heading of the JOSS review criteria (https://joss.readthedocs.io/en/latest/review_criteria.html). I recognize that setting up automated testing on CircleCI (or similar) is not a negligible amount of work, so I wanted to check whether you felt that was something you wanted to look into (in my experience, it is well worth it). I am OK accepting the submission without it but wanted to check with you first.
  • Finally, there is a very minor typo in the paper itself: "provide a full" -> "provide full"

Nice work!

@HartwigTilman

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@HartwigTilman

Dear Greg,
thank you very much for carefully testing A-SLOTH and for your valuable feedback.

We fixed the typo in the paper.

Thanks for pointing out that our documentation lacked a proper introduction. I have changed this by adding a brief summary to the main page of our documentation: https://a-sloth.readthedocs.io/en/latest/index.html (sometimes you need to reload the page in order to see updates).

Automated tests are indeed an excellent idea that we had not yet thought about.
Status quo:

  • At the moment, we are only 4 developers and a handful of active A-SLOTH users. Hence, we have tested the code manually after each major push to the main branch.
  • Motivated by your suggestion, I have added an explanation of these manual code-verification steps to the documentation: https://a-sloth.readthedocs.io/en/latest/How-Tos/CheckChanges.html. Users can now follow these steps if they want to check their code updates.
  • We have set up Read the Docs so that it automatically compiles the code and creates new documentation whenever someone pushes to the main branch. If the compilation fails, I receive an e-mail. While this is not yet a fully automated CI/CD process, it allows us to check automatically whether the code still compiles after every update.

As you have pointed out, a fully automated CI/CD process would take a significant amount of time, and I am afraid it would not yet pay off given the currently small A-SLOTH community. However, we are curious to learn about CI/CD and might implement it for a future version.

What would such an automated test look like for astronomical software? There are some changes (cosmetic or performance-related) for which one expects the output to remain unchanged; an automated test would be easy in such a case. However, we might also have changes that affect the physics and influence the outcome (sometimes only slightly, but still not bit-identical). How would we create an automated test for this? Or does the test just ask "does the code run?" rather than "do the results still make sense?"?

I looked into the Enzo Test Suite for inspiration. It seems there are several test problems and, eventually, the results are compared to a previously stored test-suite run (--answer-store)? We already do something similar in Tutorial1, where we compare results from a locally run version to saved runs from the vanilla version.

@gregbryan

Hi Hartwig -- Perfect! I'm happy to sign off on the review -- as I said, a really nice piece of software.

Regarding the automated testing, there are many nice descriptions online of the various approaches, so I'll just quickly answer your immediate questions. The ideal would be unit testing, which tests individual components (subroutines) to make sure they return what is expected. This is done for large mission-critical software packages, but for astronomical software the more typical approach is 'answer testing', which runs the whole code and compares some part of the output to the expected result in a numerical way (which is what Enzo does).

As you note, this approach fails when you update the implemented physics in such a way that the answer changes -- in that case, you would have to update the answer as well. Obviously, that limits the utility of the approach, but it does catch errors introduced by cosmetic changes, as you note, or by optimization changes. Alternatively, if you can test different parts of the solution (e.g. simplified runs which don't include some particular optional feature), then you can have multiple 'answer' tests, and one would hope that a change adding a new feature wouldn't change the results with the old features. For example, you could do a test without LW feedback, and then that test would not change for updates to the LW part of the machinery.

Of course, this adds more work for the developers, particularly at the beginning (although arguably it can decrease downstream work because it automates the testing -- I'm not sure I would claim the investment is necessarily a net positive!). I certainly wouldn't require it for this review -- just wanted to mention it so you are aware of the options.
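The two flavors of testing described above can be sketched in a few lines of Python. This is purely illustrative: the toy model function, its coefficients, and the test names are invented for this sketch and are not part of A-SLOTH.

```python
import math

def star_formation_efficiency(halo_mass):
    """Toy stand-in for a physics routine (hypothetical, not A-SLOTH code)."""
    return 0.05 * math.tanh(halo_mass / 1e8)

def test_unit_monotonic():
    # Unit test: check one component for an exactly known property,
    # here that efficiency grows monotonically with halo mass.
    effs = [star_formation_efficiency(m) for m in (1e7, 1e8, 1e9)]
    assert all(b > a for a, b in zip(effs, effs[1:]))

def test_answer_against_reference():
    # Answer test: run the model and compare to a stored reference within
    # a tolerance.  Cosmetic or optimization changes should still pass;
    # a deliberate physics change requires regenerating the reference.
    masses = [10 ** (7 + 0.1 * i) for i in range(31)]
    result = [star_formation_efficiency(m) for m in masses]
    # In practice this would be loaded from disk (the "--answer-store" idea).
    reference = [0.05 * math.tanh(m / 1e8) for m in masses]
    assert all(math.isclose(r, ref, rel_tol=1e-9)
               for r, ref in zip(result, reference))

test_unit_monotonic()
test_answer_against_reference()
print("all tests passed")
```

A refactor that leaves the physics untouched would keep both tests green, while a genuine physics change would trip the answer test and force a conscious decision to update the stored reference.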

@kaleybrauer

Apologies for my delay as well, but I have been playing with the code throughout this week and am very excited that your group has developed and openly shared this. I haven't found any issues; everything seems to run easily and correctly, and the tutorials are thorough and well done.

I second Greg's suggestion about automated tests, but also don't think it is necessary for a project of this size; it is more something to keep in mind if the number of developers and the code base grow. One thing I wanted to see was more description in the documentation of what exactly the code is doing in terms of handling star formation, supernovae, etc., but that was just because I hadn't read the ApJ paper yet and thus had to check the code to see what models/physics were implemented. So maybe having the methods paper featured more explicitly on the documentation site, possibly even with a high-level summary, would be helpful to users. You may already be planning this for after the paper goes public, but I wanted to mention it.

Other than that, one very minor thing is the typo of "False" here: https://a-sloth.readthedocs.io/en/latest/How-Tos/CheckChanges.html. And also, very cute logo. :)

@HartwigTilman

Dear Kaley,
thank you very much for the thorough testing and constructive feedback.
Yes, we will definitely keep the automated tests in mind and implement them once the A-SLOTH community grows to a size where we get frequent pull requests.

You are right, our online documentation is rather technical and lacks astrophysical explanations. Following your suggestion, I have added a link to the ApJ paper draft (which contains detailed explanations of the astrophysics) prominently to the main page of the documentation. We will improve and unify this once both papers are accepted.

I have also fixed the typo – thanks for spotting it!

@dfm

dfm commented Jun 21, 2022

@gregbryan, @kaleybrauer — Thanks for your reviews! Since your checklists are completed, I just wanted to confirm that you're happy to sign off on acceptance of this submission. Let me know if you have any remaining comments.

@HartwigTilman — I'm going to do a last read-through and I may have a PR for you with some edits. Then I'll have a few last processing steps for you to do before final acceptance, but we're close!

@dfm

dfm commented Jun 21, 2022

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1111/j.1365-2966.2007.12517.x is OK
- 10.1093/mnras/stu2740 is OK
- 10.1093/mnras/stw1775 is OK
- 10.1093/mnrasl/slw074 is OK
- 10.1093/mnras/sty1176 is OK
- 10.1093/mnras/stx2729 is OK
- 10.1093/mnras/stw1882 is OK
- 10.3847/1538-4357/ab960d is OK
- 10.1186/s40668-014-0006-2 is OK
- 10.1093/mnras/stw2687 is OK
- 10.1093/mnras/sty142 is OK
- 10.3847/1538-4357/aafafb is OK
- 10.3847/0004-637X/826/1/9 is OK

MISSING DOIs

- 10.1093/mnras/stac1664 may be a valid DOI for title: Effect of the cosmological transition to metal-enriched star-formation on the hydrogen 21-cm signal

INVALID DOIs

- None

@dfm

dfm commented Jun 21, 2022

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@gregbryan

@dfm Yes, I am happy for this to be accepted.

@HartwigTilman

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1111/j.1365-2966.2007.12517.x is OK
- 10.1093/mnras/stu2740 is OK
- 10.1093/mnras/stw1775 is OK
- 10.1093/mnrasl/slw074 is OK
- 10.1093/mnras/sty1176 is OK
- 10.1093/mnras/stac1664 is OK
- 10.1093/mnras/stx2729 is OK
- 10.1093/mnras/stw1882 is OK
- 10.3847/1538-4357/ab960d is OK
- 10.1186/s40668-014-0006-2 is OK
- 10.1093/mnras/stw2687 is OK
- 10.1093/mnras/sty142 is OK
- 10.3847/1538-4357/aafafb is OK
- 10.3847/0004-637X/826/1/9 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@HartwigTilman

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@dfm

dfm commented Jun 22, 2022

@HartwigTilman - Thanks for merging the edits! Here are the final steps that I'll need from you:

  1. Take one last read through the manuscript to make sure that you're happy with it (it's harder to make changes later!), especially the author names and affiliations. I've taken a pass and it looks good to me!
  2. Increment the version number of the software and report that version number back here.
  3. Create an archived release of that version of the software (using Zenodo or something similar). Please make sure that the metadata (title and author list) exactly match the paper. Then report the DOI of the release back to this thread.

@HartwigTilman

Dear all,
thank you very much for your help with this code review and publication.

  1. I checked the paper draft again and it looks good to me.
  2. I incremented the version number. The current version is 1.1.0
  3. I have archived this version on Zenodo and the DOI is 10.5281/zenodo.6683682

@dfm

dfm commented Jun 22, 2022

@editorialbot set 10.5281/zenodo.6683682 as archive

@editorialbot

Done! Archive is now 10.5281/zenodo.6683682

@dfm

dfm commented Jun 22, 2022

@editorialbot set 1.1.0 as version

@editorialbot

Done! version is now 1.1.0

@dfm

dfm commented Jun 22, 2022

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1111/j.1365-2966.2007.12517.x is OK
- 10.1093/mnras/stu2740 is OK
- 10.1093/mnras/stw1775 is OK
- 10.1093/mnrasl/slw074 is OK
- 10.1093/mnras/sty1176 is OK
- 10.1093/mnras/stac1664 is OK
- 10.1093/mnras/stx2729 is OK
- 10.1093/mnras/stw1882 is OK
- 10.3847/1538-4357/ab960d is OK
- 10.1186/s40668-014-0006-2 is OK
- 10.1093/mnras/stw2687 is OK
- 10.1093/mnras/sty142 is OK
- 10.3847/1538-4357/aafafb is OK
- 10.3847/0004-637X/826/1/9 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#3295

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#3295, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot added the recommend-accept label Jun 22, 2022
@dfm

dfm commented Jun 22, 2022

@HartwigTilman — I've now handed this off to the managing editors to do the final processing. There may be some final edits or other changes, but the process should be fairly quick. Thanks again for your submission and for your responses to all the suggestions from @gregbryan and @kaleybrauer!

@gregbryan, @kaleybrauer — Thanks again for your reviews of this submission. Thanks for the time that you took and the thorough and constructive comments that you made. We couldn't do this without you, and I really appreciate you volunteering your time!!

@arfon

arfon commented Jun 26, 2022

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.04417 joss-papers#3307
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.04417
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot added the accepted and published labels Jun 26, 2022
@arfon

arfon commented Jun 26, 2022

@gregbryan, @kaleybrauer – many thanks for your reviews here and to @dfm for editing this submission! JOSS relies upon the volunteer effort of people like you and we simply wouldn't be able to do this without you ✨

@HartwigTilman – your paper is now accepted and published in JOSS ⚡🚀💥

@arfon closed this as completed Jun 26, 2022
@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.04417/status.svg)](https://doi.org/10.21105/joss.04417)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.04417">
  <img src="https://joss.theoj.org/papers/10.21105/joss.04417/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.04417/status.svg
   :target: https://doi.org/10.21105/joss.04417

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
