
[REVIEW]: Empirical and non-parametric copula models with the cort R package #2653

Closed · 40 tasks done
whedon opened this issue Sep 9, 2020 · 99 comments

@whedon

whedon commented Sep 9, 2020

Submitting author: @lrnv (Oskar Laverny)
Repository: https://github.com/lrnv/cort
Version: v0.3.2
Editor: @pdebuyl
Reviewer: @coatless, @agisga
Archive: 10.5281/zenodo.4301435

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/c762db03e770002189ebe8dd7e7bfe16"><img src="https://joss.theoj.org/papers/c762db03e770002189ebe8dd7e7bfe16/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/c762db03e770002189ebe8dd7e7bfe16/status.svg)](https://joss.theoj.org/papers/c762db03e770002189ebe8dd7e7bfe16)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@coatless & @agisga, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @pdebuyl know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks, at the very latest.

Review checklist for @coatless

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@lrnv) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @agisga

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@lrnv) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@whedon

whedon commented Sep 9, 2020

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @coatless, @agisga it looks like you're currently assigned to review this paper 🎉.

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this, do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

@whedon

whedon commented Sep 9, 2020

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.ejor.2015.06.028 is OK
- 10.1080/03610926.2017.1285929 is OK
- 10.1016/j.jmva.2016.07.003 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon

whedon commented Sep 9, 2020

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈
@pdebuyl

pdebuyl commented Sep 22, 2020

@coatless do you need any further information to start the review? The review can proceed in steps, so you can send partial feedback to the author and start with a smaller time commitment than a full review would require.

@pdebuyl

pdebuyl commented Oct 5, 2020

@coatless @agisga gentle reminder

@pdebuyl

pdebuyl commented Oct 12, 2020

@coatless @agisga a complete review is expected by the middle of next week. Intermediate questions and comments to the author are also welcome on this page; the JOSS review process encourages this type of communication.

@lrnv

lrnv commented Oct 13, 2020

@pdebuyl @coatless @agisga I will of course be pleased to answer any questions or comments you might have.

@pdebuyl

pdebuyl commented Oct 21, 2020

@coatless @agisga I see that progress has been made in the checklists. Could you finish the review? It has now been six weeks since the review started.

@coatless

coatless commented Oct 21, 2020

I'm a huge fan of the package. Long ago I stepped into copula estimation with R and was disappointed by the tools available; cort fills portions of that void. Examples with the package are easily accessible outside of R thanks to a pkgdown website. That said, I do have a few comments and suggestions about the package, listed below.

README

Installation

Please indicate on the README that installation from GitHub will require the system to have a compiler.

  • Windows: Rtools.
  • macOS: Xcode CLI
  • Linux: r-base-dev (debian)
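
A minimal sketch of the install path this note would document (the use of the remotes package is an assumption; the README may prefer another route):

install.packages("remotes")
# the toolchain above (Rtools / Xcode CLI / r-base-dev) must be present,
# since the package's C++ sources are compiled during installation
remotes::install_github("lrnv/cort")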

I would also add a note in the JOSS paper that the copula estimation routines are written in C++. At present, the use of C++ isn't stated; only S4 and R are mentioned.

Contributing

There isn't a Contributor Code of Conduct, contributor guidelines, or instructions for reporting problems. Perhaps these could be added to the end of the README?

Vignettes

Reproducibility

Some code chunks that use the r*() functions include a set.seed() call, while others do not, so hidden state may unintentionally be present. Moreover, one vignette advocates a non-standard set_seed() function (note the _ instead of the .):

https://lrnv.github.io/cort/articles/vignette02_cort_clayton.html#dataset-1

At this point, it would be better to explicitly declare a dependency on R >= 3.6 (the default sampling RNG changed in R 3.6.0, so seeded results differ between R versions).
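
A hedged sketch of the chunk-level seeding pattern being requested, with runif() standing in for the package's r*() samplers:

set.seed(12)     # base R set.seed(), not a set_seed() helper
u <- runif(200)  # every chunk that draws random numbers repeats this pattern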

Styling

Depending on the vignette, the R code shown either lacks spacing or has excessive spacing. Consistency here would be appreciated; consider running styler::style_dir("vignettes"), as sketched below.
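
A minimal sketch of that restyling run, assuming styler is installed; it rewrites the files in place, so commit beforehand:

install.packages("styler")
styler::style_dir("vignettes")  # applies the tidyverse style guide to every file in vignettes/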

Code

API

snake_case vs. camelCase

Functions within the package switch back and forth between snake_case and camelCase. It would be ideal to have some consistency here.


https://lrnv.github.io/cort/reference/index.html

Moreover, could the reference portion of the API on the pkgdown website be split into subcategories, e.g. data/cort/copulas?

Minor note: Cort() uses N= whereas elsewhere all function parameters are lower-case.

Documentation

There is a documentation article for each function. However, I think some of the documentation entries should provide additional insight into the implementation, a better explanation of what is happening in the example, and an occasional change in which data set is used for the computation (~4 ship with the package, but only LifeCycleSavings from datasets is used).

For instance, the entry for the titular Cort() function of the cort package gives sparse details and no indication of what is contained in a Cort object:

https://lrnv.github.io/cort/reference/Cort-Class.html

R> cop <- Cort(LifeCycleSavings[,1:3])
#Splitting...
#
#     1 leaves to split...
#     5 leaves to split...
#     10 leaves to split...
#     1 leaves to split...
#Enforcing constraints...
#Done !
R> str(cop)
#Formal class 'Cort' [package "cort"] with 11 slots
#  ..@ p_value_for_dim_red: num 0.75
#  ..@ number_max_dim     : int 3
#  ..@ min_node_size      : num 1
#  ..@ verbose_lvl        : num 1
#  ..@ vols               : num [1:88] 0.05559 0.16964 0.04987 0.06598 0.00195 ...
#  ..@ f                  : num [1:88] 0 0.02 0.02 0.02 0.02 0 0 0 0.02 0 ...
#  ..@ p                  : num [1:88] 1.95e-09 5.75e-02 2.66e-02 7.37e-03 9.78e-04 ...
#  ..@ a                  : num [1:88, 1:3] 0 0.227 0 0 0 ...
#  ..@ b                  : num [1:88, 1:3] 0.227 1 0.227 0.227 0.227 ...
#  ..@ data               : num [1:50, 1:3] 0.627 0.686 0.824 0.235 0.804 ...
#  ..@ dim                : int 3

In contrast, the S4 documentation for Matrix() in the Matrix package looks like: https://stat.ethz.ch/R-manual/R-devel/library/Matrix/html/Matrix-class.html
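
A hedged sketch of the kind of slot documentation being suggested, written as roxygen2 @slot tags; the slot names come from the str(cop) output above, but every description is an illustrative guess:

#' @slot dim integer, the dimension of the data
#' @slot data the (pseudo-)observations the copula was fitted on
#' @slot a lower corners of the leaves, one row per leaf (assumed)
#' @slot b upper corners of the leaves (assumed)
#' @slot vols volumes of the leaves (assumed)
#' @slot f per-leaf empirical frequencies (assumed)
#' @slot p per-leaf fitted probabilities (assumed)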

In another documentation entry, I think a numerical tolerance issue is present due to using == instead of all.equal():
https://lrnv.github.io/cort/reference/quad_prod-methods.html

R> quad_prod(cop,cop)
# [1] 10.81164
R> quad_norm(cop)
# [1] 10.81164
R> quad_norm(cop) == quad_prod(cop,cop)
# [1] FALSE
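
A sketch of the tolerance-aware comparison being suggested, with cop as constructed above:

R> all.equal(quad_norm(cop), quad_prod(cop,cop))
# TRUE when the two values agree within the default numerical tolerance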

Simulated Data

For the simulated data sets, there isn't a simulation script associated with the help file, though the scripts can be found in:

https://github.com/lrnv/cort/tree/master/data-raw

I think it would be helpful to link to data-raw/ or directly embed it within the Rd file.
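
One hedged way to do the latter, assuming the Rd files are generated with roxygen2, is a @source tag in the data documentation block:

#' @source Simulated by the scripts in
#'   \url{https://github.com/lrnv/cort/tree/master/data-raw}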

@pdebuyl

pdebuyl commented Oct 21, 2020

Thank you for the review @coatless !

@agisga

agisga commented Oct 21, 2020

I'm deeply sorry for the delay in the review (the timing unfortunately turned out to be far from ideal for me). Thank you for your patience.

I think that cort is quality software and a substantial scholarly effort, and I would suggest acceptance subject to minor revisions correcting the issues outlined in this thread.
Some of the issues that I came across were already mentioned by @coatless above, most importantly the lack of community guidelines: instructions for potential contributors, for issue reporting, and for seeking support. So, I will try not to repeat what has already been said.

Vignettes

In addition to what @coatless has mentioned, the second code block in vignette 4 (https://cran.r-project.org/web/packages/cort/vignettes/vignette04_bootstrap_varying_m.html) contains the following redundant lines, which should be removed:

        test <- 
        train <- 

Automated tests

I get some warnings when running the automated tests manually with devtools::test() (tested with R version 4.0.3 (2020-10-10) on Arch Linux):

test-CortForest.R:5: warning: (unknown)
UNRELIABLE VALUE: Future ('<none>') unexpectedly generated random numbers without specifying argument '[future.]seed'. There is a risk that those random numbers are not statistically sound and the overall results might be invalid. To fix this, specify argument '[future.]seed', e.g. 'seed=TRUE'. This ensures that proper, parallel-safe random numbers are produced via the L'Ecuyer-CMRG method. To disable this check, use [future].seed=NULL, or set option 'future.rng.onMisuse' to "ignore".

test-CortForest.R:9: warning: (unknown)
UNRELIABLE VALUE: Future ('<none>') unexpectedly generated random numbers without specifying argument '[future.]seed'. There is a risk that those random numbers are not statistically sound and the overall results might be invalid. To fix this, specify argument '[future.]seed', e.g. 'seed=TRUE'. This ensures that proper, parallel-safe random numbers are produced via the L'Ecuyer-CMRG method. To disable this check, use [future].seed=NULL, or set option 'future.rng.onMisuse' to "ignore".

While this is a very minor issue, I think such warnings should be fixed (or explicitly set to be ignored) unless there are reasons not to.
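
A sketch of both remedies, taken from the warning text itself (an illustrative snippet, not the package's actual test code):

library(future)
plan(multisession)
f <- future(runif(10), seed = TRUE)  # parallel-safe L'Ecuyer-CMRG streams
value(f)
# or, to deliberately silence the check:
options(future.rng.onMisuse = "ignore")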

State of the field

State of the field: Do the authors describe how this software compares to other commonly-used packages?

I'm not very familiar with the estimation of non-parametric copulas, so this may not be relevant, but I don't really see a comparison to other R or Python packages. A simple web search shows that other copula estimation packages exist. While the cort paper mentions publications by Li et al. and Nagler et al. as alternative approaches, it is not clear whether these papers correspond to commonly-used software packages or are mainly theoretical/methodological papers.

Quality of writing

There are some minor grammatical and spelling mistakes in the paper, so proofreading would be beneficial. Here are some of the mistakes I noticed:

  • Page 1, third paragraph:

    There also exists several tree-structured piecewise constant density estimators,

    "exists" should be "exist"

    The new models that are implemented in this package tries to solve these issues.

    "tries" should be "try"

  • Page 2:

    • "pakcage" should be "package"

    • "was design" should be "was designed"

  • Page 2:

    "Examples datasets are included in the package, and the many vignettes gives examples of usecases."

    "Examples" should be "Example", "gives" should be "give", "usecases" should be "use cases".

@pdebuyl

pdebuyl commented Oct 22, 2020

Thank you for the review @agisga !

@lrnv if you would like to see examples of community guidelines, for instance, you can take a look at already published JOSS articles.

@lrnv

lrnv commented Oct 22, 2020

Well, first, thanks to you two @coatless and @agisga for taking the time. At first glance, the reviews you produced are very interesting, and I can only nod to every point you highlighted.

The plan I have right now:
1° Fix the issues that the reviewers highlighted (and more if I find more) in a new branch on my repo, and prepare a pull request. This will take me a few weeks.
2° Wait for the reviewers to see the pull request and eventually discuss it in its own thread.
3° Once everyone agrees, merge the PR into master and send the package to CRAN.
4° Once CRAN agrees (again, a few weeks of delay), accept the new version of the paper here (with the right version number and everything).
5° Continue the publication in JOSS after that.

@pdebuyl do you agree with this plan? Please tell me what you think.

@pdebuyl One way I might do it is to create a bunch of issues in my repo corresponding to each point of the reviews; would that make sense to you?

@pdebuyl

pdebuyl commented Oct 22, 2020

@lrnv Your plan is fine. You will need to archive the code to Zenodo as well, though; this is part of the JOSS process, and the Zenodo archive DOI will be included in the article. I will also perform a final proofreading before forwarding the article to an editor-in-chief for publication. The paper might thus cause a few more commits to the git repository, but this won't change the software itself, so the CRAN and Zenodo versions will remain good.

Typo, page 2, first line. "efficicent" -> "efficient"

Typo, page 2, acknowledgments. "meaningfull" -> "meaningful".

Grammar, page 2, second paragraph. "by statistician" -> "by statisticians" (if you pick singular instead, then conjugation of "need" should be changed as well).

Affiliation: please add "France" to the first affiliation and spell out the second affiliation (+ add city and country).

For specific issues such as the API naming, you can discuss them in cort issues and report the conclusion here. For changes to the paper, I prefer if they are done in this thread.

@pdebuyl

pdebuyl commented Nov 3, 2020

@lrnv can you let us know of the progress here? Even though most of the review is done, there are still a few checkboxes to tick.

@lrnv

lrnv commented Nov 3, 2020

@pdebuyl The progress is zero for the moment; I have not taken the time yet. Could I have a few extra weeks?

@whedon

whedon commented Dec 3, 2020

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.ejor.2015.06.028 is OK
- 10.1080/03610926.2017.1285929 is OK
- 10.1016/j.jmva.2016.07.003 is OK
- 10.18637/jss.v021.i04 is OK
- 10.18637/jss.v040.i08 is OK
- 10.1007/978-1-4614-6868-4 is OK
- 10.7287/peerj.preprints.3188v1 is OK
- 10.1016/j.csda.2013.02.005 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@pdebuyl

pdebuyl commented Dec 3, 2020

Thanks. Could you add the arXiv link to the papers missing one, as is done for Bengtsson 2020? Also, can you use the DOI instead of the URL for the J Stat Soft papers? DOIs should be the most stable identifiers and are preferred.

Ram & Gray has DOI 10.1145/2020408.2020507; can you use it?

@lrnv

lrnv commented Dec 3, 2020

I added some URLs and DOIs in the bib file. Sklar's paper is too old to have any of these, unfortunately.

@lrnv

lrnv commented Dec 3, 2020

@whedon generate pdf

@lrnv

lrnv commented Dec 3, 2020

@whedon check references

@whedon

whedon commented Dec 3, 2020

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@whedon

whedon commented Dec 3, 2020

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.ejor.2015.06.028 is OK
- 10.1145/2020408.2020507 is OK
- 10.1080/03610926.2017.1285929 is OK
- 10.1016/j.jmva.2016.07.003 is OK
- 10.18637/jss.v021.i04 is OK
- 10.18637/jss.v040.i08 is OK
- 10.1007/978-1-4614-6868-4 is OK
- 10.7287/peerj.preprints.3188v1 is OK
- 10.1016/j.csda.2013.02.005 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@pdebuyl

pdebuyl commented Dec 3, 2020

The reference checker is an indicative first pass. It's almost there: lrnv/cort#31

@pdebuyl

pdebuyl commented Dec 3, 2020

@whedon generate pdf

@whedon

whedon commented Dec 3, 2020

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@pdebuyl

pdebuyl commented Dec 4, 2020

@whedon accept

@whedon

whedon commented Dec 4, 2020

Attempting dry run of processing paper acceptance...

whedon added the recommend-accept label on Dec 4, 2020
@pdebuyl

pdebuyl commented Dec 4, 2020

Thanks @coatless @agisga for the review, @lrnv for submitting to JOSS! The editor in chief in rotation will take over from now.

@whedon

whedon commented Dec 4, 2020

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#1957

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1957, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@whedon

whedon commented Dec 4, 2020

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.ejor.2015.06.028 is OK
- 10.1145/2020408.2020507 is OK
- 10.1080/03610926.2017.1285929 is OK
- 10.1016/j.jmva.2016.07.003 is OK
- 10.18637/jss.v021.i04 is OK
- 10.18637/jss.v034.i09 is OK
- 10.18637/jss.v039.i09 is OK
- 10.18637/jss.v040.i08 is OK
- 10.1007/978-1-4614-6868-4 is OK
- 10.7287/peerj.preprints.3188v1 is OK
- 10.1016/j.csda.2013.02.005 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@kyleniemeyer
kyleniemeyer commented Dec 4, 2020

@whedon accept deposit=true

whedon added the accepted and published labels on Dec 4, 2020
@whedon

whedon commented Dec 4, 2020

Doing it live! Attempting automated processing of paper acceptance...

@whedon

whedon commented Dec 4, 2020

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@whedon

whedon commented Dec 4, 2020

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.02653 joss-papers#1961
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.02653
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@kyleniemeyer
kyleniemeyer commented Dec 4, 2020

Congrats @lrnv on your article's publication in JOSS!

Many thanks to @coatless and @agisga for reviewing this, and @pdebuyl for editing.

@whedon

whedon commented Dec 4, 2020

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02653/status.svg)](https://doi.org/10.21105/joss.02653)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.02653">
  <img src="https://joss.theoj.org/papers/10.21105/joss.02653/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.02653/status.svg
   :target: https://doi.org/10.21105/joss.02653

This is how it will look in your documentation:


We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@lrnv

lrnv commented Dec 4, 2020

Thanks a lot to the reviewers @coatless @agisga, and of course to @pdebuyl for handling this. The code itself improved a lot from this review, which is quite impressive.

@whedon Wait for me, I'll come back next time :)

Cheers !
