[REVIEW]: Autorank: A Python package for automated ranking of classifiers #2173
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @JonathanReardon, @ejhigson it looks like you're currently assigned to review this paper 🎉. ⭐ Important ⭐ If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿 To fix this do the following two things:
For a list of things I can do to help you, just type:
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@JonathanReardon, @ejhigson - please carry out your review in this issue by updating the checklist above and giving feedback in this issue. The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html Any questions/concerns please let me know.
Hi @sherbold - this is a really nice piece of software. I really like the idea and definitely see the practical usefulness! After installing your package and starting to play around with it, I noticed a few things that could be changed or fixed - I will comment on them below.
Upgrading scipy to 1.4.1 fixed the issue. Perhaps you can consider specifying minimum version requirements for your dependencies in setup.py? E.g. "scipy>=1.4" or whatever the appropriate version is.
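A minimal sketch of what such version pins might look like in setup.py; the dependency list and version numbers below are purely illustrative, not autorank's actual tested requirements:

```python
from setuptools import setup, find_packages

setup(
    name="autorank",
    packages=find_packages(),
    # Minimum versions are illustrative; pin to whatever versions
    # the package has actually been tested against.
    install_requires=[
        "scipy>=1.4",
        "numpy>=1.16",
        "pandas>=0.24",
        "matplotlib>=3.0",
    ],
)
```

With pins like these, `pip install autorank` will refuse to install alongside an older scipy instead of failing at runtime.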
I get the following error:
I see similar errors when I try running the module tests. You may also want to change the code in this cell at https://sherbold.github.io/autorank/#documentation to use the variable name "results" instead of "res" for consistency with earlier cells, and to add an "import matplotlib.pyplot as plt" statement. For reference, I am using Python 3.6.9 with these module versions:
My laptop is running Ubuntu 18, and my LaTeX version is:
@whedon generate pdf
@ejhigson Thanks for the good feedback. I fixed 1, 3, and 4. Unfortunately, neither arXiv nor JMLR has DOIs. Why this is the case for JMLR is a mystery to me. Regarding 2: I fixed the documentation. The docs are not updated yet, because GH Actions seem to be broken right now, though there is no incident report filed yet. However, I cannot replicate any test failures. I ran the tests under Windows and Ubuntu 18.04 and did not see anything. I am waiting for the GH Actions to become live again, since the tests are running there as well. Or do you see the error when you try to compile the generated LaTeX code?
The documentation is now also updated. I also did some maintenance of the GitHub Actions and documentation build, because there were breaking changes in the latest releases. I will release 1.0.1 once I have addressed all review comments. I can release this earlier if you want to try from scratch with the latest version from PyPI.
@sherbold I found the source of the problem. The error was in the creation of matplotlib figures, not the generated LaTeX. I had specified the matplotlib parameter text.usetex as True in my matplotlibrc, but it seems your example and tests require text.usetex = False to run. With this matplotlib setting everything works perfectly for me. You could perhaps consider adding a statement that checks the matplotlib rc parameter text.usetex is set to False and, if not, issues a UserWarning or similar to tell people they need to change it.
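One possible shape for such a check. It is written against a plain mapping so the sketch is testable without matplotlib installed; the function name and warning text are illustrative, not part of autorank:

```python
import warnings


def warn_if_usetex(rc_params):
    """Issue a UserWarning when matplotlib's text.usetex is enabled.

    Pass matplotlib.rcParams (or any mapping) as rc_params.
    """
    if rc_params.get("text.usetex", False):
        warnings.warn(
            "autorank's plotting requires text.usetex = False; "
            "please disable LaTeX text rendering in your matplotlib "
            "configuration.",
            UserWarning,
        )


# Usage (assuming matplotlib is installed):
#   import matplotlib
#   warn_if_usetex(matplotlib.rcParams)
```

Calling this at the top of the plotting functions would surface the misconfiguration before the figure creation fails with an opaque error.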
Here are a few other minor suggestions before I finish my review (sorry if any of the below already exist and I have missed them):
Thanks for the additional comments. I addressed them as follows:
@whedon generate pdf
@sherbold thank you for these changes! Everything looks great to me. Congratulations on making a really neat piece of software.
Hi @sherbold, what a useful tool, I can see this getting a lot of use. I've finished my review and I only have two minor (aesthetic) points: The 'print(result)' output [https://sherbold.github.io/autorank/] does not display accurately. For me the 'RankResult(rankdf=' displays on the same line as the column titles and so shifts all titles to the right, making the overall display less clear. The bottom part of the output does not display clearly either (example below):
I got this same output using a Jupyter Notebook, IPython3 (at the terminal), and VSCodium.
There is also a missing word in the 'output' example (https://sherbold.github.io/autorank/). In the last sentence I think it should be 'Based on the post-hoc Nemenyi test' rather than 'Based the post..'. |
Thank you @JonathanReardon . I have overwritten the namedtuple with a customized str function. The output now uses newlines and looks as follows:
I also fixed the typo with the Nemenyi test. I did not find the missing 'output'. Could you tell me exactly where to look? I seem to be typo-blind. |
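The override described above can be sketched roughly as follows; the field names here are a simplified, hypothetical subset (the real RankResult in autorank carries more fields, and its formatting may differ):

```python
from collections import namedtuple

# Illustrative subset of fields, not autorank's actual definition.
_RankResult = namedtuple("_RankResult", ["rankdf", "pvalue", "omnibus"])


class RankResult(_RankResult):
    def __str__(self):
        # One field per line instead of the namedtuple's single-line repr,
        # so wide DataFrames in rankdf do not shift the column titles.
        body = "\n".join(
            "{}=\n{}".format(name, value)
            for name, value in zip(self._fields, self)
        )
        return "RankResult(\n{}\n)".format(body)
```

Because `print()` calls `__str__`, the multi-line layout then shows up in notebooks and at the terminal alike, while `repr()` keeps the standard namedtuple form.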
Thank you for making these changes @sherbold. The missing word in the example 'create_report(result)' output was the part you already corrected, sorry for the slight lack of clarity there on my part :) My review is now complete, all boxes ticked. Congratulations on a great piece of software, I look forward to seeing it published and widely used. Cheers
Thank you both @ejhigson and @JonathanReardon for the fast reviews and the helpful comments that really improved the tool and documentation again. The changes were small, but the impact in terms of user experience should be large :) |
@sherbold - could you merge this fix to one of the DOIs? sherbold/autorank#2
@arfon done :)
@whedon generate pdf
@sherbold - At this point, could you make a new release of this software that includes the changes that have resulted from this review? Then, please make an archive of the software in Zenodo/figshare/other service and update this thread with the DOI of the archive. For the Zenodo/figshare archive, please make sure that:
I can then move forward with accepting the submission. |
Version 1.0.1 with all changes is released and available on PyPI: https://pypi.org/project/autorank/1.0.1/ There is also a long-term archive of 1.0.1 on Zenodo: @arfon Thank you for taking over and for the fast process. I also want to stress how much I like the approach taken by JOSS and look forward to contributing back to the journal at some point, when things get back to (more) normal :)
@whedon set 10.5281/zenodo.3751301 as archive |
OK. 10.5281/zenodo.3751301 is the archive. |
@whedon accept |
👋 @openjournals/joss-eics, this paper is ready to be accepted and published. Check final proof 👉 openjournals/joss-papers#1420 If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1420, then you can now move forward with accepting the submission by compiling again with the flag
@whedon accept deposit=true |
🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦
🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨 Here's what you must now do:
Any issues? Notify your editorial technical team...
@JonathanReardon, @ejhigson - many thanks for your reviews here ✨ @sherbold - your paper is now accepted into JOSS ⚡🚀💥
Sure thing! |
🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉 If you would like to include a link to your paper from your README use the following code snippets:
This is how it will look in your documentation: We need your help! Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
Submitting author: @sherbold (Steffen Herbold)
Repository: https://github.com/sherbold/autorank
Version: 1.0.0
Editor: @arfon
Reviewers: @JonathanReardon, @ejhigson
Archive: 10.5281/zenodo.3751301
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@JonathanReardon & @ejhigson, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @arfon know.
✨ Please try and complete your review in the next two weeks ✨
Review checklist for @JonathanReardon
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @ejhigson
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper