
[REVIEW]: ReLax: Efficient and Scalable Recourse Explanation Benchmarking using JAX #6567

Closed
editorialbot opened this issue Apr 1, 2024 · 65 comments
Labels: accepted, Jupyter Notebook, published (Papers published in JOSS), Python, recommend-accept (Papers recommended for acceptance in JOSS), review, TeX, Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

editorialbot commented Apr 1, 2024

Submitting author: @BirkhoffG (Hangzhi Guo)
Repository: https://github.com/BirkhoffG/jax-relax/
Branch with paper.md (empty if default branch): joss
Version: v0.2.8
Editor: @Fei-Tao
Reviewers: @GarrettMerz, @duhd1993
Archive: 10.5281/zenodo.13957805

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/ed8b635440789ebe1ff496021455df1e"><img src="https://joss.theoj.org/papers/ed8b635440789ebe1ff496021455df1e/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/ed8b635440789ebe1ff496021455df1e/status.svg)](https://joss.theoj.org/papers/ed8b635440789ebe1ff496021455df1e)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@GarrettMerz & @duhd1993, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. For any questions/concerns, please let @Fei-Tao know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Checklists

📝 Checklist for @GarrettMerz

📝 Checklist for @duhd1993

@editorialbot added the Jupyter Notebook, Python, review, TeX, Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning, and waitlisted (Submissions in the JOSS backlog due to reduced service mode) labels Apr 1, 2024
@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.90  T=0.11 s (787.5 files/s, 230925.7 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          38           1009            734           5796
Jupyter Notebook                30              0           9144           5784
TeX                              1            171             13           1277
YAML                             8             19              8            228
Markdown                         3             88              0            223
CSS                              1             10              6             58
INI                              1              1              0             37
Sass                             1              3              4             17
SVG                              1              0              0              1
-------------------------------------------------------------------------------
SUM:                            84           1301           9909          13421
-------------------------------------------------------------------------------

Commit count by author:

   317	BirkhoffG
    59	Xinchang Xiong
    32	Hangzhi Guo
     6	Birkhoffg
     5	Firdaus Choudhury
     2	Praneyg
     1	Amulya Yadav
     1	root

@editorialbot

Paper file info:

📄 Wordcount for paper.md is 1177

✅ The paper includes a Statement of need section

@editorialbot

License info:

✅ License found: Apache License 2.0 (Valid open source OSI approved license)

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

Fei-Tao commented Apr 1, 2024

Hi @GarrettMerz, @duhd1993, please generate your checklist at your convenience. Please let me know if you need any help. Thanks for your time.

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1145/3375627.3375812 is OK
- 10.1145/3351095.3375624 is OK
- 10.1145/3447548.3467333 is OK
- 10.1145/3375627.3375812 is OK
- 10.1145/3580305.3599290 is OK
- 10.1109/CVPR.2009.5206848 is OK
- 10.18653/v1/N19-1423 is OK

MISSING DOIs

- No DOI given, and none found for title: Truncated back-propagation for bilevel optimizatio...
- 10.2139/ssrn.3254511 may be a valid DOI for title: The European Union general data protection regulat...
- No DOI given, and none found for title: Min-Max Bilevel Multi-objective Optimization with ...
- No DOI given, and none found for title: Gradient-based hyperparameter optimization through...
- No DOI given, and none found for title: UCI machine learning repository
- No DOI given, and none found for title: Using data mining to predict secondary school stud...
- No DOI given, and none found for title: Explaining and harnessing adversarial examples
- No DOI given, and none found for title: Hidden trigger backdoor attacks
- 10.1109/access.2019.2909068 may be a valid DOI for title: Badnets: Evaluating backdooring attacks on deep ne...
- No DOI given, and none found for title: Metapoison: Practical general-purpose clean-label ...
- No DOI given, and none found for title: Witches’ Brew: Industrial Scale Data Poisoning via...
- No DOI given, and none found for title: What Doesn’t Kill You Makes You Robust (er): Adver...
- No DOI given, and none found for title: On the Effectiveness of Adversarial Training again...
- 10.1609/aaai.v36i6.20571 may be a valid DOI for title: Efficient robust training via backward smoothing
- No DOI given, and none found for title: Towards deep learning models resistant to adversar...
- No DOI given, and none found for title: Adversarial Examples Are Not Bugs, They Are Featur...
- 10.24963/ijcai.2018/520 may be a valid DOI for title: Curriculum adversarial training
- No DOI given, and none found for title: On the Convergence and Robustness of Adversarial T...
- No DOI given, and none found for title: Adversarial training for free!
- No DOI given, and none found for title: Fast is better than free: Revisiting adversarial t...
- No DOI given, and none found for title: Overfitting in adversarially robust deep learning
- No DOI given, and none found for title: Fooling lime and shap: Adversarial attacks on post...
- No DOI given, and none found for title: Model-agnostic meta-learning for fast adaptation o...
- 10.1109/access.2021.3051315 may be a valid DOI for title: A survey of contrastive and counterfactual explana...
- No DOI given, and none found for title: Counterfactual explanations can be manipulated
- No DOI given, and none found for title: Algorithmic Recourse in the Face of Noisy Human Re...
- No DOI given, and none found for title: On counterfactual explanations under predictive mu...
- 10.1145/3603195.3603204 may be a valid DOI for title: The hidden assumptions behind counterfactual expla...
- No DOI given, and none found for title: Towards Robust and Reliable Algorithmic Recourse
- No DOI given, and none found for title: Algorithmic Recourse in the Wild: Understanding th...
- 10.1109/dsaa.2018.00018 may be a valid DOI for title: Explaining explanations: An overview of interpreta...
- No DOI given, and none found for title: Definitions, methods, and applications in interpre...
- No DOI given, and none found for title: Identifying Homeless Youth at-risk of Substance Us...
- No DOI given, and none found for title: Interpretable Machine Learning
- No DOI given, and none found for title: Trade-Offs between Fairness and Interpretability i...
- No DOI given, and none found for title: UCI repository of machine learning databases
- No DOI given, and none found for title: Resistance to medical artificial intelligence
- 10.3386/w23180 may be a valid DOI for title: Human decisions and machine predictions
- 10.1145/3236386.3241340 may be a valid DOI for title: The mythos of model interpretability
- 10.1038/s42256-019-0048-x may be a valid DOI for title: Stop explaining black box machine learning models ...
- 10.1145/2487575.2487579 may be a valid DOI for title: Accurate intelligible models with pairwise interac...
- No DOI given, and none found for title: Intelligible models for healthcare: Predicting pne...
- 10.1214/15-aoas848 may be a valid DOI for title: Interpretable classifiers using rules and bayesian...
- No DOI given, and none found for title: Interpretable decision sets: A joint framework for...
- No DOI given, and none found for title: Distill-and-compare: Auditing black-box models usi...
- No DOI given, and none found for title: " Why should I trust you?" Explaining the predicti...
- No DOI given, and none found for title: A unified approach to interpreting model predictio...
- No DOI given, and none found for title: Interpretability beyond feature attribution: Quant...
- No DOI given, and none found for title: Understanding black-box predictions via influence ...
- 10.1145/3306618.3314229 may be a valid DOI for title: Faithful and customizable explanations of black bo...
- 10.2139/ssrn.3063289 may be a valid DOI for title: Counterfactual explanations without opening the bl...
- 10.1093/bioinformatics/btq134 may be a valid DOI for title: Permutation importance: a corrected feature import...
- 10.1609/aaai.v32i1.11771 may be a valid DOI for title: Deep Learning for Case-Based Reasoning Through Pro...
- No DOI given, and none found for title: Towards robust interpretability with self-explaini...
- No DOI given, and none found for title: How do humans understand explanations from machine...
- 10.1145/3313831.3376219 may be a valid DOI for title: Interpreting Interpretability: Understanding Data ...
- No DOI given, and none found for title: Explaining models: an empirical study of how expla...
- 10.1145/3290605.3300234 may be a valid DOI for title: Human-centered tools for coping with imperfect alg...
- 10.1145/3290605.3300831 may be a valid DOI for title: Designing theory-driven user-centric explainable A...
- No DOI given, and none found for title: Interacting with predictions: Visual inspection of...
- 10.1609/hcomp.v7i1.5280 may be a valid DOI for title: Human evaluation of models built for interpretabil...
- No DOI given, and none found for title: To explain or to predict?
- 10.1016/j.artint.2018.07.007 may be a valid DOI for title: Explanation in artificial intelligence: Insights f...
- No DOI given, and none found for title: Examples are not enough, learn to criticize! criti...
- No DOI given, and none found for title: ’It’s Reducing a Human Being to a Percentage’ Perc...
- 10.1609/aaai.v33i01.33013681 may be a valid DOI for title: Interpretation of neural networks is fragile
- 10.1145/3287560.3287574 may be a valid DOI for title: Explaining explanations in AI
- No DOI given, and none found for title: Genesim: genetic extraction of a single, interpret...
- 10.1609/aaai.v34i01.5427 may be a valid DOI for title: AdaCare: Explainable Clinical Health Status Repres...
- No DOI given, and none found for title: A case study of algorithm-assisted decision making...
- No DOI given, and none found for title: Using Social Networks to Aid Homeless Shelters: Dy...
- No DOI given, and none found for title: Using artificial intelligence to augment network-b...
- No DOI given, and none found for title: Artificial Intelligence and Social Work
- 10.1007/978-3-030-81907-1_9 may be a valid DOI for title: How to Design AI for Social Good: Seven Essential ...
- 10.1609/aaai.v30i2.19070 may be a valid DOI for title: Deploying PAWS: Field Optimization of the Protecti...
- 10.1145/2783258.2788620 may be a valid DOI for title: A machine learning framework to identify students ...
- 10.1145/3351095.3372850 may be a valid DOI for title: Explaining machine learning classifiers through di...
- No DOI given, and none found for title: Multi-Objective Counterfactual Explanations
- No DOI given, and none found for title: Item-based collaborative filtering recommendation ...
- No DOI given, and none found for title: Distance metric learning for large margin nearest ...
- No DOI given, and none found for title: Case-based explanation of non-case-based learning ...
- 10.1109/cvpr.2019.00880 may be a valid DOI for title: Learning to explain with complemental examples
- No DOI given, and none found for title: This looks like that: deep learning for interpreta...
- No DOI given, and none found for title: Can we do better explanations? A proposal of user-...
- 10.24963/ijcai.2018/761 may be a valid DOI for title: Bridging the Gap Between Theory and Practice in In...
- No DOI given, and none found for title: Explanation Systems for Influence Maximization Alg...
- No DOI given, and none found for title: Personalized explanation in machine learning: A co...
- 10.1007/978-3-030-56485-8_3 may be a valid DOI for title: Random forests
- No DOI given, and none found for title: Robust and stable black box explanations
- 10.1145/1772690.1772758 may be a valid DOI for title: A contextual-bandit approach to personalized news ...
- 10.1016/j.knosys.2011.07.021 may be a valid DOI for title: A collaborative filtering approach to mitigate the...
- No DOI given, and none found for title: Artificial intelligence for social good: A survey
- 10.1145/170036.170072 may be a valid DOI for title: Mining association rules between sets of items in ...
- No DOI given, and none found for title: The eu general data protection regulation (gdpr)
- 10.1007/978-3-030-86520-7_40 may be a valid DOI for title: Interpretable counterfactual explanations guided b...
- No DOI given, and none found for title: Preserving causal constraints in counterfactual ex...
- 10.1609/aaai.v32i1.11491 may be a valid DOI for title: Anchors: High-Precision Model-Agnostic Explanation...
- No DOI given, and none found for title: Scikit-learn: Machine learning in Python
- 10.1109/cvpr.2016.90 may be a valid DOI for title: Deep residual learning for image recognition
- No DOI given, and none found for title: Empirical evaluation of rectified activations in c...
- No DOI given, and none found for title: Batch normalization: Accelerating deep network tra...
- No DOI given, and none found for title: Improved techniques for training gans
- 10.7551/mitpress/10761.003.0012 may be a valid DOI for title: Adversarial Perturbations of Deep Neural Networks
- 10.1016/j.eswa.2007.12.020 may be a valid DOI for title: The comparisons of data mining techniques for the ...
- No DOI given, and none found for title: UCI Machine Learning Repository: Adult Data Set
- No DOI given, and none found for title: Explainable Machine Learning Challenge
- No DOI given, and none found for title: Titanic - Machine Learning from Disaster
- No DOI given, and none found for title: When does label smoothing help?
- No DOI given, and none found for title: US Law School Disclosures to the ABA: Standard 509...
- No DOI given, and none found for title: Open university learning analytics dataset
- No DOI given, and none found for title: Dropout: a simple way to prevent neural networks f...
- No DOI given, and none found for title: Sparse autoencoder
- 10.1109/iccv.2015.123 may be a valid DOI for title: Delving Deep into Rectifiers: Surpassing Human-Lev...
- No DOI given, and none found for title: Generative Adversarial Nets
- 10.1145/3287560.3287566 may be a valid DOI for title: Actionable recourse in linear classification
- No DOI given, and none found for title: The inverse classification problem
- No DOI given, and none found for title: Explanations Based on the Missing: Towards Contras...
- No DOI given, and none found for title: Counterfactual Explanations for Machine Learning: ...
- No DOI given, and none found for title: A survey of algorithmic recourse: definitions, for...
- 10.1017/s0269888921000102 may be a valid DOI for title: Contrastive explanation: A structural-model approa...
- 10.1145/3366423.3380087 may be a valid DOI for title: Learning model-agnostic counterfactual explanation...
- No DOI given, and none found for title: Algorithmic recourse: from counterfactual explanat...
- 10.1145/3583780.3615040 may be a valid DOI for title: RoCourseNet: Robust Training of a Prediction Aware...
- 10.1080/10691898.2018.1434342 may be a valid DOI for title: “Should This Loan be Approved or Denied?”: A Large...
- No DOI given, and none found for title: UCI Machine Learning Repository
- No DOI given, and none found for title: Model agnostic contrastive explanations for struct...
- 10.1109/test.2018.8624792 may be a valid DOI for title: Influence-directed explanations for deep convoluti...
- No DOI given, and none found for title: How important is a neuron?
- No DOI given, and none found for title: Generalized Inner Loop Meta-Learning
- No DOI given, and none found for title: Compiling machine learning programs via high-level...
- No DOI given, and none found for title: JAX: composable transformations of Python+NumPy pr...
- No DOI given, and none found for title: Haiku: Sonnet for JAX
- No DOI given, and none found for title: Alibi Explain: Algorithms for Explaining Machine L...
- No DOI given, and none found for title: CARLA: A Python Library to Benchmark Algorithmic R...
- 10.1145/3065386 may be a valid DOI for title: ImageNet Classification with Deep Convolutional Ne...
- No DOI given, and none found for title: Language models are few-shot learners
- No DOI given, and none found for title: Training language models to follow instructions wi...
- No DOI given, and none found for title: Spambase data set
- 10.1007/s10115-007-0095-1 may be a valid DOI for title: Forecasting skewed biased stochastic ozone days: a...
- 10.1021/ci4000213 may be a valid DOI for title: Quantitative structure–activity relationship model...
- No DOI given, and none found for title: Predicting a Biological Response
- No DOI given, and none found for title: Telco Customer Churn
- No DOI given, and none found for title: Road Safety Data
- No DOI given, and none found for title: Retiring Adult: New Datasets for Fair Machine Lear...
- No DOI given, and none found for title: Getting a {CLUE}: A  Method for Explaining Uncerta...
- No DOI given, and none found for title: Towards realistic individual recourse and actionab...
- No DOI given, and none found for title: CounteRGAN: Generating counterfactuals for real-ti...
- 10.1145/3534678.3539065 may be a valid DOI for title: Rax: Composable Learning-to-Rank using JAX
- 10.1109/cvpr52688.2022.02070 may be a valid DOI for title: Scenic: A JAX library for computer vision research...
- No DOI given, and none found for title: Enabling fast differentially private sgd via just-...
- No DOI given, and none found for title: The DeepMind JAX Ecosystem
- No DOI given, and none found for title: On Neural Differential Equations
- No DOI given, and none found for title: Composable Effects for Flexible and Accelerated Pr...
- No DOI given, and none found for title: Inverse classification for comparison-based interp...
- No DOI given, and none found for title: CounterfactualExplanations.jl - a Julia package fo...
- 10.1145/3580305.3599343 may be a valid DOI for title: Feature-based Learning for Diverse and Privacy-Pre...

INVALID DOIs

- None

GarrettMerz commented Apr 11, 2024

Review checklist for @GarrettMerz

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/BirkhoffG/jax-relax/?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@BirkhoffG) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Fei-Tao commented Apr 19, 2024

Hi @duhd1993, can you generate your checklist at your convenience? Thanks for your time.

duhd1993 commented Apr 19, 2024

Review checklist for @duhd1993

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/BirkhoffG/jax-relax/?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@BirkhoffG) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@crvernon
@Fei-Tao will you please check in on this review when you have a moment to make sure things keep moving? Thanks!

GarrettMerz commented May 20, 2024

Apologies for the delay on this! Here is the current status of my review. I've checked what I can, but left some checkboxes open for now:

- The "installation instructions" are incomplete. I'd like some more clarity on how and when to install JAX (the repo says to install jax-relax before the GPU version of JAX; is this correct?). Is this something that can be clarified? Additionally, several required packages are not listed: fastcore, nbdev, ipywidgets>=7.0.

- Much of the documentation is located on the linked doc page rather than in the repository. This is fine, especially for things like comprehensive API documentation, but the "community guidelines" section (or a link to it) should be in the main repository.

- The repository does not have automated tests. However, there is a comprehensive tutorial that describes how to use the software. The tutorial is somewhat spread across the repo: the steps in the README give a way to test a sample classifier, but the 'Getting Started' section refers to the tutorial notebooks. If possible, this should be centralized and streamlined; if I can verify that everything runs out of the box by clicking through one or more notebooks, that's ideal.

- The datasets needed to reproduce the plots in the paper are not provided in the repository, but are linked in the paper. When links are provided, as with the 'adult' dataset mentioned in the tutorials notebook, the available data is not formatted as the code requires. I'd like to reproduce the benchmarks in the paper, but it's unclear what I need to do to obtain the data.

Fei-Tao commented May 21, 2024

Hi all, I was traveling for the past two weeks and had limited access to my laptop, so apologies for the late reply.
@GarrettMerz, thanks for your testing and constructive comments; they will be very helpful for the authors in improving this submission.
@BirkhoffG, would you please address the issues above at your convenience?
@duhd1993, would you please start your review at your convenience? Thanks in advance for your time.

@BirkhoffG
@GarrettMerz Thank you very much for your constructive feedback and suggestions. Below, we clarify your concerns about our library:

> The "installation instructions" are incomplete. I'd like some more clarity on how and when to install JAX (the repo says to install jax-relax before the GPU version of JAX; is this correct?). Is this something that can be clarified? Additionally, several required packages are not listed: fastcore, nbdev, ipywidgets>=7.0.

Our latest installation guidance clarifies how to install JAX with GPU support (see the latest Installation section). Essentially, we advise installing jax-relax first (i.e., pip install jax-relax), then installing the GPU version of JAX by following JAX's official installation instructions.

Additionally, the packages you mention (including fastcore and nbdev) are not required to use jax-relax as an end user. For example, you do not need them to run the code examples in the getting started section. If you are a contributor to jax-relax, however, you do need these packages; install them via pip install "jax-relax[dev]", which automatically installs jax-relax plus all packages required for development. We have updated the docs to reflect these requirements.
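
Put together, the installation sequence described above might look like the following; note that the GPU extra shown ("jax[cuda12]") is only an illustrative example, since the exact spec depends on your CUDA setup and should be taken from JAX's official install guide:

```shell
# End users: install the library itself first (a CPU build of JAX comes as a dependency)
pip install jax-relax

# Then, optionally, swap in a GPU build of JAX per the official JAX install guide.
# "jax[cuda12]" is shown as an example; pick the extra matching your CUDA setup.
pip install -U "jax[cuda12]"

# Contributors: install the dev extras instead, which pull in fastcore, nbdev, etc.
pip install "jax-relax[dev]"
```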

> Much of the documentation is located on the linked doc page rather than in the repository. This is fine, especially for things like comprehensive API documentation, but the "community guidelines" section (or a link to it) should be in the main repository.

All of the documentation is written in Jupyter notebooks, which are automatically converted into the documentation web pages. Those notebooks live in the GitHub repository; for example, the "community guidelines" section is located at https://github.com/BirkhoffG/jax-relax/blob/master/nbs/tutorials/contribution.ipynb.

> The repository does not have automated tests. However, there is a comprehensive tutorial that describes how to use the software. The tutorial is somewhat spread across the repo: the steps in the README give a way to test a sample classifier, but the 'Getting Started' section refers to the tutorial notebooks. If possible, this should be centralized and streamlined; if I can verify that everything runs out of the box by clicking through one or more notebooks, that's ideal.

Our repository does contain automated tests: they run via nbdev_test, which is invoked in continuous integration (CI) through GitHub Actions. See https://github.com/BirkhoffG/jax-relax/blob/c722a81bbbd3012bcd46a9154529303302c604e5/.github/workflows/test.yaml#L85-L86

All test cases live in the Jupyter notebooks themselves. Essentially, every code block without a directive is treated as a test case (directives are special comments starting with #|; e.g., #| export is the most common directive. See this). Refer to this section for how we write unit tests.
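As a self-contained illustration of that convention (the function below is made up for this sketch; the real notebooks live under nbs/ in the repository):

```python
# A notebook cell tagged with the `#| export` directive becomes library code:
#| export
def scale(x, factor=2):
    """Multiply `x` by `factor` (hypothetical example function)."""
    return x * factor

# A cell with no directive is treated as a test case by `nbdev_test`;
# plain assertions like these are what run in CI:
assert scale(3) == 6
assert scale(3, factor=10) == 30
```

So the exported code and its tests sit side by side in the same notebook, and nbdev separates them automatically.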

In terms of the tutorials, the getting started guide and ReLax as a Recourse Library are intended to be a superset of the content covered in the README, i.e., these two tutorials should cover everything explained in the Dive into ReLax section. If we missed anything, please let us know and we will update the tutorials.

-The datasets needed to reproduce the plots in the paper are not provided in the repository but are linked in the paper. When links are provided, as with the 'adult' dataset mentioned in the tutorials notebook, the available data is not formatted as the code expects. I'd like to reproduce the benchmarks in the paper, but it's unclear what I need to do to obtain the data.

Due to concerns about GitHub's file size limits, we chose to host the datasets and models on Hugging Face. All datasets are located at https://huggingface.co/datasets/birkhoffg/ReLax-Assets/tree/main/{data_name}/data/data.csv (replace {data_name} with the actual dataset name, such as "adult").

We provide an easy-to-use API that automatically loads the data modules via load_data:

import relax

data = relax.load_data('adult')

To reproduce the results in the paper, you can run

python -m benchmarks.built-in.run_all
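For reference, the Hugging Face path template quoted above can be expanded mechanically; the helper below is a hypothetical illustration (relax.load_data handles the download for you, so building URLs by hand is optional):

```python
# Sketch: expanding the {data_name} placeholder in the Hugging Face path
# template quoted above. `dataset_url` is a made-up helper for illustration;
# it is not part of the relax API.
URL_TEMPLATE = (
    "https://huggingface.co/datasets/birkhoffg/ReLax-Assets/"
    "tree/main/{data_name}/data/data.csv"
)

def dataset_url(data_name: str) -> str:
    """Fill in the {data_name} placeholder for a given dataset."""
    return URL_TEMPLATE.format(data_name=data_name)

print(dataset_url("adult"))
# -> https://huggingface.co/datasets/birkhoffg/ReLax-Assets/tree/main/adult/data/data.csv
```

Note that the tree/main form opens the repository file viewer in a browser; for a direct raw-file download, Hugging Face repositories typically use resolve/main in place of tree/main.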

cc @Fei-Tao

@crvernon crvernon removed the waitlisted Submissions in the JOSS backlog due to reduced service mode. label Jun 12, 2024
@GarrettMerz

@BirkhoffG Thanks for your response! Given all this, I've completed the checklist; thanks for the additional clarity. This is a great paper and codebase, looking forward to seeing it published :)

@Fei-Tao

Fei-Tao commented Jul 1, 2024

Hi @duhd1993 , can you finish your review at your convenience? Thanks for your time.

@duhd1993

duhd1993 commented Jul 1, 2024

@Fei-Tao No problem. I will finish it this week. Sorry for the delay.

@Fei-Tao

Fei-Tao commented Jul 1, 2024

@duhd1993 no problem. Thanks for your willingness to help improve this submission.

@BirkhoffG

Hi @duhd1993, is there anything else we can help clarify about this repo?

@duhd1993

duhd1993 commented Aug 9, 2024

Sorry for the delay. I have reviewed the paper and the GitHub repo. @BirkhoffG

The authors developed ReLax, a JAX-based library for efficient and scalable benchmarking of recourse and counterfactual explanation methods for machine learning models. It implements 9 popular recourse methods and includes 14 medium-sized datasets and 1 large-scale dataset for benchmarking. ReLax achieves significant performance improvements.

There are a few issues that need improvement or clarification:

  • The installation instructions are incomplete. When I tried to reproduce the results, I hit errors for the missing libraries nbdev and fastcore. I think you need to write a requirements.txt and test from a clean environment.
  • Figure 3 demonstrates the efficiency on a large-scale dataset. But how do you ensure the quality and correctness of the results given the lack of baselines at that scale?
  • Is there a review of the most popular recourse methods? Why did you choose the existing ones? How easy is it to implement new recourse methods in ReLax? Is there a standardized interface?
  • I tried to run the benchmark script and got this error: ModuleNotFoundError: No module named 'relax.module'

@BirkhoffG

@duhd1993 Thank you very much for your constructive feedback and suggestions. Below, we clarify your concerns about our library:

The installation instructions are incomplete. When I tried to reproduce the results, I hit errors for the missing libraries nbdev and fastcore. I think you need to write a requirements.txt and test from a clean environment.

As we noted here, we deliberately exclude these packages (including fastcore and nbdev) from the normal installation because they are not required for using jax-relax as an end user. For example, you do not need them to run the code examples in the getting started section. They are only required if you are a contributor/developer of jax-relax; in that case, install them via pip install "jax-relax[dev]".

We have updated the installation docs to reflect those requirements. The dependencies are specified in settings.ini.

Figure 3 demonstrates the efficiency on a large-scale dataset. But how do you ensure the quality and correctness of the results given the lack of baselines at that scale?

You are right that existing libraries/implementations cannot scale, and this is the main limitation we aim to address in jax-relax. We implement each recourse method as faithfully as possible to its original paper and official code (where provided). We evaluate the quality of the recourse methods on the medium-sized datasets, and then use these implementations (validated at medium scale) to test how the methods behave at a larger scale, where jax-relax has no point of comparison simply because no other implementation of recourse methods scales to such large datasets.

Our experiments show a similar pattern between the results achieved by the jax-relax implementations of recourse methods on the medium-sized datasets and on the large-scale dataset (see this figure). We will update this result in the final paper.

Is there a review of the most popular recourse methods? Why did you choose the existing ones?

A large number of recourse methods have been proposed by the research community. The surveys [1, 2, 3] provide a comprehensive review of the various problems and methods for algorithmic recourse. Based on the use of parametric models, we categorize existing methods into non-parametric, semi-parametric, and parametric methods, and we select 3 representative methods from each category. Additionally, given the rapid progress in this field, it is impractical to incorporate every existing recourse method, and we welcome the research community to contribute new recourse methods to jax-relax.

How easy is it to implement new recourse methods in ReLax? Is there a standardized interface?

Yes, you can easily implement new recourse methods and integrate them into the jax-relax pipeline. Essentially, a new recourse method only needs to inherit from the CFModule class. Check out our tutorial on implementing your own recourse methods.
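To illustrate the idea of a standardized interface, here is a generic, self-contained sketch of the pattern (the method name generate_cf, its signature, and the IdentityRecourse subclass are assumptions for illustration only; consult the tutorial for the actual CFModule API):

```python
from abc import ABC, abstractmethod

# Generic sketch of the plugin pattern described above. The real CFModule
# lives in relax; everything below is illustrative, not the library's API.
class CFModule(ABC):
    @abstractmethod
    def generate_cf(self, x, pred_fn):
        """Return a counterfactual for input `x`, given a prediction fn."""

class IdentityRecourse(CFModule):
    """A trivial custom method: returns the input unchanged."""
    def generate_cf(self, x, pred_fn):
        return x

method = IdentityRecourse()
assert method.generate_cf([1.0, 2.0], pred_fn=None) == [1.0, 2.0]
```

Because every method implements the same abstract interface, a benchmarking pipeline can loop over a list of such objects and call generate_cf uniformly; new methods plug in without any change to the pipeline.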

I tried to run the benchmark script and got this error: ModuleNotFoundError: No module named 'relax.module'

We have added a README with the requirements and instructions for running the benchmark scripts.

Reference

[1] https://arxiv.org/abs/2010.10596
[2] https://ieeexplore.ieee.org/abstract/document/9321372
[3] https://arxiv.org/abs/2010.04050

@Fei-Tao

Fei-Tao commented Oct 22, 2024

Hi @BirkhoffG, the article and references look good to me now.
Would you please make a new release of the software with the latest changes from the review and post the version number here? Thanks for your time.

@BirkhoffG

Hi @Fei-Tao
Our latest release addresses all the comments from the reviewers. The version number is v0.2.8.

@Fei-Tao

Fei-Tao commented Oct 22, 2024

@editorialbot set v0.2.8 as archive

@editorialbot

Done! archive is now v0.2.8

@Fei-Tao

Fei-Tao commented Oct 22, 2024

@editorialbot set 10.5281/zenodo.13957805 as archive

@editorialbot

Done! archive is now 10.5281/zenodo.13957805

@Fei-Tao

Fei-Tao commented Oct 22, 2024

@editorialbot set v0.2.8 as version

@editorialbot

Done! version is now v0.2.8

@Fei-Tao

Fei-Tao commented Oct 22, 2024

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.1145/3287560.3287566 is OK
- 10.2139/ssrn.3063289 is OK
- 10.1109/ACCESS.2021.3051315 is OK
- 10.1145/3527848 is OK
- 10.1145/3173574.3173951 is OK
- 10.1016/J.ARTINT.2018.07.007 is OK
- 10.1145/3351095.3375624 is OK
- 10.1145/3351095.3372850 is OK
- 10.1145/3580305.3599343 is OK
- 10.1145/3580305.3599290 is OK
- 10.1145/3583780.3615040 is OK
- 10.1007/978-3-030-86520-7_40 is OK
- 10.1145/3366423.3380087 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Counterfactual Explanations for Machine Learning: ...
- No DOI given, and none found for title: Towards robust and reliable algorithmic recourse
- No DOI given, and none found for title: CARLA: A Python Library to Benchmark Algorithmic R...
- No DOI given, and none found for title: Alibi Explain: Algorithms for Explaining Machine L...
- No DOI given, and none found for title: UCI Machine Learning Repository: Adult Data Set
- No DOI given, and none found for title: JAX: composable transformations of Python+NumPy pr...
- No DOI given, and none found for title: Compiling machine learning programs via high-level...
- No DOI given, and none found for title: Inverse classification for comparison-based interp...
- No DOI given, and none found for title: Getting a {CLUE}: A  Method for Explaining Uncerta...
- No DOI given, and none found for title: Preserving causal constraints in counterfactual ex...
- No DOI given, and none found for title: Retiring adult: new datasets for fair machine lear...

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot

👋 @openjournals/dsais-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#6035, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label Oct 22, 2024
@crvernon

crvernon commented Oct 23, 2024

@editorialbot generate pdf

🔍 checking out the following:

  • reviewer checklists are completed or addressed
  • version set
  • archive set
  • archive names (including order) and title in archive matches those specified in the paper
  • archive uses the same license as the repo and is OSI approved as open source
  • archive DOI and version match or redirect to those set by editor in review thread
  • paper is error free - grammar and typos
  • paper is error free - test links in the paper and bib
  • paper is error free - refs preserve capitalization where necessary
  • paper is error free - no invalid refs without justification

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@crvernon

crvernon commented Nov 12, 2024

👋 @BirkhoffG - I just need you to address the following before I move to accept this submission for publication:

  • LINE 112: Curly brackets are showing up once rendered around "CLUE". In your bib file, you have: {Getting a {\{}CLUE{\}}: A Method for Explaining Uncertainty Estimates} which can be reduced to {Getting a {CLUE}: A Method for Explaining Uncertainty Estimates} to fix this.
  • LINE 156: The "p" in "python" should be capitalized. You can just wrap the "P" in your bib file with curly brackets like the previous bullet points out.

Let me know when these have been taken care of. Thanks.

@BirkhoffG

Hi @crvernon - I have fixed these two issues. Thank you for your help in this submission.

@crvernon

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@crvernon

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Guo
  given-names: Hangzhi
  orcid: "https://orcid.org/0009-0000-6277-9003"
- family-names: Xiong
  given-names: Xinchang
- family-names: Zhang
  given-names: Wenbo
- family-names: Yadav
  given-names: Amulya
  orcid: "https://orcid.org/0009-0005-4638-9140"
contact:
- family-names: Guo
  given-names: Hangzhi
  orcid: "https://orcid.org/0009-0000-6277-9003"
doi: 10.5281/zenodo.13957805
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Guo
    given-names: Hangzhi
    orcid: "https://orcid.org/0009-0000-6277-9003"
  - family-names: Xiong
    given-names: Xinchang
  - family-names: Zhang
    given-names: Wenbo
  - family-names: Yadav
    given-names: Amulya
    orcid: "https://orcid.org/0009-0005-4638-9140"
  date-published: 2024-11-12
  doi: 10.21105/joss.06567
  issn: 2475-9066
  issue: 103
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 6567
  title: "ReLax: Efficient and Scalable Recourse Explanation
    Benchmarking using JAX"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.06567"
  volume: 9
title: "ReLax: Efficient and Scalable Recourse Explanation Benchmarking
  using JAX"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🦋🦋🦋 👉 Bluesky post for this paper 👈 🦋🦋🦋

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.06567 joss-papers#6124
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.06567
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot editorialbot added accepted published Papers published in JOSS labels Nov 12, 2024
@crvernon

🥳 Congratulations on your new publication @BirkhoffG! Many thanks to @Fei-Tao for editing and @GarrettMerz and @duhd1993 for your time, hard work, and expertise!! JOSS wouldn't be able to function nor succeed without your efforts.

Please consider becoming a reviewer for JOSS if you are not already: https://reviewers.joss.theoj.org/join

@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README, use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.06567/status.svg)](https://doi.org/10.21105/joss.06567)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.06567">
  <img src="https://joss.theoj.org/papers/10.21105/joss.06567/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.06567/status.svg
   :target: https://doi.org/10.21105/joss.06567

This is how the DOI badge will look in your documentation.

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

6 participants