Commit
joss rev 7
enricgrau committed Oct 31, 2023
1 parent 0140b63 commit 1fb3fcd
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion paper/paper.md
@@ -41,7 +41,7 @@ Even though there are methods and libraries available for explaining different t

# Overview

-**pudu** is a Python library that helps to make sense of ML models for spectroscopic data by quantifying changes in spectral features and explaining their effect to the target instances. In other words, it perturbates the features in a predictable and deliberate way and evaluates the features based on how the final prediction changes. For this, four main methods are included and defined. **Importance** quantifies the relevance of the features according to the changes in the prediction. Thus, this is measured in probability or target value difference for classification or regression problems, respectively. **Speed** quantifies how fast a prediction changes according to perturbations in the features. For this, the Importance is calculated at different perturbation levels, and a line is fitted to the obtained values and the slope, or the rate of change of Importance, is extracted as the Speed. **Synergy** indicates how features complement each other in terms of prediction change after perturbations. Finally, **Re-activations** account for the number of unit activations in a Convolutional Neural Network (CNN) that after perturbation, the value goes above the original activation criteria. The latter is only applicable for CNNs, but the rest can be applied to any other ML problem, including CNNs. To read in more detail how these techniques work, please refer to the [definitions](https://pudu-py.github.io/pudu/definitions.html) in the documentation.
+**pudu** is a Python library that quantifies the effect of changes in spectral features on the predictions of ML models for the target instances. In other words, it perturbs the features in a predictable and deliberate way and evaluates them based on how the final prediction changes. For this, four main methods are included and defined. **Importance** quantifies the relevance of the features according to the changes in the prediction; it is measured as the difference in probability or in target value for classification or regression problems, respectively. **Speed** quantifies how fast a prediction changes under perturbations of the features: the Importance is calculated at different perturbation levels, a line is fitted to the obtained values, and its slope, the rate of change of Importance, is extracted as the Speed. **Synergy** indicates how features complement each other in terms of prediction change after perturbations. Finally, **Re-activations** count the unit activations in a Convolutional Neural Network (CNN) whose values, after perturbation, exceed the original activation criterion. The latter is only applicable to CNNs, but the rest can be applied to any other ML problem, including CNNs. To read in more detail how these techniques work, please refer to the [definitions](https://pudu-py.github.io/pudu/definitions.html) in the documentation.
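The Importance and Speed definitions above can be sketched generically. This is an illustrative sketch with a toy linear model and hypothetical helper names (`importance`, `speed`), not pudu's actual API:

```python
import numpy as np

def importance(model, x, idx, delta=0.1):
    # Change in the prediction when feature `idx` is perturbed by a
    # relative amount `delta` (target-value difference for regression).
    x_pert = x.copy()
    x_pert[idx] *= (1 + delta)
    return model(x_pert) - model(x)

def speed(model, x, idx, deltas=(0.05, 0.10, 0.15, 0.20)):
    # Importance at several perturbation levels; the slope of a fitted
    # line is the rate of change of Importance, i.e. the Speed.
    imps = [importance(model, x, idx, d) for d in deltas]
    slope, _ = np.polyfit(deltas, imps, 1)
    return slope

# Toy regression model: prediction is a weighted sum of the features.
weights = np.array([0.5, 2.0, -1.0])
model = lambda x: float(x @ weights)

x = np.array([1.0, 1.0, 1.0])
print(importance(model, x, 1, 0.1))  # ~0.2: feature 1 scaled by 10%
print(speed(model, x, 1))            # ~2.0: linear model, constant rate
```

For a classifier, `model` would instead return the probability of the target class, so Importance becomes a probability difference, as described above.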

pudu is versatile: it can analyze classification and regression algorithms for both 1- and 2-dimensional problems, offers plenty of flexibility with parameters, and can provide localized explanations by selecting specific areas of interest. To illustrate this, \autoref{fig:figure1} shows two analysis instances using the same `importance` method but with different parameters. Additionally, examples of its other functionalities using scikit-learn [@Pedregosa2011], keras [@chollet2018keras], and localreg [@Marholm2022] are found in the documentation, along with XAI methods including LIME and GradCAM.

