joss rev 4
enricgrau committed Oct 31, 2023
1 parent 55996ba commit 8c4c381
Showing 2 changed files with 4 additions and 5 deletions.
paper/paper.bib (3 changes: 1 addition & 2 deletions)
@@ -633,9 +633,8 @@ @article{Hollig2023

@article{Bhatt2020,
abstract = {Explainable machine learning offers the potential to provide stake-holders with insights into model behavior by using various methods such as feature importance scores, counterfactual explanations, or influential training data. Yet there is little understanding of how organizations use these methods in practice. This study explores how organizations view and use explainability for stakeholder consumption. We find that, currently, the majority of deployments are not for end users affected by the model but rather for machine learning engineers, who use explainability to debug the model itself. There is thus a gap between explainability in practice and the goal of transparency, since explanations primarily serve internal stakeholders rather than external ones. Our study synthesizes the limitations of current explainability techniques that hamper their use for end users. To facilitate end user interaction, we develop a framework for establishing clear goals for explainability. We end by discussing concerns raised regarding explainability. CCS CONCEPTS • Human-centered computing; • Computing methodologies → Philosophical/theoretical foundations of artificial intelligence ; Machine learning; KEYWORDS machine learning, explainability, transparency, deployed systems, qualitative study ACM Reference Format:},
-author = {Bhatt, Umang and Xiang, Alice and Sharma, Shubham and Weller, Adrian and Taly, Ankur and Jia, Yunhan and Ghosh, Joydeep and Puri, Ruchir and Moura, Jos{\'{e}} M F and Eckersley, Peter},
+author = {Bhatt, Umang and Xiang, Alice and Sharma, Shubham and Weller, Adrian and Taly, Ankur and Jia, Yunhan and Ghosh, Joydeep and Puri, Ruchir and Moura, José M F and Eckersley, Peter},
doi = {10.1145/3351095.3375624},
-file = {:C\:/Users/etgrau/AppData/Local/Mendeley Ltd./Mendeley Desktop/Downloaded/Bhatt et al. - 2020 - Explainable Machine Learning in Deployment.pdf:pdf},
isbn = {9781450369367},
keywords = {deployed systems,explainability,machine learning,qualitative study,transparency},
mendeley-groups = {pudu},
paper/paper.md (6 changes: 3 additions & 3 deletions)
@@ -34,9 +34,9 @@ bibliography: paper.bib

# Statement of need

-Spectroscopic techniques (e.g., Raman, photoluminescence, reflectance, transmittance, X-ray fluorescence) are an important and widely used resource in different fields of science, such as photovoltaics [@Fonoll-Rubio2022][@Grau-Luque2021], cancer [@Bellisola2012], superconductors [@Fischer2007], polymers [@Easton2020], corrosion [@Haruna2023], forensics [@Bhatt2023], and environmental sciences [@Estefany2023], to name just a few. This is due to their versatile, non-destructive, and fast acquisition, which provides access to a wide range of material properties, such as composition, morphology, molecular structure, and optical and electronic properties. As such, machine learning (ML) has been used to analyze spectral data for several years, elucidating their vast complexity and uncovering further potential in the information contained within them [@Goodacre2003][@Luo2022]. Unfortunately, most of these ML analyses lack further interpretation of the derived results due to the complex nature of such algorithms. In this regard, interpreting the results of ML algorithms has become an increasingly important topic, as concerns about the lack of interpretability of these models have grown [@Burkart2021]. In the natural sciences (such as materials science, physics, and chemistry), as ML becomes more common, this concern has gained special interest, since results obtained from ML analyses may lack scientific value if they cannot be properly interpreted, which can affect scientific consistency and strongly diminish the significance of, and confidence in, the results, particularly when tackling scientific problems [@Roscher2020].
+Spectroscopic techniques (e.g., Raman, photoluminescence, reflectance, transmittance, X-ray fluorescence) are an important and widely used resource in different fields of science, such as photovoltaics [@Fonoll-Rubio2022] [@Grau-Luque2021], cancer [@Bellisola2012], superconductors [@Fischer2007], polymers [@Easton2020], corrosion [@Haruna2023], forensics [@Bhatt2023], and environmental sciences [@Estefany2023], to name just a few. This is due to their versatile, non-destructive, and fast acquisition, which provides access to a wide range of material properties, such as composition, morphology, molecular structure, and optical and electronic properties. As such, machine learning (ML) has been used to analyze spectral data for several years, elucidating their vast complexity and uncovering further potential in the information contained within them [@Goodacre2003] [@Luo2022]. Unfortunately, most of these ML analyses lack further interpretation of the derived results due to the complex nature of such algorithms. In this regard, interpreting the results of ML algorithms has become an increasingly important topic, as concerns about the lack of interpretability of these models have grown [@Burkart2021]. In the natural sciences (such as materials science, physics, and chemistry), as ML becomes more common, this concern has gained special interest, since results obtained from ML analyses may lack scientific value if they cannot be properly interpreted, which can affect scientific consistency and strongly diminish the significance of, and confidence in, the results, particularly when tackling scientific problems [@Roscher2020].

-Even though there are methods and libraries available for explaining different types of algorithms, such as SHAP [@Lundberg2017], LIME [@Ribeiro2016], or GradCAM [@Selvaraju2017], they can be difficult to interpret or understand even for data scientists, leading to problems such as misinterpretation, misuse, and over-trust [@Kaur2020]. Compounded by other human-related issues [@Krishna12022], researchers with expertise in the natural sciences but little or no data science background may face further issues when using such methodologies [@Zhong2022]. Furthermore, these types of libraries normally target problems composed of image, text, or tabular data, which cannot be associated in a straightforward way with spectroscopic data. On the other hand, time series (TS) data shares similarities with spectroscopy, and while spectroscopy still has specific needs and differences, TS-dedicated tools can be a better approach. Unfortunately, despite the existence of several libraries that aim to explain models for TS, with the potential to be applied to spectroscopic data, they are mostly designed for a specialized audience, and many are model-specific [@Rojat2021]. In contrast, other libraries have a more end-user approach for TS-specific problems [@Hollig2023] but lack the versatility that spectroscopy experts need for deep spectroscopic analysis. Spectral data typically manifests as an array of overlapping peaks that can be distinguished by their shape, intensity, and position. Minor shifts in these patterns can indicate significant alterations in the fundamental properties of the subject material; conversely, pronounced variations might indicate only negligible differences. Therefore, comprehending such alterations and their implications is paramount. This remains true in ML-based spectroscopic analysis, where spectral variations are still of primary concern. In this context, a tool with an easy and understandable approach that offers spectroscopy-aimed functionalities for targeting specific patterns, areas, and variations, and that is friendly to beginners and non-specialists, is of high interest. This can help the different stakeholders better understand the ML models they employ and considerably increase the transparency, comprehensibility, and scientific impact of ML results [@Bhatt2020][@Belle2021].
+Even though there are methods and libraries available for explaining different types of algorithms, such as SHAP [@Lundberg2017], LIME [@Ribeiro2016], or GradCAM [@Selvaraju2017], they can be difficult to interpret or understand even for data scientists, leading to problems such as misinterpretation, misuse, and over-trust [@Kaur2020]. Compounded by other human-related issues [@Krishna12022], researchers with expertise in the natural sciences but little or no data science background may face further issues when using such methodologies [@Zhong2022]. Furthermore, these types of libraries normally target problems composed of image, text, or tabular data, which cannot be associated in a straightforward way with spectroscopic data. On the other hand, time series (TS) data shares similarities with spectroscopy, and while spectroscopy still has specific needs and differences, TS-dedicated tools can be a better approach. Unfortunately, despite the existence of several libraries that aim to explain models for TS, with the potential to be applied to spectroscopic data, they are mostly designed for a specialized audience, and many are model-specific [@Rojat2021]. In contrast, other libraries have a more end-user approach for TS-specific problems [@Hollig2023] but lack the versatility that spectroscopy experts need for deep spectroscopic analysis. Spectral data typically manifests as an array of overlapping peaks that can be distinguished by their shape, intensity, and position. Minor shifts in these patterns can indicate significant alterations in the fundamental properties of the subject material; conversely, pronounced variations might indicate only negligible differences. Therefore, comprehending such alterations and their implications is paramount. This remains true in ML-based spectroscopic analysis, where spectral variations are still of primary concern. In this context, a tool with an easy and understandable approach that offers spectroscopy-aimed functionalities for targeting specific patterns, areas, and variations, and that is friendly to beginners and non-specialists, is of high interest. This can help the different stakeholders better understand the ML models they employ and considerably increase the transparency, comprehensibility, and scientific impact of ML results [@Bhatt2020] [@Belle2021].


# Overview
@@ -47,7 +47,7 @@ pudu is versatile as it can analyze classification and regression algorithms for

**pudu** is built in Python 3 [@VanRossum2009] and uses third-party packages including numpy [@Harris2020], matplotlib [@Caswell2021], and keras. It is available on both PyPI and conda, and comes with complete documentation, including a quick start, examples, and contribution guidelines. Source code and documentation are available at https://github.com/pudu-py/pudu.

-![Two ways of using the same method 'importance': A) applying a sequential change pattern over all spectral features, and B) selecting peaks of interest. In A), the impact of the peak in the 1200-1400 range overshadows the impact of the rest. In contrast, in B) only the first four main peaks are selected for analysis, making their impact on the prediction easier to visualize.]{label="figure1"}(figure1.png)
+![Two ways of using the same method 'importance': A) applying a sequential change pattern over all spectral features, and B) selecting peaks of interest. In A), the impact of the peak in the 1200-1400 range overshadows the impact of the rest. In contrast, in B) only the first four main peaks are selected for analysis, making their impact on the prediction easier to visualize.](figure1.png)

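The difference between A) and B) comes down to which regions of the spectrum are perturbed before the model is re-evaluated. Below is a minimal conceptual sketch of this perturbation-importance idea using only numpy; the function names, window size, and peak ranges are hypothetical illustrations and do not reproduce pudu's actual API:

```python
import numpy as np

def predict(spectrum):
    # Stand-in for a trained model's prediction function (hypothetical).
    return float(np.sum(spectrum ** 2))

def region_importance(spectrum, regions, delta=0.1):
    # Perturb each region by a relative delta and record how much the
    # prediction shifts; larger shifts mean more influential regions.
    baseline = predict(spectrum)
    impacts = []
    for start, stop in regions:
        perturbed = spectrum.copy()
        perturbed[start:stop] *= (1.0 + delta)
        impacts.append(predict(perturbed) - baseline)
    return impacts

rng = np.random.default_rng(0)
spectrum = rng.random(2000)

# A) sequential change pattern over all spectral features (window of 50)
windows = [(i, i + 50) for i in range(0, len(spectrum), 50)]
impacts_all = region_importance(spectrum, windows)

# B) only peaks of interest, e.g. four analyst-selected index ranges
peaks = [(120, 180), (480, 530), (900, 960), (1200, 1400)]
impacts_peaks = region_importance(spectrum, peaks)
```

In pattern A) a dominant region can dwarf the rest of the importance profile, while pattern B) restricts the comparison to regions the analyst already considers meaningful, which is the trade-off Figure 1 illustrates.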

# Acknowledgements
