Daniele Rege Cambrin1 · Eleonora Poeta1 · Eliana Pastor1
Tania Cerquitelli1 · Elena Baralis1 · Paolo Garza1
1Politecnico di Torino, Italy
This paper analyzes the integration of KAN layers into the U-Net architecture (U-KAN) to segment crop fields using Sentinel-2 and Sentinel-1 satellite images, and provides an analysis of the performance and explainability of these networks. Our findings indicate a 2% improvement in IoU over the traditional fully convolutional U-Net model while requiring fewer GFLOPs. Furthermore, gradient-based explanation techniques show that U-KAN predictions are highly plausible and that the network focuses strongly on the boundaries of cultivated areas rather than on the areas themselves. The per-channel relevance analysis also reveals that some channels are irrelevant to this task.
REPOSITORY UNDER CONSTRUCTION: SOME FILES MAY BE MISSING
Install the dependencies listed in the requirements.txt file, and make sure to edit the configuration files in the configs/
folder. Then, simply run main.py to train the models.
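The steps above can be sketched as follows (a minimal sequence assuming a standard Python environment; only requirements.txt, configs/, and main.py come from this repository):

```shell
# Install the project dependencies
pip install -r requirements.txt

# Edit the configuration files before training (use any editor)
ls configs/

# Launch training with the settings from configs/
python main.py
```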
Use the xai.ipynb notebook for the explainability analysis.
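The gradient-based, per-channel relevance idea from the paper can be illustrated with a small self-contained sketch (this is NOT the repository's code: the finite-difference saliency, the toy scoring function, and all names here are illustrative stand-ins for autograd-based attribution on the real model):

```python
import numpy as np

def saliency_map(score_fn, image, eps=1e-3):
    """Approximate |d score / d pixel| with central finite differences.

    A stand-in for gradient-based saliency; `score_fn` maps an
    image of shape (channels, H, W) to a scalar score.
    """
    sal = np.zeros_like(image, dtype=float)
    it = np.nditer(image, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        up, down = image.copy(), image.copy()
        up[idx] += eps
        down[idx] -= eps
        sal[idx] = abs(score_fn(up) - score_fn(down)) / (2 * eps)
    return sal

# Toy "model" that responds only to channel 0, mimicking the
# finding that some input channels are irrelevant to the task.
rng = np.random.default_rng(0)
img = rng.normal(size=(2, 4, 4))              # (channels, H, W)
score = lambda x: float((x[0] ** 2).sum())    # ignores channel 1

sal = saliency_map(score, img)
print(sal[0].max() > 0.0)   # channel 0 receives relevance
print(sal[1].max())         # channel 1 receives none
```

Aggregating such a map per channel (e.g. summing absolute relevance over H and W) is one simple way to rank input channels by importance, as done conceptually in the per-channel analysis.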
The repository was set up by Eleonora Poeta (XAI section) and Daniele Rege Cambrin (everything else).
You can find the computed cloud masks for Sentinel-2 on HuggingFace.
This project is licensed under the Apache 2.0 license. See LICENSE for more information.
U-Net is licensed under the GPL-3 license. See LICENSE for more information.
U-KAN is licensed under the MIT license. See LICENSE for more information.
If you find this project useful, please consider citing:
@misc{cambrin2024kanitkanssentinel,
  title={KAN You See It? KANs and Sentinel for Effective and Explainable Crop Field Segmentation},
  author={Daniele Rege Cambrin and Eleonora Poeta and Eliana Pastor and Tania Cerquitelli and Elena Baralis and Paolo Garza},
  year={2024},
  eprint={2408.07040},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2408.07040},
}