These are the slides and example code for our part (G. Schuller, R. Profeta) of the workshop "Teaching AI to Hear Like We Do: Psychoacoustics in Machine Learning" at the October 2022 AES Convention in New York. The collection of slides from all participants can be found at: https://www.aes.org/technical/documentDownloads.cfm?docID=774
The accompanying Python Audio Coder repository: https://github.com/TUIlmenauAMS/Python-Audio-Coder
The Colab Jupyter notebook for the perceptual loss function, using the psycho-acoustic model, with comparisons and explanations:
[onlyPsyacLoss.ipynb](https://colab.research.google.com/github/TUIlmenauAMS/PsychoacousticLoss/blob/main/onlyPsyacLoss.ipynb)
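For orientation, below is a minimal sketch of the general idea behind such a perceptual loss: the spectral error between the original and the reconstructed signal is normalized by a per-bin masking threshold, so that deviations below the threshold (ideally inaudible) contribute little. The function name `psychoacoustic_loss` and the `masking_threshold` callable are placeholders for illustration only; the notebook above contains the actual psycho-acoustic model and loss used.

```python
import numpy as np

def psychoacoustic_loss(original, reconstructed, masking_threshold,
                        frame_len=1024, hop=512, eps=1e-12):
    """Illustrative perceptual loss (sketch, not the notebook's implementation).

    original, reconstructed: 1-D time-domain signals of equal length.
    masking_threshold: callable mapping a magnitude spectrum of shape
                       (frame_len//2 + 1,) to a per-bin masking threshold
                       of the same shape (placeholder for a psycho-acoustic model).
    """
    window = np.hanning(frame_len)
    loss = 0.0
    n_frames = 0
    for start in range(0, len(original) - frame_len + 1, hop):
        x = original[start:start + frame_len] * window
        y = reconstructed[start:start + frame_len] * window
        X = np.abs(np.fft.rfft(x))
        Y = np.abs(np.fft.rfft(y))
        thr = masking_threshold(X)  # per-bin masking threshold from the original frame
        # Normalize the squared spectral error by the threshold:
        # errors well below the threshold barely count, errors above it dominate.
        loss += np.mean(((X - Y) / (thr + eps)) ** 2)
        n_frames += 1
    return loss / max(n_frames, 1)

# Toy usage with a crude stand-in threshold (a fraction of the spectrum itself);
# a real psycho-acoustic model (spreading function, threshold in quiet) goes here.
if __name__ == "__main__":
    fs = 32000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 440 * t)
    y = x + 0.001 * np.random.randn(len(x))
    print(psychoacoustic_loss(x, y, lambda X: 0.05 * X + 1e-6))
```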
We organised a special session on "Perceptual and Higher Level Loss and Distance Functions for Audio and Acoustics" at the 2024 Asilomar Conference; an overview is available here: https://cmsworkshops.com/Asilomar2024/view_session.php?SessionID=1126
Our talk slides from the special session are in "Asilomar2024PsyacLossTalk.pdf" in this repository; "asilomar2024_report.pdf" gives an overview of the session.
All talks from the special session can be found in the subdirectory: https://github.com/TUIlmenauAMS/PsychoacousticLoss/tree/main/Asilomar2024SpecialSessionTalks