There has been a great deal of recent research on using deep learning for music and audio generation and classification. In this paper, we plan to build on these works by implementing a novel system to automatically transcribe polyphonic music with an artificial neural network model. We show that by treating transcription as an image classification problem, we can use transformed audio data to predict the group of notes currently being played.
Digital Signal Processing: Fourier Transform, STFT, Constant-Q, Onset/Beat Tracking, Auto-correlation
Machine Learning: Artificial Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks
Pre-Deep Learning Research
- Non-negative matrix factorization for polyphonic music transcription
- Really cool paper on transcribing music with NMF - a very simple approach. I wish more results had been reported with metrics like accuracy, but the method is clear. It would be interesting to see if/how I could extend this (a minimal NMF sketch follows this list).
- YIN, a fundamental frequency estimator for speech and music - builds on autocorrelation to produce an f0 estimator with even lower error.
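To make the NMF idea concrete, here is a minimal sketch using librosa and scikit-learn. It is not the paper's exact setup: the file name, number of components, and threshold are placeholder assumptions.

```python
# Minimal sketch of NMF-based transcription (illustrative, not the paper's exact setup).
# Assumes a mono audio file "example.wav"; requires librosa and scikit-learn.
import librosa
import numpy as np
from sklearn.decomposition import NMF

y, sr = librosa.load("example.wav", sr=None, mono=True)
S = np.abs(librosa.stft(y))            # magnitude spectrogram, shape (freq_bins, frames)

# Factorize S ~= W @ H: W holds spectral templates (roughly one per note/source),
# H holds their activations over time. Peaks in a row of H suggest when the
# corresponding template (note) is sounding.
model = NMF(n_components=8, init="nndsvd", max_iter=400)
W = model.fit_transform(S)             # (freq_bins, n_components)
H = model.components_                  # (n_components, frames)

# A crude "transcription": threshold the activations to get on/off per template.
active = H > (0.1 * H.max())
```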
Research that Uses Deep Learning
- Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription - Using a sequential model to aid in transcription.
- An End-to-End Neural Network for Polyphonic Piano Music Transcription - research on automatic music transcription (AMT) that combined an acoustic model with a music language model, reaching ~75% accuracy on MAPS.
- On the Potential of Simple Framewise Approaches to Piano Transcription - explains the current state-of-the-art and what the most effective architectures and input representations are for framewise transcription.
- An Experimental Analysis of the Entanglement Problem in Neural-Network-based Music Transcription Systems - explains entanglement: the problem of generalizing to note combinations the network was not trained on. Entanglement is the current glass-ceiling problem for framewise neural-network music transcription. They present a few (really just one) possible solutions that I could try to implement (a loss function that takes entanglement into account).
Products
- Melodyne - Popular plugin for transcription + pitch correction, costs up to $500
- AnthemScore - A product for Music Transcription that uses deep learning.
The design for the system is as follows:
- Pre-process our data into an ingestible format: a Fourier-like transform of the audio and a piano-roll conversion of the MIDI files (see the pre-processing sketch after this list)
- Design a neural network model to estimate current notes from audio data
- Use frame-wise prediction (simpler) or onset-based prediction (faster)
- Train on a large corpus of audio-to-MIDI pairs
- Evaluate its performance on audio/MIDI pairs we have not trained on
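Below is a minimal sketch of the pre-processing step, assuming a constant-Q transform for the audio and pretty_midi's piano roll for the labels. The file paths, sample rate, and hop length are placeholders, not the project's final settings.

```python
# Sketch of the pre-processing step: CQT frames from the audio, aligned
# piano-roll labels from the MIDI. Paths and parameters are placeholders.
import librosa
import numpy as np
import pretty_midi

SR = 22050
HOP_LENGTH = 512

def load_features(wav_path):
    """Constant-Q magnitude spectrogram, one row per frame."""
    y, _ = librosa.load(wav_path, sr=SR, mono=True)
    cqt = np.abs(librosa.cqt(y, sr=SR, hop_length=HOP_LENGTH))
    return cqt.T                                   # (frames, bins)

def load_labels(midi_path, n_frames):
    """Binary piano roll sampled at the audio frame rate."""
    frame_rate = SR / HOP_LENGTH
    pm = pretty_midi.PrettyMIDI(midi_path)
    roll = pm.get_piano_roll(fs=frame_rate) > 0    # (128, frames), True where a note sounds
    return roll.T[:n_frames].astype(np.float32)    # align length with the audio frames

features = load_features("training data/song_01/audio.wav")
labels = load_labels("training data/song_01/score.mid", len(features))
```

Each row of `features` is then paired with the matching row of `labels` as one training example for the frame-wise model.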
- Python - due to the abundance of music and machine learning libraries developed for it
- librosa - for digital signal processing methods
- pretty_midi - for MIDI manipulation methods
- TensorFlow - for neural networks (a frame-wise model sketch follows this list)
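As a starting point, here is a minimal frame-wise model sketch in Keras/TensorFlow. Each input is a small window of CQT frames treated like an image, and the output is 128 independent note probabilities. The window size, number of bins, and layer sizes are assumptions, not a tuned architecture.

```python
# Minimal frame-wise model sketch: a CNN over a short window of CQT frames,
# with a sigmoid output per MIDI pitch (multi-label classification).
# All sizes below are placeholder assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_BINS = 84        # CQT bins per frame (assumption)
CONTEXT = 5        # frames of context on each side of the target frame (assumption)

model = models.Sequential([
    layers.Input(shape=(2 * CONTEXT + 1, N_BINS, 1)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((1, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(128, activation="sigmoid"),   # one probability per MIDI pitch
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```

Sigmoid outputs with binary cross-entropy let the model predict several notes at once, which is what polyphonic transcription requires; a softmax over single notes would not.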
Both contributors use Python 3.8.2 (64-bit) and recommend sticking to this version to avoid module issues. Ensure you have pip 20.2.2 installed, then run the following in a terminal:
$ pip3 install mido
$ pip3 install pretty_midi
$ pip3 install cython
$ pip3 install madmom
$ pip3 install pandas
$ pip3 install librosa
$ pip3 install matplotlib
$ pip3 install keras
$ pip3 install tensorflow
$ pip3 install pydotplus
$ pip3 install graphviz
You may have to install Graphviz separately from here: https://graphviz.org/download/
Add the data you want to train on to a directory called training data. Add at least 5 pieces of data, each with both a MIDI and WAV version, to a sub-directory in this folder.
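For reference, here is a rough sketch of how such a directory could be scanned for matching WAV/MIDI pairs; the exact layout and file names that runs.py expects may differ, so treat this only as an illustration of the intended structure.

```python
# Sketch of scanning "training data" for WAV/MIDI pairs, one pair per
# sub-directory. The exact layout runs.py expects may differ.
from pathlib import Path

def find_pairs(root="training data"):
    pairs = []
    for piece_dir in sorted(Path(root).iterdir()):
        if not piece_dir.is_dir():
            continue
        wavs = sorted(piece_dir.glob("*.wav"))
        mids = sorted(piece_dir.glob("*.mid")) + sorted(piece_dir.glob("*.midi"))
        if wavs and mids:
            pairs.append((wavs[0], mids[0]))   # take the first audio/MIDI pair found
    return pairs

print(find_pairs())
```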
Start by running the file runs.py.