wav2mid: Polyphonic Piano Music Transcription with Deep Neural Networks

Thesis by Jonathan Sleep for MS in CSC @ CalPoly

Abstract / Intro

There has been a great deal of recent research on using deep learning for music and audio generation and classification. In this paper, we build on these works by implementing a novel system to automatically transcribe polyphonic music with an artificial neural network model. We show that by treating the transcription problem as an image-classification problem, we can use transformed audio data to predict the group of notes currently being played.

Background

  • Digital Signal Processing: Fourier transform, STFT, Constant-Q transform, onset/beat tracking, auto-correlation
  • Machine Learning: artificial neural networks, convolutional neural networks, recurrent neural networks
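
To make the signal-processing side concrete, here is a minimal librosa sketch that computes both an STFT magnitude spectrogram and a Constant-Q transform, the kind of Fourier-like representations used as model input later on. The audio file name and frame parameters are placeholder assumptions, not values taken from the repository.

import numpy as np
import librosa

audio_path = "example.wav"  # placeholder file name, not part of the repository
y, sr = librosa.load(audio_path, sr=22050)

# Short-time Fourier transform: linear-frequency magnitude spectrogram
stft = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))

# Constant-Q transform: log-spaced bins that line up with musical pitches
cqt = np.abs(librosa.cqt(y, sr=sr, hop_length=512, n_bins=88, bins_per_octave=12))

print(stft.shape, cqt.shape)  # both are (frequency bins, time frames)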

Related Work on AMT

Design

The design for the system is as follows:

  • Pre-process our data into an ingestible format: a Fourier-like transform of the audio and a piano-roll conversion of the MIDI files (see the sketch after this list)
  • Design a neural network model to estimate the currently sounding notes from the audio data
  • Predict notes either frame-wise (simpler) or at detected onsets (faster)
  • Train on a large corpus of paired audio and MIDI files
  • Evaluate its performance on audio/MIDI pairs we have not trained on
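
The following is a hedged sketch of the pre-processing step, pairing a Constant-Q transform of the audio with a pretty_midi piano roll of the matching MIDI file. The sample rate, hop length, and the helper name preprocess_pair are illustrative assumptions rather than the thesis's exact settings.

import numpy as np
import librosa
import pretty_midi

SR = 22050
HOP_LENGTH = 512
FPS = SR / HOP_LENGTH  # spectrogram frames per second

def preprocess_pair(wav_path, midi_path):
    # Audio -> Constant-Q magnitude spectrogram, one column per frame
    y, sr = librosa.load(wav_path, sr=SR)
    cqt = np.abs(librosa.cqt(y, sr=sr, hop_length=HOP_LENGTH,
                             n_bins=88, bins_per_octave=12))

    # MIDI -> binary piano roll sampled at the same frame rate (88 piano keys)
    pm = pretty_midi.PrettyMIDI(midi_path)
    roll = pm.get_piano_roll(fs=FPS)[21:109]  # MIDI pitches 21-108 = A0-C8
    roll = (roll > 0).astype(np.float32)

    # Trim to a common length so every spectrogram frame has a label vector
    n_frames = min(cqt.shape[1], roll.shape[1])
    return cqt[:, :n_frames].T, roll[:, :n_frames].T  # shapes: (frames, 88)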

Implementation

Libraries

  • Python - due to the abundance of music and machine learning libraries developed for it
  • librosa - for digital signal processing methods
  • pretty_midi - for MIDI manipulation methods
  • TensorFlow - for building and training the neural networks (a minimal model sketch follows below)
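
To show how these libraries fit together with the image-classification framing from the abstract, here is a minimal Keras sketch of a frame-wise model: each input is a small spectrogram window treated like an image, and the output is a multi-label prediction over the 88 piano keys. The window size, layer sizes, and shapes are assumptions for illustration, not the architecture used in the thesis.

import tensorflow as tf
from tensorflow import keras

WINDOW = 7    # spectrogram frames of context around the target frame (assumed)
N_BINS = 88   # Constant-Q bins per frame
N_KEYS = 88   # piano keys to predict

model = keras.Sequential([
    keras.Input(shape=(N_BINS, WINDOW, 1)),
    keras.layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    keras.layers.MaxPooling2D((2, 1)),
    keras.layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dropout(0.5),
    # Sigmoid + binary cross-entropy because several notes can sound at once
    keras.layers.Dense(N_KEYS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["binary_accuracy"])
model.summary()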

Data

Run Instructions

Both contributors use Python 3.8.2 (64-bit) and recommend sticking to this version to avoid module issues. Ensure you have pip 20.2.2 installed and run the following in a terminal:

$ pip3 install mido
$ pip3 install pretty_midi
$ pip3 install cython
$ pip3 install madmom
$ pip3 install pandas
$ pip3 install librosa
$ pip3 install matplotlib
$ pip3 install keras
$ pip3 install tensorflow
$ pip3 install pydotplus
$ pip3 install graphviz

You may also have to install the Graphviz binaries from here: https://graphviz.org/download/

Add the data you want to train on to a directory called training data. Add at least 5 pieces of data, each with both a MIDI and a WAV version, to a subdirectory of this folder.
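
As a hedged illustration of that layout, the snippet below scans the training data folder for matching WAV/MIDI pairs by shared filename. The pairing rule and the helper name find_pairs are assumptions for illustration; runs.py may organise the data differently.

from pathlib import Path

def find_pairs(root="training data"):
    pairs = []
    for sub in Path(root).iterdir():
        if not sub.is_dir():
            continue
        for wav in sub.glob("*.wav"):
            midi = wav.with_suffix(".mid")  # assumes the MIDI shares the WAV's name
            if midi.exists():
                pairs.append((str(wav), str(midi)))
    return pairs

print(find_pairs())  # list of (wav_path, midi_path) tuples found under training data/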

Start by running the file runs.py.
