feat: audio input for induced motion #124
Conversation
…l with band-pass filters, window size based on FPS
Regarding docs: if you write up a tutorial as a jupyter notebook, we can add it to pytti-book directly. Thanks again for putting this together; I'll try to poke around with this later today!
Quick update: just wanted to let you know this is on my radar. I'm working on a really hard-to-nail bug in …
Rebased and added some minor fixes, see PR #140. Merged into the test branch for the time being. Could you add a demo and/or test case? Feel free to add a small/short audio file to …
Thanks again for this amazing contribution! Wanted to let you know the feature has been merged into the main branch, and I also added some simple support in the main colab notebook.
closes #102
I started the issue with the idea of making it FFT-based, sorting frequencies into buckets. This is a more versatile approach now: instead of dealing with FFT window sizes and all that, we just take the slice of the audio signal corresponding to the duration of the current frame and pass it through configurable Butterworth band-pass filters.
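For reference, the per-frame filtering step could look roughly like this. This is only a sketch using `scipy.signal`; the function name and parameters here are illustrative, not the actual PR code:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bandpass_frame(signal, sr, fps, frame_idx, low_hz, high_hz, order=4):
    """Band-pass filter the audio samples that fall within one video frame.

    Hypothetical helper: slices out the chunk of audio the current frame
    spans (based on sample rate and FPS) and runs it through a Butterworth
    band-pass filter.
    """
    n = int(sr / fps)  # samples per video frame
    chunk = signal[frame_idx * n : (frame_idx + 1) * n]
    # second-order sections for numerical stability
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, chunk)
```

The window size falls out of the FPS automatically, so there is no separate FFT window parameter to tune.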
One additional trick: when loading the audio initially, we run the band-pass filters across the whole track and find the maxima, which we use to normalize each individual band to the 0..1 range. That way the signal always covers the full 0..1 range in the input, which makes it easier to use in functions.

Feel free to request whatever else you need; I wouldn't mind writing some docs with basic examples/hints, or adding things to the notebook where needed.
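The normalization trick might be sketched like this (again illustrative, assuming `scipy` and `numpy`; not the PR's actual implementation):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_envelope(signal, sr, fps, low_hz, high_hz, order=4):
    """Per-frame amplitude of one band, normalized to 0..1 over the track.

    Hypothetical helper: filters the entire track once, takes the peak
    absolute value inside each frame-sized window, then divides by the
    global maximum so the band always spans the full 0..1 range.
    """
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    filtered = sosfilt(sos, signal)  # run the filter over the whole track
    n = int(sr / fps)                # samples per video frame
    frames = len(filtered) // n
    env = np.array([np.abs(filtered[i * n:(i + 1) * n]).max() for i in range(frames)])
    peak = env.max()                 # global maximum used for normalization
    return env / peak if peak > 0 else env
```

Because the maximum is computed over the whole track up front, quiet passages still map into a usable 0..1 range relative to the loudest moment in that band.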