ISMIR 2022 Tutorial: Few-Shot and Zero-Shot Learning for Music Information Retrieval
Yu Wang, Jeong Choi, and I gave a tutorial at ISMIR 2022 on few-shot and zero-shot learning, centered on music information retrieval tasks. In the tutorial, we cover the foundations of few-shot/zero-shot learning, build standalone coding examples, and discuss the state of the art in the field, as well as future directions.
The tutorial is available online as a Jupyter Book.
Deep Learning Tools for Audacity
We provide a software framework that lets deep learning practitioners easily integrate their own PyTorch models into the open-source Audacity DAW. This lets ML audio researchers put tools in the hands of sound artists without doing DAW-specific development work.
Leveraging Hierarchical Structures for Few-Shot Musical Instrument Recognition
In this work, we exploit hierarchical relationships between instruments in a few-shot learning setup to enable classification of a wider set of musical instruments, given only a few examples at inference time. See the supplementary code on github.
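The few-shot backbone this line of work builds on can be sketched in a few lines: class prototypes are computed as the mean of each class's support embeddings, and queries are classified by their nearest prototype. The function below is an illustrative flat (non-hierarchical) sketch, not the paper's method:

```python
import torch

def prototypical_classify(support, support_labels, query, n_classes):
    """Classify query embeddings by distance to class prototypes.

    support: (n_support, dim) embeddings of labeled examples
    support_labels: (n_support,) integer class labels
    query: (n_query, dim) embeddings to classify
    """
    # Prototype = mean embedding of each class's support examples
    prototypes = torch.stack([
        support[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])
    # Squared Euclidean distance from each query to each prototype
    dists = torch.cdist(query, prototypes) ** 2
    # Nearest prototype wins
    return dists.argmin(dim=1)
```

The hierarchical variant extends this idea by aggregating prototypes along an instrument taxonomy, so unseen instruments can borrow structure from their relatives.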
update: this work won the Best Paper Award at ISMIR 2021! :)
ISMIR 2021 Poster Video
Audacity with Deep Learning
I am contributing to Audacity a deep learning framework and a deep model manager that connects to HuggingFace. This project was funded by a Google Summer of Code grant. Read the Work Product Summary.
PyTorch wrappers for using your deep model in Audacity, and sharing it with the community!
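Getting a PyTorch model into a C++ host like Audacity typically goes through TorchScript, so a libtorch application can run the model without a Python runtime. A minimal sketch follows; the toy `GainModel` and filename are illustrative, and the actual wrappers add metadata and I/O conventions on top of this:

```python
import torch
import torch.nn as nn

class GainModel(nn.Module):
    """Toy waveform-to-waveform effect: scales the input audio.

    Stands in for a real deep model; the real wrappers layer
    metadata and tensor-shape conventions on top.
    """
    def __init__(self, gain: float = 0.5):
        super().__init__()
        self.gain = gain

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        # audio: (channels, samples) waveform tensor
        return audio * self.gain

# Compile to TorchScript so a libtorch host can load it without Python
scripted = torch.jit.script(GainModel())
scripted.save("gain_model.pt")
```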
A PyTorch port of the openl3 audio embedding (ported from the marl implementation).
PyTorch dataset bindings for 14,000 sound samples of the Philharmonia Orchestra, retrieved from their website. [github]
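A folder-per-instrument layout like the Philharmonia samples maps naturally onto a PyTorch `Dataset`. Below is a minimal sketch, not the actual bindings; the directory layout and the use of `torchaudio` for loading are assumptions:

```python
from pathlib import Path
from torch.utils.data import Dataset

class InstrumentSampleDataset(Dataset):
    """Indexes audio files laid out as root/<instrument>/<sample>.wav.

    Audio is loaded lazily in __getitem__, so building the index is cheap.
    """
    def __init__(self, root: str):
        self.paths = sorted(Path(root).glob("*/*.wav"))
        # Map instrument folder names to integer class ids
        instruments = sorted({p.parent.name for p in self.paths})
        self.label_of = {name: i for i, name in enumerate(instruments)}

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        path = self.paths[idx]
        import torchaudio  # assumed available; returns (waveform, sample_rate)
        waveform, sample_rate = torchaudio.load(str(path))
        return waveform, self.label_of[path.parent.name]
```

Wrapping this in a `DataLoader` then gives shuffled, batched (waveform, label) pairs for training.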