Interactive Machine Learning for Music
Prof. Rebecca Fiebrink
3. How can we support instrument designers in using ML in practice?
When I began working in this space in 2007 as a PhD student at Princeton, laptops were far more capable than when Lee et al. had first proposed using neural networks for mappings, and I could train many usable neural networks in a few minutes, if not a few seconds: fast enough even to use in live performance. Yet almost nobody was building instruments with ML; the exceptions tended to be people with computer science or engineering PhDs who had the expertise and inclination to code their own ML systems and work out how to connect them to sensors and sound.
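To make the underlying idea concrete, here is a minimal sketch of such a neural-network mapping: a small regressor trained on a handful of demonstrated sensor poses, each paired with desired synthesis parameters. This is illustrative only, not Wekinator's implementation; the sensor values, parameter meanings, and use of scikit-learn's MLPRegressor are all assumptions for the example.

```python
# Minimal sketch (illustrative, not Wekinator's code): train a small neural
# network to map sensor readings to synthesis parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training examples: each row is one demonstrated pose.
# Inputs: 3 sensor values (e.g., accelerometer axes), scaled to [0, 1].
X = np.array([
    [0.1, 0.2, 0.9],
    [0.8, 0.1, 0.3],
    [0.5, 0.7, 0.5],
    [0.2, 0.9, 0.1],
])
# Outputs: 2 synthesis parameters (e.g., pitch and filter cutoff), in [0, 1].
y = np.array([
    [0.0, 0.2],
    [0.9, 0.8],
    [0.5, 0.5],
    [0.3, 1.0],
])

# A network this small trains in well under a second on a modern laptop,
# which is what makes interactive, performance-time retraining feasible.
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X, y)

# At performance time, stream live sensor readings through the trained
# model to produce continuous control values for the synthesizer.
print(model.predict([[0.4, 0.5, 0.6]]))
```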
I therefore set out to explore how we could make ML tools for instrument creators. I did this through a combination of approaches: endlessly tinkering on my own to discover what seemed promising; learning all I could about DMI design through the NIME community and from graduate students and professors in the Music department at Princeton; and ultimately working with those students and professors to build a series of ML software prototypes, critique them, and make music with them. By 2010, this work had culminated in substantial insight into how to make ML usable by and useful to instrument creators; in a piece of software called Wekinator, which has been downloaded over 50,000 times and is still used around the world in teaching and creative practice; and in my PhD thesis (Fiebrink 2011).