Interactive Machine Learning for Music

Prof. Rebecca Fiebrink

1. Motivation: Why IML for music?

One of my main motivations for beginning to explore machine learning (ML) for music in the early 2000s was my excitement about the possibilities for making new digital musical instruments (or “DMIs”). By that time, decades of work had already been done by researchers and musicians such as Max Mathews (1991), Laetitia Sonami (Bongers 2000), and Michael Waisvisz (Bongers 2000) (see also the videos below) to explore the new sonic, musical, and interactive possibilities arising from DMIs. Digital sound synthesis and processing techniques make it possible for such instruments to produce sounds never before heard—in theory, any sound imaginable! Further, a musician’s actions or gestures can be sensed with a huge variety of technologies—not only buttons and knobs, but also cameras, microphones, force, touch, or magnetic sensors, physiological sensors capturing brain wave or heart rate information, and many others. There is thus huge flexibility in how an instrument maker might link a musician’s actions to the resulting instrument sound: sensor data is fed into a computer or microprocessor, and code—not physics!—determines how sound is produced in response.


Max Mathews demonstrating the Radio Baton.


Laetitia Sonami discussing the Lady’s Glove.


Michael Waisvisz playing The Hands.


(If you are interested in learning more about DMIs, the research behind them, and the music being made with them, a great place to start is the proceedings of the New Interfaces for Musical Expression—or NIME—conference.)