Interactive Machine Learning for Music

Prof. Rebecca Fiebrink

5. Musical and creative benefits of IML

5.4 Designing with data can allow more people to become creators

ML allows a creator to communicate through examples what actions they’d like a musician to make, and what sounds they’d like a computer to make in response. This means that it is no longer necessary to write programming code (at least for designing the mapping), potentially opening up the instrument design process to people who lack programming and other technical expertise.

One project that explored this possibility was Sound Control (figure 6, Parke-Wolfe et al. 2019), a collaboration with music teachers and therapists working with children with a wide variety of disabilities. We built a standalone software tool that lets anyone select an input modality (e.g., webcam colour tracking, microphone input, Gametrak) and a sound-making module (e.g., looper, FM synthesis, sample mixer) from drop-down menus. Users then demonstrate a few examples of how actions captured with those inputs should relate to sounds, and a Wekinator-style IML process trains a model and produces a playable instrument. This project was particularly exciting: not only could teachers and therapists who knew nothing about ML or programming make completely new instruments for the children they worked with, but they also explored a much wider range of musical interactions than if they had depended on a programmer to implement their initial ideas about the instruments they wanted to make.
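In spirit, this mapping-by-demonstration step is ordinary supervised regression: a few paired examples of input features and desired sound parameters train a model that then maps live input to sound continuously. The sketch below is not Sound Control's or Wekinator's actual code; the input features (webcam colour-tracking position) and output parameters (pitch, loudness) are invented for illustration, and scikit-learn stands in for whatever learner the real tools use.

```python
# Minimal sketch of a Wekinator-style mapping by demonstration.
# Assumptions: 2-D input features (e.g., tracked colour position) and
# 2 synthesis parameters (pitch in Hz, loudness 0-1); values are made up.
import numpy as np
from sklearn.neural_network import MLPRegressor

# A handful of demonstrated examples: input pose -> desired sound settings.
demo_inputs = np.array([
    [0.1, 0.2],
    [0.5, 0.8],
    [0.9, 0.4],
])
demo_outputs = np.array([
    [220.0, 0.2],   # low pitch, quiet
    [440.0, 0.9],   # mid pitch, loud
    [660.0, 0.5],   # high pitch, medium
])

# Train a small neural-network regressor on the demonstrations.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(demo_inputs, demo_outputs)

# At performance time, each new input frame is mapped to sound parameters,
# which would be sent on to the chosen sound-making module.
live_input = np.array([[0.7, 0.6]])
pitch, loudness = model.predict(live_input)[0]
print(f"pitch={pitch:.1f} Hz, loudness={loudness:.2f}")
```

The point of the workflow is that the person demonstrating the examples never sees anything like this code: they only provide input-output pairs and then play the resulting instrument.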

Figure 6: A music teacher and music therapist using Sound Control to make custom interfaces for a child.