Interactive Machine Learning for Music

Prof. Rebecca Fiebrink

3. How can we support instrument designers in using ML in practice?

3.3 The Wekinator: An IML tool for music

The tool I built to enable instrument builders to create mappings is called the Wekinator. Wekinator allows users to employ an IML workflow, illustrated in Figure 5.

Figure 5: The IML workflow supported in Wekinator.

Wekinator has the following key capabilities:

  • It allows you to make mappings using classification or regression, with a choice of several algorithms. (It also supports a slightly more complex approach to identifying temporal gestures, called dynamic time warping.) It allows you to build multiple models in parallel; for instance, if you want to build a mapping that controls 10 real-valued synthesis parameters, you can build 10 regression models simultaneously.
  • It allows you to record new training examples in realtime, from demonstrations.
  • It can receive inputs (e.g., sensor values) from anywhere using OpenSoundControl (OSC) messages. For instance, people have controlled Wekinator using sensors attached to Arduinos, microphones, webcams, game controllers, Leap Motion controllers, and many other devices. (A minimal OSC sketch follows this list.)
  • It can send the models’ output values to any other software using OpenSoundControl. For instance, people have used it to control sound in Max/MSP, SuperCollider, Ableton, ChucK, and JavaScript, as well as for controlling game engines, animation software, lighting systems, web apps, physical computing systems built with microcontrollers like Arduino, and other processes.
  • The software itself can also be controlled using OpenSoundControl messages, allowing behaviours like training or loading new models to be triggered by messages sent from other software.
  • It allows you to play with models in realtime immediately after they have been trained, enabling you to try out a new interaction or instrument and decide if you want to change anything about it.
  • It supports an interactive machine learning approach, in which you can change or improve models by immediately and iteratively adding (or removing) training examples.
  • The algorithms Wekinator employs are chosen and configured so that they generally work well on small datasets; for a straightforward mapping problem, you can sometimes get away with just a few examples (under 10). Wekinator is also fast: trained models run at rates suitable for realtime performance, and training takes just a few seconds or less for many datasets.
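To make the OSC input side concrete, here is a minimal sketch in Python using the python-osc library (any OSC-capable environment would work just as well). It assumes Wekinator's default configuration of listening on port 6448 for a /wek/inputs message carrying one float per input; if your project uses a different port or message name, adjust accordingly. Handling the models' outputs is sketched after the video examples below.

```python
# Minimal sketch: stream three stand-in "sensor" values to Wekinator over OSC.
# Assumes Wekinator's defaults (listening on port 6448 for /wek/inputs, one
# float per input); adjust the port and address to match your project.
import random
import time

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)  # Wekinator's default input port

while True:
    features = [random.random() for _ in range(3)]  # replace with real sensor data
    client.send_message("/wek/inputs", features)
    time.sleep(0.05)  # roughly 20 input frames per second
```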

The following videos demonstrate how Wekinator can be used to make two simple instruments. The first uses a very simple webcam program to capture a performer’s posture, which then determines which drum sequences ChucK plays:

Using Wekinator to build a posture classifier to control a drum machine.
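As an illustration of the receiving side (in Python rather than the ChucK code used in the video), the sketch below treats the classifier's single output value as a class label and uses it to switch drum patterns. The pattern names are invented for the example, and Wekinator's default behaviour of sending /wek/outputs messages to port 12000 is assumed.

```python
# Illustrative sketch (not the ChucK code from the video): receive a posture
# class from Wekinator and use it to choose a drum pattern. Assumes the
# default /wek/outputs message on port 12000 and a single classifier output.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Hypothetical drum patterns, keyed by posture class label.
DRUM_PATTERNS = {1: "kick_only", 2: "kick_and_snare", 3: "full_break"}
current_pattern = None

def handle_class(address, *values):
    global current_pattern
    label = int(round(values[0]))        # class labels arrive as numbers
    pattern = DRUM_PATTERNS.get(label)
    if pattern is not None and pattern != current_pattern:
        current_pattern = pattern
        print(f"Switching to drum pattern: {pattern}")  # trigger the sequence here

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", handle_class)
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```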

The next video demonstrates how we can build a more expressive and complex instrument using regression. Here, the input comes from a GameTrak “Real World Golf” controller, and 9 regression models in Wekinator each control a parameter of the Blotar (Van Stiefel et al. 2004) physical model in Max/MSP:

Using Wekinator to build 9 regression models to control a physical model in Max/MSP with a GameTrak controller.
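A corresponding sketch for the regression case (again in Python rather than the actual Max/MSP patch) receives nine continuous outputs in a single /wek/outputs message and rescales them into parameter ranges. The parameter names and ranges below are invented, the outputs are assumed to lie in Wekinator's default 0 to 1 range, and in a real instrument you would forward the scaled values to the synthesizer instead of printing them.

```python
# Illustrative sketch (not the Max/MSP patch from the video): receive nine
# regression outputs from Wekinator and rescale each to a synth parameter
# range. Parameter names and ranges are invented; outputs are assumed to lie
# in Wekinator's default 0-1 range and to arrive on the default port 12000.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# (name, minimum, maximum) for each of the nine controlled parameters.
PARAMS = [
    ("breath_pressure", 0.0, 1.0),
    ("pick_position", 0.0, 1.0),
    ("body_size", 0.1, 4.0),
    ("pluck_gain", 0.0, 1.0),
    ("filter_cutoff", 100.0, 8000.0),
    ("vibrato_rate", 0.0, 12.0),
    ("vibrato_depth", 0.0, 1.0),
    ("reverb_mix", 0.0, 1.0),
    ("output_gain", 0.0, 1.0),
]

def handle_outputs(address, *values):
    # One incoming value per regression model, in the same order as PARAMS.
    for (name, lo, hi), v in zip(PARAMS, values):
        scaled = lo + (hi - lo) * float(v)
        print(f"{name} = {scaled:.3f}")  # forward to the synthesizer here

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", handle_outputs)
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```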

The Wekinator website has detailed instructions on how to run the software, as well as examples of how to hook it up to numerous sensors and software environments, including the two examples shown above. If you are interested in the details of how (and why) it was designed, you can refer to the original NIME publication (Fiebrink et al. 2009).