Sound Gesture Intelligence

Preface

About the author

Greg Beller works as an artist, researcher, teacher, and computer designer for the contemporary arts. Founder of the Synekine project, he invents new musical instruments combining sound and movement, which he uses in comprovisation settings with various performers and in computer-assisted composition, notably in his opera The Fault. At the Ligeti Center, while preparing a second doctorate on “Natural Interfaces for Computer Music”, he is a research assistant in the innovation-lab and teaches in the Multimedia Composition department at Hamburg’s HfMT University of Music and Drama. At the nexus of the arts and sciences at IRCAM, he has successively been a doctoral student working on generative models of expressivity and their applications to speech and music, a computer music designer, director of the Research/Creation Interfaces department, and product manager of the IRCAM Forum.

This unit

In this unit, you will discover some of the links between voice and gesture, and use this natural proximity to create new, intuitive musical instruments that let you manipulate sound with your hands. The unit is structured so that you can approach, with increasing complexity, the use of machine learning to model the temporal relationships between multimodal data streams. Various physical gesture sensors are introduced and compared, libraries for gesture processing and machine learning are presented, and their use is demonstrated in different artistic contexts for pedagogical purposes.
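
To give a first, concrete flavour of what modelling the temporal relationship between gesture and sound can look like, here is a minimal sketch in Python. It deliberately uses only NumPy rather than any of the libraries presented later in the unit, and the function names and toy gesture templates are illustrative assumptions: an incoming gesture, represented as a sequence of sensor frames, is matched against recorded templates with dynamic time warping (DTW), a classic starting point for gesture recognition.

```python
# Minimal, illustrative sketch of template-based gesture recognition with
# dynamic time warping (DTW). All names and data here are hypothetical toy
# examples, not part of the libraries presented in this unit.

import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two gesture sequences of shape [time, features]
    (e.g. successive accelerometer frames), tolerant of differing speeds."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of a
                                 cost[i, j - 1],      # skip a frame of b
                                 cost[i - 1, j - 1])  # align the two frames
    return float(cost[n, m])

def recognize(gesture: np.ndarray, templates: dict) -> str:
    """Return the label of the recorded template closest to the live gesture."""
    return min(templates, key=lambda label: dtw_distance(gesture, templates[label]))

# Toy usage: two recorded one-dimensional "gestures" and a noisy, shorter query.
templates = {
    "swipe": np.linspace(0.0, 1.0, 50).reshape(-1, 1),
    "shake": np.sin(np.linspace(0.0, 6.0 * np.pi, 50)).reshape(-1, 1),
}
query = (np.linspace(0.0, 1.0, 40) + 0.05 * np.random.randn(40)).reshape(-1, 1)
print(recognize(query, templates))  # expected: "swipe"
```

In an actual instrument, each template would be recorded from a physical sensor and associated with a sound process, so that recognizing (or continuously following) a gesture controls the corresponding sound; building such mappings interactively is what the tools presented in this unit are for.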

The exercises enable you to make these instruments and tools your own and to incorporate them into your artist's studio.