Sound Gesture Intelligence

Dr. Greg Beller

Gesture recognition and following

We have seen that direct mapping is possible between gesture data (in an absolute or relative frame of reference) and sound data, enabling, for example, the triggering of pre-recorded or real-time–recorded sounds, or the continuous control of sound effects. Hand tracking already provides a wealth of information for the creation of new instruments, which can also be voice-based.
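As a minimal illustration of such a direct mapping, the sketch below linearly maps a normalized vertical hand position to a filter cutoff frequency and an amplitude. All names and parameter ranges are illustrative assumptions, not part of any specific tracking library.

```python
def map_hand_to_sound(hand_y, y_min=0.0, y_max=1.0,
                      cutoff_range=(200.0, 8000.0),
                      amp_range=(0.0, 1.0)):
    """Map a vertical hand position to (cutoff_hz, amplitude)."""
    # Normalize the position into [0, 1], clamping out-of-range values.
    t = (hand_y - y_min) / (y_max - y_min)
    t = max(0.0, min(1.0, t))
    # Linear interpolation into each sound parameter's range.
    cutoff = cutoff_range[0] + t * (cutoff_range[1] - cutoff_range[0])
    amp = amp_range[0] + t * (amp_range[1] - amp_range[0])
    return cutoff, amp

print(map_hand_to_sound(0.5))  # -> (4100.0, 0.5)
```

In practice the returned parameters would be sent each frame to a synthesis engine (for instance over OSC); the mapping function itself stays this simple.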

Several mappings are possible: a direct relationship between the position and dynamics of the hands or fingers, a relationship mediated by the manipulation of virtual objects, or the introduction of random processes at the heart of the hand–sound relationship.
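The contrast between a direct mapping and one with a random process at its heart can be sketched as follows. The pitch mapping and the semitone spread are hypothetical choices made for this example.

```python
import random

def direct_pitch(hand_x):
    """Direct mapping: normalized horizontal position -> MIDI pitch 48..72."""
    t = max(0.0, min(1.0, hand_x))
    return 48 + round(t * 24)

def stochastic_pitch(hand_x, spread=2, rng=random):
    """Same mapping, with a random offset of +/- `spread` semitones
    injected between hand and sound."""
    return direct_pitch(hand_x) + rng.randint(-spread, spread)

print(direct_pitch(0.5))      # -> 60 (middle C)
print(stochastic_pitch(0.5))  # varies within 58..62 per call
```

The direct version is fully repeatable; the stochastic version makes the instrument's response partly unpredictable, which is precisely the expressive point of adding randomness.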

For gestures, temporality must be taken into account, as well as the position of the hand relative to the body. This is made possible by fusing data from dynamic (temporally precise) and static (spatially precise) sensors.
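One common way to fuse such sensors is a complementary filter: the fast but drift-prone dynamic sensor (e.g., velocity from an inertial unit) is integrated for temporal precision, while the slower but absolute static sensor (e.g., a camera position) continually corrects the drift. The signal names, rates, and blend factor below are illustrative assumptions.

```python
def fuse(prev_est, velocity, camera_pos, dt, alpha=0.98):
    """Complementary filter on a 1-D hand position.

    prev_est   -- previous fused position estimate (m)
    velocity   -- velocity from the dynamic sensor (m/s)
    camera_pos -- absolute position from the static sensor (m)
    dt         -- time step (s)
    alpha      -- trust in the dynamic path (0..1)
    """
    predicted = prev_est + velocity * dt          # dead reckoning (temporal precision)
    return alpha * predicted + (1.0 - alpha) * camera_pos  # drift correction (spatial precision)

# One step: moving at 1 m/s for 10 ms, camera still reads 0.
print(fuse(0.0, velocity=1.0, camera_pos=0.0, dt=0.01))  # -> 0.0098
```

Raising `alpha` favors the responsive dynamic sensor; lowering it favors the accurate but laggy static one.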

Bridging vocal and manual gestures, various computer programs based on artificial intelligence enable the direct manipulation of sound by gesture, by learning temporal relationships from the fused data.
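A minimal sketch of one common technique in this family, dynamic time warping (DTW), which aligns a live gesture against recorded templates; this is an illustrative example, not the specific algorithm used by any particular program mentioned here.

```python
def dtw_distance(a, b):
    """Classic DTW between two 1-D sequences (smaller = more similar)."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, or match along the warp path.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def recognize(live, templates):
    """Return the name of the template closest to the live gesture."""
    return min(templates, key=lambda name: dtw_distance(live, templates[name]))

templates = {"rise": [0, 1, 2, 3], "fall": [3, 2, 1, 0]}
print(recognize([0, 0.5, 1.5, 3.1], templates))  # -> "rise"
```

Because DTW tolerates differences in timing, the same gesture performed faster or slower still matches its template, which is what makes following (rather than mere triggering) possible.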