Hi Myo community,

My name is Balandino Di Donato and I’m a PhD student at Integra Lab (BCU), where HCI (Human-Computer Interaction), design and audio signal processing coexist to create new musical experiences. My work explores ways to map beatboxers’ gestures onto features for electronically transforming beatboxing vocal sounds. This exciting research involves a novel combination of HCI, gesture recognition, machine learning, mixed reality and interaction design, centred around the Myo armband as an input device. In this post I describe a series of initial experiments and software tools that provide a foundation for this research.

The goal of my first experiment was to capture the Myo armband’s data on a laptop in order to drive real-time signal processing modules in the freely available Integra Live software, including filters, reverbs and stereo panning. To do so, I developed the Myo2MIDI script, which maps and routes Myo data into Integra Live via MIDI.
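The Myo2MIDI script itself is not shown here, but the core idea, rescaling orientation data into 7-bit controller values that Integra Live can MIDI-learn, can be sketched in Python. This is only an illustrative sketch: the CC numbers, the Myo-reading callback and the use of the mido library are my own assumptions, not the actual script.

```python
import math

import mido  # requires a MIDI backend such as python-rtmidi

# Hypothetical CC numbers: Integra Live can MIDI-learn any controller.
CC_PITCH, CC_PAN = 20, 21

port = mido.open_output()  # first available MIDI output port


def to_cc(value, lo, hi):
    """Clamp a sensor value to [lo, hi] and rescale it to the 0-127 MIDI range."""
    value = max(lo, min(hi, value))
    return int(round((value - lo) / (hi - lo) * 127))


def on_orientation(yaw, pitch, roll):
    """Callback receiving Euler angles in radians from whichever Myo interface is used."""
    port.send(mido.Message('control_change', control=CC_PITCH,
                           value=to_cc(pitch, -math.pi / 2, math.pi / 2)))
    port.send(mido.Message('control_change', control=CC_PAN,
                           value=to_cc(yaw, -math.pi, math.pi)))
```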

In the video below, the pitch and panning of the first audio file are controlled by orienting my arm after making a fist pose. Similarly, after performing a fingers-spread pose, the playback speed and panning of a second sound file are controlled by the arm’s orientation.

Integra Live and Myo on Vimeo.

After this first experiment, I realised that the range of gestures recognised by the Myo does not cover those performed during human beatboxing; thus, I started to explore different ways of using the Myo to interact with audio software.

For this purpose, I worked on ways to control oscillators’ amplitudes using the forearm’s muscle activity.

Since the Myo2MIDI script is unable to extract EMG data, I developed MyoMapper to achieve this.

MyoMapper, developed using Processing, is an application that extracts all data from the Myo and maps them into MIDI and/or OSC messages. It also includes features for centring and rescaling all Myo data, and it estimates overall muscle activity as the mean of the EMG channels. Like every other value in the system, the EMG mean can be mapped into MIDI and/or OSC messages, centred around a specified value and rescaled within a chosen range.
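MyoMapper handles these steps internally; the sketch below only illustrates the general idea of computing an EMG mean, centring and rescaling it, and sending it as an OSC message with python-osc. The OSC address and the centring logic are my own illustrative assumptions, not MyoMapper’s actual implementation.

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # e.g. Pd or Integra Live listening on port 8000


def centre_and_rescale(value, centre, half_range, out_lo=0.0, out_hi=1.0):
    """Shift a value around a chosen centre, then rescale it into [out_lo, out_hi]."""
    normalised = max(-1.0, min(1.0, (value - centre) / half_range))  # clamp to -1..1
    return out_lo + (normalised + 1.0) / 2.0 * (out_hi - out_lo)


def on_emg(samples):
    """samples: one reading from each of the Myo's eight EMG sensors (raw values around -128..127)."""
    mean_activity = sum(abs(s) for s in samples) / len(samples)  # overall muscle activity
    client.send_message("/myo/emg/mean",
                        centre_and_rescale(mean_activity, centre=64, half_range=64))
```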

Once MyoMapper was developed, I moved on to the second experiment, which involved controlling eight oscillators through the RMS value of the EMG data obtained from each of the Myo’s EMG sensors.
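The per-sensor RMS step is simple enough to sketch; here is a minimal example of how one window of raw EMG data could be turned into eight oscillator amplitudes, assuming an arbitrary window length and the Myo’s roughly -128..127 EMG range.

```python
import numpy as np


def emg_window_to_amplitudes(window):
    """window: array of shape (n_samples, 8), one column per Myo EMG sensor.

    Returns eight amplitudes in 0..1, one per oscillator."""
    rms = np.sqrt(np.mean(window.astype(float) ** 2, axis=0))  # RMS per channel
    return np.clip(rms / 128.0, 0.0, 1.0)  # raw Myo EMG is roughly -128..127


# Example: a 50-sample window of (placeholder) raw EMG data
amplitudes = emg_window_to_amplitudes(np.random.randint(-128, 128, size=(50, 8)))
```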

Myo EMG data in Integra Live Test on Vimeo.

Subsequently, I decided to expand the Myo’s gesture library with three more gestures. The target of the third experiment was thus to control the panning of a sound source within a stereo sound field through hand pose; the sound source was pink noise. The gestures, shown in the picture below, were chosen after establishing a taxonomy of the most easily learnable gestures using an SVM machine-learning system trained with EMG data from the Myo. Specifically, gesture A was used to pan the sound to the left channel, gesture B to pan it to the right channel, and gesture C to place it at the centre.

I used ml.lib, a machine learning library for Pd based on Nick Gillian’s Gesture Recognition Toolkit, developed by Integra Lab and Carnegie Mellon University.
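In Pd this was done with ml.lib’s SVM object; the same idea, training an SVM on labelled EMG feature vectors and classifying incoming windows into the three panning gestures, is sketched below with scikit-learn standing in for ml.lib. The feature choice, window size and placeholder training data are my own assumptions.

```python
import numpy as np
from sklearn.svm import SVC


def features(window):
    """Per-channel features from an (n_samples, 8) EMG window: mean absolute value and RMS."""
    w = window.astype(float)
    return np.concatenate([np.abs(w).mean(axis=0), np.sqrt((w ** 2).mean(axis=0))])


# Placeholder training data: in practice each window would be recorded while holding
# gesture A (label 0, pan left), B (1, pan right) or C (2, centre).
rng = np.random.default_rng(0)
windows = rng.integers(-128, 128, size=(30, 50, 8))  # 30 windows of 50 EMG frames
labels = rng.integers(0, 3, size=30)

clf = SVC(kernel='rbf')
clf.fit([features(w) for w in windows], labels)


def classify(window):
    """Return 0, 1 or 2 for an incoming EMG window."""
    return int(clf.predict([features(window)])[0])
```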

Gesture Recognition using Myo Processing ml-lib on Vimeo.

A couple of weeks later, I was selected for the Moog Sound Lab Residency UK, where I had fun driving the historic System 55 through gestural control using the Myo, MyoMapper and the Theremini.

Analog Sound Synthesis Using Gestural Control on Vimeo.

I employed a similar setup, Myo + MyoMapper, for a few tests conducted at K-Array’s laboratories, where I was invited by Francesco Maffei and Daniele Mochi to try out the newly released KW8. In this example, my aim was to test how perceivable the spatialisation effect is when using the KW8, also called the Owl. Because the KW8 can only be driven through the DMX protocol, the Myo values first had to be mapped into serial values. After this Myo2Serial conversion, the Myo’s yaw, pitch and roll values were used to control, respectively, the yaw and pitch of the speaker and the intensity of the audio signal.
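The Myo2Serial step essentially rescales yaw, pitch and roll into byte values written to a serial interface that drives the DMX side. The sketch below uses pyserial; the serial port, baud rate, byte layout and 0-255 mapping are illustrative assumptions rather than the actual K-Array setup.

```python
import math

import serial  # pyserial

port = serial.Serial('/dev/ttyUSB0', 57600)  # hypothetical serial-to-DMX interface


def to_byte(value, lo, hi):
    """Clamp a value to [lo, hi] and rescale it to a single 0-255 byte."""
    value = max(lo, min(hi, value))
    return int(round((value - lo) / (hi - lo) * 255))


def on_orientation(yaw, pitch, roll):
    """Send speaker yaw, speaker pitch and signal intensity as three consecutive bytes."""
    frame = bytes([to_byte(yaw, -math.pi, math.pi),            # speaker yaw
                   to_byte(pitch, -math.pi / 2, math.pi / 2),  # speaker pitch
                   to_byte(roll, -math.pi, math.pi)])          # audio intensity
    port.write(frame)
```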

Live Sound Spatialisation Using Myo and K-Array KW8 on Vimeo.

My next experiment explored the reality-virtuality continuum. To achieve this, I recorded and looped a crumpling-paper sound, which was then processed using an amplitude modulator driven by the average of the EMG data. The sound was then reverberated, with its dry/wet mix set by the Myo’s pitch value. The experiment explores the boundary between the real and virtual worlds (mixed reality), considering gestures that enable the drawing of a linear trajectory between the two realities. The video shows how it is possible to produce a crumpled-paper sound using real paper, and how the same sound can be played without any paper by enacting exactly the same gesture. It also explores the illusion of taking a commonplace object (a simple bin) and creating the perception that it is a vast cavern by augmenting reality with an audio transformation (a large reverberation).
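The control mappings in this chain are simple enough to sketch: the looped sample’s amplitude follows the normalised EMG mean, and the Myo’s pitch angle sets the dry/wet balance of the reverb. The reverb itself is omitted; the normalisation ranges below are my own assumptions.

```python
import math

import numpy as np


def process_block(dry_block, reverb_block, emg_mean, myo_pitch):
    """Apply EMG-driven amplitude modulation and pitch-driven dry/wet mixing to one audio block.

    dry_block, reverb_block: equal-length arrays (the looped sample and its reverberated copy)
    emg_mean: average EMG activity, assumed to lie in 0..128
    myo_pitch: arm pitch in radians, assumed to lie in -pi/2..pi/2
    """
    gain = np.clip(emg_mean / 128.0, 0.0, 1.0)                    # more muscle tension -> louder
    wet = np.clip((myo_pitch + math.pi / 2) / math.pi, 0.0, 1.0)  # raise the arm -> more reverb
    return gain * ((1.0 - wet) * dry_block + wet * reverb_block)
```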

Mixed Reality Using Myo and Integra Live on Vimeo.

The next stage in my research will be to bring together the findings from these experiments, including the systems developed, to create a prototype system for processing beatboxers’ vocals in real time using hand and arm gestures captured through the Myo. The system is currently at an early stage, but it will eventually enable users to shape beatboxers’ vocal sounds into audio objects positionable within mono, stereo and binaural soundscapes. Moreover, beatboxers will be able to record and loop beats and, through their arm movements, draw trajectories within a virtual environment that vocalised beats (treated as audio objects) can travel along.
Aspects of this idea can already be appreciated through their implementation in Integra Lab’s AHRC Transforming Transformations project, which seeks to explore new approaches to sound synthesis and transformation within immersive environments.
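As a final illustration, positioning a vocal audio object in a stereo field, the simplest of the three target formats, could for instance use standard equal-power panning, with the pan position eventually driven by an arm trajectory. This is a generic sketch, not the prototype’s actual code.

```python
import numpy as np


def pan_stereo(mono_block, position):
    """Equal-power pan of a mono audio object: position runs from -1.0 (left) to 1.0 (right)."""
    angle = (position + 1.0) * np.pi / 4.0           # map -1..1 to 0..pi/2
    return np.stack([np.cos(angle) * mono_block,     # left channel
                     np.sin(angle) * mono_block],    # right channel
                    axis=-1)
```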

Published on: March 8th, 2016
