At its core our AHRC Digital Transformations project “Transforming Transformation” has a simple aim: to instigate a new way of transforming sound that radically improves on existing approaches.

The current state of the art

If a music producer or sound designer wants to transform a sound, for example by applying a reverb or stretching the sound in time, they have a number of possible ways to do this:

  1. Using a software-based audio processor or plugin through its on-screen GUI
  2. Using a hardware controller or hardware audio processing unit through physical controls (sliders, knobs, buttons, trackpads)
  3. Using an audio programming language (Csound, SuperCollider, Max)

All of these methods use a parametric representation of sound transformation. That is, anything we might want to change about the sound is presented as a set of independent parameters: “filter cutoff”, “room size”, “time stretch factor”, and so on. This representation is useful if you mostly want to change only one or two things at a time and have a good understanding of how parameter changes affect the acoustic result.
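
As a concrete illustration of the parametric model, here is a minimal Python sketch (not taken from the project) of a one-pole low-pass filter whose entire interface is a single “filter cutoff” number, much like the knob a plugin would expose:

```python
import numpy as np

def one_pole_lowpass(signal, cutoff_hz, sample_rate=44100.0):
    """A transformation exposed through one independent parameter,
    `cutoff_hz`, in the spirit of a plugin's "filter cutoff" knob."""
    signal = np.asarray(signal, dtype=float)
    b1 = np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)  # feedback coefficient
    a0 = 1.0 - b1
    out = np.empty_like(signal)
    prev = 0.0
    for i, s in enumerate(signal):
        prev = a0 * s + b1 * prev  # y[n] = a0*x[n] + b1*y[n-1]
        out[i] = prev
    return out
```

Everything the user can do to the sound is funnelled through that one number; the mapping from the number to the acoustic result is left for them to learn.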

In this project we will carry out the initial stages of an exploration into an alternative approach: one in which we transform a sound by manipulating a representation of the sound itself. This could be termed a direct manipulation model for sound transformation.

Our starting point will be to implement a 3D environment in which sounds can be “grasped”, “picked up”, “felt” and “moved” such that their position within the environment corresponds to their spatial location within a virtual acoustic space. This will eventually lead to extensive exploration of a range of approaches to representing sound and sound transformation following the direct manipulation model.
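
To give a rough sense of what such a correspondence might involve, the hypothetical `spatialize` function below maps a source's 3D position to a stereo gain pair using inverse-distance attenuation and a constant-power pan. This is only one plausible mapping, not the project's actual rendering, which would be considerably richer:

```python
import math

def spatialize(position, listener=(0.0, 0.0, 0.0)):
    """Map a source's 3D position to (left, right) gains: inverse-distance
    attenuation plus a constant-power pan driven by the x offset."""
    dx, dy, dz = (p - l for p, l in zip(position, listener))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    gain = 1.0 / max(distance, 1.0)  # clamp so sources near the listener don't blow up
    pan = max(-1.0, min(1.0, dx / max(distance, 1e-9)))  # -1 = hard left, +1 = hard right
    theta = (pan + 1.0) * math.pi / 4.0  # constant-power pan law
    return gain * math.cos(theta), gain * math.sin(theta)
```

Under this mapping, dragging a sphere through the space continuously changes what you hear, with no named parameters exposed at all.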

The initial design for the system is shown below. We wanted to create a conceptually simple model that we could implement rapidly and test with our target users as quickly as possible. The mockup shows the user’s hand as an avatar within a virtual space (the interior of a cuboid), sound sources as spheres (which can be dragged from a 2D “palette”), and spatial trajectories as 3D traces.
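
A plausible data model for that mockup, with hypothetical names and deliberately simplified types, might look like the following: each sphere pairs a sound with a position, and moving it extends its recorded 3D trace.

```python
from dataclasses import dataclass, field

@dataclass
class SoundSource:
    """A sphere in the cuboid: a sound plus its current 3D position."""
    sample_path: str
    position: tuple                                  # (x, y, z) inside the cuboid
    trajectory: list = field(default_factory=list)   # recorded spatial trace

    def move_to(self, position):
        """Direct manipulation: moving the sphere both relocates the
        source and extends its 3D trajectory trace."""
        self.position = position
        self.trajectory.append(position)

palette = ["kick.wav", "voice.wav", "rain.wav"]      # hypothetical 2D palette entries
scene = [SoundSource(palette[0], (0.0, 0.0, 0.0))]   # sphere dragged in from the palette
scene[0].move_to((0.5, 0.2, -1.0))
```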

Published on: June 8th, 2015
