TIMAX FOR THEATRE
The TiMax family of products began life in theatre, pioneering the use of a large-scale digital delay matrix to put into practice the precedence research Helmut Haas published some 70 years ago.
The ability to localize a reinforced voice to a performer and have that image follow the performer around the stage automatically has long been a dream for theatre sound designers and directors.
SoundHub S version for Theatre
The TiMax SoundHub-S source-oriented reinforcement (SOR) system is available with an optional radar-assisted TiMax Tracker package that locates multiple performers, each to within 6” in any direction, and provides convincing image localization for upwards of 90 percent of the audience, no matter where in the house they sit.
In a 2RU package, the SoundHub-S contains all audio and control inputs and outputs, DSP, delay matrix and mix circuitry, and random-access audio players. The SoundHub is programmed via the TiMax control software GUI, which runs on both PCs and Macs, and can also be operated from controls on the front panel with the computer disconnected.
The optional tracking system is modular and expandable, based on the size and complexity of the show. If you decide to add the Tracker, each performer wears a small plastic TT tag, about 1” square and a quarter of an inch thick, that emits ultra-wide-band radar pulses to typically four or six TiMax Tracker radar sensors. The sensors, usually mounted out of sight on a lighting truss, transmit data about the location of performers to the SoundHub via MIDI.
In addition to the SoundHub and tracking system, a sound reinforcement loudspeaker system is of course required. Rather than the LCR clusters or line arrays typically seen in live sound production, however, eight or 16 channels of relatively controlled-dispersion loudspeakers are arrayed to cover separate seating areas, fed with preprogrammed, variably delay-matrixed audio from the SoundHub, plus a number of front-fills.
The objective is to ensure that every audience member receives an acoustic wavefront from each performer about 10 to 20 milliseconds before receiving the reinforcing energy from the loudspeakers. Within this short time window, the brain integrates the two arrivals as one sound, but the listener instinctively localizes to the slightly earlier wavefront arriving directly from the performer. This psychoacoustic phenomenon is often referred to as the Haas or precedence effect.
TiMax achieves this by setting up multiple unique delay relationships between every performer’s wireless microphone and each loudspeaker reinforcing it. These relationships are changed every time a performer moves to a different location on stage, in order to maintain the acoustic precedence that makes the audience localize to the performer and not to the loudspeakers.
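The arithmetic behind those delay relationships can be sketched in a few lines. The positions, the speed of sound, and the 15 ms precedence offset below are illustrative assumptions for the example, not TiMax's actual algorithm:

```python
# Sketch of the precedence-delay calculation: delay each loudspeaker feed
# so a listener hears the performer's direct sound first. All values here
# are illustrative, not taken from a real TiMax configuration.
import math

SPEED_OF_SOUND = 343.0  # metres per second at roughly 20 degrees C

def distance(a, b):
    """Straight-line distance between two (x, y, z) points in metres."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def speaker_delay_ms(performer, speaker, listener, haas_offset_ms=15.0):
    """Electronic delay for one speaker feed so the direct sound arrives
    haas_offset_ms before the reinforcement at this listener position."""
    direct_ms = distance(performer, listener) / SPEED_OF_SOUND * 1000.0
    speaker_ms = distance(speaker, listener) / SPEED_OF_SOUND * 1000.0
    # Delay = (direct path time - speaker path time) + precedence offset,
    # clamped at zero because negative delay is not physically possible.
    return max(0.0, direct_ms - speaker_ms + haas_offset_ms)
```

For a performer 10 m from a listener and a speaker 5 m from the same listener, the feed would be held back by roughly 29.6 ms: the 14.6 ms path difference plus the 15 ms precedence offset.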
The TiMax software simplifies the process by allowing on-stage localization zones to be pre-defined as image definitions, which are simply tables of level and delay instructions preprogrammed into the Soundhub, instructing it to place the performer’s audio image in the appropriate zone on stage.
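An image definition, as described above, is essentially a lookup table: for each stage zone, a level and delay per loudspeaker output. A minimal sketch, with invented zone names and values rather than real TiMax data, might look like this:

```python
# Illustrative "image definition" tables: per-zone level (dB) and
# delay (ms) instructions for each loudspeaker output. Zone names,
# outputs, and numbers are hypothetical examples.
image_definitions = {
    "downstage_left": {
        "speaker_1": {"level_db": 0.0, "delay_ms": 18.0},
        "speaker_2": {"level_db": -3.0, "delay_ms": 27.0},
    },
    "centre_stage": {
        "speaker_1": {"level_db": -2.0, "delay_ms": 22.0},
        "speaker_2": {"level_db": -2.0, "delay_ms": 22.0},
    },
}

def recall_zone(zone):
    """Return the level/delay instructions to load into the matrix
    when a performer's mic is imaged to the given stage zone."""
    return image_definitions[zone]
```

Recalling a zone is then just a table lookup, which is what makes the approach fast enough to follow a moving performer.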
There’s no black magic involved. TiMax is based on psychoacoustics and the physics of sound: primarily the Haas (precedence) effect, working alongside the interaural cues, the differences in a sound’s arrival time at each of our two ears, that allow us to localize a sound source in space.
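The interaural time difference cue mentioned above can be estimated with a standard textbook approximation. The Woodworth formula and the head radius below are generic psychoacoustics values, not anything TiMax-specific:

```python
# Back-of-envelope interaural time difference (ITD) using the Woodworth
# approximation: ITD = r * (theta + sin(theta)) / c, where r is the head
# radius and theta the source azimuth. Standard textbook values only.
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """ITD in seconds for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return head_radius_m * (theta + math.sin(theta)) / c
```

A source directly ahead gives zero ITD; a source at 90 degrees gives roughly 0.66 ms, which is about the largest time difference the head geometry can produce.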
In the Box
The TiMax Two SoundHub contains a 16-input x 16-output delay matrix, expandable to 64 x 64 in 2RU via additional DSP and I/O cardsets. The different delays to the various channels are preprogrammed for multiple onstage zones using Smaart software and laser measuring devices, the data outputs of which are entered into a custom spreadsheet that yields the requisite level and delay data for each audio channel. During a show, a performer’s position in three dimensions can be tracked by the TiMax Tracker radar tracking system. As the performer moves from one zone to another, the I/O matrix is switched with soft crossfades, so that audio levels and delays remain appropriate to the performer’s position for accurate psychoacoustic localization according to the Haas effect.
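The soft crossfade between zones can be pictured as a blend between two matrix tables. A real system would interpolate smoothly in DSP at the audio sample rate; this sketch, with invented table shapes, just linearly blends the target values:

```python
# Minimal sketch of a soft crossfade between two zones' matrix settings
# as a tracked performer moves. Table format {output: (level_db, delay_ms)}
# is an assumption for illustration, not the actual TiMax data model.
def crossfade(zone_a, zone_b, t):
    """Blend two matrix tables; t runs from 0.0 (zone_a) to 1.0 (zone_b)."""
    blended = {}
    for out in zone_a:
        level_a, delay_a = zone_a[out]
        level_b, delay_b = zone_b[out]
        blended[out] = (
            level_a + (level_b - level_a) * t,  # interpolated level (dB)
            delay_a + (delay_b - delay_a) * t,  # interpolated delay (ms)
        )
    return blended
```

Halfway through a move (t = 0.5), each output simply sits at the midpoint of its two zone settings, so levels and delays never jump audibly.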
Projecting the Performance
The success of the system depends to a large degree on a performer’s ability to project direct sound adequately throughout the house, providing a strong direct-sound anchor that the audience can unconsciously correlate with the delayed reinforcement emanating from the loudspeakers.
You must have a good anchor. In an opera, the anchor is putting out 130dB SPL at 1m—an opera singer is as loud as a 12-and-a-horn box. That’s a perfect anchor, but if you or I walked on stage in a big domed building like the Royal Albert Hall wearing a lav mic, and walked around talking at our normal speech level, it would be very hard to get the fader on the desk anywhere near that level without hearing the room, because although the system is distributed, it’s still exciting the room a bit. But opera singers have so much acoustic power in their voices that you get a very strong direct anchor.
But for weaker voices, one of the tricks you can use is to build first-wavefront reinforcing speakers into the stage or the set to create an artificial time zero somewhere near the performer. While the ideal would be to strap a loudspeaker to the performer’s chest, in reality you can accomplish almost the same thing by positioning a loudspeaker overhead, halfway back up the stage, which is virtually at time zero as far as the audience is concerned. So, as well as feeding the voice through the PA with appropriate delays, you run it undelayed through the speakers above the performers’ heads, which helps the audience hear it as the anchor reference. That’s been done in a number of theatre spaces, including the Royal Danish Theatre [in Copenhagen], as well as in large outdoor venues, where it has helped localize the first arriving wavefront.
Clearly, delay imaging or actor tracking doesn't suit every environment. High-energy rock musicals do not have strong enough anchors to Haas-localize the vocals, and simply level-panning them in the PA would be annoying, so more conventional sound reinforcement arrays are more appropriate for such productions.
Increased intelligibility at lower volume
Most people find it fairly easy to follow a conversation in an animated cocktail party, because directional cues greatly increase intelligibility in noisy environments. This “cocktail party effect” is equally applicable to performance sound.
If you mix multiple sound sources to mono in a large listening space such as a theatre, all of the different elements of the mix sit in the same place spatially. That limits the ability of the auditory cortex to discriminate based on binaural cues relative to spatial position, and hence limits your ability to focus on what interests you. If instead sound is presented in a distributed, multiple-mono way, as it is in nature, where every sound is essentially monophonic with a single path to each ear, your binaural system enables your auditory cortex to focus on each part as you wish. This means that in a source-oriented reinforcement system, we don’t have to boost the level of a solo to make it stand out, as we would in a conventional mix. The solo has its own spatial signature, and through cross-correlation of the time differences of arrival between your two ears, you can filter out almost everything else.
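The cross-correlation process described above can be illustrated with a toy example: sliding one "ear's" signal against the other and finding the lag where they line up best. The signals and the brute-force search are invented for illustration; the auditory system does something far more sophisticated:

```python
# Toy time-difference-of-arrival estimate by brute-force cross-correlation,
# illustrating the binaural process described above. Pure Python, no DSP
# library; signals are hypothetical.
def estimate_tdoa(left, right, max_lag):
    """Return the lag (in samples) at which `right` best matches `left`."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Correlate the overlapping portion of the two signals at this lag.
        corr = sum(
            right[n + lag] * left[n]
            for n in range(len(left))
            if 0 <= n + lag < len(right)
        )
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

left = [0, 0, 1, 2, 3, 0, 0, 0, 0, 0]
right = [0, 0, 0, 0, 1, 2, 3, 0, 0, 0]  # same shape, two samples later
```

Here the correlation peaks at a lag of two samples, which is exactly the arrival-time difference between the two signals; the brain uses analogous peaks to assign each source its own spatial position.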