Surround sound is one of the most significant advances in audio reproduction in quite some time. But what about all the existing 2-channel content? Is there a way to expand it to fill a multichannel surround-sound system? Many have tried to devise algorithms that derive center and surround channels from a 2-channel source, but most produce less-than-satisfying results.
Sonic Focus (www.sonicfocus.com) is taking a different approach to the problem. Instead of manipulating the signal's phase and delay in various ways to simulate the extra channels, the company has developed an algorithm called Extrapolator that uses physical modeling to synthesize a multichannel sound field.
Extrapolator is part of a software-based, floating-point DSP engine called BlackHawk, which includes several functional elements. Unlike many other DSP algorithms, BlackHawk does not reduce the input's dynamic range to create the headroom its processing math needs to avoid clipping distortion. Instead, the Sonic Focus algorithms rely on a look-ahead analysis, which preserves greater dynamic range and produces lower distortion.
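Sonic Focus hasn't published its math, but the general idea of look-ahead level analysis can be sketched: instead of statically attenuating the whole signal to leave headroom, scan a short window ahead of each sample and reduce gain only where a peak would actually clip. The function name, window size, and ceiling below are illustrative assumptions, not Sonic Focus's implementation.

```python
import numpy as np

def lookahead_gain(signal, window=256, ceiling=1.0):
    """Per-sample gain that prevents clipping by scanning ahead,
    rather than statically attenuating the entire signal.
    (Illustrative sketch only; BlackHawk's actual analysis is proprietary.)"""
    n = len(signal)
    gains = np.ones(n)
    for i in range(n):
        # Look at the upcoming window of samples...
        peak = np.max(np.abs(signal[i : i + window]))
        # ...and back off only if a peak would exceed the ceiling.
        if peak > ceiling:
            gains[i] = ceiling / peak
    return gains

# A loud burst that would clip, surrounded by quieter material
# that a static headroom cut would needlessly attenuate:
x = np.concatenate([0.5 * np.ones(1000),
                    1.4 * np.ones(100),
                    0.5 * np.ones(1000)])
y = x * lookahead_gain(x)
```

The quiet passages pass through at unity gain; only the region around the burst is turned down, which is how look-ahead processing preserves dynamic range.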
The input signal first encounters the SmartStream block, which uses four sophisticated fast Fourier transform (FFT) algorithms to analyze the audio waveform in real time. This allows BlackHawk to select which of the 20 available DSP algorithms will be applied to the signal.
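The specific FFT analyses inside SmartStream are proprietary, but the kind of real-time spectral analysis that can steer algorithm selection is standard DSP. The sketch below computes two textbook features from an FFT of one audio frame: spectral centroid (brightness) and spectral flatness (tone versus noise). The feature choice is my assumption, purely for illustration.

```python
import numpy as np

def spectral_features(frame, sample_rate=44100):
    """FFT-analyze one audio frame and return simple features a router
    could use to pick a processing algorithm. (Illustrative only;
    SmartStream's actual analysis is not public.)"""
    window = np.hanning(len(frame))
    power = np.abs(np.fft.rfft(frame * window)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # Spectral centroid: power-weighted mean frequency ("brightness").
    centroid = np.sum(freqs * power) / np.sum(power)
    # Spectral flatness: geometric/arithmetic mean ratio.
    # Near 0 for pure tones, near 1 for white noise.
    flatness = np.exp(np.mean(np.log(power + 1e-12))) / (np.mean(power) + 1e-12)
    return centroid, flatness

t = np.arange(1024) / 44100
tone_c, tone_f = spectral_features(np.sin(2 * np.pi * 1000 * t))
rng = np.random.default_rng(0)
noise_c, noise_f = spectral_features(rng.standard_normal(1024))
```

A 1 kHz sine yields a centroid near 1 kHz and very low flatness, while white noise yields flatness much closer to 1 — exactly the kind of distinction a content-aware engine needs before deciding how to process a signal.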
FIG. 1: Sonic Focus's Extrapolator uses physical modeling to create a virtual front sound field that emphasizes the vocals and solo instruments, and a virtual rear sound field that minimizes those elements and adds detail to the background sounds and ambience.
The next block is called Adaptive Dynamics Refinement System (ADRS), which identifies the qualities that are typically lost in digital compression schemes (such as MP3, AAC, and WMA) and reconstructs that information in an effort to reverse the effects of lossy compression. It also separates the vocal and solo instruments from the background and ambience information.
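ADRS's separation method is proprietary, but the classic starting point for pulling center-panned material (vocals, solos) away from ambience in a stereo mix is mid/side decomposition. The sketch below is that textbook technique, shown only to illustrate the concept — it is not Sonic Focus's algorithm.

```python
import numpy as np

def mid_side(left, right):
    """Classic mid/side decomposition. The mid channel captures material
    common to both speakers (center-panned vocals and solos); the side
    channel holds what differs (wide effects, ambience). Textbook
    technique, not Sonic Focus's proprietary ADRS separation."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

# A centered vocal appears identically in both channels...
vocal = np.array([0.2, -0.4, 0.3])
# ...while room ambience differs between left and right.
amb_l = np.array([0.05, 0.02, -0.03])
amb_r = np.array([-0.04, 0.01, 0.06])
mid, side = mid_side(vocal + amb_l, vocal + amb_r)
```

The centered vocal cancels completely out of the side channel, leaving only the decorrelated ambience — the raw material a derived rear sound field is typically built from.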
Then the signal enters Extrapolator, which creates two virtual acoustic environments: one in front of the listener and the other behind (see Fig. 1). The front sound field emphasizes vocals and solo instruments, and the rear sound field adds detail to the background effects and overall ambience while minimizing the vocals and solo instruments. Using physical-modeling techniques, the algorithm can simulate various acoustic environments while avoiding the picket-fencing effect (vector-based dropouts at certain positions in the sound field) common to surround-sound simulators that rely on phase and delay. In addition, Extrapolator does not depend on preencoded matrix information, which means it works with all forms of audio.
The resulting 5.1- or 7.1-channel signal passes through another new algorithm called StudioEQ, the final element of the BlackHawk DSP signal chain. StudioEQ has 8 channels of 32-bit, real-time equalization for final mastering. The algorithm uses tube modeling based on high-end studio equalizers to generate filters of any shape, a feature not available on low-cost EQs until now. Upon leaving StudioEQ, the signal can drive a 5.1- or 7.1-channel surround system.
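StudioEQ's tube-modeled filters are proprietary, but the conventional building block of any parametric equalizer is the peaking biquad, with coefficients from Robert Bristow-Johnson's widely used Audio EQ Cookbook. The sketch below shows that standard starting point; the sample rate and filter settings are arbitrary examples.

```python
import math

def peaking_biquad(f0, gain_db, q, fs=48000):
    """RBJ-cookbook peaking-EQ biquad coefficients — the standard
    parametric-EQ building block. (StudioEQ's tube-modeled filters
    are proprietary; this is only the conventional baseline.)"""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    # Normalize so the leading denominator coefficient is 1.
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def apply_biquad(x, b, a):
    """Direct-form I filtering of a list of samples."""
    y = []
    for n, xn in enumerate(x):
        yn = (b[0] * xn
              + (b[1] * x[n - 1] if n >= 1 else 0)
              + (b[2] * x[n - 2] if n >= 2 else 0)
              - (a[1] * y[n - 1] if n >= 1 else 0)
              - (a[2] * y[n - 2] if n >= 2 else 0))
        y.append(yn)
    return y

# A +6 dB peak at 1 kHz should roughly double a 1 kHz sine's level.
fs = 48000
b, a = peaking_biquad(1000, 6.0, 1.0, fs)
x = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(4800)]
y = apply_biquad(x, b, a)
rms = lambda s: (sum(v * v for v in s) / len(s)) ** 0.5
ratio = rms(y[2400:]) / rms(x[2400:])  # steady-state gain at 1 kHz
```

At the center frequency the peaking biquad's gain is exactly 10^(gain_db/20), so the measured ratio lands near 2 for a +6 dB boost.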
BlackHawk can be implemented on various hardware platforms. For example, Sonic Focus is working with Analog Devices to port it to that company's chipsets. It can also run on the latest generation of general-purpose processors under the upcoming Windows Vista operating system, where it will be optimized with SIMD (Single Instruction, Multiple Data) streaming instructions and parallelized for multicore processors; the projected overhead is less than 15 percent of the CPU's bandwidth.
Sonic Focus intends to demonstrate a prototype of BlackHawk at the Consumer Electronics Show in January 2007, with commercial products to follow later in the year. The new algorithms could usher in a whole new era of high-quality multichannel audio from low-cost tools — a trend I heartily applaud.