In the evolutionary cycle of computing technology, the music industry is right on the cusp of something huge. 2006 promises to be a pivotal year in the computer-upgrade arena as it ushers in several next-big-thing technologies, including 64-bit CPU architectures, dual-core processors, dual dual-cores, support for ludicrous amounts of physical RAM, new OS and applications to take advantage of all these goodies and more.
Teeming with the equivalent of Darwinian genetic mutation, computers are about to take the kind of quantum leap in processing power and system design that only comes about once every decade or longer. But what does it all mean for you and your music? How will digital audio applications benefit today, tomorrow or five years from now? Do bigger numbers merely promise “more” and “faster,” or will they actually improve audio quality and the sound of your mixes?
The PowerPC G5 in Apple's machines was probably the first 64-bit design to gain any kind of commercial success in the personal desktop market. Housed within every Power Mac G5 manufactured since 2003, the chip has gained a firm foothold despite the fact that many installed operating systems and applications are not yet taking full advantage of its power. But on the PC front, unless you're a compulsive hardware geek or bleeding-edge early adopter, chances are pretty good that what currently lies under your computer's hood is a CPU based on some iteration of decade-old 32-bit hardware technology. Although clock speeds have steadily increased throughout the years, very little has been done in the way of improving overall PC processor architecture and data busing since the mid-1990s — until now, that is.
During the past two years, several major advances have been made in PC processor designs, with both Intel and AMD battling for early supremacy in the 64-bit space. Known within the industry as x64 CPU architecture, support for 64-bit word lengths at the processor level comes as an extension to the current 32-bit x86 architecture. Both chip giants came out slugging early: AMD led the x64 charge with its Opteron and Athlon 64 chips, while Intel initially bet on its incompatible IA-64 Itanium, targeted mainly at enterprise and server applications, before adding x64 extensions to its own desktop line.
You see, 64-bit technology has been slow to catch on with desktop users — not so much because of cost (the chips are actually less expensive than many flagship models of the past), but because consumers typically wait for hardware and software vendors to align before making the jump. At the time, nobody was doing much to educate the consumer on the benefits of 64-bit computing. And without consumer interest in 64-bit products, OS and application vendors weren't in any hurry to partner and invest time or money, so complacency and the status quo snarled x64's momentum all around.
A few key industry alliances and proofs of concept later, both consumers and the media seem to be jumping on the bandwagon, and now definitely appears to be the time for 64-bit computing to shine. The benefits are pretty straightforward: First, a 64-bit CPU architecture boasts deeper and wider data registers than x86 technology, processing data in packets twice the size of what a 32-bit CPU can handle. Of the places a CPU stores and retrieves data, registers sit closest to the execution units — cache, RAM and disk follow, in increasing order of distance — making registers the fastest accessible data storage. A Pentium 4-class CPU has only eight 32-bit general-purpose registers and eight floating-point registers. The x64 architecture doubles both counts to 16, and each general-purpose register is now 64 bits wide. The bottom line is that 64 bits delivers more data per clock cycle, helping systems run faster and more efficiently, which translates into better performance.
The second benefit, equally key to boosting performance, is x64's ability to address significantly more RAM than today's 32-bit chips, which are limited to 4 GB of RAM split between the OS and applications (a practical limit of about 3 GB in Windows). A 64-bit architecture extends this limit to a binary-swirling 1,024 GB, or 1 terabyte (TB), of accessible RAM. This helps applications run faster when working with extremely large data sets by loading them directly into memory, bypassing the need for slowpoke virtual-memory access or read-from-disk cycles.
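The arithmetic behind those ceilings is easy to verify. Here's a quick sketch (Python used purely for illustration; the figures are the ones cited above, not vendor specs):

```python
import math

GB = 2**30            # bytes in a (binary) gigabyte
TB = 1024 * GB        # the 1 TB figure cited above

print(2**32 // GB)    # 4 -> a 32-bit address space tops out at 4 GB
print(math.log2(TB))  # 40.0 -> 1 TB needs only 40 address bits,
                      #         comfortably inside a 64-bit pointer
```

In other words, the 1 TB "practical limit" doesn't even scratch what a 64-bit address could theoretically reach; it reflects what chipsets and operating systems of the day choose to support.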
The direct benefit of this for a DAW environment, of course, is that more of your session's data — be they audio tracks, sample loops, plug-in instances, virtual instruments or real-time background processes such as time shifting — can reside and be processed entirely within RAM, giving the software quicker access to, and uninterrupted computation of, the data. A practical example would be keeping a significantly larger pool of loops in memory, allowing for all sorts of new and exciting ways to process an entire song's worth of audio in real time. Likewise, you could store exceedingly large sample sets in RAM and access more of them simultaneously, never having to stream off disk again. Experts also feel that more RAM access will inspire a whole new generation of complex audio-resynthesis and sample-based synthesizers that can be fed real-time audio and respond immediately to live input.
Granted, with memory module sizes and prices where they are today, few 64-bit PCs will have that much memory, but applications will be designed with the potential to access all of it. As the demand increases in years to come, RAM prices will surely come down, and the ability to address a terabyte of physical memory will become extremely attractive.
None of this extra power is relevant unless every component in your system is updated to work with it. To benefit the most, you must upgrade not only your computer to a 64-bit CPU but also the OS, applications, audio drivers and plug-ins. True performance gains and enhanced system-wide functionality cannot be had from one without the others. On the OS front, Microsoft answered 64-bit wishes early — Windows XP Professional x64 Edition has been on store shelves for some time now — as did Apple with Mac OS X Tiger, released early in 2005. Both operating systems are extremely well-equipped for pro audio, sporting mature and stable kernels supporting symmetric multiprocessing. With the proper chip in place, users who want to be the first to venture into 64-bit computing can do so immediately, and both provide backward-compatibility support for legacy 32-bit applications.
Finding apps that fully support 64-bit processing is a little trickier, however. Cakewalk recently released Sonar 5, the first truly 64-bit parallel-processing DAW, which is reportedly enjoying performance gains in the 20 to 30 percent range over the company's previous 32-bit technology. “This kind of performance gain is huge,” says Cakewalk Chief Technology Officer Ron Kuper. “If you have a 3GHz processor, a 30 percent performance boost makes it feel more like a 4GHz processor. When was the last time you got a whole GHz for free?”
PC owners, in particular, looking to make the 64-bit jump in 2006 are waiting with bated breath for the next major Windows release, Windows Vista (previously code-named Longhorn), which promises to bring some additional benefits to DAWs. “Microsoft has redesigned the audio-driver stack so that less work is done in kernel mode, which equals increased stability, meaning no more blue screens!” Kuper says. “It will also have new kernel features to allow DAWs to run more smoothly, such as ultra-high-priority threads and the ability to page-lock more RAM.”
DUAL-CORE AND MORE
Intel first experimented with desktop parallelism when it introduced Hyper-Threading Technology about three years ago. Rather than adding a second core, Hyper-Threading allowed the operating system to allocate work to unused execution units within a single-core processor, effectively fooling the OS into assuming it was running on a dual-processor system. With CPUs essentially at the ceiling of their clock speeds now, though, true parallel computing provides the next major bump in speed — at least for the foreseeable future.
Parallel computing can come in several forms, but most desktop users today will experience it as either dual-processor (two separate physical chips), dual-core (essentially two identical processors on the same chip) or a combination of both. Because the cores on a single die can talk to each other without crossing an external bus, dual-core designs have the benefit of being slightly faster than slinging two chips together. But as Apple has most recently demonstrated with its new Power Mac G5 Quad, you can successfully chain together two dual-core 64-bit processors for a total of eight double-precision floating-point units per computer, along with four velocity engines. Yes, this is the kind of power progression you can look forward to!
Intel has been shipping dual-core Pentium processors for nearly a year at many different price levels, some of which support Hyper-Threading Technology, enabling four threads. The company expects a very fast dual-core ramp during the next few years and will also deliver quad-core chips in the not-too-distant future. But what lies beyond that?
“Looking further out over the next decade, we continue to see valued usage models that will benefit from even greater degrees of computer parallelism,” says Dan Snyder, PR manager at Intel Corporation. “One way to continue to deliver higher levels of performance from parallelism is by continuing to add more cores to future CPUs.”
To address this, Intel has already invested heavily in an R&D infrastructure intended to enable these future platforms by innovating past the hardware and software challenges that come into play when moving to tens and even hundreds of parallel cores. “We have the research programs in place to investigate these issues, so if, in the future, we decide to build CPUs with many cores, we have the ability to do so,” Snyder says. And with Apple migrating from its current PowerPC processors to Intel architecture in 2006, both PC and Mac platforms are on a level playing field for the first time.
Another slant on multicore processing came when Apple introduced the Distributed Audio Processing concept in Logic Pro 7. With it, many users began to consider the economics of buying multiple G5s and linking them in a network to create a powerhouse that could easily topple even the most expanded Digidesign Pro Tools|HD setup. Whether through this networked approach or integrated multicore/multi-CPU systems, multiprocessing isn't a feature that universally benefits every category of software application. Fortunately for musicians and producers, DAWs stand to gain a great deal from it, as they can break down their workload into parallel subtasks.
In a Cakewalk Technology White Paper presented at AES in October 2005, Kuper noted that Sonar 5 enjoys substantial performance gains, upward of 50 percent, when going from a single-core to a dual-core configuration. “Note that parallel processing in a DAW does not come automatically,” Kuper cautions. “A DAW must be carefully designed to support parallelism, which means it must carefully break its workload down into smaller parts. Also, it must carefully manage memory and resource contention between different threads and processors. Finally, it should be designed for general scalability across any number of processors, not only two or four.”
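The decomposition Kuper describes can be pictured with a toy sketch. This is purely illustrative — it is not how Sonar works, the track data and chunking scheme are invented, and a real DAW would do this in optimized native code rather than Python (whose interpreter lock prevents true thread parallelism anyway):

```python
from concurrent.futures import ThreadPoolExecutor

def mix_chunk(tracks, start, end):
    # Sum the same slice of every track into one partial mix buffer.
    return [sum(t[i] for t in tracks) for i in range(start, end)]

def parallel_mix(tracks, workers=2):
    n = len(tracks[0])
    step = (n + workers - 1) // workers            # split the samples into chunks
    spans = [(s, min(s + step, n)) for s in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda span: mix_chunk(tracks, *span), spans)
    mix = []
    for part in parts:                             # reassemble the chunks in order
        mix.extend(part)
    return mix

# Three constant "tracks," eight samples each; every mixed sample sums to ~0.6.
tracks = [[0.1] * 8, [0.2] * 8, [0.3] * 8]
print(parallel_mix(tracks))
```

The hard parts Kuper lists — contention between threads, scaling past two or four workers — live in exactly the seams this sketch glosses over: who owns which buffer, and how the chunks rejoin without one core waiting on another.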
MY MATH SOUNDS BETTER
Okay, but what makes 64-bit sound better, and why should you make the jump? To answer this, it's important to first distinguish between different kinds of bit depth. The 64-bit system depth discussed up to this point is the processing bit depth at which an OS and an audio application function internally. It has no direct bearing on, and should not be confused with, the bit depth of your audio files on disk. Talking about 64-bit computing does not imply a jump from 24-bit to 64-bit audio files; it means expanding the processing precision and dynamic headroom within the DAW.
As you know, a digital audio signal's dynamic range is dictated by the number of bits, or word length, used to describe its individual samples. A 16-bit audio system delivers a theoretical signal-to-noise ratio of 96 dB — not bad, but musicians have far outgrown that old standard for production. A well-implemented system using the current de facto standard of 24 bits can deliver a signal-to-noise ratio of 140 dB or better, which is higher than virtually any converter available today can achieve. So why the worries? Isn't a 24-bit audio system good enough? If you plan to mix two or more tracks of audio, unfortunately not!
Instead of a long and boring math lesson, all you need to know is this: In a nondithered audio system, the signal-to-noise ratio in decibels is roughly equal to the bit depth times six — 1 bit equals 6 dB. Also, every time you double the number of tracks or inputs at unity gain, the master-bus output level increases by about 3 dB. Across 64 channels, this translates into 18 dB more output level and would force you to pull down each fader by 18 dB to avoid clipping the output bus. Because a 6dB level increase or decrease equals one more or less bit, respectively, it's clear that you'll end up with a bunch of 21-bit signals. Conversely, pushing faders above unity gain requires adding bits to preserve headroom and avoid overflow errors.
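Worked out in code, that fader math looks like this — a back-of-the-envelope sketch using only the two rules of thumb just stated (3 dB per track doubling, 6 dB per bit):

```python
import math

def bus_rise_db(n_tracks):
    # Each doubling of unity-gain tracks raises the master bus by ~3 dB.
    return 3 * math.log2(n_tracks)

def bits_lost(attenuation_db):
    # Rule of thumb: every 6 dB of attenuation shaves off one bit.
    return attenuation_db / 6

rise = bus_rise_db(64)
print(rise)                  # 18.0 -> pull every fader down 18 dB to avoid clipping
print(24 - bits_lost(rise))  # 21.0 -> your 24-bit tracks now behave like 21-bit ones
```

Sixty-four tracks is six doublings of one, hence 6 × 3 = 18 dB, hence three bits gone before a single plug-in has touched the signal.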
Considering the hundreds, if not thousands, of gain changes that take place throughout an active session, bits are falling off and mutating everywhere, steadily eroding the precision of your audio. Additionally, many mix processes and effects plug-ins can degrade your audio's fidelity by generating enormous intermediate values during processing — values that exceed the current register size and must be rounded or truncated to fit. Ultimately, this need for extra bits — to increase internal dynamic range and improve summing-bus math — is the reason that higher-bit-depth computing is so important to the future of digital audio.
FLOAT YOUR BOAT
As you can see, math is integrally responsible for sound quality in a DAW. When I spoke with mastering engineer David Cain, he pointed out that although increased bit depth is a giant step forward, he'd prefer to see 64-bit floating-point audio paths from start to finish. A firm believer that floating-point math sounds better than integer math for audio (and his gear list proves it, including Weiss, Cedar, SADiE and more — all floating-point specimens), Cain suggests that a good start would be redefining the plug-in architecture so that the path to and from plug-ins is entirely floating-point. Ideally, he'd like to see a fully floating-point system, with the AES/EBU spec expanded to include a floating-point interface for hardware devices. In fact, he'd take floating-point connections, hardware and software before 64-bit computing.
This sentiment rings close to home with Wiley Hodges, senior product line manager for Core OS and developer products at Apple. “The critical thing to consider is that it's more about having a floating-point approach to audio rather than it is about bit depth; floating-point is much more critical to precision than bit depth,” he says, emphatically reiterating that a 64-bit operating-system depth and application bit depth are not necessarily one and the same. “In the Mac OS, Core Audio has floating-point throughout — not just at the summing stage, but farther back in the chain. You want to maintain the highest precision throughout the entire stream of computation, and all Core Audio applications can do so at every process.”
Apart from subjective sonic differences, floating-point “everything” means you'd never have to worry about clipping anywhere in the system ever again. Suffice it to say that both methods have benefits: Floating-point math — such as that used in host-based Pro Tools systems — has the capacity for virtually unlimited headroom (around 1,500 dB), whereas fixed-point systems typically offer more uniform precision and a deterministic noise floor.
Kuper is quick to point out that Sonar 5 has just answered the request for 64-bit floating-point signal paths into and out of plug-ins, as well as providing the ability to export mixes as 64-bit floating-point WAV files, which means you can retain that internal resolution even in your mastering stage. “I'm not sure it makes sense to put a 64-bit floating-point interface on a converter box, though,” he says, calling it a matter of physics. “The best converters have a noise floor of about -120 dB, which is a bit less than the 144dB dynamic range of 24-bit audio. In other words, the best converters today can't even fill a 24-bit sample with complete accuracy, so the extra bits that a 64-bit double-precision float could provide wouldn't be able to help.” However, for devices such as digital mixers and outboard effects gear, he adds that a 64-bit floating-point signal could be very useful, as it would allow you to master using outboard gear while retaining the full internal precision at each processing step.
DSP VS. NATIVE
Despite all of this good news for native DAWs, DSP vendors such as Universal Audio claim that DSP-card-based systems like the company's popular UAD-1 platform, in fact, represent the most convenient and scalable way of adding usable horsepower without having to purchase a new computer or upgrade every two years. What's more, the cost of the cards can often be instantly offset by the value of the bundled software. But as cheap native processing is on target to quickly out-horsepower DSP solutions this year, does 2006 mark the beginning of the end for DSPs?
“In the near term, we believe there is a strong growth market in DSP cards,” says Mike Barnes, director of marketing for Universal Audio, noting that current CPUs are still unable to handle large sessions at high sample rates and small buffer sizes. He goes on to say that Universal Audio has found the emerging practice of running soft synths on the host and effects and mix processing on DSP cards to be a practical and reliable way of working around the problems of trying to run every session in real time — even on the latest generation of 64-bit computers. “Every time you double the sample rate, you require twice the horsepower to run the session, and most people are struggling to run everything they want without bouncing or freezing, even at 44.1 kHz,” he says. “With higher sample rates and wider data paths — be they 32-bit or 64-bit — the processing power required keeps rapidly climbing.”
Naturally, the promise of guaranteed performance has been the DSP's chief selling point since the beginning of digital audio. DSPs do only what they're instructed to do, and in the case of audio processing, that means they don't monitor system buses, draw images or share the processor with other unrelated tasks: All they do is calculate audio processes.
“The most obvious benefit that DSP-based systems have over host-based is low latency,” says Gannon Kashiwa, market manager of professional products at Digidesign. Of concern to anyone wanting to mix inputs and process effects in real time, I/O latency and plug-in delay can cripple a native session long before running out of processing power even becomes an issue. If the plug-ins you're using introduce delay and you stack them, the delay will quickly add up to intolerable levels. (Many native plug-ins introduce no delay at all; you can stack those up all day without adding latency to your signal path.)
“[With a DSP-based system], depending on the algorithm, the latency can be as low as four samples — one sample in, two for processing and one sample out,” Kashiwa says. At a session sample rate of 44.1 kHz, that equates to less than one-tenth of a millisecond of latency! “In addition, the TDM II bus enables signals to reach any processor on the bus in one sample period, so overall system latency is kept to the absolute minimum,” he continues, speaking specifically of the Pro Tools|HD environment.
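Converting sample counts into time makes the comparison concrete. (The 256-sample buffer below is just a typical native setting chosen for illustration, not a figure quoted by Digidesign.)

```python
def latency_ms(samples, sample_rate):
    # One sample of delay lasts 1/sample_rate seconds.
    return samples / sample_rate * 1000

print(latency_ms(4, 44100))    # ~0.09 ms: the four-sample DSP path described above
print(latency_ms(256, 44100))  # ~5.8 ms: a typical native I/O buffer, each direction
```

Stack a few milliseconds of buffering on the way in, a few on the way out and any plug-in delay in between, and the native round trip quickly reaches the range a performer can feel.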
DSPs can also run processes in double-precision mode. The Motorola chips used on Pro Tools|HD and HD Accel cards, for instance, are 24-bit processors with 56-bit accumulators. Plug-ins can be instructed to run in 48-bit mode, in which they can then take full advantage of an internal dynamic range of more than 288 dB and use twice the processing for calculating filters and other math-intensive operations. Dither can then be applied before the signal is truncated back to 24 bits to be transported to the next DSP, which maximizes low-level detail.
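To see why the wide accumulator matters, here's a hypothetical quantization sketch (my own illustration, not Digidesign code): the same sample carries vastly less rounding error on a 48-bit grid than on a 24-bit one, which is why dither is saved for the final step back down to 24 bits.

```python
def quantize(sample, bits):
    # Truncate a [-1, 1) sample onto a signed fixed-point grid.
    q = 2 ** (bits - 1)
    return int(sample * q) / q

x = 0.123456789
err48 = abs(x - quantize(x, 48))
err24 = abs(x - quantize(x, 24))
print(err48)  # tiny, on the order of 1e-15
print(err24)  # roughly 1e-7 -- the error dither must mask at the output
```

Doing the intermediate math at 48 bits keeps those truncation errors far below the 24-bit noise floor until the very last stage, instead of letting them accumulate at every filter coefficient along the way.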
From a developer's point of view, the audio industry is a small niche market. With a few thousand sales of any plug-in considered a major success, the added protection of working with specialized DSP hardware acts as a security blanket, often leading the better plug-in manufacturers to develop exclusively for hardware. “Piracy destroyed Kind of Loud's business,” Barnes says, speaking of the popular software company that Universal Audio bought in 1999 and whose product development it discontinued in 2003. “So native plug-ins are completely off the table for us right now even though, of course, they're easily possible. Developing plugs at the very cutting edge and with the best scientists in the industry all costs us a great deal of investment. I would love to think we could rely on the honesty and integrity of users to avoid pirated software, but that is a daydream. A brief breakdown in the copy-protection mechanism and your business can be over, so this is core to our way of thinking and developing.”
LOOKING INTO THE SILICON BALL
Taking a more explorative glance into the future, one of the coolest concepts bandied about for achieving high horsepower is the “chip farm” — a motherboard housing several open sockets (five, seven, 10 or more) that simply await user installation of as many CPUs as desired. Rather than being restricted to the current multicore number du jour, you could expand your processing power in much the same way you currently do by popping in RAM. Digital audio users who wish to process everything natively could conceivably stock their systems full of CPUs running in parallel and thereby compete with dedicated DSP systems at a fraction of the cost. Although most chip makers are moving toward multiple cores rather than multiple sockets for the consumer market, Intel already offers multiple-socket systems in the realm of high-end workstations and enterprise servers. “When multicore, Hyper-Threading and multisocket are combined, you have the possibility of double-digit real and virtual cores available to the OS,” Snyder says.
Another truly exciting bit of buzz around industry circles is that of “hardware splitting,” which is essentially hardware virtualization. With this, the hardware will allow more than one operating system to run simultaneously — fully functional, side-by-side and completely independent — but on the same physical system. The combined benefits of higher-bit-depth processors and multicores would allow you to virtually break up a system into extremely powerful but noninterdependent “machines,” if you will.
Imagine the possibilities: One part of your CPU could be your main host DAW, running on a specialized, lean DAW-ified version of your favorite OS; another part could be slimmed right down to be an extremely accurate and efficient effects-processor block. Yet another part could become a dedicated sampler running on its own proprietary OS. Each part, or machine, essentially runs completely independently of the others and could therefore have its own streamlined and direct access to independent hardware, I/O interfaces, buses, drivers and so on. Throw some shared memory between these parallel systems, and you get some interesting live-media and production possibilities!
Naturally, many enabling developments must first take place for both of these concepts to happen (getting everything off the current single-bus architecture, specialized OS development, cooperation from application and plug-in developers and more) — not the least of which is thermal management. Once cooling issues are solved and all of those processors are kept from melting down, you can count on these becoming big trends.
EVEN MORE AND FASTER?
As if the real transition to 64-bit isn't enough to consider, there's long been talk of 128-bit computing down the road. Despite the bigger-better-faster motto of the computer industry, audio insiders feel such a move would offer little or no practical advantage to audio processing or mixing in a DAW. Perhaps at an esoteric level, being able to do math with 128-bit precision would enable some new and interesting kinds of DSP or synthesis that were previously impossible, but all agree that the industry is at least another 10 years away from any movement.
“For us, the single most important trends in 2006 will be the proliferation of PCIe, the introduction of Mactel machines and the arrival of Windows Vista,” Barnes says, noting that the compatibility aspect of these alone will keep Universal Audio and the majority of audio developers scrambling to keep up. “Taking advantage of multicore or making apps true 64-bit is of lower priority and, in some cases, of debatable relevance for audio applications, because it all depends on the OS implementation to really understand if there are actual performance gains to be made that the user can tangibly feel.”
Meanwhile, speed is definitely on Kashiwa's mind; he suggests that the two main benefits of faster hosts will be more powerful virtual instruments and greater overall system capability. “As sessions become more complex, more power is required to keep the GUI snappy and responsive,” he says. “I also think that as the Internet gets speedier and more people have high-speed access, collaboration between musicians, mixers and producers will become commonplace. I have been using a DigiDelivery server at home for quite a while now, and it's amazing how simple it is to send and receive projects anywhere in the world. When [Internet] speeds increase, it's going to become routine.”
With so much on the horizon, it will surely take time for each new development to hit its stride. But it's certainly safe to say that the pro-audio industry has never been quite this jazzed about showing off a new technology as it is with 64-bit processing. Mix-in-the-box naysayers are finally changing their tunes.