FIG. 1: IBM's Power5 microprocessor includes two processing cores.
In the unbridled rush toward computers with terahertz clock speeds and many terabytes of data storage, a few stumbling blocks lie in the path. Among the most important is the issue of heat, which increases with clock speed and other factors, such as gate density; in fact, today's microprocessors are already approaching the thermal limit beyond which silicon starts to break down. Apple has addressed this with liquid cooling in its G5 models, and Intel could soon follow suit, but it's an expensive solution.
Another concern is the delay introduced by the interconnections between gates. The speed at which signals propagate through copper wire is limited by the wire's resistance and capacitance. As the gate density on a chip increases, the interconnecting wires get shorter, which is good, but they also get thinner, which raises their resistance and thus increases the delay.
More significant delays arise from the relatively slow connection between the processor and main memory. It can take 400 times as long to fetch a piece of data as it does to execute an instruction, meaning that the processor is furiously spinning its wheels while waiting for more data to process. On-chip memory caches and instruction-level parallelism (working on one instruction while waiting for the data that another instruction needs) help alleviate this bottleneck, but only to a point.
These problems have led to slowdowns in the development of faster processors. For example, Intel delayed the Prescott, a version of the Pentium 4 with 125 million transistors (compared with 55 million in the previous version), due to manufacturing troubles, and its performance improvements underwhelmed analysts when it was finally released. In addition, the company has postponed the introduction of a 4 GHz Pentium and completely stopped development on next-generation Pentium and Xeon chips.
Instead, Intel, IBM, and other chipmakers are shifting their focus to designs that include two or more processor cores, or computational engines, on a single chip. Such architectures are already being used in computers with multiple processor chips on the same motherboard, but integrating several processing engines within a single chip could lead to even higher plateaus of performance while reducing heat and delay concerns.
The idea is to divide complex tasks among multiple cores, allowing those tasks to be completed more efficiently. Because the cores reside on a single chip, their interaction time is greatly reduced compared with traditional multiprocessor machines. The clock speed of each core can be slower than that of single-core chips, which reduces the amount of heat generated and spreads it over a larger surface area. Even so, the effective computational speed is greatly increased; according to IBM, a dual-core processor can perform roughly twice as many operations per second as a single-core chip with the same architecture running at the same clock speed.
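To make the idea concrete, here is a minimal sketch of dividing one large task among cores, using Python's standard multiprocessing pool. The function names and the sum-of-squares task are illustrative assumptions, not anything from a particular chipmaker's toolkit; the point is simply that each worker handles its own slice of the data independently.

```python
# Hypothetical sketch: splitting one big computation across cores.
# The task (a sum of squares) and all names here are illustrative.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each core sums the squares of its own slice of the data."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=2):
    # Split the data into one chunk per worker core.
    size = len(data) // workers
    chunks = [list(data[i * size:(i + 1) * size]) for i in range(workers)]
    chunks[-1].extend(data[workers * size:])  # remainder goes to the last chunk
    # The pool farms each chunk out to a separate process (core),
    # then we combine the partial results.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))
```

Because the chunks are independent, the workers never wait on one another; combining the partial results at the end is the only serial step.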
Multicore processors are not a panacea, however. For one thing, software must be designed specifically to take advantage of multiple cores, and some types of applications are better suited to exploit multicore processors than others. Fortunately, graphics-rendering and table-lookup operations are prime candidates for parallel processing, and both are important for electronic musicians.
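As a hedged illustration of why table lookup parallelizes so well, the sketch below resolves lookups into a precomputed sine wavetable (a staple of audio synthesis) in independent blocks, one block per worker. The table size, function names, and block scheme are assumptions chosen for clarity, not a real synthesis API.

```python
# Hypothetical sketch of a parallel table lookup, in the spirit of
# audio wavetable synthesis. All names and sizes are illustrative.
from multiprocessing import Pool
import math

# A small "wavetable": one cycle of a sine wave, precomputed once.
TABLE_SIZE = 512
WAVETABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def lookup_block(indices):
    """Resolve one block of lookups; no block depends on any other."""
    return [WAVETABLE[i % TABLE_SIZE] for i in indices]

def parallel_lookup(indices, workers=2):
    # Carve the index stream into blocks and hand one to each worker.
    size = max(1, len(indices) // workers)
    blocks = [indices[i:i + size] for i in range(0, len(indices), size)]
    with Pool(workers) as pool:
        results = pool.map(lookup_block, blocks)
    # Reassemble the blocks in their original order.
    return [sample for block in results for sample in block]
```

Because every lookup reads the shared table without modifying it, the blocks can run on separate cores with no coordination at all, which is exactly the property that makes such workloads prime candidates for multicore chips.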
In 2001, IBM was the first to introduce a dual-core processor, known as the Power4, which was updated to the Power5 in May 2004 (see Fig. 1). In the same month, Intel announced that its new desktop and server microprocessors would be multicore designs; the company now has a prototype dual-core version of the Itanium 2. Sun Microsystems introduced the dual-core UltraSPARC IV in February 2004, and an 8-core chip, code-named Niagara, is expected to appear in 2006. Clearly, multicore designs are likely to become the next standard for microprocessor architecture, allowing ever-higher levels of performance to be achieved within the limits of silicon and copper.