
First Take: 3≠C

Gino Robair's October 2008 EM editor's note, in which he explains what EM's product review ratings mean.

When describing the inhabitants of the fictional town of Lake Wobegon on A Prairie Home Companion, Garrison Keillor memorably notes that “all the women are strong, all the men are good-looking, and all the children are above average.” Ah, if only it were a similar situation with audio products: it would make it a lot easier to write reviews!

Recently, a distributor of a product that received a pair of 3s expressed great dismay, saying that such a rating would kill the item in the marketplace. While I know for a fact that EM's reviews do not hold that kind of sway, he did bring up a popular misconception that some readers might share: that a rating of 3 means the product is average — in the negative sense — like a C grade for a high-school student who hopes to get into a prestigious university. But that's not how EM's rating system works.

I do agree with the distributor that to the casual reader of the magazine — someone who might judge a product from a quick glance at the meters and maybe the last sentence of a review — anything less than a 5 could be construed as bad. But to an astute, longtime EM reader, who is interested in the details of a review and is seriously considering a purchase, a 3 means that the product delivers what it should in a specific category, such as Audio Quality. “Good; meets expectations” is how we've described that rating since we redefined our system in February 2006.

We all use products like that. Sometimes the feature set is killer, but the audio quality is, well, just okay. I've also known the opposite to occur. For reviews, EM's editors make sure that the text and ratings are consistent, which can mean that the reviewer has to reconsider aspects of the review — positive or negative — before it goes to print. In fact, the editor will challenge the reviewer's conclusions if he thinks a rating is too high or too low based on the body of the review. We don't want to say something is “Unacceptably flawed” (a 1 rating) or “Amazing; as good as it gets with current technology” (a 5) unless we believe it to be true.

In addition, we don't rate products of various types, prices, and complexity against each other across the board in our reviews, because you cannot accurately compare a simple, inexpensive product to a highly complex, high-ticket item. When we give a product such as the Korg KO-1 Kaossilator (an item that costs less than $200) a rating of 4 for Ease of Use and Audio Quality, we are rating it against other products designed for a similar purpose — in this case, to make sound and loop audio — in a particular price class. It's not going head-to-head with the Korg OASYS, a full-featured keyboard workstation costing upwards of $8,000.

So if another product in the same issue (August 2008 in this case) gets 3s for Ease of Use and Audio Quality, such as Redmatica Compendium 1.5, does it mean that the inexpensive synth/looper sounds better and is easier to use than a specialized suite of software tools designed to do very complex things? Of course not, because we're not comparing a software bundle to a hardware instrument. Each is being rated on how it performs within its own class of tools. Although there might be nothing that does quite what the apps in the Compendium bundle do, for example, the user experience can be evaluated and rated after a period of time. Our reviews are designed to give you a real-world evaluation that should match the typical user experience, because our writers test the products for several weeks in their projects.

Fortunately for our editorial staff (and occasionally to the chagrin of our advertisers), we're not pressured by our sales staff to produce glowing reviews for “paying clients.” The products we choose to cover are selected based on what we think is most relevant to our readers, and we judge these items on their merits alone.

In describing what the ratings signify, we've kept in mind the real-life expectations of our readers. As a reminder, we have reintroduced the ratings descriptions on the first page of the Reviews section (see p. 74). I hope that you find them useful in evaluating how the products we review fit your own production needs.