Master Class: Pros and Cons of Automated Mastering

Can a Robot Replace an Engineer?

Since the turn of the century, audio production has been increasingly transformed—some would say degraded—by shortcuts. Instrumental loops and MIDI packs offer instant partial takes that you can readily arrange and record. Plug-ins tender factory presets that promise perfect-sounding kick drum, electric guitar, and vocal tracks at mixdown without breaking a sweat. Even mastering software has entered the game, with developers claiming to deliver clearer, wider, and bigger-sounding releases without the aid of a professional mastering engineer.

However, even these new tools require at least one warm body to decide which loop or preset sounds best. Therein lies the rub with mastering: Unless you have a super-flat room and a neutral monitoring chain—not to mention highly trained ears—you can’t possibly know whether the bottom end really needs to be bigger, the lead vocal louder, or the cymbals softer. What’s more, in mastering, processing that improves one element of a mix can damage another if it’s applied incorrectly. This is one stage of the production process where you need the chops.

Fig. 1. MixGenius’ LANDR website offers automated mastering at very low cost and with almost instant turnaround.

A company called MixGenius wants to change all that. Its online, subscription-based service offers completely automated audio mastering that promises “…results that rival professional studio work… at a fraction of the cost of studio mastering” (see Figure 1). The underlying processing, dubbed LANDR, first uses an algorithm to determine your uploaded song’s genre. An adaptive engine then purportedly makes frame-by-frame processing adjustments, applying multiband compression, EQ, stereo enhancement, limiting, and aural excitation to the degree the algorithm determines they’re needed. Minutes later, your mastered track is ready for download in MP3 or uncompressed WAV (16-bit/44.1kHz) format. Subscription rates range from $6 to $39 per month; only the most expensive tier provides WAV masters without additional fees. Nonsubscribers can get two low-resolution MP3 masters delivered free and listen online to unlimited previews of their LANDR-mastered tracks.


Predictably, high-profile mastering engineers have publicly and fervently denounced LANDR, saying its processing lacks the finesse and subtlety required for professional results. But no doubt LANDR’s relatively low price and near-instantaneous turnaround will appeal to some for whom professional mastering has always been financially out of reach. So is automated mastering the next step in the evolution and democratization of pro audio, or is it a pipe dream?

To answer that question, I compared three very different mixes I had previously mastered to the same mixes mastered by LANDR, and to the unmastered mixes. In this article, I’ll describe the sonic differences I heard in my three A/B/C tests, to delineate what automated mastering can and can’t do.

I went into this process with an open mind, and I advise you to do the same. Some of LANDR’s results surprised me, and I think they will surprise you, too. But before we go to the A/B/C-comparison results, it’s important to talk about what mastering engineers do before they touch a single control.



When I am asked to master a project, the first thing I do after receiving mixes from a client is to discuss with them what their objectives are for the project. How big of a bottom end do they want their master to have? How important is competitive loudness compared to better sound quality (higher resolution, lower distortion, tighter stereo imaging, greater dynamic range, more air and nuance)? Are there any odd elements in any of the tracks—an unusually bass-y guitar track, for example—that they want me not to “fix”? Is there any element they do want me to fix, such as a lead vocal they feel is too loud in a particular song’s chorus?

Try giving such discriminating instructions to a robot. An algorithm can’t be programmed to take subjective judgment into account. I may feel the lead vocal is the perfect volume during a song’s chorus, but if my client thinks it sounds too loud, I’m going to lower it. The robot can’t possibly recognize and act on the client’s specific wishes; after all, they’re not on speaking terms. Who do you think will make the client happy?

Once the mastering session is under way, there are many other aspects to address: checking for (and avoiding introducing) phase issues, spectral imbalances, and ephemeral frequency masking; applying selective dynamics control and enhancement; correcting and enhancing stereo imaging; and so on. To understand how LANDR handles these details, it’s helpful to get a brief overview of how the automated service works.


LANDR’s interface is designed to remove the guesswork that mastering the job yourself would entail, and to provide the service quickly and at very low cost. (Remember, this service is not designed for those who already know how to master.) To accomplish all that, and to keep you from making choices that would return bad results, LANDR necessarily gives you just a few options.

Fig. 2. LANDR lets you choose from three Intensity Levels for your master, affecting its loudness.

LANDR defaults to an automatic mastering mode that applies an arbitrary amount of dynamics processing to your mix. Alternatively, you can select from three loudness options (called Intensity Levels): Low, Med(ium), and High (see Figure 2). In my tests using the service, the Low setting essentially preserved the original mix’s dynamic range but normalized the file so that peaks approached 0 dBFS. The next-highest Intensity Level generally reduced the mix’s crest factor by 2 to 6 dB, depending on the original file’s dynamics. As best I could tell, the Med setting sounded (and, judging by my meters, looked) like it applied the same depth of dynamics processing as the Automatic Mastering mode. (At the time I performed my tests, the website wouldn’t allow me to A/B Automatic mode and the Med setting in real time.)
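Crest factor, mentioned above, is simply the ratio of a signal’s peak level to its RMS (average) level, expressed in dB; limiting a mix harder lowers it. As a rough sketch (not anything LANDR itself exposes), here is how you might measure it with numpy:

```python
import numpy as np

def crest_factor_db(x):
    """Crest factor: ratio of peak to RMS level, in dB."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

# A pure sine wave has a crest factor of about 3 dB (peak/RMS = sqrt(2)).
t = np.linspace(0, 1, 44100, endpoint=False)
sine = np.sin(2 * np.pi * 440 * t)
print(round(crest_factor_db(sine), 1))  # 3.0

# Hard-clipping the peaks (a crude stand-in for heavy limiting)
# lowers the crest factor, i.e., squashes the dynamics.
limited = np.clip(sine, -0.5, 0.5)
print(crest_factor_db(limited) < crest_factor_db(sine))  # True
```

A 2 to 6 dB drop in this number, as I observed between LANDR’s Low and Med settings, means the peaks have been pulled that much closer to the average level.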


As expected, the High setting caused a highly noticeable increase in audible artifacts: distortion, squashed transients and bottom end, smeared imaging, very diminished depth and nuance, and so on. But those artifacts would also occur if one were to ask a professional mastering engineer to make their master stupid-loud by severely limiting their mix’s dynamic range (an all too common request, as evidenced by the many horrible-sounding major-label releases over the past decade-plus). LANDR’s High setting is for people who want their master to be competitively loud at any cost.



As the High Intensity Level in LANDR caused too many artifacts and the Low setting left the dynamic range virtually untouched, I chose the Automatic Mastering setting—virtually the same as the Med setting—for LANDR’s mastering of the three mixes I would use for my A/B/C tests. I deliberately chose three mixes that differed widely in quality to see how LANDR would handle each: The first was an excellent mix that needed only a very light touch-up; the second was good but not great; and the third was a horrible mix with many serious flaws that cried out for deep mastering.

Fig. 3. Sample Magic’s Magic AB plug-in allows you to load and play back up to nine mixes for comparison purposes. The names of the loaded tracks have been blotted out here to protect client confidentiality.

Opening a new project in Digital Performer, I instantiated Sample Magic’s Magic AB plug-in on a stereo master track. (Magic AB provides outstanding facilities, including looping and gain matching, that are highly useful when performing A/B/C tests; see Figure 3.) For each of the three songs used for the tests, I imported the unmastered 24-bit mix, my 16-bit/44.1kHz master (for CD release), and LANDR’s 16-bit/44.1kHz master into separate playback slots in Magic AB. That setup allowed me to listen to and compare all three versions of the same mix—unmastered, mastered by human, and mastered by robot—while looping the same song section. First up was the excellent mix that needed very little done to it. Would LANDR handle it with kid gloves?



If there’s one overriding precept for mastering, it’s this: First, do no harm. In other words, if it ain’t broke, don’t fix it. The last thing you want is a master that sounds worse than the unmastered mix. To see if LANDR would only do what was necessary, I first fed the robot an excellent mix of a contemporary country ballad.

In this case, the master LANDR returned was not nearly as loud as mine. To match playback levels, I lowered my master’s volume 2.4 dB in Magic AB, making both masters sound equally loud during playback. That also gave the LANDR master a very slight advantage, as no dither could be applied to my attenuated master post-fader in Magic AB. More important, it illustrated one of the inherent pitfalls of automated mastering, and of LANDR in particular: with limited, widely contrasting options for dynamics-processing depth and no in-between settings available, you can’t nudge your way to the best possible setup; you must accept one of the three arbitrary choices. But I digress.
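Level matching matters because listeners reliably judge the louder of two otherwise similar masters as “better.” Magic AB’s gain matching handles this for you; as an illustration of the idea (not Magic AB’s actual algorithm), you can compute the RMS offset between two files and attenuate the hotter one:

```python
import numpy as np

def rms_db(x):
    """Average level of a signal, as RMS in dB."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def match_gain(louder, quieter):
    """Attenuate the louder file so both play back at equal RMS level."""
    offset_db = rms_db(louder) - rms_db(quieter)
    return louder * 10 ** (-offset_db / 20), offset_db

# Toy stand-ins: the same program material at two playback levels.
rng = np.random.default_rng(0)
mix = rng.standard_normal(44100) * 0.1
my_master = mix * 1.32        # roughly 2.4 dB hotter
landr_master = mix.copy()

matched, offset = match_gain(my_master, landr_master)
print(round(offset, 1))  # 2.4
```

With the levels matched, any remaining preference between the two masters reflects tonal and dynamic differences rather than sheer volume.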

Comparing the two masters to each other and to the original mix was an eye-opener. LANDR’s WAV master sounded thinner in the midrange band, suffered from a narrower stereo image, and had noticeably less depth compared to both my own master and the unmastered mix. It sounded to my ears like LANDR had applied a smiley curve (hyped bass and highs, and attenuated midrange) to the mix. Compared to my master, the vocal on LANDR’s master had very noticeably receded into the background, lessening the song’s impact and undermining one of country music’s primary production goals: Keep the lead vocal up front. Also, the bass guitar’s bottom end sounded a tad flabby and less focused on LANDR’s master. The automated master also amplified an inherently resonant upper-bass band in the male lead vocalist’s chest register, making some frequencies occasionally sound a bit boomy on my Yamaha NS-10Ms (a red flag that monitor is famous for waving). The high end on LANDR’s file also sounded a bit zingy. Along with the attenuated midrange, this made the LANDR master sound overall a little edgy and thin.


In my own mastering of this country mix, I had automated the fader higher at the beginning of the song to correct for a sparsely arranged intro that had been mixed too low. LANDR’s master didn’t execute the same compensation, and the intro sounded relatively weaker (if subtly so) as a result. Although my master had more dynamics processing applied—again, it was noticeably louder than LANDR’s master before I attenuated its level in Magic AB—it sounded warmer and more supple than the LANDR file. In short, LANDR degraded much of what sounded great in the original mix.


My next A/B/C comparison used a mix of an electronica ballad as the source material. The original mix was good, if unexceptional. LANDR did a surprisingly good job mastering this track; so good that, frankly, I was initially taken aback. But the electronica mix was far simpler than the country one, which had much more extensive and varied instrumentation; on the sparser electronica track, LANDR had less risk of fixing one thing only to break another (always a danger when processing a complex stereo mix). Nevertheless, LANDR made the electronica track’s bottom end sound slightly tubby. My master for the same track had a tighter bottom end.

Fig. 4. Very mild bell-curve EQ is applied to only the mid channel of a well-mixed electronica track, using FabFilter Pro-Q. (The title of the custom preset no longer reflects the control settings, as the preset was edited substantially.)

In my mastering for this track, I had also brought the lead vocal slightly forward using mid-side processing. I boosted the mid channel’s level ever so slightly and applied very mild bell-curve EQ to the mid channel only: a broad 0.6 dB cut centered at 271 Hz, and boosts of less than 0.7 dB in narrower bands centered at 1,022 and 3,934 Hz (see Figure 4). LANDR’s master produced a comparatively understated vocal. This was not what my ears told me the track needed, but here it wasn’t open-and-shut as to who addressed the vocal better, me or the robot. My client liked how I mastered the track, but it’s possible she would’ve also liked the robot’s treatment—the two masters sounded quite similar.


Still, my master had a noticeably tighter bottom end. This was partly due to the slight EQ tweaks I’d made in the mid channel. While evaluating the mix during the mastering session, I could hear the bottom end was slightly muddy and the vocal slightly understated—a serendipitous coincidence. My subtle EQ tweaks in the mid channel fixed both issues at once, clarifying the bass band and pulling the vocal slightly forward. Based on what I heard in all three of LANDR’s masters, I doubt mid-side processing was ever applied. And even if the robot could perform M/S processing, in this case it obviously couldn’t “hear” that more than one discrete element in the mid channel would benefit from having the same subtle EQ and stereo-imaging adjustments applied. The robot could see the forest, but not the trees.
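For readers unfamiliar with mid-side processing: the stereo pair is matrixed into a mid (sum) channel, which carries centered elements like the lead vocal and bass, and a side (difference) channel, which carries the stereo spread. Processing the mid channel alone affects only the centered material. A minimal numerical sketch of that matrix (a flat gain standing in for the narrow EQ bands a real mastering chain would use):

```python
import numpy as np

def lr_to_ms(left, right):
    """Encode a stereo pair into mid (sum) and side (difference) channels."""
    return (left + right) / 2, (left - right) / 2

def ms_to_lr(mid, side):
    """Decode mid/side back to left/right."""
    return mid + side, mid - side

# Toy stereo signal: a "vocal" common to both channels plus
# uncorrelated "spread" material that differs between channels.
rng = np.random.default_rng(1)
vocal = rng.standard_normal(1000) * 0.2
spread = rng.standard_normal(1000) * 0.1
left, right = vocal + spread, vocal - spread

mid, side = lr_to_ms(left, right)
mid *= 10 ** (0.5 / 20)  # +0.5 dB on the mid channel only:
new_left, new_right = ms_to_lr(mid, side)  # vocal nudged forward

# With unity gain, the encode/decode round trip is lossless:
m2, s2 = lr_to_ms(left, right)
l2, r2 = ms_to_lr(m2, s2)
print(np.allclose(l2, left) and np.allclose(r2, right))  # True
```

Because the round trip is lossless, an engineer can dip in and out of the M/S domain at will; the skill lies in hearing which channel, and which frequency bands within it, need the treatment.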



Fig. 5. Very intricate and deep mid-side equalization is applied to an extremely muddy and boomy mix, using FabFilter Pro-Q to restore clarity and add punch.

The last test involved a track that was a cross-genre blend of industrial and heavy metal. Produced in a modest home studio, the mix was incredibly cloudy and boomy, and presented a very narrow soundstage. My mastering was very deep on this cut and included mid-side equalization, multiband and wideband compression, stereo-image widening, subtle harmonics enhancement, and limiting (see Figure 5). While there was no possible way to turn this track into an audiophile release—technology has its limits—mastering dramatically improved the mix. The soundstage became a lot wider, and the track much clearer, punchier, richer, and louder.


On this cut, LANDR showed its deepest limitations yet, completely failing to address the many severe problems with the mix. After mastering by LANDR, the track still sounded very muddy and boomy, and had a very narrow soundstage.


Based on my tests of LANDR, several conclusions can be drawn. When presented with an excellent and complex mix, automated mastering doesn’t know when to leave well enough alone and is likely to do damage. With horribly flawed mixes that require intricate analysis and highly selective mid-side processing, the robot doesn’t know where to begin to address the many problems. For relatively sparse mixes that fall in between these two quality extremes, the robot might do a good job.

The problem is, the engine’s apparent analysis of the mix as a whole—focusing on the forest, ignoring the trees—and the algorithm’s broad (vs. selective) application of processing affect individual elements of a mix in ways that are unpredictable. And we haven’t even touched on how cohesively (or not) a robot might handle mastering multiple tracks for an album or other compilation.

Finally, if you’re not completely happy with what a mastering engineer did for a particular mix, you can always ask for specific revisions. The robot doesn’t currently take calls (or adapt to feedback). Until that day arrives, the mastering engineer’s job is safe.

Michael Cooper is a recording, mix, mastering, and post-production engineer and a contributing editor for Mix magazine. You can reach Michael at and hear some of his mastering work at