Master Class: Streaming Concerts

Broadcast your performance from any venue

IMAGINE YOU are about to watch your favorite performer sing your favorite song right to you. But you didn’t have to save your money for an expensive ticket to a sold-out show, wait in a long line in the cold, show your ID, and then pony up for an eight-dollar beer. You didn’t have to remember your earplugs, strain your neck to see past the hulking dude in the front row, hope you can understand the words to the song over that annoying wasted heckler, and you won’t have to find a ride home afterward.

Instead, you’re watching a concert streamed live online from one of many virtual venues available on the Internet. But what goes into making an engaging and exciting show? HD video and lighting are essential ingredients. However, most people agree that audio quality and the music’s sonic presentation are what can actually make or break the online concert experience—just like in a brick-and-mortar concert setting.

There are many reasons for artists to put effort into performing an online show. They don’t have to pick a venue that they can fill with ticket holders; instead, they can perform almost anywhere and tap into a worldwide pool of potential viewers, who enjoy the concert right from their own living rooms. Artists can get support from fans with families (who have a harder time making it to the clubs), and will have many more options for promoting their shows, including possible sponsorships from companies that understand the value of an infinite audience on an exciting new platform. If artists and promoters put in extra effort on the production, they can make the concert seem larger than real life could ever be, with bonuses like backstage cameras and applications that let the audience choose which camera angle or microphones to tune into.

Live online concerts do pose some of the same challenges as brick-and-mortar concerts, however: Artists still have to reach a captive audience and keep their attention, compete with other shows and events happening at the same time, convey the right musical message, and monetize the effort of the show. And of course, we can’t forget the biggest difference between online and brick-and-mortar concerts: the technology, which can be a big challenge. Sending bits of audio and video data down the tubes and expecting them to be reassembled at the other end, properly and in sync, can seem like wizardry, and it takes a talented tech company to pull it off at all, let alone with high sound and visual quality. But in my experience, the benefits of online concerts far outweigh the technical challenges.

The Signal Path

Before getting into production details, it is important to understand the signal path. After the sound travels through the artist’s instrument, microphone preamp, and analog-to-digital interface, it is picked up by the streaming software. The main software available is Adobe’s Flash Media Live Encoder (FMLE), which provides many options for using external A-to-D conversion and external video input. On the other end of the chain—in the concert viewer’s living room, for instance—the digital audio gets converted back to analog electrical energy, and then back to acoustical energy through the speakers.

As a mastering engineer, I am constantly in pursuit of a workstation that records and edits at the highest quality possible, with the goal of working with software that I cannot “hear.” Too many workstation companies make recording software that works and passes audio but stop there, providing no options for perfecting the sound quality or reducing the grunge and distortion inherent in digital audio. As we add more complex factors like video sync and Internet streaming, we must pay even closer attention to precise calculations and programming.

The person to explain this precision is Sonic Studio’s CEO and Chief Programmer, Jon Reichbach. He tells us, “Ensuring the highest level of data precision where we are required to reduce the distortion caused by digital processing is very important for a workstation. Some of the other principles that we feel are important in software audio design are paying attention to details like sound quality, support for multiple formats, support for metadata, and attention to all aspects of the audio processing chain.”

Jon shared with me that Sonic Studio has found that certain audio and software engineering principles can positively or negatively affect the way music sounds. Toward this end, they focused their work on two areas: advanced digital signal processing for audio applications, and optimizing the interaction between the audio processing, the computer hardware, and its operating system. The developers at Sonic also recognize that a computer is a very noisy environment, and by designing their own unique system architecture (Sonic Studio Engine) they have optimized the system, resulting in a clearer and more transparent sound on recording and playback.

With respect to online concerts, the audio processing chain Jon refers to now incorporates the complexities of the Internet. Sound quality suffers in many ways when audio is transferred over the Internet, especially due to limited network bandwidth—in particular, upload speed. This is why audio is usually encoded and compressed when streamed, which reduces information and causes fidelity loss, higher noise levels, digital artifacts, and other issues. For most encoders, this introduces quantization noise and results that vary with playback level. (When higher bandwidth and bit rates are available, these issues are reduced.) Codecs can also introduce delays and cause loss of sync between audio and video, and hardware limitations add further constraints.
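To put the bandwidth squeeze in numbers, here is a quick back-of-the-envelope comparison. The 48 kHz/24-bit stereo figures below are typical studio settings I am assuming for illustration, not values from any particular streaming platform:

```python
# Compare the bit rate of uncompressed studio audio with a typical AAC stream.
SAMPLE_RATE = 48_000   # samples per second (assumed studio setting)
BIT_DEPTH = 24         # bits per sample
CHANNELS = 2           # stereo

pcm_bps = SAMPLE_RATE * BIT_DEPTH * CHANNELS  # uncompressed PCM bit rate
aac_bps = 192_000                             # a common AAC streaming rate

print(f"PCM: {pcm_bps / 1e6:.3f} Mbps")                 # 2.304 Mbps
print(f"AAC: {aac_bps / 1e6:.3f} Mbps")                 # 0.192 Mbps
print(f"Compression ratio: {pcm_bps / aac_bps:.0f}:1")  # 12:1
```

A roughly 12:1 data reduction is why the encoder has to discard information, and why artifacts appear when the bit rate is squeezed further.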


Every concert-streaming platform I’ve tried is wildly different from the next. Why don’t they all sound the same? Is it the technical quality of the stream? Is it the “live-ness” of the stream? Is it the ease of use of the platform? Could it be future support for other formats, such as surround, Ambisonic, or soundfield arrays? Of course, it is all of these things. To understand the differences, we need to understand the platform’s encoding processes, upload speeds, and the way the stream is shared across many users.

Streaming services are paired with content delivery networks (CDNs) that transfer huge amounts of audio and video data. Some are better than others; the differences largely depend on where exactly the CDNs place their streaming servers. I performed a streaming test with some of the biggest streaming services available, using the CDNs they provide; my results are available online.

Every CDN seems to have the goal of supporting and delivering the highest-quality video content, with audio quality coming second. When you are streaming a soccer game or C-SPAN, the audio probably doesn’t need to be high quality. But we are talking about streaming concerts, which are a big part of CDNs’ business; unfortunately, many of them aren’t delivering the audio properly. In my experience, incoming streams have had dropouts, level changes, and sync issues that stretched for more than a minute when measured against the outgoing stream. Overall, I find Verizon’s EdgeCast to be the best and most reliable, while Akamai is the largest and most widely used.

Video Codecs

Video codecs are an important and sometimes confusing part of the streaming-concert process. All of the streaming services I have mentioned support the H.264 video standard, which supports the highest-quality 1080p compression available and integrates AAC at 96, 128, 192, 256, and 320 Kbps. This standard will soon be upgraded to H.265 to support 4K streaming, which allows about four times the pixel resolution of 1080p but requires a higher bit rate while streaming, and much more processing power.

VP6, the other available codec, is technically more efficient but sacrifices quality and options for audio fidelity. It is ultimately being upgraded to VP9, which will offer higher color quality and new compression priorities. The reason we care about video compression is to get the best-quality video without having to downgrade the audio bit rate on a limited Internet connection.
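As a rough planning aid, you can sanity-check whether a given video/audio bit-rate combination fits your measured upload speed. This is a sketch; the 50-percent headroom figure is a conservative rule of thumb I am assuming, not a number published by any streaming service:

```python
def stream_fits(video_kbps: float, audio_kbps: float,
                upload_mbps: float, headroom: float = 0.5) -> bool:
    """Return True if the combined stream uses no more than `headroom`
    (assumed 50%) of the available upload bandwidth."""
    total_mbps = (video_kbps + audio_kbps) / 1000.0
    return total_mbps <= upload_mbps * headroom

# A 2,500 kbps H.264 video stream plus 256 kbps AAC audio:
print(stream_fits(2500, 256, upload_mbps=5.0))   # False: 2.756 Mbps > 2.5 Mbps budget
print(stream_fits(2500, 256, upload_mbps=10.0))  # True: 2.756 Mbps fits in 5.0 Mbps
```

Leaving headroom matters because upload speed fluctuates during a show; a stream that saturates the connection on a quiet network will stutter the moment anything else competes for bandwidth.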

Giving Stage is an online concert venue that supports causes and nonprofit fundraising.

Showtime

An engineer needs to do many things that will make or break the concert mix, both artistically and technically. I make sure that my systems are running smoothly by turning off unnecessary computer processes (such as Apple’s Spotlight), using an efficient hard drive, spanning separate buses for extra processing power, and using a separate AD/DA converter. Other good practices include using short cables, and employing a power conditioner and cooling fans. Be sure your levels are in a comfortable place without needing to compress the dynamic range before it hits the codecs. As long as you are using 24-bit processing in your DAW, you can mix at lower levels and not hit your limiters or compression too hard on the master bus. Pay attention to what is happening musically so you can make adjustments; this will always sound better than a static, flat mix. Just because you are using data compression, it does not mean you’re off the hook for providing a dynamic and interesting mix to draw in the listener.


Then we have the challenge of lighting for video and choosing the correct type of camera for streaming. This becomes an issue when you’re deciding between an interlaced or progressive video signal: Interlaced video has a lower data rate but requires more processing to interpolate the frames; progressive video is a full, cleaner image from the beginning, so it is easier to process and easier to keep in sync with the audio. Progressive video also replicates motion more accurately. Because interlaced video requires more processing, we often lose sync with the audio, and the image suffers when movement is involved. I try to work with switchers or cameras that can output a progressive video signal, which I have found streams better through FMLE.

I use a multicamera production with a live video switch for most of my work, and I also have help from a wonderful video team. Most people will use one camera, and that’s just fine. Find a place where the whole band can be seen, but be sure not to move the camera around if you can avoid it.

Even if you have done everything you can to run an efficient and great-sounding computer system, you will still hit your first obstacle when trying to run a synced audio and video feed to the web. The quality and availability of technical support and learning resources vary: You can sometimes find a best-practices FAQ, but it is usually just a video of a person with a laptop or webcam.


Streamed concerts are meant to be scalable. Remote streamed concerts are even being supported by Teradek with its Bond hardware, which bonds several wireless networks, adaptively pulling from the best signals and summing them to create a more powerful 3G/4G connection. This lets you effectively webcast anywhere you can get a cell-phone signal. And of course, we can’t forget Bob Weir’s multimillion-dollar streaming studio, a black-box video facility where he streams big-name concerts for paid tickets.
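The core idea behind bonded cellular can be sketched as pooling whatever throughput each link currently offers. This is a toy model under my own assumptions (including the 0.3 Mbps usability cutoff); real bonding hardware also handles packet scheduling, reordering, and per-link loss:

```python
def bonded_throughput(link_mbps: list[float], min_usable: float = 0.3) -> float:
    """Sum the throughput of all usable links, ignoring any that have
    dropped below an assumed minimum useful rate."""
    return sum(rate for rate in link_mbps if rate >= min_usable)

# Three cellular modems with uneven signal; the 0.1 Mbps link is ignored.
print(round(bonded_throughput([1.8, 0.1, 2.4]), 1))  # 4.2
```

Even when no single carrier can sustain a stream on its own, the pooled connection often can, which is why this approach works from venues with no wired Internet at all.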

Just as you want lighting and staging to create a compelling look for your webcast, you need high-quality audio gear to capture studio-quality sound. Here we have a backyard webcast featuring Lia Rose.

I always run a simple test when considering which brick-and-mortar venue to stream from (a recording studio, someone’s backyard, a bar, etc.). Be sure to plug into a hard-wired Ethernet connection if possible. Then, simply do a web search for “Internet speed test” to see the free options that are available. I like Ookla’s speed test, as it is reliable and easy to use. For the quality of stream I want to execute, I need the upload speed to be at least 5 Mbps. Download speed usually doesn’t matter to me, unless I am going to be doing sync tests at the same venue. I often get asked, “What is the ideal speed for streaming?” The answer is always, “The highest you can find,” but an upload speed of at least 2.5 Mbps—ideally at least 5 Mbps—will provide a good, basic SD stream. Ten Mbps or more is ideal; it will allow for a higher audio bit rate as well.
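Those breakpoints can be wrapped into a quick decision helper once you have a measured upload speed. The 2.5/5/10 Mbps thresholds come from my own practice as described above; the labels are just illustrative:

```python
def stream_quality(upload_mbps: float) -> str:
    """Map a measured upload speed to the stream quality it can support,
    using the 2.5/5/10 Mbps breakpoints discussed in the text."""
    if upload_mbps >= 10:
        return "HD stream with a higher audio bit rate"
    if upload_mbps >= 5:
        return "solid stream at my preferred minimum"
    if upload_mbps >= 2.5:
        return "good, basic SD stream"
    return "too slow: lower the resolution or find another venue"

print(stream_quality(6.0))  # solid stream at my preferred minimum
```

Run the speed test a few times at different hours before committing to a venue; a single measurement on a quiet afternoon tells you little about bar-time congestion.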


If your computer is too slow for external audio and video connections (such as a camera, or an audio interface for controlling the mix and video quality), you can still make a successful webcast with just a laptop and a USB webcam, usually without any extra hardware. Livestream offers dedicated hardware for streaming from any type of device, and Giving Stage, Ustream, YouTube, and Livestream content can all be streamed from any type of device. If the Internet is too slow, there is the possibility of using wireless cell-phone networks; I have successfully streamed this way as well, though the quality wasn’t great.

I record everything I stream on two redundant multitrack systems, but some of the streaming sites are set up to record as well. Giving Stage, Livestream, Ustream, and FMLE can all record the artist’s stream. YouTube records automatically unless you set the event to private.

Mistakes to Avoid

I see three common mistakes when webcasting: First, people rarely give themselves enough time to troubleshoot and test audio/video sync. New hardware and software advances are being made every day, but let’s face it: We are sending out very complex information through the Internet in fast, little spurts and expecting it to be reassembled in people’s living rooms. It isn’t always just plug-and-play.

Second, I find that when problems arise during a concert, the artist often doesn’t know about them and can’t adjust. Be sure to have a second person watch the stream and let you know when there are tech issues.

Last, find a place to stream that has a fast enough upload speed, or downgrade the stream resolution. Stuttering and blackouts turn viewers off quickly.

Ready, Set, Webcast

Concert streaming is an exciting new avenue for music performers and for music lovers. Even in the time I have been working with online concerts, the technology has developed into more than I ever thought it would. I can’t wait to see what the streaming technology companies are going to come up with next, as our Internet bandwidths keep swelling and musicians are looking for more interesting ways to reach their fans.

Piper Payne is a mastering engineer at Michael Romanowski Mastering in San Francisco, a governor on the San Francisco Board of the Recording Academy, and chair of the San Francisco Producers and Engineers Wing. She also works in software development for digital audio workstations and is the Chief Content Officer for Giving Stage.