Music Live Streaming

Blair Liikala
12 min read · Sep 26, 2020


This covers a system to live stream and video record all the concert events at the largest music college in the nation: 20 robotic cameras across 6 concert halls, automatically streaming on a custom-built web platform, built over the course of several years. Half of the work is AV and the other half web technology, but both are required to make it all work. My role covered camera to screen, from designing the audio and video systems to writing code for the website and encoders. Designing the entire end-to-end workflow was a rewarding experience: students and faculty can access video easily and effortlessly, and I walked away with a greater appreciation for just how many moving parts are involved when someone watches a concert online.

Disclaimer, I am no longer at UNT, and this archived article may have outdated or inaccurate information.

UNT has a few more students than most [all] colleges, but has the same issues with digital and bureaucracy as any other large organization. You probably face similar roadblocks at your organization. I took this on with a desire to make something better, and for a number of years had a supervisor willing to see it through. It was not a well-planned, well-thought-out, perfectly executed multi-year deployment. It simply took constant effort getting over the bumps, red tape, and limitations, and often doing many, many things outside the scope of the job (and salary). Sometimes it just takes strong willpower, the one right person to trust in you, gaining a few gray hairs, and a whole lot of overtime to try and make changes happen.

The Goals:

  • All concerts live streamed, from 60/year to 600+/year.
  • 6 concert halls, three control rooms.
  • Streaming and download to authenticated logins.
  • No staff increases. Full time staff size of 2, plus student workers.
  • Pair of new control rooms for multi-camera video.

Part 1: Build a Traditional Broadcast System

Streaming is built on great video production, which is built on great audio quality. Getting to that level of quality took us a few years of listening to faculty and steady AV upgrades.

Being at a music college, and one with the most renowned jazz program, we take our audio seriously. So in 2015, in conjunction with the FOH audio engineer, we designed an expansive Dante audio network: 16 pairs of network switches in 11 locations that can run more than 2,500 channels of audio. There is a dedicated writeup on that install.

In 2017 I designed a video system for our flagship Winspear Hall using Canon lenses and sensors with Vinten robotics; it was the 3rd camera install for the hall. Five cameras were installed around the hall, with two on tripods. The dedicated control room was lightly remodeled, and a proper room was created for the equipment and new video encoders. We have been very happy with the image quality from cinema-style cameras, and the pure ease of use of the Vinten system for setting and recalling shots. Curious how we got a 2/3 B4 ENG lens on a super 35 camera? Gaf tape. ….ok, that’s a joke, …or is it?

In 2017 — 2018 older multimode fiber and copper were replaced with higher strand single mode fiber. It totaled 288 strands between 12 runs involving 3 buildings. This blinking magical light in flexible glass carries the Dante audio network, SDI, and 10G video network.

In 2020, for the additional concert halls, I went with more functional Panasonic PTZ cameras, an AJA router, Blackmagic switchers, AJA Ki Pro recorders, and audio embedders fed from the Dante audio network. Nearly all our IP and SDI signals run over about 30 strands of fiber. Two new control rooms were built for producing the two multi-camera halls, and smaller robotic controllers and recorders were set up in the existing larger audio studio for the three single-camera spaces.

Simplified capture system layout
Cameras in large concert hall, Winspear
Canon C200 with a Vinten robotic head in large concert hall, Winspear
Primary Control Room in large concert hall, Winspear.

Keep in mind this was not just an install around COVID restrictions, but several years of planning and incremental steps. That said, the entire system can be operated remotely from home.

The other very common question is how we handle copyrights.

Part 2: Build the Streaming Systems

Once the broadcast equipment was in place, the next step was getting that pretty signal onto the Internet. That breaks down into two parts: encoding the video into smaller streamable files, and the website to display and manage it.

The system uses a number of automated processes, with a flexible web content management system controlling everything from the live streaming encoders to external services and the viewer experience. Automation reduces staff labor on routine procedures and on complex or access-controlled steps, and moves data around more accurately than a human copying and pasting. Fewer scheduling mistakes and fewer staffing hours make it ideal for scaling up our quantity of streamed events.

The Live Stream Options

Here is the broad overview of options for streaming a concert live, then how it is controlled and managed:

1. Featured

Traditional major ensemble concerts are pushed through YouTube Live and other platforms. They are manually started using a Haivision KB live encoder and staffed with more senior production crew. Concerts are edited and re-processed for on-demand using the internal Wowza Streaming Engine combined with a CDN.

2. Internal

SDI video is processed into the bitrate ladder using on-premise encoders, pushed to our internal Wowza Streaming Engine server, and then to a content delivery network (Fastly). Encoded files are saved and used for immediate on-demand. The process is mostly automated.


3. External

A single high-bitrate feed is sent from an AJA Helo video encoder to an external service (Mux) that processes it into a bitrate ladder and creates live and on-demand stream links that are embedded in our website. The process is mostly automated.

Overview of the three streaming options.

Provider Choices

My first choice on video encoding is to use an on-premises multi-bitrate encoder as it can give the best first-generation encode quality, and greatest control over the streaming video experience. The larger encoders are more stable, loaded with features, and can have great APIs for automation. We also stream enough events that the upfront encoder cost makes more sense than cloud usage.

However in this first rollout I was not able to get more encoding hardware, so I integrated with a service, Mux, to provide the live processing, and also content delivery for when our current encoders can’t handle the simultaneous encoding load.

Mux is video infrastructure as a service that is developer-focused (encoding, storage and CDN). It has a straightforward guide to getting streams in and out of their system with their API. Like others in this category, quality is ok, and the cost is slightly above average. If the thought of a little code gives you anxiety then maybe this isn’t for you.

Shameless plug: I have an add-on for ExpressionEngine to make integrating with their service super easy!

Why Not Use Youtube?

It is technically possible to use, but I found the time spent manually synchronizing events, or the web development cost of automating it, too high compared to other solutions. YouTube also comes with baggage such as copyright strikes from Content ID, a lack of authenticated video features, and video quality. It was just too much complexity to get live streaming automated the way we needed it for the majority of our events. YouTube works better as a platform, not as a service.

We will still use YouTube Live for premiere events. YouTube does come with positives like notifications for subscribers, copyright matching, and other features that make it work well. However, it will be for far fewer, manually controlled events, for now, and this may change in the future.

Facebook is a disaster. Live copyright takedowns that are incorrect, inflated viewership, login required, low quality encoding, no automation unless you build an app…

To make automated streams work, components needed to be written:

  • A read-only endpoint to get concert halls and concert entry data.
  • Scripts to control the live streaming encoders.
  • Code to handle the processing of video, and interaction with external service APIs.

The Brain: ExpressionEngine

My content management system for the last decade has been ExpressionEngine, due to its flexibility in organizing data types and fields. As our integrations with web services change, concert program metadata changes, or features like streaming video are added, ExpressionEngine’s data organization and structure easily handle the changes live. Craft CMS shares a similar philosophy, and roots, and in a complete rebuild situation it would be wise to evaluate it as well.

Other writeups that relate to using EE cover waveforms in the audio player and targeted searching.

EE’s Channels (in pink) are set up like so:

ExpressionEngine Channel layout

A concert entry is linked to, say, a hall. The hall entry has fields storing the video encoder and recorder setup, representing how the system is wired in the real world. With the two linked, the entry can include the hall’s encoder configuration. This information can be returned as a sort of API and used in further scripting and encoder automation.

JSON Output for a concert hall.
Easy to change options in the hall’s entry. These are for controlling the AJA Ki Pro video recorders.
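The screenshots above aren’t reproduced here, but the hall’s JSON output is along these lines. Field and device names below are illustrative, not the actual schema:

```json
{
  "hall": "Winspear Hall",
  "encoder": {
    "brand": "aja_helo",
    "ip": "10.0.0.21",
    "stream_target": "mux"
  },
  "recorders": [
    { "brand": "kipro", "ip": "10.0.0.31", "clip_name": "2020-09-26_wind-symphony" }
  ],
  "mux_enabled": true
}
```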

EE conceptually runs half as a traditional website, with templates generating all the HTML markup, and half as headless. For headless, templates return JSON. Then JavaScript, like Vue, handles displaying data to a viewer with live updating, or stand-alone scripts use the JSON data to control encoders and other audio and video hardware.

As devices change, their parameters can be added as fields in ExpressionEngine, added to the JSON output template, and included in scripting. The same applies to services. If Vimeo were added as an option, it is simply a matter of adding an extension to retrieve streaming keys, storing those in fields on the entry, and outputting them in the JSON feed for further scripting.

ExpressionEngine does take some work to ingest data like an API, and becomes problematic for webhooks from external services like Mux. To get around this as simply as possible, though probably not the most correct way, I have a stand-alone script that saves webhook data to a local file, and EE then reads that file (being sure to sanitize it).
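A minimal sketch of that webhook-to-file workaround (shown in Python for illustration; the file path and port are placeholders, not the production setup):

```python
# Minimal sketch of the webhook-to-file workaround, not the production script.
# An external service (e.g. Mux) POSTs JSON here; we append it to a local
# spool file that ExpressionEngine later reads and sanitizes on its own schedule.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SPOOL_FILE = "webhooks.jsonl"  # hypothetical path; EE polls this file


def spool(raw_body, path=SPOOL_FILE):
    """Validate the body is JSON, then append it as one line to the spool file."""
    payload = json.loads(raw_body)           # reject non-JSON bodies early
    with open(path, "a") as f:
        f.write(json.dumps(payload) + "\n")  # one webhook per line (JSON Lines)
    return payload


class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            spool(self.rfile.read(length))
            self.send_response(200)          # tell the service we stored it
        except ValueError:
            self.send_response(400)
        self.end_headers()


# To run it stand-alone:
# HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```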

Live Concert Stream Workflow:

Step 1: Getting Events Automatically From Calendar System

Concerts are scheduled using an old custom-written web application that is no longer getting new features. So I set up a polling script to pull its event data into ExpressionEngine at regular intervals. A webhook would be ideal here, of course.

Once the ExpressionEngine site has the event data, it can function like a typical calendar with upcoming events, and be the foundation for controlling live streaming.
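A minimal version of that polling loop might look like this sketch (Python for illustration; both URLs and the import endpoint are hypothetical, since the real calendar app and EE endpoints aren’t documented here):

```python
# Rough sketch of the calendar polling script. URLs are hypothetical.
import json
import urllib.request

CALENDAR_URL = "https://calendar.example.edu/events.json"   # legacy scheduling app
EE_IMPORT_URL = "https://music.example.edu/import-event"    # custom EE endpoint


def new_events(fetched, known_ids):
    """Return only the events that have not been imported into EE yet."""
    return [e for e in fetched if e["id"] not in known_ids]


def poll_once(known_ids):
    """Fetch the calendar feed and push any new events to ExpressionEngine."""
    with urllib.request.urlopen(CALENDAR_URL) as resp:
        events = json.load(resp)
    fresh = new_events(events, known_ids)
    for event in fresh:
        data = json.dumps(event).encode()
        req = urllib.request.Request(
            EE_IMPORT_URL, data=data,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)  # EE creates/updates the concert entry
    return fresh
```

A cron job running `poll_once` every few minutes keeps the EE calendar in sync.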

Step 2: Using Hooks To Setup Live Streams

When concert entries are created or updated, a hook allows additional scripts to run at that moment. I created extension scripts that work with various services to automate the process of getting various keys and hashes.

One extension creates live streams with Mux. If the room is enabled for Mux (set by the hall), the script makes an API request to Mux to create a new live stream. Mux returns a stream key for the encoder and a playback key for the player. Both are then stored in the EE concert entry. One script polls the JSON feed to get the stream key and updates the live encoder. The front-end template outputs the playback key.
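The extension is PHP in production; a rough Python sketch of the same request, with placeholder credentials, is below. The request body and response shape follow my reading of Mux’s live stream API and should be checked against their docs:

```python
# Sketch of what the EE extension does when a hall is Mux-enabled.
# Credentials are placeholders; check request/response shapes against Mux's docs.
import base64
import json
import urllib.request

MUX_TOKEN_ID = "your-token-id"          # placeholder
MUX_TOKEN_SECRET = "your-token-secret"  # placeholder


def extract_keys(response):
    """Pull the encoder stream key and player playback ID out of Mux's response."""
    data = response["data"]
    return data["stream_key"], data["playback_ids"][0]["id"]


def create_live_stream():
    body = json.dumps({
        "playback_policy": ["public"],
        "new_asset_settings": {"playback_policy": ["public"]},  # keep the recording
    }).encode()
    auth = base64.b64encode(
        f"{MUX_TOKEN_ID}:{MUX_TOKEN_SECRET}".encode()).decode()
    req = urllib.request.Request(
        "https://api.mux.com/video/v1/live-streams", data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {auth}"})
    with urllib.request.urlopen(req) as resp:
        return extract_keys(json.load(resp))

# The stream key goes to the encoder; the playback ID is stored in the EE entry.
```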

Other services include updating Algolia for search, JW Player for analytics, and Zapier for notifications and other automation magic.

Step 3: Automatic Streaming

To automatically start and stop events, each live encoder brand is controlled by an individual stand-alone script, since each brand has slightly different API quirks. The script runs every minute to start, stop, and set stream names.

Simplified PHP script for the AJA Helo control
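The embedded gist isn’t reproduced here; below is a rough equivalent in Python (the production version is PHP). The Helo exposes a simple HTTP parameter interface; the parameter ID and command values are my understanding of AJA’s REST API and should be verified against your unit’s firmware:

```python
# Sketch of the per-minute encoder control script (PHP in production).
# Parameter ID and command values are assumed; verify against AJA's REST docs.
import urllib.request

START_STREAM, STOP_STREAM = 1, 2  # assumed Helo "Replicator" command values


def helo_url(host, command):
    """Build the HTTP request that starts or stops streaming on a Helo."""
    return (f"http://{host}/config?action=set"
            f"&paramid=eParamID_ReplicatorCommand&value={command}")


def set_streaming(host, start):
    command = START_STREAM if start else STOP_STREAM
    urllib.request.urlopen(helo_url(host, command), timeout=5)


# A cron-style wrapper compares the current time against the concert entry's
# start/end times (from the EE JSON feed) and calls set_streaming accordingly.
```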

Step 4: The Viewer Experience

What the viewer sees on the public side with the player is handled completely by ExpressionEngine and its template system, along with libraries like Vue for interactivity. It controls showing a pre-show message, starting the player, and then stopping the player.

Student and staff workers simply have to point the cameras; the stream starts automatically, the Ki Pro video recorders get event file names, and everything starts and stops automatically. There are also manually started recorders just in case, so at least one of the two systems will get things right!

UI features include the countdown for when it goes live, downloadable calendar reminders, and the program. Statistically I found that extras like hall descriptions and ensemble info are rarely looked at and add bloat to the site, so those were dropped.

I find it is really important on the front-end (what the public sees) not to organize events by concert hall. If I’m looking for a friend’s concert live stream, the hall is one extra, irrelevant piece of information needed to find the event, since I’m not physically going there. While our behind-the-scenes workflows are based around a hall’s technical ability, on the viewer side the hall should not be emphasized.

Post Production: On-Demand and Download

On-Demand Player

Concerts are made available to university members through an authenticated archive. It includes streaming video, video downloads, and audio.

The system automatically creates the AAC audio with concert program metadata and artwork (a screenshot), along with waveform data for streaming on the front end. Simple ffmpeg encoding on the server.
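That ffmpeg step might look something like the sketch below, wrapped in a small command builder. Flags, bitrate, and filenames are illustrative, not the exact production settings:

```python
# Sketch of the server-side audio encode; settings are illustrative.
# Builds an ffmpeg command producing a tagged AAC file with embedded cover art.
def aac_command(src, out, title, artist, artwork):
    """ffmpeg argument list: mixed audio in, tagged AAC with cover art out."""
    return ["ffmpeg",
            "-i", src, "-i", artwork,
            "-map", "0:a", "-map", "1:v",       # audio from input 1, art from input 2
            "-c:a", "aac", "-b:a", "256k",
            "-c:v", "copy",
            "-disposition:v", "attached_pic",   # mark the image as cover art
            "-metadata", f"title={title}",
            "-metadata", f"artist={artist}",
            out]

# Usage, e.g.:
# subprocess.run(aac_command("concert.wav", "concert.m4a",
#                            "Wind Symphony", "UNT College of Music",
#                            "program.png"), check=True)
```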

Video takes a little more. There are three workflows for on-demand delivery:

1a. Mux to On-Demand

Live streams using Mux are immediately made available for on-demand using the live stream encode. Mux’s on-demand assets are not instantly ready, so Mux sends a notification to my server when they are.

However, these are unedited, and the quality of the Mux encodes is lower than we would like.

1b. Live to On-Demand, Internal

Live streams using internal encoding are immediately made available for on-demand using the live stream encode.

When a live stream is stopped, the encoder triggers a post-processing script. It creates the file directory and Wowza XML, and uses FFmpeg for metadata like length and to make a thumbnail.
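A sketch of that post-processing step is below (Python for illustration; paths, the SMIL layout, and the thumbnail timing are assumptions, not the production script):

```python
# Sketch of the post-live processing step; paths and naming are illustrative.
import os
import subprocess


def smil_xml(renditions):
    """Wowza-style SMIL grouping the bitrate ladder for adaptive playback."""
    videos = "\n".join(
        f'      <video src="mp4:{name}" system-bitrate="{bitrate}"/>'
        for name, bitrate in renditions)
    return ("<smil>\n  <body>\n    <switch>\n"
            f"{videos}\n"
            "    </switch>\n  </body>\n</smil>\n")


def post_process(entry_id, renditions, storage_root="/media/vod"):
    """Create the VOD directory, write the SMIL file, and grab a thumbnail."""
    directory = os.path.join(storage_root, str(entry_id))
    os.makedirs(directory, exist_ok=True)
    with open(os.path.join(directory, f"{entry_id}.smil"), "w") as f:
        f.write(smil_xml(renditions))
    # Thumbnail: grab one frame 60 seconds in from the top rendition.
    top = os.path.join(directory, renditions[0][0])
    subprocess.run(["ffmpeg", "-ss", "60", "-i", top, "-frames:v", "1",
                    os.path.join(directory, "thumb.jpg")], check=True)
```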

However these too are unedited, and quality can vary.

2. File Encode

Edited and mixed concerts are exported to a folder that is watched by the file encoder. The encoder takes its time to make the encodes look great. When complete, a script is triggered to put files into our storage and delivery location. Timecode for each piece is added to the concert entry for markers. Last, ExpressionEngine is updated that the post-encoded files are ready.

All the ProRes recordings are stored on a system on campus maintained by our IT team, thank goodness. Compressed streaming files are also stored on their system, connected closely with the Wowza Streaming Engine server. Audio and video downloads are stored on a drive connected to the webserver for better transfers; eventually, though, the plan is to move video downloads to a separate server.

Keeping Track of Files

The filename contains EE’s entry ID to keep the live and post-production files linked to ExpressionEngine outside of the hooks and lifecycle. It can then be easily parsed out, and used in a query to get the full concert info.

Simple example to get the concert entry from a filename.
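The embedded example isn’t reproduced here; the idea is simple, as in this sketch. The filename convention shown (entry ID as the final underscore-delimited token) is an assumption for illustration, not the actual scheme:

```python
# Sketch: pull the ExpressionEngine entry ID back out of a recording filename.
# Assumes a convention like "20200926_winspear_12345.mov" where the final
# underscore-delimited token is the entry ID (illustrative, not the real scheme).
import re


def entry_id_from_filename(filename):
    """Extract the numeric EE entry ID from the end of a media filename."""
    match = re.search(r"_(\d+)\.\w+$", filename)
    if not match:
        raise ValueError(f"no entry ID in {filename!r}")
    return int(match.group(1))

# The ID is then used in a query (or JSON endpoint) to fetch the full concert info.
```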
Video Processing Status for concerts

Real-World Results:

So far so good. Check back in later!!

Need help with your own implementation? Let me know.



Blair Liikala

Solutions architect; previously managed a music recording department. Audio, video gear, web streaming and web development.