Tag Archives: Silver Sound Studios

Making the indie short The Sound of Your Voice

Hunt Beaty is a director, producer and Emmy Award-winning production sound recordist based in Brooklyn. Born and raised in Nashville, this NYU Tisch film school grad spent years studying how films got made — and now he’s made his own.

The short film The Sound of Your Voice was directed by Beaty and written and produced by Beaty, José Andrés Cardona and Wesley Wingo. This thriller focuses on a voiceover artist who is haunted by a past relationship as she sinks deep into the isolation of a recording booth.

Hunt Beaty

The Sound of Your Voice was shot on location at Silver Sound, a working audio post house, in New York City.

What inspired the film?
This short was largely reverse-engineered. I work with Silver Sound, a production and post sound studio in New York City, so we knew we had a potential location. Given access to such a venue, Andrés lit the creative fuse with an initial concept and we all started writing from there.

I’ve long admired the voiceover craft, as my father made his career in radio and VO work. It’s a unique job, and it felt like a world not often portrayed in film/TV up to this point. That, combined with my experience working alongside VO artists over the years, made this feel like fertile ground to create a short film.

The film is part of a series of shorts my producers and I have been making over the past few months. We’re all good friends who met at NYU film undergrad. While narrative filmmaking was always our shared interest and catalyst for making content, the realities of staying afloat in NYC after graduation prompted a focus on freelance commercial work in our chosen crafts in order to make a living. It’s been a great ride, but our own narrative work, the original passion, was often moved to the backburner.

After discussing the idea for years, we drank too many beers one night and decided to start getting back into narrative work by making shorts within a particular set of constrained parameters: one weekend to shoot; no stunts, weapons or other typical production complicators; stay close to home geographically; keep costs low; finish the film fast; and don't stop. We're getting too old to remain stubbornly precious.

Inspired by a class we all took at NYU called "Sight and Sound: Film," we built our little collective on the idea of rotating the director role while maintaining full support from the other two on whatever short is currently in production.

Andrés owns a camera and can shoot, Wesley writes and directs and also does a little bit of everything. I can produce and use all of my connections and expertise having been in the production and post sound world for so long.

We shot a film that Wesley directed at the end of November and released it in January. We shot my film in January and are releasing it here and now. Andrés just directed a film that we’re in post-production on right now.

What were you personally looking to achieve with the film?
My first goal was to check my natural inclination to overly complicate a short story, either by including too many characters or bouncing from one location to another.
I wanted to stay in one close-fitting place and largely focus on one character. The hope was I’d have more time to focus on performance nuance and have multiple takes for each setup. Realistically, with indie filmmaking, you never have the time you want, but being able to work closely with the actors on variations of their performances was super important. I also wanted to be able to focus on the work of directing as opposed to getting lost in the ambition of the production itself.

How was the film made?
The production was noticeably scrappy, as all of these films inevitably become. The crew was just the three of us, in addition to a rotating set of production sound recordists and an HMU artist (Allison Brooke), who all agreed to help us out.

We rented from Hand Held Films, which is a block away from Silver Sound, so we knew we could just wheel over all of the lights and grip equipment without renting a vehicle. Wesley would primarily focus on camera and lighting support for Andrés, but we were all functioning within an "all hands on deck" framework. It was never pretty, but we made it all happen.

Our cast was incredibly chill, and we had worked with Harry, the engineer, on our first short Into Quiet. We shot the whole thing over a weekend (again, one of our parameters), so we could do our best to get back to our day-to-day.

Also, a significant amount of rewriting was done to the off-screen voices in post based on the performance of our actress. That gave us some interesting room to play: writing to the edit, tweaking the edit itself to fit the new script, and recording our voice actors to the cut. Meta? Probably.

We’ve been wildly fortunate to have the support of our post-sound team at Silver Sound. Theodore Robinson and Tarcisio Longobardi, in particular, gave so much of themselves to the sound design process in order to make this come to life. Given my background as a production recordist, and simply due to the storyline of this short, sound design was vital.

In tandem with that hard work, we had Alan Gordon provide the color grading and Brent Ferguson the VFX.

What are you working on now?
Mostly fretting about our cryptocurrency investments. But once that all crashes and burns, we’re going to try and keep the movie momentum going. We’re all pretty hungry to make stuff. Doing feels better than sitting idly and talking about it.

L-R: Re-recording mixer Cory Choy, Hunt Beaty and supervising sound editor Tarcisio Longobardi.

We’re currently in post for Andrés’ movie, which should be coming out in a month or so. Wesley also has a new script and we’re entering into pre-production for that one as well so that we can hopefully start the cycle all over again. We’re also looking for new scripts and potential collaborators to roll into our rotation while our team continues to build momentum towards potentially larger projects.

On top of that, I’m hanging up the headphones more often to transition out of production sound work and shift to fully producing and directing commercial projects.

What camera and why?
The Red Weapon Helium, because the DP owns one already (laughs). But in all seriousness, it is an incredible camera. We also shot on elite anamorphic glass; we only had two focal lengths on set, a 50mm and a 100mm, plus a diopter set.

How involved were you in the edit?
DP Andrés Cardona single-handedly did the first pass at a rough cut. After that, my co-producer Wes Wingo and I gave elaborate notes on each cut thereafter. We also ended up rewriting some of the movie itself after reconsidering the overall structure of the film due to our lead actress' strong performance in certain shots.

For example, I really loved the long close-up of Stacey's eyes that's basically the focal point of the movie's ending. So I had to reconfigure some of the story points to give that shot its proper place in the edit and let it be the key moment the short builds up to.

What kind of look were you going for with the grade?
The color grade was done by Alan Gordon at Post Pro Gumbo using DaVinci Resolve. It was simply all about fixing inconsistencies and finessing what we shot in camera.

What about the sound design and mix?
The sound design was completed by Ted Robinson and Tarcisio Longobardi. The final mix was handled by Cory Choy at Silver Sound in New York. All the audio work was done in Reaper.

Review: RTW Continuous Loudness Control

By Tarcisio Longobardi

In the past, the most common way to measure the “loudness” of an audio signal was to represent amplitude variations over a certain period of time through a VU meter, a peak meter or a waveform. These tools, however, don’t give us an accurate estimate of how humans will perceive loudness. As a result, it is possible for two audio sources with identical peak values to be perceived as having very different overall loudness.

For example, movie and television show audio usually has a relatively wide dynamic range — characters can go from a whisper to a shout — as opposed to commercials where the audio material is compressed. Consequently, commercials often sound much louder than content — despite both being within specified decibel peak levels. Audiences experience loudness jumps between programs and commercials, and those inconsistencies are the cause of much frustration and complaint.

Solution! LKFS/LUFS
To find a solution to this problem, a new kind of measurement that attempts to quantify our perception of loudness has been introduced.


The International Telecommunication Union (ITU), the United Nations specialized agency for information and communication technologies, introduced LKFS (described in ITU-R BS.1770), which means "Loudness, K-weighted, relative to Full Scale." It's a scale for audio measurement in which a K-weighting filter (a filter that emphasizes the frequencies humans are more sensitive to) is applied to the audio material to obtain weighted measurements that estimate how loud a human listener will perceive a given piece of audio.

The European Broadcasting Union (EBU) uses the term LUFS, which stands for "Loudness Units relative to Full Scale." Despite the different names, LKFS and LUFS are identical: both terms describe the same measurement, and one loudness unit is equal to one dB.

Since 2012, many European countries have adopted EBU R128, a set of rules that sets maximum loudness levels for broadcast audio based on K-weighted loudness normalization. In 2010, the US Congress passed the Commercial Advertisement Loudness Mitigation (CALM) Act, with rules taking effect in 2012. The Act sets rules similar to EBU R128, requiring commercials to have the same average volume as the programs they accompany.
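To make the arithmetic concrete, here is a minimal Python sketch of the core idea. It is deliberately simplified: it skips the K-weighting filter and the gating that BS.1770 requires, assumes a mono signal, and uses -23 LUFS, R128's program target, as the default:

```python
import math

def integrated_loudness(samples):
    # Simplified, ungated loudness of a mono signal in the style of
    # BS.1770. A compliant meter would first K-weight the signal and
    # measure it in gated 400 ms blocks; here we just take the mean
    # square of the raw samples.
    ms = sum(s * s for s in samples) / len(samples)
    return -0.691 + 10 * math.log10(ms)

def normalization_gain(measured_lufs, target_lufs=-23.0):
    # One loudness unit equals one dB, so the correction is a plain
    # dB offset converted to a linear gain factor.
    gain_db = target_lufs - measured_lufs
    return 10 ** (gain_db / 20)
```

For example, a program measuring -18 LUFS needs 5 dB of attenuation (a linear gain of about 0.56) to land at -23 LUFS.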

RTW CLC
RTW Continuous Loudness Control (CLC) ensures full EBU R128 compliance in a streamlined way, allowing the user to adjust program material to a target loudness value, as well as to an adjustable TruePeak value, with or without correcting the original loudness range.

CLC works either as a standalone application or as a plug-in inside a digital audio workstation. To test the software, I used the AAX version in Pro Tools 11 and the VST3 version in Reaper 5.18, as well as the standalone application.

CLC has a very simple user interface. The display area is divided into sections: the "Metering" section is a fully compliant EBU meter; its left side shows the measured loudness values of the input audio, and its right side the values of the corresponding processed signal. In the "Processing" section, the values and their dynamic processing are displayed on graphs. The bar graphs in the middle display the current increase or decrease of loudness and the current percentage reduction of the loudness range.

The circle in the middle shows whether or not the brick wall limiter is engaged and how much of the signal is limited. Finally, at the bottom there are a bypass button, a button for resetting the device and another for accessing the setup menus. The interface has a plain, straightforward look, which makes it easily readable. However, there’s a lot of unused space, which makes it unnecessarily big — taking up valuable screen real estate.

While testing CLC, I found it has a very simple yet functional workflow. Once the processing mode (dynamic, semi-dynamic, static) and target loudness are chosen and the maximum LRA and true peak limits are set, the processor works automatically, giving us loudness normalization of the audio signal. I didn't have to tweak it any further unless I wanted to change the target values.

During my tests, I was impressed with CLC's ability to handle loudness variations almost inaudibly — there was no pumping or brick-walling.

RTW says CLC accomplishes its dynamic correction by performing realtime analysis of the audio content and predicting how the signal will progress. I have to admit I was skeptical in the beginning, but the software was surprisingly able to perform dynamic corrections in a very transparent way, even in the first seconds of content, using a technique that combines a look-ahead algorithm with statistical data. Dynamics compression is used only when very abrupt changes in dynamics occur, and adjustments triggered by abrupt volume increases are hardly audible.

CLC was able to recognize loudness increases caused by loud passages as a natural part of the signal's dynamics and leave them mostly unaltered. Low-dynamics passages passed through unaltered as well.

Thus CLC works well in its default mode, but it also has presets to suit different kinds of programs (e.g., news and discussion, movies, sports). The presets change various parameters of the user interface and other invisible characteristics of the processor. I found that choosing the right preset is an important part of the workflow, because it heavily affects the way the software handles the same material.

Although these presets are effective, it would be preferable to have more control over the dynamic processing. Once a preset is chosen, it is impossible to make any changes. It would be useful to be able to fine-tune parameters and to know exactly what the software does to handle a specific situation.

While CLC is designed for a realtime workflow, the standalone application also features an interesting offline operation mode, called "file mode." The loaded audio file is analyzed first, then processed according to your settings and stored as a new audio file. Analyzing the complete file before processing allows a more precise result with respect to the target value.
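The analyze-then-process shape of file mode can be sketched in a few lines of Python. This is a generic two-pass normalizer under the same simplifications as before (no K-weighting or gating, mono signal, sample peak standing in for true peak); it illustrates the general idea, not RTW's actual algorithm:

```python
import math

def offline_normalize(samples, target_lufs=-23.0, true_peak_dbfs=-1.0):
    # Pass 1: measure the whole file (simplified, ungated loudness).
    ms = sum(s * s for s in samples) / len(samples)
    measured = -0.691 + 10 * math.log10(ms)
    gain = 10 ** ((target_lufs - measured) / 20)

    # Respect the peak ceiling: if hitting the loudness target would
    # push the peak over the limit, back the gain off instead.
    ceiling = 10 ** (true_peak_dbfs / 20)
    peak = max(abs(s) for s in samples)
    if peak * gain > ceiling:
        gain = ceiling / peak

    # Pass 2: apply one static gain to the entire file.
    return [s * gain for s in samples]
```

Because the whole file is known before any gain is applied, no prediction is needed, which is why offline processing can hit the target more precisely than a realtime mode.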

Oh, and the CLC was just as easy to use in 5.1 as it was in stereo.

Summing Up
CLC has a simple workflow and is extremely user friendly. Its advanced algorithm sounds good, and will help engineers make their programs CALM-compliant in a simple and efficient process. However, its “set and forget” workflow doesn’t allow the user fine control.

Supported platforms are Windows 7, 8 and 10 (VST2.4, VST3, RTAS, AAX and standalone) and Mac OS X (VST3, RTAS, AAX Native 64, AU and standalone). System requirements are a dual-core 2.5GHz processor, 4GB RAM, 200MB of free hard disk space, and an iLok USB smart key with an iLok account; an Internet connection is required for activation. Supported sample rates are 44.1kHz, 48kHz, 88.2kHz and 96kHz.

Tarcisio Longobardi is a sound engineer at Silver Sound Studios in New York City.

Ergonomics from a post perspective (see what I did there?)

By Cory Choy

Austin’s SXSW is quite a conference, with pretty much something for everyone. I attended this year for three reasons: I’m co-producer and re-recording mixer on director Musa Syeed’s narrative feature film in competition, A Stray; I’m a member of the New York Post Alliance and was helping out at our trade show booth; and I’m a blogger and correspondent for this here online publication.

Given that my studio, Silver Sound in New York, has been doing a lot of sound for virtual reality recently, and with the mad scramble that every production company, agency and corporation has been in to make virtual reality content, I was pretty darn sure that my first post was going to be about VR (and don't fear, I will be following up with one soon). But while I was checking out the new 360-degree video camera and rig offerings from Theta360 and 360Heros, and taking a good look at the new Micro Cinema Camera from Blackmagic, I noticed a pretty enthused and sizable crowd at one of the booths. The free Stella Artois beer samples were behind me, so I was pretty excited to go check out what I was sure must be the hip, new virtual reality demonstration, The Martian VR Experience.

To my surprise, the hot demo wasn’t for a new camera rig or stitching software. It was for a chair… sort of. Folks were gathered around a tall table playing with Legos while resting on the Mogo, the latest “leaning seat” offering from inventor Martin Keen’s company, Focal Upright. It’s kind of a mix between a monopod, a swivel stool and an exercise ball chair, and it comes in a neat little portable bag — have chair, will travel! Leaning chairs allow people to comfortably maintain good posture while at their workstations. They also encourage you to work in a position that, unlike most traditional chairs, allows for good blood flow through the legs.

They were raffling off one of those suckers, hence all the people around. I didn't win, but I did have the opportunity to talk to Keen about his products — a full line of leaning chairs, standing desks and workstations. Keen's a really nice fellow, and I'm going to follow up with a more in-depth interview in the future. For now, though, the basics are that Keen's company, Focal Upright, is one of several companies that have emerged to help folks who spend the majority of their days sitting (i.e. all of us post professionals) figure out a way to bring better posture and health back into their daily routines.

As a sound engineer, and therefore as someone who spends a whole lot of time every day at a console or mixing board, ergonomics is something I’ve had to pay a lot of attention to. So I thought I might share some of my, and my colleagues’, ergonomics experiences, thoughts and solutions.

Standing, Sitting and Posture
We've all been hearing about it for a while — sitting for extended periods of time can be bad for you. Sitting with bad posture can be even worse. My buddy and co-worker Luke Allen has been doing design and editing at a standing desk for the last couple of years, and he swears that it's one of the best work decisions he's ever made. After the first couple of months, though, I noticed that he was complaining that his feet were getting tired and his knees hurt. In the same pickle? Luke solved his problem with a chef's mat. Want to move around a little more at the standing desk? Check out the Level from FluidStance, another exhibitor at this year's SXSW show. Not ready for a standing desk? Maybe try exploring a ball chair or fluid disc from physical therapy equipment manufacturer Isokinetics Inc.

Feel a little silly with that stuff? Instead, try getting up and walking around, or stretching every 20 minutes or so — 30 seconds to a minute should do. When I was getting started in this business, I was lucky enough to have the opportunity to apprentice under sound master craftsman Bernie Hajdenberg. I first got to observe him working in the mix, and then after some time, I had the privilege of operating sessions with him. One of the things that struck me was that Bernie usually stood up for the majority of the mixing sessions, and he would pace while discussing changes. When I was operating for him, he had me sit in a seat with no arms that could be raised pretty high. He told me this was very important, and it’s something that I’ve continued throughout my career. And lo and behold, I now realize that part of what Bernie had me do was to make sure that I wasn’t cutting off the circulation in my legs by keeping them extended and a little in front of me. And the chair with no arms helped keep my back straight.

Repetitive Stress
People who use their fingers a lot, whether typing or using a mouse, run the risk of developing a repetitive stress injury. Personally, I had a lot of wrist pain after my first year or so. What to do? First, make sure that your set-up isn’t forcing you to put your hands or wrists in an uncomfortable position. One of the things I did was elevate my mouse pad and keyboard. My buddy Tarcisio, and many others, use a trackball mouse. Try to break up your typing or mouse movements every couple of minutes with frequent, short bursts of finger stretches. After a few weeks of introducing stretching into my routine, my wrist and finger pain was alleviated greatly.

Cory Choy is an audio engineer and co-founder of Silver Sound Studios in New York City. He was recently nominated for an Emmy for “Outstanding Sound Mixing for Live Action” for Born To Explore.

Making our dialogue-free indie feature ‘Driftwood’

By Paul Taylor and Alex Megaro

Driftwood is a dialogue-free feature film that focuses on a woman and her captor in an isolated cabin. We chose to shoot entirely MOS… because we are insane. Or perhaps we were insane to shoot a dialogue-free feature in the first place, but our choice to remove sound recording from the set was both freeing and nerve-wracking due to the potential post production nightmare that lay ahead.

Our decision was based on how, without speech to carry along the narrative, every sound would need to be enhanced to fill in the isolated world of our characters. We wanted draconian control over the soundscape, from every footstep to every door creak, but we also knew the sheer volume of work involved would put off all but the bravest post studios.

The film was shot in a week with a cast of three and a crew of three in a small cabin in Upstate New York. Our camera of choice was a Canon 5D Mark II with an array of Canon L-series lenses. We chose the 5D because we already owned it — so more bang for our buck — and also because it gave us a high-quality image, even with such a small body. Its ease of use allowed us to set up extremely quickly, which was important considering our extremely truncated shooting schedule. Having no sound team on set allowed us to move around freely without the concerns of planes passing overhead or cars rumbling in the distance delaying a shot.

The Audio Post
The editing was a wonderfully liberating experience in which we cut purely to image, never once needing to worry about speech continuity or a host of other factors that often come into play with dialogue-driven films. Driftwood was edited on Apple’s Final Cut Pro X, a program that can sometimes be a bit difficult for audio editing, but for this film it was a non-issue. The Magnetic Timeline was actually quite perfect for the way we constructed this film and made the entire process smooth and simple.

Once picture locked, we brought the project to New York City's Silver Sound Studios, who jumped at the chance to design the atmosphere for an entire feature from the ground up. We sat with the engineers at Silver Sound and went through Driftwood shot by shot, creating a master list of all the sounds we thought necessary to include. Some were obvious, such as footsteps, breathing and ticking clocks; others less so, such as the humming of an old refrigerator or the creaking of a wooden chair.

Once the initial list was set, we discussed whether or not to use stock audio or rerecord everything at the original location. Again, because we wanted complete control to create something wholly unique, we concluded it was important to return to the cabin and capture its particular character. Over the course of a few days, the Silver Sound gang rerecorded nearly every sound in the film, leaving only some basic Foley work to complete in their studio.

Once their library was complete, one of the last steps before mixing was to ADR all of the breathing. We had the actors come into the studio over a one-week period during which they breathed, moaned and sighed inside Silver Sound's recording booth. These subtle sounds are taken for granted in most films, but for Driftwood they were of utter importance. The way the actors would sigh or breathe could change the meaning behind that sound and change the subtext of the scene. If the characters cannot talk, then their expressions must be conveyed in other ways, and in this case we chose a more physiological track.

By the time we completed the film we had spent over a year recording and mixing the audio. The finished product is a world unto itself, a testament to the laborious yet incredibly exciting work performed by Silver Sound.

Driftwood was written, directed and photographed by Paul Taylor. It was produced and edited by Alex Megaro.

The sound of VR at Sundance and Slamdance

By Luke Allen

If last year's annual Park City film and cultural meet-up was where VR filmmaking first dipped its toes in the proverbial water, count 2016's edition as its full-on coming-out party. With over 30 VR pieces as official selections at Sundance's New Frontier sub-festival, and even more content debuting at Slamdance and elsewhere, festival goers this year can barely take two steps down Main Street without being reminded of the format's ubiquitous presence.

When I first stepped onto the main demonstration floor of New Frontier (which could be described this year as a de-facto VR mini-festival), the first thing that struck me was, why was it so loud in there? I admit I’m biased since I’m a sound designer with a couple of VR films being exhibited around town, but I am definitely backed up by a consensus among content creators regarding sound’s importance to creating the immersive environment central to VR’s promise as a format (I know, please forgive the buzzwords). In seemingly direct defiance of this principle, Sundance’s two main public exhibition areas for all the latest and greatest content were inundated with the rhythmic bass lines of booming electronic music and noisy crowds.

I suppose you can’t blame the programmers for some of this — the crowds were unavoidable — but I can’t help contrasting the New Frontier experience with the way Slamdance handled its more limited VR offering. Both festivals required visitors to sign up for a viewing time, but while the majority of Sundance’s screenings involved strapping on a headset while seated on a crowded bench in the middle of the demonstration floor, Slamdance reserved a quiet room for the screening experience. Visitors were advised to keep their voices to a murmur while in the viewing chamber, and the screenings took place in an isolated corner seated on — crucially — a chair with full range of motion.

Why is this important? Consider the nature of VR: the viewer has the freedom to look around the environment at their own discretion, and the best content creators make full use of the 360 degrees at their disposal to craft the experience. A well-designed VR piece will use directional sound mixing to cue the viewer to look in different directions in order to further the story. It will also incorporate deep soundscapes that shift as one looks around the environment in order to immerse the viewer. Full range of motion, including horizontal rotation, is critical to allowing this exploration to take place.

The Visitor, which I had the pleasure of experiencing in Slamdance’s VR sanctuary, put this concept to use nicely by placing the two lead characters 90 degrees apart from one another, forcing the viewer to look around the beautifully-staged set in order to follow the story. Director James Kaelan and the post sound team at WEVR used subtly shifting backgrounds and eerie footsteps to put the viewer right in the middle of their abstract world.


Sundance’s New Frontier VR Bar.

Resonance, an experience directed by Jessica Brillhart that I sound designed and engineered, features violinist Tim Fain performing in a variety of different locations, mostly abandoned, selected both for their visual beauty and their unique sonic character. We used an Ambisonic microphone on set in order to capture the full range of acoustic reflections and, with a lot of love in the mix room at Silver Sound, were able to recreate these incredible sonic landscapes while enhancing the directionality of Fain’s playing in order to help the viewer follow him through the piece (Unfortunately, when Resonance was screening at Sundance’s New Frontier VR Bar, there was a loudspeaker playing Top 40 hits located about three feet above the viewer’s head).
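The directional behavior described above rests on compact math. Below is a generic sketch of a first-order Ambisonic (B-format) encode and a head-tracking rotation for the horizontal plane, for illustration only; it is not the actual pipeline used on Resonance:

```python
import math

def encode_foa(sample, azimuth_rad):
    # First-order Ambisonics (B-format, FuMa-style weighting) encode of
    # a mono sample arriving from a horizontal angle. W is the omni
    # component; X and Y carry the directional information.
    w = sample / math.sqrt(2)
    x = sample * math.cos(azimuth_rad)
    y = sample * math.sin(azimuth_rad)
    return w, x, y

def rotate_for_head(w, x, y, head_yaw_rad):
    # Rotating the sound field opposite the listener's head yaw keeps
    # sources anchored in the virtual world as the viewer looks around.
    c, s = math.cos(-head_yaw_rad), math.sin(-head_yaw_rad)
    return w, c * x - s * y, s * x + c * y
```

A renderer then decodes the rotated W/X/Y signals to headphones or speakers, which is what lets a mix "cue the viewer to look" toward a sound.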

In both of these live-action VR films, sound and picture serve to enhance and guide the experience of the other, much like in traditional cinema, but in a new and more enchanting way. I have had many conversations with other festival attendees here in Park City in which we recall shared VR experiences much like shared dreams, so personal and haunting is this format. We can only hope that in future exhibitions more attention is paid to ensure that viewers have the quiet they need to fully experience the artists’ work.

Luke Allen is a sound designer at Silver Sound Studios in New York City. You can reach him at luke@silversound.us

Slamdance, Sundance: Why it's important to audio post pros

By Cory Choy

Why are we, audio post professionals, in Park City right now? The most immediate reason is Silver Sound has some skin in the game this year: we are both executive producers and the post sound team for Driftwood, a feature narrative in competition at Slamdance that was shot completely MOS. We also provided production and post sound for Resonance and World Tour, Google's featured VR Google Cardboard demos at Sundance's New Frontier.

Sundance's footprint is everywhere here. During the festival, the entirety of Park City is transformed — schools, libraries, cafes, restaurants, hotels and office buildings become venues for screenings, panel discussions and workshops. A complex and comprehensive network of shuttle buses allows festival goers to get around without having to rely on their own vehicles.

Tech companies, such as Samsung and Canon, set up public areas for people to rest, talk, demo their wares and mingle. You can't take three steps in any direction without bumping into a director, producer or someone who provides services to filmmakers. In addition to being chock full of industry folk — and this is a very important ingredient — Park City is charming, beautiful and very different from the American film hubs, New York and Los Angeles. So people are in a relaxed and friendly mood.

Films in competition at Sundance often feature big-name actors, receive critical acclaim and, increasingly, secure distribution. In short, this is the place to make personal connections with "indie" filmmaking professionals who are either directly, or through friends, connected to the studio system.

As a partner and engineer at a boutique sound studio in Manhattan, I see this as a fantastic opportunity to cut through the noise and hopefully put myself, and my company, on the radar of folks with whom I might not otherwise get a chance to meet or collaborate. It’s a chance for me, a post professional in the indie world, to elevate my game.

Slamdance
Slamdance sets up shop in one very specific location, the Treasure Mountain Inn on Main Street in Park City. It happens at the same time as Sundance — and is located right in the eye of the storm — but has built a reputation for celebrating the most indie of the indies. Films in competition at Slamdance must have budgets under one million dollars (and many have budgets far below that). Where Sundance is a sprawling behemoth — long lines, hard-to-get tickets, dozens of venues, the inability to see all that is offered — Slamdance sort of feels like a friend's very nice living room.


Many folks see most of, or even all of, the line-up of films. There's no rushing about to different locations. Slamdance embraces the DIY spirit and is about empowering people outside of the industry establishment. Tech companies such as Blackmagic and Digital Bolex hold workshops geared towards enabling filmmakers with smaller budgets to make films unencumbered by technical limits. This is a place where daring and often new or first-time filmmakers showcase their work. Often it is the first time, or one of the first times, they've gone through the post and finishing process. It is the perfect place for an audio professional to shine.

In my experience, the films that screen best at Slamdance — the ones that are the most immersive and get the most attention — are the ones with a solid sound mix and a creative sound design. This is because some of the films in competition have had minimal or no post sound. They are enjoyable, but the audience finds itself sporadically taken out of the story for technical reasons. The directors and producers of these films are going to keep creating, and after being exposed to and competing against films with very good sound, are probably going to be looking to forge a creative partnership — one that could quite possibly grow and last the entirety or majority of their future careers — with a post sound person or team. Like Silver Sound!

Cory Choy is an audio engineer and co-founder of Silver Sound Studios in New York City.

Review: Nugen Audio’s Halo Upmixer

By Robin Shore

Upmixing is nothing new. The basic goal is to take stereo audio and convert it to higher channel-count formats (5.1, 7.1, etc.) that meet surround sound delivery requirements. The most common use case is needing to fold stereo music tracks into a surround mix for film or television.
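The core idea can be sketched with a toy mid/side decomposition: content that is correlated between the two channels feeds the center, while decorrelated content is steered toward the surrounds. This is only an illustration of the general principle; Halo’s actual algorithm is proprietary and far more sophisticated, and the function name, channel order and gains below are my own assumptions.

```python
import numpy as np

def naive_upmix_5_1(left, right, surround_gain=0.5):
    """Toy stereo -> 5.1 upmix via mid/side decomposition.
    Output channel order (assumed SMPTE/film): L, R, C, LFE, Ls, Rs.
    An illustration of the principle, not Halo's algorithm.
    """
    mid = 0.5 * (left + right)     # correlated content feeds the center
    side = 0.5 * (left - right)    # decorrelated content feeds the surrounds
    center = 0.707 * mid           # -3 dB so center + fronts sum sensibly
    ls = surround_gain * side
    rs = -surround_gain * side     # opposite polarity spreads the side signal
    lfe = np.zeros_like(left)      # a "split" LFE mode would low-pass into here
    return np.stack([left, right, center, lfe, ls, rs])
```

Note that a mono input (left equals right) produces silent surrounds here, which is one reason naive upmixers sound narrow on heavily correlated material.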

Various plug-ins exist for this task, and the results run the gamut from excellent to lackluster. In terms of sonic quality, Nugen Audio’s new Halo Upmix plug-in falls firmly on the excellent side of that range. It creates a nicely enveloping surround field while staying true to the original stereo mix, and it doesn’t seem to rely on the weird reverb or delay effects you sometimes find in other upmix plug-ins.

NUGEN Audio Halo Upmix - IO panel

What really sets Halo apart is its well-thought-out design, and the high level of control it offers in sculpting the surround environment.

Digging In
At the top of the plug-in window is a dropdown for selecting the channel configuration of the upmix — you can select any standard format from LCR up to 7.1. The centerpiece of Halo is a large circular scope that gives a visual representation of the location and intensity of the upmixed sound. Icons representing each speaker surround the scope, and can be clicked on to solo and mute individual channels.

Several arcs around the perimeter of the scope provide controls for steering the upmix. The Fade arcs around the scope will adjust how much signal is sent to the rear surround channels, while the Divergence arc at the top of the scope adjusts the spread between the mono center and front stereo speakers. On the left side of the scope is a grid representing diffusion. Increasing the amount of diffusion spreads the sound more evenly throughout the surround field, creating a less directionally focused upmix. Lower values of diffusion give a more detailed sound, with greater definition between the front and rear.

The LFE channel in the upmix can be handled in two ways. The “normal” LFE mode in Halo adds additional content into the LFE channel based on low frequencies in the original source. This is nice for adding a little extra oomph to the mix, and it also preserves the LFE information when downmixing back to stereo.

For those who are worried about adding too much extra bass to the upmix, the “Split” LFE mode works more like a traditional crossover, siphoning low frequencies off into the LFE channel rather than leaving them in the full-range channels.
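As a rough sketch of what a split-style crossover does, here is a toy one-pole low-pass that diverts lows into an LFE channel and leaves the remainder in the full-range channel. The cutoff, sample rate and filter design are hypothetical; a real crossover would use a much steeper filter.

```python
import numpy as np

def split_lfe(signal, cutoff_hz=120.0, fs=48000.0):
    """Toy 'split' crossover: a one-pole low-pass feeds the LFE,
    and the remainder (input minus LFE) stays full range.
    By construction full_range + lfe reconstructs the input exactly.
    """
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
    lfe = np.zeros_like(signal)
    acc = 0.0
    for i, x in enumerate(signal):
        acc += alpha * (x - acc)   # one-pole low-pass state update
        lfe[i] = acc
    full_range = signal - lfe
    return full_range, lfe
```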

NUGEN Audio Halo Upmix - 5.1 main view - using colour to determine energy source

An Easy And Nuanced UI
The layout and controls in Halo are probably the best I’ve ever seen in this sort of plug-in. Moving the Fade and Divergence arcs around the circle feels smooth and intuitive, almost like gesturing on a touchscreen, and the position of the arcs along the edge of the scope corresponds very well with what I hear through the speakers.

New users should have no problem quickly wrapping their heads around the basic controls. The diffusion is an especially nice touch as it allows you to very quickly alter the character of the upmix without drastically changing the overall balance between front, rear and center. Typically, I’ve found that leaving the diffusion somewhere on the higher end gives a nice even feel, but for times when I want the upmix to have a little more punch, dragging the diffusion down can really add a lot.

Of course, digging a little deeper reveals some more nuanced controls that may take some more time to master. Below the scope are controls for a shelf filter which, combined with higher levels of diffusion, can be used to dull the surround speakers without decreasing their overall level. This ensures that sharp transients in the rear don’t pop out too much and distract the audience’s attention from the screen in front of them.

The Center window focuses only on the front speakers and gives you fine control over how the mono center channel is derived and played back. An I/O window acts like a mixer, allowing you to adjust the input level of the stereo source, as well as the levels of each individual channel in the upmix. The settings window provides a high level of customization for the appearance and behavior of the plug-in. One of my favorite things here is the ability to assign a different color to each channel in the surround scope, which, aside from creating a neat-looking display, gives a clear visual representation of what’s happening with the upmix.

NUGEN Audio Halo Upmix - 7.1 main view

Playback
One of the most important considerations in an upmix tool is how it will all sound once everything is folded down for playback from televisions and portable devices, and Halo really shines here. Less savvy upmixing can cause phasing and other issues when converted back to stereo, so it’s important to be able to compare as you are working.

A monitoring section at the bottom of the plug-in allows you to switch between listening to the original source audio, the upmixed version and a stereo downmix so you can be certain that your mixing is folding down correctly. If that’s not enough, hitting the “Exact” button will guarantee that the downmixed version matches the stereo version completely, by disabling certain parameters that might affect the downmix. All of this can be done as you’re listening in realtime, allowing for fast and easy A-B comparisons.
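A stereo fold-down of a 5.1 stream is conventionally done with fixed coefficients along the lines of ITU-R BS.775: the center and each surround feed the corresponding stereo side at roughly -3 dB, with the LFE usually omitted. A minimal sketch, assuming SMPTE channel order L, R, C, LFE, Ls, Rs:

```python
import numpy as np

def downmix_5_1_to_stereo(ch, k=0.7071):
    """Fold a (6, n) 5.1 array down to stereo.
    Assumes SMPTE order L, R, C, LFE, Ls, Rs; the LFE is dropped,
    as is common practice in stereo fold-downs.
    """
    lo = ch[0] + k * ch[2] + k * ch[4]   # L + center + left surround
    ro = ch[1] + k * ch[2] + k * ch[5]   # R + center + right surround
    return np.stack([lo, ro])
```

Comparing a fold-down like this against the original stereo source is exactly where phase cancellation shows up, which is the comparison Halo’s monitoring section automates for you in realtime.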

Summing Up
Nugen has put out a fine, well-thought-out product with the Halo upmixer. It’s at once simple to operate and incredibly tweakable, and it gives lots of attention to important technical considerations. Above all, it sounds great. Mixers who often find themselves fitting two-channel music into a multi-channel mix will be hard-pressed to find a nicer solution.

Halo Upmixer retails for $499 and is available in AAX, AU, VST2 and VST3 formats.

Robin Shore is a co-owner and audio post pro at Silver Sound Studios in New York City.