
Review: iZotope’s Neutron 3 Advanced with Mix Assistant

By Tim Wembly

iZotope has been doing more to elevate and simplify the workflows of this generation’s audio pros than any of its competitors. It’s a bold statement, but I stand behind it. From their range of audio restoration tools within RX to their measurement and visualization tools in Ozone to their creative approach to VST effects and instruments like Iris, Breaktweaker and DDLY… they have shown time and time again that they know what audio post pros need.

iZotope breaks their products out into categories aimed at different levels of professionalism by providing Essential, Standard and Advanced tiers. This lowers the barrier to entry for users who can't rationalize the Advanced price tag but still want some of its features. In the newest edition of Neutron 3 Advanced, iZotope has added a tool that might make the extra investment a little more attractive. It's called Mix Assistant, and for some users this feature will cut down session prep time considerably.

iZotope Neutron 3 Advanced ($279) is a collection of six modules — Sculptor, Exciter, Transient Shaper, Gate, Compressor and Equalizer — aimed at making the mix process less of a daunting technical task and more of a fun, creative endeavor. In addition to the modules there is the new Mix Assistant, which has two modes: Track Enhance and Balance. Track Enhance analyzes a track's audio content and, based on the instrument profile you select, uses its modules to make your track sound like the best version of that instrument. This can be useful if you don't want to spend time tweaking the sound of an instrument just to get it to sound like itself. I believe the philosophy behind the feature is that the creative energy you would have spent tweaking can now be reserved for the other tasks that complete your sonic vision.

The Balance mode is a virtual mix prep technician, and for some engineers it will be a revolutionary tool when used in the preliminary stages of their mix. Through groundbreaking machine learning, it analyzes every track containing iZotope’s Relay plugin and sets a trim gain at the appropriate level based on what you choose as your “Focus.” For example, if you’re mixing an R&B song with a strong vocal, you would choose your main vocal track as your Focus.

Alternatively, if you were mixing a virtuosic guitar song à la Al Di Meola or Santana, you might choose your guitar track as your Focus. Once Neutron analyzes your tracks, it will set the level of each track and then provide you with five groups (Focus, Voice, Bass, Percussion, Musical) that you can further adjust at a macro level. Once you've got everything to your preference, you simply click "Accept" and you're left with a much more manageable session. Depending on your workflow, getting your gain staging set up correctly can be an arduous, repetitive chore, and this tool streamlines and simplifies it considerably.
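iZotope hasn't published how Mix Assistant's analysis works under the hood, but the end product (a trim gain per track, referenced to the Focus) is easy to picture. Here is a deliberately naive Python sketch of that idea, measuring each track's RMS level and trimming it to sit a chosen offset below the Focus. The track names, offsets and RMS measurement are illustrative assumptions, not iZotope's machine-learning approach:

```python
import numpy as np

def rms_db(signal):
    """Root-mean-square level of a signal, in dB (full scale)."""
    return 20 * np.log10(np.sqrt(np.mean(signal ** 2)) + 1e-12)

def balance(tracks, focus, offsets):
    """Return a trim gain in dB per track so each track sits at a fixed
    offset below the Focus track. A toy stand-in for the Balance pass."""
    reference = rms_db(tracks[focus])
    trims = {}
    for name, audio in tracks.items():
        offset = 0.0 if name == focus else offsets.get(name, -6.0)
        trims[name] = (reference + offset) - rms_db(audio)
    return trims

# Hypothetical session: one second of noise per "track" at 48kHz.
rng = np.random.default_rng(0)
tracks = {name: rng.normal(0, amp, 48000)
          for name, amp in [("vocal", 0.2), ("bass", 0.4), ("drums", 0.3)]}
print(balance(tracks, focus="vocal", offsets={"bass": -6.0, "drums": -4.0}))
```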

As you may have noticed, the categories you're given in the penultimate step of the process target engineers mixing a music session. Since this is a giant portion of the market, it makes sense that the geniuses over at iZotope give people mixing music their attention, but that doesn't mean you can't use Neutron for other post audio scenarios.

For example, if someone delivers a commercial with stems for music, a VO track and several sound effect tracks, you can still use the Balance feature; you’ll just have to be a little creative with how you classify each track. Perhaps you can set the VO as your focus and divide the sound effects between the other categories as you see fit considering their timbre.

Since this process happens at the beginning of the mix, you're left with a session that is already prepped in the gain-staging department, so you can get straight to making creative decisions. You can still tweak to your heart's content; you'll just have one of the more time-intensive parts of the job simplified considerably. Neutron 3 Advanced is available from iZotope.


Tim Wembly is an audio post pro and connoisseur of fine and obscure cheeses working at New York City's Silver Sound Studios.

Duo teams up to shoot, post Upside Down music video

The Gracie and Rachel music video Upside Down, a collaboration between the grand prize-winners of Silver Sound Showdown, was written, directed and edited by Ace Salisbury and Adam Khan. Showdown is one-part music video film festival, one-part battle of the bands. In a rare occurrence, Salisbury and Khan, both directors in competition, tied for the grand prize with their music videos (Rhodora and Stairwell My Love). Showdown is held annually at Brooklyn Bowl, a bowling alley and venue in Brooklyn, New York.

Ace Salisbury

We reached out to the directors and the band to find out more about this Silver Sound-produced four-minute offering about a girl slowly unraveling emotionally, which was shot with a Red camera.

What did you actually win? What resources were available to you?
Salisbury: Winning the grand prize teamed me up with the winning band, Gracie and Rachel, and with Adam to make a music video. Silver Sound stepped in to offer their team to help shoot and edit, and gave us time at their partner's studio space, Parlay Studios in New Jersey.

Khan: Silver Sound offered a DP, editor and colorist, but Ace and I decided to do all of that ourselves. Parlay Studios graced us with three days in one of their spaces, as well as access to any equipment available. I was a kid in a candy store.

What was it like collaborating with a co-director and a band you had never met before?
Salisbury: Working with a co-director can be great — you can balance the workload, benefit from your differing skillsets and shake up your usual comfort zone for how you go about making work.

It’s important to stop being precious about your vision for the project, and be game to compromise on every idea you bring, but you learn a lot. Having never met Adam before made the whole experience more exciting. I had no ability to predict what he would bring to the project in terms of personality and work style from looking at his reel.

Adam Khan

Making a video with a production company is like having a well-connected producer on your project; once you get them onboard with your idea, all of the resources at their disposal come out of the woodwork, and things like studio space and high-power DPs come into the mix if you want them.

Pitching a music video to a band you’ve never met is interesting. You look at their music, aesthetics and previous music videos and try to predict what direction they’ll want to move in. You want to make them something they’ll embrace and want to promote the hell out of, not sweep under the rug. With Gracie and Rachel, they have such an established aesthetic, the key was figuring out how to take what they had and make it look polished.

Khan: At first I was wary of co-directing; I was concerned our ideas/egos would clash. But after meeting with Ace, all worry vanished. Sure, both of us had to compromise, but there was never any friction; ideas and concepts flowed. Working with a new band requires looking back at their previous work and getting a feel for their aesthetic.

Gracie and Rachel: Collaborating with people you haven’t yet worked with is always a unique experience. You really get to hone your skills when it comes to thinking on your feet and practicing the art of give-and-take. Compromise is important, and so is staying true to your artistic values. If you can learn from others how to expand on what you already know, you’re gaining something powerful.

What is Upside Down about?
Salisbury: Upside Down is a video about emotional unraveling. Gracie portrays a girl whose world literally turns upside down as her mental state deteriorates. She is attached via a long rope to her shadow self, portrayed by Rachel, who takes control of her, pulling her across the floor and suspending her in the air. I co-authored the concept, co-directed and co-edited the video with Adam.

The original concept involved the fabrication of a complicated camera rig that would rotate both the actor and camera together. Imagine a giant rotisserie with the actor strapped in on one side and the camera on another, all rotating together. Just three days before our shoot date, the machine fabricator let us know that there were safety and liability issues which meant they couldn’t give us a finished rig. Adam and I scrambled to put together a modified concept using rope rigging in place of this ill-fated machine.

Khan: Upside Down is abstract; it was our job to make it tangible.

Gracie, you actually performed upside down. What was that like, and what did you learn from that experience?
Yes, I really was suspended upside down! I trained for that for only about an hour or two prior to the actual shoot with some really lovely aerialist professionals. It was surprising to learn what your body feels like after doing dozens of takes upside down!

Can you talk about the digital glitches in the video?
Salisbury: On set, one of the monitors was seriously glitching out. I took a video of the glitched monitor with my phone and showed it to Adam, saying, “This is what our video needs to look like!”

We tried to match the footage of the glitching monitor on set, manipulating our footage in After Effects. We developed a scrambling technique for randomly generating white blocks on screen. As much as we liked those effects, the original phone video of the glitched monitor ended up making it into the final video.

People might be surprised by how much animation goes into a live-action project that they would never notice. For a project like Upside Down, a lot of invisible animation goes into it, like matting the edges of the spotlight’s spill on the stage floor. Not all animation jobs look like Steamboat Willie.

This video had a few invisible animated elements, like removing stunt wire, removing a spot on the stage, and cleaning up the black portions of the frame.

What did you shoot on?
Khan: This video was shot with a Red Epic Dragon rocking the Fujinon 19-90.

What tools were used for post?
Salisbury: The software used on this video was Adobe Premiere and After Effects — Premiere for the basic assembly of the footage, and After Effects for the heavy graphical lifting and color correction. Everything looks better coming out of After Effects.

Are there tools that you wish you had access to?
Salisbury: Personally, I was pretty happy with the tools we had access to. For this concept, we had everything we needed, tool-wise.

Khan: Faster computers.

How much of what you do is music video work? Do you work differently depending on the genre?
Khan: My focus is music videos, though you can find me working on all types of projects. From the production standpoint, things are the same. The real difference comes from what can be done in front of the camera. In a music video, one does not need to follow the rules. In fact, it is encouraged to break the rules.

Salisbury: I get hired to direct music videos every so often. The budget tends to be what dictates the experience, whether it’s going to be a video of a band rocking out shot on a DSLR or a high-intensity animated spectacle. Music videos can be a chance to establish wild aesthetics without the burden of having to justify them in your film’s world. You can go nuts. It’s a music video!

Where do you find inspiration?
Khan: Inspiration comes from past filmmakers and artists alike. I also pay close attention to my peers; there is some incredible stuff coming out. For this project, we pulled from Gracie and Rachel's previous songs and visuals.

Salisbury: I find that I’m usually most influenced by old video games, but that wasn’t going to be a good fit for this band. My initial intention was to combine Gracie and Rachel’s aesthetic with a Quay Brothers aesthetic, but things shifted a bit by the end of the project.

Review: Audionamix IDC for cleaning dialogue

By Claudio Santos

Sound editing has many different faces. It is part of big-budget blockbuster movies and also an integral part of small hobby podcasting projects. Every project has its own sound needs. Some edit thousands upon thousands of sound effects. Others have to edit hundreds of hours of interviews. What most projects have in common, though, is that they circle around dialogue, whether in the form of character lines, interviews, narrators or any other format by which the spoken word guides the experience.

Now let’s be honest, dialogue is not always well recorded. Archival footage needs to be understood, even if the original recording was made with a microphone that was 20 feet away from the speaker in a basement full of machines. Interviews are often quickly recorded in the five minutes an artist has between two events while driving from point A to point B. And until electric cars are the norm, the engine sound will always be married to that recording.

The fact is, recordings are sometimes a little bit noisier than ideal, and it falls upon the sound editor to make it a little bit clearer.

To help with that endeavor, Audionamix has come out with the newest version of their IDC (Instant Dialogue Cleaner). I have been testing it on different kinds of material and must say that, overall, I'm very impressed with it. Let's first get the awkward parts of this conversation out of the way and see what IDC is not.

– It is not a full-featured restoration workstation, such as iZotope RX.
– It does not depend on the cloud like other Audionamix plugins.
– It is not magic.

Honestly, all that is fine because what it does do, it does very well and in a very straightforward manner.

IDC aims to keep it simple. You get three controls plus output level and bypass. This makes trying out the plugin on different samples of audio a very quick task, which means you don’t waste time on clips that are beyond salvation.
The three controls you get are:
– Strength: The aggressiveness of the algorithm
– Background: Level of the separated background noise
– Speech: Level of the separated speech
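Since two of the three controls are levels for "separated" stems, the mental model is simple: split the mix into speech and background, rebalance the two, and apply the result more or less aggressively. Here's that model as a deliberately schematic Python sketch; the separation itself is Audionamix's proprietary secret sauce, stubbed out below, so none of this is IDC's actual implementation:

```python
import numpy as np

def separate_speech(mix):
    """Placeholder for IDC's proprietary speech/background separation.
    A real separator returns two stems that sum (approximately) back
    to the input; here we just split the signal in half."""
    speech = 0.5 * mix
    return speech, mix - speech

def idc_like(mix, strength=0.5, speech_db=0.0, background_db=-24.0):
    """Schematic model of IDC's three controls: separate, rebalance the
    stems in dB, then use 'strength' as a rough analogue of how hard
    the algorithm is pushed (modeled here as a simple crossfade)."""
    speech, background = separate_speech(mix)
    rebalanced = (speech * 10 ** (speech_db / 20)
                  + background * 10 ** (background_db / 20))
    return (1.0 - strength) * mix + strength * rebalanced

noisy = np.random.default_rng(0).normal(0, 0.1, 48000)
cleaned = idc_like(noisy, strength=0.5, background_db=-24.0)
```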

Like all digital processing tools, things sound a bit techno-glitchy toward the extremes of the scales, but within reasonable parameters the plugin does a very good job of reducing background levels without garbling up the speech too noticeably. I personally had fairly good results with strengths around 40% to 60% and background reductions of up to -24 dB. Anything more radical than that sounded heavily processed.

Now, it’s important to make a note that not all noise is the same. In fact, there are entirely different kinds of audio muck that obscures dialogue, and the IDC is more effective against some than others.

Noise reduction comparison between the original clip (1), Cedar DNS Two VST (2), Audionamix IDC (3) and iZotope RX 7 Voice Denoise (4). The clip features loud air conditioner noise behind close-mic'd dialogue. All plugins had their level boosted by +4 dB after processing.

– Constant broadband background noise (air conditioners, waterfalls, freezers): Here the IDC does fairly well. I couldn’t notice a lot of pumping at the beginning and end of phrases, and the background didn’t sound crippled either.

– Varying broadband background noise (distant cars passing, engines from inside cars): Here again, the IDC does a good job of increasing the dialogue/background ratio. It does introduce artifacts when the background noise spikes or varies very abruptly, but if the goal is to increase intelligibility then it is definitely a success in that area.

– Wind: On this kind of noise the IDC needs a little helping hand from other processes. I tried to clean up some dialogue with heavy wind, and even though the wind was indeed lowered significantly, so was the speech under it, resulting in a pumping clip that rose and fell following the shadow of the removed wind. I believe that with some pre-processing using high-pass filters and a little bit of limiting, the results could have been better; if you are emergency-buying this to clean up bad wind audio, I'd definitely keep that in mind. It does work well on light wind reduction, but in those cases, too, it seems to benefit from some pre-processing.
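To make that pre-processing concrete, here's the sort of high-pass stage I have in mind, sketched with SciPy (the 80Hz corner and the filter order are just starting points to tune by ear, and the limiting step is left out for brevity):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass(audio, sample_rate, corner_hz=80.0, order=4):
    """High-pass the clip before denoising so low-frequency wind rumble
    isn't dominating what the separation algorithm has to work with."""
    sos = butter(order, corner_hz, btype="highpass",
                 fs=sample_rate, output="sos")
    return sosfilt(sos, audio)

sr = 48000
clip = np.random.default_rng(0).normal(0, 0.1, sr)  # stand-in for windy dialogue
pre_processed = highpass(clip, sr)  # then hand this to IDC
```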

Summing Up
I am happily impressed by the plugin. It does not work miracles, but no one should really expect any tool to do so. It is great at improving the signal-to-noise ratio of your sound and does so in a very easy-to-use interface, which allows you to quickly decide whether you like the results or not. That alone is a plus worth taking into consideration.


Claudio Santos is a sound mixer and tech aficionado who works at Silver Sound in NYC. He has worked on a wide range of sound projects, from traditional shows like I Was Prey for Animal Planet to VR experiences like The Mile-Long Opera.

Making the indie short The Sound of Your Voice

Hunt Beaty is a director, producer and Emmy Award-winning production sound recordist based in Brooklyn. Born and raised in Nashville, this NYU Tisch film school grad spent years studying how films got made — and now he’s made his own.

The short film The Sound of Your Voice was directed by Beaty and written and produced by Beaty, José Andrés Cardona and Wesley Wingo. This thriller focuses on a voiceover artist who is haunted by a past relationship as she sinks deep into the isolation of a recording booth.

Hunt Beaty

The Sound of Your Voice was shot on location at Silver Sound, a working audio post house in New York City.

What inspired the film?
This short was largely reverse-engineered. I work with Silver Sound, a production and post sound studio in New York City, so we knew we had a potential location. Given access to such a venue, Andrés lit the creative fuse with an initial concept and we all started writing from there.

I’ve long admired the voiceover craft, as my father made his career in radio and VO work. It’s a unique job, and it felt like a world not often portrayed in film/TV up to this point. That, combined with my experience working alongside VO artists over the years, made this feel like fertile ground to create a short film.

The film is part of a series of shorts my producers and I have been making over the past few months. We’re all good friends who met at NYU film undergrad. While narrative filmmaking was always our shared interest and catalyst for making content, the realities of staying afloat in NYC after graduation prompted a focus on freelance commercial work in our chosen crafts in order to make a living. It’s been a great ride, but our own narrative work, the original passion, was often moved to the backburner.

After discussing the idea for years, we drank too many beers one night and decided to start getting back into narrative work by making shorts within a particular set of constrained parameters: one weekend to shoot, no stunts/weapons or other typical production complicators, stay close to home geographically, keep costs low, finish the film fast and don't stop. We're getting too old to remain stubbornly precious.

Inspired by a class we all took at NYU called "Sight and Sound: Film," we built our little collective on the idea of rotating the director role while maintaining full support from the other two on whatever short is currently in production.

Andrés owns a camera and can shoot, Wesley writes and directs and also does a little bit of everything. I can produce and use all of my connections and expertise having been in the production and post sound world for so long.

We shot a film that Wesley directed at the end of November and released it in January. We shot my film in January and are releasing it here and now. Andrés just directed a film that we’re in post-production on right now.

What were you personally looking to achieve with the film?
My first goal was to check my natural inclination to overly complicate a short story, either by including too many characters or bouncing from one location to another.
I wanted to stay in one close-fitting place and largely focus on one character. The hope was I’d have more time to focus on performance nuance and have multiple takes for each setup. Realistically, with indie filmmaking, you never have the time you want, but being able to work closely with the actors on variations of their performances was super important. I also wanted to be able to focus on the work of directing as opposed to getting lost in the ambition of the production itself.

How was the film made?
The production was noticeably scrappy, as all of these films inevitably become. The crew was just the three of us, in addition to a rotating set of production sound recordists and an HMU artist (Allison Brooke), who all agreed to help us out.

We rented from Hand Held Films, which is a block away from Silver Sound, so we knew we could just wheel over all of the lights and grip equipment without renting a vehicle. Wesley would primarily focus on camera and lighting support for Andrés, but we were all functioning within an "all hands on deck" framework. It was never pretty, but we made it all happen.

Our cast was incredibly chill, and we had worked with Harry, the engineer, on our first short, Into Quiet. We shot the whole thing over a weekend (again, one of our parameters) so we could do our best to get back to our day-to-day.

Also, a significant amount of re-writing was done to the off-screen voices in post based on the performance of our actress. That gave us some interesting room to play: writing to the edit, tweaking the edit itself to fit the new script and recording our voice actors to the cut. Meta? Probably.

We’ve been wildly fortunate to have the support of our post-sound team at Silver Sound. Theodore Robinson and Tarcisio Longobardi, in particular, gave so much of themselves to the sound design process in order to make this come to life. Given my background as a production recordist, and simply due to the storyline of this short, sound design was vital.

In tandem with that hard work, we had Alan Gordon provide the color grading and Brent Ferguson the VFX.

What are you working on now?
Mostly fretting about our cryptocurrency investments. But once that all crashes and burns, we’re going to try and keep the movie momentum going. We’re all pretty hungry to make stuff. Doing feels better than sitting idly and talking about it.

L-R: Re-recording mixer Cory Choy, Hunt Beaty and supervising sound editor Tarcisio Longobardi.

We’re currently in post for Andrés’ movie, which should be coming out in a month or so. Wesley also has a new script and we’re entering into pre-production for that one as well so that we can hopefully start the cycle all over again. We’re also looking for new scripts and potential collaborators to roll into our rotation while our team continues to build momentum towards potentially larger projects.

On top of that, I’m hanging up the headphones more often to transition out of production sound work and shift to fully producing and directing commercial projects.

What camera and why?
The Red Weapon Helium, because the DP owns one already (laughs). But in all seriousness, it is an incredible camera. We also shot on elite anamorphic glass. We only had two focal lengths on set, a 50mm and a 100mm, plus a diopter set.

How involved were you in the edit?
DP Andrés Cardona single-handedly did the first pass at a rough cut. After that, my co-producer Wes Wingo and I gave elaborate notes on each cut. We also ended up re-writing some of the movie itself after reconsidering the overall structure of the film in light of our lead actress' strong performance in certain shots.

For example, I really loved the long close-up of Stacey’s eyes that’s basically the focal point of the movie’s ending. So I had to reconfigure some of the story points in order to give that shot its proper place in the edit to allow it to be the key moment the short is building up to.

The grade: What kind of look were you going for?
The color grade was done by Alan Gordon at Post Pro Gumbo using DaVinci Resolve. It was simply about fixing inconsistencies and finessing what we shot in camera.

What about the sound design and mix?
The sound design was completed by Ted Robinson and Tarcisio Longobardi. The final mix was handled by Cory Choy at Silver Sound in New York. All the audio work was done in Reaper.

Review: RTW Continuous Loudness Control

By Tarcisio Longobardi

In the past, the most common way to measure the “loudness” of an audio signal was to represent amplitude variations over a certain period of time through a VU meter, a peak meter or a waveform. These tools, however, don’t give us an accurate estimate of how humans will perceive loudness. As a result, it is possible for two audio sources with identical peak values to be perceived as having very different overall loudness.

For example, movie and television show audio usually has a relatively wide dynamic range — characters can go from a whisper to a shout — as opposed to commercials where the audio material is compressed. Consequently, commercials often sound much louder than content — despite both being within specified decibel peak levels. Audiences experience loudness jumps between programs and commercials, and those inconsistencies are the cause of much frustration and complaint.

Solution! LKFS/LUFS
To find a solution to this problem, a new kind of measurement that attempts to quantify our perception of loudness has been introduced.

The RTW Loudness Tools menu.

The International Telecommunication Union (ITU), the United Nations' specialized agency for information and communication technologies, introduced LKFS (described in ITU-R BS.1770), which stands for "Loudness, K-weighted, relative to Full Scale." It's a scale for audio measurement where a K-weighting filter (a filter that emphasizes the frequencies humans are more sensitive to) is applied to the audio material to obtain weighted measurements that estimate how loud a human listener will perceive a given piece of audio.
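For reference, the core of BS.1770 (simplified here; later revisions add gating so silence doesn't drag the measurement down) reduces to a single formula:

$$L_K = -0.691 + 10\log_{10}\left(\sum_i G_i\, z_i\right)\ \text{LKFS}$$

where $z_i$ is the mean square of the K-weighted signal in channel $i$ over the measurement interval, and $G_i$ is a per-channel weight (1.0 for the front channels, about 1.41 for the surrounds, with the LFE excluded from the measurement).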

The European Broadcasting Union (EBU) uses the term LUFS, which stands for "Loudness Units relative to Full Scale." Despite the different names, LKFS and LUFS are identical: both describe the same measurement, and one loudness unit is equal to one dB.

Since 2012, many European countries have adopted EBU R128, a set of rules that constrains the loudness of broadcast audio based on K-weighted loudness normalization. In the US, Congress passed the Commercial Advertisement Loudness Mitigation (CALM) Act, which took effect in 2012. The Act sets rules similar to EBU R128, requiring commercials to have the same average volume as the programs they accompany.
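If you want to experiment with these measurements outside of a metering product, the open-source pyloudnorm Python library implements a BS.1770-style meter in a few lines. A quick sketch (the file name is made up, and this is unrelated to RTW's implementation):

```python
import soundfile as sf
import pyloudnorm as pyln  # open-source BS.1770-style loudness meter

data, rate = sf.read("program.wav")         # hypothetical program audio
meter = pyln.Meter(rate)                    # K-weighted meter
loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")

# Normalize the program to the EBU R128 target of -23 LUFS.
normalized = pyln.normalize.loudness(data, loudness, -23.0)
sf.write("program_r128.wav", normalized, rate)
```

Note that this is static, whole-file normalization; it says nothing about the moment-to-moment dynamic control that a tool like CLC performs.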

RTW CLC
RTW Continuous Loudness Control (CLC) assures full EBU R128 compliance in a streamlined way, allowing the user to adjust program material to a target loudness value, as well as to an adjustable TruePeak value, with or without correcting the original loudness range.

CLC works either as a standalone application or as a plug-in inside a digital audio workstation. To test the software, I used the AAX version in Pro Tools 11 and the VST3 version in Reaper 5.18.

CLC has a very simple user interface. The display area is divided into sections: The "Metering" section is a fully compliant EBU meter. The left side shows the measured loudness values of the input audio; the right side shows the values of the corresponding processed signal. In the "Processing" section, the values and their dynamic processing are displayed on graphs. The bar graphs in the middle display the current increase or decrease of loudness and the current percentage reduction of the loudness range.

The circle in the middle shows whether or not the brick wall limiter is engaged and how much of the signal is limited. Finally, at the bottom there are a bypass button, a button for resetting the device and another for accessing the setup menus. The interface has a plain, straightforward look, which makes it easily readable. However, there’s a lot of unused space, which makes it unnecessarily big — taking up valuable screen real estate.

While testing CLC, I found it has a very simple yet functional workflow. Once the processing mode (dynamic, semi-dynamic, static) and target loudness are chosen, and the maximum LRA and true peak limits are set, the processor works automatically, granting us loudness normalization of the audio signal. I didn't have to tweak it any further unless I wanted to change the target values.

During my tests, I was impressed with CLC’s ability to handle loudness variations in a very inaudible way — there was no pumping or brick-walling.

RTW says CLC accomplishes its dynamic correction by using data acquired from realtime analysis of the audio content to predict the signal's future progress. I have to admit I was skeptical in the beginning, but the software was surprisingly able to perform dynamic corrections in a very transparent way, even in the first seconds of content, by using a technique that combines a look-ahead algorithm with statistical data. Dynamics compression is used only when very abrupt changes in dynamics occur, and adjustments triggered by abrupt volume increases are hardly audible.

CLC was able to recognize loudness increases caused by loud passages as a natural part of the signal's dynamics and thus leave them mostly unaltered. Low-dynamics passages pass through unaltered as well.

So CLC works well in its default mode, but it also has presets that suit different kinds of programs (e.g., news and discussion, movies, sports). The presets change various parameters of the user interface and other invisible characteristics of the processor. I found that choosing the right preset is an important part of the workflow, because it heavily affects the way the software handles the same material.

Although these presets are effective, it would be preferable to have more control over the dynamic processing. Once a preset is chosen, it is impossible to make any changes. It would be useful to be able to fine-tune parameters and to know exactly what the software is doing to handle a specific situation.

While CLC is designed for a realtime workflow, the standalone application also features an interesting offline operation mode called "file mode," which processes the loudness of audio coming from a file. The loaded audio file is analyzed and processed according to your settings and stored as a new audio file. Analyzing the complete file before processing allows a more precise result with regard to the target value.

Oh, and the CLC was just as easy to use in 5.1 as it was in stereo.

Summing Up
CLC has a simple workflow and is extremely user friendly. Its advanced algorithm sounds good, and will help engineers make their programs CALM-compliant in a simple and efficient process. However, its “set and forget” workflow doesn’t allow the user fine control.

Supported platforms are: Windows 7, 8, 10: VST2.4, VST3, RTAS, AAX and standalone; Mac OS X: VST3, RTAS, AAX Native 64, AU and standalone. System requirements are: dual-core 2.5GHz processor, 4GB RAM, 200MB free hard disk space, iLok USB smart key and iLok account; an Internet connection is required for the activation process. Supported sample rates are: 44.1kHz, 48kHz, 88.2kHz and 96kHz.

Tarcisio Longobardi is a sound engineer at Silver Sound Studios in New York City.

Ergonomics from a post perspective (see what I did there?)

By Cory Choy

Austin’s SXSW is quite a conference, with pretty much something for everyone. I attended this year for three reasons: I’m co-producer and re-recording mixer on director Musa Syeed’s narrative feature film in competition, A Stray; I’m a member of the New York Post Alliance and was helping out at our trade show booth; and I’m a blogger and correspondent for this here online publication.

Given that my studio, Silver Sound in New York, has been doing a lot of sound for virtual reality recently, and given the mad scramble that every production company, agency and corporation has been in to make virtual reality content, I was pretty darn sure that my first post was going to be about VR (and don't fear, I will be following up with one soon). But while I was checking out the new 360-degree video camera and rig offerings from Theta360 and 360Heros, and taking a good look at the new Micro Cinema Camera from Blackmagic, I noticed a pretty enthused and sizable crowd at one of the booths. The free Stella Artois beer samples were behind me, so I was pretty excited to go check out what I was sure must be the hip, new virtual reality demonstration, The Martian VR Experience.

To my surprise, the hot demo wasn’t for a new camera rig or stitching software. It was for a chair… sort of. Folks were gathered around a tall table playing with Legos while resting on the Mogo, the latest “leaning seat” offering from inventor Martin Keen’s company, Focal Upright. It’s kind of a mix between a monopod, a swivel stool and an exercise ball chair, and it comes in a neat little portable bag — have chair, will travel! Leaning chairs allow people to comfortably maintain good posture while at their workstations. They also encourage you to work in a position that, unlike most traditional chairs, allows for good blood flow through the legs.

They were raffling off one of those suckers, hence all the people around. I didn't win, but I did have the opportunity to talk to Keen about his products — a full line of leaning chairs, standing desks and workstations. Keen's a really nice fellow, and I'm going to follow up with a more in-depth interview in the future. For now, though, the basics are that Keen's company, Focal Upright, is one of several companies that have emerged to help folks who spend the majority of their days sitting (i.e. all of us post professionals) figure out a way to bring better posture and health back into their daily routines.

As a sound engineer, and therefore as someone who spends a whole lot of time every day at a console or mixing board, ergonomics is something I’ve had to pay a lot of attention to. So I thought I might share some of my, and my colleagues’, ergonomics experiences, thoughts and solutions.

Standing, Sitting and Posture
We’ve all been hearing about it for a while — sitting for extended periods of time can be bad for you. Sitting with bad posture can be even worse. My buddy and co-worker Luke Allen has been doing design and editing at a standing desk for the last couple of years, and he swears that it’s one of the best work decisions he’s ever made. After the first couple of months though, I noticed that he was complaining that his feet were getting tired and his knees hurt. In the same pickle? Luke solved his problem with a chef’s mat, like this one. Want to move around a little more at the standing desk? Check out the Level from FluidStance, another exhibitor at this year’s SXSW show. Not ready for a standing desk? Maybe try exploring a ball chair or fluid disc from physical therapy equipment manufacturer Isokinetics Inc.

Feel a little silly with that stuff? Instead, try getting up and walking around, or stretching every 20 minutes or so — 30 seconds to a minute should do. When I was getting started in this business, I was lucky enough to have the opportunity to apprentice under sound master craftsman Bernie Hajdenberg. I first got to observe him working in the mix, and then after some time, I had the privilege of operating sessions with him. One of the things that struck me was that Bernie usually stood up for the majority of the mixing sessions, and he would pace while discussing changes. When I was operating for him, he had me sit in a seat with no arms that could be raised pretty high. He told me this was very important, and it’s something that I’ve continued throughout my career. And lo and behold, I now realize that part of what Bernie had me do was to make sure that I wasn’t cutting off the circulation in my legs by keeping them extended and a little in front of me. And the chair with no arms helped keep my back straight.

Repetitive Stress
People who use their fingers a lot, whether typing or using a mouse, run the risk of developing a repetitive stress injury. Personally, I had a lot of wrist pain after my first year or so. What to do? First, make sure that your setup isn't forcing you to put your hands or wrists in an uncomfortable position. One of the things I did was elevate my mouse pad and keyboard. My buddy Tarcisio, and many others, use a trackball mouse. Try to break up your typing or mouse movements every couple of minutes with frequent, short bursts of finger stretches. After a few weeks of introducing stretching into my routine, my wrist and finger pain was greatly alleviated.

Cory Choy is an audio engineer and co-founder of Silver Sound Studios in New York City. He was recently nominated for an Emmy for “Outstanding Sound Mixing for Live Action” for Born To Explore.

Making our dialogue-free indie feature ‘Driftwood’

By Paul Taylor and Alex Megaro

Driftwood is a dialogue-free feature film that focuses on a woman and her captor in an isolated cabin. We chose to shoot entirely MOS… because we are insane. Or perhaps we were insane to shoot a dialogue-free feature in the first place, but our choice to remove sound recording from the set was both freeing and nerve-wracking, given the potential post production nightmare that lay ahead.

Our decision was based on how, without speech to carry along the narrative, every sound would need to be enhanced to fill in the isolated world of our characters. We wanted draconian control over the soundscape, from every footstep to every door creak, but we also knew the sheer volume of work involved would put off all but the bravest post studios.

The film was shot in a week with a cast of three and a crew of three in a small cabin in Upstate New York. Our camera of choice was a Canon 5D Mark II with an array of Canon L-series lenses. We chose the 5D because we already owned it — so more bang for our buck — and also because it gave us a high-quality image, even with such a small body. Its ease of use allowed us to set up extremely quickly, which was important considering our extremely truncated shooting schedule. Having no sound team on set allowed us to move around freely without the concerns of planes passing overhead or cars rumbling in the distance delaying a shot.

The Audio Post
The editing was a wonderfully liberating experience in which we cut purely to image, never once needing to worry about speech continuity or a host of other factors that often come into play with dialogue-driven films. Driftwood was edited on Apple’s Final Cut Pro X, a program that can sometimes be a bit difficult for audio editing, but for this film it was a non-issue. The Magnetic Timeline was actually quite perfect for the way we constructed this film and made the entire process smooth and simple.

Once picture locked, we brought the project to New York City's Silver Sound Studios, who jumped at the chance to design the atmosphere for an entire feature from the ground up. We sat with the engineers at Silver Sound and went through Driftwood shot by shot, creating a master list of all the sounds we thought necessary to include. Some were obvious, such as footsteps, breathing and ticking clocks; others less so, such as the humming of an old refrigerator or the creaking of a wooden chair.

Once the initial list was set, we discussed whether to use stock audio or rerecord everything at the original location. Again, because we wanted complete control to create something wholly unique, we concluded it was important to return to the cabin and capture its particular character. Over the course of a few days, the Silver Sound gang rerecorded nearly every sound in the film, leaving only some basic Foley work to complete in their studio.

Once their library was complete, one of the last steps before mixing was to ADR all of the breathing. We had the actors come into the studio over a one-week period during which they breathed, moaned and sighed inside Silver Sound's recording booth. These subtle sounds are taken for granted in most films, but for Driftwood they were of utter importance. The way the actors would sigh or breathe could change the meaning behind that sound and change the subtext of the scene. If the characters cannot talk, then their expressions must be conveyed in other ways, and in this case we chose a more physiological track.

By the time we completed the film we had spent over a year recording and mixing the audio. The finished product is a world unto itself, a testament to the laborious yet incredibly exciting work performed by Silver Sound.

Driftwood was written, directed and photographed by Paul Taylor. It was produced and edited by Alex Megaro.

The sound of VR at Sundance and Slamdance

By Luke Allen

If last year’s annual Park City film and cultural meet-up was where VR filmmaking first dipped its toes in the proverbial water, count 2016’s edition as its full on coming out party. With over 30 VR pieces as official selections at Sundance’s New Frontier sub-festival, and even more content debuting at Slamdance and elsewhere, festival goers this year can barely take two steps down Main Street without being reminded of the format’s ubiquitous presence.

When I first stepped onto the main demonstration floor of New Frontier (which could be described this year as a de-facto VR mini-festival), the first thing that struck me was, why was it so loud in there? I admit I’m biased since I’m a sound designer with a couple of VR films being exhibited around town, but I am definitely backed up by a consensus among content creators regarding sound’s importance to creating the immersive environment central to VR’s promise as a format (I know, please forgive the buzzwords). In seemingly direct defiance of this principle, Sundance’s two main public exhibition areas for all the latest and greatest content were inundated with the rhythmic bass lines of booming electronic music and noisy crowds.

I suppose you can’t blame the programmers for some of this — the crowds were unavoidable — but I can’t help contrasting the New Frontier experience with the way Slamdance handled its more limited VR offering. Both festivals required visitors to sign up for a viewing time, but while the majority of Sundance’s screenings involved strapping on a headset while seated on a crowded bench in the middle of the demonstration floor, Slamdance reserved a quiet room for the screening experience. Visitors were advised to keep their voices to a murmur while in the viewing chamber, and the screenings took place in an isolated corner seated on — crucially — a chair with full range of motion.

Why is this important? Consider the nature of VR: the viewer has the freedom to look around the environment at their own discretion, and the best content creators make full use of the 360 degrees at their disposal to craft the experience. A well-designed VR piece will use directional sound mixing to cue the viewer to look in different directions in order to further the story. It will also incorporate deep soundscapes that shift as one looks around the environment in order to immerse the viewer. Full range of motion, including horizontal rotation, is critical to allowing this exploration to take place.
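Those shifting soundscapes are usually delivered as Ambisonics: the sound field is stored in a few channels and counter-rotated against the viewer's head orientation at playback. For first-order B-format, a horizontal head turn is just a 2D rotation of the X and Y components, as in this simplified Python sketch (yaw only; sign conventions vary between tools, so treat the directions as illustrative):

```python
import numpy as np

def rotate_bformat_yaw(w, x, y, z, yaw_rad):
    """Rotate a first-order Ambisonic (B-format) sound field to follow
    the listener's yaw. W (omni) and Z (height) are unaffected by a
    horizontal head turn; X and Y rotate like a 2D vector."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return w, c * x + s * y, -s * x + c * y, z

# Example: the viewer turns 90 degrees, so a source that was dead ahead
# should come from the side once the rotated field is decoded.
w, x, y, z = (np.zeros(48000) for _ in range(4))
w, x, y, z = rotate_bformat_yaw(w, x, y, z, np.pi / 2)
```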

The Visitor, which I had the pleasure of experiencing in Slamdance’s VR sanctuary, put this concept to use nicely by placing the two lead characters 90 degrees apart from one another, forcing the viewer to look around the beautifully-staged set in order to follow the story. Director James Kaelan and the post sound team at WEVR used subtly shifting backgrounds and eerie footsteps to put the viewer right in the middle of their abstract world.


Sundance’s New Frontier VR Bar.

Resonance, an experience directed by Jessica Brillhart that I sound designed and engineered, features violinist Tim Fain performing in a variety of different locations, mostly abandoned, selected both for their visual beauty and their unique sonic character. We used an Ambisonic microphone on set in order to capture the full range of acoustic reflections and, with a lot of love in the mix room at Silver Sound, were able to recreate these incredible sonic landscapes while enhancing the directionality of Fain's playing in order to help the viewer follow him through the piece. (Unfortunately, when Resonance was screening at Sundance's New Frontier VR Bar, there was a loudspeaker playing Top 40 hits located about three feet above the viewer's head.)

In both of these live-action VR films, sound and picture serve to enhance and guide the experience of the other, much like in traditional cinema, but in a new and more enchanting way. I have had many conversations with other festival attendees here in Park City in which we recall shared VR experiences much like shared dreams, so personal and haunting is this format. We can only hope that in future exhibitions more attention is paid to ensure that viewers have the quiet they need to fully experience the artists’ work.

Luke Allen is a sound designer at Silver Sound Studios in New York City. You can reach him at luke@silversound.us

Slamdance, Sundance: Why it’s 
important to audio post pros

By Cory Choy

Why are we, audio post professionals, in Park City right now? The most immediate reason is that Silver Sound has some skin in the game this year: we are both executive producers and the post sound team for Driftwood, a narrative feature in competition at Slamdance that was shot completely MOS. We also provided production and post audio on Resonance and World Tour, part of Google's featured Google Cardboard VR demos at Sundance's New Frontier.

Sundance’s footprint is everywhere here. During the festival, the entirety of Park City is transformed — schools, libraries, cafes, restaurants, hotels and office buildings are now venues for screenings, panel discussions and workshops. A complex and comprehensive network of shuttle busses allows festival goers to get around without having to rely on their own vehicles.

Tech companies, such as Samsung and Canon, set up public areas for people to rest, talk, demo their wares and mingle. You can't take three steps in any direction without bumping into a director, producer or someone who provides services to filmmakers. In addition to being chock full of industry folk — and this is a very important ingredient — Park City is charming, beautiful and very different from the American film hubs, New York and Los Angeles. So people are in a relaxed and friendly mood.

Films in competition at Sundance often feature big-name actors, receive critical acclaim and, more and more often, receive distribution. In short, this is the place to make personal connections with "indie" filmmaking professionals who are either directly, or through friends, connected to the studio system.

As a partner and engineer at a boutique sound studio in Manhattan, I see this as a fantastic opportunity to cut through the noise and hopefully put myself, and my company, on the radar of folks with whom I might not otherwise get a chance to meet or collaborate. It’s a chance for me, a post professional in the indie world, to elevate my game.

Slamdance
Slamdance sets up shop in one very specific location, the Treasure Mountain Inn on Main Street in Park City. It happens at the same time as Sundance — and is located right in the eye of the storm — but has built a reputation for celebrating the most indie of the indies. Films in competition at Slamdance must have budgets under one million dollars (and many have budgets far below that). Where Sundance is a sprawling behemoth — long lines, hard-to-get tickets, dozens of venues, the inability to see all that is offered — Slamdance sort of feels like a friend's very nice living room.


Many folks see most of the line-up of films, or even all of it. There's no rushing about to different locations. Slamdance embraces the DIY ethos and is about empowering people outside of the industry establishment. Tech companies such as Blackmagic and Digital Bolex hold workshops geared toward helping filmmakers with smaller budgets make films unencumbered by technical limits. This is a place where daring and often new or first-time filmmakers showcase their work. Often it is the first time, or one of the first times, they've gone through the post and finishing process. It is the perfect place for an audio professional to shine.

In my experience, the films that screen best at Slamdance — the ones that are the most immersive and get the most attention — are the ones with a solid sound mix and a creative sound design. This is because some of the films in competition have had minimal or no post sound. They are enjoyable, but the audience finds itself sporadically taken out of the story for technical reasons. The directors and producers of these films are going to keep creating, and after being exposed to and competing against films with very good sound, are probably going to be looking to forge a creative partnership — one that could quite possibly grow and last the entirety or majority of their future careers — with a post sound person or team. Like Silver Sound!

Cory Choy is an audio engineer and co-founder of Silver Sound Studios in New York City.

Review: Nugen Audio’s Halo Upmixer

By Robin Shore

Upmixing is nothing new. The basic goal is to take stereo audio and convert it to higher channel count formats (5.1, 7.1, etc.) that can meet surround sound delivery requirements. The most common use case for this is when needing to use stereo music tracks in a surround sound mix for film or television.

Various plug-ins exist for this task, and the results run the gamut from excellent to lackluster. In terms of sonic quality, Nugen Audio's new Halo Upmixer plug-in falls firmly on the excellent side of this range. It creates a nice enveloping surround field while staying true to the original stereo mix, and it doesn't seem to rely on any of the weird reverb or delay effects that you sometimes find in other upmix plug-ins.

The Halo Upmix I/O panel.

What really sets Halo apart is its well-thought-out design, and the high level of control it offers in sculpting the surround environment.

Digging In
At the top of the plug-in window is a dropdown for selecting the channel configuration of the upmix — you can select any standard format from LCR up to 7.1. The centerpiece of Halo is a large circular scope that gives a visual representation of the location and intensity of the upmixed sound. Icons representing each speaker surround the scope, and can be clicked on to solo and mute individual channels.

Several arcs around the perimeter of the scope provide controls for steering the upmix. The Fade arcs around the scope will adjust how much signal is sent to the rear surround channels, while the Divergence arc at the top of the scope adjusts the spread between the mono center and front stereo speakers. On the left side of the scope is a grid representing diffusion. Increasing the amount of diffusion spreads the sound more evenly throughout the surround field, creating a less directionally focused upmix. Lower values of diffusion give a more detailed sound, with greater definition between the front and rear.
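Nugen doesn't disclose Halo's processing, but the classic passive upmix trick gives a feel for what these controls steer: correlated (mid) content can be shared between the center speaker and the front pair, while decorrelated (side) content supplies the ambience for the rears. A deliberately crude Python sketch, in no way Halo's actual algorithm:

```python
import numpy as np

def naive_upmix(left, right, fade=0.5, divergence=0.5):
    """Toy stereo-to-5.0 upmix. 'divergence' splits correlated energy
    between the center speaker and the front pair; 'fade' scales how
    much decorrelated ambience is sent to the rears."""
    mid = 0.5 * (left + right)    # correlated, center-ish content
    side = 0.5 * (left - right)   # decorrelated, ambient content
    center = (1.0 - divergence) * mid
    front_l = divergence * mid + side   # divergence=1 restores original L/R
    front_r = divergence * mid - side
    rear_l, rear_r = fade * side, -fade * side
    return center, front_l, front_r, rear_l, rear_r

rng = np.random.default_rng(0)
l, r = rng.normal(0, 0.1, 48000), rng.normal(0, 0.1, 48000)
c, fl, fr, rl, rr = naive_upmix(l, r, fade=0.7, divergence=0.5)
```

Halo clearly goes far beyond this (no reverb or delay tricks, yet a convincing surround field), but the sketch shows which signal relationships the Fade and Divergence arcs are playing with.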

The LFE channel in the upmix can be handled in two ways. The "normal" LFE mode in Halo adds content to the LFE channel based on low frequencies in the original source. This is nice for adding a little extra oomph to the mix, and it also preserves the LFE information when downmixing back to stereo.

For those who are worried about adding too much additional bass into the upmix, the "Split" LFE mode works more like a traditional crossover, siphoning off low frequencies into the LFE without leaving them in the full-range channels.
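Split mode, in other words, behaves like a bass-management crossover, while the normal mode duplicates the lows instead of moving them. A rough SciPy sketch of both behaviors (the 80Hz corner is a common bass-management choice, not a documented Halo value, and a production crossover would use phase-matched filters):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def derive_lfe(channel, sample_rate, corner_hz=80.0, split=False):
    """Toy model of the two LFE modes for one full-range channel.
    normal: copy the lows into the LFE and leave the channel intact.
    split:  move the lows into the LFE, crossover-style."""
    sos = butter(4, corner_hz, btype="lowpass",
                 fs=sample_rate, output="sos")
    lows = sosfilt(sos, channel)
    if split:
        return channel - lows, lows  # lows removed from the mains
    return channel, lows             # lows duplicated into the LFE

sr = 48000
ch = np.random.default_rng(0).normal(0, 0.1, sr)
mains, lfe = derive_lfe(ch, sr, split=True)
```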

The Halo Upmix 5.1 main view, using color to determine the energy source.

An Easy And Nuanced UI
The layout and controls in Halo are probably the best I've ever seen in this sort of plug-in. Moving the Fade and Divergence arcs around the circle feels very smooth and intuitive, almost like gesturing on a touchscreen, and the position of the arcs along the edge of the scope seems to correspond really well with what I hear through the speakers.

New users should have no problem quickly wrapping their heads around the basic controls. The diffusion is an especially nice touch as it allows you to very quickly alter the character of the upmix without drastically changing the overall balance between front, rear and center. Typically, I’ve found that leaving the diffusion somewhere on the higher end gives a nice even feel, but for times when I want the upmix to have a little more punch, dragging the diffusion down can really add a lot.

Of course, digging a little deeper reveals some more nuanced controls that may take some more time to master. Below the scope are controls for a shelf filter which, combined with higher levels of diffusion, can be used to dull the surround speakers without decreasing their overall level. This ensures that sharp transients in the rear don’t pop out too much and distract the audience’s attention from the screen in front of them.

The Center window focuses only on the front speakers and gives you some fine control over how the mono center channel is derived and played back. An I/O window acts like a mixer, allowing you to adjust the input level of the stereo source, as well as levels for each individual channel in the upmix. The settings window provides a high level of customization for the appearance and behavior of the plug-in. One of my favorite things here is the ability to assign different colors to each channel in the surround scope, which, aside from creating a really neat-looking display, gives a nice clear visual representation of what's happening with the upmix.

The Halo Upmix I/O panel and the 7.1 main view.

Playback
One of the most important considerations in an upmix tool is how it will all sound once everything is folded down for playback from televisions and portable devices, and Halo really shines here. Less savvy upmixing can cause phasing and other issues when converted back to stereo, so it’s important to be able to compare as you are working.

A monitoring section at the bottom of the plug-in allows you to switch between listening to the original source audio, the upmixed version and a stereo downmix, so you can be certain that your mix is folding down correctly. If that's not enough, hitting the "Exact" button will guarantee that the downmixed version matches the stereo original completely by disabling certain parameters that might affect the downmix. All of this can be done as you're listening in realtime, allowing for fast and easy A/B comparisons.

Summing Up
Nugen has really put out a fine, well-thought-out product with the Halo Upmixer. It's at once simple to operate and incredibly tweakable, and it gives lots of attention to important technical considerations. Above all, it sounds great. Mixers who often find themselves having to fit two-channel music into a multichannel mix will be hard-pressed to find a nicer solution than this.

Halo Upmixer retails for $499 and is available in AAX, AU, VST2 and VST3 formats.

Robin Shore is a co-owner and audio post pro at Silver Sound Studios in New York City.