
NAB: Adobe’s spring updates for Creative Cloud

By Brady Betzel

Adobe has had a tradition of releasing Creative Cloud updates prior to NAB, and this year is no different. The company has been focused on improving existing workflows and adding new features, some based on Adobe's Sensei technology, along with enhancements to its VR tools.

In this release, Adobe has announced a handful of Premiere Pro CC updates. While I personally don’t think that they are game changing, many users will appreciate the direction Adobe is going. If you are color correcting, Adobe has added the Shot Match function that allows you to match color between two shots. Powered by Adobe’s Sensei technology, Shot Match analyzes one image and tries to apply the same look to another image. Included in this update is the long-requested split screen to compare before and after color corrections.

Motion graphic templates have been improved with new adjustments like 2D position, rotation and scale. Automatic audio ducking has been included in this release as well. You can find this feature in the Essential Sound panel, and once applied it will essentially dip the music in your scene based on dialogue waveforms that you identify.

Still inside of Adobe Premiere Pro CC, but also applicable in After Effects, is Adobe's enhanced Immersive Environment. This update is for people who use VR headsets to edit and/or process VFX. Team Project workflows have been updated with better version tracking and indicators of who is using bins and sequences in realtime.

New Timecode Panel
Overall, while these updates are helpful, none are barn burners. The one that does have me excited is the new Timecode Panel, the biggest new addition to the Premiere Pro CC app. For years now, editors have been clamoring for more than just one timecode view. You can view sequence timecodes, source media timecodes from the clips on the different video layers in your timeline, and you can even view the same sequence timecode in a different frame rate (great for editing those 23.98 shows to a 29.97/59.94 clock!). And one of my unexpected favorites is the clip name in the timecode window.

I was testing this feature in a pre-release version of Premiere Pro, and it was a little wonky. First, I couldn't dock the timecode window. While I could add lines and access the different menus, my changes wouldn't apply to the row I had selected. In addition, I could only right-click and try to change the first row of contents, but it would choose a random row to change. I am assuming the final release has this all fixed. If the wonkiness gets ironed out, this is a phenomenal (and necessary) addition to Premiere Pro.

Codecs, Master Property, Puppet Tool, more
There have been some codec compatibility updates as well, specifically support for Sony X-OCN raw (Venice), Canon Cinema RAW Light (C200) and Red IPP2.

After Effects CC has also been updated with Master Property controls. Adobe said it best during their announcement: “Add layer properties, such as position, color or text, in the Essential Graphics panel and control them in the parent composition’s timeline. Use Master Property to push individual values to all versions of the composition or pull selected changes back to the master.”

The Puppet Tool has been given some love with a new Advanced Puppet Engine, which improves mesh and starch workflows for animating static objects. The Add Grain, Remove Grain and Match Grain effects are now multi-threaded, and enhanced disk caching and project management improvements have been added as well.

My favorite update for After Effects CC is the addition of data-driven graphics. You can drop in a CSV or JSON data file and pick-whip its values to layer properties to control them. In addition, you can drag and drop data right onto your comp to use the actual numerical value. Data-driven graphics is a definite game changer for After Effects.
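
To make that concrete, here is a hypothetical example of the kind of flat data file the feature expects; the column names and values below are invented for illustration, and generating the CSV in Python is just one convenient way to produce it.

```python
# Hypothetical example: build a simple CSV whose columns could be
# pick-whipped to layer properties in a data-driven After Effects template.
# The file name and field names are made up for illustration.
import csv

rows = [
    {"week": "Week 1", "team_a_score": 42, "team_b_score": 38},
    {"week": "Week 2", "team_a_score": 45, "team_b_score": 36},
    {"week": "Week 3", "team_a_score": 47, "team_b_score": 35},
]

with open("scores.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["week", "team_a_score", "team_b_score"])
    writer.writeheader()
    writer.writerows(rows)
```

Once a file like this is imported as footage, its values can be linked to layer properties much like any other expression source.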

Audition
While Adobe Audition is an audio mixing application, it has some updates that will directly help anyone looking to mix their edit in Audition. In the past, to get audio to a mixing program like Audition, Pro Tools or Fairlight, you would have to export an AAF (or, if you are old like me, possibly an OMF). In the latest Audition update, you can simply open your Premiere Pro projects directly in Audition, re-link video and audio and begin mixing.

I asked Adobe whether you could go back and forth between Audition and Premiere, but it seems like it is a one-way trip. They must be expecting you to export individual audio stems once you are done in Audition for final output. In the future, I would love to see back-and-forth capabilities between apps like Premiere Pro and Audition, much like the Fairlight tab in Blackmagic's Resolve. There are some other updates, like larger tracks and under-the-hood improvements, which you can read more about at https://theblog.adobe.com/creative-cloud/.

Adobe Character Animator has some cool updates, like improvements to overall character building, but I am not too involved with Character Animator, so you should definitely read about things like the Trigger Improvements on Adobe's blog.

Summing Up
In the end, it is great to see Adobe moving forward on updates to its Creative Cloud video offerings. Data-driven animation inside of After Effects is a game-changer. Shot color matching in Premiere Pro is a nice step toward a professional color correction application. Importing Premiere Pro projects directly into Audition is definitely a workflow improvement.

I do have a wishlist though: I would love for Premiere Pro to concentrate on tried-and-true solutions before adding fancy updates like audio ducking. For example, I often hear people complain about how hard it is to export a QuickTime out of Premiere with either stereo or mono/discrete tracks. You need to set up the sequence correctly from the jump, adjust the pan on the tracks, as well as adjust the audio settings and export settings. Doesn’t sound streamlined to me.

In addition, while shot color matching is great, let’s get an Adobe SpeedGrade-style view tab into Premiere Pro so it works like a professional color correction app… maybe Lumetri Pro? I know if the color correction setup was improved I would be way more apt to stay inside of Premiere Pro to finish something instead of going to an app like Resolve.

Finally, consolidating and transcoding used clips with handles is hit or miss inside of Premiere Pro. Can we get a rock-solid consolidate and transcode feature? Regardless of these few negatives, Premiere Pro is an industry staple and it works very well.

Check out Adobe’s NAB 2018 update video playlist for details on each and every update.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Tips for music sourcing and usage

By Yannick Ireland

1. Music Genre vs. Video Theme
Although there are no restrictions, nor an exact science, when choosing a music genre for your video content, some genres reliably suit certain video themes.

For example, you may have a classic cinematic scene of lovers meeting for the first time. These visuals could be well complemented by a more orchestral, classical production, as generally there is a lot of emotive expression in this sort of music.

Another example would be sports video paired with electronic music. The high-adrenaline nature of electronic genres is a match made in heaven for extreme sports content. However, I would like to echo my first sentiment about there being no restrictions: you may well choose to use something so unconventional that it creates a shock reaction, which may indeed be the desired effect.

But if you want subconscious acceptance from your viewers that the music really suits your imagery and that they were meant to be together, do some research into similar successful content; from there you will be able to analyze the genre and attempt to replicate that successful marriage yourself.

2. Instruments for Feelings
Now let’s go a little deeper with the first tip and single out the instruments themselves. Two tracks of the same genre may have completely different instrumentation within their construction, and this could be relevant to your production.

If a filmmaker is working on something cinematic, then pieces of music with an instrumental solo could be invaluable for the feeling you are trying to convey. There have been scholarly articles taking a more psychological look at why certain emotions are triggered by certain instruments… but let's keep it simple for now. For instance, music box sounds, xylophones and bells have always evoked a feeling of youth or enforced a child-like context in a production, especially as single instruments.

But remember, just because you have decided on a genre for your theme does not mean any good quality track will do. Listen to its makeup and content. Does it fulfill your intention?

3. Keep it Simple
A relatively easy, yet extremely important tip: don’t get an overly congested or epic-sounding track. Going orchestral and epic is fine for a similarly grand moment in your film, but when pairing any audio to video there is always a great danger of drawing the viewer away from the production itself due to overly intrusive music or audio.

Music is supposed to aid and complement your production, not draw you away from it. So even if the track sounds amazing and full at first listen, be aware of its potential to ultimately be detrimental overall.

4. Does the Track Change With Your Content?
Video productions generally change throughout their linear journey, and maybe your music should too. The obvious example of this would be the audio and video both reaching a crescendo together at the production’s conclusion.

In music, there is not always the formula of starting at “A” and finishing at “B,” because modern electronic and instrumental productions have very different middle eights or bridges. The fact that the music may switch up somewhere within the middle may be ideal for your video’s timeline, so perhaps you want to break the mold and change the vibe or content somewhere in the middle of the project. Certain tracks could help you do that seamlessly.

I would just like to suggest that you think past the ideal genre and instrumentation, and that you really consider how the track is executed and whether it is the best option for your production. The right music can enhance a video project more than anticipated, and filmmakers should really get the most out of their audio.

5. Get a Second Opinion
Even working under certain guidelines and being prompted to think a certain way when sourcing music, it is always worth getting a second opinion to see if your experiences with the music are shared. Odds are that with a little extra time, you will find something much better than you would have by choosing something that sounded "good enough." So never devalue a quick opinion check with your peers.

So, What’s Next?
Now that you know what to consider when browsing music and what potential attributes to look for (and what to avoid), the next question is, "Where do you get your audio?"

So let's say you have an ideal, familiar track in your head that would perfectly suit your production. The problem is that it may be a famous artist's track that would cost thousands of dollars to license, so that's a non-starter. But don't fret. Fortunately, there are now affordable and quality alternatives thanks to royalty-free music libraries, essentially stock music.

Video editors, filmmakers and content creators of all kinds can visit these libraries to not only buy the track they need, but also get an automated license provided to them immediately with the purchase. There is no contacting artists or record labels, no complications on royalty split or composition and recording terms – it’s simple and consolidated.

The good news is there are plenty of these libraries around, but do your due diligence – and make sure the audio is high-quality and the pricing structure is simple.

High-quality music is incredibly important for all creative video productions. Now it is abundantly available, and not at extreme cost.


Yannick Ireland (@ArtisoundYan) is a musician, music producer and founder of Artisound, which is based in London.


Behind the Title: Freelance sound editor Cathleen Conte

NAME: Cathleen Conte

COMPANY: Freelancer

WHAT’S YOUR JOB TITLE?
Sound Editor/Recording Engineer

WHAT DOES THAT ENTAIL?
I use the sound recorded from production shoots and Foley, and I also pull from SFX libraries. I clean up the sounds and place them in sync with the visuals. As a sound editor for film shorts and TV spots, I create and modify sounds to match and support the visuals. To create those sounds, I like to first get a feel for and gain an understanding of the story the visuals want or need to tell. I love when the story allows me to record Foley. There is something magical about interpreting what a visual would sound like.

For instance, rain sounds like millions of drops of water falling from the sky at the same time. Instead, I can create a soundscape that makes you feel like you can walk in between the raindrops and not get wet (I love that).

I’m grateful when an editor makes all the production soundtracks available. It makes the workflow much stronger by having the original sounds, being able to clean them up and either use them solo or tuck them under to build an SFX bed. When I don’t do Foley, I pull pre-recorded mastered sounds from a large database — sound effects libraries.

As a recording engineer, working with many different voiceover talents is always a treat. You never know what creative character just walked into your day (laughs). With just a very brief introduction, I can get a sense or tone of the talent and gauge their mood. I try to make them feel very welcome and comfortable in the booth and while on mic.

You don’t always get a chance to get a good mic check for levels. Some voice actors like to just get started. I get asked “are we rolling” while I’m still in the booth adjusting the mic, or with my hands still on the music stand. When you’re lucky, the talent will read through half the script at full delivery, which helps me track their golden voice. I like to take what I feel is a non-intrusive approach to small talk. From there I can put myself in a good starting place for levels and how much compression I’ll need to start with, if at all.

It’s important to make a connection with your talent in the booth. Establishing that connection will help capture the best reads, which is what the director wants and needs from the talent. Besides the technical aspects of working the gear and getting a good sound, it’s important to do the groundwork and make that human connection solid. From there, I let the gear do what it was made to do and record beautiful voices and sound.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Copywriting. Many times there are script changes that happen during a session for whatever reason — maybe a script is too long, too short or the legal department has flagged a line. They are almost always deadline driven. I’m often asked, “What do you think? What can we replace this line with? What will rhyme with it?”

My first rule at the start of a session, in addition to establishing the human connection, is to not only become their engineer for the day but to become part of their team.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I have to admit, it’s the people. The clients and voiceover talents bring so much great energy to a room, it’s amazing! Regardless of whether we spend one hour together or 10 hours, everyone in the world is a creative at some level. Humans!

WHAT’S YOUR LEAST FAVORITE?
My least favorite is when the gear decides not to work. It is not often, but it does happen. I’m grateful for tech engineers!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I have to say my favorite time of day is 3pm. It marks the completion of a successful and productive morning, and it’s also the best time for an afternoon coffee.

WHY DID YOU CHOOSE THIS PROFESSION?
This profession chose me. I’ve been tapping on random surfaces and making “noise” since I was a baby. When I was around 10 years old, I was finding ways to play my older cousin Leo’s Casio k10 electronic mini-keyboard. It had dog barking and wind sounds and I could manipulate the pitch. Needless to say, I was a noisy, misunderstood child who appreciated the sounds around me.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
While working at AE Media, I mixed the Monster headphones behind-the-scenes video for Super Bowl LI. It was such a fun piece to work on, with Monster Products creator Noel Lee talking about creating the headphones. It was also fun working on set with Iggy Azalea, Aerosmith axeman Joe Perry, Internet personality Ricegum, Big Kenny, Yo Gotti, Jonathan Cheban and NSYNC’s Joey Fatone.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
All of them! When I’m asked to work on a project, I’m committed from start to finish. Regardless of what day-to-day events unfold, seeing a project to its successful completion is very gratifying.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Zoom H4n Pro, Rode VideoMic Me, and my iPhone.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I’d be a veterinarian.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Breathing helps me to de-stress tremendously, and allows me to better focus on the task at hand. Breathing calms the body and helps maintain focus on accuracy and speed with a smile. I don’t allow deadlines to drive any project. Keeping a calm room and working together as a team will always help a session to stay on course and drive it to its destination on cruise control. My approach to stressful situations is nothing more than recognizing them as a challenge and finding a solution for them.


Review: Krotos Reformer Pro for customizing sounds

By Robin Shore

Krotos has got to be one of the most innovative developers of sound design tools in the industry right now. That is a strong statement, but I stand by it. This Scottish company has become well known over the past few years for its Dehumaniser line of products, which bring a fresh approach to the creation of creature vocals and monster sounds. Recently, they released a new DAW plugin, Reformer Pro, which aims to give sound editors creative new ways of accessing and manipulating their sound effects.

Reformer Pro brings a procedural approach to working with sound effects libraries. According to their manual, “Reformer Pro uses an input to control and select segments of prerecorded audio automatically, and recompiles them in realtime, based on the characteristics of the incoming signal.” In layman’s terms this means you can “perform” sound effects from a library in realtime, using only a microphone and your voice.
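
To make that idea concrete, here is a toy sketch of input-driven resynthesis in Python. This is emphatically not Krotos' actual analysis or matching engine, just a naive amplitude follower that gates and scales frames of a prerecorded library sound based on an incoming signal; the frame size and threshold are invented for illustration.

```python
# Toy illustration of input-driven resynthesis (NOT Krotos' algorithm):
# follow the amplitude envelope of an input signal and use it to trigger
# and scale frames of a prerecorded library sound.
import numpy as np

def rms_envelope(signal, frame=512):
    """Per-frame RMS level of a mono signal."""
    pad = (-len(signal)) % frame
    frames = np.pad(signal, (0, pad)).reshape(-1, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def reform(input_sig, library_sig, frame=512, threshold=0.02):
    """Rebuild the library sound frame by frame, driven by the input's envelope.

    Assumes library_sig is at least one frame long.
    """
    env = rms_envelope(input_sig, frame)
    out = np.zeros(len(env) * frame)
    pos = 0
    for i, level in enumerate(env):
        if level <= threshold:
            continue                      # input is silent: output nothing
        if pos + frame > len(library_sig):
            pos = 0                       # loop the library material
        out[i * frame:(i + 1) * frame] = (
            library_sig[pos:pos + frame] * (level / (env.max() + 1e-9))
        )
        pos += frame
    return out
```

Reformer Pro's analysis is far more sophisticated, matching segments by the character of the incoming signal rather than just its level, but the sketch captures the basic "input drives output" relationship.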

It’s dead simple to use. A menu inside the plugin lets you choose from a list of libraries that have been pre-analyzed for use with Reformer Pro. Once you’ve loaded up the library you want, all that’s left to do is provide some sort of sonic input and let the magic happen. Whatever sound you put in will be instantly “reformed” into a new sound effect of your choosing. A number of libraries come bundled in when you buy Reformer Pro and additional libraries can be purchased from the Krotos website. The choice to include the Black Leopard library as a default when you first open the plugin was a very good one. There is just something so gratifying about breathing and grunting into a microphone and hearing a deep menacing growl come out the speakers instead of your own voice. It made me an immediate fan.

There are a few knobs and switches that let you tweak the response characteristics of Reformer Pro’s output, but for the most part you’ll be using sound to control things, and the amount of control you can get over the dynamics and rhythm of Reformer Pro’s output is impressive. While my immediate instinct was to drive Reformer Pro by vocalizing through a mic, any sound source can work well as an input. I also got great results by rubbing and tapping my fingers directly against the grill of a microphone and by dragging the mic across the surface of my desk.

Things get even more interesting if you start feeding pre-recorded audio into Reformer Pro. Using a Foley footstep track as the input for a library of cloth and leather sounds creates a realistic and perfectly synced rustle track. A howling wind used as the input for a library of creaks and rattles can add a nice layer of texture to a scene's ambience tracks. Pumping music through Reformer Pro can generate some really wacky sounds and is a great way to find inspiration and test out abstract sound design ideas.

If the only libraries you could use with Reformer Pro were the 100 or so available on the Krotos website, it would still be a fun and innovative tool, but its utility would be pretty limited. What makes Reformer Pro truly powerful is its analysis tool. This lets you create custom libraries out of sounds from your own collection. The possibilities here are literally endless. As long as a sound exists, it can be turned into a unique new library. To be sure, some sounds are better for this than others, but it doesn't take long at all to figure out what kinds of sounds will work best, and I was pleasantly surprised with how well most of the custom libraries I created turned out. This is a great way to breathe new life into an old sound effects collection.

Summing Up
Reformer Pro adds a sense of liveliness, creativity and, most importantly, fun to the often tedious task of syncing sound effects to picture. Anyone who spends their days working with sound effects would be doing themselves a disservice by not taking Reformer Pro for a test drive. I imagine most will be both impressed and excited by its novel approach to sound effects editing and design.


Robin Shore is an audio engineer at NYC's Silver Sound Studios.


Review: RTW’s Masterclass Mastering Tools

By David Hurd

RTW, based in Cologne, Germany, has been making broadcast-quality metering tools for audio professionals since 1965. Today, we will be looking at its Masterclass Mastering Tools and Loudness Tools plug-ins, which are awesome to have in your arsenal if you are mastering music or audio for broadcast.

These tools operate both as DAW plugins and in standalone mode. I tested them in Magix Sound Forge.

To start, I simply opened Sound Forge and added the RTW plug-in to the Plug-in Chain. RTW’s Masterclass Mastering Tools handle all of the loudness standards for broadcast so that your mix doesn’t get squished while giving you a detailed picture of the dynamics of your mix for use on the Web.

The Masterclass Mastering bundle includes a lot of loudness presets that will conform your audio levels to the standards of other countries. Since the listeners of most of my projects reside in the USA, I used one of the US standard presets.

The CALM Act preset uses a K-weighted metering scale with “True Peak,” “Momentary,” “Short” and “Integrated Total Level” views, as well as a meter that displays your loudness range. I was mostly concerned with the Integrated Level and True Peak displays. The integrated level shows you an average of the perceived loudness over the entire length of the program. It actually improves your dynamic range since it doesn’t count the extremely quiet and loud areas in your mix.

This comes in handy on projects like a home improvement show that I work on, where I have mostly dialog except for a loud power tool like an air nailer or chop saw.

As long as the whole program conforms to the average for US standards for Integrated Level, my dialog can be heard while still allowing the power tools to be loud. This allows me to have a robust mix and still keep it legal.
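
If you want to spot-check a bounced mix against that target outside of a dedicated metering plug-in, here is a minimal sketch using the open-source soundfile and pyloudnorm Python packages rather than RTW's tools; the file name is hypothetical, and the -24 LKFS figure reflects the ATSC A/85 target referenced by the CALM Act.

```python
# A rough spot-check of integrated program loudness (ITU-R BS.1770,
# K-weighted), assuming the open-source soundfile and pyloudnorm packages.
# "final_mix.wav" is a hypothetical file name.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")          # mix as float samples + sample rate
meter = pyln.Meter(rate)                       # BS.1770 meter (K-weighting + gating)
integrated = meter.integrated_loudness(data)   # average loudness over the whole program

print(f"Integrated loudness: {integrated:.1f} LKFS")
# ATSC A/85 (referenced by the CALM Act) targets -24 LKFS, typically +/- 2 dB.
if abs(integrated - (-24.0)) <= 2.0:
    print("Within the typical US broadcast tolerance.")
else:
    print("Outside tolerance -- adjust the overall level before delivery.")
```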

If you have ever tested the difference between Peak and RMS settings on a loudness plug-in, you know that your settings can make a huge difference in the perceived loudness of your audio signal. Usually, loud is good, but it depends on the hardware path that your program will have to take on its way to the listeners.

If your audio is going to be broadcast, your loud mix may be degraded when it is processed for broadcast by the station. If the broadcast output processing limiters think that your mix is too loud they will add compression or limiting of their own. Suddenly, you’ll learn too late that the station’s hardware has squished your wonderful loud and punchy mix into mush.

If your listeners are on the Web, rather than watching a TV broadcast, you will have less of a problem. Most of the Internet broadcast venues, like YouTube and iTunes, are using an automatic volume control that just adjusts the file volume instead of applying any compression or limiting to your audio. The net result is that your listeners will hear your mix as it was intended to be heard.

Digital clipping is an ugly thing, which no one wants any part of. To make sure that my program never clips, I also keep an eye on the True Peak meter. The True Peak meter looks for peaks in your audio program, and here's the cool part: it actually calculates where your audio wave would have peaked had there been headroom and uses that level. This allows me to easily set an overall level for the whole mix that doesn't include any clipping distortion.

As you probably know, the phase relationship between your audio channels is very important, so Masterclass Mastering Tools includes tools for this as well.

You get a Stereo Correlation Meter, a Surround Sound Analyzer and a RealTime Frequency Analyzer. To top it off, you also get a Vectorscope for monitoring the phase relationship between any pair of audio channels.

It's not like you couldn't add a bunch of metering plug-ins to your present system and get roughly the same results. But why would you want to? RTW's Masterclass Mastering Tools put everything that you need together in one easy-to-use package.

Summing Up
If you are on a budget, you may want to look into the Loudness Tools package, which is only $239. It contains everything the Mastering Tools package offers, except for the Surround Sound Analyzer, RealTime Analyzer and the Vectorscope. The full-blown Mastering Tools package is only $578.91, which gives you everything you need to comply with loudness standards all over the world.

For conforming world-class professional audio, you need to use professional tools, and Masterclass Mastering Tools will easily enable you to get the job done.


David Hurd owns David Hurd Productions in Tampa, Florida. He has been reviewing products for over 20 years.

Hobo Audio’s Chris Stangroom discusses Jonestown doc

New York-based audio post house Hobo provided audio post and sound design for A&E’s two-hour doc Jonestown: The Women Behind the Massacre. The film focuses on the four women in Jim Jones’ inner circle who helped plan the 1978 Jonestown Massacre, one of the largest murder-suicide events in modern history.

Hobo is no stranger to documentaries in the true crime genre, having recently worked on the acclaimed Netflix docs Voyeur and Amanda Knox, as well as multiple series on the Investigation Discovery channel, including Evil Lives Here and My Dirty Little Secret.

Senior engineer Chris Stangroom, who handled the project's complex audio mix, says that true crime documentaries, an incredibly popular genre in film and TV right now, uniquely challenge sound designers and audio engineers to think about sound differently.

Let’s find out more from Stangroom about this project.

What did you and Hobo contribute to the film overall?
We were quite involved early on. Hobo producer Mary Valentino and I first met with execs at production company Every Hill Films and discussed the project at length. They wanted to include some form of recreation footage in the series, so Mary worked with them to cast both the voiceover and on-camera talent for those segments.

We then brought the voice talent into our studios and did some voice comparisons to the original recordings of the actual women of Jonestown. The talent did a phenomenal job being truthful and accurate to the powerful women of Jonestown, which gave Every Hill a lot to work with to complete the edit and lock the cut.

Once the locked cut was delivered to us we began the full audio post process. Our senior sound designer Diego Jimenez went through the entire two-hour show and layered in sound design to give the re-creation and archival footage a more dramatic texture. He listened closely to the music that the producers had chosen and added in layered drones and synth sounds that made everything a bit more tension-filled. That elevated the entire soundtrack to a deeper and darker place in anticipation of the fateful ending.

I focused on the mix and finessed all of the music, archival dialogue, interviews, sound design and recreation recordings so that the arc of the show was always moving and always keeping the viewer interested in what was being told. There was a significant amount of audio restoration required for the archival, but in the end everything turned out crystal clear.

Was there a specific scene or part of the film that you found most challenging or creatively interesting?
We spent a couple of rounds on the recreation voiceovers. We tried keeping the voices full frequency to give them more of a voiceover feeling, but in the end we felt that a slight "futzing" was necessary to make the voiceovers sound like they were coming from a different sound source, like a telephone, old speaker or radio. For each character I did something a little different. I even took one of the main voiceovers and recorded it down to an old, used cassette tape. That gave the voice a nice saturation and a style of compression that you don't always get from emulation plugins like Audio Ease's Speakerphone and ones like it.

The goal was to make the voices feel like they could be the actual recordings of these Jonestown women telling their most intimate thoughts. In reality, I believe no such recordings actually exist; the voiceovers were based on written journal entries from the women at the time.

Speaking broadly, is there something unique about the true crime genre of filmmaking and audio/sound design? Does this genre need something specific from audio that you see or hear less of in other genres like comedy or drama?
The true crime genre is fascinating to me because it challenges us sound designers and audio engineers to realize that sometimes removing sounds is just as powerful as adding them. These stories, especially ones like Jonestown that are based on a real event, can be dark to their core. Simply hearing someone tell you about it can impact viewers deeply. Adding dramatic hits and heavy drones under the most chilling moments can actually take away from those moments.

In the Jonestown special, one of those moments occurs when the actual people involved with Jim Jones and his movement are talking about how the parents were asked to send their children to drink the cyanide-laced Kool-Aid first. I can't even imagine that feeling, so I felt it needed to stand almost on its own without any audio flourishes. The words alone hit you like nothing else could. Less is very much more in that scene.

What technology did you rely on for this project?
Avid Pro Tools 12 HD, Soundminer, iZotope RX 6 Advanced, some custom tools created at Hobo for the darker drones and sounds, Audio Ease's Speakerphone and my timeless old boom-box.


Behind the Title: PlushNYC partner/mixer Mike Levesque, Jr.

NAME: Michael Levesque, Jr.

COMPANY: PlushNYC

CAN YOU DESCRIBE YOUR COMPANY?
We provide audio post production.

WHAT’S YOUR JOB TITLE?
Partner/Mixer/Sound Designer

WHAT DOES THAT ENTAIL?
The foundation of it all for me is that I'm a mixer and a sound designer. I became a studio owner/partner organically because I didn't want to work for someone else. The core of my role is giving my clients what they want from an audio post perspective. The other parts of my job entail managing the staff, working through technical issues, empowering senior employees to excel in their careers and coaching junior staff when given the opportunity.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Every day I find myself being the janitor in many ways! I'm a huge advocate of leading by example, and I feel that no task is too mundane for any team member to take on. So I don't cast shade on picking up a mop or broom, and I also handle everything else above that. I'm part of a team, and everyone on the team participates.

During our latest facility remodel, I took a very hands-on approach. As a bit of a weekend carpenter, I naturally gravitate toward building things, and that was no different in the studio!

WHAT TOOLS DO YOU USE?
Avid Pro Tools. I’ve been operating on Pro Tools since 1997 and was one of the early adopters. Initially, I started out on analog ¼-inch tape and later moved to the digital editing system SSL ScreenSound. I’ve been using Pro Tools since its humble beginnings, and that is my tool of choice.

WHAT’S YOUR FAVORITE PART OF THE JOB?
For me, my favorite part of the job is definitely working with the clients. That's where I feel I am able to put my best self forward. In those shoes, I have the most experience. I enjoy the conversation that happens in the room, the challenges that I get from the variety of projects and working with the creatives to bring their sonic vision to life. Because of the amount of time I spend in the studio with my clients, one of the great results, besides the work, is wonderful long-term friendships. You get to meet a lot of different people and experience a lot of different walks of life, and that's incredibly rewarding for me.

WHAT’S YOUR LEAST FAVORITE?
We’ve been really lucky to have regular growth over the years, but the logistics of that can be challenging at times. Expansion in NYC is a constant uphill battle!

WHAT IS YOUR FAVORITE TIME OF THE DAY?
The train ride in. With no distractions, I’m able to get the most work done. It’s quiet and allows me to be able to plan my day out strategically while my clarity is at its peak. That way I can maximize my day and analyze and prioritize what I want to get done before the hustle and bustle of the day begins.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I weren’t a mixer/sound designer, I would likely be a general contractor or in a role where I was dealing with building and remodeling houses.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I started when I was 19 and I knew pretty quickly that this was the path for me. When I first got into it, I wanted to be a music producer. Being a novice musician, it was very natural for me.

Borgata

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I recently worked on a large-scale project for Frito-Lay, a project for ProFlowers and Shari's Berries for Valentine's Day, a spot for Massage Envy and a campaign for the Broadway show Rocktopia. I've also worked on a number of projects for Vevo, including pieces for The World According To… artist series, such as a recent one with Jaden Smith. I also recently worked on a spot with SapientRazorfish New York for Borgata Casino that goes on a colorful, dreamlike tour of the casino's app.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Back in the early 2000s, I mixed a DVD box set called Journey Into the Blues, a PBS film series from Martin Scorsese that won a Grammy for Best Historical Album and Best Album Notes.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
– My cell phone to keep me connected to every aspect of life.
– My Garmin GPS Watch to help me analytically look at where I’m performing in fitness.
– Pro Tools to keep the audio work running!

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I’m an avid triathlete, so personal wellness is a very big part of my life. Training daily is a really good stress reliever, and it allows me to focus both at work and at home with the kids. It’s my meditation time.


Sound Reality: Clint Eastwood’s The 15:17 to Paris

By Jennifer Walden

Films based on true stories are always popular, and Oscar-winning director Clint Eastwood has made his share, including Sully and American Sniper, as well as Flags of Our Fathers and Letters From Iwo Jima. While those stories were inspired by real people/events, Eastwood has ramped up the reality a notch with his latest. The 15:17 to Paris — about three men who thwarted a terrorist attack on a train from Amsterdam to Paris — features the actual trio of heroes, Spencer Stone, Anthony Sadler and Alek Skarlatos, as themselves.

Alan R. Murray

Working with non-actors on a feature film could be challenging, so before deciding to cast the heroes in the film, Eastwood did a test run. He called up Warner Bros. Sound's Oscar-winning supervising sound editor Alan R. Murray, with whom he had collaborated for over 40 years. Eastwood mysteriously asked Murray to bring his recording equipment to his office on the Warner's lot.

The next morning, Eastwood let Murray in on the plan. “Clint wanted to see how these guys would do on camera, so he asked me to walk them around the Warner Bros. lot. They were going to shoot them with a Steadicam, and Clint asked me to record the sound.”

They spent half a day touring the lot with Eastwood and introducing the heroes to people they met. “Walking around Warner Bros. with them, you could see that they were going to be able to pull this off. It was cool to get to talk to them, get to know them and have them re-tell what really happened.”

Capturing Reality
In keeping with his realistic vision for the film, Eastwood chose to shoot the train sequences on an actual Thalys train (the high-speed train on which the attack happened) instead of shooting on a soundstage. According to Murray, production sound mixer Steven Morrow spent five days on a Thalys train recording all the sounds relevant to the terrorist attack — everything from the doors to the train moving. He even captured the sound of an AK-47 jamming up.

“Everything had to be accurate down to the timing,” explains Murray. “With Steve Morrow’s recordings, we were able to recreate the actual events in sound, which was pretty awesome. We also had the recordings he’d done throughout Europe to help us recreate the atmosphere at some of the well-traveled places that these guys visited.”

The production team visited popular destinations in Rome, Venice, Amsterdam and France, following the path that the trio took on their trip through Europe. Eastwood captured the sights, and sound mixer Morrow captured the sounds using a Sound Devices 970 64-channel recorder in conjunction with a 5.1 microphone setup.

Murray relied on Morrow’s library of location ambiences and recordings on the train to build a track that was as close to what the heroes experienced as possible. “We had this great library of sounds to work with and you need that when you are telling a true story. I was thankful that we were able to get all of these sound effects and direction on building the sound.”

Though Stone, Sadler and Skarlatos weren’t on-hand during sound editorial to help Murray piece together the sound of what happened, they had discussed every detail with the film’s editor, Blu Murray, who relayed that information to the post sound team.

One of the important aspects of the soundtrack was the build-up of tension prior to the attack on the train. Murray notes that when Stone, Sadler and Skarlatos first get on the train, the effects are at a comfortable level, allowing the audience to sink into the dialogue. Then as the film gets closer to the terrorist attack, the sounds of the train grow darker, more rumbly and ominous. The sound team pushed the train sounds into darker territory through pitch shifting, low-end enhancement, EQ and other processing via iZotope RX 6 Advanced, iZotope Iris 2 and Avid’s Pro Subharmonic and ReVibe II reverb. They also layered in sound design elements. “This helped to create an underlying tension, so that by the time we are finally into the attack we were going full bore with the sound of the train. It was louder there and more intense and harsher,” says Murray.

This isn’t just a film about the thwarted terror attack on the train. It’s also a story of the heroes’ lives — how they became who they are and how they learned the military and medical skills that allowed them to fight the attacker and survive. The most imagined sections of the film were the early lives of the men, since capturing those real sounds would involve a time machine. Recreating the trials of childhood required more emotion than realism, and Murray had to find sounds that could take the audience back to their memories of growing up and playing with friends. “We had to find just the right sounds to spark the audience’s imagination and memories of their childhood to help them relax and get into the story. The sound takes a backseat to the story in the beginning and then it slowly takes over the soundtrack as we work our way through the film.”

The 15:17 to Paris was mixed on Dub Stage 10 at Warner Bros. in Dolby Atmos by re-recording mixers Dean A. Zupancic, John T. Reitz and Jason King (also co-supervising sound editor). When the mix was complete, Eastwood invited Stone, Sadler and Skarlatos to the dub stage to watch the film.

“Being able to glance over at these guys as they watched their lives unfold on screen and see their reactions was priceless,” describes Murray. “We could see their demeanors change as we got into the story about what happened on the train. They were reliving that experience, and you could see it on their faces. They were so thankful at the end.”

When working on a film that’s based on an actual event, Murray says recreating the event as accurately as possible is at the forefront of the job. “We had to do that on Sully, and now The 15:17 to Paris. The difference was actually getting to meet these people and to talk to them in person. You’re so in awe of these guys and what they did and meeting them makes the experience more than a job. There have been so many cool experiences on this movie but that one was priceless,” he concludes.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.


Ren Klyce: Mixing the score for Star Wars: The Last Jedi

By Jennifer Walden

There are space battles and epic music, foreign planets with unique and lively biomes, blasters, lightsabers, a universe at war and a force that connects it all. Over the course of eight “Episodes” and through numerous spin-off series and games, fans of Star Wars have become well acquainted with its characteristic sound.

Creating the world, sonically, is certainly a feat, but bringing those sounds together is a challenge of equal measure. Shaping the soundtrack involves sacrifice and egoless judgment calls that include making tough decisions in service of the story.

Ren Klyce

Skywalker Sound’s Ren Klyce was co-supervising sound editor, sound designer and a re-recording mixer on Star Wars: The Last Jedi. He not only helped to create the film’s sounds but he also had a hand in shaping the final soundtrack. As re-recording mixer of the music, Klyce got a new perspective on the film’s story.

He’s earned two Oscar nominations for his work on the Rian Johnson-directed The Last Jedi — one for sound editing and another for sound mixing. We reached out to Klyce to ask about his role as a re-recording mixer, what it was like to work with John Williams’ Oscar-nominated score, and what it took for the team to craft The Last Jedi’s soundtrack.

You had all the Skywalker-created effects, the score and all the dialog coming together for the final mix. How did you bring clarity to what could have been a chaotic soundtrack?
Mostly, it’s by forcing ourselves to potentially get rid of a lot of our hard work for the sake of the story. Getting rid of one’s work can be difficult for anyone, but it’s the necessary step in many instances. When you initially premix sound for a film, there are so many elements and often times we have everything prepared just in case they’re asked for. In the case of Star Wars, we didn’t know what director Rian Johnson might want and not want. So we had everything at the ready in either case.

On Star Wars, we ended up doing a blaze pass where we played everything from the beginning to the end of a reel all at once. We could clearly see that it was a colossal mess in one scene, but not so bad in another. It was like getting a 20-minute Cliff Notes of where we were going to need to spend some time.

Then it comes down to having really skilled mixers like David Parker (dialog) and Michael Semanick (sound effects), whose skill-sets include understanding storytelling. They understand what their role is about — which is making decisions as to what should stay, what should go, what should be loud or quiet, or what should be turned off completely. With sound effects, Michael is very good at this. He can quickly see the forest for the trees. He’ll say, “Let’s get rid of this. These elements can go, or the background sounds aren’t needed here.” And that’s how we started shaping the mix.

After doing the blaze pass, we will then go through and listen to just the music by itself. John Williams tells his story through music and by underscoring particular scenes. A lot of the process is learning what all the bits and pieces are and then weighing them up against each other. We might decide that the music in a particular scene tells the story best.

That is how we would start and then we worked together as a team to continue shaping the mix into a rough piece. Rian would then come in and give his thoughts to add more sound here or less music there, thus shaping the soundtrack.

After creating all of those effects, did you wish you were the one to mix them? Or, are you happy mixing music?
For me personally, it’s a really great experience to listen to and be responsible for the music because I’ve learned so much about the power of the music and what’s important. If it were the other way around, I might be a little more overly focused on the sound effects. I feel like we have a good dynamic. Michael Semanick has such great instincts. In fact, Rian described Michael as being an incredible storyteller, and he really is.

Mixing the music for me is a wonderful way to get a better scope of the entire soundtrack. By not touching the sound effects on the stage, those faders aren't so precious. Instead, the movie itself and the soundtrack take precedence over the bits and pieces that make it up.

What was the trickiest scene to mix in terms of music?
I think that would have to be the ski speeder sequence on the salt planet of Crait. That was very difficult because there was a lot of dodging and burning in the mix. In other words, Rian wanted to have loud music and then the music would have to dive down to expose a dialogue line, and then jump right back up again for more excitement and then dive down to make way for another dialogue line. Then boom, some sound effects would come in and the Millennium Falcon would zoom by. Then the Star Wars theme would take over and then it had to come down for the dialogue. So we worked that sequence quite a bit.

Our picture editor Bob Ducsay really guided us through the shape of that sequence. What was so great about having the picture editor present was that he was so intimate with the rhythm of the dialogue and his picture cutting. He knew where all of the story points were supposed to be, what motivated a look to the left and so on. Bob would say something like, “When we see Rose here, we really need to make sure we hear her musical theme, but then when we cut away, we need to hear the action.”

Were you working with John Williams’ music stems? Did you feel bad about pulling things out of his score? How do you dissect the score?
Working with John is obviously an incredible experience, and on this film I was lucky enough to work with Shawn Murphy as well, who is really one of my heroes and I’ve known him for years. He is the one who records the orchestra for John Williams and balances everything. Not only does he record the orchestra, but Shawn is a true collaborator with John as well. It’s incredible the way they communicate.

John is really mixing his own soundtrack when he’s up there on the podium conducting, and he’s making initial choices as to which instruments are louder than others — how loud the woodwinds play, how loud the brass plays, how loud the percussion is and how loud the strings are. He’s really shaping it. Between Williams and Murphy, they work on intonation, tuning and performance. They go through and record and then do pickups for this measure and that measure to make sure that everything is as good as it can be.

I actually got to witness John Williams do this incredible thing — which was during the recording of the score for the Crait scene. There was this one section where the brass was playing and John (who knows every single person’s name in that orchestra) called out to three people by name and said something like, “Mark, on bar 63, from beat two to beat six, can you not play please. I just want a little more clarity with two instruments instead of three. Thank you.” So they backed up and did a pick-up on that bar and that gentleman dropped out for those few beats. It was amazing.

In the end, it really is John who is creating that mix. Then, editorially, there would be moments where we had to change things. Ramiro Belgardt, another trusted confidant of John Williams, was our music editor. Once the music is recorded and premixed, it was up to Ramiro to keep it as close to what John intended throughout all of the picture changes.

A scene would be tightened or opened up, and the music isn’t going to be re-performed. That would be impossible to do, so it has to be edited or stretched or looped or truncated. Ramiro had the difficult job of making the music seem exactly how it was on the day it was performed. But in truth, if you look at his Pro Tools session, you’ll see all of these splices and edits that he did to make everything function properly.

Does a particular scene stick out?
There was one scene where Rey ignites the lightsaber for the very first time on Jedi Island, and there we did change the balance within the music. She’s on the cliff by the ocean and Luke is watching her as she’s swinging the lightsaber. Right when she ignites the lightsaber, her theme comes in, which is this beautiful piano melody. The problem was when they mixed the piano they didn’t have a really loud lightsaber sound going with it. We were really struggling because we couldn’t get that piano melody to speak right there. I asked Ramiro if there was any way to get that piano separately because I would love it if we could hear that theme come in just as strong as that lightsaber. Those are the types of little tiny things that we would do, but those are few and far between. For the most part, the score is how John and Shawn intended the mix to be.

It was also wonderful having Ramiro there as John’s spokesperson. He knew all of the subtle little sacred moments that Williams had written in the score. He pointed them out and I was able to push those and feature those.

Was Rian observing the sessions?
Rian attended every single scoring session and knew the music intricately. He was really excited for the music and wanted it to breathe. Rian’s knowledge of the music helped guide us.

Where did they perform and record the score?
This was recorded at the Barbra Streisand Scoring Stage on the Sony Pictures Studios lot in Culver City, California.

Are there any Easter eggs in terms of the score?
During the casino sequence there’s a beautiful piece of music that plays throughout, which is something like an homage that John Williams wrote, going back to the Cantina song that he wrote for the original Star Wars.

So, the Easter egg comes as the Fathiers are wreaking havoc in the casino and we cut to the inside of a confectionery shop. There’s an abrupt edit where all the music stops and you hear this sort of lounge piano that’s playing, like a piece of source music. That lounge piano is actually John Williams playing “The Long Goodbye,” which is the score that he wrote for the film The Long Goodbye. Rian is a huge fan of that score and he somehow managed to get John Williams to put that into the Star Wars film. It’s a wonderful little Easter egg.

John Williams is, in so many ways, the closest thing we have to Beethoven or Brahms in our time. When you’re in his presence — he’s 85 years old now — it’s humbling. He still writes all of his manuscripts by hand.

On that day that John sat down and played “The Long Goodbye” piano piece, Rian was so excited that he pulled out his iPhone and filmed the whole thing. John said, “Only for you, Rian, do I do this.” It was a very special moment.

The other part of the Easter egg is that John’s brother Donald Williams is a timpanist in the orchestra. So what’s cool is you hear John playing the piano and the very next sound is the timpani, played by his brother. So you have these two brothers and they do a miniature solo next to each other. So those are some of the fun little details.

John Williams earned an Oscar nomination for Best Original Music Score for Star Wars: The Last Jedi.
It’s an incredible score. One of the fortunate things that occurred on this film was that Rian and producer Ram Bergman wanted to give John Williams as much time as possible so they started him really early. I think he had a year to compose, which was great. He could take his time and really work diligently through each sequence. When you listen to just the score, you can hear all of the little subtle nuances that John composed.

For example, Rose stuns Finn and she’s dragging him on this little cart and they’re having this conversation. If you listen to just the music through there, the way that John has scored every single little emotional beat in that sequence is amazing. With all the effects and dialogue, you’re not really noticing the musical details. You hear two people arguing and then agreeing. They hate each other and now they like each other. But when you deconstruct it, you hear the music supporting each one of those moments. Williams does things like that throughout the entire film. Every single moment has all these subtle musical details. All the scenes with Snoke in his lair have these ominous, dark musical choir phrases for example. It’s phenomenal.

The moments where the choice was made to remove the score completely, was that a hard sell for the director? Or, was he game to let go of the score in those effects-driven moments?
No, it wasn’t too difficult. There was one scene that we did revert on though. It was on Crait, and Rian wanted to get rid of the whole big music sequence when Leia sees that the First Order is approaching and they have to shut the giant door. There was originally a piece of music, and that was when the crystal foxes were introduced. So we got rid of the music there. Then we watched the film and Rian asked us to put that music back.

A lot of the music edits were crafted in the offline edit, and those were done by music editor Joseph Bonn. Joe would craft those moments ahead of time and test them. So a lot of that was decided before it got to my hands.

But on the stage, we were still experimenting. Ramiro would suggest trying to lose a cue and we’d mute it from the sequence. That was a fun part of collaborating with everyone. It’s a live experiment. I would say that on this film most of the music editorial choices were decided before we got to the final mix. Joe Bonn spent months and months crafting the music guide, which helped immensely.

What is one audio tool that you could not have lived without on the mix? Why?
Without a doubt, it's our Avid Pro Tools editing software. All the departments (dialog, Foley, effects and music) were using Pro Tools. That is absolutely hands-down the one tool that we are addicted to. At this point, not having Pro Tools is like not having a hammer.

But you used a console for the final mix, yes?
Yes. Star Wars: The Last Jedi was not an in-the-box mix. We mixed it on a Neve DFC Gemini console in the traditional manner. It was not a live Pro Tools mix. We mixed it through the DFC console, which had its own EQ, dynamics processing, panning, reverb sends/returns, AUX sends/returns and LFE sends/returns.

The pre-pre-mixing was done in Pro Tools. Then, looking at the sound effects for example, that was shaped roughly in the offline edit room, and then that would go to the mix stage. Michael Semanick would pre-mix the effects through the Neve DFC in a traditional premixing format that we would record to 9.1 pre-dubs and objects. A similar process was done with the dialogue. So that was done with the console.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

CAS celebrates Dunkirk, GoT and more at 54th Awards show

The 54th CAS Awards took place this weekend at the Omni Los Angeles Hotel. The event, hosted by comedian Michael Kosta, was a celebration of people and projects that featured the best sound mixing, as well as what the Cinema Audio Society considers the top audio products from 2017.

Re-recording mixer Anna Behlmer was honored with the CAS Career Achievement Award. She is the first woman to receive this honor.

The following are all the winners from the evening: 

MOTION PICTURE – LIVE ACTION

Dunkirk

Production Mixer – Mark Weingarten, CAS

Re-recording Mixer – Gregg Landaker

Re-recording Mixer – Gary Rizzo, CAS

Scoring Mixer – Alan Meyerson, CAS

ADR Mixer – Thomas J. O’Connell

Foley Mixer – Scott Curtis

(The Dunkirk team is our main image.)

(Photo: Alex J. Berliner / ABImages)

The Coco team. 

MOTION PICTURE—ANIMATED

Coco

Original Dialogue Mixer – Vince Caro

Re-recording Mixer – Christopher Boyes

Re-recording Mixer – Michael Semanick

Scoring Mixer – Joel Iwataki

Foley Mixer – Blake Collins

MOTION PICTURE—DOCUMENTARY

Jane

Production Mixer – Lee Smith

Re-recording Mixer – David E. Fluhr, CAS

Re-recording Mixer – Warren Shaw

Scoring Mixer – Derek Lee

ADR Mixer – Chris Navarro, CAS

Foley Mixer – Ryan Maguire

TELEVISION MOVIE or MINI-SERIES

Black Mirror: USS Callister

Production Mixer – John Rodda, CAS

Re-recording Mixer – Tim Cavagin

Re-recording Mixer – Dafydd Archard

Re-recording Mixer – William Miller

ADR Mixer – Nick Baldock

Foley Mixer – Sophia Hardman

TELEVISION SERIES – 1 HOUR 

Game of Thrones: Beyond the Wall

Production Mixer – Ronan Hill, CAS

Production Mixer – Richard Dyer, CAS

Re-recording Mixer – Onnalee Blank, CAS

Re-recording Mixer – Mathew Waters, CAS

Foley Mixer – Brett Voss, CAS

Anna Behlmer with her CAS Career Achievement Award.

TELEVISION SERIES – 1/2 HOUR

Silicon Valley: Episode 9 “Hooli-Con”

Production Mixer – Benjamin A. Patrick, CAS

Re-recording Mixer – Elmo Ponsdomenech

Re-recording Mixer – Todd Beckett

TELEVISION NON-FICTION, VARIETY or MUSIC SERIES or SPECIALS

Rolling Stone: Stories from the Edge

Production Mixer – David Hocs

Production Mixer – Tom Tierney

Re-Recording Mixer – Tom Fleischman, CAS

OUTSTANDING PRODUCT – PRODUCTION

Sound Devices' MixPre-10T Recorder

OUTSTANDING PRODUCT – POST PRODUCTION

 iZotope’s RX 6 Advanced

STUDENT RECOGNITION AWARD

Xing Li

Chapman University – Orange, California


All Images: Alex J. Berliner/ABImages