
Category Archives: Audio

Review: Sonarworks Reference 4 Studio Edition for audio calibration

By David Hurd

What is a flat monitoring system, and how does it benefit those mixing audio? Well, this is something I’ll be addressing in this review of Sonarworks Reference 4 Studio Edition, but first some background…

Having a flat audio system simply means that whatever signal goes into the speakers comes out sonically pure, exactly as it was meant to. On a graph, it would look like a straight line from 20 cycles on the left to 20,000 cycles on the right.

Peaks or valleys in that line would indicate unwanted boosts or cuts at certain frequencies, and there is a reason you want a flat response from your monitoring system. If there are peaks in your speakers from the hundred-cycle mark on down, you get boominess. At 250 to 350 cycles you get mud. At around a thousand cycles you get a honkiness, as if you were holding your nose when you talked, and too much high end sounds brittle. You get the idea.

Frequency response before and after calibration

If your system is not flat, your monitors are lying to your ears and you can’t trust what you are hearing while you mix.

The problem arises when you try to play your audio on another system and hear the opposite of what you mixed. It works like this: If your speakers have too much bass then you cut some of the bass out of your mix to make it sound good to your ears. But remember, your monitors are lying, so when you play your mix on another system, the bass is missing.

To avoid this problem, professional recording studios calibrate their studio monitors so that they can mix in a flat-sounding environment. They know that what they hear is what they will get in their mixes, so they can happily mix with confidence.

Every room affects what you hear coming out of your speakers. The problem is that the studio monitors that were close to being flat at the factory are not flat once they get put into your room and start bouncing sound off of your desk and walls.

Sonarworks
This is where Sonarworks’ calibration mic and software come in. They give you a way to sonically flatten out your room by taking a speaker measurement, which produces a response chart based on the acoustics of your room. You apply this correction using the plugin in your favorite DAW, like Avid Pro Tools. You can also use the system-wide app to correct sound from any source on your computer.
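The idea behind the correction can be sketched in a few lines. This is not Sonarworks’ actual algorithm, just an illustration of the principle: measure the room’s deviation from flat, then apply the inverse as an EQ curve (all frequency and dB values below are hypothetical):

```python
import numpy as np

# Hypothetical measured room response, in dB relative to flat, at a few
# test frequencies. A real measurement sweeps the full 20 Hz - 20 kHz range.
freqs_hz = np.array([100, 315, 1000, 3150, 10000])
measured_db = np.array([4.0, 2.5, -1.0, 0.5, -3.0])   # peaks and dips

# The correction curve is the inverse of the measured deviation,
# so that room response plus correction sums to flat.
correction_db = -measured_db

# Per-band linear gain to apply in the monitor path.
gain_linear = 10.0 ** (correction_db / 20.0)
```

Applying these gains ahead of the speakers cancels the room’s boosts and cuts, which is conceptually what the Reference 4 plugin does, using a far denser measurement.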

So let’s imagine that you have installed the Sonarworks software, calibrated your speakers and mixed a music project. Since there are over 30,000 locations that use Sonarworks, you can send out your finished mix minus the Sonarworks plugin, since their room will have different acoustics and use a different calibration setting. Now, the mastering lab you use will be hearing your mix on their Sonarworks acoustically flat system… just as you mixed it.

I use a pair of Genelec studio monitors for both audio projects and audio-for-video work. They were expensive, but I have been using them for over 15 years with great results. If you don’t have studio monitors and just choose to mix on headphones, Sonarworks has you covered.

The software will calibrate your headphones.

There is an online product demo at sonarworks.com that lets you select which headphones you use. You can switch between bypass and the Sonarworks effect. Since they have already done the calibration process for your headphones, you can get a good idea of the advantages of mixing on a flat system. The headphone option is great for those who mix on a laptop or small home studio. It’s less money as well. I used my Sennheiser HD300 Pro series headphones.

I installed Sonarworks on my “Review” system, which is what I use to review audio and video production products. I then tested Sonarworks on both Pro Tools 12 music projects and video editing work, like sound design using a sound FX library and audio from my Blackmagic Ursa 4.6K camera footage. I was impressed by the difference that the Sonarworks software made. It opened up my mixes and made it easy to find any problems.

The Sonarworks Reference 4 Studio Edition takes your projects to a whole new level, and finally lets you hear your work in a sonically pure and flat listening environment.

My Review System
The Sonarworks Reference 4 Studio Edition was tested on
my Mac Pro 6-core trash can running macOS High Sierra with 64GB of RAM and 12GB of RAM on the D700 video cards; a Blackmagic UltraStudio 4K box; four G-Tech G-Speed 8TB RAID boxes with HighPoint RAID controllers; Lexar SD and CFast card readers; video output viewed on a Boland 32-inch broadcast monitor; a Mackie mixer; a Komplete Kontrol S25 keyboard; and a Focusrite Clarett 4Pre.

Software includes Apple FCPX, Blackmagic Resolve 15 and Pro Tools 12. Cameras used for testing are a Blackmagic 4K Production camera and the Ursa Mini 4.6K Pro, both powered by Blueshape batteries.


David Hurd is a production and post veteran who owns David Hurd Productions in Tampa. You can reach him at david@dhpvideo.com.

Adobe’s new Content-Aware fill in AE is magic, plus other CC updates

By Brady Betzel

NAB is just under a week away, and we are here to share some of Adobe’s latest Creative Cloud offerings. There are a few updates worth mentioning, such as a freeform project panel in Premiere Pro, AI-driven Auto Ducking for ambience in Audition and the addition of a Twitch extension for Character Animator. But, in my opinion, the Adobe After Effects updates are what this year’s release will be remembered for.


Content Aware: Here is the before and after. Our main image is the mask.

There is a new expression editor in After Effects, so we old pseudo-website designers can now feel at home with highlighting, line numbers and more. There are also performance improvements, such as faster project loading times and new deBayering support for Metal on macOS. But the first-prize ribbon goes to Content-Aware Fill for video, powered by Adobe Sensei, the company’s AI technology. It’s one of those voodoo features that will blow you away when you use it. If you have ever used Mocha Pro by Boris FX, then you have used a similar tool known as Object Removal. Essentially, you draw around the object you want to remove, such as a camera shadow or boom mic, hit the magic button, and the object is removed with a new background in its place. This will save users hours of manual work.

Freeform Project panel in Premiere.

Here are some details on other new features:

● Freeform Project panel in Premiere Pro — Arrange assets visually and save layouts for shot selects, production tasks, brainstorming story ideas and assembly edits.
● Rulers and Guides — Work with familiar Adobe design tools inside Premiere Pro, making it easier to align titling, animate effects and ensure consistency across deliverables.
● Punch and Roll in Audition — The new feature provides efficient production workflows in both Waveform and Multitrack for long-form recording, including voiceover and audiobook creators.
● Twitch live-streaming triggers with the Character Animator extension — Audiences can engage with characters in real time through on-the-fly costume changes, impromptu dance moves and signature gestures and poses — a new way to interact and even monetize, using Bits to trigger actions.
● Auto Ducking for ambient sound in Audition and Premiere Pro — Also powered by Adobe Sensei, Auto Ducking now allows for dynamic adjustments to ambient sounds against spoken dialogue. Keyframed adjustments can be manually fine-tuned to retain creative control over a mix.
● Adobe Stock — Now offers 10 million professional-quality, curated, royalty-free HD and 4K video clips and Motion Graphics templates from leading agencies and independent editors, for editorial content, establishing shots or filling gaps in a project.
● Premiere Rush — Introduced late last year, Rush offers a mobile-to-desktop workflow integrated with Premiere Pro for on-the-go editing and video assembly. Built-in camera functionality helps you shoot pro-quality video on your mobile devices.

These features are now available with the latest version of Creative Cloud.


After fire, SF audio house One Union is completely rebuilt

San Francisco-based audio post house One Union Recording Studios has completed a total rebuild of its facility. It features five all-new, state-of-the-art studios designed for mixing, sound design, ADR, voice recording and other sound work.

Each studio offers Avid/Euphonix digital mixing consoles, Avid MTRX interface systems, the latest Pro Tools software (Pro Tools Ultimate) and robust monitoring and signal-processing gear. All studios have dedicated, large voice recording booths, and one is certified for Dolby Atmos sound production. The facility’s infrastructure and central machine room are also all new.

One Union began its reconstruction in September 2017 in the aftermath of a fire that affected the entire facility. “Where needed, we took the building back to the studs,” says One Union president/owner John McGleenan. “We pulled out, removed and de-installed absolutely everything and started fresh. We then rebuilt the studios and rewired the whole facility. Each studio now has new consoles, speakers, furniture and wiring, and all are connected to new machine rooms. Every detail has been addressed and everything is in its proper place.”

During the 18 months of reconstruction, One Union carried on operations on a limited basis while maintaining its full staff. That included its team of engineers, Joaby Deal, Eben Carr, Andy Greenberg, Matt Wood and Isaac Olsen, who worked continuously and remain in place.

Reconstruction was managed by LA-based Yanchar Design & Consulting Group. All five studios feature Avid/Euphonix System 5 digital audio consoles, Pro Tools 2018 and Avid MTRX with Dante interface systems. Studio 4 adds Dolby Atmos capability with a full Atmos Production Suite as well as an Atmos RMU. Studio 5, the facility’s largest recording space, has two MTRX systems, with a total of more than 240 analog, MADI and Dante outputs (256 inputs), integrated with a nine-foot Avid/Euphonix console. It also features a 110-inch retractable projection screen in the control room and a 61-inch playback monitor in its dedicated voice booth. Among other things, the central machine room includes a 300TB LTO archiving system.

John McGleenan

The facility was also rebuilt with an eye toward avoiding production delays. “All of the equipment is enterprise-grade and everything is redundant,” McGleenan notes. “The studios are fed by a dual power supply and each is equipped with dual devices. If some piece of gear goes down, we have a redundant system in place to keep going. Additionally, all our critical equipment is hot-swappable. Should any component experience a catastrophic failure, it will be replaced by the manufacturer within 24 hours.”

McGleenan adds that redundancy extends to broadband connectivity. To avoid outages, the facility is served by two 1Gig fiber optic connections provided by different suppliers. WiFi is similarly available through duplicate services.

One Union Recording was founded by McGleenan, a former advertising agency executive, in 1994 and originally had just one sound studio. More studios were soon added as the company became a mainstay sound services provider to the region’s advertising industry.

In recent years, the company has extended its scope to include corporate and branded media, television, film and games, and built a client base that extends across the country and around the world.

Recent work includes commercials for Mountain Dew and carsharing company Turo, the television series Law and Order SVU and Grand Hotel, and the game The Grand Tour.


Wonder Park’s whimsical sound

By Jennifer Walden

The imagination of a young girl comes to life in the animated feature Wonder Park. A Paramount Animation and Nickelodeon Movies film, the story follows June (Brianna Denski) and her mother (Jennifer Garner) as they build a pretend amusement park in June’s bedroom. There are rides that defy the laws of physics — like a merry-go-round with flying fish that can leave the carousel and travel all over the park; a Zero-G-Land where there’s no gravity; a waterfall made of firework sparks; a super tube slide made from bendy straws; and other wild creations.

But when her mom gets sick and leaves for treatment, June’s creative spark fizzles out. She disassembles the park and packs it away. Then one day as June heads home through the woods, she stumbles onto a real-life Wonderland that mirrors her make-believe one. Only this Wonderland is falling apart and being consumed by the mysterious Darkness. June and the park’s mascots work together to restore Wonderland by stopping the Darkness.

Even in its more tense moments — like June and her friend Banky (Oev Michael Urbas) riding a homemade rollercoaster cart down their suburban street and narrowly missing an oncoming truck — the sound isn’t intense. The cart doesn’t feel rickety or squeaky, like it’s about to fly apart (even though the brake handle breaks off). There’s a sense of danger that could result in non-serious injury, but never death. And that’s perfect for the target audience of this film — young children. Wonder Park is meant to be sweet and fun, and supervising sound editor John Marquis captures that masterfully.

Marquis and his core team — sound effects editor Diego Perez, sound assistant Emma Present, dialogue/ADR editor Michele Perrone and Foley supervisor Jonathan Klein — handled sound design, sound editorial and pre-mixing at E² Sound on the Warner Bros. lot in Burbank.

Marquis was first introduced to Wonder Park back in 2013, but the team’s real work began in January 2017. The animated sequences steadily poured in for 17 months. “We had a really long time to work the track, to get some of the conceptual sounds nailed down before going into the first preview,” says Marquis. “We had two previews with temp score and then two more with mockups of composer Steven Price’s score. It was a real luxury to spend that much time massaging and nitpicking the track before getting to the dub stage. This made the final mix fun; we were having fun mixing and not making editorial choices at that point.”

The final mix was done at Technicolor’s Stage 1, with re-recording mixers Anna Behlmer (effects) and Terry Porter (dialogue/music).

Here, Marquis shares insight on how he created the whimsical sound of Wonder Park, from the adorable yet naughty chimpanzombies to the tonally pleasing, rhythmic and resonant bendy-straw slide.

The film’s sound never felt intense even in tense situations. That approach felt perfectly in-tune with the sensibilities of the intended audience. Was that the initial overall goal for this soundtrack?
When something was intense, we didn’t want it to be painful. We were always in search of having a nice round sound that had the power to communicate the energy and intensity we wanted without having the pointy, sharp edges that hurt. This film is geared toward a younger audience and we were supersensitive about that right out of the gate, even without having that direction from anyone outside of ourselves.

I have two kids — one 10 and one five. Often, they will pop by the studio and listen to what we’re doing. I can get a pretty good gauge right off the bat if we’re doing something that is not resonating with them. Then, we can redirect more toward the intended audience. I pretty much previewed every scene for my kids, and they were having a blast. I bounced ideas off of them so the soundtrack evolved easily toward their demographic. They were at the forefront of our thoughts when designing these sequences.

John Marquis recording the bendy straw sound.

There were numerous opportunities to create fun, unique palettes of sound for this park and these rides that stem from this little girl’s imagination. If I’m a little kid and I’m playing with a toy fish and I’m zipping it around the room, what kind of sound am I making? What kind of sounds am I imagining it making?

This film reminded me of being a kid and playing with toys. So, for the merry-go-round sequence with the flying fish, I asked my kids, “What do you think that would sound like?” And they’d make some sound with their mouths and start playing, and I’d just riff off of that.

I loved the sound of the bendy-straw slide — from the sound of it being built, to the characters traveling through it, and even the reverb on their voices while inside of it. How did you create those sounds?
Before that scene came to us, before we talked about it or saw it, I had the perfect sound for it. We had been having a lot of rain, so I needed to get an expandable gutter for my house. It starts at about one foot long but can be pulled out to three feet if needed. It works exactly like a bendy straw, but it’s huge. So when I saw the scene in the film, I knew I had the exact, perfect sound for it.

We mic’d it with a Sanken CO-100k, inside and out. We pulled the tube apart and closed it, and got this great, ribbed, rippling, zuzzy sound. We also captured impulse responses inside the tube so we could create custom reverbs. It was one of those magical things that I didn’t even have to think about or go hunting for. This one just fell in my lap. It’s a really fun and tonal sound. It’s musical and has a rhythm to it. You can really play with the Doppler effect to create interesting pass-bys for the building sequences.
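Impulse responses like these are the raw material for convolution reverb: convolving a dry signal with the recorded IR places it acoustically “inside” the measured space. A minimal sketch of the principle, using a synthetic decaying-noise IR in place of the actual tube recording:

```python
import numpy as np

sr = 48000
rng = np.random.default_rng(0)

# Synthetic impulse response: exponentially decaying noise, standing in
# for an IR actually recorded inside the expandable tube.
t = np.arange(sr // 2) / sr                  # 0.5 s reverb tail
ir = rng.standard_normal(len(t)) * np.exp(-8.0 * t)
ir /= np.max(np.abs(ir))

# Dry source: a single click, so the reverberated output is the IR itself.
dry = np.zeros(sr // 4)
dry[0] = 1.0

# Convolution reverb: every input sample triggers a scaled copy of the IR.
wet = np.convolve(dry, ir)
```

Real convolution reverb plugins do this same operation in overlapping FFT blocks so it runs in real time, but the math is identical.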

Another fun sequence for sound was inside Zero-G-Land. How did you come up with those sounds?
That’s a huge, open space. Our first instinct was to go with a very reverberant sound to showcase the size of the space and the fact that June is in there alone. But as we discussed it further, we came to the conclusion that since this is a zero-gravity environment there would be no air for the sound waves to travel through. So, we decided to treat it like space. That approach really worked out because in the scene preceding Zero-G-Land, June is walking through a chasm and there are huge echoes. So the contrast between that and the airless Zero-G-Land worked out perfectly.

Inside Zero-G-Land’s tight, quiet environment we have the sound of these giant balls that June is bouncing off of. They look like balloons so we had balloon bounce sounds, but it wasn’t whimsical enough. It was too predictable. This is a land of imagination, so we were looking for another sound to use.

John Marquis with the Wind Wand.

My friend has an instrument called a Wind Wand, which combines the sound of a didgeridoo with a bullroarer. The Wind Wand is about three feet long and has a gigantic rubber band that goes around it. When you swing the instrument around in the air, the rubber band vibrates. It almost sounds like an organic lightsaber-like sound. I had been playing around with that for another film and thought the rubbery, resonant quality of its vibration could work for these gigantic ball bounces. So we recorded it and applied mild processing to get some shape and movement. It was just a bit of pitching and Doppler effect; we didn’t have to do much to it because the actual sound itself was so expressive and rich and it just fell into place. Once we heard it in the cut, we knew it was the right sound.

How did you approach the sound of the chimpanzombies? Again, this could have been an intense sound, but it was cute! How did you create their sounds?
The key was to make them sound exciting and mischievous instead of scary. It can’t ever feel like June is going to die. There is danger. There is confusion. But there is never a fear of death.

The chimpanzombies are actually these Wonder Chimp dolls gone crazy. So they were all supposed to have the same voice — this pre-recorded voice that is in every Wonder Chimp doll. So, you see this horde of chimpanzombies coming toward you and you think something really threatening is happening, but then you start to hear them and all they are saying is, “Welcome to Wonderland!” or something sweet like that. It’s all in a big cacophony of high-pitched voices, and they have these little squeaky dog-toy feet. So there’s this contrast between what you anticipate will be scary and what turns out to be super-cute.

The big challenge was that they were all supposed to sound the same, just this one pre-recorded voice that’s in each one of these dolls. I was afraid it was going to sound like a wall of noise that was indecipherable, and a big, looping mess. There’s a software program that I ended up using a lot on this film. It’s called Sound Particles. It’s really cool, and I’ve been finding a reason to use it on every movie now. So, I loaded this pre-recorded snippet from the Wonder Chimp doll into Sound Particles and then changed different parameters — I wanted a crowd of 20 dolls that could vary in pitch by 10%, and they’re going to walk by at a medium pace.

Changing the parameters will change the results, and I was able to make a mass of different voices based off of this one, individual audio file. It worked perfectly once I came up with a recipe for it. What would have taken me a day or more — to individually pitch a copy of a file numerous times to create a crowd of unique voices — only took me a few minutes. I just did a bunch of varieties of that, with smaller groups and bigger groups, and I did that with their feet as well. The key was that the chimpanzombies were all one thing, but in the context of music and dialogue, you had to be able to discern the individuality of each little one.

There’s a fun scene where the chimpanzombies are using little pickaxes and hitting the underside of the glass walkway that June and the Wonderland mascots are traversing. How did you make that?
That was for Fireworks Falls, one of the big scenes that we had waited a long time for. We weren’t really sure how it was going to look — whether the waterfall would be more fiery or more sparkly.

The little pickaxes were a blacksmith’s hammer beating an iron bar on an anvil. Those “tink” sounds were pitched up and resonated just a little bit to give it a glass feel. The key with that, again, was to try to make it cute. You have these mischievous chimpanzombies all pecking away at the glass. It had to sound like they were being naughty, not malicious.

When the glass shatters and they all fall down, we had these little pinball bell sounds that would pop in from time to time. It kept the scene feeling mildly whimsical as the debris is falling and hitting the patio umbrellas and tables in the background.

Here again, it could have sounded intense as June makes her escape using the patio umbrella, but it didn’t. It sounded fun!
I grew up in the Midwest and every July 4th we would shoot off fireworks on the front lawn and on the sidewalk. I was thinking about the fun fireworks that I remembered, like sparklers, and these whistling spinning fireworks that had a fun acceleration sound. Then there were bottle rockets. When I hear those sounds now I remember the fun time of being a kid on July 4th.

So, for the Fireworks Falls, I wanted to use those sounds as the fun details, the top notes that poke through. There are rocket crackles and whistles that support the low-end, powerful portion of the rapids. As June is escaping, she’s saying, “This is so amazing! This is so cool!” She’s a kid exploring something really amazing and realizing that this is all of the stuff that she was imagining and is now experiencing for real. We didn’t want her to feel scared, but rather to be overtaken by the joy and awesomeness of what she’s experiencing.

The most ominous element in the park is the Darkness. What was your approach to the sound in there?
It needed to be something that was more mysterious than ominous. It’s only scary because of the unknown factor. At first, we played around with storm elements, but that wasn’t right. So I played around with a recording of my son as a baby; he’s cooing. I pitched that sound down a ton, so it has this natural, organic, undulating, human spine to it. I mixed in some dissonant windchimes. I have a nice set of windchimes at home and I arranged them so they wouldn’t hit in a pleasing way. I pitched those way down, and it added a magical/mystical feel to the sound. It’s almost enticing June to come and check it out.

The Darkness is the thing that is eating up June’s creativity and imagination. It’s eating up all of the joy. It’s never entirely clear what it is, though. When June gets inside the Darkness, everything is silent. The things in there get picked up and rearranged and dropped. As with the Zero-G-Land moment, we bring everything to a head. We go from a full-spectrum sound, with the score and June yelling and the sound design, to a quiet moment where we only hear her breathing. From there, it opens up and blossoms with the pulse of her creativity returning and her memories returning. It’s a very subjective moment that’s hard to put into words.

When June whispers into Peanut’s ear, his marker comes alive again. How did you make the sound of Peanut’s marker? And how did you give it movement?
The sound was primarily this ceramic, water-based bird whistle, which gave it a whimsical element. It reminded me of a show I watched when I was little where the host would draw with his marker and it would make a little whistling, musical sound. So anytime the marker was moving, it would make this really fun sound. This marker needed to feel like something you would pick up and wave around. It had to feel like something that would inspire you to draw and create with it.

To get the movement, it was partially performance based and partially done by adding in a Doppler effect. I used variations in the Waves Doppler plug-in. This was another sound that I also used Sound Particles for, but I didn’t use it to generate particles. I used it to generate varied movement for a single source, to give it shape and speed.
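Tools like Waves Doppler compute this kind of pitch trajectory from simple physics: as a source passes the listener, the radial component of its velocity shifts the perceived pitch up on approach and down on departure. A sketch of the pitch-factor curve for a hypothetical pass-by (all speeds and distances are made up for illustration):

```python
import numpy as np

c = 343.0   # speed of sound, m/s
v = 15.0    # hypothetical source speed, m/s
d = 2.0     # closest-approach distance to the listener, m

t = np.linspace(-2.0, 2.0, 9)                # seconds around closest approach
x = v * t                                     # source position along its path
v_radial = v * x / np.sqrt(x**2 + d**2)       # velocity away from the listener
pitch_factor = c / (c + v_radial)             # >1 approaching, <1 receding
```

Multiplying a sound’s playback rate by this curve over time produces the familiar swooping pass-by.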

Did you use Sound Particles on the paper flying sound too? That one also had a lot of movement, with lots of twists and turns.
No, that one was an old-fashioned fader move. What gave that sound its interesting quality — this soft, almost ethereal and inviting feel — was the practical element we used to create it. It was a piece of paper bag that was super-crumpled up, so it felt fluttery and soft. Then, every time it moved, it had a vocal whoosh element that gave it personality. So once we got that practical element nailed down, the key was to accentuate it with a little wispy whoosh to make it feel like the paper was whispering to June, saying, “Come follow me!”

Wonder Park is in theaters now. Go see it!


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.


Providing audio post for Three Identical Strangers documentary

By Randi Altman

It is a story that those of us who grew up in the New York area know well. Back in the ‘80s, triplet brothers separated at birth were reunited, after two of them attended the same college within a year of each other — with one being confused for the other. A classmate figured it out and their story was made public. Enter brother number three.

It’s an unbelievable story that at the time was considered to be a heart-warming tale of lost brothers — David Kellman, Bobby Shafran and Eddy Galland — who found each other again at the age of 19. But heart-warming turned heart-breaking when it was discovered that the triplets were part of a calculated, psychological research project. Each brother was intentionally placed in different levels of economic households, where they were “checked in on” over the years.

L-R: Chad Orororo, Nas Parkash and Kim Tae Hak

Last year, British director Tim Wardle told the story in his BAFTA-nominated documentary, Three Identical Strangers, produced by Raw TV. For audio post production, Wardle called on dialogue editor and re-recording mixer Nas Parkash, sound effects editor Kim Tae Hak and Foley and archive FX editor Chad Orororo, all from London-based post house Molinare. The trio was nominated for an MPSE Award earlier this year for their work on the film.

We recently reached out to the team to ask about workflow on this compelling work.

When you first started on Three Identical Strangers, did you realize then how powerful a film it was going to be?
Nas Parkash: It was after watching the film for the first time that we realized it was going to be a seminal film. It’s an outrageous story — the likes of which we hadn’t come across before. We as a team have been fortunate to work on a broad range of documentary features, but this one has stuck out, probably because of its unpredictability and sheer number of plot twists.

Chad Orororo: I agree. It was quite an exciting moment to watch an offline cut and instantly know that it was going to be a phenomenal project. The great thing about having this reaction was that the pressure was fused with excitement, which is always a win-win, especially as the storytelling had so much charisma.

Kim Tae Hak: When the doc was first mentioned, I had no idea about their story, but soon after viewing the first cut I realized that this would be a great film. The documentary is based on an unbelievable true story — it evokes a lot of mixed feelings, and I wanted to ensure that every single sound effect element reflected those emotions and actions.

How early did you get involved in the project?
Tae Hak: I got to start working on the SFX as soon as the picture was locked and available.

Parkash: We had a spotting session a week before we started, with director Tim Wardle and editor Michael Harte, where we watched the film in sections and made notes. This helped us determine what the emotion in each scene should be, which is important when you’ve come to a film cold. They had been living with the edit, evolving it over months, so it was important to get up to speed with their vision as quickly as possible.

Courtesy of Newsday

Documentary audio often comes from many different sources and in varying types of quality. Can you talk about that and the challenges related to that?
Parkash: The audio quality was pretty good. The interview recordings were clean and on mic. We had two mics for every interview, but I went with the boom every time, as it sounded nicer, albeit more ambient, but with atmospheres that bedded in nicely.

Even the archive clips, such as those from the Phil Donahue Show, were good. Funnily enough, the more recent the archive material, the worse it tends to sound. Stuff from the 1970s on the whole seems to have been preserved quite well, whereas stuff from the 1990s can be terrible.

Any technical challenges on the project?
Parkash: The biggest challenge for me was mixing commercial music with vocals underneath interview dialogue. It had to be kept at a loud enough level to retain impact in the cinema, but low enough that it didn’t fight with the interview dialogue. The biggest deliberation was to what degree we should use sound effects in the drama recon — do we fully fill, or just go with dialogue and music? In the end it was judged on a case-by-case basis.

How was Foley used within the doc?
Orororo: The Foley covered everything that you see on screen — all of the footsteps, clothing movement, shaving and breathing. You name it, it’s in there somewhere. My job was to add a level of subtle actuality, especially during the drama reconstruction scenes.

These scenes took quite a bit of work to get right because they had to match the mood of the narration. For example, the coin spillage during the telephone box scene required a specific number of coins on the right surface. It took numerous takes to get right because you can’t exactly control how objects fall, and the texture also changes depending on the height from which you drop an object. So generally, there’s a lot more to consider when recording Foley than people may assume.

Unfortunately there were a few scenes where Foley was completely dropped (mainly on the archive material), but this is something that usually happens. The shape of the overall mix always takes precedence over the individual elements that contribute to it. Teamwork makes the dream work, as they say, and I really think that showed in the final result.

Parkash: We did have sync sound recorded on location, but we decided it would be better to re-record at a higher fidelity. Some of it was noisy or didn’t sound cinematic enough. When it’s cleaner sound, you can make more of it.

What about the sound effects? Did you use a library or your own?
Parkash: Kim has his own extensive sound effects library. We also have our own personal ones, plus Molinare’s. Anything we can’t find, we’ll go out and record. Kim has a Zoom recorder and his breathing has been featured on many films now (laughs).

Tae Hak: I mainly used my own SFX library. I’m always building up my own FX library, which I can apply instantly to any type of motion picture. I then tweak by applying various software plugins, such as Pitch ‘n Time Pro, Altiverb and many more.

As a brief example of how I completed the sound design for the opening title, the first thing I did was look specifically for realistic heartbeats of six-month-old infants. After successfully collecting some natural heartbeats, I blended them with other synthetic elements, varying the pitch slightly between them (for the three babies) and applying various effects, such as chorus and reverb, so each heartbeat has a slightly different texture. It was a bit tricky to make them distinct but still the same (like identical triplets).

The three heartbeats were panned across the front three speakers in order to create as much separation and clarity as possible. Once I was happy with the heartbeats as a foundation, I added other sound elements, such as underwater textures, ambiguous liquids and other sound design elements. It was important for this sequence to build in a dramatic way, starting as mono and gradually filling the 5.1 space before a hard cut into the interview room.

Can you talk about working with director Tim Wardle?
Tae Hak: Tim was fantastic and very supportive throughout the project. As an FX editor, I had less face-to-face time with him than Nas did, but we had a spotting session together before the first day of work, and we also talked about our sound design approach over the phone, especially for the opening title and the aforementioned sound of the triplets’ heartbeats.

Orororo: Tim was great to work with! He’s a very open-minded director who also trusts in the talent that he’s working with, which can be hard to come by especially on a project as important as Three Identical Strangers.

Parkash: Tim and editor Michael Harte were wonderful to work with. The best aspect of working in this industry is the people you meet and the friendships you make. They are both cinephiles, who cited numerous other films and directors in order to guide us through the process — “this scene should feel like this scene from such and such movie.” But they were also open to our suggestions and willing to experiment with different approaches. It felt like a collaboration, and I remember having fun in those intense few weeks.

How much stock footage versus new footage was shot?
Parkash: It was all pretty much new — the sit-down interviews, drama recon and the GVs (b-roll). The archive material was obviously cleared from various sources. The home movie footage came mute, so we rebuilt the sound, but upon review decided it was better left mute. Whether or not you hear sound tends to change the audience’s perspective of the material. Without it, it feels more like you’re looking upon the subjects, as opposed to being with them.

What kind of work went into the new interviews?
Parkash: EQ, volume automation, de-essing, noise reduction, de-reverb, reverb, mouth de-click — iZotope RX 6 software, basically. We’ve become quite reliant upon this software for unifying our source material into something consistent and for achieving a quality good enough to stand up in the cinema, at theatrical level.

What are you all working on now at Molinare?
Tae Hak: I am working on a project about football (soccer for Americans) as the FX editor. I can’t name it yet, but it’s a six-episode series for Amazon Prime. I’m thoroughly enjoying the project, as I am a football fan myself. It’s filmed across the world, including Russia where the World Cup was held last year. The story really captures the beautiful game, how it’s more than just a game, and its impact on so much of the global culture.

Parkash: We’ve just finished a series for Discovery ID, about spouses who kill each other. I’m also working on the football series that Kim mentioned for Amazon Prime. So, murder and footy! We are lucky to work on such varied, high-quality films, one after another.

Orororo: Surprisingly, I’m also working on this football series (smiles). I work with Nas fairly often and we’ve just finished up on an evocative, feature-length TV documentary that follows personal accounts of people who have survived massacre attacks in the US.

There are revered creatives everywhere you look at Molinare, and I’m lucky enough to be working with one of the sound greats — Greg Gettens — on a new HBO/Channel 4 documentary. It’s quite secret so I can’t say much more, but keep your eyes peeled.

Main Image: Courtesy of Neon


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 


Hulu’s PEN15: Helping middle school sound funny

By Jennifer Walden

Being 13 years old once was hard enough, but the creators of the Hulu series PEN15 have relived that uncomfortable age — braces and all — a second time for the sake of comedy.

James Parnell

Maya Erskine and Anna Konkle might be in their 30s, but they convincingly play two 13-year-old BFFs journeying through the perils of 7th grade. And although they’re acting alongside actual teenagers, it’s not Strangers With Candy grown-up-interfacing-with-kids kind of weird — not even during the “first kiss” scene. The awkwardness comes from just being 13 and having those first-time experiences of drinking, boyfriends, awkward school dances and even masturbation (the topic of focus in Episode 3). Erskine, Konkle and co-showrunner Sam Zvibleman hilariously capture all of that cringe-worthy coming-of-age content in their writing on PEN15.

The show is set in the early 2000s, a time when dial-up Internet and the Sony Discman were prevailing technology. The location is a nondescript American suburb that is relatable in many ways to many people, and that is one way the show transports the audience back to their early teenage years.

At Monkeyland Audio in Glendale, California, supervising sound editor/re-recording mixer James Parnell and his team worked hard to capture that almost indescribable nostalgic essence that the showrunners were seeking. Monkeyland was responsible for all post sound editorial, including Foley, ADR, final 5.1 surround mixing and stereo fold-downs for each episode. Let’s find out more from Parnell.

I happened to watch Episode 3, “Ojichan,” with my mom, and it was completely awkward. It epitomized the growing pains of the teenage years, which is what this series captures so well.
Well, that was an awkward one to mix as well. Maya (Erskine) and Anna (Konkle) were in the room with me while I was mixing that scene! Obviously, the show is an adult comedy that targets adults. We all ended up joking about it during the mix — especially about the added Foley sound that was recorded.

The beauty of this show is that it has the power to take something that might otherwise be thought of as, perhaps, inappropriate for some, and humanize it. All of us went through that period in our lives and I would agree that the show captures that awkwardness in a perfect and humorous way.

The writers/showrunners also star. I’m sure they were equally involved with post as well as other aspects of the show. How were they planning to use sound to help tell their story?
Parnell: In terms of the post schedule, I was brought on very early. We were doing spotting sessions to pre-locked picture, for Episode 1 and Episode 3. From the get-go, they were very specific about how they wanted the show to sound. I got the vibe that they were going for that Degrassi/Afterschool Special feeling but kept in the year 2000 — not the original Degrassi of the early ‘90s.

For example, they had a very specific goal for what they wanted the school to sound like. The first episode takes place on the first day of 7th grade and they asked if we could pitch down the school bell so it sounds clunky and have the hallways sound sparse. When class lets out, the hallway should sound almost like a relief.

Their direction was more complex than “see a school hallway, hear a school hallway.” They were really specific about what the school should sound like and specific about what the girls’ neighborhoods should sound like — Anna’s family in the show is a bit better off than Maya’s family so the neighborhood ambiences reflect that.

What were some specific sounds you used to capture the feel of middle school?
The show is set in 2000, and they had some great visual cues as throwbacks. In Episode 4 “Solo,” Maya is getting ready for the school band recital and she and her dad (a musician who’s on tour) are sending faxes back and forth about it. So we have the sound of the fax machine.

We tried to support the amazing recordings captured by the production sound team on-set by adding in sounds that lent a non-specific feeling to the school. This doesn’t feel like a California middle school; it could be anywhere in America. The same goes for the ambiences. We weren’t using California-specific birds. We wanted it to sound like Any Town, USA so the audience could connect with the location and the story. Our backgrounds editor G.W. Pope did a great job of crafting those.

For Episode 7, “AIM,” the whole thing revolves around Maya and Anna’s AOL instant messenger experience. The creatives on the show were dreading that episode because all they were working with was temp sound. They had sourced recordings of the AOL sound pack to drop into the video edit. The concern was how some of the Hulu execs would take it because the episode mostly takes place in front of a computer, while they’re on AOL chatting with boys and with each other. Adding that final layer of sound and then processing on the mix stage helped what might otherwise feel like a slow edit and a lagging episode.

The dial-up sounds, AOL sign-on sounds and instant messenger sounds we pulled from library. This series had a limited budget, so we didn’t do any field recordings. I’ve done custom recordings for higher-budget shows, but on this one we were supplementing the production sound. Our sound designer on PEN15 was Xiang Li, and she did a great job of building these scenes. We had discussions with the showrunners about how exactly the fax and dial-up should sound. This sound design is a mixture of Xiang Li’s sound effects editorial with composer Leo Birenberg’s score. The song is a needle drop called “Computer Dunk.” Pretty cool, eh?

For Episode 4, “Solo,” was the middle school band captured on-set? Or was that recorded in the studio?
There was production sound recorded but, ultimately, the music was recorded by composer Leo Birenberg. In the production recording, the middle school kids were actually playing their parts, but it sounded worse than you’d expect. The song wasn’t rehearsed, so it was like they were playing random notes. That sounded a bit too bad. We had to hit the right level of “bad” to sell the scene. So Leo played individual instruments to make it sound like a class orchestra.

In terms of sound design, that was one of the more challenging episodes. I got a day to mix the show before the execs came in for playback. When I mixed it initially, I mixed in all of Leo’s stems — the brass, percussion, woodwinds, etc.

Anna pointed out that the band needed to sound worse than how Leo played it, more detuned and discordant. We ended up stripping out instruments and pitching down parts, like the flute part, so that it was in the wrong key. It made the whole scene feel much more like an awkward band recital.

During the performance, Maya improvises a timpani solo. In real life, Maya’s father is a professional percussionist here in LA, and he hooked us up with a timpani player who re-recorded the part, matching note-for-note what she played on-screen. It sounded really good, but it was Maya’s unique performance that made the scene work. So even though we went to the extremes of hiring a professional percussionist to re-perform the part, we ultimately decided to stick with the production sound.

What were some of the unique challenges you had in terms of sound on PEN15?
On Episode 3, “Ojichan,” Maya is going through this process of “self-discovery” and she’s disconnecting her friendship from Anna. There’s a scene where they’re watching a video in class and Anna asks Maya why she missed the carpool that morning. That scene was like mixing a movie inside a show. I had to mix the movie, then futz it, and then mix that into the scene. On the close-ups of the 4:3 old-school television the movie would be less futzed, more like you’re in the movie, and then we’d cut back to the girls and I’d have to futz it. Leo composed 20 different stems of music for that wildlife video. Mixing that scene was challenging.

Then there was the Wild Things film in Episode 8, “Wild Things.” A group of kids go over to Anna’s boyfriend’s house to watch Wild Things on VHS. That movie was risqué, so if you had an older brother or older cousin, then you might have watched it in middle school. That was a challenging scene because everyone had a different idea of how the den should sound, how futzed the movie dialogue should be, how much of the actual film sound we could use, etc. There was a specific feel to the “movie night” that the producers were looking for. The key was mixing the movie into the background and bringing the awkward flirting/conversation between the kids forward.

Did you have a favorite scene for sound?
The season finale is one of the bigger episodes. There’s a middle school dance and so there’s a huge amount of needle-drop songs. Mixing the music was a lot of fun because it was a throwback to my youth.

Also, the “AIM” episode ended up being fun to work on, even though everyone was initially worried about it. I think the sound really brought that episode to life. From a general standpoint, sound lent itself to that episode more than any other aspect did.

The first episode was fun too. It was the first day of school and we see the girls getting ready at their own houses, getting into the carpool and then taking their first step, literally, together toward the school. There we dropped out all the sound and just played the Lit song “My Own Worst Enemy,” which gets cut off abruptly when someone on rollerblades hops in front of the girls. Then they talk about one of their classmates who grew boobs over the summer, and we have a big sound design moment when that girl turns around and then there’s another needle-drop track “Get the Job Done.” It’s all specifically choreographed with sound.

The series music supervisor Tiffany Anders did an amazing job of picking out the big needle-drops. We have a Nelly song for the middle school dance, we have songs from The Cranberries, and Lit and a whole bunch more that fit the era and age group. Tiffany did fantastic work and was great to work with.

What were some helpful sound tools that you used on PEN15?
Our dialogue editor’s a huge fan of iZotope’s RX 7, as am I. Here at Monkeyland, we’re on the beta-testing team for iZotope. The products they make are amazing. It’s kind of like voodoo. You can take a noisy recording and, with a click of a button, pretty much erase the issues and save the dialogue. Within that tool palette, there are a lot of ways to fix a whole host of problems.

I’m a huge fan of Audio Ease’s Altiverb, which came in handy on the season finale. In order to create the feeling of being in a middle school gymnasium, I ran the needle-drop songs through Altiverb. There are some amazing reverb settings that let you adjust the levels going specifically to the surround speakers. You can literally EQ the reverb — take out 200Hz, for example, which would otherwise make the music sound boomier than desired.

The lobby at Monkeyland is a large cinder-block room with super-high ceilings. It has acoustics similar to a middle school gymnasium. So, we captured a few impulse responses (IR), and I used those in Altiverb on a few lines of dialogue during the school dance in the season finale. I used that on a few of the songs as well. Like, when Anna’s boyfriend walks into the gym, there was supposed to be a Limp Bizkit needle-drop but that ended up getting scrapped at the last minute. So, instead there’s a heavy-metal song and the IR of our lobby really lent itself to that song.
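The impulse-response workflow Parnell describes — capture a room’s IR, then convolve it with a dry signal — can be sketched in a few lines. This is an illustrative convolution-reverb example, not Altiverb’s actual processing; the `wet_mix` parameter and the synthetic decaying-noise IR are assumptions made purely for the demo.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_reverb(dry, ir, wet_mix=0.35):
    """Apply a captured room impulse response to a dry signal.

    dry: 1-D float array, the dry recording
    ir:  1-D float array, the room's impulse response
    wet_mix: blend of reverberant vs. dry signal (0..1), an assumed parameter
    """
    # Convolution reverb: the output is the dry signal "played through" the room
    wet = fftconvolve(dry, ir)[: len(dry)]
    # Normalize the wet signal so the reverb tail doesn't clip
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet / peak
    return (1 - wet_mix) * dry + wet_mix * wet

# Toy example: a single click through a synthetic decaying-noise "gymnasium" IR
sr = 48_000
dry = np.zeros(sr)
dry[0] = 1.0                                            # one impulsive click
rng = np.random.default_rng(0)
decay = np.exp(-np.linspace(0, 8, sr // 2))
ir = rng.standard_normal(sr // 2) * decay               # half-second noisy tail
out = convolve_reverb(dry, ir)
print(out.shape)                                        # (48000,)
```

In practice the IR would come from a recording of the actual space (like Monkeyland’s cinder-block lobby) rather than shaped noise, and the EQ Parnell mentions would be applied to the wet signal before blending.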

The show was a simple single-card Pro Tools HD mix — 256 tracks max. I’m a huge fan of Avid and the new Pro Tools 2018. My dialogue chain features Avid’s Channel Strip; McDSP SA-2; Waves De-Esser (typically bypassed unless being used); McDSP 6030 Leveling Amplifier, which does a great job at handling extremely loud dialogue and preventing it from distorting, as well as Waves WNS.

On staff, we have a fabulous ADR mixer named Jacob Ortiz. The showrunners were really hesitant to record ADR, and whenever we could salvage the production dialogue we did. But when we needed ADR, Jacob did a great job of cueing that, and he uses the Sound In Sync toolkit, including EdiCue, EdiLoad and EdiMarker.

Any final thoughts you’d like to share on PEN15?
Yes! Watch the show. I think it’s awesome, but again, I’m biased. It’s unique and really funny. The showrunners Maya, Anna and Sam Zvibleman — who also directed four episodes — are three incredibly talented people. I was honored to be able to work with them and hope to be a part of anything they work on next.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney


Spider-Man Into the Spider-Verse: sound editors talk ‘magical realism’

By Randi Altman

Sony Pictures’ Spider-Man: Into the Spider-Verse isn’t your ordinary Spider-Man movie, from its story to its look to its sound. The filmmakers took a familiar story and turned it on its head a bit, letting audiences know that Spider-Man isn’t just one guy wearing that mask… or even a guy, or even from this dimension.

The film focuses on Miles Morales, a teenager from Brooklyn, struggling with all things teenager while also dealing with the added stress of being Spider-Man.

Geoff Rubay

Audio played a huge role in this story, and we recently reached out to Sony supervising sound editors Geoff Rubay and Curt Schulkey to dig in a bit deeper. The duo recently won an MPSE Award for Outstanding Achievement in Sound Editing — Feature Animation… industry peers recognizing the work that went into creating the sound for this stylized world.

Let’s find out more about the sound process on Spider-Man: Into the Spider-Verse, which won the Academy Award for Best Animated Feature.

What do you think is the most important element of this film’s sound?
Curt Schulkey: It is fun, it is bold, it has style and it has attitude. It has energy. We did everything we could to make the sound as stylistic and surprising as the imagery. We did that while supporting the story and the characters, which are the real stars of the movie. We had the opportunity to work with some incredibly creative filmmakers, and we did our best to surprise and delight them. We hope that audiences like it too.

Geoff Rubay: For me, it’s the fusion of the real and the fantastic. Right from the beginning, the filmmakers made it clear that it should feel believable — grounded — while staying true to the fantastic nature of the visuals. We did not hold back on the fantastic side, but we paid close attention to the story and made sure we were supporting that and not just making things sound awesome.

Curt Schulkey

How early did your team get involved in the film?
Rubay: We started on an SFX pre-design phase in late February for about a month. The goal was to create sounds for the picture editors and animators to work with. We ended up doing what amounted to a temp mix of some key sequences. The “Super Collider” was explored. We only worked on the first sequence for the collider, but the idea was that material could be recycled by the picture department and used in the early temp mixes until the final visuals arrived.

Justin Thompson, the production designer, was very generous with his time and resources early on. He spent several hours showing us work-in-progress visuals and concept art so that we would know where visuals would eventually wind up. This was invaluable. We were able to work on sounds long before we saw them as part of the movie. In the temp mix phase, we had to hold back or de-emphasize some of those elements because they were not relevant yet. In some cases, the sounds would not work at all with the storyboards or un-lit animation that was in the cut. Only when the final lit animation showed up would those sounds make sense.

Schulkey: I came onto the film in May, about 9.5 months before completion. We were neck-deep in following changes throughout our work. We were involved in the creation of sounds from the very first studio screening, through previews and temp mixes, right on to the end of the final mix. This sometimes gave us the opportunity to create sounds in advance of the images, or to influence the development of imagery and timing. Because they were so involved in building the movie, the directors did not always have time to discuss their needs with us, so we would speculate on what kinds of sounds they might need or want for events that they were molding visually. As Geoff said, the time that Justin Thompson spent with us was invaluable. The temp-mix process often gave us the opportunity to audition creations for the directors/producers.

What sort of direction did you receive from the directors?
Schulkey: Luckily, because of our previous experiences with producers Chris Miller and Phil Lord and editor Bob Fisher, we had a pretty good idea of their tastes and sensitivities, so our first attempts were usually pointed in the right direction. The three directors — Bob Persichetti, Peter Ramsey and Rodney Rothman — also provided input, so we were rich with direction.

As with all movies, we had hundreds of side discussions with the directors along the way about details, nuances, timing and so on. I think that the most important overall direction we got from the filmmakers was related to the dynamic arc of the movie. They wanted the soundtrack to be forceful but not so much that it hurt. They wanted it to breathe — quiet in some spots, loud in others, and they wanted it to be fun. So, we had to figure out what “fun” sounds like.

Rubay: This will sound strange, but we never did a spotting session for the movie. We just started our work and got feedback when we showed sequences or did temp mixes. Phil called when we started the pre-design phase and gave us general notes about tone and direction. He made it clear he did not want us to hold back, but he wanted to keep the film grounded. He explained the importance of the various levels of technology of different characters.

Peni Parker is from the 31st century, so her robot sidekick needed to sound futuristic. Scorpion is a pile of rusty metal. Prowler’s tech is appropriated from his surroundings and possibly with some help from Kingpin. We discussed the sound of previous Spider-Man movies and asked how much we needed to stay true to established sounds from those films. The direction was “not at all unless it makes sense.” We endeavored to make Peter Parker’s web-slings sound like the previous films. After that, we just “went for it.”

How was working on a film like this different than working on something live-action? Did it allow you more leeway?
Schulkey: In a live-action film, most or all of the imagery is shot before we begin working. Many aspects of the sound are already stamped in. On this film, we had a lot more creative involvement. At the start, a good percentage of the movie was still in storyboards, so if we expanded or contracted the timing of an event, the animators might adjust their work to fit the sounds. As the visual elements developed, we began creating layers of sound to support them.

For me, one of the best parts of an animated film’s soundtrack is that no sounds are imposed by the real world, as is often the case in live-action productions. In live-action, if a dialogue scene is shot on a city street in Brooklyn, there is a lot of uninteresting traffic noise built into the dialogue recordings.

Very few directors (or actors) want to lose the spontaneity of the original performance by re-recording dialogue in a studio, so we tweak, clean and process the dialogue to lessen unwanted noise, sometimes diminishing the quality of the recording. We sometimes make compromises with sound effects and music to support a not-so-ideal dialogue track. In an animated film, we don’t have that problem. Sound effects and ambiences can shine without getting in the way. This film has very quiet moments, which feel very natural and organic. That’s a pleasure to have in the movie.

Rubay: Everything Curt said! You have quite a bit of freedom because there is no “production track.” On the flip side, every sound that is added is just that — added. You have to be aware of that; more is not always better.

Spider-Man: Into the Spider-Verse is an animated film with a unique visual style. At times, we played the effects straight, as we might in a live-action picture, to ground it. Other times, we stripped away any notion of “reality.” Sometimes we would do both in the same scene as we cut from one angle to the next. Chris and Phil have always welcomed hard right angle turns, snapping sounds off on a cut or mixing and matching styles in close proximity. They like to do whatever supports the story and directs the audience. Often, we use sound to make your eye notice one thing or look away from another. Other times, we expand the frame, adding sounds outside of what you can see to further enhance the image.

There are many characters in the film. Can you talk about helping to create personality for each?
Rubay: There was a lot of effort made to differentiate the various “spider people” from each other. Whether it was through their web-slings or inherent technology, we were directed to give as much individual personality as possible to each character. Since that directive was baked in from the beginning, every department had it in mind. We paid attention to every visual cue. For example, Miles wears a particular pair of shoes — Nike Air Jordan 1s. My son, Alec Rubay, who was the Foley supervisor, is a real sneakerhead. He tracked down those shoes — very rare — and we recorded them, capturing every sound we could. When you hear Miles’s shoes squeak, you are hearing the correct shoes. Those shoes sound very specific. We applied that mentality wherever possible.

Schulkey: We took the opportunity to exploit the fact that some characters are from different universes in making their sound signatures different from one another. Spider-Ham is from a cartoon universe, so many of the sounds he makes are cartoon sounds. Sniffles, punches, swishes and other movements have a cartoon sensibility. Peni Parker, the anime character, is in a different sync than the rest of the cast, and her voice is somewhat more dynamic. We experimented with making Spider-Man Noir sound like he was coming from an old movie soundtrack, but that became obnoxious, so we abandoned the idea. Nicolas Cage was quite capable of conveying that aspect of the character without our help.

Because we wanted to ground characters in the real world, a lot of effort was put into attaching their voices to their images. Sync, of course, is essential, as is breathing. Characters in most animated films don’t do much breathing, but we added a lot of breaths, efforts and little stutters to add realism. That had to be done carefully. We had a very special, stellar cast and we wanted to maintain the integrity of their performances. I think that effort shows up nicely in some of the more intimate, personal scenes.

To create the unique look of this movie, the production sometimes chose to animate sections of the film “on twos.” That means that mouth movements change every other frame rather than every frame, so sync can be harder than usual to pinpoint. I worked closely with director Bob Persichetti to get dialogue to look in its best sync, doing careful reviews and special adjustments, as needed, on all dialogue in the film.

The main character in this Spider-Man thread is Miles Morales, a brilliant African-American/Puerto Rican Brooklyn teenager trying to find his way in his multi-cultural world. We took special care to show his Puerto Rican background with added Spanish-language dialogue from Miles and his friends. That required dialect coaches, special record sessions and thorough review.

The group ADR required a different level of care than most films. We created voices for crowds, onlookers and the normal “general” wash of voices for New York City. Our group voices covered many very specific characters and were cast in detail by our group leader, Caitlin McKenna. We took a very realistic approach to crowd activity. It had to be subtler than most live-action films to capture the dry nonchalance of Miles Morales’s New York.

Would you describe the sounds as realistic? Fantastical? Both?
Schulkey: The sounds are fantastically realistic. For my money, I don’t want the sounds in my movie to seem fantastical. I see our job as creating an illusion for the audience — the illusion that they are hearing what they are seeing, and that what they are seeing is real. This is an animated film, where nothing is actually real, but has its own reality. The sounds need to live in the world we are watching. When something fantastical happens in the movie’s reality, we had to support that illusion, and we sometimes got to do fun stuff. I don’t mean to say that all sounds had to be realistic.

For example, we surmised that an actual supercollider firing up below the streets of Brooklyn would sound like 10,000 computer fans. Instead, we put together sounds that supported the story we were telling. The ambiences were as authentic as possible, including subway tunnels, Brooklyn streets and school hallways. Foley here was a great tool for giving reality to animated images. When Miles walks into the cemetery at night, you hear his footsteps on snow and sidewalk, gentle cloth movements and other subtle touches. This adds to a sense that he’s a real kid in a real city. Other times, we were in the Spider-Verse and our imagination drove the work.

Rubay: The visuals led the way, and we did whatever they required. There are some crazy things in this movie. The supercollider is based on a real thing so we started there. But supercolliders don’t act as they are depicted in the movie. In reality, they sound like a giant industrial site, fans and motors, but nothing so distinct or dramatic, so we followed the visuals.

Spider-sense is a kind of magical realism that supports, informs, warns, communicates, etc. There is no realistic basis for any of that, so we went with directions about feelings. Some early words of direction were “warm,” “organic,” “internal” and “magical.” Because there are no real sounds for those words, we created sounds that conveyed the emotional feelings of those ideas to the audience.

The portals that allow spider-people to move between dimensions are another example. Again, there was no real-world event to link to. We saw the visuals and assumed it should be a pretty big deal, real “force of nature” stuff. However, it couldn’t simply be big. We took big, energetic sounds and glued them onto what we were seeing. Of course, sometimes people are talking at the same time, so we shifted the frequency center of the moment to clear for the dialog. As music is almost always playing, we had to look for opportunities within the spaces it left.

 

Can you talk about working on the action scenes?
Rubay: For me, when the action starts, the sound had to be really specific. There is dialogue for sure. The music is often active. The guiding philosophy for me at that point is not "Keep adding until there is nothing left to add"; rather, it's, "We're done when there is nothing left to strip out." Busy action scene? Broom the backgrounds away. Usually, we don't even cut BGs in a busy action scene, but if we do, we do so with a skeptical eye. How can we make it more specific? Also, I keep a keen eye on "scale." One wrong, small detail sound, no matter how cool or interesting, will get the broom if it throws off the scale. Sometimes everything might be sounding nice and big; impressive but not loud, just big, and then some small detail creeps in and spoils it. I am constantly looking out for that.

The “Prowler Chase” scene was a fun exploration. There are times where the music takes over and runs; we pull out every sound we can. Other times, the sound effects blow over everything. It is a matter of give and take. There is a truck/car/prowler motorcycle crash that turns into a suspended slo-mo moment. We had to decide which sounds to play where and when. Its stripped-down nature made it among my favorite moments in the picture.

Can you talk about the multiple universes?
Rubay: The multiverse presented many challenges. It usually manifested itself as a portal or something we move between. The portals were energetic and powerful. The multiverse “place” was something that we used as a quiet place. We used it to provide contrast because, usually, there was big action on either side.

A side effect of the multiple universes interacting was a buildup or collision/overlap. When universes collide or overlap, matter from each tries to occupy the same space. Visually, this created some very interesting moments. We referred to the multi-colored prismatic-looking stuff as “Picasso” moments. The supporting sound needed to convey “force of nature” and “hard edges,” but couldn’t be explosive, loud or gritty. Ultimately, it was a very multi-layered sound event: some “real” sounds teamed with extreme synthesis. I think it worked.

Schulkey: Some of the characters in the movie are transported from another dimension into the dimension of the movie, but their bodies rebel, and from time to time their molecules try to jump back to their native dimension, causing “glitching.” We developed, with a combination of plug-ins, blending, editing and panning, a signature sound that served to signal glitching throughout the movie, and was individually applied for each iteration.

Which scenes stand out in your mind as the most challenging, audio-wise?
Rubay: There is a very quiet moment between Miles and his dad when dad is on one side of the door and Miles is on the other. It’s a very quiet, tender one-way conversation. When a movie gets that quiet every sound counts. Every detail has to be perfect.

What about the Dolby Atmos mix? How did that enhance the film? Can you give a scene or two as an example?
Schulkey: This film was a native Atmos mix, meaning that the primary final mix was directly in the Atmos format, as opposed to making a 7.1 mix and then going back to re-mix sections using the Atmos format.

The native Atmos mix allowed us a lot more sonic room in the theater. This is an extremely complex and busy mix, heavily driven by dialogue. By moving the score out into the side and surround speakers — away from the center speaker — we were able to make the dialogue clearer and still have a very rich and exciting score. Sonic movement is much more effective in this format. When we panned sounds around the room, it felt more natural than in other formats.

Rubay: Atmos is fantastic. Being able to move sounds vertically creates so much space, so much interest, that might otherwise not be there. Also, the level and frequency response of the surround channels makes a huge difference.

You guys used Avid Pro Tools for editing. Can you mention some other favorite tools you employed on this film?
Schulkey: The Delete key and the Undo key.

Rubay: Pitch 'n' Time, Envy, reverbs by Exponential Audio, and recording rigs and microphones of all sorts.

What haven’t I asked that’s important?
Our crew! Just in case anyone thinks this can be done by two people, it can’t.
– re-recording mixers Michael Semanick and Tony Lamberti
– sound designer John Pospisil
– dialogue editors James Morioka and Matthew Taylor
– sound effects editors David Werntz, Kip Smedley, Andy Sisul, Chris Aud, Donald Flick, Benjamin Cook, Mike Reagan and Ando Johnson
– Foley mixer Randy Singer
– Foley artists Gary Hecker, Michael Broomberg and Rick Owens


Warner Bros. Studio Facilities ups Kim Waugh, hires Duke Lim

Warner Bros. Studio Facilities in Burbank has promoted long-time post exec Kim Waugh to executive VP, worldwide post production services. It has also hired Duke Lim to serve as VP, post production sound at the studio.

In his new role, Waugh will report to Jon Gilbert, president, worldwide studio facilities, Warner Bros., and will continue to lead the post creative services senior management team, overseeing all marketing, sales, talent management, facilities and technical operations across all locations. Waugh has been instrumental in expanding the business beyond the studio's Burbank-based headquarters, first to Soho, London, in 2012 with the acquisition of Warner Bros. De Lane Lea and then to New York with the 2015 acquisition of WB Sound in Manhattan.

The group supports all creative post production elements, ranging from sound mixing, editing and ADR to color correction and restoration, for Warner Bros.’ clients worldwide. Waugh’s creative services group features a vast array of award-winning artists, including the Oscar-nominated sound mixing team behind Warner Bros. Pictures’ A Star is Born.

Reporting to Waugh, Lim is responsible for overseeing the post sound creative services supporting Warner Bros.’ film and television clients on a day-to-day basis across the studio’s three facilities.

Duke Lim

Says Gilbert, “At all three of our locations, Kim has attracted award-winning creative talent who are sought out for Warner Bros. and third-party projects alike. Bringing in seasoned post executive Duke Lim will create an even stronger senior management team under Kim.”

Waugh most recently served as SVP, worldwide post production services, Warner Bros. Studio Facilities, a post he had held since 2007. In this position, he managed the post services senior management team, overseeing all talent, sales, facilities and operations on a day-to-day basis, with a primary focus on servicing all Warner Bros. Studios’ post sound clients. Prior to joining Warner Bros. as VP, post production services in 2004, Waugh worked at Ascent Media Creative Sound Services, where he served as SVP of sales and marketing, managing sales and marketing for the company’s worldwide divisional facilities. Prior to that, he spent more than 10 years at Soundelux, holding posts as president of Soundelux Vine Street Studios and Signet Soundelux Studios.

Lim has worked in the post production industry for more than 25 years, most recently at the Sony Sound Department, which he joined in 2014 to help expand the creative team and the total number of mix stages. He began his career at Skywalker Sound South, serving in various positions until its acquisition by Todd-AO in 1995, when Lim moved into operations and began managing the mixing facilities for both its Hollywood location and the Todd-AO West studio in Santa Monica.


CAS and MPSE honor audio post pros and their work

By Mel Lambert

With a BAFTA win and high promise for the upcoming Oscar Awards, the sound team behind Bohemian Rhapsody secured a clean sweep at both the Cinema Audio Society (CAS) and Motion Picture Sound Editors (MPSE) ceremonies here in Los Angeles last weekend.

Paul Massey

The 55th CAS Awards also honored sound mixer Lee Orloff with a Cinema Audio Society Career Achievement Award, while director Steven Spielberg received its Cinema Audio Society Filmmaker Award. And at the MPSE Awards, director Antoine Fuqua accepted the 2019 Filmmaker Award, while supervising sound editor Stephen H. Flick secured the MPSE Career Achievement honor.

Re-recording mixer Paul Massey — accepting the CAS Award for Outstanding Sound Mixing Motion Picture-Live Action on behalf of his fellow dubbing mixers Tim Cavagin and Niv Adiri, together with production mixer John Casali — thanked Bohemian Rhapsody’s co-executive producer and band members Roger Taylor and Brian May for “trusting me to mix the music of Queen.”

The film topped a nominee field that also included A Quiet Place, A Star is Born, Black Panther and First Man; for several years, the CAS winner in the feature-film category has also secured the Oscar for sound mixing.

Isle of Dogs secured a CAS Award in the animation category, which also included Incredibles 2, Ralph Breaks the Internet, Spider-Man: Into the Spider-Verse and The Grinch. The sound-mixing team included original dialogue mixer Darrin Moore and re-recording mixers Christopher Scarabosio and Wayne Lemmer, together with scoring mixers Xavier Forcioli and Simon Rhodes and Foley mixer Peter Persaud.

Free Solo won a documentary award for production mixer Jim Hurst, re-recording mixers Tom Fleischman and Ric Schnupp, together with scoring mixer Tyson Lozensky, ADR mixer David Boulton and Foley mixer Joana Niza Braga.

Finally, American Crime Story: The Assassination of Gianni Versace ("The Man Who Would Be Vogue"), The Marvelous Mrs. Maisel ("Vote For Kennedy, Vote For Kennedy") and Anthony Bourdain: Parts Unknown ("Bhutan") won CAS Awards in various broadcast sound categories.

Steven Spielberg and Bradley Cooper

The CAS Filmmaker Award was presented to Steven Spielberg by fellow director Bradley Cooper. This followed tributes from regular members of Spielberg’s sound team, including production sound mixer Ron Judkins plus re-recording mixers Andy Nelson and Gary Rydstrom, who quipped: “We spent so much money on Jurassic Park that [Steven] had to shoot Schindler’s List in black & white!”

“Through your talent, [sound editors and mixers] allow the audience to see with their ears,” Spielberg acknowledged, while stressing the full sonic and visual impact of a theatrical experience. “There’s nothing like a big, dark theater,” he stated. He added that he still believes that movie theaters are the best environment in which to fully enjoy his cinematic creations.

Upon receiving his Career Achievement Award from sound mixer Chris Noyes and director Dean Parisot, production sound mixer Lee Orloff acknowledged the close collaboration that needs to exist between members of the filmmaking team. “It is so much more powerful than the strongest wall you could build,” he stated, recalling a 35-year career that spans nearly 80 films.

Lee Orloff

Outgoing CAS president Mark Ulano presented the President’s Award to leading Foley mixer MaryJo Lang, while the CAS Student Award went to Anna Wozniewicz of Chapman University. Finalists included Maria Cecilia Ayalde Angel of Pontificia Universidad Javeriana, Bogota, Allison Ng of USC, Bo Pang of Chapman University and Kaylee Yacono of Savannah College of Art and Design.

Finally, the CAS Outstanding Product Awards went to Dan Dugan Sound Design for its Dugan Automixing in the Sound Devices 633 compact mixer, and to iZotope for its RX 7 audio repair software.

The CAS Awards ceremony was hosted by comedian Michael Kosta.

 

Motion Picture Sound Editors Awards

During the 66th Annual Golden Reels, outstanding achievement in sound editing awards were presented in 23 categories, encompassing feature films, long- and short-form television, animation, documentaries, games, special venue and other media.

The Americans, Atlanta, The Marvelous Mrs. Maisel and Westworld figured prominently within the honored TV series.

Following introductions by re-recording mixer Steve Pederson and supervising sound editor Mandell Winter, director/producer Michael Mann presented the 2019 MPSE Filmmaker Award to Antoine Fuqua, while Academy Award-winning supervising sound editor Ben Wilkins presented the MPSE Career Achievement Award to fellow supervising sound editor Stephen H. Flick, who also serves as professor of cinematic arts at the University of Southern California.

Antoine Fuqua

“We celebrate the creation of entertainment content that people will enjoy for generations to come,” MPSE president Tom McCarthy stated in his opening address. “As new formats appear and new ways to distribute content are developed, we need to continue to excel at our craft and provide exceptional soundtracks that heighten the audience experience.”

As Pederson stressed during his introduction to the MPSE Filmmaker Award, Fuqua “counts on sound to complete his vision [as a filmmaker].” “His films are stylish and visceral,” added Winter, who along with Pederson has worked on a dozen films for the director during the past two decades.

“He is a director who trusts his own vision,” Winter confirmed. “Antoine loves a layered soundtrack. And ADR has to be authentic and true to his artistic intentions. He is a bona fide storyteller.”

Four-time Oscar-nominee Mann stated that the honored director “always elevates everything he touches; he uses sound design and music to its fullest extent. [He is] a director who always pushes the limits, while evolving his art.”

Pre-recorded tributes to Fuqua came from actor Chris Pratt, who starred in The Magnificent Seven (2016). “Nobody deserves [this award] more,” he stated. Actor Mark Wahlberg, who starred in Shooter (2007), and producer Jerry Bruckheimer were also featured.

Stephen Hunter Flick

During his 40-year career in the motion picture industry, working on some 150 films, Stephen H. Flick has garnered two Oscar wins, for Speed (1994) and Robocop (1987), together with nominations for Total Recall (1990), Die Hard (1988) and Poltergeist (1982).

The award for Outstanding Achievement in Sound Editing – Animation Short Form went to Overwatch – Reunion from Blizzard Entertainment, headed by supervising sound editor Paul Menichini. The Non-Theatrical Animation Long Form award went to NextGen from Netflix, headed by supervising sound editors David Acord and Steve Slanec.

The Feature Animation award went to the Oscar-nominated Spider-Man: Into the Spider-Verse from Sony Pictures Entertainment/Marvel, headed by supervising sound editors Geoffrey Rubay and Curt Schulkey. The Non-Theatrical Documentary award went to Searching for Sound — Islandman and Veyasin from Karga Seven Pictures/Red Bull TV, headed by supervising sound editor Suat Ayas. Finally, the Feature Documentary award was a tie between Free Solo from National Geographic Documentary Films, headed by supervising sound editor Deborah Wallach, and They Shall Not Grow Old from Wingnut Films/Fathom Events/Warner Bros., headed by supervising sound editors Martin Kwok, Brent Burge, Melanie Graham and Justin Webster.

The Outstanding Achievement in Sound Editing — Music Score award also went to Spider-Man: Into the Spider-Verse, with music editors Katie Greathouse and Catherine Wilson, while the Musical award went to Bohemian Rhapsody from GK Films/Fox Studios, with supervising music editor John Warhurst and music editor Neil Stemp. The Dialogue/ADR award also went to Bohemian Rhapsody, with supervising ADR/dialogue editors Nina Hartston and Jens Petersen, while the Effects/Foley award went to A Quiet Place from Paramount Pictures, with supervising sound editors Ethan Van der Ryn and Erik Aadahl.

The Student Film/Verna Fields Award went to Facing It from National Film and Television School, with supervising sound designer/editor Adam Woodhams.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

Sound designer Ash Knowlton joins Silver Sound

Emmy Award-winning NYC sound studio Silver Sound has added sound engineer Ash Knowlton to its roster. Knowlton is both a location sound recordist and sound designer, and on rare and glorious occasions she is DJ Hazyl. Knowlton has worked on film, television, and branded content for clients such as NBC, Cosmopolitan and Vice, among others.

“I know it might sound weird but for me, remixing music and designing sound occupy the same part of my brain. I love music, I love sound design — they are what make me happy. I guess that’s why I’m here,” she says.

Knowlton moved to Brooklyn from Albany when she was 18 years old. To this day, she considers making the move to NYC and surviving as one of her biggest accomplishments. One day, by chance, she ran into filmmaker John Zhao on the street and was cast on the spot as the lead for his feature film Alexandria Leaving. The experience opened Knowlton’s eyes to the wonders and complexity of the filmmaking process. She particularly fell in love with sound mixing and design.

Ten years later, with over seven independent feature films now under her belt, Knowlton is ready for the next 10 years as an industry professional.

Her tools of choice at Silver Sound are Reaper, Reason and Kontakt.

Main Photo Credit: David Choy