Category Archives: Audio Mixing

The sound of fighting in Jack Reacher: Never Go Back

By Jennifer Walden

Tom Cruise is one tough dude, and not just on the big screen. Cruise, who seems to be aging very gracefully, famously likes to do his own stunts, much to the dismay of many film studio execs.

Cruise’s most recent tough guy turn is in the sequel to 2014’s Jack Reacher. Jack Reacher: Never Go Back, which is in theaters now, is based on the protagonist in author Lee Child’s series of novels. Reacher, as viewers quickly find out, is a hands-on type of guy — he’s quite fond of hand-to-hand combat where he can throw a well-directed elbow or headbutt a bad guy square in the face.

Supervising sound editor Mark P. Stoeckinger, based at Formosa Group’s Santa Monica location, has worked on numerous Cruise films, including both Jack Reacher films, Mission: Impossible II and III and The Last Samurai; he also helped out on Edge of Tomorrow. Stoeckinger has a ton of respect for Cruise: “He’s my idol. Being about the same age, I’d love to be as active and in shape as he is. He’s a very amazing guy because he is such a hard worker.”

The audio post crew on ‘Jack Reacher: Never Go Back.’ Mark Stoeckinger is on the right.

Because he does his own stunts, and thanks to the physicality of Jack Reacher’s fighting style, sometimes Cruise gets a bruise or two. “I know he goes through a fair amount of pain, because he’s so extreme,” says Stoeckinger, who strives to make the sound of Reacher’s punches feel as painful as they are intended to be. If Reacher punches through a car window to hit a guy in the face, Stoeckinger wants that sound to have power. “Tom wants to communicate the intensity of the impacts to the audience, so they can appreciate it. That’s why it was performed that way in the first place.”

To give the fights that visceral, intense Reacher feel, Stoeckinger takes a multi-frequency approach. He layers high-frequency sounds, like swishes and slaps to signify speed, with low-end impacts to add weight. The layers are always an amalgamation of sound effects and Foley.
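As a rough illustration of that layering idea (not Stoeckinger’s actual process), the sketch below sums a high-passed “swish” element with a low-passed impact. The file names, filter corners and gains are purely hypothetical assumptions.

```python
# A rough sketch (not the film's actual workflow) of layering a bright
# "swish" with a low-end "impact" to build a punch sound. File names,
# filter corner frequencies and gains are illustrative assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def load_mono(path):
    sr, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:                      # fold to mono
        x = x.mean(axis=1)
    return sr, x / (np.max(np.abs(x)) + 1e-12)

sr, swish = load_mono("swish.wav")      # hypothetical high-frequency layer
_,  thump = load_mono("impact.wav")     # hypothetical low-end layer (same sample rate assumed)

# High-pass the swish (speed/air) and low-pass the impact (weight).
hp = sosfiltfilt(butter(4, 2000, "highpass", fs=sr, output="sos"), swish)
lp = sosfiltfilt(butter(4, 200, "lowpass", fs=sr, output="sos"), thump)

# Pad to equal length, then sum with simple gain staging.
n = max(len(hp), len(lp))
mix = np.zeros(n)
mix[:len(hp)] += 0.7 * hp
mix[:len(lp)] += 1.0 * lp
mix /= np.max(np.abs(mix)) + 1e-12      # normalize to avoid clipping

wavfile.write("punch_layered.wav", sr, (mix * 32767).astype(np.int16))
```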

Stoeckinger prefers pulling hit impacts from sound libraries, or creating impacts specifically with “oomph” in mind. Then he uses Foley to flesh out the fight, filling in the details to connect the separate sound effects elements in a way that makes the fights feel organic.

The Sounds of Fighting
Under Stoeckinger’s supervision, a fight scene’s sound design typically begins with sound effects. This allows his sound team to start immediately, working with what they have at hand. On Jack Reacher: Never Go Back this task was handed over to sound effects editor Luke Gibleon at Formosa Group. Once the sound effects were in place, Stoeckinger booked the One Step Up Foley stage with Foley artist Dan O’Connell. “Having the effects in place gives us a very clear idea of what we want to cover with Foley,” he says. “Between Luke and Dan, the fight soundscapes for the film came to life.”

The culminating fight sequence, where Reacher inevitably prevails over the bad guy, was Stoeckinger’s favorite to design. “The arc of the film built up to this fight scene, so we got to use some bigger sounds. Although, it still needed to seem as real as a Hollywood fight scene can be.”

The sound there features low-frequency embellishments that help the audience to feel the fight and not just hear it. The fight happens during a rowdy street festival in New Orleans in honor of the Day of the Dead. Crowds cavort with noisemakers, bead necklaces rain down, music plays and fireworks explode. “Story wise, the fireworks were meant to mask any gunshots that happened in the scene,” he says. “So it was about melding those two worlds — the fight and the atmosphere of the crowds — to help mask what we were doing. That was fun and challenging.”

The sounds of the street festival scene were all created in post since there was music playing during filming that wasn’t meant to stay on the track. The location sound did provide a sonic map of the actual environment, which Stoeckinger considered when rebuilding the scene. He also relied on field recordings captured by Larry Blake, who lives in New Orleans. “Then we searched for other sounds that were similar because we wanted it to sound fun and festive but not draw the ear too much since it’s really just the background.”

Stoeckinger sweetened the crowd sounds with recordings they captured of various noisemakers, tambourines, bead necklaces and group ADR to add mid-field and near-field detail when desired. “We tried to recreate the scene, but also gave it a Hollywood touch by adding more specifics and details to bring it more to life in various shots, and bring the audience closer to it or further away from it.”

Stoeckinger also handled design on the film’s other backgrounds. His objective was to keep the locations feeling very real, so he used a combination of practical effects they recorded and field recordings captured by effects editor Luke Gibleon, in addition to library effects. “Luke [Gibleon] has a friend with access to an airport, so Luke did some field recordings of the baggage area and various escalators with people moving around. He also captured recordings of downtown LA at night. All of those field recordings were important in giving the film a natural sound.”

There were numerous locations in this film. One finds Reacher meeting up with a teenage girl who he’s protecting from the bad guys. She lives in a sketchy part of town, so to reinforce the sketchiness of the neighborhood, Stoeckinger added nearby train tracks to the ambience and created street walla with an edgy tone. “It’s nothing that you see outside of course, but sound-wise, in the ambient tracks, we can paint that picture,” he explains.
In another location, Stoeckinger wanted to sell the idea that they were on a dock, so he added in a boat horn. “They liked the boat horn sound so much that they even put a ship in the background,” he says. “So we had little sounds like that to help ground you in the location.”

Tools and the Mix
At Formosa, Stoeckinger has his team work together in one big Avid Pro Tools 12 session that includes all of their sounds: the Foley, the backgrounds, sound effects, loop group and design elements. “We shared it,” he says. “We had a ‘check out’ system, like, ‘I’m going to check out reel three and work on this sequence.’ I did some pre-mixing, where I went through a scene or reel and decided what’s working or what sections needed a bit more. I made a mark on a timeline and then handed that off to the appropriate person. Then they opened it up and did some work. This master session circulated between two or three of us that way.” Stoeckinger, Gibleon and sound designer Alan Rankin, who handled guns and miscellaneous fight sounds, all worked on the film this way.

All the sound effects, backgrounds, and Foley were mixed on a Pro Tools ICON, and kept virtual from editorial to the final mix. “That was helpful because all the little pieces that make up a sound moment, we were able to adjust them as necessary on the stage,” explains Stoeckinger.

Premixing and the final mixes were handled at Twentieth Century Fox Studios on the Howard Hawks Stage by re-recording mixers James Bolt (effects) and Andy Nelson (dialogue/music). Their console arrangement was a hybrid, with the effects being mixed on an Avid ICON, and the dialogue and music mixed on an AMS Neve DFC console.

Stoeckinger feels that Nelson did an excellent job of managing the dialogue, particularly for moments where noisy locations may have intruded upon subtle line deliveries. “In emotional scenes, if you have a bunch of noise that happens to be part of the dialogue track, that detracts from the scene. You have to get all of the noise under control from a technical standpoint.” On the creative side, Stoeckinger appreciated Nelson’s handling of Henry Jackman’s score.

On effects, Stoeckinger feels Bolt did an amazing job in working the backgrounds into the Dolby Atmos surround field, like placing PA announcements in the overheads, pulling birds, cars or airplanes into the surrounds. While Stoeckinger notes this is not an overtly Atmos film, “it helped to make the film more spatial, helped with the ambiences and they did a little bit of work with the music too. But, they didn’t go crazy in Atmos.”

Behind the Title: Sound mixer/sound designer Rob DiFondi

Name: Rob DiFondi

Company: New York City’s Sound Lounge

Can you describe your company?
Sound Lounge is an audio post company that provides creative services for TV and radio commercials, feature films, television series, digital campaigns, gaming and other emerging media. Artist-owned and operated, we’re made up of an incredibly diverse, talented and caring group of people who all love the advertising and film worlds.

We recently celebrated Sound Lounge’s 18th birthday. I’m proud to say I’ve been a part of the SL family for over 13 years now, and I couldn’t ask for a better group of friends to hang out with every day.

What’s your job title?
Senior Mixer/Sound Designer

What does that entail?
I have actors in my booth all day recording VO (voiceover) for different commercials. My clients (usually brands, ad agencies, production companies, or editorials) hang in my room, and together we get the best possible read from the actor while they’re in the booth. I then craft sound design for the spot by either pulling sound effects from my library or recreating the necessary sounds myself (a.k.a. “Foley”). Once that’s set, I’ll take the lines the actor recorded, the sound effects I created, and any music, and then mix them all together so the spot sounds perfect (and is legal for TV broadcast)!

Being a mixer in the advertising post world isn’t easy. I also have to be able to provide a solid lunch recommendation — I always need to make sure I know where my clients can get the best sushi in the Flatiron district!

What would surprise people the most about what falls under that title?
That most of us are musicians who wanted to be rock stars but thought better of it. Maybe that isn’t so surprising though.

Sound Lounge

What’s your favorite part of the job?
The people, and the social part of the advertising industry. This business is filled with so many kind, funny and talented people, and it’s so nice to have them be a part of your life. And how can you beat partying every year at MoMA for the AICP Gala?

What’s your least favorite?
Probably the lack of travel. I love our office, but it would be fun to do my job in different cities once in a while.

What is your favorite time of the day?
Walking in my front door and seeing my wife and kids.

If you didn’t have this job, what would you be doing instead?
Something that involves beaches and nice weather.

How early on did you know this would be your path?
I totally fell into this profession. I went to school to become a music engineer/producer. I had no idea there was a whole industry for mixing TV spots. Once I got into it though, I knew immediately that I loved it.

Can you name some recent projects you have worked on?
I worked on some really nice pieces for Maybelline, Google, Lincoln and TD Ameritrade.

What is the project that you are most proud of?
Miracle Stain, a Super Bowl commercial that I mixed for Tide a few years back. I finished the mix at 10pm on Thursday and got a call at 2am that there had been some changes, so I had to come back to work in the middle of the night. I tweaked the mix until the sun came up and had it ready to ship by 9am. It was one of those very epic projects that had all the classic markings of a Super Bowl spot.

Name three pieces of technology you can’t live without.
My iPhone, my DSLR camera and iZotope RX.

What social media channels do you follow?
I’m a big Instagram guy. I love seeing people’s lives told through photos. Facebook is so 2015.

Do you listen to music while you work? Care to share your favorite music to work to?
Since I work in audio I can’t listen to music while I work, but when I’m not working I listen to a lot of modern country music, Dave Matthews Band (not afraid to say it!), prog metal and pretty much everything in between.

This is a high stress job with deadlines and client expectations. What do you do to de-stress from it all?
I just leased a Jeep Wrangler Unlimited. There’s nothing like putting the top down and taking a drive to the beach!


The sound of two worlds for The Lost City of Z

By Jennifer Walden

If you are an explorer, your goal is to go where no one has gone before, or maybe it’s to unearth and rediscover a long-lost world. Director James Gray (The Immigrant) takes on David Grann’s non-fiction book The Lost City of Z, which follows the adventures of British explorer Colonel Percival Fawcett, who in 1925 disappeared with his son in the Amazon jungle while on a quest to locate an ancient lost city.

Gray’s biographical film, which premiered October 15 at the 54th New York Film Festival, takes an interpretive approach to the story by exploring Fawcett’s inner landscape, which is at odds with his physical location — whether he’s in England or the Amazon, his thoughts drift between the two incongruent worlds.

Once Gray returned from filming The Lost City of Z in the jungles of Colombia, he met up with supervising sound editor/sound designer Robert Hein at New York’s Harbor Picture Company. Having worked together on The Immigrant years ago, Hein says he and Gray have an understanding of each other’s aesthetics. “He has very high goals for himself, and I try to have that also. I enjoy our collaboration; we keep pushing the envelope. We have a mutual appreciation for making a film the greatest it can be. It’s an evolution, and we keep pushing the film to new places.”

The Sound of Two Worlds
Gray felt Hein and Harbor Picture Company would be the perfect partner to handle the challenging sound job for The Lost City of Z. “It involved the creation of two very different worlds: Victorian England, and the jungle. Both feature the backdrop of World War I. Therefore, we wanted someone who naturally thinks outside the box, someone who doesn’t only look at the images on the screen, but takes chances and does things outside the realm of what you originally had in mind, and Bob [Hein] and his crew are those people.”

Bob Hein

Gray tasked Hein with designing a soundscape that could merge Fawcett’s physical location with his inner world. Fawcett (Charlie Hunnam) is presented with physical attacks and struggles, but it’s his inner struggle that Gray wanted to focus on. Hein explains, “Fawcett is a conflicted character. A big part of the film is his longing for two worlds: the Amazon and England. When he’s in one place, his mind is in the other, so that was very challenging to pull off.”

To help convey Fawcett’s emotional and mental conflicts, Hein introduced the sounds of England into the Amazon, and vice-versa, subtly blending the two worlds. Through sound, the audience escapes the physical setting and goes into Fawcett’s mind. For example, the film opens with the sounds of the jungle, to which Hein added an indigenous Amazonian battle drum that transforms into the drumming of an English soldier, since Fawcett is physically with a group of soldiers preparing for a hunt. Hein explains that Fawcett’s belief that the Amazonians were just as civilized as Europeans (maybe even more so) was a controversial idea at the time. Merging their drumming wasn’t just a means of carrying the audience from the Amazon to England; it was also a comment on the two civilizations.

“In a way, it’s kind of emblematic of the whole sound design,” explains Hein. “It starts out as one thing but then it transforms into another. We did that throughout the film. I think it’s very beautiful and engaging. Through the sound you enter into his world, so we did a lot of those transitions.”

In another scene, Fawcett is traveling down a river in the jungle and he’s thinking about his family in England. Here, Hein adds an indigenous bird calling, and as the scene develops he blends the sound of that bird with an English church bell. “It’s very subtle,” he says. “The sounds just merge. It’s the merging of two worlds. It’s a feeling more than an obvious trick.”

During a WWI battle scene, Fawcett leads a charge of troops out of their trench. Here Hein adds sounds related to the Amazon in juxtaposition to Fawcett’s immediate situation. “Right before he goes into war, he’s back in the jungle even though he is physically in the trenches. What you hear in his head are memories of the jungle. You hear the indigenous Amazonians, but unless you’re told what it is you might not know.”

A War Cry
According to Hein, one of the big events in the film occurs when Fawcett is being attacked by Amazonians. They are shooting at him but he refuses to accept defeat. Fawcett holds up his bible and an arrow goes tearing into the book. At that moment, the film takes the audience inside Fawcett’s mind as his whole life flashes by. “The sound is a very big part of that because you hear memories of England and memories of his life and his family, but then you start to hear an indigenous war cry that I changed dramatically,” explains Hein. “It doesn’t sound like something that would come out of a human voice. It’s more of an ethereal, haunted reference to the war cry.”

As Fawcett comes back to reality that sound gets erased by the jungle ambience. “He’s left alone in the jungle, staring at a tribe of Indians that just tried to kill him. That was a very effective sound design moment in this film.”

To turn that war cry into an ethereal sound, Hein used a granular synthesizer plug-in called Paulstretch (or Paul’s Extreme Sound Stretch), created by software engineer Paul Nasca. “Paulstretch turns sounds almost into music,” he says. “It’s an old technology, but it does some very special things. You can set it for a variety of effects. I would play around with it until I found what I liked. There were a lot of versions of a lot of different ideas as we went along.”
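For the curious, the core idea behind that kind of extreme stretching can be sketched in a few lines of Python: step through the source very slowly, keep the spectral magnitude of each window but randomize the phases, and overlap-add the results into a smeared, drone-like texture. This is only a simplified illustration of the approach, not Nasca’s implementation, and the source file name is hypothetical.

```python
# A minimal sketch of the idea behind Paulstretch-style extreme stretching:
# step through the input slowly, but overlap-add output windows at a normal
# hop, randomizing spectral phase so the result smears into a drone-like
# texture. This is a simplification, not Paul Nasca's code.
import numpy as np
from scipy.io import wavfile

def paulstretch_like(x, stretch=8.0, win_size=4096):
    window = np.hanning(win_size)
    hop_out = win_size // 4                 # output hop (normal overlap-add)
    hop_in = hop_out / stretch              # input advances `stretch` times slower
    n_frames = int((len(x) - win_size) / hop_in)
    out = np.zeros(n_frames * hop_out + win_size)

    for i in range(n_frames):
        start = int(i * hop_in)
        frame = x[start:start + win_size] * window
        spectrum = np.fft.rfft(frame)
        # Keep magnitudes, discard original phases -> "frozen", smeared sound.
        phases = np.exp(2j * np.pi * np.random.rand(len(spectrum)))
        frame_out = np.fft.irfft(np.abs(spectrum) * phases) * window
        out[i * hop_out:i * hop_out + win_size] += frame_out

    return out / (np.max(np.abs(out)) + 1e-12)

sr, x = wavfile.read("war_cry.wav")         # hypothetical source recording
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)
stretched = paulstretch_like(x, stretch=8.0)
wavfile.write("war_cry_stretched.wav", sr, (stretched * 32767).astype(np.int16))
```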

It’s all part of the creative process, which Gray is happy to explore. “What’s great is that James [Gray] is excited about sound,” says Hein. “He would hang out and we would play things together and we would talk about the film, about the main character, and we would arrive at sounds together.”

Drones
Additionally, Hein sound designed drones to highlight the anxiety and trepidation that Fawcett feels. “The drones are low, sub-frequency sounds but they present a certain atmosphere that conveys dread. These elements are very subtle. You don’t get hit over the head with them,” he says.

The drones and all the sound design were created from natural sounds from the Amazon or England. For example, to create a low-end drone, they would start with jungle sounds — imagine a bee’s nest or an Amazonian instrument — and then manipulate those. “Everything was done to immerse the audience in the world of The Lost City of Z in its purest sense,” says Hein, who worked closely with Harbor’s sound editors Glenfield Payne, Damian Volpe and Dave Paterson. “They did great work and were crucial in the sound design.”

The Amazon
Gray also asked that Hein design the indigenous Amazon world exactly the way that it should be, as real as it could be. Hein says, “It’s very hard to find the correct sound to go along with the images. A lot of my endeavor was researching and finding people who did recordings in the Amazon.”

He scoured the Smithsonian Institution Archives, and did hours of research online, looking for audio preservationists who captured field recordings of indigenous Amazonians. “There was one amazing coincidence,” says Hein. “There’s a scene in the movie where the Indians are using an herbal potion to stun the fish in the river. That’s how they do it so as not to over-fish their environment. James [Gray] had found this chant that he wanted to have there, but that chant wasn’t actually a fishing chant. Fortunately, I found a recording of the actual fishing chant online. It’s beautifully done. I contacted the recordist and he gave us the rights to use it.”

Filming in the Amazon under very difficult conditions presented Hein with another post-production challenge. “Location sound recording in the jungle is challenging because there were loud insects, rain and thunder. There were even far-afield trucks and airplanes that didn’t exist at the time.”

Gray was very concerned that sections of the location dialogue would be unusable. “The performances in the film are so great because they went deep into the Amazon jungle to shoot this film. Physically being in that environment I’m sure was very stressful, and that added a certain quality to the actors’ performances that would have been very difficult to replace with ADR,” says Hein, who carefully cleaned up the dialogue using several tools, including iZotope’s RX 5 Advanced audio restoration software. “With RX 5 Advanced, we could microscopically choose which sounds we wanted to keep and which sounds we wanted to remove, and that’s done visually. RX gives you a visual map of the audio and you can paint out sounds that are unnecessary. It’s almost like Photoshop for sound.”
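Conceptually, that kind of spectral “painting” amounts to attenuating a chosen time/frequency region of a spectrogram and resynthesizing the audio. The sketch below illustrates the general idea with SciPy; it is not iZotope’s algorithm, and the file name and region bounds are invented for the example.

```python
# A conceptual sketch of spectral editing (not iZotope's algorithm): build a
# spectrogram, attenuate a user-chosen time/frequency rectangle (e.g. an
# insect whine), then resynthesize. Region bounds here are illustrative.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

sr, x = wavfile.read("jungle_dialogue.wav")     # hypothetical location take
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)

f, t, Z = stft(x, fs=sr, nperseg=2048)

# "Paint out" a 3-6 kHz band between 2.0 s and 3.5 s by attenuating it 30 dB.
freq_sel = (f >= 3000) & (f <= 6000)
time_sel = (t >= 2.0) & (t <= 3.5)
Z[np.ix_(freq_sel, time_sel)] *= 10 ** (-30 / 20)

_, cleaned = istft(Z, fs=sr, nperseg=2048)
cleaned = cleaned / (np.max(np.abs(cleaned)) + 1e-12)
wavfile.write("jungle_dialogue_cleaned.wav", sr, (cleaned * 32767).astype(np.int16))
```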

Hein shared the cleaned dialogue tracks with Gray, who was thrilled. “He was so excited about them. He said, ‘I can use my location sound!’ That was a big part of the project.”

ADR and The Mix
While much of the dialogue was saved, there were still a few problematic scenes that required ADR, including a scene that was filmed during a tropical rainstorm, and another that was shot on a noisy train as it traveled over the mountains in Colombia. Harbor’s ADR supervisor Bobby Johanson, who has worked with Gray on previous films, recorded everything on Harbor’s ADR stage that is located just down the hall from Hein’s edit suite and the dub stage.

Gray says, “Harbor is not just great for New York; it’s great, period. It is this fantastic place where they’ve got soundstages that are 150 feet away from the editing rooms, which is incredibly convenient. I knew they could handle the job, and it was really a perfect scenario.”

The Lost City of Z was mixed in 5.1 surround on an Avid/Euphonix System 5 console by re-recording mixers Tom Johnson (dialogue/music) and Josh Berger (effects, Foley, backgrounds) in Studio A at Harbor Sound’s King Street location in Soho. It was also reviewed on the Harbor Grand stage, which is the largest theatrical mix stage in New York. The team used the 5.1 environment to create the feeling of being engulfed by the jungle. Fawcett’s trips, some of which lasted years, were grueling and filled with disease and death. “The jungle is a scary place to be! We really wanted to make sure that the audience understood the magnitude of Percy’s trips to the Amazon,” says Berger. “There are certain scenes where we used sound to heighten the audience’s perspective of how erratic and punishing the jungle can be, i.e. when the team gets caught in rapids or when they come under siege from various Indian tribes.”

Johnson, who typically mixes at Skywalker Sound, had an interesting approach to the final mix. Hein explains that Johnson would first play a reel with every available sound in it — all the dialogue and ADR, all the sound effects and Foley — and the music. “We played it all in the reel,” says Hein. “It would be overwhelming. It would be unmixed and at times chaotic. But it gave us a very good idea of how to approach the mix.”

As they worked through the film, the sound would evolve in unexpected ways. What they heard toward the end of the first pass influenced their approach on the beginning of the second pass. “The film became a living being. We became very flexible about how the sound design was coming in and out of different scenes. The sound became very integrated into the film as a whole. It was really great to experience that,” shares Hein.

As Johnson and Berger mixed, Hein was busy creating new sound design elements for the visual effects that were still coming in at the last minute. For example, the final versions of the arrows shot in the film didn’t arrive until the very end. “The arrows had to have a real special quality about them. They were very specific in communicating just how dangerous the situation actually was and what they were up against,” says Hein.

Later in the film, Amazonians throw tomahawks at Fawcett and his son as they run through the jungle. “Those tomahawks were never in the footage,” he says. “We had just an idea of them until days before we finished the mix. There was also a jaguar that comes out of the jungle and threatens them. That also came in at the last minute.”

While Hein created new sound elements in his edit suite next to the dub stage, Gray was able to join him for critique and collaboration before those sounds were sent next door to the dub stage. “Working with James is a high-energy, creative blast and super fun. He’s constantly coming up with new ideas and challenges. He spends every minute in the mix encouraging us, challenging us and, best of all, making us laugh a lot. He’s a great storyteller, and his knowledge of film and film history is remarkable. Working with James Gray is a real highlight in my career,” concludes Hein.


Jennifer Walden is a New Jersey-based audio engineer and writer. 


AES Conference focuses on immersive audio for VR/AR

By Mel Lambert

The AES Convention, which was held at the Los Angeles Convention Center in early October, attracted a broad cross section of production and post professionals looking to discuss the latest technologies and creative offerings. The convention had approximately 13,000 registered attendees and more than 250 brands showing wares in the exhibit halls and demo rooms.

Convention Committee co-chairs Valerie Tyler and Michael MacDonald, along with their team, created the comprehensive schedule of workshops, panels and special events for this year’s show. “The Los Angeles Convention Center’s West Hall was a great new location for the AES show,” said MacDonald. “We also co-located the AVAR conference, and that brought 3D audio for gaming and virtual reality into the mainstream of the AES.”

“VR seems to be the next big thing,” added AES executive director Bob Moses, “[with] the top developers at our event, mapping out the future.”

The two-day, co-located Audio for Virtual and Augmented Reality Conference was expected to attract about 290 attendees, but with aggressive marketing and outreach to the VR and AR communities, pre-registration closed at just over 400.

Aimed squarely at the fast-growing field of virtual/augmented reality audio, this conference focused on the creative process, applications workflow and product development. “Film director George Lucas once stated that sound represents 50 percent of the motion picture experience,” said conference co-chair Andres Mayo. “This conference demonstrates that convincing VR and AR productions require audio that follows the motions of the subject and produces a realistic immersive experience.”

Spatial sound that follows head orientation for headsets powered either by dedicated DSP, game engines or smartphones opens up exciting opportunities for VR and AR producers. Oculus Rift, HTC Vive, PlayStation VR and other systems are attracting added consumer interest for the coming holiday season. Many immersive-audio innovators, including DTS and Dolby, are offering variants of their cinema systems targeted at this booming consumer marketplace via binaural headphone playback.
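The head-tracking part of that pipeline is, at its simplest, a rotation of the captured sound field to counter the listener’s head movement before binaural rendering. A minimal sketch, assuming a first-order ambisonic (B-format W, X, Y, Z) signal and a simple yaw convention, might look like this; the HRTF/binaural decode that would follow is omitted.

```python
# A minimal sketch of the head-tracking step for spatial audio: rotate a
# first-order ambisonic (B-format: W, X, Y, Z) frame by the listener's yaw
# so the scene stays world-locked as the head turns. Channel ordering and
# sign convention here are assumptions; binaural decoding is not shown.
import numpy as np

def rotate_bformat_yaw(wxyz: np.ndarray, yaw_rad: float) -> np.ndarray:
    """wxyz: array of shape (4, n_samples) holding W, X, Y, Z."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    w, x, y, z = wxyz
    # Yaw is a rotation in the horizontal (X, Y) plane; W and Z are unchanged.
    x_rot = c * x + s * y
    y_rot = -s * x + c * y
    return np.stack([w, x_rot, y_rot, z])

# Example: listener turns their head 30 degrees.
frame = np.random.randn(4, 512)                 # placeholder B-format audio block
rotated = rotate_bformat_yaw(frame, np.deg2rad(30.0))
```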

Sennheiser’s remarkable new Ambeo VR microphone (pictured left) can be used to capture 3D sound that can then be post-produced into different spatial perspectives — a perfect adjunct for AR/VR offerings. At the high end, Nokia unveiled its Ozo VR camera, equipped with eight camera sensors and eight microphones, as an alternative to a DIY assembly of GoPro cameras, for example.

Two fascinating keynotes bookended the AVAR Conference. The opening keynote, presented by Philip Lelyveld, VR/AR initiative program manager at the USC Entertainment Technology Center, Los Angeles, and called “The Journey into Virtual and Augmented Reality,” defined how virtual, augmented and mixed reality will impact entertainment, learning and social interaction. “Virtual, Augmented and Mixed Reality have the potential of delivering interactive experiences that take us to places of emotional resonance, give us agency to form our own experiential memories, and become part of the everyday lives we will live in the future,” he explained.

“Just as TV programming progressed from live broadcasts of staged performances to today’s very complex language of multithread long-form content,” Lelyveld stressed, “so such media will progress from the current early days of projecting existing media language with a few tweaks to a headset experience into a new VR/AR/MR-specific language that both the creatives and the audience understand.”

In his closing keynote, “Future Nostalgia, Here and Now: Let’s Look Back on Today from 20 Years Hence,” George Sanger, director of sonic arts at Magic Leap, attempted to predict where VR/AR/MR will be in two decades. “Two decades of progress can change how we live and think in ways that boggle the mind,” he acknowledged. “Twenty years ago, the PC had rudimentary sound cards; now the entire ‘multitrack recording studio’ lives on our computers. By 2036, we will be wearing lightweight portable devices all day. Our media experience will seamlessly merge the digital and physical worlds; how we listen to music will change dramatically. We live in the Revolution of Possibilities.”

According to conference co-chair Linda Gedemer, “It has been speculated by Wall Street [pundits] that VR/AR will be as game changing as the advent of the PC, so we’re in for an incredible journey!”

Mel Lambert, who also gets photo credit on pictures from the show, is principal of Content Creators, an LA-based copywriting and editorial service. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.


The color and sound of Netflix’s The Get Down

The Get Down, Baz Luhrmann’s new series for Netflix, tells the story of the birth of hip-hop in the late 1970s in New York’s South Bronx. The show depicts a world filled with colorful characters pulsating to the rhythms of an emerging musical form.

Shot on the Red Dragon and Weapon in 6K, sound and picture finishing for the full series was completed over several months at Technicolor PostWorks New York. Re-recording mixers Martin Czembor and Eric Hirsch, working under Luhrmann’s direction and alongside supervising sound designer Ruy Garcia, put the show’s dense soundtrack into its final form.

The Get Down

Colorist John Crowley, meanwhile, collaborated with Luhrmann, cinematographer William Rexer and executive producer Catherine Martin in polishing its look. “Every episode is like a movie,” says Czembor. “And the expectations, on all levels, were set accordingly. It was complex, challenging, unique… and super fascinating.”

The Get Down’s soundtrack features original music from composer Elliott Wheeler, along with classic hip-hop tracks and flashes of disco, new wave, salsa and even opera. And the music isn’t just ambiance; it is intricately woven into the story. To illustrate the creative process, a character’s attempt to work out a song lyric might seamlessly transform into a full-blown finished song.

According to Garcia, the show’s music team began working on the project from the writing stage. “Baz uses songs as plot devices — they become part of the story. The music works together with the sound effects, which are also very musical. We tuned the trains, the phones and other sounds and synced them to the music. When a door closes, it closes on the beat.”

Ruy Garcia

The blending of story, music, dialogue and sound came together in the mix. Hirsch, who mixed Foley and effects, recalls an intensive trial-and-error process to arrive at a layering that felt right. “There was more music in this show than anything I’ve previously worked on,” he says. “It was a challenge to find enough sound effects to fill out the world without stepping on the music. We looked for places where they could breathe.”

In terms of tools, they used Avid Pro Tools 12 HD for sound and music, ADR manager for ADR cueing and Soundminer for sound FX library management. For sound design they called on Altiverb, Speakerphone and SoundToys EchoBoy to create spaces, and iZotope Iris for sampling. “We mixed using two Avid Pro Tools HDX2 systems and a double-operator Avid S6 control surface,” explains Garcia. “The mix sessions were identical to the editorial sessions, including plug-ins, to allow seamless exchange of material and elaborate conformations.”

Music plays a crucial role in the series’ numerous montage sequences, acting as a bridge as the action shifts between various interconnecting storylines. “In Episode 2, Cadillac interrogates two gang members about a nightclub shooting, as Shaolin and Zeke are trying to work out the ‘get down’ — finding the break for a hip-hop beat,” recalls Czembor. “The way those two scenes are cut together with the music is great! It has an amazing intensity.”

Czembor, who mixed dialogue and music, describes the mix as a collaborative process. During the early phases, he and Hirsch worked closely with Wheeler, Garcia and other members of the sound and picture editing teams. “We spent several days pre-mixing the dialogue, effects and music to get it into a basic shape that we all liked,” he explains. “Then Baz would come in and offer ideas on what to push and where to take it next. It was a fun process. With Baz, bigger and bolder is always better.”

The team mostly called on Garcia’s personal sound library, “plus a lot of vintage New York E train and subway recordings from some very generous fellow sound editors,” he says. “Shaolin Fantastic’s kung-fu effects come from an old British DJ’s effects record. We also recorded and edited extensive Foley, which was edited against the music reference guide.”

The Color of Hip-Hop
Bigger and bolder also applied to the picture finishing. Crowley notes that cinematographer William Rexer employed a palette of rich reddish brown, avocado and other colors popular during the ‘70s, all elevated to levels slightly above simple realism. During grading sessions with Rexer, Martin and Luhrmann, Crowley spent time enhancing the look within the FilmLight Baselight, sharpening details and using color to complement the tone of the narrative. “Baz uses color to tell the story,” he observes. “Each scene has its own look and emotion. Sometimes, individual characters have their own presence.”


John Crowley

Crowley points to a scene where Mylene gives an electrifying performance in a church (photo above). “We made her look like a superstar,” he recalls. “We darkened the edges and did some vignetting to make her the focus of attention. We softened her image and added diffusion so that she’s poppy and glows.”

The series uses archival news clips, documentary material and stock footage as a means of framing the story in the context of contemporary events. Crowley helped blend this old material with the new through the use of digital effects. “In transitioning from stock to digital, we emulated the gritty 16mm look,” he explains. “We used grain, camera shake, diffusion and a color palette of warm tones. Then, once we got into a scene that was shot digitally, we would gradually ride the grain out, leaving just a hint.”
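As a loose illustration of those treatments (not the show’s actual Baselight pipeline), grain, diffusion and shake can be faked on a single frame with a few NumPy/SciPy operations; all file-free parameters below are arbitrary assumptions.

```python
# A rough illustrative sketch (not the show's Baselight pipeline) of faking a
# gritty 16mm feel on a frame: add film-grain noise, a touch of blur for
# "diffusion", and a small random offset for camera shake.
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def grunge_frame(frame, grain=0.05, diffusion=1.2, max_shake=2.0):
    """frame: float image in [0, 1], shape (H, W, 3)."""
    noisy = frame + np.random.normal(0.0, grain, frame.shape)       # grain
    soft = gaussian_filter(noisy, sigma=(diffusion, diffusion, 0))  # diffusion
    dy, dx = np.random.uniform(-max_shake, max_shake, size=2)       # shake
    shaken = shift(soft, (dy, dx, 0), mode="nearest")
    return np.clip(shaken, 0.0, 1.0)

frame = np.random.rand(480, 720, 3)        # placeholder frame
treated = grunge_frame(frame)
```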

Crowley says it’s unusual for a television series to employ such complex, nuanced color treatments. “This was a unique project created by a passionate group of artists who had a strong vision and knew how to achieve it,” he says.


AES: Avid intros Pro Tools 12.6 and new MTRX audio interface

Avid was at AES in LA with several new tools and updates for audio post pros. New releases include Pro Tools 12.6 software and Pro Tools MTRX, an audio interface for Pro Tools, HDX and HD Native.

Avid Pro Tools 12.6 delivers new editing capabilities, including Clip Effects and layered editing features, making it possible to edit and prepare mixes faster. Production can also be accelerated using automatic playlist creation and selection using shortcut keys. Enhanced “in-the-box” dubber workflows have also been included.

Pro Tools MTRX, developed by Digital Audio Denmark, gives Pro Tools users the superior sonic quality of DAD’s A to D and D to A converters, along with flexible monitoring, I/O and routing capabilities, all in one unit. MTRX will let users gain extended monitor control and flexible routing with Pro Tools S6, S3 and other EUCON surfaces, use the converter as a high-performance 64-channel Pro Tools HD interface, and get automatic sample rate conversion on AES inputs. MTRX (our main photo) will be available later this year.

Tony Cariddi

During AES LA, we caught up with Tony Cariddi, director of product and solutions marketing for Avid, to see what he had to say about where Avid is going next. “What we have seen in the industry is that there is no shortage of innovation and there are new solutions for problems that are always emerging,” says Cariddi. “But what happens when you have all of these different solutions is it puts a lot of pressure on the user to make sure everything works together seamlessly. So what you’ll see from Avid Everywhere going forward is a continuation of trying to connect our own products closer together on the MediaCentral Platform, so it’s really fluid for our users, but also for people to be able to integrate other solutions into that platform just as easily.

“We also have to be responsive to how people want to access our tools,” he continued. “What kind of packages are they looking for? Do they want to subscribe? Do they want to buy? Enterprise licensing? Floating license? So you’ll probably see bundles and new ways to access licensing and new flexible ways to maybe rent the software when you need it. We’re trying to be very responsive to the multifaceted needs of the industry, and part of that is workflow, part of that is financial and part of that is the integration of everything.”


iZotope intros mixing plug-in Neutron at AES show

iZotope was at last week’s AES show in LA with Neutron, their newest plug-in, which is geared toward simplifying and enhancing the mixing process. Neutron’s Track Assistant saves you time by listening to your audio and recommending custom starting points for tracks. According to iZotope, analysis intelligence within Neutron allows Track Assistant to automatically detect instruments, recommend the placement of EQ nodes and set optimal settings for other modules. Users still maintain full control over all their mix decisions, but Track Assistant gives them more time to focus on their creative take on the mix.

Neutron’s Masking Meter allows you to visually identify and fix perceptual frequency collisions between instruments, which can result in guitars masking lead vocals, bass covering up drums and other issues that can cause a “muddy” or overly crowded mix. Easily tweak each track to carve away muddiness and reveal new sonic possibilities.
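The underlying idea is simple to sketch: compare where two tracks carry their energy and flag bands where one overwhelms the other. The example below is a conceptual illustration only, not iZotope’s metering; the stem file names and the 6 dB threshold are assumptions.

```python
# A conceptual sketch (not iZotope's method) of flagging potential masking:
# compare the average band energy of two tracks and report bands where the
# "masker" (e.g. guitars) sits well above the "target" (e.g. lead vocal).
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

def band_energy_db(path, nperseg=4096):
    sr, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:
        x = x.mean(axis=1)
    f, _, Z = stft(x, fs=sr, nperseg=nperseg)
    energy = np.mean(np.abs(Z) ** 2, axis=1)        # average over time
    return f, 10 * np.log10(energy + 1e-12)

f, vocal_db = band_energy_db("lead_vocal.wav")       # hypothetical stems
_, guitar_db = band_energy_db("guitars.wav")

# Flag bands where the guitars sit more than 6 dB above the vocal.
threshold_db = 6.0
for freq, v, g in zip(f, vocal_db, guitar_db):
    if 200 <= freq <= 5000 and g - v > threshold_db:
        print(f"possible masking near {freq:7.0f} Hz: guitars +{g - v:.1f} dB")
```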

“[Neutron] has a deep understanding of the tracks and where they compete with one another, and it offers subtle enhancements to the sound based on that understanding,” explains iZotope CEO/co-founder Mark Ethier.

Neutron can be used on every track, offering zero-latency, CPU-efficient performance. It offers static/dynamic EQ, two multiband compressors, a multiband Transient Shaper, a multiband Exciter and a True Peak Limiter.

What the plug-in offers:
• The ability to automatically detect different instruments — such as vocals, dialogue, guitar, bass, and drums — and then apply the spectral shaping technology within Neutrino to provide subtle clarity and balance to each track.
• Recommendations for optimal starting points using Track Assistant, including EQ nodes, compressor thresholds, saturation types and multiband crossover points.
• It carves out sonic space using the Masking Meter to help each instrument sit better in the mix.
• The ability to create a mix with five mixing processors integrated into one CPU-efficient channel strip, offering both clean digital and warm vintage-flavored processing.
• There is surround support [Advanced Only] for audio post pros who need to enhance the audio-for-picture experience.
• There are individual plug-ins [Advanced Only] for the Equalizer, Compressor, Transient Shaper and Exciter.

Neutron and Neutron Advanced are available now. Neutron Advanced will also be available as part of iZotope’s new Music Production Bundle 2. This combines iZotope’s latest products with its other tools, including Ozone 7 Advanced, Nectar 2 Production Suite, VocalSynth, Trash 2 Expanded, RX Plug-in Pack and Insight.

Once available, Neutron, Neutron Advanced and the Music Production Bundle 2 will be discounted through October 31, 2016: Neutron will be available for $199 (reg. $249); Neutron Advanced for $299 (reg. $349); and the Music Production Bundle 2 for $499 (reg. $699).