VFX in Series: The Man in the High Castle, Westworld

By Karen Moltenbrey

The look of television changed forever starting in the 1990s as computer graphics technology began to mature to the point where it could be incorporated within television productions. Indeed, the applications initially were minor, but soon audiences were witnessing very complicated work on the small screen. Today, we see a wide range of visual effects being used in television series, from minor wire and sign removal to all-CG characters and complete CG environments — pretty much anything and everything to augment the action and story, or to turn a soundstage or location into a specific locale that could be miles away or even non-existent.

Here, we examine two prime examples in which a wide range of visual effects sets the stage and propels the action for a pair of series with singular settings. The Man in the High Castle uses effects to turn back the clock to the 1960s, but also to create an alternate reality for the period, turning the familiar on its head. In Westworld, effects create a unique Wild West of the future. In both shows, VFX also help turn up the volume on very creative storylines.

The Man in the High Castle

What would life in the US be like if the Axis powers had defeated the Allied forces during World War II? The Amazon TV series The Man in the High Castle explores that alternate history scenario. Created by Frank Spotnitz and produced by Amazon Studios, Scott Free Productions, Headline Pictures, Electric Shepherd Productions and Big Light Productions, the series is scheduled to start its fourth and final season in mid-November. The story is based on the book by Philip K. Dick.

High Castle begins in the early 1960s in a dystopian America. Nazi Germany and the Empire of Japan have divvied up the US as their spoils of war. Germany rules the East, known as the Greater Nazi Reich (with New York City as the regional capital), while Japan controls the West, known as the Japanese Pacific States (whose capital is now San Francisco). The Rocky Mountains serve as the Neutral Zone. The American Resistance works to thwart the occupiers, spurred on by the discovery of materials depicting an alternate reality in which the Allies were victorious.

With this unique storyline, visual effects artists were tasked with turning back the clock on present-day locations to the ’60s and then transforming them into German- and Japanese-influenced environments. Starting with Season 2, the main studio filling this role has been Barnstorm Visual Effects (Los Angeles, Vancouver). Barnstorm operated as one of the vendors for Season 1, but has since ramped up its crew from a dozen to around 70 to take on the additional work. (Barnstorm also works on CBS All Access shows such as The Good Fight and Strange Angel, in addition to Get Shorty, Outlander and the HBO series Room 104 and Silicon Valley.)

According to Barnstorm co-owner and VFX supervisor Lawson Deming, the studio is responsible for all types of effects for the series — ranging from simple cleanup and fixes, such as removing modern objects from shots, to more extensive period work through the addition of period set pieces and set extensions. In addition, there are some flashback scenes that call for the artists to digitally de-age the actors, lots of military vehicles to add and science-fiction objects to create. The majority of the overall work entails CG set extensions and world creation, Deming explains. “That involves matte paintings and CG vehicles and buildings.”

The number of visual effects shots per episode varies greatly depending on the storyline, averaging around 60 per episode across each 10-episode season. Currently, the team is working on Season 4. A core group of eight to 10 CG artists and 12 to 18 compositors works on the show at any given time.

For Season 3, released last October, there are a number of scenes that take place in the Reich-occupied New York City. Although it was possible to go to NYC and photograph buildings for reference, the city has changed significantly since the 1960s, “even notwithstanding the fact that this is an alternate history 1960s,” says Deming. “There would have been a lot of work required to remove modern-day elements from shots, particularly at the street level of buildings where modern-day shops are located, even if it was a building from the 1940s, ’50s or ’60s. The whole main floor would have needed to be replaced.”

So, in many cases, the team found it more prudent to create set extensions for NYC from scratch. The artists created sections of Fifth and Sixth avenues, both for the area where American-born Reichmarshall and Resistance investigator John Smith has his apartment and for a parade sequence that occurs in the middle of Season 3. They also constructed a digital version of Central Park for that sequence, which involved crafting a lot of modular buildings with mix-and-match pieces and stories to produce what looked like a wide variety of period-accurate buildings, with matte paintings for the backgrounds. Elements such as fire escapes and various types of windows (some with curtains open, some closed) helped randomize the structures. Shaders for brick, stucco, wood and so forth further enabled the artists to get a lot of usage from relatively few assets.
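The mix-and-match approach described here is essentially procedural: a small library of modules (storeys, window types, materials) recombined at random to suggest many distinct buildings. As a rough illustration only — the module names below are entirely hypothetical, not Barnstorm's actual assets — the selection logic might look like this:

```python
import random

# Hypothetical module library; real productions would reference actual
# modeled assets rather than strings.
STOREY_KINDS = ["plain", "fire_escape", "cornice"]
WINDOW_KINDS = ["curtains_open", "curtains_closed", "shade_drawn"]
MATERIALS = ["brick", "stucco", "wood"]

def random_building(rng, min_storeys=3, max_storeys=8):
    """Assemble one background building from the shared module library."""
    return {
        "material": rng.choice(MATERIALS),
        "storeys": [
            {"kind": rng.choice(STOREY_KINDS),
             "window": rng.choice(WINDOW_KINDS)}
            for _ in range(rng.randint(min_storeys, max_storeys))
        ],
    }

def city_block(seed, n_buildings=12):
    """A deterministic block of varied buildings built from the same few assets."""
    rng = random.Random(seed)  # seeding keeps renders reproducible shot to shot
    return [random_building(rng) for _ in range(n_buildings)]
```

Seeding the generator is the key detail: the same block can be regenerated identically for every shot in a sequence while still looking varied on screen.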

“That was a large undertaking, particularly because in a lot of those scenes, we also had crowd duplication, crowd systems, tiling and so on to create everything that was there,” Deming explains. “So even though it’s just a city and there’s nothing necessarily fantastical about it, it was almost fully created digitally.”

The styles of NYC and San Francisco are very different in the series narrative. The Nazis are rebuilding NYC in their own image, so there is a lot of influence from brutalist architecture, and cranes often dot the skyline to emphasize all the construction taking place. Meanwhile, San Francisco has more of a 1940s look, as the Japanese are less interested in driving architectural change than in occupation.

“We weren’t trying to create a science-fiction world because we wanted to be sure that what was there would be believable and sell the realistic feel of the story. So, we didn’t want to go too far in what we created. We wanted it to feel familiar enough, though, that you could believe this was really happening,” says Deming.

One of the standout episodes for visual effects is “Jahr Null” (Season 3, Episode 10), which has been nominated for a 2019 Emmy in the Outstanding Special Visual Effects category. It entails the destruction of the Statue of Liberty, which crashes into the water, requiring just about every tool available at Barnstorm. “Prior to [the upcoming] Season 4, our biggest technical challenge was the Statue of Liberty destruction. There were just so many moving parts, literally and figuratively,” says Deming. “So many things had to occur in the narrative – the Nazis had this sense of showmanship, so they filmed their events and there was this constant stream of propaganda and publicity they had created.”

Ferries carry spectators to watch the event, spotlights play on the statue, and an air show with music precedes the destruction as planes with trails of colored smoke fly toward the statue. When the planes fire their missiles at the base of the statue, it’s for show: a ring of explosives planted in the base goes off to force the collapse. Deming explains the logistics challenge: “We wanted the statue’s torch arm to break off and sink in the water, but the statue sits too far back. We had to manufacture a way for the statue to not just tip over, but to sort of slide down the rubble of the base so it would be close enough to the edge and the arm would snap off against the side of the island.”

The destruction simulation, including the explosions, fire and water, was handled primarily in Side Effects Houdini. Because there was so much sim work, a good deal of the effects work for the entire sequence was done in Houdini as well. Lighting and rendering for the scene were done in Autodesk’s Arnold.

Barnstorm also used Blender, an open-source 3D program for modeling and asset creation, for a small portion of the assets in this sequence. In addition, the artists used Houdini Mantra for the water rendering, while textures and shaders were built in Adobe’s Substance Painter; later the team used Foundry’s Nuke to composite the imagery. “There was a lot of deep compositing involved in that scene because we had to have the lighting interact in three dimensions with things like the smoke simulation,” says Deming. “We had a bunch of simulations stacked on top of one another that created a lot of data to work with.”

The artists referenced historical photographs as they designed and built the statue with a period-accurate torch. In the wide aerial shots, the team used some stock footage of the statue with New York City in the background, but had to replace pretty much everything in the shot, shortening the city buildings and replacing Liberty Island, the water surrounding it and the vessels in the water. “So yeah, it ended up being a fully digital model throughout the sequence,” says Deming.

Deming cannot discuss the effects work coming up in Season 4, but he does note that Season 3 contained a lot of digital NYC. This included a sequence wherein John Smith was installed as the Reichmarshall near Central Park, a scene that comprised a digital NYC and digital crowd duplication. On the other side of the country, the team built digital versions of all the ships in San Francisco harbor, including CG builds of period Japanese battleships retrofitted with more modern equipment. Water simulations rounded out the scene.

In another sequence, the Japanese performed nuclear testing in Monument Valley, blowing the caps off the mesas. For that, the artists used reference photos to build the landscape and then created a digital simulation of a nuclear blast.

In addition, there was a multitude of banners on the various buildings. Because of the provocative nature of some of the Nazi flags and Fascist propaganda, solid-color banners were often hung on location, with artists adding the offending imagery in post so as not to upset locals where the series was filmed. At other times, the VFX artists added all-digital signage to the scenes.

As Deming points out, there is only so much that can be created through production design and costumes. Some of the big things have to be done with visual effects. “There are large world events in the show that happen and large settings that we’re not able to re-create any other way. So, the visual effects are integral to the process of creating the aesthetic world of the show,” he adds. “We’re creating things that, while visually impressive, also feel authentic, like a world that could really exist. That’s where the power and the horror of this world come from.”

High Castle is up for a total of three Emmy awards later this month. It was nominated for three Emmys in 2017 for Season 2 and four in 2016 for Season 1, taking home two Emmys that year: one for Outstanding Cinematography for a Single-Camera Series and another for Outstanding Title Design.

Westworld

What happens when high tech meets the Wild West, and wealthy patrons can indulge their fantasies with no limits? That is the premise of the Emmy-winning HBO series Westworld from creators Jonathan Nolan and Lisa Joy, who executive produce along with J.J. Abrams, Athena Wickham, Richard J. Lewis, Ben Stephenson and Denise Thé.

Westworld is set in the fictitious western theme park called Westworld, one of multiple parks where advanced technology enables the use of lifelike android hosts to cater to the whims of guests who are able to pay for such services — all without repercussions, as the hosts are programmed not to retaliate or harm the guests. After each role-play cycle, the host’s memory is erased, and then the cycle begins anew until eventually the host is either decommissioned or used in a different narrative. Staffers are situated out of sight while overseeing park operations and performing repairs on the hosts as necessary. As you can imagine, guests often play out the darkest of desires. So, what happens if some of the hosts retain their memories and begin to develop emotions? What if some escape from the park? What occurs in the other themed parks?

The series debuted in October 2016, with Season 2 running from April through June of 2018. Production for Season 3 began this past spring, and the new season is planned for release in 2020.

The first two seasons were shot in various locations in California, as well as in Castle Valley near Moab, Utah. Multiple vendors provide the visual effects, including the team at CoSA VFX (North Hollywood, Vancouver and Atlanta), which has been with the show since the pilot, working closely with Westworld VFX supervisor Jay Worth. CoSA worked with Worth in the past on other series, including Fringe, Undercovers and Person of Interest.

The number of VFX shots per episode varies, depending on the storyline, and that means the number of shots CoSA is responsible for varies widely as well. For instance, the facility did approximately 360 shots for Season 1 and more than 200 for Season 2. The studio is unable to discuss its work at this time on the upcoming Season 3.

The type of effects work CoSA has done on Westworld varies as well, ranging from concept art through the studio’s concept department to set-extension work through its environments department. “Our CG team is quite large, so we handle every task from modeling and texturing to rigging, animation and effects,” says Laura Barbera, head of 3D at CoSA. “We’ve created some seamless digital doubles for the show that even I forget are CG! We’ve done crowd duplication, for which we did a fun shoot where we dressed up in period costumes. Our 2D department is also sizable, and they do everything from roto, to comp and creative 2D solutions, to difficult greenscreen elements. We even have a graphics department that did some wonderful shots for Season 2, including holograms and custom interfaces.”

On the 3D side, the studio’s pipeline is built mainly around Autodesk’s Maya and Side Effects Houdini, along with Adobe’s Substance, Foundry’s Mari and Pixologic’s ZBrush. Maxon’s Cinema 4D and Interactive Data Visualization’s SpeedTree vegetation modeler are also used. On the 2D side, the artists employ Foundry’s Nuke and the Adobe suite, including After Effects and Photoshop; rendering is done in Chaos Group’s V-Ray and Redshift.

Of course, there have been some recurring effects each season, such as the host “twitches and glitches.” And while some of the same locations have been revisited, the CoSA artists have had to modify the environments to fit with the changing timeline of the story.

“Every season sees us getting more and more into the characters and their stories, so it’s been important for us to develop along with it. We’ve had to make our worlds more immersive so that we are feeling out the new and changing surroundings just like the characters are,” Barbera explains. “So the set work gets more complex and the realism gets even more heightened, ensuring that our VFX become even more seamless.”

At center stage have been the park locations, which are rooted in existing terrain, as there is a good deal of location shooting for the series. The challenge for CoSA then becomes how to enhance it and make nature seem even more full and impressive, while still subtly hinting toward the changes in the story, says Barbera. For instance, the studio did a significant amount of work to the Skirball Cultural Center locale in LA for the outdoor environment of Delos, which owns and operates the parks. “It’s now sitting atop a tall mesa instead of overlooking the 405!” she notes. The team also added elements to the abandoned Hawthorne Plaza mall to depict the sublevels of the Delos complex. They’re constantly creating and extending the environments in locations inside and out of the park, including the town of Pariah, a particularly lawless area.

“We’ve created beautiful additions to the outdoor sets. I feel sometimes like we’re looking at a John Ford film, where you don’t realize how important the world around you is to the feel of the story,” Barbera says.

CoSA has done significant interior work too, creating spaces that did not exist on set “but that you’d never know weren’t there unless you’d see the before and afters,” Barbera says. “It’s really very visually impressive — from futuristic set extensions, cars and [Westworld park co-creator] Arnold’s house in Season 2, it’s amazing how much we’ve done to extend the environments to make the world seem even bigger than it is on location.”

One of the larger challenges of the first two seasons came in Season 2: creating the Delos complex, and the final episodes in which the studio had to build a world inside a world – the Sublime – as well as the gateway to get there. “Creating the Sublime was a challenge because we had to reuse and yet completely change existing footage to design a new environment,” explains Barbera. “We had to find out what kind of trees and foliage would live in that environment, and then figure out how to populate it with hosts that were never in the original footage. This was another sequence where we had to get particularly creative about how to put all the elements together to make it believable.”

In the final episode of the second season, the group created environment work on the hills, pinnacles and quarry where the door to the Sublime appears. They also did an extensive rebuild of the Sublime environment, where the hosts emerge after crossing over. “In the first season, we did a great deal of work on the plateau side of Delos, as well as adding mesas into the background of other shots — where [hosts] Dolores and Teddy are — to make the multiple environments feel connected,” adds Barbera.

Aside from the environments, CoSA also did some subtle work on the robots, especially in Season 2, to make them appear as if they were becoming unhinged, hinting at a malfunction. The comp department also added eye twitches, subtle facial tics and even rapid blinks to provide a sense of uneasiness.

While Westworld’s blending of the Old West’s past and the robotic future initially may seem at thematic odds, the balance of that duality is cleverly accomplished in the filming of the series and the way it is performed, Barbera points out. “Jay Worth has a great vision for the integrated feel of the show. He established the looks for everything,” she adds.

The balance of the visual effects is equally important because it enhances the viewer experience. “There are things happening that can be so subtle but have so much impact. Much of our work on the second season was making sure that the world stayed grounded, so that the strangeness that happened with the characters and story line read as realistic,” Barbera explains. “Our job as visual effects artists is to help our professional storytelling partners tell their tales by adding details and elements that are too difficult or fantastic to accomplish live on set in the midst of production. If we’re doing our job right, you shouldn’t feel suddenly taken out of the moment because of a splashy effect. The visuals are there to supplement the story.”


Karen Moltenbrey is a veteran writer/editor covering VFX and post production.

True Detective’s quiet, tense Emmy-nominated sound

By Jennifer Walden

When there’s nothing around, there’s no place to hide. That’s why quiet soundtracks can be the most challenging to create. Every flaw in the dialogue — every hiss, every off-mic head turn, every cloth rustle against the body mic — stands out. Every incidental ambient sound — bugs, birds, cars, airplanes — stands out. Even the noise-reduction processing to remove those flaws can stand out, particularly when there’s a minimalist approach to sound effects and score.

That is why the sound editing and mixing on Season 3 of HBO’s True Detective have been recognized with Emmy nominations. The sound team put together a quiet, tense soundtrack that perfectly matched the tone of the show.

L to R: Micah Loken, Tateum Kohut, Mandell Winter, David Esparza and Greg Orloff.

We reached out to the team at Sony Pictures Post Production Services to talk about the work — supervising sound editor Mandell Winter; sound designer David Esparza, MPSE; dialogue editor Micah Loken; as well as re-recording mixers Tateum Kohut and Greg Orloff (who mixed the show in 5.1 surround on an Avid S6 console at Deluxe Hollywood Stage 5).

Of all the episodes in Season 3 of True Detective, why did you choose “The Great War and Modern Memory” for award consideration for sound editing?
Mandell Winter: This episode had a little bit of everything. We felt it represented the season pretty well.

David Esparza: It also sets the overall tone of the season.

Why this episode for sound mixing?
Tateum Kohut: The episode had very creative transitions, and it set up the emotion of our main characters. It establishes the three timelines that the season takes place in. Even though it didn’t have the most sound or the most dynamic sound, we chose it because, overall, we were pleased with the soundtrack, as was HBO. We were all pleased with the outcome.

Greg Orloff: We looked at Episode 5 too, “If You Have Ghosts,” which had a great seven-minute set piece with great action and cool transitions. But overall, Episode 1 was more interesting sonically. As an episode, it had great transitions and tension all throughout, right from the beginning.

Let’s talk about the amazing dialogue on this show. How did you get it so clean while still retaining all the quality and character?
Winter: Geoffrey Patterson was our production sound mixer, and he did a great job capturing the tracks. We didn’t do a ton of ADR because our dialogue editor, Micah Loken, was able to do quite a bit with the dialogue edit.

Micah Loken: Both the recordings and acting were great. That’s one of the most crucial steps to a good dialogue edit. The lead actors — Mahershala Ali and Stephen Dorff — had beautiful and engaging performances and excellent resonance to their voices. Even at a low-level whisper, the character and quality of the voice was always there; it was never too thin. By using the boom, the lav, or a special combination of both, I was able to dig out the timbre while minimizing noise in the recordings.

What helped me most was Mandell and I had the opportunity to watch the first two episodes before we started really digging in, which provided a macro view into the content. Immediately, some things stood out, like the fact that it was wall-to-wall dialogue on each episode, and that became our focus. I noticed that on set it was hot; the exterior shots were full of bugs and the actors would get dry mouths, which caused them to smack their lips — which is commonly over-accentuated in recordings. It was important to minimize anything that wasn’t dialogue while being mindful to maintain the quality and level of the voice. Plus, the story was so well-written that it became a personal endeavor to bring my A game to the team. After completion, I would hand off the episode to Mandell and our dialogue mixer, Tateum.

Kohut: I agree. Geoffrey Patterson did an amazing job. I know he was faced with some challenges and environmental issues there in northwest Arkansas, especially on the exteriors, but his tracks were superbly recorded.

Mandell and Micah did an awesome job with the prep, so it made my job very pleasurable. Like Micah said, the deep booming voices of our two main actors were just amazing. We didn’t want to go too far with noise reduction in order to preserve that quality, and it did stand out. I did do more de-essing and de-ticking using iZotope RX 7 and FabFilter Pro-Q 2 to knock down some syllables and consonants that were too sharp, just because we had so much close-up, full-frame face dialogue that we didn’t want to distract from the story and the great performances that they were giving. But very little noise reduction was needed due to the well-recorded tracks. So my job was an absolute pleasure on the dialogue side.
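De-essing of the kind Kohut describes is, at heart, frequency-selective gain reduction: when sibilant (high-frequency) energy dominates a short stretch of dialogue, that stretch is turned down. The toy sketch below illustrates the principle only — it is not how RX 7 or Pro-Q 2 work internally — and it approximates high-frequency content with the energy of the sample-to-sample difference:

```python
def de_ess(samples, frame=256, ratio_threshold=0.5, reduction=0.5):
    """Attenuate frames whose high-frequency energy dominates.

    The sample-to-sample difference is a crude stand-in for a
    sibilance-band (roughly 5-10 kHz) detector; a real de-esser would
    use a proper band-pass filter plus smooth attack/release envelopes.
    """
    out = list(samples)
    for start in range(0, len(samples), frame):
        chunk = samples[start:start + frame]
        total = sum(x * x for x in chunk) or 1e-12  # avoid divide-by-zero
        hf = sum((chunk[i] - chunk[i - 1]) ** 2 for i in range(1, len(chunk)))
        if hf / total > ratio_threshold:
            # Frame is dominated by rapid sample-to-sample change: treat
            # it as sibilant and turn it down.
            for i in range(start, start + len(chunk)):
                out[i] *= reduction
    return out
```

The hard frame-by-frame gain step here would click audibly in practice; real tools crossfade the gain change, which is why they are used instead of hand-rolled scripts.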

Their editing work gave me more time to focus on the creative mixing, like weaving in the music just the way that series creator Nic Pizzolatto and composer T Bone Burnett wanted, and working with Greg Orloff on all these cool transitions.

We’re all very happy with the dialogue on the show and very proud of our work on it.

Loken: One thing that I wanted to remain cognizant of throughout the dialogue edit was making sure that Tateum had a smooth transition from line to line on each of the tracks in Pro Tools. Some lines might have had more intrinsic bug sounds or unwanted ambience but, in general, during the moments of pause, I knew the background ambience of the show was probably going to be fairly mild and sparse.

Mandell, how does your approach to the dialogue on True Detective compare to Deadwood: The Movie, which also earned Emmy nominations this year for sound editing and mixing?
Winter: Amazingly enough, we had the same production sound mixer on both — Geoffrey Patterson. That helps a lot.

We had more time on True Detective than on Deadwood. Deadwood was just “go.” We did the whole film in about five or six weeks. For True Detective, we had 10 days of prep time before we hit a five-day mix. We also had less material to get through on an episode of True Detective within that time frame.

Going back to the mix on the dialogue, how did you get the whispering to sound so clear?
Kohut: It all boils down to how well the dialogue was recorded. We were able to preserve that whispering and get a great balance around it. We didn’t have to force anything through. So, it was well-recorded, well-prepped and it just fit right in.

Let’s talk about the space around the dialogue. What was your approach to world building for “The Great War And Modern Memory?” You’re dealing with three different timelines from three different eras: 1980, 1990, and 2015. What went into the sound of each timeline?
Orloff: It was tough in a way because the different timelines overlapped sometimes. We’d have a transition happening, but with the same dialogue. So the challenge became how to change the environments on each of those cuts. One thing that we did was to make the show as sparse as possible, particularly after the discovery of the body of the young boy Will Purcell (Phoenix Elkin). After that, everything in the town becomes quiet. We tried to take out as many birds and bugs as possible, as though the town had died along with the boy. From that point on, anytime we were in that town in the original timeline, it was dead-quiet. As we went on later, we were able to play different sounds for that location, as though the town is recovering.

The use of sound on True Detective is very restrained. Were the decisions on where to have sound and how much sound happening during editorial? Or were those decisions mostly made on the dub stage when all the elements were together? What were some factors that helped you determine what should play?
Esparza: Editorially, the material was definitely prepared with a minimalistic aesthetic in mind. I’m sure it got pared down even more once it got to the mix stage. The aesthetic of the True Detective series in general tends to be fairly minimalistic and atmospheric, and we continued with that in this third season.

Orloff: That’s purposeful, from the filmmakers on down. It’s all about creating tension. Sometimes the silence helps more to create tension than having a sound would. Between music and sound effects, this show is all about tension. From the very beginning, from the first frame, it starts and it never really lets up. That was our mission all along, to keep that tension. I hope that we achieved that.

That first episode — “The Great War And Modern Memory” — was intense even the first time we played it back, and I’ve seen it numerous times since, and it still elicits the same feeling. That’s the mark of great filmmaking and storytelling and hopefully we helped to support that. The tension starts there and stays throughout the season.

What was the most challenging scene for sound editorial in “The Great War And Modern Memory?” Why?
Winter: I would say it was the opening sequence with the kids riding the bikes.

Esparza: It was a challenge to get the bike spokes ticking and to decide what was going to play, what wasn’t, and how it would be presented. That scene went through a lot of work on the mix stage, but editorially, that scene took the most time to get right.

What was the most challenging scene to mix in that episode? Why?
Orloff: For the effects side of the mix, the most challenging part was the opening scene. We worked on that longer than any other scene in that episode. That first scene is really setting the tone for the whole season. It was about getting that right.

We had brilliant sound design for the bike spokes ticking that transitions into a watch ticking that transitions into a clock ticking. Even though there’s dialogue that breaks it up, you’re continuing with different transitions of the ticking. We worked on that both editorially and on the mix stage for a long time. And it’s a scene I’m proud of.

Kohut: That first scene sets up the whole season — the flashback, the memories. It was important to the filmmakers that we got that right. It turned out great, and I think it really sets up the rest of the season and the intensity that our actors have.

What are you most proud of in terms of sound this season on True Detective?
Winter: I’m most proud of the team. The entire team elevated each other and brought their A-game all the way around. It all came together this season.

Orloff: I agree. I think this season was something we could all be proud of. I can’t be complimentary enough about the work of Mandell, David and their whole crew. Everyone on the crew was fantastic and we had a great time. It couldn’t have been a better experience.

Esparza: I agree. And I’m very thankful to HBO for giving us the time to do it right and spend the time, like Mandell said. It really was an intense emotional project, and I think that extra time really paid off. We’re all very happy.

Winter: One thing we haven’t talked about was T Bone and his music. It really brought a whole other level to this show. It brought a haunting mood, and he always brings such unique tracks to the stage. When Tateum would mix them in, the whole scene would take on a different mood. The music at times danced that thin line, where you weren’t sure if it was sound design or music. It was very cool.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

ADR, loop groups, ad-libs: Veep‘s Emmy-nominated audio team

By Jennifer Walden

HBO wrapped up its seventh and final season of Veep back in May, so sadly, we had to say goodbye to Julia Louis-Dreyfus’ morally flexible and potty-mouthed Selina Meyer. And while Selina’s political career was a bit rocky at times, the series was rock-solid — as evidenced by its 17 Emmy wins and 68 nominations over the show’s seven-year run.

For re-recording mixers William Freesh and John W. Cook II, this is their third Emmy nomination for Sound Mixing on Veep. This year, they entered the series finale — Season 7, Episode 7 “Veep” — for award consideration.

L-R: William Freesh, Sue Cahill, John W. Cook, II

Veep post sound editing and mixing was handled at NBCUniversal Studio Post in Los Angeles. In the midst of Emmy fever, we caught up with re-recording mixer Cook (who won a past Emmy for the mix on Scrubs) and Veep supervising sound editor Sue Cahill (winner of two past Emmys for her work on Black Sails).

Here, Cook and Cahill talk about how Veep’s sound has grown over the years, how they made the rapid-fire jokes crystal clear, and the challenges they faced in crafting the series’ final episode — like building the responsive convention crowds, mixing the transitions to and from the TV broadcasts, and cutting that epic three-way argument between Selina, Uncle Jeff and Jonah.

You’ve been with Veep since 2016? How has your approach to the show changed over the years?
John W. Cook II: Yes, we started when the series came to the states (having previously been posted in England with series creator Armando Iannucci).

Sue Cahill: Dave Mandel became the showrunner, starting with Season 5, and that’s when we started.

Cook: When we started mixing the show, production sound mixer Bill MacPherson and I talked a lot about how together we might improve the sound of the show. He made some tweaks, like trying out different body mics and negotiating with our producers to allow for more boom miking. Notwithstanding all the great work Bill did before Season 5, my job got consistently easier over Seasons 5 through 7 because of his well-recorded tracks.

Also, some of our tools have changed in the last three years. We installed the Avid S6 console. This, along with a handful of new plugins, has helped us work a little faster.

Cahill: In the dialogue editing process this season, we started using a tool called Auto-Align Post from Sound Radix. It’s a great tool that allowed us to cut both the boom and the ISO mics for every clip throughout the show and put them in perfect phase. This allowed John the flexibility to mix both together to give it a warmer, richer sound throughout. We lean heavily on the ISO mics, but being able to mix in the boom more helped the overall sound.

Cook: You get a bit more depth. Body mics tend to be more flat, so you have to add a little bit of reverb and a lot of EQing to get it to sound as bright and punchy as the boom mic. When you can mix them together, you get a natural reverb on the sound that gives the dialogue more depth. It makes it feel like it’s in the space more. And it requires a little less EQing on the ISO mic because you’re not relying on it 100%. When the Auto-Align Post technology came out, I was able to use both mics together more often. Before Auto-Align, I would shy away from doing that if it was too much work to make them sound in-phase. The plugin makes it easier to use both, and I find myself using the boom and ISO mics together more often.
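Auto-Align Post’s internals are proprietary, but the underlying idea of phase-aligning a lav to a boom can be sketched with a simple cross-correlation delay estimate. This is a minimal illustration of the concept, not the plugin’s actual algorithm, and the signals here are synthetic:

```python
import numpy as np

def align_to_boom(boom, lav, sr=48000, max_shift_ms=20):
    """Estimate the lav's delay relative to the boom via cross-correlation
    and return a shifted copy that sums with the boom without phase smear."""
    max_shift = int(sr * max_shift_ms / 1000)
    # Full cross-correlation; the peak index gives the lag of best alignment.
    corr = np.correlate(lav, boom, mode="full")
    lags = np.arange(-len(boom) + 1, len(lav))
    # Only consider physically plausible acoustic delays.
    mask = np.abs(lags) <= max_shift
    lag = lags[mask][np.argmax(corr[mask])]
    # Shift the lav back by the estimated lag so it lines up with the boom.
    return np.roll(lav, -lag), lag

# Demo: fake a boom track and a lav that caught the same sound 1 ms later.
rng = np.random.default_rng(7)
boom = rng.standard_normal(48000)
lav = np.roll(boom, 48)            # 48 samples at 48kHz = 1 ms late
aligned, lag = align_to_boom(boom, lav)
```

Once the two tracks are within a fraction of a sample of each other, summing them reinforces the voice instead of comb-filtering it, which is the “warmer, richer” blend Cahill describes.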

The dialogue on the show has always been rapid-fire, and you really want to hear every joke. Any tools or techniques you use to help the dialogue cut through?
Cook: In my chain, I’m using FabFilter Pro-Q 2 a lot, EQing pretty much every single line in the show. FabFilter’s built-in spectrum analyzer helps get at that target EQ that I’m going for, for every single line in the show.

In terms of compression, I’m doing a lot of gain staging. I have five different points in the chain where I use compression. I’m never trying to slam it too much, just trying to tap it at different stages. It’s a music technique that helps the dialogue to never sound squashed. Gain staging allows me to get a little more punch and a little more volume after each stage of compression.
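As an illustration of that gain-staging idea (with hypothetical thresholds and ratios, not Cook’s actual settings), several gentle compression stages with a little makeup gain after each can tame a peak without any single stage working hard:

```python
import numpy as np

def compress(x, threshold_db, ratio):
    """Static compressor curve: attenuate only the portion of the signal
    above threshold (no attack/release envelope, for simplicity)."""
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10 ** (gain_db / 20)

def gain_stage(x, stages):
    """Tap the signal with several gentle compressors in series,
    restoring a little makeup gain after each stage."""
    for threshold_db, ratio, makeup_db in stages:
        x = compress(x, threshold_db, ratio) * 10 ** (makeup_db / 20)
    return x

# A dialogue peak at 0 dBFS through three gentle 2:1 stages: no stage
# ever pulls more than 6 dB, yet the peak lands 7 dB down overall...
peak = np.array([1.0])
staged = gain_stage(peak, [(-12, 2.0, 2.0)] * 3)
# ...versus one heavy 8:1 pass that grabs 10.5 dB in a single spot.
slammed = compress(peak, -12, 8.0) * 10 ** (6.0 / 20)
```

Spreading the reduction across stages is why the dialogue never sounds squashed: each compressor only ever sees material that the previous stage has already smoothed.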

Cahill: On the editing side, it starts with digging through the production mic tracks to find the cleanest sound. The dialogue assembly on this show is huge. It’s 13 tracks wide for each clip, and there are literally thousands of clips. The show is very cutty, and there are tons of overlaps. Weeding through all the material to find the best lav mics, in addition to the boom, really takes time. It’s not necessarily the character’s lav mic that’s the best for a line. They might be speaking more clearly into the mic of the person that is right across from them. So, listening to every mic choice and finding the best lav mics requires a couple days of work before we even start editing.

Also, we do a lot of iZotope RX work in editing before the dialogue reaches John’s hands. That helps to improve intelligibility and clear up the tracks before John works his magic on it.

Is it hard to find alternate production takes due to the amount of ad-libbing on the show? Do you find you do a lot of ADR?
Cahill: Exactly, it’s really hard to find production alts in the show because there is so much improv. So, yeah, it takes extra time to find the cleanest version of the desired lines. There is a significant amount of ADR in the show. In this episode in particular, we had 144 lines of principal ADR. And, we had 250 cues of group. It’s pretty massive.

There must’ve been so much loop group in the “Veep” episode. Every time they’re in the convention center, it’s packed with people!
Cook: There was the larger convention floor to consider, and the people that were 10 to 15 feet away from whatever character was talking on camera. We tried to balance that big space with the immediate space around the characters.

This particular Veep episode has a chaotic vibe. The main location is the nomination convention. There are huge crowds, TV interviews (both in the convention hall and also playing on Selina’s TV in her skybox suite and hotel room) and a big celebration at the end. Editorially, how did you approach the design of this hectic atmosphere?
Cahill: Our sound effects editor Jonathan Golodner had a lot of recordings from prior national conventions. So those recordings are used throughout this episode. It really gives the convention center that authenticity. It gave us the feeling of those enormous crowds. It really helped to sell the space, both when they are on the convention floor and from the skyboxes.

The loop group we talked about was a huge part of the sound design. There were layers and layers of crafted walla. We listened to a lot of footage from past conventions and found that there is always a speaker on the floor giving a speech to ignite the crowd, so we tried to recreate that in loop group. We did some speeches that we played in the background so we would have these swells of the crowd and crowd reactions that gave the crowd some movement so that it didn’t sound static. I felt like it gave it a lot more life.

We recreated chanting in loop group. There was a chant for Tom James (Hugh Laurie), which was part of production. They were saying, “Run Tom Run!” We augmented that with group. We changed the start of that chant from where it was in production. We used the loop group to start that chant sooner.

Cook: The Tom James chant was one instance where we did have production crowd. But most of the time, Sue was building the crowds with the loop group.

Cahill: I used casting director Barbara Harris for loop group, and throughout the season we had so many different crowds and rallies — both interior and exterior — that we built with loop group because there wasn’t enough from production. We had to hit on all the points that they are talking about in the story. Jonah (Timothy Simons) had some fun rallies this season.

Cook: Those moments of Jonah’s were always more of a “call-and-response”-type treatment.

The convention location offered plenty of opportunity for creative mixing. For example, the episode starts with Congressman Furlong (Dan Bakkedahl) addressing the crowd from the podium. The shot cuts to a CBSN TV broadcast of him addressing the crowd. Next the shot cuts to Selina’s skybox, where they’re watching him on TV. Then it’s quickly back to Furlong in the convention hall, then back to the TV broadcast, and back to Selina’s room — all in the span of seconds. Can you tell me about your mix on that sequence?
Cook: It was about deciding on the right reverb for the convention center and the right reverbs for all the loop group and the crowds and how wide to be (how much of the surrounds we used) in the convention space. Cutting to the skybox, all of that sound was mixed to mono, for the most part, and EQ’d a little bit. The producers didn’t want to futz it too much. They wanted to keep the energy, so mixing it to mono was the primary way of dealing with it.

Whenever there was a graphic on the lower third, we talked about treating that sound like it was news footage. But we decided we liked the energy of it being full fidelity for all of those moments we’re on the convention floor.

Another interesting thing was the way that Bill Freesh and I worked together. Bill was handling all of the big cut crowds, and I was handling the loop group on my side. We were trying to walk the line between a general crowd din on the convention floor, where you always felt like it was busy and crowded and huge, along with specific reactions from the loop group reacting to something that Furlong would say, or later in the show, reacting to Selina’s acceptance speech. We always wanted to play reactions to the specifics, but on the convention floor it never seems to get quiet. There was a lot of discussion about that.

Even though we cut from the convention center into the skybox, those considerations about crowd were still in play — whether we were on the convention floor or watching the convention through a TV monitor.

You did an amazing job on all those transitions — from the podium to the TV broadcast to the skybox. It felt very real, very natural.
Cook: Thank you! That was important to us, and certainly important to the producers. All the while, we tried to maintain as much energy as we could. Once we got the sound of it right, we made sure that the volume was kept up enough so that you always felt that energy.

It feels like the backgrounds never stop when they’re in the convention hall. In Selina’s skybox, when someone opens the door to the hallway, you hear the crowd as though the sound is traveling down the hallway. Such a great detail.
Cook and Cahill: Thank you!

For the background TV broadcasts feeding Selina info about the race — like Buddy Calhoun (Matt Oberg) talking about the transgender bathrooms — what was your approach to mixing those in this episode? How did you decide when to really push them forward in the mix and when to pull back?
Cook: We thought about panning. For the most part, our main storyline is in the center. When you have a TV running in the background, you can pan it off to the side a bit. It’s amazing how you can keep the volume up a little more without it getting in the way and masking the primary characters’ dialogue.

It’s also about finding the right EQ so that the TV broadcast isn’t sharing the same EQ bandwidth as the characters in the room.

Compression plays a role too, whether that’s via a plugin or me riding the fader. I can manually do what a side-chained compressor can do by just riding the fader and pulling the sound down when necessary or boosting it when there’s a space between dialogue lines from the main characters. The challenge is that there is constant talking on this show.
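Cook’s manual fader ride does the same job as a sidechain compressor: pull the background TV down whenever the principal dialogue is present. A toy version of that ducking idea (the depth and window values are made up for illustration) might look like:

```python
import numpy as np

def duck(background, dialogue, sr=48000, depth_db=-9.0, window_ms=50):
    """Sidechain-style ducking: follow the dialogue's level with a short
    RMS window and pull the background down whenever dialogue is present."""
    win = int(sr * window_ms / 1000)
    kernel = np.ones(win) / win
    # Smoothed RMS envelope of the dialogue (the "sidechain" input).
    env = np.sqrt(np.convolve(dialogue ** 2, kernel, mode="same"))
    env = env / max(env.max(), 1e-9)       # normalize to [0, 1]
    gain = 10 ** (env * depth_db / 20)     # full dip at dialogue peaks
    return background * gain

# A steady background bed ducks only while the dialogue track is active.
bg = np.full(48000, 0.5)
silence = np.zeros(48000)
speech = np.ones(48000)
```

When the dialogue track is silent the gain stays at unity, so the bed keeps its energy; in the gaps between lines the background swells back up, which is exactly the push-and-pull Cook describes doing by hand.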

Going back to what has changed over the last three years: one of the biggest things is that we have more time per episode to mix the show. Our schedule grew steadily from the first mix to the last, and by the end we had twice as much time to mix an episode.

Even with all the backgrounds happening in Veep, you never miss the dialogue lines. Except, there’s a great argument that happens when Selina tells Jonah he’s going to be vice president. His Uncle Jeff (Peter MacNicol) starts yelling at him, and then Selina joins in. And Jonah is yelling back at them. It’s a great cacophony of insults. Can you tell me about that scene?
Cahill: Those 15 seconds of screen time took us several hours of work in editorial. Dave (Mandel) said he couldn’t understand Selina clearly enough, but he didn’t want to loop the whole argument. Of course, all three characters are overlapped — you can hear all of them on each other’s mics — so how do you just loop Selina?

We started with an extensive production alt search that went back and forth through the cutting room a few times. We decided that we did need to ADR Selina. So we ended up using a combination of mostly ADR for Selina’s side with a little bit of production.

For the other two characters, we wanted to save their production lines, so our dialogue editor Jane Boegel (she’s the best!) did an amazing job using iZotope RX’s De-bleed feature to clear Selina’s voice out of their mics, so we could preserve their performances.

We didn’t loop any of Uncle Jeff, and it was all because of Jane’s work cleaning out Selina. We were able to save all of Uncle Jeff. It’s mostly production for Jonah, but we did have to loop a few words for him. So it was ADR for Selina, all of Uncle Jeff and nearly all of Jonah from set. Then, it was up to John to make it match.

Cook: For me, in moments like those, it’s about trying to get equal volumes for all the characters involved. I tried to make Selina’s yelling and Uncle Jeff’s yelling at the exact same level so the listener’s ear can decide what it wants to focus on rather than my mix telling you what to focus on.

Another great mix sequence was Selina’s nomination for president. There’s a promo video of her talking about horses that’s playing back in the convention hall. There are multiple layers of processing happening — the TV filter, the PA distortion and the convention hall reverb. Can you tell me about the processing on that scene?
Cook: Oftentimes, when I do that PA sound, it’s a little bit of futzing, like rolling off the lows and highs, almost like you would do for a small TV. But then you put a big reverb on it, with some pre-delay on it as well, so you hear it bouncing off the walls. Once you find the right reverb, you’re also hearing it reflecting off the walls a little bit. Sometimes I’ll add a little bit of distortion as well, as if it’s coming out of the PA.
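That PA treatment — roll off the lows and highs, then add a big pre-delayed reverb — can be approximated in a few lines. This is a crude sketch with made-up corner frequencies and a decaying-echo stand-in for the convolution reverbs a dub stage would actually use:

```python
import numpy as np

def pa_futz(x, sr=48000, low_hz=300, high_hz=4000,
            predelay_ms=60, echo_gain=0.4, n_echoes=3):
    """Band-limit the signal like a small PA driver, then add a few
    decaying echoes after a pre-delay to suggest a big reflective hall."""
    # Brick-wall band-limit via FFT (a real futz would use gentler EQ slopes).
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    dry = np.fft.irfft(spectrum, n=len(x))
    # Pre-delayed echoes: the direct PA sound arrives first, then the
    # "reflections" bounce back off the distant walls.
    out = dry.copy()
    delay = int(sr * predelay_ms / 1000)
    for i in range(1, n_echoes + 1):
        d = delay * i
        if d < len(x):
            out[d:] += dry[:-d] * echo_gain ** i
    return out

# One second of speech-band noise through the futz chain.
rng = np.random.default_rng(1)
speech = rng.standard_normal(48000)
futzed = pa_futz(speech)
```

The pre-delay is what sells the size of the room: the longer the gap before the first reflection, the farther away the walls feel.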

When Selina is backstage talking with Gary (Tony Hale), I rolled off a lot more of the highs on the reverb return on the promo video. Then, in the same way I’d approach levels with a TV in the room, I was riding the level on the promo video to fit around the main characters’ dialogue. I tried to push it in between little breaks in the conversation, pulling it down lower when we needed to focus on the main characters.

What was the most challenging scene for you to mix?
Cook: I would say the Tom James chanting was challenging because we wanted to hear the chant from inside the skybox to the balcony of the skybox and then down on the convention floor. There was a lot of conversation about the microphones from Mike McLintock’s (Matt Walsh) interview. The producers decided that since there was a little bit of bleed in the production already, they wanted Mike’s microphone to be going out to the PA speakers in the convention hall. You hear a big reverb on Tom James as well. Then, the level of all the loop group specifics and chanting — from the ramp up of the chanting from zero to full volume — we negotiated with the producers. That was one of the more challenging scenes.

The acceptance speech was challenging too, because of all of the cutaways. There is that moment with Gary getting arrested by the FBI; we had to decide how much of that we wanted to hear.

There was the Billy Joel song “We Didn’t Start the Fire” that played over all the characters’ banter following Selina’s acceptance speech. We had to balance the dialogue with the desire to crank up that track as much as we could.

There were so many great moments this season. How did you decide on the series finale episode, “Veep,” for Emmy consideration for Sound Mixing?
Cook: It was mostly about story. This is the end of a seven-year run (a three-year run for Sue and me), but the fact that every character gets a moment — a wrap-up on their character — makes me nostalgic about this episode in that way.

It also had some great sound challenges that came together nicely, like all the different crowds and the use of loop group. We’ve been using a lot of loop group on the show for the past three years, but this episode had a particularly massive amount of loop group.

The producers were also huge fans of this episode. When I talked to Dave Mandel about which episode we should put up, he recommended this one as well.

Any other thoughts you’d like to add on the sound of Veep?
Cook: I’m going to miss Veep a lot. The people on it, like Dave Mandel, Julia Louis-Dreyfus and Morgan Sackett … everyone behind the credenza. They were always working to create an even better show. It was a thrill to be a team member. They always treated us like we were in it together to make something great. It was a pleasure to work with people that recognize and appreciate the time and the heart that we contribute. I’ll miss working with them.

Cahill: I agree with John. On that last playback, no one wanted to leave the stage. Dave brought champagne, and Julia brought chocolates. It was really hard to say goodbye.

Game of Thrones’ Emmy-nominated visual effects

By Iain Blair

Once upon a time, only glamorous movies could afford the time and money it took to create truly imaginative and spectacular visual effects. Meanwhile, television shows either tried to avoid them altogether or had to rely on hand-me-downs. But the digital revolution changed all that, with technological advances and new tools quickly leveling the playing field. Today, television is giving the movies a run for their money when it comes to sophisticated visual effects, as evidenced by HBO’s blockbuster series, Game of Thrones.

Mohsen Mousavi

This fantasy series was recently Emmy-nominated a record-busting 32 times for its eighth and final season — including one for its visually ambitious VFX in the penultimate episode, “The Bells.”

The epic mass destruction presented Scanline’s VFX supervisor, Mohsen Mousavi, and his team with many challenges. But his expertise in high-end visual effects, and his reputation for constant innovation in advanced methodology, made him a perfect fit to oversee Scanline’s VFX for the crucial last three episodes of the final season of Game of Thrones.

Mousavi started his VFX career in the field of artificial intelligence and advanced physics-based simulations. He spearheaded the design and development of many proprietary toolsets and pipelines for crowd, fluid and rigid-body simulation, including FluidIT, BehaveIT and CardIT, a node-based crowd choreography toolset.

Prior to joining Scanline VFX Vancouver, Mousavi rose through the ranks of top visual effects houses, working in jobs that ranged from lead effects technical director to CG supervisor and, ultimately, VFX supervisor. He’s been involved in such high-profile projects as Hugo, The Amazing Spider-Man and Sucker Punch.

In 2012, he began working with Scanline, acting as digital effects supervisor on 300: Rise of an Empire, for which Scanline handled almost 700 water-based sea battle shots. He then served as VFX supervisor on San Andreas, helping develop the company’s proprietary city-generation software. That software and pipeline were further developed and enhanced for scenes of destruction in director Roland Emmerich’s Independence Day: Resurgence. In 2017, he served as the lead VFX supervisor for Scanline on the Warner Bros. shark thriller, The Meg.

I spoke with Mousavi about creating the VFX and their pipeline.

Congratulations on being Emmy-nominated for “The Bells,” which showcased so many impressive VFX. How did all your work on Season 4 prepare you for the big finale?
We were heavily involved in the finale of Season 4; however, the scope was far smaller. What we learned was the collaborative nature of the show, what the expectations were in terms of the quality of the work, and what HBO wanted.

You were brought onto the project by lead VFX supervisor Joe Bauer, correct?
Right. Joe was the “client VFX supervisor” on the HBO side and was involved since Season 3. Together with my producer, Marcus Goodwin, we also worked closely with HBO’s lead visual effects producer, Steve Kullback, who I’d worked with before on a different show and in a different capacity. We all had daily sessions and conversations, a lot of back and forth, and Joe would review the entire work, give us feedback and manage everything between us and other vendors, like Weta, Image Engine and Pixomondo. This was done both technically and creatively, so no one stepped on each other’s toes if we were sharing a shot and assets. But it was so well-planned that there wasn’t much overlap.

[Editor’s Note: Here is the full list of those nominated for their VFX work on Game of Thrones — Joe Bauer, lead visual effects supervisor; Steve Kullback, lead visual effects producer; Adam Chazen, visual effects associate producer; Sam Conway, special effects supervisor; Mohsen Mousavi, visual effects supervisor; Martin Hill, visual effects supervisor; Ted Rae, visual effects plate supervisor; Patrick Tiberius Gehlen, previz lead; and Thomas Schelesny, visual effects and animation supervisor.]

What were you tasked with doing on Season 8?
We were involved as one of the lead vendors on the last three episodes and covered a variety of sequences. In Episode 4, “The Last of the Starks,” we worked on the confrontation between Daenerys and Cersei in front of King’s Landing’s gate, which included a full CG environment of the city gate and the landscape around it, as well as Missandei’s death sequence, which featured a full CG Missandei. We also did the animated Drogon outside the gate while the negotiations took place.

Then for “The Bells” we were responsible for most of the Battle of King’s Landing, which included a full digital city; Daenerys’ army campsite outside the walls of King’s Landing; the gathering of soldiers in front of the King’s Landing walls; Dany’s attack on the scorpions, the city gate, the streets and the Red Keep, which had some very close-up set extensions; close-up fire and destruction simulations; and a full CG crowd of various factions — armies and civilians. We also did the iconic Cleganebowl fight between The Hound and The Mountain, and Jaime Lannister’s fight with Euron at the beach underneath the Red Keep. In Episode 5, we received raw animation caches of the dragon from Image Engine and did the full look-dev, lighting and rendering of the final dragon in our composites.

For the final episode, “The Iron Throne,” we were responsible for the entire Daenerys speech sequence, which included a full 360-degree digital environment of the city aftermath and the Red Keep plaza filled with digital Unsullied, Dothraki and CG horses, leading into the majestic confrontation between Jon and Drogon, where the dragon reveals itself from underneath a huge pile of snow outside the Red Keep. We were also responsible for the iconic throne melt sequence, which included some advanced simulation of highly viscous fluid and destruction of the area around the throne, finishing the dramatic sequence with Drogon carrying Dany out of the throne room and away from King’s Landing into the unknown.

Where was all this work done?
The majority of the work was done here in Vancouver, which is the biggest Scanline office. Additionally we had teams working in our Munich, Montreal and LA offices. We’re a 100% connected company, all working under the same infrastructure in the same pipeline. So if I work with the team in Munich, it’s like they’re sitting in the next room. That allows us to set up and attack the project with a larger crew and get the benefit of the 24/7 scenario; as we go home, they can continue working, and it makes us far more productive.

How many VFX did you have to create for the final season?
We worked on over 600 shots across the final three episodes, which gave us approximately an hour of screen time of high-end, consistent visual effects.

Isn’t that hour length unusual for 600 shots?
Yes, but we had a number of shots that were really long, including some ground coverage shots of Arya in the streets of King’s Landing that were over four or five minutes long. So we had the complexity along with the long duration.

How many people were on your team?
At the height, we had about 350 artists on the project, and we began in March 2018 and didn’t wrap till nearly the end of April 2019 — so it took us over a year of very intense work.

Tell us about the pipeline specific to Game of Thrones.
Scanline has an industry-wide reputation for delivering very complex, full CG environments combined with complex simulation scenarios of all sorts of fluid dynamics and destruction, based on our simulation framework, Flowline. We had a high-end digital character and hero creature pipeline that gave the final three episodes a boost up front. What was new were the additions to our procedural city-generation pipeline for the recreation of King’s Landing, making sure it could deliver both in wide-angle shots as well as in some extreme close-up set extensions.

How did you do that?
We used a framework we developed back for Independence Day: Resurgence, which is a module-based procedural city generation leveraging some incredible scans of the historical city of Dubrovnik as a blueprint and foundation of King’s Landing. Instead of doing the modeling conventionally, you model a lot of small modules, kind of like Lego blocks. You create various windows, stones, doors, shingles and so on, and once it’s encoded in the system, you can semi-automatically generate variations of buildings on the fly. That also goes for texturing. We had procedurally generated layers of façade textures, which gave us a lot of flexibility on texturing the entire city, with full control over the level of aging and damage. We could decide to make a block look older easily without going back to square one. That’s how we could create King’s Landing with its hundreds of thousands of unique buildings.
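In miniature, that Lego-block idea looks something like the sketch below: a seeded generator assembling buildings from a small module library. The module names here are invented for illustration; Scanline’s actual system works on scanned Dubrovnik geometry and procedurally layered façade textures:

```python
import random

# Hypothetical module library -- stand-ins for the scanned building pieces.
MODULES = {
    "window": ["arched", "square", "shuttered"],
    "door":   ["plank", "iron-banded"],
    "roof":   ["slate", "clay-shingle"],
    "facade": ["rough-stone", "plaster", "brick"],
}

def generate_building(rng, floors=(2, 4), age=0.0):
    """Assemble one building from the module library, Lego-style.
    `age` in [0, 1] stands in for the procedural weathering/damage layer."""
    n_floors = rng.randint(*floors)
    return {
        "facade": rng.choice(MODULES["facade"]),
        "roof": rng.choice(MODULES["roof"]),
        "door": rng.choice(MODULES["door"]),
        "floors": [
            {"windows": [rng.choice(MODULES["window"])
                         for _ in range(rng.randint(2, 5))]}
            for _ in range(n_floors)
        ],
        "age": age,
    }

def generate_block(seed, n_buildings, age=0.2):
    """A seeded block: the same seed always regenerates the same street,
    so a block can be re-aged later without remodeling it."""
    rng = random.Random(seed)
    return [generate_building(rng, age=age) for _ in range(n_buildings)]
```

The seeding is the point: because every block is reproducible from its seed, “making a block look older” is just a parameter change rather than a trip back to square one, which is the flexibility Mousavi describes.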

The same technology was applied to the aftermath of the city in Episode 6. We took the intact King’s Landing and ran a number of procedural collapsing simulations on the buildings to get the correct weight based on references from the bombed city of Dresden during WWII, and then we added procedurally created CG snow on the entire city.

It didn’t look like the usual matte paintings were used at all.
You’re right, and there were a lot of shots that normally would be done that way, but to Joe’s credit, he wanted to make sure the environments weren’t cheated in any way. That was a big challenge, to keep everything consistent and accurate. Even if we used traditional painting methods, it was all done on top of an accurate 3D representation with correct lighting and composition.

What other tools did you use?
We use Autodesk Maya for all our front-end departments, including modeling, layout, animation, rigging and creature effects, and we bridge the results to Autodesk 3ds Max, which encapsulates our look-dev/FX and rendering departments, powered by Flowline and Chaos Group’s V-Ray as our primary render engine, followed by Foundry’s Nuke as our main compositing package.

At the heart of our crowd pipeline we use Massive, and our creature department is driven by Ziva muscles, a collaboration we started with Ziva Dynamics for the creation of the hero Megalodon in The Meg.

Fair to say that your work on Game of Thrones was truly cutting-edge?
Game of Thrones has pushed the limit above and beyond and has effectively erased the TV/feature line. In terms of environment and effects and the creature work, this is what you’d do for a high-end blockbuster for the big screen. No difference at all.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Creating and mixing authentic sounds for HBO’s Deadwood movie

By Jennifer Walden

HBO’s award-winning series Deadwood might have aired its final episode 13 years ago, but it’s recently found new life as a movie. Set in 1889 — a decade after the series finale — Deadwood: The Movie picks up the threads of many of the main characters’ stories and weaves them together as the town of Deadwood celebrates the statehood of South Dakota.

Deadwood: The Movie

The Deadwood: The Movie sound team.

The film, which aired on HBO and is available on Amazon, picked up eight 2019 Emmy nominations, including nods for sound editing, sound mixing and best television movie.

Series creator David Milch has returned as writer on the film. So has director Daniel Minahan, who helmed several episodes of the series. The film’s cast is populated by returning members, as is much of the crew. On the sound side, there are freelance production sound mixer Geoffrey Patterson; 424 Post’s sound designer, Benjamin Cook; NBCUniversal StudioPost’s re-recording mixer, William Freesh; and Mind Meld Arts’ music editor, Micha Liberman. “Series composers Reinhold Heil and Johnny Klimek — who haven’t been a composing team in many years — have reunited just to do this film. A lot of people came back for this opportunity. Who wouldn’t want to go back to Deadwood?” says Liberman.

Freelance supervising sound editor Mandell Winter adds, “The loop group used on the series was also used on the film. It was like a reunion. People came out of retirement to do this. The richness of voices they brought to the stage was amazing. We shot two days of group for the film, covering a lot of material in that limited time to populate Deadwood.”

Deadwood (the film and series) was shot on a dedicated film ranch called Melody Ranch Motion Picture Studio in Newhall, California. The streets, buildings and “districts” are consistently laid out the same way. This allowed the sound team to use a map of the town to orient sounds to match each specific location and direction that the camera is facing.

For example, there’s a scene in which the town bell is ringing. As the picture cuts to different locations, the ringing sound is panned to show where the bell is in relation to that location on screen. “We did that for everything,” says co-supervising sound editor Daniel Colman, who along with Freesh and re-recording mixer John Cook, works at NBCUniversal StudioPost. “You hear the sounds of the blacksmith’s place coming from where it would be.”
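That map-driven approach can be sketched geometrically: given the fixed town layout, a camera position and a facing direction, the pan for any source falls out of the bearing between them. The coordinates and source names below are hypothetical, not the actual Melody Ranch survey:

```python
import math

# Hypothetical map of the standing set: x/y positions in meters.
TOWN_MAP = {
    "town_bell": (40.0, 120.0),
    "blacksmith": (-25.0, 60.0),
}

def pan_for(source, camera_pos, camera_facing_deg):
    """Return a pan value in [-1 (hard left), +1 (hard right)] for a mapped
    source, relative to where the camera sits and which way it points.
    A facing of 0 degrees looks along +y; angles increase clockwise."""
    sx, sy = TOWN_MAP[source]
    cx, cy = camera_pos
    # Compass bearing from camera to source.
    bearing = math.degrees(math.atan2(sx - cx, sy - cy))
    # Signed angle of the source relative to the camera's facing, in
    # (-180, 180].
    rel = (bearing - camera_facing_deg + 180.0) % 360.0 - 180.0
    # sin() centers sources dead ahead or behind and folds rear sources
    # toward the matching side.
    return math.sin(math.radians(rel))

# The bell pans slightly right of a camera in the square facing north,
# then hard left once the same camera turns to face east.
north_pan = pan_for("town_bell", (0.0, 0.0), 0.0)
east_pan = pan_for("town_bell", (0.0, 0.0), 90.0)
```

Because the set never changes, a lookup like this only has to be worked out once per location and can then be reused every time the picture cuts back to that angle.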

“Or, if you’re close to the Chinese section of the town, then you hear that. If you were near the saloons, that’s what you hear. They all had different sounds that were pulled forward from the series into the film,” adds re-recording mixer Freesh.

Many of the exterior and interior sounds on set were captured by Benjamin Cook, who was sound effects editor on the original Deadwood series. Since it’s a practical location, they had real horses and carriages that Cook recorded. He captured every door and many of the props. Colman says, “We weren’t guessing at what something sounded like; we were putting in the actual sounds.”

The street sounds were an active part of the ambience in the series, both day and night. There were numerous extras playing vendors plying their wares and practicing their crafts. Inside the saloons and out in front of them, patrons talked and laughed. Their voices — performed by the loop group in post — helped to bring Deadwood alive. “The loop group we had was more than just sound effects. We had to populate the town with people,” says Winter, who scripted lines for the loopers because they were played more prominently in the mix than what you’d typically hear. “Having the group play so far forward in a show is very rare. It had to make sense and feel timely and not modern.”

In the movie, the street ambience isn’t as strong a sonic component. “The town had calmed down a little bit as it’s going about its business. It’s not quite as bustling as it was in the series. So that left room for a different approach,” says Freesh.

The attenuation of street ambience was conducive to the cinematic approach that director Minahan wanted to take on Deadwood: The Movie. He used music to help the film feel bigger and more dramatic than the series, notes Liberman. Re-recording mixer John Cook adds, “We experimented a lot with music cues. We saw scenes take on different qualities, depending on whether the music was in or out. We worked hard with Dan [Minahan] to end up with the appropriate amount of music in the film.”

Minahan even introduced music on set by way of a piano player inside the Gem Saloon. Production sound mixer Patterson says, “Dan was very active on the set in creating a mood with that music for everyone that was there. It was part and parcel of the place at that time.”

Authenticity was a major driving force behind Deadwood’s aesthetics. Each location on set was carefully dressed with era-specific props, and the characters were dressed with equal care, right down to their accessories, tools and weapons. “The sound of Seth Bullock’s gun is an actual 1889 Remington revolver, and Calamity Jane’s gun is an 1860s Colt Army cavalry gun. We’ve made every detail as real and authentic as possible, including the train whistle that opens the film. I wasn’t going to just put in any train whistle. It’s the 1880s Black Hills steam engine that actually went through Deadwood,” reports Colman.

The set’s wooden structures and elevated boardwalk that runs in front of the establishments in the heart of town lent an authentic character to the production sound. The creaky wooden doors and thumpiness of footsteps across the raised wooden floors are natural sounds the audience would expect to hear from that environment. “The set for Deadwood was practical and beautiful and amazing. You want to make sure that you preserve that realness and let the 1800s noises come through. You don’t want to over sterilize the tracks. You want them to feel organic,” says Patterson.

Freesh adds, “These places were creaky and noisy. Wind whistled through the windows. You just embrace it. You enhance it. That was part of the original series sound, and it followed through in the movie as well.”

The location was challenging due to its proximity to real-world civilization and all of our modern-day sonic intrusions, like traffic, airplanes and landscaping equipment from a nearby neighborhood. Those sounds have no place in the 1880s world of Deadwood, but “if we always waited for the moment to be perfect, we would never make a day’s work,” says Patterson. “My mantra was always to protect every precious word of David Milch’s script and to preserve the performances of that incredible cast.”

In the end, the modern-day noises at the location weren’t enough to require excessive ADR. John Cook says, “Geoffrey [Patterson] did a great job of capturing the dialogue. Then, between the choices the picture editors made for different takes and the work that Mandell [Winter] did, there were only one or two scenes in the whole movie that required extra attention for dialogue.”

Winter adds, “Even denoising the tracks, I didn’t take much out. The tracks sounded really good when they got to us. I just used iZotope RX 7 and did our normal pass with it.”

Any fan of Deadwood knows just how important dialogue clarity is since the show’s writing is like Shakespeare for the American West — with prolific profanity, of course. The word choices and their flow aren’t standard TV script fare. To help each word come through clearly, Winter notes they often cut in both the boom and lav mic tracks. This created nice, rich dialogue for John Cook to mix.

On the stage, John Cook used the FabFilter Pro-Q 2 to work each syllable, making sure the dialogue sounded bright and punchy and not too muddy or tubby. “I wanted the audience to hear every word without losing the dynamics of a given monologue or delivery. I wanted to maintain the dynamics, but make sure that the quieter moments were just as intelligible as the louder moments,” he says.

In the film, several main characters experience flashback moments in which they remember events from the series. For example, Al Swearengen (Ian McShane) recalls the death of Jen (Jennifer Lutheran) from the Season 3 finale. These flashbacks — or hauntings, as the post team refers to them — went through several iterations before the team decided on the most effective way to play each one. “We experimented with how to treat them. Do we go into the actor’s head and become completely immersed in the past? Or, do we stay in the present — wherever we are — and give it a slight treatment? Or, should there not be any sounds in the haunting? In the end, we decided they weren’t all going to be handled the same,” says Freesh.

Before coming together for the final mix on Mix 6 at NBCUniversal StudioPost on the Universal Studios Lot in Los Angeles, John Cook and Freesh pre-dubbed Deadwood: The Movie in separate rooms as they’d do on a typical film — with Freesh pre-dubbing the backgrounds, effects, and Foley while Cook pre-dubbed the dialogue and music.

The pre-dubbing process gave Freesh and John Cook time to get the tracks into great shape before meeting up for the final mix. Freesh concludes, “We were able to, with all the people involved, listen to the film in real good condition from the first pass down and make intelligent decisions based on what we were hearing. It really made a big difference in making this feel like Deadwood.”


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

The Emmy-nominated sound editing team’s process on HBO’s Vice Principals

By Jennifer Walden

HBO’s comedy series Vice Principals — starring Danny McBride and Walton Goggins as two rival vice principals of North Jackson High School — really went wild for the Season 2 finale. Since the school’s mascot is a tiger, they hired an actual tiger for graduation day, which wreaked havoc inside the school. (The tiger was part real and part VFX, but you’d never know thanks to the convincing visuals and sound.)

The tiger wasn’t the only source of mayhem. There was gunfire and hostages, a car crash and someone locked in a cage — all in the name of comedy.

George Haddad

Through all the bedlam, it was vital to have clean and clear dialogue. The show’s comedy comes from the jokes that are often ad-libbed and subtle.

Here, Warner Bros. Sound supervising sound editor George Haddad, MPSE, and dialogue/ADR editor Karyn Foster talk about what went into the Emmy-nominated sound editing on the Vice Principals Season 2 finale, “The Union Of The Wizard & The Warrior.”

Of all the episodes in Season 2, why did you choose “The Union of the Wizard & The Warrior” for award consideration?
George Haddad: Personally, this was the funniest episode — whether that’s good for sound or not. They just let loose on this one. For a comedy, it had so many great opportunities for sound effects, walla, loop group, etc. It was the perfect match for award consideration. Even the picture editor said beforehand that this could be the one. Of course, we don’t pay too much attention to its award-potential; we focus on the sound first. But, sure enough, as we went through it, we all agreed that this could be it.

Karyn Foster: This episode was pretty dang large, with the tiger and the chaos that the tiger causes.

In terms of sound, what was your favorite moment in this episode? Why?
Haddad: It was during the middle of the show when the tiger got loose from the cage and created havoc. It’s always great for sound when an animal gets loose. And it was particularly fun because of the great actors involved. This had comedy written all over it. You know no one is going to die, just because of the nature of the show. (Actually, the tiger did eat the animal handler, but he kind of deserved it.)

Karyn Foster

I had a lot of fun with the tiger and we definitely cheated reality there. That was a good sound design sequence. We added a lot of kids screaming and adults screaming. The reaction of the teachers was even more scared than the students, so it was funny. It was a perfect storm for sound effects and dialogue.

Foster: My favorite scene was when Lee [Goggins] is on the ground after the tiger mauls his hand and he’s trying to get Neal [McBride] to say, “I love you.” That scene was hysterical.

What was your approach to the tiger sounds?
Haddad: We didn’t have production sound for the tiger, as the handler on-set kept a close watch on the real animal. Then in the VFX, we have the tiger jumping, scratching with its paws, roaring…

I looked into realistic tiger sounds, and they’re not the type of animal you’d think would roar or snarl — sounds we are used to having for a lion. We took some creative license and blended sounds together to make the tiger a little more ferocious, but not too scary. Because, again, it’s a comedy so we needed to find the right balance.

What was the most challenging scene for sound?
Haddad: The entire cast was in this episode, during the graduation ceremony. So you had 500 students and a dozen of the lead cast members. That was pretty full, in terms of sound. We had to make it feel like everyone is panicking at the same time while focusing on the tiger. We had to keep the tension going, but it couldn’t be scary. We had to keep the tone of the comedy going. That’s where the balance was tricky and the mixers did a great job with all the material we gave them. I think they found the right tone for the episode.

Foster: For dialogue, the most challenging scene was when they are in the cafeteria with the tiger. That was a little tough because there are a lot of people talking and there were overlapping lines. Also, it was shot in a practical location, so there was room reflection on the production dialogue.

A comedy series is all about getting a laugh. How do you use sound to enhance the comedy in this series?
Haddad: We take the lead off of Danny McBride. Whatever his character is doing, we’re not going to try to go over the top just because he and his co-stars are brilliant at it. But, we want to add to the comedy. We don’t go cartoonish. We try to keep the sounds in reality but add a little bit of a twist on top of what the characters are already doing so brilliantly on the screen.

Quite frankly, they do most of the work for us and we just sweeten what is going on in the scene. We stay away from any of the classic Hanna-Barbera cartoon sound effects. It’s not that kind of comedy, but at the same time we will throw a little bit of slapstick in there — whether it’s a character falling or slipping or it’s a gun going off. For the gunshots, I’ll have the bullet ricochet and hit a tree just to add to the comedy that’s already there.

A comedy series is all about the dialogue and the jokes. What are some things you do to help the dialogue come through?
Haddad: The production dialogue was clean overall, and the producers don’t want to change any of the performances, even if a line is a bit noisy. The mixers did a great job in making sure that clarity was king for dialogue. Every single word and every single joke was heard perfectly. Comedy is all about timing.

We were fortunate because we get clean dialogue and we found the right balance of all the students screaming and the sounds of panicking when the tiger created havoc. We wanted to make sure that Danny and his co-stars were heard loud and clear because the comedy starts with them. Vice Principals is a great and natural sounding show for dialogue.

Foster: Vice Principals was a pleasure to work on because the dialogue was in good shape. The editing on this episode wasn’t difficult. The lines went together pretty evenly.

We basically work with what we’ve been given. It’s all been chosen for us and our job is to make it sound smooth. There’s very minimal ADR on the show.

In terms of clarification, we make sure that any lines that really need to be heard are completely separate, so when it gets to the mix stage the mixer can push that line through without having to push everything else.

As far as timing, we don’t make any changes. That’s a big fat no-no for us. The picture editor and showrunners have already decided what they want and where, and we don’t mess with that.

There were a large number of actors present for the graduation ceremony. Was the production sound mixer able to record those people in that environment? Or, was that sound covered in loop?
Haddad: There are so many people in the scene, and that can be challenging to do solely in loop group. We did multiple passes with the actors we had in loop. We also had the excellent sound library here at Warner Bros. Sound. I also captured recordings at my kids’ high school. So we had a lot of resource material to pull from and we were able to build out that scene nicely. What we see on-camera, with the number of students and adults, we were able to represent that through sound.

As for recording at my kids’ high school, I got permission from the principal but, of course, my kids were embarrassed to have their dad at school with his sound equipment. So I tried to stay covert. The microphones were placed up high, in inconspicuous places. I didn’t ask any students to do anything. We were like chameleons — we came and set up our equipment and hit record. I had Røde microphones because they were easy to mount on the wall and easy to hide. One was a Røde VideoMic and the other was their NTG1 microphone. I used a Roland R-26 recorder because it’s portable and I love the quality. It’s great for exterior sounds too because you don’t get a lot of hiss.

We spent a couple hours recording and we were lucky enough to get material to use in the show. I just wanted to catch the natural sound of the school. There are 2,700 students, so it’s an unusually high student population and we were able to capture that. We got lucky when kids walked by laughing or screaming or running to the next class. That was really useful material.

Foster: Yes, there was production crowd recorded. For most of the episodes with pep rallies and events, they took the time to record some specific takes. When you’re shooting group on the stage, you’re limited by the number of people you have. You have to do multiple takes to try and mimic that many people.

Can you talk about the tools you couldn’t have done without?
Haddad: This show has a natural sound, so we didn’t use pitch shifting or reverb or other processing like we’d use on a show like Gotham, where we do character vocal treatments.

Foster: I would have to say iZotope RX 6. That tool for a dialogue editor is one that you can’t live without. There were some challenging scenes on Vice Principals, and the production sound mixer Christof Gebert did a really good job of getting the mics in there. The iso-mics were really clean, and that’s unusual these days. The dialogue on the show was pleasant to work on because of that.

What makes this show challenging in terms of dialogue is that it’s a comedy, so there’s a lot of ad-libbing. With ad-libbing, there’s no other takes to choose from. So if there’s a big clunk on a line, you have to make that work. With RX 6, you can minimize the clunk on a line or get rid of it. If those lines are ad-libs, they don’t want to have to loop those. The ad-libbing makes the show great but it also makes the dialogue editing a bit more complicated.

Any final thoughts you’d like to share on Vice Principals?
Haddad: We had a big crew because the show was so busy. I was lucky to get some of the best here at Warner Bros. Sound. They helped to make the show sound great, and we’re all very proud of it. We appreciate our peers selecting Vice Principals for Emmy nomination. That to us was a great feeling, to have all of our hard work pay off with an Emmy nomination.


Jennifer Walden is a New Jersey-based audio engineer and writer. You can follow her on Twitter at @audiojeney.

Digging into the dailies workflow for HBO’s Sharp Objects

By Randi Altman

If you have been watching HBO’s new series Sharp Objects, you might have some theories about who is murdering teenage girls in a small Missouri town, but at this point they are only theories.

Sharp Objects revolves around Amy Adams’ character, Camille, a journalist living in St. Louis, who returns to her dysfunctional hometown armed with a deadline from her editor, a drinking problem and some really horrific childhood memories.

Drew Dale

The show is shot in Atlanta and Los Angeles, with dailies out of Santa Monica’s Local Hero and post out of its sister company, Montreal’s Real by Fake. Real by Fake did all the post on the HBO series Big Little Lies.

Local Hero’s VP of workflows, Drew Dale, managed the dailies workflow on Sharp Objects, coming up against the challenges of building a duplicate dailies setup in Atlanta as well as dealing with HBO’s strict delivery requirements — not just for transcoding, but for labeling files and more. Local Hero co-owner Steve Bannerman calls it “the most detailed and specific dailies workflow we’ve ever designed.”

To help cope with such a high level of complexity, Dale turned to Assimilate’s Scratch as the technical heart of his workflow. Since Scratch is a very open system, it was able to integrate seamlessly with all the software and hardware tools that were needed to meet the requirements.

Local Hero’s DI workflow is something that Dale and the studio have been developing for about five or six years and adjusting for each show or film they work on. We recently reached out to Dale to talk about that workflow and their process on Sharp Objects, which was created by Marti Noxon and directed by Jean-Marc Vallée.

Can you describe your workflow with the footage?
Basically, the DIT hands a shuttle RAID (we use either OWC or Areca RAIDs) to a PA, and they’ll take it to our operator. Our operators tend to start as soon as wrap hits, or as soon as lunch breaks, depending on whether or not you’re doing one or two breaks a day.

We’ll ingest into Scratch and apply the show LUT. The LUT is typically designed by our lead colorist and is based on a node stack in Blackmagic Resolve that we can use on the back end as the first pass of the DI process. Once the LUT is loaded, we’ll do our grades using the CDL protocol, though we didn’t do the grade on Sharp Objects. Then we’ll go through, sync all the audio, QC the footage and make our LTO back-ups.
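The CDL mentioned here is the ASC Color Decision List, which reduces a primary grade to three per-channel numbers (slope, offset, power) plus a single saturation value, so a look can travel between on-set tools, Scratch and Resolve. A minimal sketch of that math, assuming normalized 0-to-1 pixel values (the function name is ours; real implementations run per pixel on the GPU):

```python
# Minimal sketch of the ASC CDL transfer function behind a slope/offset/power
# grade. Applied per channel, then a global saturation step weighted by
# Rec. 709 luma. Pixel values are assumed normalized to 0..1.

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    graded = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = min(max(v * s + o, 0.0), 1.0)  # clamp before the power function
        graded.append(v ** p)
    # Saturation: push each channel toward (or away from) the luma value
    luma = 0.2126 * graded[0] + 0.7152 * graded[1] + 0.0722 * graded[2]
    return [min(max(luma + saturation * (g - luma), 0.0), 1.0) for g in graded]

# A slope of 1, offset of 0, power of 1 and saturation of 1 is a no-op grade
print(apply_cdl([0.5, 0.25, 0.75], (1, 1, 1), (0, 0, 0), (1, 1, 1)))
```

Because the whole grade is just ten numbers, it travels easily in ALE columns or EDL comments from set through to the final DI, which is what makes it practical for dailies.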

What are you looking for in the QC?
Things like crew in the shot, hot pixels, corrupt footage, lens flares, just weird stuff that’s going to cost money on the backend. Since we’re working in conjunction with production a lot of the time, we can catch those things reasonably early; a lot earlier than if you were waiting until editorial. We flag those and say, “This scene that you shot yesterday is out of focus. You should probably re-shoot.” This allows them to adjust more quickly to that sort of thing.

After the QC we do a metadata pass, where we take the embedded information from the WAV files provided by the sound mixer, as well as custom metadata entered by our operator and apply that throughout the footage. Then we’ll render out editorial media — typically Avid but sometimes Premiere or Final Cut — which will then get transferred to the editors either via online connection or shipped shuttle drives. Or, if we’re right next to them, we’ll just push it to their system from our computer using a fiber or Ethernet intranet.

We’ll also create web dailies. Web dailies are typically H.264s, and those will either get loaded onto an iPad for the director, uploaded to Pix or Frame.io for web review, or both.

You didn’t grade the dailies on Sharp Objects?
No, they wanted a specific LUT applied; one that was used on the first season of Big Little Lies, and is being used on the second season as well. So they have a more generic look applied, but they do have very specific needs for metadata, which is really important. For example, a lot of the things they require are the input of shoot date and shoot day information, so you can track things.

We also ingest track information from WAV files, so when the editor is cutting the footage you can see the individual audio channel names in the edit, which makes cutting audio a lot easier. It also helps sync things up on the backend with the audio mix. As per HBO’s requests, a lot of extra information in the footage goes to the editor.
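The per-track names the editors see typically travel inside the broadcast WAV’s iXML metadata chunk, the de facto standard written by location sound recorders. As a rough sketch of how that track information can be read, the following walks the RIFF chunks of an in-memory WAV and pulls names from an iXML payload (the helper and the toy file are illustrative, not Scratch’s actual implementation; the TRACK_LIST/TRACK/NAME tags follow the public iXML spec):

```python
# Illustrative reader for track names in a broadcast WAV's iXML chunk.
# A WAV is a RIFF container: "RIFF" + size + "WAVE", then a sequence of
# chunks, each "id" + size + payload (padded to an even byte count).
import io
import struct
import xml.etree.ElementTree as ET

def ixml_track_names(wav_bytes):
    f = io.BytesIO(wav_bytes)
    riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
    if riff != b"RIFF" or wave != b"WAVE":
        raise ValueError("not a RIFF/WAVE file")
    while True:
        header = f.read(8)
        if len(header) < 8:
            return []  # no iXML chunk found
        cid, csize = struct.unpack("<4sI", header)
        data = f.read(csize + (csize & 1))  # chunks are padded to even sizes
        if cid == b"iXML":
            root = ET.fromstring(data[:csize].decode("utf-8"))
            return [t.findtext("NAME") for t in root.iter("TRACK")]

# Build a tiny in-memory WAV carrying only an iXML chunk, for demonstration
ixml = (b"<BWFXML><TRACK_LIST><TRACK_COUNT>2</TRACK_COUNT>"
        b"<TRACK><CHANNEL_INDEX>1</CHANNEL_INDEX><NAME>Boom</NAME></TRACK>"
        b"<TRACK><CHANNEL_INDEX>2</CHANNEL_INDEX><NAME>Lav 1</NAME></TRACK>"
        b"</TRACK_LIST></BWFXML>")
chunk = b"iXML" + struct.pack("<I", len(ixml)) + ixml + b"\x00" * (len(ixml) % 2)
wav = b"RIFF" + struct.pack("<I", 4 + len(chunk)) + b"WAVE" + chunk
print(ixml_track_names(wav))  # -> ['Boom', 'Lav 1']
```

Once those names ride along in the ALE, the editor sees “Boom,” “Lav 1” and so on directly on the audio tracks in the bin, which is what makes cutting and later conforming the mix easier.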

The show started in LA and then moved to Atlanta, so you had to build your workflow for a second time? Can you talk about that?
The tricky part of working on location is making sure the Internet is set up properly and getting a mobile version of our rig to wherever it needs to go. Then it’s dealing with the hassle of being on location. I came up in the production world in the camera department, so it reminds me of being back on set and being in the middle of nowhere with a lot less infrastructure than you’re used to when sitting at a post house in Los Angeles. Most of the challenge of being on location is finding creative ways to implement the same workflow in the face of these hurdles.

Let’s get back to working with HBO’s specific specs. Can you talk about different tools you had to call on to make sure it was all labeled and structured correctly?
A typical scene identifier for us is something like “35B-01”: “35” signifies the scene, “B” signifies the shot and “01” signifies the take.

The way that HBO structured things on Sharp Objects was more by setup, so it was a much more fluid way of shooting. It would be like “Episode 1, setup 32, take one, two, three, four, five.” But each of those takes individually was more like a setup and less like a take itself. A lot of the takes were 20 minutes long, 15 minutes long, where they would come in, reset the actors, reset the shot, that kind of thing.
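The conventional “scene-shot-take” slate described above is regular enough to validate and parse mechanically, which matters when those identifiers drive LTO manifests and ALEs downstream. A small illustrative sketch (function and field names are our own, not Scratch’s):

```python
# Sketch of splitting a conventional "scene-shot-take" slate like "35B-01"
# into its parts, per the description above: leading digits = scene,
# letter(s) = shot, trailing digits = take. Field names are our own.
import re

SLATE = re.compile(r"^(?P<scene>\d+)(?P<shot>[A-Z]+)-(?P<take>\d+)$")

def parse_slate(slate):
    m = SLATE.match(slate)
    if not m:
        raise ValueError(f"unrecognized slate: {slate!r}")
    return {"scene": int(m["scene"]), "shot": m["shot"], "take": int(m["take"])}

print(parse_slate("35B-01"))  # -> {'scene': 35, 'shot': 'B', 'take': 1}
```

A setup-based convention like HBO’s on Sharp Objects would need a different pattern entirely, which is exactly why flexible metadata handling in the dailies software mattered here.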

In addition to that, there was a specific naming convention and a lot of specific metadata required by the editors. For example, the aforementioned WAV track names. There are a lot of ways to process dailies, but most software doesn’t provide the same kind of flexibility with metadata as Scratch.

For this show it was these sorts of things, as well as very specific LTO naming conventions and structure, which took a lot of effort on our part to get used to. Typically, with a smaller production or smaller movie, the LTO backups they require are basically just to make sure that the footage is placed somewhere other than our hard drives, so we can store it for a long period of time. But with HBO, very specific manifests are required with naming conventions on each tape as well as episode numbers, scene and take info, which is designed to make it easier for un-archiving footage later for restoration, or for use in later seasons of a show. Without that metadata, it becomes a much more labor-intensive job to track down specific shots and scenes.

HBO also requires us to use multiple LTO brands in case one brand suddenly ceases to support the medium or a company goes under; that way, they can still un-archive the footage 30 years from now. I think a lot of companies are starting to move toward future-proofing their footage in case you need to go back and remaster it.

Does that make your job harder? Easier?
It makes it harder in some ways, and easier in others. Harder because there is a lot of material being generated. I think the total count for the show was something like 120TB of footage, which is not an excessive amount for a show this big, but it’s definitely a lot of data to manage over the course of a show.

Could you name some of the tools that you used?
As I mentioned, the heartbeat of all our dailies workflows is Scratch. I really love Scratch for three reasons. First, I can use it to do fully color graded, fully animated dailies with power windows, ramping curves — everything. Second, it handles metadata very well. This was crucial for Sharp Objects. And finally, it’s pretty affordable.

Beyond Scratch, the software that we tend to use most for copying footage is Silverstack. We use that for transferring files to and from the RAID to make sure everything’s verified. We use Scratch for processing the footage; that’s sort of the big nexus of everything. We use YoYottaID for LTO creation; that’s what HBO suggests we use to handle their specific LTO requirements. One of the things I love is the ability to export ALEs directly out of Scratch and into YoYottaID. This saves us time and errors. We use Aspera for transferring files back and forth between HBO and ourselves. We use Pix for web daily distributions. Pix access was specifically provided to us by HBO.

Hardware wise, we’re mostly working on either Mac Pros or Silverdraft Demon PCs for dailies. We used to use mostly Mac Pros, but we find that they aren’t quite robust enough for larger projects, though they can be useful for mid-range or smaller jobs.

We typically use Flanders monitors for our on-set grading, but we’ve also used Sonys and JVCs, depending on the budget level and what’s available on hand. We tend to use the G-Speed Shuttle XLs for the main on-set RAIDs, and we like to use OWC Thunderbays or Areca Thunderbolt RAIDs for our transfer drives.

What haven’t I asked that is important?
For me it’s important to have tools, operators and infrastructure that are reliable so we can generate trust with our clients. Trust is the biggest thing for me, and the reason we vetted all the software… we know what works. We know it does what we need it to do to be flexible for everybody’s needs. It’s really about just showing the clients that we’ve got their back.

Disfiguring The Man in Black for HBO’s Westworld

If you watch HBO’s Westworld, you are familiar with the one-time good guy turned bad guy The Man in Black. He is ruthless and easy to hate, so when karma caught up to him, audiences were not too upset about it.

Westworld doesn’t shy away from violence. In fact, it plays a major role in the series. A prime example of an invisible effect depicting mutilation came during the show’s recent Season Two finale. CVD VFX, a boutique visual effects house based in Vancouver, was called on to create the intricate and gruesome aftermath of The Man in Black’s hand being blown to pieces.

During the long-awaited face-off between The Man in Black (Ed Harris) and Dolores Abernathy (Evan Rachel Wood), we see their long-simmering conflict culminate with his pistol pressed against her forehead, cocked and ready to fire. But when he pulls the trigger, the gun backfires and explodes in his hand, sending fingers flying into the sand and leaving horrifyingly bloody stumps.

CVD VFX’s team augmented the on-set footage to bring the moment to life in excruciating detail. Harris’ fingers were wrapped in blue in the original shot, and CVD VFX went to work removing his digits and replacing them with animated stubs, complete with the visceral details of protruding bone and glistening blood. The team used special effects makeup for reference on both blood and lighting, and were able to seamlessly incorporate the practical and digital elements.

The result was impressive, especially considering the short turnaround time that CVD had to create the effect.

“We were brought on a little late in the game as we had a couple weeks to turn it around,” explains Chris van Dyck, founder of CVD VFX, who worked with the show’s VFX supervisor, Jay Worth. “Our first task was to provide reference/style frames of what we’d be proposing. It was great to have relatively free rein to propose how the fingers were blown off. Ultimately, we had great direction and once we put the shots together, everyone was happy pretty quickly.”

CVD used Foundry’s Nuke and Autodesk’s Maya to create the effect.

CVD VFX’s work on Westworld wasn’t the first time they worked with Worth. They previously worked together on Syfy’s The Magicians and Fox’s Wayward Pines.

The challenges of dialogue and ice in Game of Thrones ‘Beyond the Wall’

By Jennifer Walden

Fire-breathing dragons and hordes of battle-ready White Walkers are big attention grabbers on HBO’s Game of Thrones, but they’re not the sole draw for audiences. The stunning visual effects and sound design are just the gravy on the meat and potatoes of a story that has audiences asking for more.

Every line of dialogue is essential for following the tangled web of storylines. It’s also important to take in the emotional nuances of the actors’ performances. Striking the balance between clarity and dynamic delivery isn’t an easy feat. When a character speaks in a gruff whisper because, emotionally, it’s right for the scene, it’s the job of the production sound crew and the post sound crew to make that delivery work.

At Formosa Group’s Hollywood location, an Emmy-winning post sound team works together to put as much of the on-set performances on the screen as possible. They are supervising sound editor Tim Kimmel, supervising dialogue editor Paul Bercovitch and dialogue/music re-recording mixer Onnalee Blank.

Blank and the show’s mixing team picked up a 2018 Emmy for Outstanding Sound Mixing For a Comedy or Drama Series (One Hour) for their work on Season 7’s sixth episode, “Beyond the Wall.”

Tim Kimmel and Onnalee Blank

“The production sound crew does such a phenomenal job on the show,” says Kimmel. “They have to face so many issues on set, between the elements and the costumes. Even though we have to do some ADR, it would be a whole lot more if we didn’t have such a great sound crew on-set.”

On “Beyond the Wall,” the sound team faced a number of challenges. Starting at the beginning of this episode, Jon Snow [Kit Harington] and his band of fighters trek beyond the wall to capture a White Walker. As they walk across a frozen, windy landscape, they pass the time by getting to know each other more. Here the threads of their individual stories from past seasons start to weave together. Important connections are being made in each line of dialogue.

Those snowy scenes were shot in Iceland and the actors wore metal spikes on their shoes to help them navigate the icy ground. Unfortunately, the spikes also made their footsteps sound loud and crunchy, and that got recorded onto the production tracks.

Another challenge came from their costumes. They wore thick coats of leather and fur, which muffled their dialogue at times or pressed against the mic and created a scratchy sound. Wind was also a factor, sometimes buffeting across the mic and causing a low rumble on the tracks.

“What’s funny is that parts of the scene would be really tough to get cleaned up because the wind is blowing and you hear the spikes on their shoes — you hear costume movements. Then all of a sudden they stop and talk for a minute and the wind stops and it’s the most pristine, quiet, perfect recording you can think of,” explains Kimmel. “It almost sounded like it was shot on a soundstage. In Iceland, when the wind isn’t blowing and the actors aren’t moving, it’s completely quiet and still. So it was tough to get those two to match.”

As supervising sound editor, Kimmel is the first to assess the production dialogue tracks. He goes through an episode and marks priority sections for supervising dialogue editor Bercovitch to tackle first. Bercovitch says, “That helps Tim [Kimmel] put together his ADR plan. He wants to try to pare down that list as much as possible. For ‘Beyond the Wall,’ he wanted me to start with the brotherhood’s walk-and-talk north of the wall.”

Bercovitch began his edit by trying to clean up the existing dialogue. For that opening sequence, he used iZotope RX 6’s Spectral Repair to clean up the crunchy footsteps and the rumble of heavy winds. Next, he searched for usable alt takes from the lav and boom tracks, looking for a clean syllable or a full line to cut in as needed. Once Bercovitch was done editing, Kimmel could determine what still needed to be covered in ADR. “For the walk-and-talk beyond the wall, the production sound crew really did a phenomenal job. We didn’t have to loop that scene in its entirety. How they got as good of recordings as they did is honestly beyond me.”

Since most of the principal actors are UK- and Ireland-based, the ADR is shot in London at Boom Post with ADR supervisor Tim Hands. “Tim [Hands] records 90% of the ADR for each season. Occasionally, we’ll shoot it here if the actor is in LA,” notes Kimmel.

Hands had more lines than usual to cover on Beyond the Wall because of the battle sequence between the brotherhood and the army of the dead. The principal actors came in to record grunts, efforts and breaths, which were then cut to picture. The battle also included Bercovitch’s selects of usable production sound from that sequence.

Re-recording mixer Blank went through all of those elements on dub Stage 1 at Formosa Hollywood using an Avid S6 console to control the Pro Tools 12 session. She chose vocalizations that weren’t “too breathy, or sound like it’s too much effort because it just sounds like a whole bunch of grunts happening,” she says. “I try to make the ADR sound the same as the production dialogue choices by using EQ, and I only play sounds for whoever is on screen because otherwise it just creates too much confusion.”
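Blank’s EQ matching is done by ear on the console, but the underlying idea of matching one source’s tone to another can be sketched as a static match-EQ: derive a per-frequency gain curve from the average spectra of the two recordings and apply it to the ADR. The function and parameters below are a hypothetical illustration, not anything from the actual mix stage.

```python
import numpy as np
from scipy.signal import stft, istft

def match_eq(adr, reference, sr=48000, nperseg=1024):
    """Apply a static per-frequency gain so the ADR's average spectrum tracks the production track."""
    _, _, R = stft(reference, fs=sr, nperseg=nperseg)
    _, _, A = stft(adr, fs=sr, nperseg=nperseg)
    ref_mag = np.mean(np.abs(R), axis=1)
    adr_mag = np.mean(np.abs(A), axis=1) + 1e-12  # floor avoids divide-by-zero
    gain = (ref_mag / adr_mag)[:, None]           # one gain per frequency bin
    _, matched = istft(A * gain, fs=sr, nperseg=nperseg)
    return matched
```

A mixer would also smooth the gain curve across bands and ride it by hand; a raw bin-by-bin curve like this can over-fit noise in the reference.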

One scene that required extensive ADR was between Arya (Maisie Williams) and Sansa (Sophie Turner) on the catwalk at Winterfell. In the seemingly peaceful scene, the sisters share an intimate conversation about their father as snow lightly falls from the sky. Only it wasn’t so peaceful. The snow was created by a loud snow machine that permeated the production sound, which meant the dialogue for the entire scene needed to be replaced. “That is the only dialogue scene that I had no hand in, and I’ve been working on the show for three seasons now,” says Bercovitch.

For Bercovitch, his most challenging scenes to edit were ones that might seem like they’d be fairly straightforward. On Dragonstone, Daenerys (Emilia Clarke) and Tyrion (Peter Dinklage) are in the map room having a pointed discussion on succession for the Iron Throne. It’s a talk between two people in an interior environment, but Bercovitch points out that the change of camera perspective can change the sound of the mics. “On this particular scene and on a lot of scenes in the show, you have the characters moving around within the scene. You get a lot of switching between close-ups and longer shots, so you’re going between angles with a usable boom to angles where the boom is not usable.”

There’s a similar setup with Sansa and Brienne (Gwendoline Christie) at Winterfell. The two characters discuss Brienne’s journey to parley with Cersei (Lena Headey) in Sansa’s stead. Here, Bercovitch faced the same challenge of matching mic perspectives, and also had the added challenge of working around sounds from the fireplace. “I have to fish around in the alt takes — and there were a lot of alts — to try to get those scenes sounding a little more consistent. I always try to keep the mic angles sounding consistent even before the dialogue gets to Onnalee (Blank). A big part of her job is dealing with those disparate sound sources and trying to make them sound the same. But my job, as I see it, is to make those sound sources a little less disparate before they get to her.”

One tool that’s helped Bercovitch achieve great dialogue edits is iZotope’s RX 6. “It doesn’t necessarily make cleaning dialogue faster,” he says. “It doesn’t save me a ton of time, but it allows me to do so much more with my time. There is so much more that you can do with iZotope RX 6 that you couldn’t previously do. It still takes nitpicking and detailed work to get the dialogue to where you want it, but iZotope is such an incredibly powerful tool that you can get the result that you want.”

On the dub stage, Blank says one of her most challenging scenes was the opening walk-and-talk sequence beyond the wall. “Half of that was ADR, half was production, and to make it all sound the same was really challenging. Those scenes took me four days to mix.”

Her other challenge was the ADR scene with Arya and Sansa in Winterfell, since every line there was looped. To help the ADR sound natural, as if it’s coming from the scene, Blank processes and renders multiple tracks of fill and backgrounds with the ADR lines and then re-records that back into Avid Pro Tools. “That really helps it sit back into the screen a little more. Playing the Foley like it’s another character helps too. That really makes the scene come alive.”

Bercovitch explains that the final dialogue you hear in a series doesn’t start out that way. It takes a lot of work to get the dialogue to sound like it would in reality. “That’s the thing about dialogue. People hear dialogue all day, every day. We talk to other people and it doesn’t take any work for us to understand when other people speak. Since it doesn’t take any work in one’s life why would it require a lot of work when putting a film together? There’s a big difference between the sound you hear in the world and recorded sound. Once it has been recorded you have to take a lot of care to get those recordings back to a place where your brain reads it as intelligible. And when you’re switching from angle to angle and changing mic placement and perspective, all those recordings sound different. You have to stitch those together and make them sound consistent so it sounds like dialogue you’d hear in reality.”

Achieving great-sounding dialogue is a team effort — from production through post. “Our post work on the dialogue is definitely a team effort, from Paul’s editing and Tim Hands’ shooting the ADR so well to Onnalee getting the ADR to match with the production,” explains Kimmel. “We figure out what production we can use and what we have to go to ADR for. It’s definitely a team effort and I am blessed to be working with such an amazing group of people.”


Jennifer Walden is a New Jersey-based audio engineer and writer.

Emmy Awards: HBO’s The Night Of

Nominee Nicholas Renbeck, supervising sound editor/re-recording mixer

By Jennifer Walden

The HBO drama series The Night Of tells the tale of Nasir “Naz” Khan, a young Pakistani-American man accused of brutally murdering a young woman in her uptown Manhattan home. The series takes the audience on a tour of New York City’s penal system, from the precinct to the morgue, into the courtroom and out to Rikers Island. It also explores different neighborhoods, from uptown Manhattan across the East River into Queens. Each location has a rich tapestry of sound, a vibrant background upon which the drama plays out.

Supervising sound editor/re-recording mixer Nicholas Renbeck of c5 Sound in New York has been nominated for two Emmys for his work on the show: one for Outstanding Sound Editing For A Limited Series for Ep. 2, “Subtle Beast,” and one for Outstanding Sound Mixing For A Limited Series for Ep. 1, “The Beach.” He’s already won a 2017 Golden Reel Award for Best Sound Editing on The Night Of.

Here he shares insight on building the expressive backgrounds and mixing the effects to create a rich world around the actors.

Nicholas Renbeck

How did you get involved with the show?
They were looking to do the sound in New York and c5 Sound was one of the places they were considering. I interviewed for the job and ended up getting it.

I flew out to Los Angeles while they were finishing the picture lock. Just prior to going, they had sent me screening links to watch the series, all but the last episode. So I viewed the first seven episodes pretty much straight in a row, and in less than 24 hours I got on the plane and flew out to LA to spot the entire show with Steve Zaillian (series creator/director/writer), still not knowing what happens in the last episode. While on the plane, I had all these possible sound ideas swirling around in my head, mixed with this deep desire to know what happens in the final episode.

Then upon arriving I sat and did a spotting session with Steve and Nick Houy, the picture editor. We watched all eight episodes over a two-day period and talked about the sound concerns and possibilities.

This was your first time working with showrunners Richard Price and Steven Zaillian. Did they have specific plans for how they wanted to use sound in the show?
Steve had a definite vision for where he wanted to go with the show. He had very specific ideas on what it would sound like in the prison, or what the city should sound like depending on the neighborhood. When I sat down with them, they already had a lot of sounds in their Avid Media Composer that they were working with. Actually, much more than any show I’ve worked on before.

Warren Shaw (a fellow supervising sound editor/sound designer who was New York-based but went out to Los Angeles a little while ago) had been brought onto the show early on while they were still cutting. Warren did some great initial sound design for them on a few of the later episodes. I got to hear what his ideas were and we brought his work, along with everything they had in the Avid, into our working sound sessions. Then Ruy Garcia, Wyatt Sprague (sound design/effects editors) and I kept going further, adding more elements and refining ideas.

I find there’s always a transitional step when moving from a mono or stereo Avid track into a 5.1 surround environment. Everybody up to this point is used to listening to things in a certain way. Now we’ve added four more speakers, and there’s a re-adjustment process that happens. So, I spent a good amount of time working to present all the material in a way that would play to the strengths of a 5.1 sound environment.

What came about was a wonderful combination of all our ideas up to that point. I would make a full 5.1 sound effect premix in one of c5’s sound design suites for an entire episode, then bring Steve in and get his reaction, and then afterward build from that. What we learned from working with Steve on Episode 102 we would then take and apply to Episode 103, building as we went.

How did they want the prison to sound? What descriptions did they give?
You hear this low rumbling tone, this presence of heaviness. That really spoke to Steve’s idea of what he wanted the prison atmosphere to encompass. We found sounds and tones to mold that mood, working to create what that feeling is like when the prison is busy and full of activity. We also created the flip side of what that oppressive sound is when the lights are out and we are alone with Naz [Riz Ahmed] in this very scary place that’s now quiet. We kept working to give the cell block a heaviness so that it feels like it’s pulling you down as you go through these scenes with Naz and see what his life has become at this point.

Marissa Littlefield, our ADR supervisor, Steve and I had conversations about what we needed in terms of added voices and how we would handle that. We did a lot of interesting casting for loop group, with a focus on being specific to the locations around the city. We definitely put our loop group coordinators Dann Fink and Bruce Winant (of Loopers Unlimited) through the paces of casting. It was nice to be able to combine those added voices from the loop group with the substantial production recording that was done on set, along with a number of sounds we had in our personal sound libraries. I think we were pretty successful at creating those different locations based on both voices and sound atmospheres.

What about the reverb work for the prison and the precinct? You have dry loop group recordings, so what reverbs did you use to help fit those into the environments?
I jump back and forth between Avid’s ReVibe II and Space and Audio Ease’s Altiverb. In doing some of his design work, I know Ruy liked to use Soundtoys’ EchoBoy delay for some fun stuff, and I believe Michael Berry (re-recording mixer on music/dialogue/ADR/Foley) used ReVibe II and Altiverb for most of the show. So there was a variety of different reverbs and effects that we would use.

In some cases, we would apply reverb directly to the sound file, and in other cases we would wait until we got to the mix. In terms of the loop group voices, Michael Berry spent time figuring out where he wanted those to sit — how far back in the environment they would play and how they would play against the effects tracks that we created. We found a nice balance there.
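Altiverb is a convolution reverb, and the basic operation of placing a dry recording into a sampled space can be sketched in a few lines: convolve the dry signal with the room’s impulse response, then blend the wet result back under the dry track to control how far back in the environment the voice sits. The function below is a hypothetical illustration of that idea, not any plugin’s implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_reverb(dry, impulse_response, wet_mix=0.35):
    """Place a dry line into a space by convolving it with that room's impulse response."""
    wet = fftconvolve(dry, impulse_response)[: len(dry)]
    wet /= max(np.max(np.abs(wet)), 1e-12)  # normalize so the blend doesn't clip
    return (1.0 - wet_mix) * dry + wet_mix * wet
```

Raising `wet_mix` pushes the voice deeper into the room, which is the same trade-off a mixer rides when fitting dry loop group into a background.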

Where did you mix “The Beach” episode? What console did you use?
Michael Berry was in charge of all the dialog, ADR, music and Foley premixing, which he did at PostWorks/Technicolor in New York, on the Avid S5. I did the sound effects premixing at c5 Sound, in a 5.1 design/mix room on an Avid D-Command. The final mix then happened at PostWorks/Technicolor. All of the sound editorial was done at c5.

What were some challenges you had while mixing “The Beach” and how did you handle them?
The trickiest scene for us was the one under the George Washington Bridge. The production tracks were challenging due to the noise of the river and the bridge overhead. However, the performances were so good we really wanted to save them at all costs. Sara Stern (dialogue editor) worked for a good while to clean up the initial dialogue, and then Michael [Berry] really worked those tracks to find a way to save and salvage the on-camera performances. iZotope RX 5 (RX 6 wasn’t out yet) was our friend in a big way.

Then we had to figure out where the atmospheres wanted to be because the performances are so strong that you don’t want to put the effects or the music over what the actors are doing. You don’t want to overpower that or take away from what is happening on-screen. There’s a lot of subtlety in our decisions. A little went a long way.

Did you have a favorite scene in terms of mixing sound effects on your side of the board?
I really liked the opening section in the Queens neighborhood during the day and going into the night with the drive into Manhattan. The whole driving sequence into the city in the cab has some really nice moments… the juxtaposition of the interiors of the house and cab with the city’s night exteriors.

Of all the episodes you could’ve picked from Season 1, why did you choose the mix on “The Beach” for Emmy consideration?
It’s the first episode and it really grabs you. I was just sitting there on the edge of my seat watching it for the first time. The performances were so powerful and our challenge was to add to that. How can you help build on that?

Steve, Michael and I felt this was the right episode to go with. It has interesting atmospheric sounds, the music is strong and the performances are strong. Across the board, the music, the effects and the dialogue were all nicely represented.

Let’s talk about the sound editing on “Subtle Beast,” which is up for Emmy consideration. What were some opportunities you had for creative sound on this episode?
What was nice about “Subtle Beast” is that we had so many different and interesting locations to address and figure out. There is the morgue, which is the hallway and the waiting area, the parking lot outside and the morgue itself. All of those were fantastic spots where we could design the backgrounds and sound effects to create the mood. This episode showcased most of the locations from the first episode again. And we see Naz being brought from the police precinct in the van across town to the holding cell under the courthouse, which is a great sequence. Then finally Naz goes into the transport to Riker’s Island. You have this array of locations in which to create this rich tapestry of sound.

Nothing is huge. There are no large gun battles or things of that nature. There are just many different locations for which we can create some interesting moods.

You did a fantastic job on the backgrounds. They are so expressive. I particularly like when the transport van is backing up to the precinct to pick up the prisoners. You hear the music playing from inside the van and it’s bouncing around the street outside.
There is some fantastic music editing by Dan Evans Farkas and Grant Conway that is happening there as well. It was nice to figure out, from an editorial sense, how to get in all your editing food groups — your sound effects, your music, your production, your loop group, ADR and Foley. There were a lot of good moments in that episode. In looking at the episodes we could have chosen, I felt that “Subtle Beast” was the strongest for us.

In terms of sound editing on “Subtle Beast,” what was the most challenging scene?
I’m not sure about most challenging, but the most engaging sequence for me was the trip from the police precinct in the van to the night holding cell. Once that van pulls in and Naz is being marched down the hall it’s a ride of sound, music and tension. And, possibly, fear.

There’s so much to work with: from the point at which the van is backing up, we’ve got the odd metal double doors on the van, then the juxtaposition of the van with Detective Box’s (Bill Camp) car drive and John Stone (John Turturro) going home to his brownstone. All these actions are intercutting with each other. When the van pulls up at Baxter Street, we lose the music and are left with these echoing footsteps and the police radio, surrounded by the dripping water of the location. Then finally it’s down into the night holding cells with the distant yelling voices. Naz doesn’t know what’s coming, but it doesn’t sound good. So that was one of the more intense and fun spots for me personally.

In building these backgrounds, what were some of your sources? Being in New York, were you able to go out and capture local ambiences? Or was it completely crafted in post?
We did some recordings around town to pick up what we needed. Since c5 is based in New York, we have a really great library of New York sounds to pull from. Also, the production location recordists did a great job of capturing stuff as well so we were able to use a number of those sounds in our sound bed. I would say 85 percent of the ambiences were created in post, and the other 15 percent was what was recorded on set.

Strangely enough, I personally have lived in two of the main locations of the series: the Upper West Side of Manhattan — on the exact street of Andrea’s brownstone — and Jackson Heights, Queens, where Naz’s family lives. So I was well aware of what these neighborhoods sounded like at all hours of the day and night and would use my own internal “appropriate location audio filter” when working on those locations. At the end of the day that’s sort of a silly side note, but I like to think it helps us stay true to the sounds of those neighborhoods.

Beyond the background sounds, but in keeping with what we crafted in post, once we get to Rikers I think it’s worth noting that the entire cellblock set had a floor of painted plywood. So it really fell to our Foley department to make sure all our footfalls on concrete were covered and ready to take center stage if called upon. The whole Foley team, led by Marko Costanzo (artist), George Lara (recordist) and Steve Visscher (supervising Foley editor), did a wonderful job.

Anything else you’d like to share about The Night Of?
It was a show that involved a lot of really good collaboration in terms of sound and music. I personally feel very fortunate to have had such a good sound crew comprising so many talented people, and very lucky for the opportunity to get to mix next to Michael Berry and see the care and skill he brings to the process. I am also very appreciative of the support we got along the way from everybody at HBO, our wonderful post supervisor Lori Slomka, as well as our picture editor Nick Houy and his crew.

Lastly, I think through our conversations and discussions with Steve Zaillian we were successful in figuring out how best to shape and mold the tracks into something that is very compelling to watch and listen to and I hope people really enjoy it.


Jennifer Walden is a New Jersey-based audio engineer and writer.

Game of Thrones: VFX associate producer Adam Chazen

With excitement starting to build for the seventh season of HBO’s Game of Thrones, what better time to take a quick look back at last season’s VFX workflow. HBO associate VFX producer Adam Chazen was kind enough to spend some time answering questions after just wrapping Season 7.

Tell us about your background as a VFX associate producer and what led you to Game of Thrones.
I got my first job as a PA at VFX studio Pixomondo. I was there for a few years, working under my current boss Steve Kullback (visual effects producer on Game of Thrones). He took me with him when he moved to work on Yogi Bear, and then on Game of Thrones.

I’ve been with the show since 2011, so this is my sixth year on board. It’s become a real family at this point; lots of people have been on since the pilot.

From shooting to post, what is your role working on Game of Thrones?
As the VFX associate producer, in pre-production mode I assist with organizing our previs and concept work. I help run and manage our VFX database and I schedule reviews with producers, directors and heads of departments.

During production I make sure everyone has what they need on set in order to shoot for the various VFX requirements. Also during production, we start to post the show — I’m in charge of running review sessions with our VFX supervisor Joe Bauer. I make sure that all of his notes get across to the vendors and that the vendors have everything they need to put the shots together.

Season 7 has actually been the longest we’ve stayed on set before going back to LA for post. When in Belfast, it’s all about managing the pre-production and production process, making sure everything gets done correctly to make the later VFX adjustments as streamlined as possible. We’ll have vendors all over the world working on that next step — from Australia to Spain, Vancouver, Montreal, LA, Dublin and beyond. We like to say that the sun never sets on Game of Thrones.

What’s the process for bringing new vendors onto the show?
They could be vendors that we’ve worked with in the past. Other times, we employ vendors that come recommended by other people. We check out industry reels and have studios do testing for us. For example, when we have dragon work we ask around for vendors willing to run dragon animation tests for us. A lot of it is word of mouth. In VFX, you work with the people that you know will do great work.

What’s your biggest challenge in creating Game of Thrones?
We’re doing such complex work that we need to use multiple vendors. This can be a big hurdle. In general, whether it be film or TV, when you have multiple vendors working on the same shot, it becomes a potential issue.

Linking in with cineSync helps. We can have a vendor in Australia and a vendor in Los Angeles both working on the same shot, at exactly the same time. I first started using cineSync while at Pixomondo and found it makes the revision process a lot quicker. We send notes out to vendors, but most of the time it’s easier to get on cineSync, see the same image and draw on it.

Even the simple move of hovering a cursor over the frame can answer a million questions. We have several vendors who don’t use English as their first language, such as those in Spain. In these cases, communication is a lot easier via cineSync. By pointing to a single portion of a single frame, we completely bypass the language barrier. It definitely helps to see an image on screen versus just explaining it.

What is your favorite part of the cineSync toolkit?
We’ve seen a lot of cool updates to cineSync. Specifically, I like the notes section, where you can export a PDF to include whichever frame that note is attributed to.

Honestly, just seeing a cursor move on-screen from someone else’s computer is huge. It makes things so much easier to just point and click. If we’re talking to someone on the phone, trying to tell them about an issue in the upper left-hand corner, it’s going to be hard to get our meaning across. cineSync takes away all of the guesswork.

Besides post, we also heavily use cineSync for shoot needs. We shoot the show in Northern Ireland, Iceland, Croatia, Spain and Calgary. With cineSync, we are able to review storyboards, previs, techvis and concepts with the producers, directors, HODs and others, wherever they are in the world. It’s crucial that everyone is on the same page. Being able to look at the same material together helps everyone get what they want from a day on set.

Is there a specific shot, effect or episode you’re particularly proud of?
The Battle of the Bastards — it was a huge episode. Particularly, the first half of the episode when Daenerys came in with her dragons at the battle of Meereen, showing those slavers who is boss. Meereen City itself was a large CG creation, which was unusual for Game of Thrones. We usually try to stay away from fully CG environments and like to get as much in-camera as possible.

For example, when the dragon breathes fire, we shot an actual flamethrower. Back in Season 5, we started pre-animating the dragon, translating the animation to a motion control rig and attaching a flamethrower to it. It moves exactly how the dragon would move, giving us a practical element to use in the shot. CG fire can be done, but it’s really tricky. Real is real, so you can’t question it.

With multiple vendors working on the sequence, we had Rodeo FX do the environment while Rhythm & Hues did the dragons. We used cineSync a lot, reviewing shots between both vendors in order to point out areas of concern. Then in the second half of the episode, which was the actual Battle of the Bastards, the work was brilliantly done by Australian VFX studio Iloura.

Qwire’s tool for managing scoring, music licensing upped to v.2.0

Qwire, a maker of cloud-based tools for managing scoring and licensing music to picture, has launched QwireMusic 2.0, which expands the collaboration, licensing and cue sheet capabilities of QwireMusic. The tool also features a new and intuitive user interface as well as support for the Windows OS. User feedback played a role in many of the new updates, including marker import of scenes from Avid for post, Excel export functions for all forms and reports and expanded file sharing options.

QwireMusic is a suite of integrated modules that consolidates and streamlines a wide range of tasks and interactions for pros involved with music and picture across all stages of post, as well as music clearance and administration. QwireMusic was created to help facilitate collaboration among picture editors and post producers, music supervisors and clearance, composers, music editors and production studios.

Here are some highlights of the new version:
Presentations — Presentations allow music cues and songs to be shared between music providers (supervisors and composers) and their clients (picture editors, studio music departments, directors and producers). With Presentations, selected music is synced to video, and viewers can independently adjust the balance between music and dialogue, adding comments on each track. This centralizes the music sharing and review process, eliminating the need for the confusing array of QuickTimes, Web links, emails and unsecured FTP sites that sometimes accompanies post production.

Real-time licensing status — QwireMusic 2.0 allows music supervisors to easily audition music, generate request letters, and share potential songs with anyone who needs to review them. When the music supervisor receives a quote approval, the picture editor and music editor are notified, and the studio music budget is updated instantly and seamlessly. In addition, problem songs can be instantly flagged. As with the original version of QwireMusic, request letters can be generated and emailed in one step with project-specific letterhead and signatures.

Electronic Cue Sheets — QwireMusic’s “visual cue sheet” allows users to review all of the information in a cue sheet displayed alongside the final picture lock. The cue sheet is automatically populated from data already entered in QwireMusic by the composer, music supervisor and music editor. Any errors or missing information are flagged. When the review is complete, a single button submits the cue sheet electronically to ASCAP and BMI.

QwireMusic has been used by music supervisors, composers, picture editors and music editors on over 40 productions in 2016, including Animals (HBO); Casual (Hulu); Fargo (FX); Guilt (Freeform); Harley and the Davidsons (Discovery); How to Get Away With Murder (ABC); Pitch (Fox); Shameless (Showtime); Teen Wolf (MTV); This Is Us (NBC); and Z: The Beginning of Everything (Amazon).

“Having everyone in the know on every cue ever put in a show saves a huge amount of time,” says Patrick Ward, a post producer for the shows Parenthood, The West Wing and Pure Genius. “With QwireMusic I spend about a tenth of the time that I used to disseminating cue information to different places and entities.”

The sounds of Brooklyn play lead role in HBO’s High Maintenance

By Jennifer Walden

New Yorkers are jaded, and one of the many reasons is that just about anything they want can be delivered right to their door: Chinese food, prescriptions, craft beer, dry cleaning and weed. Yes, weed. This particular item is delivered by “The Guy,” the protagonist of HBO’s new series, High Maintenance.

The Guy (played by series co-creator Ben Sinclair) bikes around Brooklyn delivering pot to a cast of quintessentially quirky New York characters. Series creators Sinclair and Katja Blichfeld string together vignettes — using The Guy as the common thread — to paint a realistic picture of Brooklynites.

Nutmeg’s Andrew Guastella. Photo credit: Carl Vasile

“The Guy delivers weed to people, often going into their homes and becoming part of their lives,” explains sound editor/re-recording mixer Andrew Guastella at Nutmeg, a creative marketing and post studio based in New York. “I think that what a lot of viewers like about the show is how quickly you come to know complete strangers in a sort of intimate way.”

Blichfeld and Sinclair find inspiration for their stories from their own experiences, says Guastella, who follows suit in terms of sound. “We focus on the realism of the sound, and that’s what makes this show unique.” The sound of New York City is ever-present, just as it is in real life. “Audio post was essential for texturizing our universe,” says Sinclair. “There’s a loud and vibrant city outside of those apartment walls. It was important to us to feel the presence of a city where people live on top of each other.”

Big City Sounds
That edict for realism drives all sound-related decisions on High Maintenance. On a typical series, Guastella would strive to clean up every noise on the production dialogue, but for High Maintenance, the sounds of sirens, horns, traffic and even car alarms are left in the tracks, as long as they’re not drowning out the dialogue. “It’s okay to leave sounds in that aren’t obtrusive and that sell the fact that they are in New York City,” he says.

For example, a car alarm went off during a take. It wasn’t in the way of the dialogue but it did drop out on a cut, making it stand out. “Instead of trying to remove the alarm from the dialogue, I decided to let it roll and I added a chirp from a car alarm, as if the owner turned off the alarm [or locked the car], to help incorporate it into the track. A car alarm is a sound you hear all the time in New York.”

Exterior scenes are acceptably lively, and if an interior scene is feeling too quiet, Guastella can raise a neighborly ruckus. “In New York, there’s always that noisy neighbor. Some show creators might be a little hesitant to use that because it could be distracting, but for this show, as long as it’s real, Ben and Katja are cool with it,” he says. During a particularly quiet interior scene, he tried adding the sounds of cars pulling away and other light traffic to fill up the space, but it wasn’t enough, so Guastella asked the creators, “How do you feel about the neighbors next door arguing?” And they said, “That’s real. That’s New York. Let’s try it out.”

Guastella crafted a commotion based on his own experience of living in an apartment in Queens. Every night he and his wife would hear the downstairs neighbors fighting. “One night they were yelling and then all we heard was this loud, enormous slam. Hopefully, it was a door,” jokes Guastella. “Ben and Katja are always pulling from their own experiences, so I tried to do that myself with the soundtrack.”

Despite the skill of production sound mixer Dimitri Kouri and a high tolerance for the ever-present sound of New York City, Guastella still finds himself cleaning dialogue tracks using iZotope's RX 5 Advanced. One of his favorite features is RX Connect. With this plug-in, he can select a region of dialogue in his Avid Pro Tools session and send it directly to iZotope's standalone RX application, where he can edit, clean and process the dialogue. Once he's satisfied, he can return the cleaned-up dialogue right back in sync on the Pro Tools timeline, exactly where he sent it from.

“I no longer have to deal with exporting and importing audio files, which was not an efficient way to work,” he says. “And for me, it’s important that I work within the standalone application. There are plug-in versions of some RX tools, but for me, the standalone version offers more flexibility and the opportunity to use the highly detailed visual feedback of its audio-spectrum analyzer. The spectrogram makes using tools like Spectral Repair and De-click that much more effective and efficient. There are more ways to use and combine the tools in general.”

Guastella has been with the series since 2012, during its webisode days on Vimeo. Back then, it was a passion project, something he'd work on at home on his own time. From the beginning, he's handled everything audio: the dialogue cleaning and editing, the ambience builds, Foley and the final mix. "Andrew [Guastella] brought his professional ear and was always such a pleasure to work with. He always delivered and was always on time," says Blichfeld.

The only aspect that Guastella doesn’t handle is the music. “That’s a combination of licensed music (secured by music supervisor Liz Fulton) and original composition by Chris Bear. The music is well-established by the time the episode gets to me,” he says.

On the Vimeo webisodes, Guastella would work an episode’s soundtrack into shape, and then send it to Blichfeld and Sinclair for notes. “They would email me or we would talk over the phone. The collaborative process wasn’t immediate,” he says. Now that HBO has picked up the series and renewed it for Season 2, Guastella is able to work on High Maintenance in his studio at Nutmeg, where he has access to all the amenities of a full-service post facility, such as sound effects libraries, an ADR booth, a 5.1 surround system and room to accommodate the series creators who like to hang around and work on the sound with Guastella. “They are very particular about sound and very specific. It’s great to have instant access to them. They were here more than I would’ve expected them to be and it was great spending all that time with them personally and professionally.”

In addition to being a series co-creator, co-writer and co-director with Blichfeld, Sinclair is also one of the show's two editors. This meant the creators were being pulled in several directions, which eventually kept them from spending as much time in the studio with Guastella. "By the last three episodes of this season, I had absorbed all of their creative intentions. I was able to get an episode to the point of a full mix, and they would come in for just a few hours to review and make tweaks."

With a bigger budget from HBO, Guastella is also able to record ADR when necessary, record loop group and perform Foley for the show at Nutmeg. “Now that we have a budget and the space to record actual Foley, we’re faced with the question of how much Foley do we want to do? When you Foley sound for every movement and footstep, it doesn’t always sound realistic, and the creators are very aware of that,” says Guastella.

In addition to a minimalist approach, another way he keeps the Foley sounding real is by recording it in the real world. In Episode 3, the story is told from a dog's POV. Using a Tascam DR-680 digital recorder and a Sennheiser 416 shotgun mic, Guastella recorded an "enormous amount of Foley at home with my Beagle, Bailey, and my father-in-law's Yorkie and Doberman. I did a lot of Foley recording at the dog park, too, to capture Foley for the dog outside."

5.1 Surround Mix
Another difference between the Vimeo episodes and the HBO series is the final mix format. "HBO requires a 5.1 surround mix, and that's something that demands the infrastructure of a professional studio, not my living room," says Guastella. He takes advantage of the surround field by working with ambiences, creating a richer environment during exterior shots, which he can then contrast with a closer, more confined sound for the interior shots.

“This is a very dialogue-driven show so I’m not putting too much information in the surrounds. But there is so much sound in New York City, and you are really able to play with perspective of the interior and exterior sounds,” he explains. For example, the opening of Episode 3, “Grandpa,” follows Gatsby the dog as he enters the front of his house and eventually exits out of the back. Guastella says he was “able to bring the exterior surrounds in with the characters, then gradually pan them from surround to a heavier LCR once he began approaching the back door and the backyard was in front of him.”
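
The front-to-back move Guastella describes is, at bottom, a crossfade of channel gains. Here is a hypothetical sketch (my own illustration, not his actual Pro Tools automation) of an equal-power pan between the surround field and the LCR front:

```python
import math

def front_rear_gains(position):
    """Equal-power crossfade between the surround field and the front.

    position: 0.0 = fully in the surrounds (Ls/Rs),
              1.0 = fully front (L/C/R).
    Returns (front_gain, rear_gain).
    """
    angle = position * math.pi / 2
    # sin/cos law: front^2 + rear^2 == 1 at every position, so the
    # total acoustic power stays constant as the sound travels forward.
    return (math.sin(angle), math.cos(angle))

# Halfway through the move, both fields sit near 0.707 (about -3 dB).
front, rear = front_rear_gains(0.5)
```

Real 5.1 panners add center-channel laws and divergence controls, but the constant-power idea is why a sound can sweep from the surrounds to the screen without a dip or bump in level.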

The series may have made the jump from Vimeo to HBO but the soul of the show has changed very little, and that’s by design. “Ben, Katja, and Russell Gregory [the third executive producer] are just so loyal to the people who helped get this series off the ground with them. On top of that, they wanted to keep the show feeling how it did on the web, even though it’s now on HBO. They didn’t want to disappoint any fans that were wondering if the series was going to turn into something else… something that it wasn’t. It was really important to the show creators that the series stayed the same, for their fans and for them. Part of that was keeping on a lot of the people who helped make it what it was,” concludes Guastella.

Check out High Maintenance on HBO, Fridays at 11pm.


Jennifer Walden is a NJ-based audio engineer and writer. Follow her at @audiojeney.

Rockin’ music supervision for HBO’s ‘Vinyl’

By Jennifer Walden

Otis Redding, The Velvet Underground, The Rolling Stones, Pink Floyd, The Temptations, Janis Joplin, The Doors… the list of music featured on the HBO series Vinyl would make any music supervisor drool, and that’s just a small sample of the artists whose music has been featured so far. There are still four more episodes to go this season.

As you can imagine, big-name artists come with a big price tag. “When you have this many songs from the golden era of rock ‘n’ roll, you’re going to spend some real money. It’s such a music-driven enterprise that you have to go into it with your eyes open,” says music supervisor Randall Poster. He and co-music supervisor Meghan Currier, at NYC’s Search Party Music, had the job of curating and creating Vinyl’s epic soundtrack.

Randall Poster

Poster has over 100 feature film credits, including Carol, The Grand Budapest Hotel, The Wolf of Wall Street, Insurgent and Divergent, Boyhood, I'm Not There (a Bob Dylan biopic) and Velvet Goldmine, to name just a few. He's done series work too, including HBO's Boardwalk Empire, where he worked with series creator Terence Winter and executive producer Martin Scorsese — two of the masterminds behind Vinyl. Those earlier soundtrack-driven collaborations with Winter and Scorsese mean there's a lot of trust in the relationship.

“It’s a collaborative medium,” notes Poster. “We all throw in ideas and we all have certain passions. Marty is the master of using songs in movies. I think we’ve developed a pretty strong working relationship and process.”

In 2012, Poster won a Grammy Award for Best Compilation Soundtrack Album for Motion Picture, Television or Other Visual Media for Boardwalk Empire. It wouldn’t be surprising if Vinyl’s soundtrack earns the same recognition.

Vinyl tells the story of Richie Finestra (Bobby Cannavale), owner of the faltering music label American Century, which is struggling to find its footing on the shifting tectonic plates of musical genres in the early ‘70s. One new genre to rise out of the rubble of rock ‘n’ roll’s greatest era is proto-punk, which Finestra feels can re-energize the rock scene. “You are on the verge of punk rock, on the verge of disco, the elements of hip-hop are just beginning to formulate,” explains Poster. “This whole season you are on the verge of these musical revolutions. Also, part of the show’s music is borne out of Richie Finestra’s musical foundations, which he is trying to somehow reconnect with.”

There is a wealth of opportunity for music — interstitials, bands on-screen, diegetic music coming from cassette players, turntables and radios. There’s also music that underscores the drama or helps to reinforce story points. It’s no wonder that there are 20–30 tracks in every episode. According to Poster, the pilot alone had around 60 tracks. “One thing that is really unique about Vinyl is the volume of music, the amount of it.”

Licensing
Some tracks from the aforementioned top-shelf artists were licensed with help from Warner Bros. Records and Atlantic Records, with both labels offering up their catalogs to Poster and Currier. “They were happy to make their artists and most of their catalog available to us,” says Poster.

But those two major labels were by no means the extent of Poster’s and Currier’s reach. Ultimately, if there was a track they wanted to use in the show, regardless of the label, they went for it. “Everyone wanted to do this soundtrack and they really were passionate about it. People saw the ambition of the enterprise and responded to it.”

The hardest part about licensing the big-name songs — like the hits for the lip-sync interstitials, such as Janis Joplin (played by Catherine Stephen) performing "Cry Baby" in Episode 4 — was simply tracking down who owned the rights. "For the lip-sync sequences, we talked to the series writers and we'd land on a song. Then we'd go and work out all the licensing details," explains Poster.

On-Screen Performers
The real challenge lies in Vinyl's substantial use of on-camera music. Several primary characters are musicians performing original songs, like the fictional punk band the Nasty Bits, led by Kip Stevens (played by Mick Jagger's son, James Jagger), and the funk-rock band led by Hannibal (played by Daniel J. Watts). Then there are faux versions of popular bands playing re-recordings of their hits, such as "Somethin' Else" performed on-screen by a faux Led Zeppelin in Episode 3, or "Personality Crisis" performed by a mocked-up New York Dolls at the end of Episode 1. "In terms of the workflow and getting involved in the pre-production process, those were the things that you had to deal with first — landing on repertoire, and casting and rehearsing actors. That was the initial focus," reports Poster.

They needed to find real musicians to play in the bands on-screen, so Currier took the lead in casting the on-screen musicians that weren’t main characters and didn’t have speaking lines. “She was really chasing people down on the subway, asking them if they played music. We needed to cast people that had that period look, or resembled artists in a particular band. There were so many on-screen acts that we needed to cover. For example, Hannibal’s band in Episode 4 has 12 people in it. We had to find them and then rehearse them, to make sure it all worked correctly,” says Poster.

The re-recorded hits and original tunes involved collaborations with music industry heavy-hitters, like Trey Songz, Dan Auerbach, Elvis Costello, David Johansen (New York Dolls) and Charli XCX. "When we wanted to have Trey Songz, an Atlantic artist, voice one of the characters on the show, and we wanted The Arcs, which is Dan Auerbach's (The Black Keys) side project, Atlantic Records helped us in terms of accessing these artists," explains Poster. "Kevin Weaver, who is the point person there at Atlantic Records, was just a business dynamo. He really helped us cut through a lot of red tape."

Poster tapped Lee Ranaldo, co-founder of Sonic Youth, to produce the Nasty Bits' punk tracks. According to Pitchfork, their tune list includes songs salvaged from the nearly forgotten '70s punk band Jack Ruby, lending an era-authentic punk vibe to Vinyl.

To create the band's backing tracks for James Jagger's vocals, Ranaldo chose Yo La Tengo bassist James McNew, Sonic Youth drummer Steve Shelley, avant-garde guitarist Alan Licht and guitarist Don Fleming of the '80s art-punk band Velvet Monkeys. "We really had a great collection of artists who worked with us, and we relied on them for insight and precision," says Poster. "I was really excited to do new music with John Doe (from the late-'70s punk band X). Elvis Costello — one of my rock 'n' roll gods, whom we worked with on Boardwalk Empire a few times — came in and sang for us. Lenny Kaye, from Patti Smith Group, is someone we have worked with before. He's a good resource. It's great to channel the musical energies of some of our rock 'n' roll heroes. Musicians are often the best people to talk to about the things they were responding to in an era."

If you love all the blues, rock ‘n’ roll, punk, funk, disco and ‘70s pop featured in the series, you can purchase the soundtrack “Vinyl: Music From the HBO Original Series — Volume 1” released by Atlantic Records, as a physical CD, digital download or (appropriately) as a vinyl LP. Each week there is also a new five-song digital soundtrack featuring music from that Sunday’s upcoming episode. And as the season wraps up, a “Volume 2” soundtrack will also be available. When the Vinyl digital soundtracks become available, you can download them via iTunes and Google Play, with streaming available on Spotify.

Jennifer Walden is a writer and audio engineer based in New Jersey.

Digging Deeper: Endcrawl co-founder John ‘Pliny’ Eremic

By Randi Altman

Many of you might know John “Pliny” Eremic, a fixture in New York post. When I first met Pliny he was CTO and director of post production at Offhollywood. His post division was later spun off and sold to Light Iron, which was in turn acquired by Panavision.

After Offhollywood, Pliny moved to HBO as a workflow specialist, but he is also the co-founder — with long-time collaborator Alan Grow — of Endcrawl.com, a cloud-based tool for creating end titles for film and television.

Endcrawl has grown significantly over the last year and counts both modest indies and some pretty high-end titles as customers. I figured it was a good time to dig a bit deeper.

How did Endcrawl come about?
End titles were always a huge thorn in my side when I was running the post boutique. The endless, manual revision process is so time intensive that a number of major post houses flat-out refuse to offer this service any more. So, I started hacking on Endcrawl to scratch my own itch.

Both you and your co-founder Alan are working media professionals. Can you talk about how this affected the tool and its evolution?
Most filmmakers aren’t hackers; most coders never made a movie. As a result, many of this industry’s tools are built by folks who are incredibly smart but may lack first-hand post and filmmaking experience. I’ve felt that pain a lot.

Endcrawl is built by filmmakers for filmmakers. We have deep, first-hand experience with file-based specs and formats (DCI, IMF, AS-02), so our renders are targeted at these industry-standard delivery specifications. Occasionally we’re even able to steer customers away from a bad workflow decision.

How is this different than other end credit tools in the world?
For starters we offer unlimited renders.

Why unlimited renders?
This was a mantra from day one. There’s always “one last fix.” A typical indie feature with Endcrawl will keep making revisions six to 12 months after calling it final. That’s where a flat rate with unlimited do-overs comes in very handy. I’ve seen productions start with a $2-3k quote from a designer, and end up with a $6-10k bill. That’s just for the end credits. We’re not interested in dinging you for overages. It’s a flat rate, so render away.

What else differentiates Endcrawl?
Endcrawl is a cloud tool designed to manage the end titles process only — that is its reason for being. Speed, affordability and removing workflow hassles are the goals.

How do people traditionally do end titles?
Typically there are three options. One is using a title designer. This option costs a lot and they might want to charge you overages after your 89th revision.

There are also do-it-yourself options using products from Adobe or Autodesk, and while these are great tools, the process is extremely time consuming for this use — I’d estimate 40-plus hours of human labor.

Finally, there are affordable plug-ins, but they deliver, in my opinion, cheap-looking results.

Do you need to be a designer to use Endcrawl?
No. We’ve made it so our typography is good-looking right out of the box. After hundreds of projects, we’ve spent a lot of time thinking about what does and does not work typographically.

Do you have tips for these non-designers regarding typography?
I could write a book. In fact, we are about to publish a series of articles on this topic, but I’ll give you a few:

• Don’t rely on “classic” typefaces like Helvetica and Futura. Nice on large posters, but lousy on screen in small point sizes.

• Lean toward typefaces with an upright stress — meaning more condensed fonts — which will allow you to make better use of horizontal space. This in turn preserves vertical space, resulting in a smoother scroll.

• Avoid “light” and “ultralight” fonts, or typefaces with a high stroke contrast. Those tend to shimmer quite a bit when in motion. Pick a typeface that has a large variety of designed weights and stick to medium, semibold and bold.

• Make sure your font has strong glyph support for those grips named Bjørn Sæther Løvås and Hansína Þórðardóttir.

Do people have to download the product?
Endcrawl runs right in your web browser. There is nothing to download or install.

What about compatibility?
Our render engine outputs uncompressed DPX, all the standard QuickTime formats, H.264 and PDFs. By far the most common final deliverable is 10-bit DPX, which we typically turn around inside of one hour. The preview renders come in minutes. And the render engine is on-demand, 24/7.

How has the product evolved since you first came to market?
Our “lean startup” was a script attached to a Google Doc. We did our first 20 to 30 projects that way. We saw a lot of validation, especially around the speed and ease of the service.

Year one, we had a customer with four films at Sundance. He completed all of his end titles in three days, with many revisions and renders in between. He’s finished over 20 projects with us now.

Since then, Alan has architected a highly optimized cloud render engine. Endcrawl still integrates with Google Docs for collaboration, but that is now connected to a powerful Web UI controlling layout and realtime preview.

How do people pay for Endcrawl?
On the free tier, we provide free and unlimited 1K preview renders in H.264. For $499, a project can upgrade to unlimited, uncompressed DPX renders. We are currently targeting feature films, but we will be deploying more pricing tiers for other types of projects — think episodic and shorts — in 2016.

What films have used the tool?
Some recent titles include Spike Lee’s Chi-Raq and Oliver Stone’s Snowden. Our customers run the gamut from $50K Kickstarter movies to $100 million studio franchises. (I can’t name most of those studio features because several title houses run all of their end credits through us as a white-label service.)

Some 2016 Sundance titles include Spa Night, Swiss Army Man, Tallulah and The Bad Kids. Some of my personal favorites are Beasts of No Nation, A Most Violent Year, The Family Fang, Meadowland, The Adderall Diaries and Video Game High School.

What haven’t I asked that is important?
We’re about to roll out 4K. We’ve “unofficially” supported 4K on a few pilot projects like Beasts of No Nation and War Room, but it’s about to be available to everyone.

Also, we have a pretty cool Twitter account @Endcrawl, which you should definitely follow.

Sam Daley on color grading HBO’s ‘Show Me a Hero’

By Ellen Wixted

David Simon’s newest and much-anticipated six-part series Show Me a Hero premiered on HBO in the US in mid-August. Like The Wire, which Simon created, Show Me a Hero explores race and community — this time through the lens of housing desegregation in late-‘80s Yonkers, New York. Co-written by Simon and journalist William F. Zorzi, the show was directed by Paul Haggis with Andrij Parekh as cinematographer, and produced by Simon, Haggis, Zorzi, Gail Mutrux and Simon’s long-time collaborator, Nina Noble. Technicolor PostWorks‘ Sam Daley served as the colorist. I caught up with him recently to talk about the show.

A self-described “film guy,” New York-based Daley has worked as colorist on films ranging from Martin Scorsese’s The Departed to Lena Dunham’s Girls with commercial projects rounding out his portfolio. When I asked Daley what stood out about his experience on Show Me a Hero, his answer was quick: “The work I did on the dailies paid off hugely when we got to finishing.” Originally brought into the project as dailies colorist, Daley’s scope quickly expanded to include finishing — and his unusual workflow set the stage for high-impact results.

Sam Daley

Daley’s background positioned him perfectly for his role. After graduating from film school and working briefly in production, Daley worked in a film lab before moving into post production. Daley’s deep knowledge of photochemical processing, cameras and filters turned him into a resource for colorists he worked alongside and piqued his interest in the craft. He spent years paying his dues before eventually becoming known for his work as a colorist. “People tend to get pigeonholed, and I was known for my work on dailies,” Daley notes. “But ultimately the cinematographers I worked with insisted that I do both dailies and finishing, as Ed Lachman (cinematographer) did when we worked together on Mildred Pierce.”

The Look
Daley and Show Me a Hero's cinematographer, Andrij Parekh, had collaborated on previous projects, and Parekh's clear vision from the project's earliest stages positioned the show for success. "Andrij came up with this beautiful color treatment and created a look book that included references to Giorgio de Chirico's painted architecture, art deco artist Tamara de Lempicka's highly stylized faces, and films from the 1970s, including The Conformist, The Insider, The Assassination of Richard Nixon and The Yards. Sometimes look books are aspirational, but Andrij's footage delivered the look he wanted, and that gave me permission to be aggressive with the grade," says Daley. "Because we've worked together before, I came in with an understanding of where he likes his images to be."

Parekh shot the series using the Arri Alexa and Leica Summilux-C lenses. Since the show is set in the late '80s, a key goal for the production was to ground its look firmly in that era. Another was giving the series' two worlds distinct visual treatments to underscore how separate they are: the cool, stark political realm and the warmer, brighter world of the housing projects. The team's relatively simple test process validated the approach and introduced Daley to the Colorfront On-Set Dailies system, which proved a valuable addition to his pipeline.

“Colorfront is really robust for dailies, but primitive for finishing — it offers simple color controls that can be translated by other systems later. Using it for the first time reminded me of when I was training to be a colorist — when everything tactile was very new to me — and it dawned on me that to create a period look you don’t have to add a nostalgic tint or grain. With Colorfront I was able to create the kind of look that would have been around in the ’80s with simple primary grades, contrast, and saturation adjustments.”

“This is the crazy thing: by limiting my toolset I was able to get super creative and deliver a look that doesn’t feel at all modern. In a sense, the system handcuffed me — but Andrij wasn’t looking for a lot of razzle-dazzle. Using Colorfront enabled me to create the spine of an appropriate period style that makes the show look like it was created in the ‘80s. Everyone loved the way the dailies looked, and they were watching them for months. By the time we got to finishing, we had something that was 90% of the way there.”

Blackmagic’s DaVinci Resolve 11 was used for finishing, a process that was unusually straightforward because of the up-front work done on the dailies. “Because all shots were already matched, final grading was done scene by scene. We changed the tone of some scenes, but the biggest decision we made was to desaturate everything by an additional 7% to make the flesh tones less buzzy and to set the look more firmly in the period.”
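
A global saturation trim like that is simple color math: each pixel is blended toward its own luma. The sketch below is purely illustrative (my own Python, not Resolve's internals), using the Rec. 709 luma weights that are standard for HD television.

```python
def desaturate(rgb, amount):
    """Reduce saturation by `amount`: 0.0 leaves the pixel untouched,
    1.0 collapses it to grayscale. `rgb` holds floats in 0.0-1.0."""
    r, g, b = rgb
    # Rec. 709 luma -- the HD television standard weighting.
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    mix = lambda c: c + (luma - c) * amount
    return (mix(r), mix(g), mix(b))

# A 7% trim nudges every channel 7% of the way toward neutral gray --
# enough to calm "buzzy" flesh tones without reading as black-and-white.
skin = desaturate((0.8, 0.4, 0.2), 0.07)
```

Grading systems apply the same idea per pixel in a calibrated color space; the point is only that a small percentage shift is a subtle, global move rather than a stylized effect.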

Daley was enthusiastic about the production overall, and HBO’s role in setting a terrific stage for moving the art of TV forward. “HBO was awesome — and they always seem to provide the extra breathing space needed to do great work. This show in particular felt like a symphony, where everyone had the same goal.”

I asked Daley about his perspective on collaboration, and his answer was surprising. "'The past is prologue.' Everything you did in the past is preparation for what you're doing now, and that includes relationships. Andrij and I had a high level of trust and confidence going into this project. I wasn't nervous because I knew what he wanted, and he trusted that if I was pushing a look it was for a reason. We weren't tentative, and as a result the project turned into a dream job that went smoothly from production through post." He insists this is true for every client — you always have to give 110 percent. "The project I'm working on today is the most important project I've ever worked on."

Daley's advice for aspiring colorists? "Embrace technology. I was a film guy who resisted digital for a long time, but working on Tiny Furniture threw all of my preconceptions about digital out the window. The feature was shot using a Canon 7D because the budget was micro and the producer already owned the camera. The success of that movie made me stop being an old school film snob — now I look at new tech and think 'bring it on.'"

‘Banshee’ VFX Part 2: Technicolor Flame artist Paul Hill

By Randi Altman

A couple of weeks ago we checked in with Banshee associate producer Gwyn Shovelski, who talked about the show’s visual effects workflow. That workflow includes Technicolor Flame artist Paul Hill, who has intimate knowledge of what the Banshee team wants — he worked with most of them during the run of HBO’s True Blood.

Cinemax’s Banshee takes place in a small, picturesque town in Pennsylvania’s Amish country. Banshee is home to a variety of people who have some pretty ugly secrets to hide. It’s also home to a stockpile of guns that would make some drug cartels drool.

A big plot point on Banshee, which ended its third season in March, is showing some of the main…

SMPTE elects Officers, Governors for 2015-2016

SMPTE (The Society of Motion Picture and Television Engineers) has elected new officers and governors for 2015-16. Robert Seidel, VP of engineering and advanced technology at CBS, will take office as the Society’s new president on Jan. 1, 2015.

Seidel, who previously held SMPTE board roles including executive VP and finance VP, will serve a two-year term as SMPTE president. He succeeds outgoing president Wendy Aylsworth, senior VP of technology at Warner Bros. Technical Operations, who will now become the Society’s past president.

Robert Seidel

“Bob Seidel has been a tremendous asset to the Society in several key positions, and we are confident that he will continue and build on the good work done by Wendy during her successful tenure as president,” said SMPTE executive director Barbara Lange. “Bob and Wendy are among the many SMPTE members who have contributed a great deal to the Society’s growth. The officers and governors elected for 2015-16 — and those who continue on in their existing roles — bring extraordinary knowledge, experience, and energy to the Society and its advancement of the motion-imaging industry.”

Other incoming SMPTE officers elected for the two-year 2015-2016 term include Matthew S. Goldman, senior VP of TV compression technology at Ericsson, who will serve as executive VP; Patrick Griffis, executive director of technology strategy in the office of the CTO at Dolby, who will continue his service as education VP; and Peter Wharton, VP of technology and business development at BroadStream Solutions, who will continue to serve as secretary/treasurer. In January 2015, the board will elect an officer to fill the post vacated by Goldman.

Ten governors, eight of whom are incumbents, were elected to serve in SMPTE posts around the world. The re-elected governors include Angelo D'Alessio, GM at the Center for Accessible Media, who will again serve as governor for Europe, the Middle East, Africa and Central and South America. William T. Hayes, director of engineering and technology at Iowa Public Television, will again serve as governor for the central region, and Sara J. Kudrle, product marketing manager of monitoring and control at Grass Valley, will serve again as governor for the western region. KL Lam, past VP of broadcasting and engineering operations at Hong Kong Cable TV, will serve again as governor for the Asia-Australia region.

Pierre Marion, director of media engineering for French networks at CBC/Radio-Canada, will again serve as governor for the Canadian region. John McCoskey, executive VP/CTO at the Motion Picture Association of America (MPAA), will serve again as governor for the Eastern US region. William C. Miller, president at Miltag Media Technology, will again serve as governor for the New York region. Clyde Smith, senior VP of new technology at Fox Networks Engineering and Operations, will again serve as a governor for the Hollywood region.

Newly elected are Steve Beres, VP of media and technology operations at HBO, who will serve as a governor for the Hollywood region, and Merrick Ackermans, engineering director of global technology and operations for US network operations at Turner, who will serve as a governor for the Southern US region.

The Society’s officers and governors elected for the 2015-2016 term will serve on the SMPTE Board of Governors along with other board officers, regional governors and directors of specific areas, including standards, education and membership.

Officers who were not up for re-election and who continue to serve on the SMPTE Board of Governors Executive Committee include SMPTE Standards VP Alan Lambshead, retired from Evertz, and SMPTE Membership VP Paul Stechly of Applied Electronics.

Governors who were not up for re-election and who continue on the SMPTE Board of Governors include Dan Burnett of Ericsson Television Inc. (Southern US region); Paul Chapman of FotoKem (Hollywood region); Randy Conrad of Imagine Communications (Canadian region); John Ferder of CBS (New York region); Karl Kuhn of Tektronix (Eastern US region); John Maizels of Entropy Enterprises and Productions (Asia/Australia region); Mark Narveson of Patterson & Sheridan (Western US region); T.J. Scott Jr. of Grass Valley (Southern US region); Leon Silverman of The Walt Disney Studios (Hollywood region); and Richard Welsh of Sundog Media Toolkit (Europe, the Middle East, Africa, and Central and South America region).

SMPTE’s annual meeting begins on October 20 at the Loews Hollywood Hotel in Hollywood.


‘The Leftovers’ composer Max Richter on scoring this new HBO series

HBO’s drama series The Leftovers focuses on the residents of a small town three years after two percent of the world’s population disappeared without explanation. Viewers see how they, and the world in general, struggle to come to terms with what happened.

Created by Damon Lindelof (Lost) and novelist Tom Perrotta, and based on Perrotta’s novel of the same name, The Leftovers is the story of the people who weren’t “chosen.” Lindelof and Perrotta executive produce the series along with Peter Berg and Sarah Aubrey.

The following is a Q&A, courtesy of HBO, with British-born, Berlin-based composer Max Richter, who, in addition to scoring The Leftovers, feature films and documentaries, makes albums, writes ballets, plays concerts and more. For the show, he creates different themes based on…

#PostChat: ‘True Blood’ editor Mark Hartzell

By Randi Altman

This week’s #PostChat featured Mark Hartzell (@tweetermf), editor on the HBO series True Blood, which focuses on a town where vampires and mortals live together. He fielded questions about the art of editing and workflow, including how it finally felt when he was able to sink his teeth into an episode of his own. #PostChat’s Jesse Averna (@Dr0id) moderated the chat.

Hartzell, who started on the Alan Ball-created and -produced True Blood as an assistant to editor Andy Keir in Season Two, worked his way up through the ranks, learning from everyone along the way. As the show airs its seventh and final season, Hartzell is proud to wear the title of editor.


SportsPost NY event set for February 26

New York — The Sports Video Group, along with HBO, is hosting the second-annual SportsPost NY event on Wednesday, February 26, at the Michael Fuchs Theater at HBO in New York. Leading post production execs, producers, editors, graphics professionals and technology manufacturers will come together to discuss the industry’s latest tools, projects and design philosophies.

An event highlight is a behind-the-scenes look at HBO Sports’ award-winning reality series 24/7. The team behind the show will share favorite moments and discuss how their production and post workflows help them pull together each episode on time and on budget, delivered with the series’ signature style. The show, which follows two contenders in the lead-up to a major sporting event, took home a Sports Emmy for Outstanding Editing for the fifth year in a row last May.

postPerspective’s own Randi Altman will be moderating a panel on post in the cloud, called “Breaking Down Collaborative Walls: Postproduction in the Cloud.” Collaborating in the cloud has been a boon to broadcasters with workflows distributed across in-house and geographically separated facilities. What can and can’t be done in the cloud, and when is it most useful? What are the economic and creative advantages of shifting workflows to distributed shared networks? How are security and redundancy handled? And how do collaborative cloud workflows at smaller facilities compare with those used during events like the Olympics?

Panelists include Alex Grossman, VP of media and entertainment at Quantum; Mike Jackman, chief business development officer at Brevity Ventures and EVP of post production at FilmNation Entertainment; and Art Raymond, CEO of Levels Beyond.

Here is a full schedule of SportsPost’s activities:
1:30-2:15 p.m.: Opening Keynote: Behind the Scenes of HBO’s 24/7
2:15-3:00 p.m.: Cream of the SportsPost Crop: Winning Ways in Storytelling
3:00-3:30 p.m.: Case Study: Inside RadicalMedia’s New 50-Seat FCPX Edit Suite
3:30-3:45 p.m.: Networking break
3:45-4:30 p.m.: Breaking Down Collaborative Walls: Postproduction in the Cloud
4:30-5:15 p.m.: Graphics Tools: Scaling Designs from Sizzling Opens to Dynamic On-Air Graphics
5:15-6:15 p.m.: Networking Reception

For more on the event, check out http://sportsvideo.org/main/sportspostny-2014.