
Company 3 buys Sixteen19, offering full-service post in NYC

Company 3 has acquired Sixteen19, a creative editorial, production and post company based in New York City. The deal includes Sixteen19’s visual effects wing, PowerHouse VFX, and a mobile dailies operation with international reach.

The acquisition helps Company 3 further serve NYC’s booming post market for feature film and episodic TV. As part of the acquisition, industry veterans and Sixteen19 co-founders Jonathan Hoffman and Pete Conlin, along with their longtime collaborator, EVP of business development and strategy Alastair Binks, will join Company 3’s leadership team.

“With Sixteen19 under the Company 3 umbrella, we significantly expand what we bring to the production community, addressing a real unmet need in the industry,” says Company 3 president Stefan Sonnenfeld. “This infusion of talent and infrastructure will allow us to provide a complete suite of services for clients, from the start of production through the creative editing process to visual effects, final color, finishing and mastering. We’ve worked in tandem with Sixteen19 many times over the years, so we know that they have always provided strong client relationships, a best-in-class team and a deeply creative environment. We’re excited to bring that company’s vision into the fold at Company 3.”

Sonnenfeld will continue to serve as president of Company 3 and will oversee operations of Sixteen19. As a subsidiary of Deluxe, Company 3 is part of a broad portfolio of post services. Bringing together the complementary services and geographic reach of Company 3, Sixteen19 and PowerHouse VFX will expand Company 3’s overall portfolio of post offerings and reach new markets in the US and internationally.

Sixteen19’s New York location includes 60 large editorial suites; two 4K digital cinema grading theaters; and a number of comfortable open spaces and common areas. Sixteen19’s mobile dailies services will be a perfect companion to Company 3’s existing offerings in that arena. PowerHouse VFX includes dedicated teams of experienced supervisors, producers and artists in 2D and 3D visual effects and compositing.

“The New York film community initially recognized the potential for a Company 3 and Sixteen19 partnership,” says Sixteen19’s Hoffman. “It’s not just the fact that a significant majority of the projects we work on are finished at Company 3, it’s more that our fundamental vision about post has always been aligned with Stefan’s. We value innovation; we’ve built terrific creative teams; and above all else, we both put clients first, always.”

Sixteen19 and PowerHouse VFX will retain their company names.

Scratch 9.1 now supports AJA Kona 5, Red 8K workflows

Assimilate’s Scratch 9.1 now supports AJA’s Kona 5 audio and video I/O cards, enabling users to output 8K 60p video via 12G-SDI. Scratch 9.1 also now supports AJA’s Io 4K Plus I/O box with Thunderbolt 3 connectivity. The product also works with AJA’s T-Tap, Io 4K, Kona 1 and Kona 4.

Scratch support for Kona 5 allows for a smooth dailies and finishing workflow for Red 8K footage. Scratch handles the decoding and deBayering of 8K Red RAW in realtime at full resolution and can now natively output 8K over SDI through Kona 5, facilitating a full end-to-end 8K workflow.

Available immediately, Scratch 9.1 starts at $89 a month or $695 annually. AJA’s Kona 5 and Io 4K Plus are available now through AJA’s reseller network for $2,995 and $2,495, respectively.

ADR, loop groups, ad-libs: Veep’s Emmy-nominated audio team

By Jennifer Walden

HBO wrapped up its seventh and final season of Veep back in May, so sadly, we had to say goodbye to Julia Louis-Dreyfus’ morally flexible and potty-mouthed Selina Meyer. And while Selina’s political career was a bit rocky at times, the series was rock-solid — as evidenced by its 17 Emmy wins and 68 nominations over the show’s seven-year run.

For re-recording mixers William Freesh and John W. Cook II, this is their third Emmy nomination for Sound Mixing on Veep. This year, they entered the series finale — Season 7, Episode 7 “Veep” — for award consideration.

L-R: William Freesh, Sue Cahill, John W. Cook II

Veep post sound editing and mixing was handled at NBCUniversal Studio Post in Los Angeles. In the midst of Emmy fever, we caught up with re-recording mixer Cook (who won a past Emmy for the mix on Scrubs) and Veep supervising sound editor Sue Cahill (winner of two past Emmys for her work on Black Sails).

Here, Cook and Cahill talk about how Veep’s sound has grown over the years, how they made the rapid-fire jokes crystal clear, and the challenges they faced in crafting the series’ final episode — like building the responsive convention crowds, mixing the transitions to and from the TV broadcasts, and cutting that epic three-way argument between Selina, Uncle Jeff and Jonah.

You’ve been with Veep since 2016? How has your approach to the show changed over the years?
John W. Cook II: Yes, we started when the series came to the States (having previously been posted in England with series creator Armando Iannucci).

Sue Cahill: Dave Mandel became the showrunner, starting with Season 5, and that’s when we started.

Cook: When we started mixing the show, production sound mixer Bill MacPherson and I talked a lot about how together we might improve the sound of the show. He made some tweaks, like trying out different body mics and negotiating with our producers to allow for more boom miking. Notwithstanding all the great work Bill did before Season 5, my job got consistently easier over Seasons 5 through 7 because of his well-recorded tracks.

Also, some of our tools have changed in the last three years. We installed the Avid S6 console. This, along with a handful of new plugins, has helped us work a little faster.

Cahill: In the dialogue editing process this season, we started using a tool called Auto-Align Post from Sound Radix. It’s a great tool that allowed us to cut both the boom and the ISO mics for every clip throughout the show and put them in perfect phase. That gave John the flexibility to mix the two together for a warmer, richer sound throughout. We lean heavily on the ISO mics, but being able to mix in more of the boom helped the overall sound.

Cook: You get a bit more depth. Body mics tend to be more flat, so you have to add a little bit of reverb and a lot of EQing to get it to sound as bright and punchy as the boom mic. When you can mix them together, you get a natural reverb on the sound that gives the dialogue more depth. It makes it feel like it’s in the space more. And it requires a little less EQing on the ISO mic because you’re not relying on it 100%. When the Auto-Align Post technology came out, I was able to use both mics together more often. Before Auto-Align, I would shy away from doing that if it was too much work to make them sound in-phase. The plugin makes it easier to use both, and I find myself using the boom and ISO mics together more often.
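
[Editor’s note: Sound Radix hasn’t published the internals of Auto-Align Post, but the basic idea described here, estimating the time offset between boom and lav and shifting one so the pair sums without comb filtering, can be sketched with a simple cross-correlation. A toy Python/NumPy illustration; real tools track a drifting offset over windows of time rather than a single static lag.]

```python
import numpy as np

def align_boom_to_lav(boom: np.ndarray, lav: np.ndarray, max_lag: int = 2048):
    """Estimate the delay between two mics capturing the same source
    and shift the boom so the pair is nominally in phase."""
    n = min(len(boom), len(lav))
    corr = np.correlate(boom[:n], lav[:n], mode="full")
    center = n - 1                                # index of zero lag
    window = corr[center - max_lag:center + max_lag + 1]
    lag = int(np.argmax(window)) - max_lag        # > 0: boom lags the lav
    return np.roll(boom, -lag), lag               # roll wraps; fine for a toy

# Once aligned, the two mics can be blended without phasing artifacts:
# mixed = 0.6 * lav + 0.4 * aligned_boom
```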

The dialogue on the show has always been rapid-fire, and you really want to hear every joke. Any tools or techniques you use to help the dialogue cut through?
Cook: In my chain, I’m using FabFilter Pro-Q 2 a lot, EQing pretty much every single line in the show. FabFilter’s built-in spectrum analyzer helps me get to the target EQ I’m going for on each line.
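
[Editor’s note: the “target EQ” workflow Cook describes, reading the analyzer and nudging every line toward a consistent spectral shape, amounts to comparing a line’s measured band energies against a reference curve. A rough sketch; the band edges and target values below are hypothetical, not Cook’s actual curve.]

```python
import numpy as np

def band_offsets_db(line: np.ndarray, sr: int, band_edges_hz: np.ndarray,
                    target_db: np.ndarray) -> np.ndarray:
    """Return the per-band gain (in dB) an EQ would need to push this
    dialogue line's average spectrum toward the target curve."""
    spectrum = np.abs(np.fft.rfft(line)) ** 2
    freqs = np.fft.rfftfreq(len(line), 1.0 / sr)
    measured = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        idx = (freqs >= lo) & (freqs < hi)
        power = spectrum[idx].mean() if idx.any() else 1e-12
        measured.append(10.0 * np.log10(power + 1e-12))
    return target_db - np.array(measured)

# Hypothetical octave-ish bands and target levels (dB) for dialogue:
edges = np.array([125, 250, 500, 1000, 2000, 4000, 8000])
target = np.array([-30.0, -28.0, -26.0, -27.0, -30.0, -34.0])
```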

In terms of compression, I’m doing a lot of gain staging. I have five different points in the chain where I use compression. I’m never trying to slam it too much, just trying to tap it at different stages. It’s a music technique that helps the dialogue to never sound squashed. Gain staging allows me to get a little more punch and a little more volume after each stage of compression.
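
[Editor’s note: the effect of gain staging is easy to see in numbers. With a simple static compressor gain computer, five gentle 1.5:1 stages reach roughly the same total reduction as one heavy 6:1 stage, but no single stage works hard enough to sound squashed. The thresholds and ratios below are illustrative, not Cook’s settings.]

```python
def comp_gain_db(level_db: float, threshold_db: float, ratio: float) -> float:
    """Static gain computer of a downward compressor: above threshold,
    output rises only 1 dB for every `ratio` dB of input."""
    if level_db <= threshold_db:
        return 0.0
    return (threshold_db + (level_db - threshold_db) / ratio) - level_db

peak = -6.0  # incoming dialogue peak, dBFS

# One heavy stage: all the reduction happens at once.
single = comp_gain_db(peak, threshold_db=-24.0, ratio=6.0)

# Five gentle stages, each just "tapping" the signal.
level, staged = peak, 0.0
for _ in range(5):
    g = comp_gain_db(level, threshold_db=-24.0, ratio=1.5)
    staged, level = staged + g, level + g

print(f"one 6:1 stage: {single:.1f} dB")            # -15.0 dB
print(f"five 1.5:1 stages: {staged:.1f} dB total")  # about -15.6 dB
```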

Cahill: On the editing side, it starts with digging through the production mic tracks to find the cleanest sound. The dialogue assembly on this show is huge. It’s 13 tracks wide for each clip, and there are literally thousands of clips. The show is very cutty, and there are tons of overlaps. Weeding through all the material to find the best lav mics, in addition to the boom, really takes time. It’s not necessarily the character’s lav mic that’s the best for a line. They might be speaking more clearly into the mic of the person that is right across from them. So, listening to every mic choice and finding the best lav mics requires a couple days of work before we even start editing.

Also, we do a lot of iZotope RX work in editing before the dialogue reaches John’s hands. That helps to improve intelligibility and clear up the tracks before John works his magic on it.

Is it hard to find alternate production takes due to the amount of ad-libbing on the show? Do you find you do a lot of ADR?
Cahill: Exactly, it’s really hard to find production alts in the show because there is so much improv. So, yeah, it takes extra time to find the cleanest version of the desired lines. There is a significant amount of ADR in the show. In this episode in particular, we had 144 lines of principal ADR. And we had 250 cues of group. It’s pretty massive.

There must’ve been so much loop group in the “Veep” episode. Every time they’re in the convention center, it’s packed with people!
Cook: There was the larger convention floor to consider, and the people that were 10 to 15 feet away from whatever character was talking on camera. We tried to balance that big space with the immediate space around the characters.

This particular Veep episode has a chaotic vibe. The main location is the nomination convention. There are huge crowds, TV interviews (both in the convention hall and also playing on Selina’s TV in her skybox suite and hotel room) and a big celebration at the end. Editorially, how did you approach the design of this hectic atmosphere?
Cahill: Our sound effects editor Jonathan Golodner had a lot of recordings from prior national conventions. So those recordings are used throughout this episode. It really gives the convention center that authenticity. It gave us the feeling of those enormous crowds. It really helped to sell the space, both when they are on the convention floor and from the skyboxes.

The loop group we talked about was a huge part of the sound design. There were layers and layers of crafted walla. We listened to a lot of footage from past conventions and found that there is always a speaker on the floor giving a speech to ignite the crowd, so we tried to recreate that in loop group. We did some speeches that we played in the background so we would have these swells of the crowd and crowd reactions that gave the crowd some movement so that it didn’t sound static. I felt like it gave it a lot more life.

We recreated chanting in loop group. There was a chant for Tom James (Hugh Laurie), which was part of production. They were saying, “Run Tom Run!” We augmented that with group. We changed the start of that chant from where it was in production. We used the loop group to start that chant sooner.

Cook: The Tom James chant was one instance where we did have production crowd. But most of the time, Sue was building the crowds with the loop group.

Cahill: I used casting director Barbara Harris for loop group, and throughout the season we had so many different crowds and rallies — both interior and exterior — that we built with loop group because there wasn’t enough from production. We had to hit on all the points that they are talking about in the story. Jonah (Timothy Simons) had some fun rallies this season.

Cook: Those moments of Jonah’s were always more of a “call-and-response”-type treatment.

The convention location offered plenty of opportunity for creative mixing. For example, the episode starts with Congressman Furlong (Dan Bakkedahl) addressing the crowd from the podium. The shot cuts to a CBSN TV broadcast of him addressing the crowd. Next the shot cuts to Selina’s skybox, where they’re watching him on TV. Then it’s quickly back to Furlong in the convention hall, then back to the TV broadcast, and back to Selina’s room — all in the span of seconds. Can you tell me about your mix on that sequence?
Cook: It was about deciding on the right reverb for the convention center and the right reverbs for all the loop group and the crowds and how wide to be (how much of the surrounds we used) in the convention space. Cutting to the skybox, all of that sound was mixed to mono, for the most part, and EQ’d a little bit. The producers didn’t want to futz it too much. They wanted to keep the energy, so mixing it to mono was the primary way of dealing with it.

Whenever there was a graphic on the lower third, we talked about treating that sound like it was news footage. But we decided we liked the energy of it being full fidelity for all of those moments we’re on the convention floor.

Another interesting thing was the way that Bill Freesh and I worked together. Bill was handling all of the big cut crowds, and I was handling the loop group on my side. We were trying to walk the line between a general crowd din on the convention floor, where you always felt like it was busy and crowded and huge, along with specific reactions from the loop group reacting to something that Furlong would say, or later in the show, reacting to Selina’s acceptance speech. We always wanted to play reactions to the specifics, but on the convention floor it never seems to get quiet. There was a lot of discussion about that.

Even though we cut from the convention center into the skybox, those considerations about crowd were still in play — whether we were on the convention floor or watching the convention through a TV monitor.

You did an amazing job on all those transitions — from the podium to the TV broadcast to the skybox. It felt very real, very natural.
Cook: Thank you! That was important to us, and certainly important to the producers. All the while, we tried to maintain as much energy as we could. Once we got the sound of it right, we made sure that the volume was kept up enough so that you always felt that energy.

It feels like the backgrounds never stop when they’re in the convention hall. In Selina’s skybox, when someone opens the door to the hallway, you hear the crowd as though the sound is traveling down the hallway. Such a great detail.
Cook and Cahill: Thank you!

For the background TV broadcasts feeding Selina info about the race — like Buddy Calhoun (Matt Oberg) talking about the transgender bathrooms — what was your approach to mixing those in this episode? How did you decide when to really push them forward in the mix and when to pull back?
Cook: We thought about panning. For the most part, our main storyline is in the center. When you have a TV running in the background, you can pan it off to the side a bit. It’s amazing how you can keep the volume up a little more without it getting in the way and masking the primary characters’ dialogue.

It’s also about finding the right EQ so that the TV broadcast isn’t sharing the same EQ bandwidth as the characters in the room.

Compression plays a role too, whether that’s via a plugin or me riding the fader. I can manually do what a side-chained compressor can do by just riding the fader and pulling the sound down when necessary or boosting it when there’s a space between dialogue lines from the main characters. The challenge is that there is constant talking on this show.
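
[Editor’s note: what Cook does by hand on the fader is functionally a side-chain ducker: follow the dialogue level and pull the TV bus down whenever the dialogue is active, easing back up in the gaps. A minimal sketch of that behavior; the depth, threshold and release values are illustrative, and on the stage this is a plugin or a fader move, not code.]

```python
import numpy as np

def duck(background: np.ndarray, dialogue: np.ndarray, sr: int,
         depth_db: float = -9.0, release_ms: float = 250.0,
         thresh: float = 0.05) -> np.ndarray:
    """Drop the background by depth_db as soon as the dialogue is
    active (fast attack), then recover smoothly on release."""
    floor = 10.0 ** (depth_db / 20.0)
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    gain = np.ones(len(background))
    g = 1.0
    for i, x in enumerate(np.abs(dialogue[:len(background)])):
        if x > thresh:
            g = floor                    # duck immediately
        else:
            g = rel * g + (1.0 - rel)    # glide back toward unity
        gain[i] = g
    return background * gain

# tv_bus_ducked = duck(tv_bus, main_dialogue, sr=48000)
```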

Going back to what has changed over the last three years, one of the big changes is that we have more time per episode to mix the show. We got more and more time from the first mix to the last; by the end, we had twice as much time to mix an episode.

Even with all the backgrounds happening in Veep, you never miss the dialogue lines. Except, there’s a great argument that happens when Selina tells Jonah he’s going to be vice president. His Uncle Jeff (Peter MacNicol) starts yelling at him, and then Selina joins in. And Jonah is yelling back at them. It’s a great cacophony of insults. Can you tell me about that scene?
Cahill: Those 15 seconds of screen time took us several hours of work in editorial. Dave (Mandel) said he couldn’t understand Selina clearly enough, but he didn’t want to loop the whole argument. Of course, all three characters are overlapped — you can hear all of them on each other’s mics — so how do you just loop Selina?

We started with an extensive production alt search that went back and forth through the cutting room a few times. We decided that we did need to ADR Selina. So we ended up using a combination of mostly ADR for Selina’s side with a little bit of production.

For the other two characters, we wanted to save their production lines, so our dialogue editor Jane Boegel (she’s the best!) did an amazing job using iZotope RX’s De-bleed feature to clear Selina’s voice out of their mics, so we could preserve their performances.

We didn’t loop any of Uncle Jeff, and it was all because of Jane’s work cleaning out Selina. We were able to save all of Uncle Jeff. It’s mostly production for Jonah, but we did have to loop a few words for him. So it was ADR for Selina, with all of Uncle Jeff and nearly all of Jonah from set. Then, it was up to John to make it match.

Cook: For me, in moments like those, it’s about trying to get equal volumes for all the characters involved. I tried to make Selina’s yelling and Uncle Jeff’s yelling at the exact same level so the listener’s ear can decide what it wants to focus on rather than my mix telling you what to focus on.

Another great mix sequence was Selina’s nomination for president. There’s a promo video of her talking about horses that’s playing back in the convention hall. There are multiple layers of processing happening — the TV filter, the PA distortion and the convention hall reverb. Can you tell me about the processing on that scene?
Cook: Oftentimes, when I do that PA sound, it’s a little bit of futzing, like rolling off the lows and highs, almost like you would do for a small TV. But then you put a big reverb on it, with some pre-delay on it as well, so you hear it bouncing off the walls. Once you find the right reverb, you’re also hearing it reflecting off the walls a little bit. Sometimes I’ll add a little bit of distortion as well, as if it’s coming out of the PA.
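
[Editor’s note: the chain Cook describes (roll off the lows and highs, add a little drive, then a pre-delayed hall reverb) is straightforward to prototype. A rough sketch assuming NumPy/SciPy; the corner frequencies, drive and decay are guesses for illustration, not his actual settings.]

```python
import numpy as np
from scipy.signal import butter, sosfilt

def pa_futz(x: np.ndarray, sr: int, predelay_ms: float = 60.0,
            reverb_ms: float = 900.0, wet: float = 0.35) -> np.ndarray:
    """Band-limit like a PA driver, add mild drive, then blend in a
    pre-delayed synthetic tail so it reads as bouncing off the walls."""
    # 1. Roll off the lows and highs.
    sos = butter(4, [300.0, 4000.0], btype="bandpass", fs=sr, output="sos")
    y = sosfilt(sos, x)
    # 2. A little distortion, as if the PA is being pushed.
    y = np.tanh(3.0 * y) / 3.0
    # 3. Pre-delay into a crude exponentially decaying noise tail.
    pre = int(sr * predelay_ms / 1000.0)
    tail = int(sr * reverb_ms / 1000.0)
    ir = np.zeros(pre + tail)
    ir[pre:] = np.random.randn(tail) * np.exp(-4.0 * np.arange(tail) / tail)
    verb = np.convolve(y, 0.05 * ir, mode="full")[:len(y)]
    return (1.0 - wet) * y + wet * verb
```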

When Selina is backstage talking with Gary (Tony Hale), I rolled off a lot more of the highs on the reverb return on the promo video. Then, in the same way I’d approach levels with a TV in the room, I was riding the level on the promo video to fit around the main characters’ dialogue. I tried to push it in between little breaks in the conversation, pulling it down lower when we needed to focus on the main characters.

What was the most challenging scene for you to mix?
Cook: I would say the Tom James chanting was challenging because we wanted to hear the chant from inside the skybox to the balcony of the skybox and then down on the convention floor. There was a lot of conversation about the microphones from Mike McLintock’s (Matt Walsh) interview. The producers decided that since there was a little bit of bleed in the production already, they wanted Mike’s microphone to be going out to the PA speakers in the convention hall. You hear a big reverb on Tom James as well. Then, the level of all the loop group specifics and chanting — from the ramp up of the chanting from zero to full volume — we negotiated with the producers. That was one of the more challenging scenes.

The acceptance speech was challenging too, because of all of the cutaways. There is that moment with Gary getting arrested by the FBI; we had to decide how much of that we wanted to hear. There was the Billy Joel song “We Didn’t Start the Fire” that played over all the characters’ banter following Selina’s acceptance speech. We had to balance the dialogue with the desire to crank up that track as much as we could.

There were so many great moments this season. How did you decide on the series finale episode, “Veep,” for Emmy consideration for Sound Mixing?
Cook: It was mostly about story. This is the end of a seven-year run (a three-year run for Sue and me), but the fact that every character gets a moment — a wrap-up on their character — makes me nostalgic about this episode in that way.

It also had some great sound challenges that came together nicely, like all the different crowds and the use of loop group. We’ve been using a lot of loop group on the show for the past three years, but this episode had a particularly massive amount of loop group.

The producers were also huge fans of this episode. When I talked to Dave Mandel about which episode we should put up, he recommended this one as well.

Any other thoughts you’d like to add on the sound of Veep?
Cook: I’m going to miss Veep a lot. The people on it, like Dave Mandel, Julia Louis-Dreyfus and Morgan Sackett … everyone behind the credenza. They were always working to create an even better show. It was a thrill to be a team member. They always treated us like we were in it together to make something great. It was a pleasure to work with people that recognize and appreciate the time and the heart that we contribute. I’ll miss working with them.

Cahill: I agree with John. On that last playback, no one wanted to leave the stage. Dave brought champagne, and Julia brought chocolates. It was really hard to say goodbye.

Roger and Big Machine merge, operate as Roger

Creative agency Roger and full-service production company Big Machine have merged — a move that will expand the creative capabilities for their respective agency, brand and entertainment clients. The studios will retain the Roger name and operate at Roger’s newly renovated facility in Los Angeles.

The combined management team includes CD Terence Lee, CD Dane Macbeth, EP Josh Libitsky, director Steve Petersen, CD Ken Carlson and Sean Owolo, who focuses on business development.

Roger now offers expanded talent and resources for projects that require branding, design, animation, VFX, VR/AR, live action and content development. Roger uses Adobe Creative Cloud for most of its workflows. The tools vary from project to project, but outside of the Adobe Suite, they also use Maxon Cinema 4D, Autodesk Maya, Blackmagic DaVinci Resolve and Foundry Nuke.

Since the merger, the studio has already embarked on a number of projects, including major creative campaigns for Disney and Sony Pictures.

Roger’s new 6,500-square-foot studio includes four private offices, three editing suites, two conference rooms, an empty shooting space for greenscreen work, a kitchen and a lounge.

Behind the Title: Mission’s head of digital imaging, Pablo Garcia Soriano

NAME: Pablo Garcia Soriano (@pablo.garcia.soriano)

COMPANY: UK-based Mission (@missiondigital)

CAN YOU DESCRIBE YOUR COMPANY?
Mission is a provider of DIT and digital lab services based in London, with additional offices in Cardiff, Rome, Prague and Madrid. We process and manage media and metadata, producing rich deliverables with as much captured metadata as possible — delivering consistency and creating efficiencies in VFX and post production.

WHAT’S YOUR JOB TITLE?
Head of Digital Imaging

WHAT DOES THAT ENTAIL?
I work with cinematographers to preserve their vision from the point of capture until the final deliverable. This means supporting productions through camera tests, pre-production and look design. I also work with manufacturers, which often means I get an early look at new products.

Mission

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
It sounds like a very technical job, but it’s so much more than engineering — it’s creative engineering. It’s problem solving and making technical complexities seem easy to a creative person.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I love working with cinematographers to help them achieve their vision and make sure it is preserved through post. I also enjoy being able to experiment with the latest technology and have an influence on products. Recently, I’ve been involved with growing Mission’s international presence with our Madrid office, which is particularly close to my heart.

WHAT’S YOUR LEAST FAVORITE?
Sometimes I get to spend hours in a dark room with a probe calibrating monitors. It’s dull but necessary!

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
In the early to mid-morning after two coffees. Also at the end of the day when the office is quieter.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Gardening… or motor racing.

WHY DID YOU CHOOSE THIS PROFESSION?
I feel like it chose me. I’m an architect by training, but I was a working musician until around the age of 28, when I stepped down from the stage and started freelancing on music promos. I was doing a bit of everything on those: directing, editing, finishing, etc. Then I was asked to be the assistant editor on two films by a colleague with whom I was sharing an office.

After this experience (and due to the changes the music industry was going through), I decided to focus fully on editing, cutting several documentaries and short films. I then ended up on a weekly TV show where I was in charge of the final assembly. This is where I started paying attention to continuity and the overall look. I was using Apple Final Cut and Apple Color, which I loved. All of this happened in a very organic way, and I was always self-taught.

I didn’t take studying seriously until I met the DP Rafa Roche, AEC, on our first film together, around the age of 31. Rafa mentored me, teaching me all about cameras, lenses and filters, and filled my brain with curiosity about all the technical stuff (signal, codecs, workflows). From there to now, it has all been a bit of a rollercoaster, with some moments of real vertigo caused by how fast it has all developed.

Downton Abbey

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We work on a lot of features and television in the UK and Europe — recent projects include Cats, Downton Abbey, Cursed and Criminal.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
In 2018, I was the HDR image supervisor for the World Cup in Moscow. Knowing the popularity of football and working on a project that would be seen by so many people around the world was truly an honor, despite the pressure!

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
A good reference monitor, a good set of speakers and Spotify.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, music is a huge part of my life. I have very varied taste. For example, I enjoy Wilco, REM and Black Sabbath.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I like to walk by the River Thames in Hammersmith, London, near where I live.

Game of Thrones’ Emmy-nominated visual effects

By Iain Blair

Once upon a time, only glamorous movies could afford the time and money it took to create truly imaginative and spectacular visual effects. Meanwhile, television shows either tried to avoid them altogether or had to rely on hand-me-downs. But the digital revolution changed all that, with technological advances and new tools quickly leveling the playing field. Today, television is giving the movies a run for their money when it comes to sophisticated visual effects, as evidenced by HBO’s blockbuster series, Game of Thrones.

Mohsen Mousavi

This fantasy series was recently Emmy-nominated a record-busting 32 times for its eighth and final season — including one for its visually ambitious VFX in the penultimate episode, “The Bells.”

The epic mass destruction presented Scanline’s VFX supervisor, Mohsen Mousavi, and his team with many challenges. But his expertise in high-end visual effects and his reputation for constant innovation in advanced methodology made him a perfect fit to oversee Scanline’s VFX for the crucial last three episodes of the final season of Game of Thrones.

Mousavi started his VFX career in the field of artificial intelligence and advanced physics-based simulations. He spearheaded the design and development of many proprietary toolsets and pipelines for crowd, fluid and rigid body simulation, including FluidIT, BehaveIT and CardIT, a node-based crowd choreography toolset.

Prior to joining Scanline VFX Vancouver, Mousavi rose through the ranks of top visual effects houses, working in jobs that ranged from lead effects technical director to CG supervisor and, ultimately, VFX supervisor. He’s been involved in such high-profile projects as Hugo, The Amazing Spider-Man and Sucker Punch.

In 2012, he began working with Scanline, acting as digital effects supervisor on 300: Rise of an Empire, for which Scanline handled almost 700 water-based sea battle shots. He then served as VFX supervisor on San Andreas, helping develop the company’s proprietary city-generation software. That software and pipeline were further developed and enhanced for scenes of destruction in director Roland Emmerich’s Independence Day: Resurgence. In 2017, he served as the lead VFX supervisor for Scanline on the Warner Bros. shark thriller, The Meg.

I spoke with Mousavi about creating the VFX and their pipeline.

Congratulations on being Emmy-nominated for “The Bells,” which showcased so many impressive VFX. How did all your work on Season 4 prepare you for the big finale?
We were heavily involved in the finale of Season 4; however, the scope was far smaller. What we learned was the collaboration and the nature of the show, and what the expectations were in terms of the quality of the work and what HBO wanted.

You were brought onto the project by lead VFX supervisor Joe Bauer, correct?
Right. Joe was the “client VFX supervisor” on the HBO side and was involved since Season 3. Together with my producer, Marcus Goodwin, we also worked closely with HBO’s lead visual effects producer, Steve Kullback, who I’d worked with before on a different show and in a different capacity. We all had daily sessions and conversations, a lot of back and forth, and Joe would review the entire work, give us feedback and manage everything between us and other vendors, like Weta, Image Engine and Pixomondo. This was done both technically and creatively, so no one stepped on each other’s toes if we were sharing a shot and assets. But it was so well-planned that there wasn’t much overlap.

[Editor’s Note: Here is the full list of those nominated for their VFX work on Game of Thrones — Joe Bauer, lead visual effects supervisor; Steve Kullback, lead visual effects producer; Adam Chazen, visual effects associate producer; Sam Conway, special effects supervisor; Mohsen Mousavi, visual effects supervisor; Martin Hill, visual effects supervisor; Ted Rae, visual effects plate supervisor; Patrick Tiberius Gehlen, previz lead; and Thomas Schelesny, visual effects and animation supervisor.]

What were you tasked with doing on Season 8?
We were involved as one of the lead vendors on the last three episodes and covered a variety of sequences. In Episode 4, “The Last of the Starks,” we worked on the confrontation between Daenerys and Cersei in front of King’s Landing’s gate, which included a full CG environment of the city gate and the landscape around it, as well as Missandei’s death sequence, which featured a full CG Missandei. We also did the animated Drogon outside the gate while the negotiations took place.

Then for “The Bells” we were responsible for most of the Battle of King’s Landing, which included the full digital city, Daenerys’ army camp site outside the walls of King’s Landing, the gathering of soldiers in front of the King’s Landing walls, Dany’s attack on the scorpions, the city gate, the streets and the Red Keep, which had some very close-up set extensions, close-up fire and destruction simulations and full CG crowds of various factions — armies and civilians. We also did the iconic Cleganebowl fight between The Hound and The Mountain and Jaime Lannister’s fight with Euron at the beach underneath the Red Keep. In Episode 5, we received raw animation caches of the dragon from Image Engine and did the full look-dev, lighting and rendering of the final dragon in our composites.

For the final episode, “The Iron Throne,” we were responsible for the entire Daenerys speech sequence, which included a full 360 digital environment of the city aftermath and the Red Keep plaza filled with digital Unsullied, Dothraki and CG horses, leading into the majestic confrontation between Jon and Drogon, where the dragon reveals itself from underneath a huge pile of snow outside the Red Keep. We were also responsible for the iconic throne melt sequence, which included some advanced simulation of highly viscous fluid and destruction of the area around the throne, finishing the dramatic sequence with Drogon carrying Dany out of the throne room and away from King’s Landing into the unknown.

Where was all this work done?
The majority of the work was done here in Vancouver, which is the biggest Scanline office. Additionally, we had teams working in our Munich, Montreal and LA offices. We’re a 100% connected company, all working under the same infrastructure in the same pipeline. So if I work with the team in Munich, it’s like they’re sitting in the next room. That allows us to set up and attack the project with a larger crew and get the benefit of the 24/7 scenario; as we go home, they can continue working, and it makes us far more productive.

How many VFX did you have to create for the final season?
We worked on over 600 shots across the final three episodes, which comes to over an hour of screen time of high-end, consistent visual effects.

Isn’t that hour length unusual for 600 shots?
Yes, but we had a number of shots that were really long, including some ground coverage shots of Arya in the streets of King’s Landing that ran four or five minutes. So we had the complexity along with the long duration.

How many people were on your team?
At the height, we had about 350 artists on the project, and we began in March 2018 and didn’t wrap till nearly the end of April 2019 — so it took us over a year of very intense work.

Tell us about the pipeline specific to Game of Thrones.
Scanline has an industry-wide reputation for delivering very complex, full CG environments combined with complex simulation scenarios of all sorts of fluid dynamics and destruction, based on our simulation framework, Flowline. We had a high-end digital character and hero creature pipeline that gave the final three episodes a boost up front. What was new were the additions to our procedural city generation pipeline for the recreation of King’s Landing, making sure it could deliver both in wide-angle shots and in extreme close-up set extensions.

How did you do that?
We used a framework we developed for Independence Day: Resurgence — a module-based procedural city generator that leveraged some incredible scans of the historical city of Dubrovnik as a blueprint and foundation for King’s Landing. Instead of modeling conventionally, you model a lot of small modules, kind of like Lego blocks. You create various windows, stones, doors, shingles and so on, and once it’s encoded in the system, you can semi-automatically generate variations of buildings on the fly. That also goes for texturing. We had procedurally generated layers of façade textures, which gave us a lot of flexibility in texturing the entire city, with full control over the level of aging and damage. We could easily decide to make a block look older without going back to square one. That’s how we could create King’s Landing with its hundreds of thousands of unique buildings.
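
[Editor’s note: the module-based approach Mousavi describes can be illustrated with a toy generator: a small library of parts, seeded random assembly so every building is unique but reproducible, and an aging parameter that can be re-dialed per block without remodeling. The module names and data layout below are invented for illustration; Scanline’s pipeline is proprietary.]

```python
import random

# The "Lego blocks": a small library of interchangeable modules.
MODULES = {
    "wall":   ["plain", "stone", "plaster"],
    "window": ["arched", "square", "shuttered"],
    "door":   ["wood", "iron"],
    "roof":   ["shingle", "tile", "thatch"],
}

def make_building(seed: int, floors: int, bays: int, age: float) -> dict:
    """Assemble one unique-but-reproducible building; `age` in [0, 1]
    drives procedural weathering/damage on the facade textures."""
    rng = random.Random(seed)
    return {
        "wall": rng.choice(MODULES["wall"]),
        "roof": rng.choice(MODULES["roof"]),
        "door": rng.choice(MODULES["door"]),
        "facade": [[rng.choice(MODULES["window"]) for _ in range(bays)]
                   for _ in range(floors)],
        "weathering": age,
    }

# Generate a city block on the fly, then age it without remodeling:
block = [make_building(seed=i, floors=3, bays=4, age=0.2) for i in range(40)]
for b in block:
    b["weathering"] = 0.7   # "make this block look older" in one pass
```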

The same technology was applied to the aftermath of the city in Episode 6. We took the intact King’s Landing and ran a number of procedural collapsing simulations on the buildings to get the correct weight based on references from the bombed city of Dresden during WWII, and then we added procedurally created CG snow on the entire city.

It didn’t look like the usual matte paintings were used at all.
You’re right, and there were a lot of shots that normally would be done that way, but to Joe’s credit, he wanted to make sure the environments weren’t cheated in any way. That was a big challenge, to keep everything consistent and accurate. Even if we used traditional painting methods, it was all done on top of an accurate 3D representation with correct lighting and composition.

What other tools did you use?
We use Autodesk Maya for all our front-end departments, including modeling, layout, animation, rigging and creature effects, and we bridge the results to Autodesk 3ds Max, which encapsulates our look-dev/FX and rendering departments, powered by Flowline and Chaos Group’s V-Ray as our primary render engine, followed by Foundry’s Nuke as our main compositing package.

At the heart of our crowd pipeline we use Massive, and our creature department is driven by Ziva muscles, a collaboration we started with Ziva Dynamics for the creation of the hero Megalodon in The Meg.

Fair to say that your work on Game of Thrones was truly cutting-edge?
Game of Thrones has pushed the limit above and beyond and has effectively erased the TV/feature line. In terms of environment and effects and the creature work, this is what you’d do for a high-end blockbuster for the big screen. No difference at all.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

MovieLabs, film studios release ‘future of media creation’ white paper

MovieLabs (Motion Pictures Laboratories), a nonprofit technology research lab that works jointly with member studios Sony, Warner Bros., Disney, Universal and Paramount, has published a new white paper presenting an industry vision for the future of media creation technology by 2030.

The paper, co-authored by MovieLabs and technologists from Hollywood studios, paints a bold picture of future technology and discusses the need for the industry to work together now on innovative new software, hardware and production workflows to support and enable new ways to create content over the next 10 years. The white paper is available today for free download on the MovieLabs website.

The 2030 Vision paper lays out key principles that will form the foundation of this technological future, with examples and a discussion of the broader implications of each. The key principles envision a future in which:

1. All assets are created or ingested straight to the cloud and do not need to move.
2. Applications come to the media.
3. Propagation and distribution of assets is a “publish” function.
4. Archives are deep libraries with access policies matching speed, availability and security to the economics of the cloud.
5. Preservation of digital assets includes the future means to access and edit them.
6. Every individual on a project is identified and verified and their access permissions are efficiently and consistently managed.
7. All media creation happens in a highly secure environment that adapts rapidly to changing threats.
8. Individual media elements are referenced, tracked, interrelated and accessed using a universal linking system.
9. Media workflows are non-destructive and dynamically created using common interfaces, underlying data formats and metadata.
10. Workflows are designed around realtime iteration and feedback.

Rich Berger

“The next 10 years will bring significant opportunities, but there are still major challenges and inherent inefficiencies in our production and distribution workflows that threaten to limit our future ability to innovate,” says Richard Berger, CEO of MovieLabs. “We have been working closely with studio technology leaders and strategizing how to integrate new technologies that empower filmmakers to create ever more compelling content with more speed and efficiency. By laying out these principles publicly, we hope to catalyze an industry dialog and fuel innovation, encouraging companies and organizations to help us deliver on these ideas.”

The publication of the paper will be supported with a panel discussion at the IBC Conference in Amsterdam. The panel, “Hollywood’s Vision for the Future of Production in 2030,” will include senior technology leaders from the five major Hollywood motion picture studios. It will take place on Sunday, September 15, at 2:15pm in the Forum room of the RAI. postPerspective’s Randi Altman will moderate the panel, made up of Sony’s Bill Baggelaar, Disney’s Shadi Almassizadeh, Universal’s Michael Wise and Paramount’s Anthony Guarino. More details can be found here.

“Sony Pictures Entertainment has a deep appreciation for the role that current and future technologies play in content creation,” says CTO of Sony Pictures Don Eklund. “As a subsidiary of a technology-focused company, we benefit from the power of Sony R&D and Sony’s product groups. The MovieLabs 2030 document represents the contribution of multiple studios to forecast and embrace the impact that cloud, machine learning and a range of hardware and software will have on our industry. We consider this a living document that will evolve over time and provide appreciated insight.”

According to Wise, SVP/CTO at Universal Pictures, “With film production experiencing unprecedented growth, and new innovative forms of storytelling capturing our audiences’ attention, we’re proud to be collaborating across the industry to envision new technological paradigms for our filmmakers so we can efficiently deliver worldwide audiences compelling entertainment.”

For those not familiar with MovieLabs, their stated goal is “to enable member studios to work together to evaluate new technologies and improve quality and security, helping the industry deliver next-generation experiences for consumers, reduce costs and improve efficiency through industry automation, and derive and share the appropriate data necessary to protect and market the creative assets that are the core capital of our industry.”

Technicolor adds Patrick Smith, Steffen Wild to prepro studio

Technicolor has added Patrick Smith to head its visualization department, partnering with filmmakers to help them realize their vision in a digital environment before they hit the set. By helping clients define lensing, set dimensions, asset placement and even precise on-set camera moves, Smith and his team will play a vital role in helping clients plan their shoots in the virtual environment in ways that feel completely natural and intuitive to them. He reports to Kerry Shea, who heads Technicolor’s Pre-Production Studio.

“By enabling clients to leverage the latest visualization technologies and techniques while using hardware similar to what they are already familiar with, Patrick and his team will empower filmmakers by ensuring their creative visions are clearly defined at the very start of their projects — and remain at the heart of everything they do from their first day on set, to take their stories to the next level,” explains Shea. “Bringing visualization and the other areas of preproduction together under one roof removes redundancy from the filmmaking process, which, in turn, reduces stress on the storytellers and allows them as much time as possible to focus on telling their story. Until now, preproduction has been a divided and inefficient process involving different vendors and repeated steps. Bringing those worlds together and making it a seamless, start-to-finish process is a game changer.”

Smith has held a number of senior positions within the industry, including most recently as creative director/senior visualization supervisor at The Third Floor. He has worked on titles such as Bumblebee, Avengers: Infinity War, Spider-Man: Homecoming, Guardians of the Galaxy Vol. 2 and The Secret Life of Walter Mitty.

“Visualization used to involve deciding roughly what you plan to do on set. Today, you can plan out precisely how to achieve your vision on set down to the inch — from the exact camera lens to use, to exactly how much dolly track you’ll need, to precisely where to place your actors,” he says. “Visualization should be viewed as the director’s paint brush. It’s through the process of visualization that directors can visually explore and design their characters and breathe life into their story. It’s a sandbox where they can experiment, play and perfect their vision before the pressure of being on set.”

In other Technicolor news, last week the studio announced that Steffen Wild has joined as head of its virtual production department. “As head of virtual production, Wild will help drive the studio’s approach to efficient filmmaking by bringing previously separate departments together into a single pipeline,” says Shea. “We currently see what used to be separate departments merging. For example, previz, techviz and postviz, which were all separate ways to find answers to production questions, are now collaborating in virtual production.”

Wild has over 20 years of experience, including 10 years spearheading Jim Henson’s Creature Shop’s expanding efforts in innovative animation technologies, virtual studio productions and new ways of visual storytelling. As SVP of digital puppetry and visual effects at the Creature Shop, Wild crafted new production techniques using proprietary game engine technologies. He brings with him in-depth knowledge of global and local VFX and animation production, rapid prototyping and cloud-based entertainment projects. In addition to his role in the development of next-generation cinematic technologies, he has set up VFX/animation studios in the US, China and southeast Europe.

Main Image: (L-R) Patrick Smith and Steffen Wild

FilmLight sets speakers for free Color On Stage seminar at IBC

At this year’s IBC, FilmLight will host a free two-day seminar series, Color On Stage, on September 14 and 15. The event features live presentations and discussions with colorists and other creative professionals, covering topics ranging from the colorist today to understanding color management and next-generation grading tools.

“Color on Stage offers a good platform to hear about real-world interaction between colorists, directors and cinematographers,” explains Alex Gascoigne, colorist at Technicolor and one of this year’s presenters. “Particularly when it comes to large studio productions, a project can take place over several months and involve a large creative team and complex collaborative workflows. This is a chance to find out about the challenges involved with big shows and demystify some of the more mysterious areas in the post process.”

This year’s IBC program includes colorists from broadcast, film and commercials, as well as DITs, editors, VFX artists and post supervisors.

Program highlights include:
•    Creating the unique look for Mindhunter Season 2
Colorist Eric Weidt will talk about his collaboration with director David Fincher — from defining the workflow to creating the look and feel of Mindhunter. He will break down scenes and run through color grading details of the masterful crime thriller.

•    Realtime collaboration on the world’s longest running continuing drama, ITV Studios’ Coronation Street
The session will address improving production processes and enhancing pictures with efficient renderless workflows, with colorist Stephen Edwards, finishing editor Tom Chittenden and head of post David Williams.

•    Looking to the future: Creating color for the TV series Black Mirror
Colorist Alex Gascoigne of Technicolor will explain the process behind grading Black Mirror, including the interactive episode Bandersnatch and the latest Season 5.

•    Bollywood: A World of Color
This session will delve into the Indian film industry with CV Rao, technical general manager at Annapurna Studios in Hyderabad. In this talk, CV will discuss grading and color as exemplified by the hit film Baahubali 2: The Conclusion.

•    Joining forces: Strengthening VFX and finishing with the BLG workflow
Mathieu Leclercq, head of post at Mikros Image in Paris, will be joined by colorist Sebastian Mingam and VFX supervisor Franck Lambertz to showcase their collaboration on recent projects.

•    Maintaining the DP’s creative looks from set to post
Meet with French DIT Karine Feuillard, ADIT — who worked on the latest Luc Besson film Anna as well as the TV series The Marvelous Mrs. Maisel — and FilmLight workflow specialist Matthieu Straub.

•    New color management and creative tools to make multi-delivery easier
The latest and upcoming Baselight developments, including a host of features aimed at simplifying delivery for emerging technologies such as HDR. With FilmLight’s Martin Tlaskal, Daniele Siragusano and Andy Minuth.

Color On Stage will take place in Room D201 on the second floor of the Elicium Centre (Entrance D), close to Hall 13. The event is free to attend, but spaces are limited. Registration is available here.

DP Chat: Dopesick Nation cinematographer Greg Taylor

By Randi Altman

Dopesick Nation is a documentary series on Vice Media’s Viceland that follows two recovering heroin addicts, Frankie and Allie, in South Florida as they try to help others while taking a look at corruption and exploitation in the rehab industry. The series was inspired by the feature film American Relapse.

Dopesick Nation

As you might imagine, the shoot was challenging, often taking place at night and in dubious locales, but cinematographers Greg Taylor and Mike Goodman were up for the challenge. Both had worked with series co-creator/executive producer Patrick McGee previously and were happy to collaborate once more.

We reached out to DP Taylor to talk about working with McGee and Goodman and the show’s workflow.

Tell us about Dopesick Nation. How early did you get involved in this series, and how did you work with the director?
Pat McGee tapped Mike Goodman and me to shoot American Relapse. We were just coming off another show and had a finely tuned team ready to spend long nights on this new project. The movie turned out to have the familiar gritty feel you see in the show, but in a feature documentary format.

I imagine it was a natural progression to use us again once the TV show was greenlit by Viceland. Pat would keep on our heels to find the best moments for every story and would push us to go out and produce intimate moments with the subjects on the fly. He and producer Adam Linkenhelt (American Relapse) were with us almost every step of the way, offering advice, watching our backs and looking out for incoming storylines. Honestly, I can’t say enough good things about that whole crew.

(L-R) Mike Goodman, supervising producer Adam Linkenhelt and showrunner Pat McGee (Photo by Greg Taylor)

How did you work with fellow DP Mike Goodman? How did you divvy up the shots?
Mike and I have worked long enough together that we have an efficient shorthand. A gesture or look can set up an entire scene sometimes, and I often can’t tell my shots from his. We both put a lot of effort into creativity in our imagery and pushing the bar as much as we can handle. During rare downtimes, we might brainstorm on a new way to shoot b-roll or decide what “gritty” should look and feel like.

Covering the often late and challenging days took a bit of baton-passing back and forth. Some days, we would split up and shoot single camera as well. It was decided at some point that I would cover more of Frankie’s story, while Mike would cover Allie. When the two met up at the end of the day, we would cover them together. Most of the major scenes we shot together, but there were times when too much was happening to cover it all. We were really in the addicts’ world, so some events were completely unexpected.

How would you describe the look of the doc?
I’d say gritty would be the best single word, but that can be nuanced quite a bit. There was an overall aim to keep some frames dirty during dialogue scenes to achieve a slightly voyeuristic feel but still leave lots of room for intimate, in-your-face, bam-type moments when the story dictated. We always paid attention to our backgrounds, and there was a focus on the contrast between beautiful southeast Florida and the dark underbelly lurking just next to it. The show had to be so real that no one would ever question the legitimacy of what we were showing. No-filter, behind-the-veil type thinking in every shot.

Dopesick Nation

How does your process change when shooting a documentary versus a fictional piece? Or does it not?
Story is king, and I’d say character arcs for the feature American Relapse were different from the TV version. In the film, we gave an overview of the treatment industry told through the eyes of our two main characters, Allie and Frank. It is structured somewhat around their typical day and sit-down interviews.

The TV show did not have formal interviews but did allow us to dig deeper into accounts from individuals with addiction, the world they live in and the hosts themselves. The 10 one-hour episodes and three-plus months spent shooting gave us a little more time to build up a library of transition pieces and specialty b-roll.

Where was it shot?
Almost all of the shooting took place in and around southeast Florida. A few short scenes were picked up in Ohio and LA.

How did you go about choosing the right camera and lenses for this project? Can you talk about camera tests?
It’s funny because Mike and I both independently came up with using the Panasonic VariCam LT after the director came to us asking what we wanted to shoot with. We chatted and decided that we needed solutions for potentially tougher nighttime setups than we had been used to. When we gathered for a meeting and started up the gear list, Mike and I both had the LT on the top of our requests.

Dopesick Nation

I think that signaled to the preproduction team that we were unanimous on the best system to use, and production manager Keith Plant made it happen. I had seen the camera in action at NAB and watched some tests a friend had shot on it a few months before. I was easily sold on its rich blacks and dual native ISO. That camera could see into the dark and wasn’t so heavy that we would collapse at the end of the day; it worked out very well.

Can you talk about the lighting and how the camera worked while grabbing shots when you could?
Lighting on this show was minimal, but we did use fills and background illumination to enhance some scenes. Working mostly at night — in dubious surroundings — often meant we couldn’t light scenes. Lights bring unwanted attention to the crew and subjects, and we found it changed the feel of the scene in a negative way.

Using the available light at each location quickly became fundamentally important to maintaining the unfiltered nature of the show. Every bright spot in the darkness was carefully considered, and if we could pull subjects slightly toward a source, even to get 1/3 of a stop more, we would take it.

Any challenging scenes that you are particularly proud of or found most challenging?
There were a lot of scenes that were challenging to shoot technically, but that happens on any project. You don’t always want to see what you are standing next to, but the story needs to be told. There are a lot of people out there really struggling with addiction, and it can be really painful to watch. Being present with everyone and being real with them had to be in your mind constantly. I kept thinking the whole time, “Am I doing them justice? What can I do better? How can I help?”

DPs Mike Goodman and Greg Taylor shoot Allie interviewing one of the subjects (Photo by Tara Sarzen)

Let’s move on to some more general questions. How did you become interested in cinematography?
I’ve always loved working with celluloid and photography and was brought up with a darkroom in the house. I remember taking a filmmaking summer camp when I was 14 in Oxford, Mississippi, and was basically blown away. I’ve been aiming for a career in cinematography ever since.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
Artistically, I love Dali, Picasso and the works of Caravaggio and Rembrandt. The way light plays in a chiaroscuro painting is really something worth studying, and it isn’t easy to replicate.

I like to try to pay homage to the films I enjoy and artworks I’ve visited by incorporating some of their ideas into my own work. With film cameras, things changed more slowly over the years, and it was often the film stock that became the technological advancement of its day. Granular structure turned to crystal structures, higher ISO/ASA speeds were achieved, color reproduction improved. The same goes for the new camera systems coming out. Sensors are the new film stock. You pick what is appropriate to the story.

What new technology has changed the way you work?
I rarely go anywhere nowadays without a drone. The advancements in drone technology have changed the aerial world entirely, and I’m happy to see these new angles open up in an increasingly responsible and licensed way.

DP Greg Taylor shooting in SE Florida. (Photo by Evan Parquette)

Gimbals are a game changer in the way the Steadicam came onto the scene, and I don’t expect them to go anywhere. Also motion-control devices and newer, more sensitive sensors are certainly fitting the bill of ever-evolving and improving tech.

What are some of your best practices or rules you try to follow on each job?
Be aware and attentive of your surroundings and safety. Treat others with respect. Maintain a professional attitude under stress. If you are five minutes early, you’re late.

Explain your ideal collaboration with the director when setting the look of a project.
I love discussing what the heart of the script or concept really means and trying to find the deeper connection with how it can be told visually. That means referencing other films/art/TV we both have experience with and finding a common language that makes sense for the vision.

What’s your go-to gear — things you can’t live without?
I have an old Nikkor 55mm f/1.2 lens I love, and I often shoot personal projects on vintage prime glass. The edges aren’t quite as sharp as on modern lenses, so in the case of the 55mm, you get a lovely yet subtle sharpness vignette along with a warm overall feel.

It’s great for interviews because it softens the digital crispness newer sensors exhibit without the noticeable changes you might see with certain filtration. The Hip Shot belt has been one of my best friends lately; it saves you on long days and during those low, long dialogue scenes when you’re handholding on seated subjects.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

True Detective’s quiet, tense Emmy-nominated sound

By Jennifer Walden

When there’s nothing around, there’s no place to hide. That’s why quiet soundtracks can be the most challenging to create. Every flaw in the dialogue — every hiss, every off-mic head turn, every cloth rustle against the body mic — stands out. Every incidental ambient sound — bugs, birds, cars, airplanes — stands out. Even the noise-reduction processing to remove those flaws can stand out, particularly when there’s a minimalist approach to sound effects and score.

That’s why the sound editing and mixing on Season 3 of HBO’s True Detective have been recognized with Emmy nominations. The sound team put together a quiet, tense soundtrack that perfectly matched the tone of the show.

L to R: Micah Loken, Tateum Kohut, Mandell Winter, David Esparza and Greg Orloff.

We reached out to the team at Sony Pictures Post Production Services to talk about the work — supervising sound editor Mandell Winter; sound designer David Esparza, MPSE; dialogue editor Micah Loken; as well as re-recording mixers Tateum Kohut and Greg Orloff (who mixed the show in 5.1 surround on an Avid S6 console at Deluxe Hollywood Stage 5).

Of all the episodes in Season 3 of True Detective, why did you choose “The Great War and Modern Memory” for award consideration for sound editing?
Mandell Winter: This episode had a little bit of everything. We felt it represented the season pretty well.

David Esparza: It also sets the overall tone of the season.

Why this episode for sound mixing?
Tateum Kohut: The episode had very creative transitions, and it set up the emotion of our main characters. It establishes the three timelines that the season takes place in. Even though it didn’t have the most sound or the most dynamic sound, we chose it because, overall, we were pleased with the soundtrack, as was HBO. We were all pleased with the outcome.

Greg Orloff: We looked at Episode 5 too, “If You Have Ghosts,” which had a great seven-minute set piece with great action and cool transitions. But overall, Episode 1 was more interesting sonically. As an episode, it had great transitions and tension all throughout, right from the beginning.

Let’s talk about the amazing dialogue on this show. How did you get it so clean while still retaining all the quality and character?
Winter: Geoffrey Patterson was our production sound mixer, and he did a great job capturing the tracks. We didn’t do a ton of ADR because our dialogue editor, Micah Loken, was able to do quite a bit with the dialogue edit.

Micah Loken: Both the recordings and acting were great. That’s one of the most crucial steps to a good dialogue edit. The lead actors — Mahershala Ali and Stephen Dorff — had beautiful and engaging performances and excellent resonance to their voices. Even at a low-level whisper, the character and quality of the voice was always there; it was never too thin. By using the boom, the lav, or a special combination of both, I was able to dig out the timbre while minimizing noise in the recordings.

What helped me most was Mandell and I had the opportunity to watch the first two episodes before we started really digging in, which provided a macro view into the content. Immediately, some things stood out, like the fact that it was wall-to-wall dialogue on each episode, and that became our focus. I noticed that on-set it was hot; the exterior shots were full of bugs and the actors would get dry mouths, which caused them to smack their lips — which is commonly over-accentuated in recordings. It was important to minimize anything that wasn’t dialogue while being mindful to maintain the quality and level of the voice. Plus, the story was so well-written that it became a personal endeavor to bring my A game to the team. After completion, I would hand off the episode to Mandell and our dialogue mixer, Tateum.

Kohut: I agree. Geoffrey Patterson did an amazing job. I know he was faced with some challenges and environmental issues there in northwest Arkansas, especially on the exteriors, but his tracks were superbly recorded.

Mandell and Micah did an awesome job with the prep, so it made my job very pleasurable. Like Micah said, the deep booming voices of our two main actors were just amazing. We didn’t want to go too far with noise reduction in order to preserve that quality, and it did stand out. I did do more de-essing and de-ticking using iZotope RX 7 and FabFilter Pro-Q 2 to knock down some syllables and consonants that were too sharp, because we had so much close-up, full-frame face dialogue that we didn’t want anything distracting from the story and the great performances they were giving. But very little noise reduction was needed due to the well-recorded tracks. So my job was an absolute pleasure on the dialogue side.

Their editing work gave me more time to focus on the creative mixing, like weaving in the music just the way that series creator Nic Pizzolatto and composer T Bone Burnett wanted, and working with Greg Orloff on all these cool transitions.

We’re all very happy with the dialogue on the show and very proud of our work on it.

Loken: One thing that I wanted to remain cognizant of throughout the dialogue edit was making sure that Tateum had a smooth transition from line to line on each of the tracks in Pro Tools. Some lines might have had more intrinsic bug sounds or unwanted ambience but, in general, during the moments of pause, I knew the background ambience of the show was probably going to be fairly mild and sparse.

Mandell, how does your approach to the dialogue on True Detective compare to Deadwood: The Movie, which also earned Emmy nominations this year for sound editing and mixing?
Winter: Amazingly enough, we had the same production sound mixer on both — Geoffrey Patterson. That helps a lot.

We had more time on True Detective than on Deadwood. Deadwood was just “go.” We did the whole film in about five or six weeks. For True Detective, we had 10 days of prep time before we hit a five-day mix. We also had less material to get through on an episode of True Detective within that time frame.

Going back to the mix on the dialogue, how did you get the whispering to sound so clear?
Kohut: It all boils down to how well the dialogue was recorded. We were able to preserve that whispering and get a great balance around it. We didn’t have to force anything through. So, it was well-recorded, well-prepped and it just fit right in.

Let’s talk about the space around the dialogue. What was your approach to world building for “The Great War And Modern Memory?” You’re dealing with three different timelines from three different eras: 1980, 1990, and 2015. What went into the sound of each timeline?
Orloff: It was tough in a way because the different timelines overlapped sometimes. We’d have a transition happening, but with the same dialogue. So the challenge became how to change the environments on each of those cuts. One thing that we did was to make the show as sparse as possible, particularly after the discovery of the body of the young boy Will Purcell (Phoenix Elkin). After that, everything in the town becomes quiet. We tried to take out as many birds and bugs as possible, as though the town had died along with the boy. From that point on, anytime we were in that town in the original timeline, it was dead-quiet. As we went on later, we were able to play different sounds for that location, as though the town is recovering.

The use of sound on True Detective is very restrained. Were the decisions on where to have sound and how much sound happening during editorial? Or were those decisions mostly made on the dub stage when all the elements were together? What were some factors that helped you determine what should play?
Esparza: Editorially, the material was definitely prepared with a minimalistic aesthetic in mind. I’m sure it got pared down even more once it got to the mix stage. The aesthetic of the True Detective series in general tends to be fairly minimalistic and atmospheric, and we continued with that in this third season.

Orloff: That’s purposeful, from the filmmakers on down. It’s all about creating tension. Sometimes silence does more to create tension than a sound would. Between music and sound effects, this show is all about tension. From the very beginning, from the first frame, it starts and never really lets up. That was our mission all along, to keep that tension. I hope that we achieved that.

That first episode — “The Great War And Modern Memory” — was intense even the first time we played it back, and I’ve seen it numerous times since, and it still elicits the same feeling. That’s the mark of great filmmaking and storytelling and hopefully we helped to support that. The tension starts there and stays throughout the season.

What was the most challenging scene for sound editorial in “The Great War And Modern Memory?” Why?
Winter: I would say it was the opening sequence with the kids riding the bikes.

Esparza: It was a challenge to get the bike spokes ticking and deciding what was going to play and what wasn’t going to play and how it was going to be presented. That scene went through a lot of work on the mix stage, but editorially, that scene took the most time to get right.

What was the most challenging scene to mix in that episode? Why?
Orloff: For the effects side of the mix, the most challenging part was the opening scene. We worked on that longer than any other scene in that episode. That first scene is really setting the tone for the whole season. It was about getting that right.

We had brilliant sound design for the bike spokes ticking that transitions into a watch ticking that transitions into a clock ticking. Even though there’s dialogue that breaks it up, you’re continuing with different transitions of the ticking. We worked on that both editorially and on the mix stage for a long time. And it’s a scene I’m proud of.

Kohut: That first scene sets up the whole season — the flashback, the memories. It was important to the filmmakers that we got that right. It turned out great, and I think it really sets up the rest of the season and the intensity that our actors have.

What are you most proud of in terms of sound this season on True Detective?
Winter: I’m most proud of the team. The entire team elevated each other and brought their A-game all the way around. It all came together this season.

Orloff: I agree. I think this season was something we could all be proud of. I can’t be complimentary enough about the work of Mandell, David and their whole crew. Everyone on the crew was fantastic and we had a great time. It couldn’t have been a better experience.

Esparza: I agree. And I’m very thankful to HBO for giving us the time to do it right and spend the time, like Mandell said. It really was an intense emotional project, and I think that extra time really paid off. We’re all very happy.

Winter: One thing we haven’t talked about was T Bone and his music. It really brought a whole other level to this show. It brought a haunting mood, and he always brings such unique tracks to the stage. When Tateum would mix them in, the whole scene would take on a different mood. The music at times danced that thin line, where you weren’t sure if it was sound design or music. It was very cool.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Harbor expands to LA and London, grows in NY

New York-based Harbor has expanded into Los Angeles and London and has added staff and locations in New York. Industry veteran Russ Robertson joins Harbor’s new Los Angeles operation as EVP of sales, features and episodic after a 20-year career with Deluxe and Panavision. Commercial director James Corless and operations director Thom Berryman will spearhead Harbor’s new UK presence following careers with Pinewood Studios, where they supported clients such as Disney, Netflix, Paramount, Sony, Marvel and Lucasfilm.

Harbor’s LA-based talent pool includes color grading from Yvan Lucas, Elodie Ichter, Katie Jordan and Billy Hobson. Some of the team’s projects include Once Upon a Time … in Hollywood, The Irishman, The Hunger Games, The Maze Runner, Maleficent, The Wolf of Wall Street, Snow White and the Huntsman and Rise of the Planet of the Apes.

Paul O’Shea, formerly of MPC Los Angeles, heads the visual effects teams, tapping lead CG artist Yuichiro Yamashita for 3D out of Harbor’s Santa Monica facility and 2D creative director Q Choi out of Harbor’s New York office. The VFX artists have worked with brands such as Nike, McDonald’s, Coke, Adidas and Samsung.

Harbor’s Los Angeles studio supports five grading theaters for feature film, episodic and commercial productions, offering private connectivity to Harbor NY and Harbor UK, with realtime color-grading sessions, VFX reviews and options to conform and final-deliver in any location.

The new UK operation, based out of London and Windsor, will offer in-lab and near-set dailies services along with automated VFX pulls and delivery through Harbor’s Anchor system. The UK locations will draw from Harbor’s US talent pool.

Meanwhile, the New York operation has grown its talent roster and Soho footprint to six locations, with a recently expanded offering for creative advertising. Veteran artists on the commercial team include editors Bruce Ashley and Paul Kelly, VFX supervisor Andrew Granelli, colorist Adrian Seery, and sound mixers Mark Turrigiano and Steve Perski.

Harbor’s feature and episodic offering continues to expand, with NYC-based artists available in Los Angeles and London.

Goosing the sound for Allstate’s action-packed ‘Mayhem’ spots

By Jennifer Walden

While there are some commercials you’d rather not hear, there are some you actually want to turn up, like those of Leo Burnett Worldwide’s “Mayhem” campaign for Allstate Insurance.

John Binder

The action-packed and devilishly hilarious ads have been going strong since April 2010. Mayhem (played by actor Dean Winters) is a mischievous guy who goes around breaking things that cut-rate insurance won’t cover. Fond of your patio furniture? Too bad for all that wind! Been meaning to fix that broken front porch step? Too bad the dog walker just hurt himself on it! Parked your car in the driveway and now it’s stolen? Too bad — and the thief hit your mailbox and motorcycle too!

Leo Burnett Worldwide’s go-to for “Mayhem” is award-winning post sound house Another Country, based in Chicago and Detroit. Sound designer/mixer John Binder (partner of Cutters Studios and managing director of Another Country) has worked on every single “Mayhem” spot to date. Here, he talks about his work on the latest batch: Overly Confident Dog Walker, Car Thief and Bunch of Wind. And Binder shares insight on a few of his favorites over the years.

In Overly Confident Dog Walker, Mayhem is walking an overwhelming number of dogs. He can barely see where he’s walking. As he’s going up the front stairs of a house, a brick comes loose, causing Mayhem to fall and hit his head. As Mayhem delivers his message, one of the dogs comes over and licks Mayhem’s injury.

Overly Confident Dog Walker

Sound-wise, what were some of your challenges or unique opportunities for sound on this spot?
A lot of these “Mayhem” spots have the guy put in ridiculous situations. There’s often a lot of noise happening during production, so we have to do a lot of clean up in post using iZotope RX 7. When we can’t get the production dialogue to sound intelligible, we hook up with a studio in New York to record ADR with Dean Winters. For this spot, we had to ADR quite a bit of his dialogue while he is walking the dogs.

For the dog sounds, I added my own dog in there. I recorded his panting (he pants a lot), the dog chain and straining sounds. I also recorded his licking for the end of the spot.

For when Mayhem falls and hits his head, we had a really great sound for him hitting the brick. It was wonderful. But we sent it to the networks, and they felt it was too violent. They said they couldn’t air it because of both the visual and the sound. So, instead of changing the visuals, it was easier to change the sound of his head hitting the brick step. We had to tone it down. It’s neutered.

What’s one sound tool that helped you out on Overly Confident Dog Walker?
In general, there’s often a lot of noise from location in these spots. So we’re cleaning that up. iZotope RX 7 is key!


In Bunch of Wind, Mayhem represents a windy rainstorm. He lifts the patio umbrella and hurls it through the picture window. A massive tree falls on the deck behind him. After Mayhem delivers his message, he knocks over the outdoor patio heater, which smashes on the deck.

Bunch of Wind

Sound-wise, what were some of your challenges or unique opportunities for sound on Bunch of Wind?
What a nightmare for production sound. This one, understandably, was all ADR. We did a lot of Foley work, too, for the destruction to make it feel natural. If I’m doing my job right, then nobody notices what I do. When we’re with Mayhem in the storm, all that sound was replaced. There was nothing from production there. So, the rain, the umbrella flapping, the plate-glass window, the tree and the patio heater, that was all created in post sound.

I had to build up the storm every time we cut to Mayhem. When we see him through the phone, it’s filtered with EQ. As we cut back and forth between on-scene and through the phone, it had to build each time we’re back on him. It had to get more intense.

What are some sound tools that helped you put the ADR into the space on screen?
Sonnox’s Oxford EQ helped on this one. That’s a good plugin. I also used Audio Ease’s Altiverb, which is really good for matching ambiences.


In Car Thief, Mayhem steals cars. He walks up onto a porch, grabs a decorative flagpole and uses it to smash the driver-side window of a car parked in the driveway. Mayhem then hot-wires the car and peels out, hitting a motorcycle and mailbox as he flees the scene.

Car Thief

Sound-wise, what were some of your challenges or unique opportunities for sound on Car Thief?
The location sound team did a great job of miking the car window break. When Mayhem puts the wooden flagpole through the car window, they really did that on-set, and the sound team captured it perfectly. It’s amazing. If you hear safety glass break, it’s not like a glass shatter. It has this texture to it. The car window break was the location sound, which I loved. I saved the sound for future reference.

What’s one sound tool that helped you out on Car Thief?
Jeff, the car owner in the spot, is at a sports game. You can hear the stadium announcer behind him. I used Altiverb on the stadium announcer’s line to help bring that out.

What have been your all-time favorite “Mayhem” spots in terms of sound?
I’ve been on this campaign since the start, so I have a few. There’s one called Mayhem is Coming! that was pretty cool. I did a lot of sound design work on the extended key scrape against the car door. Mayhem is in an underground parking garage, and so the key scrape reverberates through that space as he’s walking away.

Deer

Another favorite is Fast Food Trash Bag. The edit of that spot was excellent; the timing was so tight. Just when you think you’ve got the joke, there’s another joke and another. I used the Sound Ideas library for the bear sounds. And for the sound of Mayhem getting dragged under the cars, I can’t remember how I created that, but it’s so good. I had a lot of fun playing perspective on this one.

Often on these spots, the sounds we used were too violent, so we had to tone them down. On the first campaign, there was a spot called Deer. There’s a shot of Mayhem getting hit by a car as he’s standing there on the road like a deer in headlights. I had an excellent sound for that, but it was deemed too violent by the network.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Rob Legato to receive HPA’s Lifetime Achievement Award 

The Hollywood Professional Association (HPA) will honor renowned visual effects supervisor and creative Robert Legato with its Lifetime Achievement Award at the HPA Awards at the Skirball Cultural Center in Los Angeles on November 21. Now in its 14th year, the HPA Awards recognize creative artistry, innovation and engineering excellence in the media content industry. The Lifetime Achievement Award honors the recipients’ dedication to the betterment of the industry.

Legato is an iconic figure in the visual effects industry with multiple Oscar, BAFTA and Visual Effects Society nominations and awards to his credit. He is a multi-hyphenate on many of his projects, serving as visual effects supervisor, VFX director of photography and second unit director. From his work with studios and directors and in his roles at Sony Pictures Imageworks and Digital Domain, he has developed a variety of digital workflows.

He has enjoyed collaborations with leading directors including James Cameron, Jon Favreau, Martin Scorsese and Robert Zemeckis. Legato’s career in VFX began in television at Paramount Pictures, where he supervised visual effects on two Star Trek series, which earned him two Emmy awards. He left Paramount to join the newly formed Digital Domain where he worked with founders James Cameron, Stan Winston and Scott Ross. He remained at Digital Domain until he segued to Sony Imageworks.

Legato began his feature VFX career on Neil Jordan’s Interview with the Vampire. He then served as VFX supervisor and DP for the VFX unit on Ron Howard’s Apollo 13, which earned him his first Academy Award nomination and a BAFTA win. His work with James Cameron on Titanic earned him his first Academy Award. Legato continued to work with Cameron, conceiving and creating the virtual cinematography pipeline for Cameron’s visionary Avatar.

Legato has also enjoyed a long collaboration with Martin Scorsese that began with his consultation on Kundun and continued with the multi-award-winning film The Aviator, on which he served as co-second unit director/cameraman and VFX supervisor. Legato’s work on The Aviator won him three VES Awards. He returned to work with the director on the Oscar Best Picture winner The Departed as second unit director/cameraman and VFX supervisor. Legato and Scorsese collaborated once again on Shutter Island, on which he was both VFX supervisor and second unit director/cameraman. He continued on to Scorsese’s 3D film Hugo, which was nominated for 11 Oscars and 11 BAFTAs, including Best Picture and Best Visual Effects. Legato won his second Oscar for Hugo, as well as three more VES Awards. His collaboration with Scorsese continued with The Wolf of Wall Street, as well as with non-theatrical and advertising projects such as the Clio award-winning Freixenet: The Key to Reserva, a 10-minute commercial project, and the Rolling Stones feature documentary Shine a Light.

Legato worked with director Jon Favreau on Disney’s The Jungle Book (second unit director/cinematographer and VFX supervisor) for which he received his third Academy Award, a British Academy Award, five VES Awards, an HPA Award and the Critics’ Choice Award for Best Visual Effects for 2016. His latest film with Favreau is Disney’s The Lion King, which surpassed $1 billion in box office after fewer than three weeks in theaters.

Legato’s extensive credits include serving as VFX supervisor on Chris Columbus’ Harry Potter and the Sorcerer’s Stone, as well as on two Robert Zemeckis films, What Lies Beneath and Cast Away. He was senior VFX supervisor on Michael Bay’s Bad Boys II, which was nominated for a VES Award for Outstanding Supporting Visual Effects, and for Digital Domain he worked on Bay’s Armageddon.

Legato is a member of ASC, BAFTA, DGA, AMPAS, VES, and the Local 600 and Local 700 unions.

GLOW’s DP and colorist adapt look of new season for Vegas setting

By Adrian Pennington

Netflix’s Gorgeous Ladies of Wrestling (GLOW) are back in the ring for a third round of the dramatic comedy, but this time the girls are in Las Vegas. The glitz and glamour of Sin City seems tailor-made for the 1980s-set GLOW and provided the main creative challenge for Season 3 cinematographer Chris Teague (Russian Doll, Broad City).

DP Chris Teague

“Early on, I met with Christian Sprenger, who shot the first season and designed the initial look,” says Teague, who was recently nominated for an Emmy for his work on Russian Doll. “We still want GLOW to feel like GLOW, but the story and character arc of Season 3 and the new setting led us to build on the look and evolve elements like lighting and dynamic range.”

The GLOW team is headlining at the Fan-Tan Hotel & Casino, one of two main sets built for the series (the other is a hotel), with the distinctive Vegas skyline as a backdrop.

“We discussed compositing actors against greenscreen, but that would have turned every shot into a VFX shot and would have been too costly, not to mention time-intensive on a TV schedule like ours,” he says. “Plus, working with a backdrop just felt aesthetically right.”

In that vein, production designer Todd Fjelsted built a skyline using miniatures, a creative decision in keeping with the handcrafted look of the show. That decision, though, required extensive testing of lenses, lighting and look prior to shooting. This testing was done in partnership with post house Light Iron.

“There was no overall shift in the look of the show, but together with Light Iron, we felt the baseline LUT needed to be built on, particularly in terms of how we lit the sets,” explains Teague.

“Chris was clear early on that he wanted to build upon the look of the first two seasons,” says Light Iron colorist Ian Vertovec. “We adjusted the LUT to hold a little more color in the highlights than in past seasons. Originally, the LUT was based on a film emulation and adjusted for HDR. In Season 1, we created a period film look and transformed it for HDR to get a hybrid film emulation LUT. For Season 3, for HDR and standard viewing, we made tweaks to the LUT so that some of the colors would pop more.”

The show was also finished in Dolby Vision HDR. “There was some initial concern about working with backdrops and stages in HDR,” Teague says. “We are used to the way film treats color over its exposure range — it tends to desaturate as it gets more overexposed — whereas HDR holds a lot more color information in overexposure. However, Ian showed how it can be a creative tool.”
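
To picture the difference Teague describes, think of saturation as a function of exposure: a film-style transform rolls color off as luminance climbs toward clipping, while an HDR-friendly curve holds on to more of it. Here is a toy illustration in Python — the curve shapes and knee value are our own assumptions, not Light Iron’s actual LUT math:

```python
import numpy as np

def highlight_saturation(luma: np.ndarray, knee: float = 0.7,
                         film_like: bool = True) -> np.ndarray:
    """Saturation multiplier vs. normalized luminance (0-1).
    Film-style: desaturate progressively above the knee.
    HDR-style: hold most of the color into the highlights."""
    rolloff = np.clip((luma - knee) / (1.0 - knee), 0.0, 1.0)
    strength = 1.0 if film_like else 0.3  # HDR keeps more color up top
    return 1.0 - strength * rolloff

luma = np.linspace(0, 1, 5)
print(highlight_saturation(luma, film_like=True))   # fades toward 0 at peak white
print(highlight_saturation(luma, film_like=False))  # retains ~70% saturation
```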

Colorist Ian Vertovec

“The goal was to get the 1980s buildings in the background and out the hotel windows to look real — emulating marquees with flashing lights,” adds Vertovec. “We also needed it to be a believable Nevada sky and skyline. Skies and clouds look different in HDR. So, when dialing this in, we discussed how they wanted it to look. Did it feel real? Is the sky in this scene too blue? Information from testing informed production, so everything was geared toward these looks.”

“Ian has been on the first two seasons, so he knows the look inside and out and has a great eye,” Teague continues. “It’s nice to come into a room and have his point of view. Sometimes when you are staring at images all day, it’s easy to lose your objectivity, so I relied on Ian’s insight.” Vertovec grades the show on FilmLight’s Baselight.

As with Season 2, GLOW Season 3 was a Red Helium shoot using Red’s IPP2 color pipeline in conjunction with Vertovec’s custom LUTs all the way to post. Teague shot full 8K resolution to accommodate his choice of Cooke anamorphic lenses, desqueezed and finished in a 2:1 ratio.

“For dailies I used an iPad with Moxion, which is perhaps the best dailies viewing platform I’ve ever worked with,” says Teague. “I feel like the color is more accurate than on other platforms, which is extremely useful for checking contrast and shadow level. Too many times with dailies you get washed-out blacks and blown highlights, and you can’t judge anything critical.”

Teague sat in on the grade of the first three of the 10 episodes and then used the app to pull stills and make notes remotely. “With Ian I felt like we were both on the same page. We also had a great DIT [Peter Brunet] who was doing on-set grading for reference and was able to dial in things at a much higher level than I’ve been able to do in the past.”

The most challenging but also rewarding work was shooting the wrestling performances. “We wanted to do something that felt a little bigger, more polished, more theatrical,” Teague says. “The performance space had tiered seating, which gave us challenges and options in terms of moving the cameras. For example, we could use telescoping crane work to reach across the room and draw characters in as they enter the wrestling ring.”

He commends gaffer Eric Sagot for inspiring lighting cues and building them into the performance. “The wrestling scenes were the hardest to shoot but they’re exciting to watch — dynamic, cinematic and deliberately a little hokey in true ‘80s Vegas style.”


Adrian Pennington is a UK-based journalist, editor and commentator in the film and TV production space. He has co-written a book on stereoscopic 3D and edited several publications.

Review: iZotope’s Neutron 3 Advanced with Mix Assistant

By Tim Wembly

iZotope has been doing more to elevate and simplify the workflows of this generation’s audio pros than any of its competitors. It’s a bold statement, but I stand behind it. From their range of audio restoration tools within RX to their measurement and visualization tools in Ozone to their creative approach to VST effects and instruments like Iris, Breaktweaker and DDLY… they have shown time and time again that they know what audio post pros need.

iZotope breaks their products out into Essential, Standard and Advanced tiers aimed at different levels of professionalism. This lowers the barrier to entry for users who can’t rationalize the Advanced price tag but still want some of its features. In the newest edition of Neutron 3 Advanced, iZotope has added a tool that might make the extra investment a little more attractive. It’s called Mix Assistant, and for some users this feature will cut down session prep time considerably.

iZotope Neutron 3 Advanced ($279) is a collection of six modules — Sculptor, Exciter, Transient Shaper, Gate, Compressor and Equalizer — aimed at making the mix process less of a daunting technical task and more of a fun, creative endeavor. In addition to the modules, there is the new Mix Assistant, which has two modes: Track Enhance and Balance. Track Enhance analyzes a track’s audio content and, based on the instrument profile you select, uses its modules to make the track sound like the best version of that instrument. This can be useful if you don’t want to spend time tweaking the sound of an instrument to get it to sound like itself. I believe the philosophy behind providing this feature is that the creative energy you would have spent tweaking can now be reserved for other tasks that complete your sonic vision.

The Balance mode is a virtual mix prep technician, and for some engineers it will be a revolutionary tool when used in the preliminary stages of their mix. Through groundbreaking machine learning, it analyzes every track containing iZotope’s Relay plugin and sets a trim gain at the appropriate level based on what you choose as your “Focus.” For example, if you’re mixing an R&B song with a strong vocal, you would choose your main vocal track as your Focus.

Alternatively, if you were mixing a virtuosic guitar song a la Al Di Meola or Santana, you might choose your guitar track as your Focus. Once Neutron analyzes your tracks, it will set the level of each track and then provide you with five groups (Focus, Voice, Bass, Percussion, Musical) that you can further adjust at a macro level. Once you’ve got everything to your preference, you simply click “Accept” and you’re left with a much more manageable session. Depending on your workflow, getting your gain staging set up correctly might be an arduous and repetitive task; this tool streamlines and simplifies that drudgery considerably.
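
To make the Balance concept concrete, here is a minimal sketch of a focus-based gain pass. The function names, reference level and bed offset are illustrative assumptions on our part, not iZotope’s actual algorithm:

```python
import numpy as np

def rms_dbfs(samples: np.ndarray) -> float:
    """Integrated RMS level of a track in dBFS (1.0 = full scale)."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(rms, 1e-9))

def balance(tracks: dict[str, np.ndarray], focus: str,
            focus_target_db: float = -14.0,
            bed_offset_db: float = -6.0) -> dict[str, float]:
    """Return a per-track trim (in dB) that puts the chosen focus track
    at a reference level and tucks every other track a fixed offset
    below it -- the rough shape of an automated balance pass."""
    trims = {}
    for name, samples in tracks.items():
        target = focus_target_db if name == focus else focus_target_db + bed_offset_db
        trims[name] = target - rms_dbfs(samples)
    return trims

# Hypothetical usage: trims = balance({"vocal": vox, "guitar": gtr}, focus="vocal")
```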

As you may have noticed, the categories you’re given in the penultimate step of the process target engineers mixing a music session. Since this is a giant portion of the market, it makes sense that the geniuses over at iZotope give people mixing music their attention, but that doesn’t mean you can’t use Neutron for other audio post scenarios.

For example, if someone delivers a commercial with stems for music, a VO track and several sound effect tracks, you can still use the Balance feature; you’ll just have to be a little creative with how you classify each track. Perhaps you can set the VO as your focus and divide the sound effects between the other categories as you see fit considering their timbre.

Since this process happens at the beginning of the mix, you’re left with a session that is already prepped in the gain-staging department, so you can get straight to making creative decisions. You can still tweak to your heart’s content; you’ll just have one of the more time-intensive processes simplified considerably. Neutron 3 Advanced is available from iZotope.


Tim Wembly is an audio post pro and connoisseur of fine and obscure cheeses working at New York City’s Silver Sound Studios.

Digital Arts expands team, adds Nutmeg Creative talent

Digital Arts, an independently owned New York-based post house, has added several former Nutmeg Creative talent and production staff members to its roster — senior producer Lauren Boyle, sound designer/mixers Brian Beatrice and Frank Verderosa, colorist Gary Scarpulla, finishing editor/technical engineer Mark Spano and director of production Brian Donnelly.

“Growth of talent, technology, and services has always been part of the long-term strategy for Digital Arts, and we’re fortunate to welcome some extraordinary new talent to our staff,” says Digital Arts owner Axel Ericson. “Whether it’s long-form content for film and television, or working with today’s leading agencies and brands creating dynamic content, we have the talent and technology to make all of our clients’ work engaging, and our enhanced services bring their creative vision to fruition.”

Brian Donnelly, Lauren Boyle and Mark Spano.

As part of this expansion, Digital Arts will unveil additional infrastructure featuring an ADR stage/mix room. The current facility boasts several state-of-the-art audio suites, a 4K finishing theater/mixing dubstage, four color/finishing suites and expansive editorial and production space, which is spread over four floors.

The former Nutmeg team has hit the ground running, working with their long-time ad agency, network, animation and film studio clients. Gary Scarpulla worked on color for HBO’s Veep and Los Espookys, while Frank Verderosa has been working with agency Ogilvy on several Ikea campaigns. Beatrice mixed spots for Tom Ford’s cosmetics line.

In addition, Digital Arts’ in-house theater/mixing stage has proven to be a valuable resource for some of the most popular TV productions, including recording recent commentary sessions for the legendary HBO series Game of Thrones and the final season of Veep.

Especially noteworthy is colorist Ericson’s and finishing editor Mark Spano’s collaboration with Oscar-winning directors Karim Amer and Jehane Noujaim to bring to fruition the Netflix documentary The Great Hack.

Digital Arts also recently expanded its offerings to include production services. The company has already delivered projects for agencies Area 23, FCB Health and TCA.

“Digital Arts’ existing infrastructure was ideally suited to leverage itself into end-to-end production,” Donnelly says. “Now we can deliver from shoot to post.”

Tools employed across post include Avid Pro Tools with D-Control ES and S3 consoles for audio, and Avid Media Composer, Adobe Premiere and Blackmagic Resolve for editing. Color grading is via Resolve.

Main Image: (L-R) Frank Verderosa, Brian Beatrice and Gary Scarpulla


Cabin adds two editors, promotes another

LA-based editorial studio Cabin Editing Company has grown its editing staff with the addition of Greg Scruton and Debbie Berman. It has also promoted Scott Butzer to editor. The trio will work on commercials, music videos, branded content and other short-form projects.

Scruton, who joins Cabin from Arcade Edit, has worked on dozens of high-profile commercials and music videos throughout his career, including Pepsi’s 2019 Grammys spot Okurrr, starring Cardi B; Palms Casino Resort’s star-filled Unstatus Quo; and Kendrick Lamar’s iconic Humble music video, for which he earned an AICE Award. Scruton has worked with high-profile ad agencies and directors, including Anomaly; Wieden + Kennedy; 72andSunny; Goodby, Silverstein & Partners; Dave Meyers; and Nadia Lee Cohen. He uses Avid Media Composer and Adobe Premiere.

Feature film editor Berman joins Cabin on the heels of her successful run with Marvel Studios, having recently served as an editor on Spider-Man: Homecoming, Black Panther and Captain Marvel. Her work extends across mediums, with experience editing everything from PSAs and documentaries to animated features. Now expanding her commercial portfolio with Cabin, Berman is currently at work on a Toyota campaign through Saatchi & Saatchi. She will continue to work in features as well. She mostly uses Media Composer but can also work on Premiere.

Cabin’s Butzer was recently promoted to editor after joining the company in 2017 and honing his talent across many platforms, including commercials, music videos and documentaries. His strengths include narrative and automotive work. Recent credits include Every Day Is Your Day for Gatorade celebrating the 2019 Women’s World Cup, The Professor for Mercedes Benz and Vince Staples’ Fun! music video. Butzer has worked with ad agencies and directors, including TBWA\Chiat\Day; Wieden + Kennedy; Goodby, Silverstein & Partners; Team One; Marcus Sonderland; Ryan Booth; and Rachel McDonald. Butzer previously held editorial positions at Final Cut and Whitehouse Post. He studied film at the University of Colorado at Boulder. He also uses Media Composer and Premiere.

Signiant update simplifies secure content exchanges

Signiant will be at IBC this year showing off new capabilities for its SDCX (Software-Defined Content Exchange) SaaS platform, designed to simplify secure content exchange between companies. These capabilities will appear first in the company’s newest product, Signiant Jet, which makes it easy to automate and accelerate the transfer of large files between geographically dispersed locations.

Targeted at “lights-out” use cases, Jet meets the growing need to replace scripted FTP and legacy transfer tools with a faster, more reliable and more secure alternative. First introduced at the 2019 NAB Show, Jet is built on Signiant’s SDCX SaaS platform, which also underpins the company’s widely deployed Media Shuttle solution for sending and sharing large files around the world.

At IBC, Jet will include the new content exchange capabilities, offering a secure cloud handshake mechanism that simplifies intercompany transfers. The new functionality enables Jet customers to make storage endpoints private, discoverable to all or discoverable only to select partners in their supply chain. Via a secure web interface, companies can request a connection with a partner. Once both sides accept, specific jobs can be configured and mutually approved to allow for secure, automated transfers between the companies. Jet’s predictable pricing model makes it accessible to companies of all sizes and enables easy cost sharing for intercompany content exchange.
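
Conceptually, that handshake is a mutual-approval state machine over discoverable endpoints. Here is a rough sketch of the flow — all class and field names are our own assumptions for illustration, not Signiant’s API:

```python
from dataclasses import dataclass, field
from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"
    ALL = "discoverable_to_all"
    SELECT = "discoverable_to_select_partners"

@dataclass
class Endpoint:
    company: str
    visibility: Visibility = Visibility.PRIVATE
    allowed_partners: set[str] = field(default_factory=set)

    def discoverable_by(self, partner: str) -> bool:
        if self.visibility is Visibility.ALL:
            return True
        return (self.visibility is Visibility.SELECT
                and partner in self.allowed_partners)

@dataclass
class Connection:
    requester: str
    target: Endpoint
    accepted_by: set[str] = field(default_factory=set)

    def accept(self, company: str) -> None:
        self.accepted_by.add(company)

    @property
    def established(self) -> bool:
        # Both sides must approve before any transfer job can be configured.
        return {self.requester, self.target.company} <= self.accepted_by
```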

Behind the Title: MPC Senior Compositor Ruairi Twohig

After studying hand-drawn animation, this artist found his way to visual effects.

NAME: NYC-based Ruairi Twohig

COMPANY: Moving Picture Company (MPC)

CAN YOU DESCRIBE YOUR COMPANY?
MPC is a global creative and visual effects studio with locations in London, Los Angeles, New York, Shanghai, Paris, Bangalore and Amsterdam. We work with clients and brands across a range of different industries, handling everything from original ideas through to finished production.

WHAT’S YOUR JOB TITLE?
I work as a 2D lead/senior compositor.

Cadillac

WHAT DOES THAT ENTAIL?
The tasks and responsibilities can vary depending on the project. My involvement with a project can begin before there’s even a script or storyboard, and we need to estimate how much VFX will be involved and how long it will take. As the project develops and the direction becomes clearer, with scripts and storyboards and concept art, we refine this estimate and schedule and work with our clients to plan the shoot and make sure we have all the information and assets we need.

Once the commercial is shot and we have an edit, the bulk of the post work begins. This can involve anything from compositing fully CG environments, dragons or spaceships to beauty and product/pack-shot touch-ups or rig removal. So, my role involves a combination of overall project management and planning, but I also get into the detailed shot work and, ultimately, delivering the final picture. The majority of the work I do requires a large team of people with different specializations, and those are usually the projects I find the most fun and rewarding due to the collaborative nature of the work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think the variety of the work would surprise most people unfamiliar with the industry. In a single day, I could be working on two or three completely different commercials with completely different challenges while also bidding future projects or reviewing prep work in the early stages of a current project.

HOW LONG HAVE YOU BEEN WORKING IN VFX?
I’ve been working in the industry for over 10 years.

HOW HAS THE VFX INDUSTRY CHANGED IN THE TIME YOU’VE BEEN WORKING?
The VFX industry is always changing. I find it exciting to see how quickly the technology is advancing and becoming more widely accessible, cost-effective and faster.

I still find it hard to comprehend the idea of using optical printers for VFX back in the day … before my time. Some of the most interesting areas for me at the moment are the developments in realtime rendering from engines such as Unreal and Unity, and the implementation of AI/machine learning tools that might be able to automate some of the more time-consuming tasks in the future.

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
I remember when I was 13, my older brother — who was studying architecture at the time — introduced me to 3ds Max, and I started playing around with some very simple modeling and rendering.

I would buy these monthly magazines like 3D World, which came with demo discs for different software and some CG animation compilations. One of the issues included the short CG film Fallen Art by Tomek Baginski. At the time I was mostly familiar with Pixar’s feature animation work like Toy Story and A Bug’s Life, so watching this short film created using similar techniques but with such a dark, mature tone and story really blew me away. It was this film that inspired me to pursue animation and, ultimately, visual effects.

DID YOU GO TO FILM SCHOOL?
I studied traditional hand-drawn animation at the Dun Laoghaire Institute of Art, Design and Technology in Dublin. This was a really fun course in which we spent the first two years focusing on the craft of animation and the fundamental principles of art and design, followed by another two years in which we had a lot of freedom to make our own films. It was during these final two years of experimentation that I started to move away from traditional animation and focus more on learning CG and VFX.

I really owe a lot to my tutors, who were really supportive during that time. I also had the opportunity to learn from visiting animation masters such as Andreas Deja, Eric Goldberg and John Canemaker. Although on the surface the work I do as a compositor is very different to animation, understanding those fundamental principles has really helped my compositing work; any additional disciplines or skills you develop in your career that require an eye for detail and aesthetics will always make you a better overall artist.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Even after 10 years in the industry, I still get satisfaction from the problem-solving aspect of the job, even on the smaller tasks. I love getting involved on the more creative projects, where I have the freedom to develop the “look” of the commercial/film. But, day to day, it’s really the team-based nature of the work that keeps me going. Working with other artists, producers, directors and clients to make a project look great is what I find really enjoyable.

WHAT’S YOUR LEAST FAVORITE?
Sometimes even if everything is planned and scheduled accordingly, a little hiccup along the way can easily impact a project, especially on jobs where you might only have a limited amount of time to get the work done. So it’s always important to work in such a way that allows you to adapt to sudden changes.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I used to draw all day, every day as a kid. I still sketch occasionally, but maybe I would have pursued a more traditional fine art or illustration career if I hadn’t found VFX.

Tiffany & Co.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Over the past year, I’ve worked on projects for clients such as Facebook, Adidas, Samsung and Verizon. I also worked on the Tiffany & Co. campaign “Believe in Dreams” directed by Francis Lawrence, as well as the company’s holiday campaign directed by Mark Romanek.

I also worked on Cadillac’s “Rise Above” campaign for the 2019 Oscars, which was challenging since we had to deliver four spots within a short timeframe. But it was a fun project. There was also the Michelob Ultra Robots Super Bowl spot earlier this year. That was an interesting project, as the work was completed between our LA, New York and London studios.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Last year, I had the chance to work with my friend and director Sofia Astrom on the music video for the song “Bone Dry” by Eels. It was an interesting project since I’d never done visual effects for a stop-motion animation before. This had its own challenges, and the style of the piece was very different compared to what I’m used to working on day to day. It had a much more handmade feel to it, and the visual effects design had to reflect that, which was such a change to the work I usually do in commercials, which generally leans more toward photorealistic visual effects work.

WHAT TOOLS DO YOU USE DAY TO DAY?
I mostly work with Foundry Nuke for shot compositing. When leading a job that requires a broad overview of the project and timeline management/editorial tasks, I use Nuke Studio or Autodesk Flame, depending on the requirements of the project. I also use ftrack daily for project management.

WHERE DO YOU FIND INSPIRATION NOW?
I follow a lot of incredibly talented concept artists and photographers/filmmakers on Instagram. Viewing these images/videos on a tiny phone doesn’t always do justice to the work, but the platform is so active that it’s a great resource for inspiration and finding new artists.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I like to run and cycle around the city when I can. During the week it can be easy to get stuck in a routine of sitting in front of a screen, so getting out and about is a much-needed break for me.

Blackmagic: Resolve 16.1 in public beta, updates Pocket Cinema Camera

Blackmagic Design has announced DaVinci Resolve 16.1, an updated version of its edit, color, visual effects and audio post software that features updates to the new cut page, further speeding up the editing process.

With Resolve 16, introduced at NAB 2019, now in final release, the Resolve 16.1 public beta is available for download from the Blackmagic Design website. This new public beta will help Blackmagic continue to develop new ideas while collaborating with users to ensure those ideas are refined for real-world workflows.

The Resolve 16.1 public beta features changes to the bin that now make it possible to place media in various folders and isolate clips from being used when viewing them in the source tape, sync bin or sync window. Clips will appear in all folders below the current level, and as users navigate around the levels in the bin, the source tape will reconfigure in real time. There’s even a menu for directly selecting folders in a user’s project.

Also new in this public beta is the smart indicator. The new cut page in DaVinci Resolve 16 introduced multiple new smart features, which work by estimating where the editor wants to add an edit or transition and then applying it without the editor having to waste time placing exact in and out points. The software guesses what the editor wants to do and just does it — it adds the insert edit or transition to the edit closest to where the editor has placed the CTI.

But a problem can arise in complex edits, where it is hard to know what the software would do and which edit it would place the effect or clip into. That’s the reason for the beta version’s new smart indicator. The smart indicator provides a small marker in the timeline so users get constant feedback and always know where DaVinci Resolve 16.1 will place edits and transitions. The new smart indicator constantly live-updates as the editor moves around the timeline.
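
At its core, the indicator answers a simple question: which edit point is nearest the playhead? A minimal sketch of that lookup — our own illustration of the idea, not Blackmagic’s code:

```python
def nearest_edit(edit_points: list[float], cti: float) -> float:
    """Return the timeline edit point closest to the CTI (playhead),
    i.e. the spot where a smart edit or transition would land."""
    if not edit_points:
        raise ValueError("timeline has no edit points")
    return min(edit_points, key=lambda t: abs(t - cti))

# e.g. edits at 0s, 12.5s and 30s with the playhead parked at 11s -> 12.5
print(nearest_edit([0.0, 12.5, 30.0], cti=11.0))
```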

One of the most common items requested by users was a faster way to cut clips in the timeline, so now DaVinci Resolve 16.1 includes a “cut clip” icon in the user interface. Clicking on it will slice the clips in the timeline at the CTI point.

Multiple changes have also been made to the new DaVinci Resolve Editor Keyboard, including a new adaptive scroll feature on the search dial, which automatically slows scrolling when editors are hunting for an in point. The live trimming buttons have been renamed to match the functions in the edit page: trim in, trim out, transition duration, slip in and slip out. The function keys along the top of the keyboard are now used for various editing functions.

There are additional edit modes on the function keys, allowing users to access more types of edits directly from dedicated keys on the keyboard. There’s also a new transition window on the F4 key; pressing and rotating the search dial allows instant selection from all the transition types in DaVinci Resolve. Users who need quick picture-in-picture effects can use F5 and apply them instantly.

Sometimes when editing projects with tight deadlines, there is little time to keep replaying the edit to see where it drags. DaVinci Resolve 16.1 features something called a Boring Detector that highlights the timeline where any shot is too long and might be boring for viewers. The Boring Detector can also show jump cuts, where shots are too short. This tool allows editors to reconsider their edits and make changes. The Boring Detector is helpful when using the source tape. In that case, editors can perform many edits without playing the timeline, so the Boring Detector serves as an alternative live source of feedback.
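
Functionally, that amounts to a pass over shot durations with two thresholds. A rough sketch of the idea — the threshold values here are arbitrary assumptions, not Blackmagic’s:

```python
def detect_boring(shots: list[tuple[float, float]],
                  too_long_s: float = 20.0,
                  jump_cut_s: float = 0.5) -> list[tuple[float, float, str]]:
    """Flag shots that overstay their welcome or are short enough to read
    as jump cuts. `shots` is a list of (start, end) times in seconds."""
    flags = []
    for start, end in shots:
        duration = end - start
        if duration >= too_long_s:
            flags.append((start, end, "boring"))
        elif duration <= jump_cut_s:
            flags.append((start, end, "jump cut"))
    return flags

# e.g. a 25-second shot and a 0.3-second shot both get flagged
print(detect_boring([(0.0, 25.0), (25.0, 25.3), (25.3, 30.0)]))
```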

Another one of the most requested features of DaVinci Resolve 16.1 is the new sync bin. The sync bin is a digital assistant editor that constantly sorts through thousands of clips to find only what the editor needs and then displays them synced to the point in the timeline the editor is on. The sync bin will show the clips from all cameras on a shoot stacked by camera number. Also, the viewer transforms into a multi-viewer so users can see their options for clips that sync to the shot in the timeline. The sync bin uses date and timecode to find and sync clips, and by using metadata and locking cameras to time of day, users can save time in the edit.
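
Because the sync bin keys off date and timecode, the underlying matching can be thought of as an interval lookup grouped by camera. A simplified sketch, assuming timecode has already been converted to seconds — our own illustration, not Resolve’s internals:

```python
def clips_at(timeline_pos: float, clips: list[dict]) -> dict[int, list[dict]]:
    """Group the clips that cover a timeline position by camera number,
    mimicking the stacked multi-view described above. Each clip dict is
    assumed to carry 'camera', 'start' and 'end' (time-of-day seconds)."""
    by_camera: dict[int, list[dict]] = {}
    for clip in clips:
        if clip["start"] <= timeline_pos < clip["end"]:
            by_camera.setdefault(clip["camera"], []).append(clip)
    return by_camera
```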

According to Blackmagic, the sync bin changes how multi-camera editing can be completed. Editors can scroll off the end of the timeline and keep adding shots. When using the DaVinci Resolve Editor Keyboard, editors can hold the camera number and rotate the search dial to “live overwrite” the clip into the timeline, making editing faster.

The closeup edit feature has been enhanced in DaVinci Resolve 16.1. It now does face detection and analysis and will zoom the shot based on face positioning to ensure the person is nicely framed.
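
As a rough analogy for that kind of auto-reframing, here is a sketch using OpenCV’s stock face detector to crop around the most prominent face — our own illustration of the general technique, not Resolve’s actual analysis:

```python
import cv2

def closeup_zoom(frame, padding: float = 1.8):
    """Detect the largest face in a frame and return a padded crop
    centered on it, so the person lands nicely in the reframed shot."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return frame  # no face found: leave the framing alone
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    cx, cy = x + w // 2, y + h // 2
    half = int(max(w, h) * padding / 2)
    frame_h, frame_w = frame.shape[:2]
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    x1, y1 = min(cx + half, frame_w), min(cy + half, frame_h)
    return frame[y0:y1, x0:x1]
```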

If pros are using shots from cameras without timecode, the new sync window lets them sort and sync clips from multiple cameras. The sync window supports sync by timecode and can also detect audio and sync clips by sound. These clips will display a sync icon in the media pool so editors can tell which clips are synced and ready for use. Manually syncing clips using the new sync window allows workflows such as multiple action cameras to use new features such as source overwrite editing and the new sync bin.

Blackmagic Pocket Cinema Camera
Besides releasing the DaVinci Resolve 16.1 public beta, Blackmagic also updated the Blackmagic Pocket Cinema Camera. Blackmagic not only upgraded the camera from 4K to 6K resolution, it also changed the mount to the much-used Canon EF style. Previous iterations of the Pocket Cinema Camera used a Micro Four Thirds mount, but many users chose to purchase a Micro Four Thirds-to-Canon EF adapter, which easily runs over $500 new. Because of the mount change in the Pocket Cinema Camera 6K, users can avoid buying the adapter and, if they already shoot Canon EF, can use the same lenses.

London’s Cheat expands with color and finishing suites

London-based color and finishing house Cheat has expanded, adding three new grading and finishing suites, a production studio and a client lounge/bar space. Cheat now has four large broadcast color suites and services two other color suites at Jam VFX and No.8 in Fitzrovia and Soho, respectively. Cheat has a creative partnership with these studios.

Located in the Arthaus building in Hackney, all four of Cheat’s color suites have calibrated projection or broadcast monitoring and are equipped with cutting-edge hardware for HDR and 8K work. Cheat was the first color company to complete a TV series in 8K, with Netflix’s The End of The F***ing World in 2017. Having invested in improved storage and network infrastructure during this expansion, the facility is well-equipped to take on 8K and HDR projects.

Cheat uses Autodesk Flame for finishing and Blackmagic DaVinci Resolve for color grading.

The new HDR grading suite offers HDR mastering above 2,000 nits with a Flanders Scientific XM310K reference monitor that can master up to 3,000 nits. Cheat is also now a full-fledged Dolby Vision-certified mastering facility.

“Improving client experience was, of course, a key consideration in shaping the design of the renovation,” says Toby Tomkins, founder of Cheat. “The new color suite is our largest yet and comfortably seats up to 10 people. We designed it from the ground up with a raised client platform and a custom-built bias wall. This allows everyone to look at the same single monitor while grading and maintaining the spacious and relaxed feel of our other suites. The new lounge and bar area also offer a relaxing area for clients to feel at home.”

Dick Wolf’s television empire: his production and post brain trust

By Iain Blair

The TV landscape is full of scripted police procedurals and true crime dramas these days, but the indisputable and legendary king of that crowded landscape is Emmy-winning creator/producer Dick Wolf, whose name has become synonymous with high-quality drama.

Arthur Forney

Since it burst onto the scene back in 1990, his Law & Order show has spawned six dramas and four international spinoffs, while his "Chicago" franchise gave birth to another four series: the hugely popular Chicago Med, Chicago Fire and Chicago P.D., plus Chicago Justice, which was cancelled after one season.

Then there are his "FBI" shows, as well as the more documentary-style Cold Justice. If you've seen Cold Justice — and you should — you know that this is the real deal, focusing on real crimes. It's all the more fascinating and addictive because of it.

Produced by Wolf and Magical Elves, the real-life crime series follows veteran prosecutor Kelly Siegler, who gets help from seasoned detectives as they dig into small-town murder cases that have lingered for years without answers or justice for the victims. Together with local law enforcement from across the country, the Cold Justice team has successfully helped bring about 45 arrests and 20 convictions. No case is too cold for Siegler, as the new season delves into new unsolved homicides while also bringing updates to previous cases. No wonder Wolf calls it “doing God’s work.” Cold Justice airs on true crime network Oxygen.

I recently spoke with Emmy-winning Arthur Forney, executive producer of all Wolf Entertainment’s scripted series (he’s also directed many episodes), about posting those shows. I also spoke with Cold Justice showrunner Liz Cook and EP/head of post Scott Patch.

Chicago Fire

Dick Wolf has said that, as head of post, you are “one of the irreplaceable pieces of the Wolf Films hierarchy.” How many shows do you oversee?
Arthur Forney: I oversee all of Wolf Entertainment’s scripted series, including Law & Order: Special Victims Unit, Chicago Fire, Chicago P.D., Chicago Med, FBI and FBI: Most Wanted.

Where is all the post done?
Forney: We do it all at NBCUniversal StudioPost in LA.

How involved is Dick Wolf?
Forney: Very involved, and we talk all the time.

How does the post pipeline work?
Forney: All film is shot on location and then sent back to the editing room and streamed into the lab. From there we do all our color corrections, which takes us into downloading it into Avid Media Composer.

What are the biggest challenges of the post process on the shows?
Forney: Delivering high-quality programming with a shortened post schedule.

Chicago Med

What are the editing challenges involved?
Forney: Trying to find the right way of telling the story, finding the right performances, shaping the show and creating intensity that results in high-quality television.

What about VFX? Who does them?
Forney: All of our visual effects are done by Spy Post in Santa Monica. All of the action is enhanced and done by them.

Where do you do the color grading?
Forney: Coloring/grading is all done at NBCUniversal StudioPost.

Now let’s talk to Cook and Patch about Cold Justice:

Liz and Scott, I recently saw the finale to Season 5 of Cold Justice. That was a long season.
Liz Cook: Yes, we did 26 episodes, so it was a lot of very long days and hard work.

It seems that there’s more focus than ever on drug-related cases now.
Cook: I don’t think that was the intention going in, but as we’ve gone on, you can’t help but recognize the huge drug problem in America now. Meth and opioids pop up in a lot of cases, and it’s obviously a crisis, and even if they aren’t the driving force in many cases, they’re definitely part of many.

L-R: Kelly Siegler, Dick Wolf, Scott Patch and Liz Cook. Photo by Evans Vestal Ward

How do you go about finding cases for the show?
Cook: We have a case-finding team, and they get the cases various ways, including cold-calling. We have a team dedicated to that, calling every day, and we get most of them that way. A lot come through agencies and sheriff’s departments that have worked with us before and want to help us again. And we get some from family members and some from hits on the Facebook page we have.

I assume you need to work very closely with local law agencies as you need access to their files?
Cook: Exactly. That’s the first part of the whole puzzle. They have to invite us in. The second part is getting the family involved. I don’t think we’d ever take on a case that the family didn’t want us to do.

What’s involved for you, and do you like being a showrunner?
Cook: It’s a tough job and pretty demanding, but I love it. We go through a lot of steps and stuff to get a case approved, and to get the police and family on board, and then we get the case read by one of our legal readers to evaluate it and see if there’s a possibility that we can solve it. At that point we pitch it to the network, and once they approve it and everyone’s on board, then if there are certain things like DNA and evidence that might need testing, we get all that going, along with ballistics that need researching, and stuff like phone records and so on. And it actually moves really fast – we usually get all these people on board within three weeks.

How long does it take to shoot each show?
Cook: It varies, as each show is different, but around seven or eight days, sometimes longer. We have a case coming up with cadaver dogs, and that stuff will happen before we even get to the location, so it all depends. And some cases will have 40 witnesses, while others might have over 100. So it’s flexible.

Cold Justice

Where do you post, and what’s the schedule like?
Scott Patch: We do it all at the Magical Elves offices here in Hollywood — the editing, sound, color correction. The online editor and colorist is Pepe Serventi, and we have it all on one floor, and it’s really convenient to have all the post in house. The schedule is roughly two months from the raw footage to getting it all locked and ready to air, which is quite a long time.

Dailies come back to us, and the story team and editors do the initial pass, whittling all the footage down. It takes us a couple of weeks just to look at all the footage, as we usually have about 180 hours of it, and it takes a while to turn all that into something the editors can deal with. Then it goes through about three network passes with notes.

What about dealing with all the legal aspects?
Patch: That makes it a different kind of show from most of the others, so we have legal people making sure all the content is fine, and then sometimes we’ll also get notes from local law agencies, as well as internal notes from our own producers. That’s why it takes two months from start to finish.

Cook: We vet it through local law, and they see the cuts before it airs to make sure there are no problems. The biggest priority for us is that we don’t hurt the case at all with our show, so we always check it all with the local D.A. and police. And we don’t sensationalize anything.

Cold Justice

Patch: That’s another big part of editing and post – making sure we keep it authentic. That can be a challenge, but these are real cases with real people being accused of murder.

Cook: Our instinct is to make it dramatic, but you can’t do that. You have to protect the case, which might go to trial.

Talk about editing. You have several editors, I assume because of the time factor. How does that work?
Patch: Some of these cases have been cold for 25 or 30 years, so when the field team gets there, they really stand back and let the cops talk about the case, and we end up with a ton of stuff that you couldn’t fit into the time slot however hard you tried. So we have to decide what needs to be in, what doesn’t.

Cook: On day one, our “war room” day, we meet with the local law and everyone involved in the case, and that’s eight hours of footage right there.

Patch: And that gets cut down to just four or five minutes. We have a pretty small but tight team, with 10 editors who split up the episodes. Once in a while they’ll cross over, but we like to have each team and the producers stay with each episode as long as they can, as it’s so complicated. When you see the finished show, it doesn’t seem that complicated, but there are so many ways you could handle the footage that it really helps for each team to really take ownership of that particular episode.

How involved is Dick Wolf in post?
Cook: He loves the whole post process, and he watches all the cuts and has input.

Patch: He’s very supportive and obviously so experienced, and if we’re having a problem with something, he’ll give notes. And for the most part, the network gives us a lot of flexibility to make the show.

What about VFX on the show?
Patch: We have some, but nothing too fancy, and we use an outside VFX/graphics company, LOM Design. We have a lot of legal documents on the show, and that stuff gets animated, and we’ll also have some 3D crime scene VFX. The only other outside vendor is our composer, Robert ToTeras.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Speed controls now available in Premiere Rush V.1.2

Adobe has added a new panel in Premiere Rush called Speed, which allows users to manipulate the speed of their footage while maintaining control over the audio pitch, range, ramp speed and duration of the edited clip. Adobe’s Premiere Rush teams say speed control has been the most requested feature by users.

Basic speed adjustments: A clip’s speed is displayed as a percentage value, with 100% being realtime. Values below 100% result in slow motion, and values above 100% create fast motion. To adjust the speed, users simply open the speed panel, select “Range Speed” and drag the slider. Or they can tap on the speed percentage next to the slider and enter a specific value.

Speed ranges: Speed ranges allow users to adjust the speed within a specific section of a clip. To create a range, users drag the blue handles on the clip in the timeline or in the speed panel under “Range.” The speed outside the range is 100%, while speed inside the range is adjustable.

Ramping: Rush’s adjustable speed ramps make it possible to progressively speed up or slow down into or out of a range. Ramping helps smooth out speed changes that might otherwise seem jarring.

Duration adjustments: For precise control, users can manually set a clip’s duration. After setting the duration, Rush will do the math and adjust the clip speed to the appropriate value — a feature that is especially useful for time lapses.
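
The math Rush is doing here is simple to show: playback speed and duration are inversely proportional, so a target duration pins down the speed percentage. A quick sketch of the relationship:

```python
# Speed/duration arithmetic behind duration-based adjustments: playing a
# clip at S percent divides its duration by S/100, so a target duration
# determines the speed.
def speed_for_duration(original_secs: float, target_secs: float) -> float:
    """Speed (as a percentage) that makes the clip run `target_secs`."""
    return 100.0 * original_secs / target_secs

def duration_at_speed(original_secs: float, speed_percent: float) -> float:
    """Resulting duration when played at `speed_percent`."""
    return original_secs * 100.0 / speed_percent

# A 40-second time-lapse source squeezed into 8 seconds needs 500% speed:
print(speed_for_duration(40.0, 8.0))   # 500.0
print(duration_at_speed(40.0, 500.0))  # 8.0
```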

Maintain Pitch: Typically, speeding up footage will raise the audio’s pitch (think mouse voice), while slowing down footage will lower it (think deep robot voice). Maintain Pitch in the speed panel takes care of the problem by preserving the original pitch of the audio at any speed.
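
Adobe hasn't said how Maintain Pitch is implemented, but pitch-preserving speed changes are conventionally done with a phase-vocoder-style time stretch, which re-times the audio without resampling it. Here is a hedged sketch using librosa; the library choice and file names are assumptions for illustration, not Rush's internals.

```python
# Hedged sketch of a pitch-preserving speed change via a phase-vocoder
# time stretch (librosa). Illustrative only, not Rush's implementation.
import librosa
import soundfile as sf

def change_speed_keep_pitch(in_path: str, out_path: str, speed_percent: float):
    """Re-time audio to `speed_percent` of realtime without shifting pitch."""
    y, sr = librosa.load(in_path, sr=None)  # keep the native sample rate
    rate = speed_percent / 100.0            # e.g., 200% -> rate 2.0 (faster)
    stretched = librosa.effects.time_stretch(y, rate=rate)
    sf.write(out_path, stretched, sr)

# change_speed_keep_pitch("clip.wav", "clip_2x.wav", 200.0)  # hypothetical files
```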

As with everything in Rush, speed adjustments will transfer seamlessly when opening a Rush project in Premiere Pro.

Bluefish444 adds edit-while-record, REST API to IngeSTore

Bluefish444, makers of uncompressed UltraHD SDI, ASI, video over IP and HDMI I/O cards, and mini converters, has released IngeSTore version 1.1.2. This free update of IngeSTore adds support for new codecs, edit-while-record workflows and a REST API.

Bluefish444 developed IngeSTore software as a complementary multi-channel ingest tool enabling Bluefish444 hardware to capture multiple independent format SDI sources simultaneously.

In IngeSTore 1.1.2, Bluefish444 has expanded codec support to include the popular formats OP1A MPEG-2 and DNxHD within the BlueCodecPack license. Edit-while-record workflows are supported through both industry standard growing files and through Bluefish444’s BlueRT plug-in for Adobe Premiere Pro and Avid Media Composer. BlueRT allows Adobe and Avid NLEs to access media files as they are still being recorded by IngeSTore multi-channel capture software, increasing production efficiency via immediate access to recorded media during live workflows.

Review: LaCie Mobile, high-speed 1TB SSD

By Brady Betzel

With the flood of internal and external hard drives hitting the market at relatively low prices, it is sometimes hard to wade through the swamp and find the drive that is right for your workflow. In terms of external drives, do you need a RAID? USB-C? Is Thunderbolt 3 the same as USB-C? Should I save money and go with a spinning drive? Are spinning drives even cheaper than SSD drives these days? All of these questions are valid and, hopefully, I will answer them.

For this review, I'm taking a look at the LaCie Mobile SSD, which comes in three versions: 500GB, 1TB and 2TB, costing around $129.95, $219.95 and $399.95, respectively. According to LaCie's website, the Mobile SSD drives are exclusive to Apple, but with some searching on Amazon you can find all three available at lower prices than those listed. The 1TB version I'm seeing for $152.95 is sold on Amazon through LaCie, so I assume the warranty still holds up.

I was sent the 1TB version of the LaCie Mobile SSD for review and testing. Along with the drive itself, you get two connection cables: a (USB 3.0-speed) USB-A to USB-C cable and a (USB 3.1 Gen 2-speed) USB-C to USB-C cable. For clarity, USB-C is the type of connection — the oval-like shape and technology used to transfer data. While USB-C devices will work on Thunderbolt 3 connections, Thunderbolt 3-only devices will not work on USB-C connections. Yes, that is super-confusing considering the connectors look the same. But in the real world, Thunderbolt 3 is more Mac OS-based, while USB-C is more Windows-based. You can find the rare Thunderbolt 3 connection on a Windows-based PC, but you are more likely to find USB-C. That being said, the LaCie Mobile SSD is compatible with both USB-C and Thunderbolt 3, as well as USB 3.0. Keep in mind that you will not get the highest transfer speeds with the USB-A to USB-C cable; you will only get them with the (USB 3.1 Gen 2) USB-C to USB-C cable. The drive comes formatted as exFAT, which is immediately compatible with both Mac OS and Windows.

So, are spinning drives worth the cheaper price? In my opinion, no. Spinning drives are more fragile when moved around a lot, and they transfer at much slower speeds; advertised speeds run from about 130MB/s for spinning drives to 540MB/s for SSDs. For what today amounts to about $100 more, you get a significant speed increase.

A very valuable part of the LaCie Mobile SSD purchase is the limited three-year warranty, which includes three years of free data recovery services. No matter how your data becomes corrupted, Seagate will try to recover it — Seagate became LaCie's parent company in 2014. Each product is eligible for one in-lab data recovery attempt, which can be turned around in as little as two days, depending on the type of recovery. The recovered media is then sent back to you on a storage device and is also available from a cloud-based account hosted online for 60 days. It's a great feature to have included in the price.

The drive itself is small, measuring approximately .35 x 3 x 3.8 inches and weighing only .22 pounds. The outside has sharp lines, much in the vein of a faceted diamond, and it feels solid and great to carry. Made of aluminum, it's about the same space gray color as a MacBook Pro.

Transfer Speeds
Alright, let's get to the nitty-gritty: transfer speeds. I tested the LaCie Mobile SSD on both a Windows-based PC with USB-C and an iMac Pro with Thunderbolt 3/USB-C. On the Windows PC, I initially connected the drive to a port on the front of my system and was only getting around 150MB/s write speeds (about the speed of USB 3.0). I knew immediately that something was wrong, so I connected to a USB-C port in a PCIe slot in the rear of my PC. On that port I was getting 440.9MB/s write and 516.3MB/s read speeds. Moral of the story: make sure your USB-C ports are not charge-only ports or USB-C connectors running at USB 3.0 speeds.

On the iMac Pro, I was getting write speeds of 487.2MB/s and read speeds of 523.9MB/s. This is definitely on par with the correct Windows PC transfer speeds. The retail packaging on the LaCie Mobile SSD states a 540MB/s speed (it doesn't differentiate between read and write), but much like the miles-per-gallon figures in car sales brochures, you have to take those numbers with a few grains of salt. And while I have previously tested drives (not from LaCie) that would initially transfer at a high rate and then drop down, the LaCie Mobile SSD sustained its high transfer rates.
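
For a quick sanity check of a port, a rough sequential write/read timing like the sketch below will expose a USB 3.0-speed connection immediately. It's crude (OS caching can flatter the read number) and no substitute for dedicated tools such as Blackmagic Disk Speed Test or AJA System Test; the target path is a placeholder.

```python
# Quick-and-dirty sequential throughput test for a mounted drive.
import os
import time

def rough_throughput(path: str, size_mb: int = 1024) -> tuple[float, float]:
    """Return approximate (write, read) speeds in MB/s for `path`."""
    chunk = os.urandom(1 << 20)  # 1MB of incompressible data
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # push data to the device, not just the cache
    write_mbs = size_mb / (time.perf_counter() - t0)

    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):
            pass
    read_mbs = size_mb / (time.perf_counter() - t0)
    os.remove(path)
    return write_mbs, read_mbs

# w, r = rough_throughput("/Volumes/LaCie/speedtest.bin")  # placeholder path
# print(f"write {w:.0f}MB/s, read {r:.0f}MB/s")
```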

Summing Up
In the end, the size and design of the LaCie Mobile SSD will be one of the larger factors in determining whether you buy this drive. It's small. Like, real small. But it feels sturdy. I don't think anyone can deny that the LaCie Rugged drives (the orange-rubber-encased ones) are a staple of the post industry, and I really wish LaCie had kept that tradition and added a tiny little orange rubberized edge. Not only does it feel safer for some reason, but it is a trademark that immediately says, "I'm a professional."

Besides the appearance, the $152.95 price tag for a 1TB SSD that can slip into your shirt pocket unnoticed is pretty reasonable; at the $219.95 list price, I might say keep looking around. In addition, if you aren't already an Adobe Creative Cloud subscriber, a free 30-day trial (normally seven days) is included with the purchase.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Shipping + Handling adds Jerry Spivack, Mike Pethel, Matthew Schwab

VFX creative director Jerry Spivack and colorists Michael Pethel and Matthew Schwab have joined LA’s Shipping + Handling, Spot Welders‘ VFX, color grading, animation, and finishing arm/sister company.

Alongside executive producer Scott Friske and current creative director Casey Price, Spivack will help lead the company’s creative team. As the creative director/co-founder at Ring of Fire, Spivack was responsible for crafting and spearheading VFX on commercials for brands including FedEx, Nike and Jaguar; episodic work for series television including Netflix’s Wormwood and 12 seasons of FX’s It’s Always Sunny in Philadelphia; promos for NBC’s The Voice and The Titan Games; and feature films such as Sony Pictures’ Spider-Man 2, Bold Films’ Drive and Warner Bros.’ The Bucket List.

Colorist Pethel was a founding partner of Company 3 and for the past five years has served clients and directors under his BeachHouse Color brand, which he will continue to maintain. Pethel's body of work includes campaigns for Carl's Jr., Chase, Coke, Comcast/Xfinity, Hyundai, Jeep, Netflix and Southwest Airlines.

Commenting on the move, Pethel says, “I’m thrilled to be joining such a fantastic group of highly regarded and skilled professionals at Shipping + Handling. There is so much creativity here; the people are awesome to work with and the technology they are able to offer clientele at the facility is top-notch.”

Schwab formally joins the Shipping + Handling roster after working closely with the company over the past two years on multiple campaigns for Apple, Acura, QuickBooks and many others. Aside from his role at Shipping + Handling, Schwab will also continue his work through Roving Picture Company. Having worked with a number of internationally recognized brands, Schwab has collaborated on projects for Amazon, Honda, Mercedes-Benz, National Geographic, Netflix, Nike, PlayStation and Smirnoff.

“It’s exciting to be part of a team that approaches every project with such energy. This partnership represents a shared commitment to always deliver outstanding color and technical results for our clients,” says Schwab.

“Pethel is easily amongst the best colorists in our industry. As a longtime client of his, I have a real understanding of the professionalism he brings to every session. He is a delight in the room and wickedly talented. Schwab’s talent has just been realized in the last few years, and we are pleased to offer his skill to our clients. If our experience working with him over the last couple of years is any indication, we’re going to make a lot of clients happy he’s on our roster,” adds Friske.

Spivack, Pethel and Schwab will operate out of Shipping + Handling’s West Coast office on the creative campus it shares with its sister company, editorial post house Spot Welders.

Image: (L-R) Mike Pethel, Matthew Schwab, Jerry Spivack


Matthew Bristowe joins Jellyfish as COO

UK-based VFX and animation studio Jellyfish Pictures has hired Matthew Bristowe as director of operations. With a career spanning over 20 years, Bristowe joins Jellyfish Pictures after a stint as head of production at Technicolor.

During his 20 years in the industry, Bristowe has overseen hundreds of productions, including Aladdin (Disney), Star Wars: The Last Jedi (Lucasfilm/Disney), Avengers: Age of Ultron (Marvel) and Guardians of the Galaxy (Marvel). In 2014, he was honored with the Advanced Imaging Society's Lumiere Award for his work on Alfonso Cuarón's Academy Award-winning Gravity.

Bristowe led the One Of Us VFX team to success in the category of Special, Visual and Graphic Effects at the BAFTAs and Best Digital Effects at the Royal Television Society Awards for The Crown Season 1. Another RTS award and BAFTA nomination followed in 2018 for The Crown Season 2. Prior to working with Technicolor and One of Us, Bristowe held senior positions at MPC and Prime Focus.

“Matt joining Jellyfish Pictures is a substantial hire for the company,” explains CEO Phil Dobree. “2019 has seen us focus on our growth, following the opening of our newest studio in Sheffield, and Matt’s extensive experience of bringing together creativity and strategy will be instrumental in our further expansion.”

Quick Chat: Bonfire Labs’ Mary Mathaisell

Over the course of nearly 30 years, San Francisco's Bonfire Labs has embraced change, evolving from an editorial and post house into a design and creative content studio that leverages the best aspects of the agency and production company models without adhering to either one.

This hybrid model has worked well for product launches for Google, Facebook, Salesforce, Logitech and many others.

The latest change is in the company’s ownership, with the last of the original founders stepping down and a new management partnership taking over — led by executive producer Mary Mathaisell, managing director Jim Bartel and head of strategy and creative Chris Weldon.

We spoke with Mathaisell to get a better sense of Bonfire Labs’ past, present and future.

Can you give us some history of Bonfire Labs? When did you join the company? How/why did you first get into producing?
I’ve been with Bonfire Labs for seven years. I started here as head of production. After being at several large digital agencies working on campaigns and content for brands like Target, Gap, LG and PayPal, I wanted to build something more sustainable than just another campaign and was thrilled that Bonfire was interested in growing into a full-service creative company with integrated production.

Prior to working at AKQA and Publicis, I worked in VFX and production as well as design for products and interfaces, but my primary focus and love has always been commercial production.

The studio has evolved from a traditional post studio to creative strategy and content company. What were the factors that drove those changes?
Bonfire Labs has always been smart about staying small and strategic about the kind of work and clients to focus on. We have been able to change based on both the kind of work we want to be doing and what the market needs. With a giant need for content, especially video content, we have decided to staff and service clients as experts across all the phases of creative development and production and finishing. Instead of going to an agency and a production company and post houses, our clients can work directly with us on everything from concept to finishing.

Silicon Valley is clearly a big client base for you. What are they generally coming to you for? Are the content needs in high tech different from other business sectors?
Our clients usually have a new product, feature or brand that they want the world to know about. We work on product launches, brand awareness campaigns, product education, event content and social content. Most of our work is for technology companies, but every company these days has a technology component. I would say that speed to market is one key differentiator for our clients. We are often building stories as we are in production, so we get a lot done with our clients through creative collaboration and by not following the traditional rules of an agency or a production company.

Any specific trends that you’re seeing recently from your clients? New areas that Bonfire is looking to explore, either new markets for your talents or technology you’re looking to explore further?
Rapid brand prototyping is a new service we are offering to much excitement. Because we have experience across so many technology brands and work closely with our clients, we can develop a language and brand voice faster than most traditional agencies. Technology brands are evolving so quickly that we often start working on content creation before a brand has defined itself or transitioned to its next phase. Rapid brand prototyping allows brands to test content and grow the brand simultaneously.

Blade Shadow

Can you talk about some projects that you have done recently that challenged you and the team?
We rolled out a launch film for a new start-up client called Blade Shadow. We are working with Salesforce to develop trailblazer stories and anthem films for its .org branch, which focuses on NGOs, education and philanthropy.

The company is undergoing a transition with some of the original partners. Can you talk about that a bit as well?
The original founders have passed the torch to the group of people who have been managing and producing the work over the past five to 15 years. We have six new owners, three managing partners and three associate partners. Jim Bartel is the managing director; Chris Weldon is the head of strategy and creative, and I’m the executive producer in charge of content development and production. The three of us make up the management team.

Sheila Smith (head of production), Robbie Proctor (head of editorial) and Phil Spitler (creative technology lead) are associate partners, as they contribute to and lead so much of our work and process and have each been part of the company for over 10 years.


Avid’s new control surfaces for Pro Tools, Media Composer, other apps

By Mel Lambert

During a recent come-and-see MPSE Sound Advice evening at Avid’s West Coast offices in Burbank, MPSE members and industry colleagues were treated to an exclusive look at two new control surfaces for editorial suites and film/TV post stages.

The S1 and S4 controllers join the current S3 and larger S6 control surfaces. Session files from all S Series surfaces are fully compatible with one another, enabling edit and mix session data to move freely from facility to facility. All surfaces provide comprehensive control of Eucon-enabled software, including Pro Tools, Cubase, Nuendo, Logic Pro, Media Composer and other apps to create and record tracks, write automation, control plugins, set up routing and a host of other essential operations via assignable faders, buttons and rotary controls.

S1

Jeff Komar, one of Avid’s pro audio solutions specialists, served as our guide during the evening’s demo sessions of the new surfaces for fully integrated sample-accurate editing and immersive mixing. Expected to ship toward the end of the year, the S1 is said to offer full software integration with Avid’s high-end consoles in a portable, slim-line surface, while the S4 — which reportedly begins shipping in September — is said to bring workstation control to small- to mid-sized post facilities in an ergonomic and compact package.

Pro-user prices start at $24,000 for a three-foot S4 with eight faders; a five-foot configuration with 24 on-surface faders and post-control sections should retail for around $50,000. The S1’s expected end-user price will be approximately $1,200.

The S4 provides extensive visual feedback, including switchable displays of channel meters, groups, EQ curves and automation data, in addition to scrolling Pro Tools waveforms that can be edited from the surface. The semi-modular architecture accommodates between eight and 24 assignable faders in eight-fader blocks, with add-on displays, joysticks, PEC/direct paddles and all-knob attention modules. The S4 also features assignable talkback, listenback and speaker sources/levels for Foley/ADR recording, plus Dolby Atmos and other immersive audio monitoring formats. The unit can command two connected playback/record workstations. In essence, the S4 replaces the current S6 M10 system.

Avid’s Jeff Komar

From recording and editing tracks to mixing and monitoring in stereo or surround, the smaller S1 surface provides comprehensive control and visual feedback with full-on Eucon compatibility for Pro Tools and Media Composer. There is also native support for third-party applications, such as Apple Logic Pro, Steinberg Cubase, Adobe Premiere Pro and others. Users can connect up to four units — and also add a Pro Tools|Dock — to create an extended controller. Each S1 has an upper shelf designed to hold an iOS- or Android-compatible tablet running the Pro Tools|Control app. With assignable motorized faders and knobs, as well as fast-access touchscreen workflows and programmable Soft Keys, the S1 is said to offer the speed and versatility needed to accelerate post and video projects.

Reaching deeper into the S4’s semi-modular topology, the surface can be configured with up to three Channel Strip Modules (offering a maximum of 24 faders), four Display Modules to provide visual feedback of each session, and up to three optional modules. The Display Module features a high-resolution TFT screen to show channel names, channel meters, routing, groups, automation data and DAW settings, as well as scrolling waveforms and master meters.

Eucon connectivity can be used to control two different software applications simultaneously, with single key presses handling plugin editing, session automation writing and other complex tasks. Adding joysticks, PEC/direct paddles and attention panels enables more functions to be controlled simultaneously from the modular control surface to handle various editing and mixing workflows.

S4

The Master Touch Module (MTM) provides fast access to mix and control parameters through a tilting 12.1-inch multipoint touchscreen, with eight programmable rotary encoders and dedicated knobs and keys. The Master Automation Module (MAM) streamlines session navigation and project automation and features a comprehensive transport control section with shuttle/jog wheel, a Focus Fader, automation controls and a numeric keypad. The Channel Strip Module (CSM) controls track levels, plugins and other parameters through eight channel faders, 32 top-lit knobs (four per channel) and other programmable keys and switches.

For mixing and panning surround and immersive audio projects, including Atmos and Ambisonics, the Joystick Module features a pair of controllers with TFT and OLED displays. The Post Module enables switching between live and recorded tracks/stems through two rows of 10 PEC/direct paddles, while the Attention Knob Module features 32 top-lit knobs — or up to 64 via two modules — to provide extra assignable controls and feedback for plugins, EQ, dynamics, panning and more.

Depending on the number of Channel Strip Modules and other options, a customized S4 surface can be housed in a three-, four- or five-foot pre-assembled frame. As a serving suggestion, the S4-3_CB_Top includes one CSM, one MTM, one MAM and filler panels/plates in a three-foot frame; at the top end, a 24-fader, five-foot base system includes three CSMs, one MTM, one MAM and filler panels/plates.

My sincere thanks to members of Avid’s Burbank crew, including pro audio solutions specialists Tony Joy and Gil Gowing, together with Richard McKernan, professional console sales manager for the western region, for their hospitality and patience with my probing questions.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

An artist’s view of SIGGRAPH 2019

By Andy Brown

While I’ve been lucky enough to visit NAB and IBC several times over the years, this was my first SIGGRAPH. Of course, there are similarities. There are lots of booths, lots of demos, lots of branded T-shirts, lots of pairs of black jeans and a lot of beards. I fit right in. I know we’re not all the same, but we certainly looked like it. (The stats regarding women and diversity in VFX are pretty poor, but that’s another topic.)

Andy Brown

You spend your whole career in one industry and I guess you all start to look more and more like each other. That’s partly the problem for the people selling stuff at SIGGRAPH.

There were plenty of compositing demos of all sorts of software. (Blackmagic was running a hands-on class for 20 people at a time.) I'm a Flame artist, so I think that Autodesk's offering is best, obviously. Everyone's compositing tool can play back large files and color correct, composite, edit, track and deliver, so in the midst of a buzzy trade show, the differences feel far fewer than the similarities.

Mocap
Take the world of tracking and motion capture as another example. There were more booths demonstrating tracking and motion capture than anything else in the main hall, and all that tech came in different shapes and sizes and an interesting mix of hardware and software.

The motion capture solution required for a Hollywood movie isn’t the same as the one to create a live avatar on your phone, however. That’s where it gets interesting. There are solutions that can capture and translate the movement of everything from your fingers to your entire body using hardware from an iPhone X to a full 360-camera array. Some solutions used tracking ball markers, some used strips in the bodysuit and some used tiny proximity sensors, but the results were all really impressive.

Vicon

Some tracking solution companies had different versions of their software and hardware. If you don’t need all of the cameras and all of the accuracy, then there’s a basic version for you. But if you need everything to be perfectly tracked in real time, then go for the full-on pro version with all the bells and whistles. I had a go at live-animating a monkey using just my hands, and apart from ending with him licking a banana in a highly inappropriate manner, I think it worked pretty well.

AR/VR
AR and VR were everywhere, too. You couldn't throw a peanut across the room without hitting someone wearing a VR headset. They'd probably be able to bat it away whilst thinking they were Joe Root or Max Muncy (I had to Google him), with the real peanut replaced by a red or white leather projectile. Haptic feedback made a few appearances, too, so expect to be able to feel those virtual objects very soon. Some of the biggest queues were at the North stand, where the company was showing glasses that looked like the ones everyone was already wearing (like mine, obviously), except these incorporated a head-up display. I have mixed feelings about this. Google Glass didn't last very long for a reason, although I don't think North's glasses have a camera in them, which makes things feel a bit more comfortable.

Nvidia

Data
One of the central themes for me was data, data and even more data. Whether you are interested in how to capture it, store it, unravel it, play it back or distribute it, there was a stand for you. This mass of data was being managed by really intelligent components and software. I was expecting to be writing all about artificial intelligence and machine learning from the show, and it’s true that there was a lot of software that used machine learning and deep neural networks to create things that looked really cool. Environments created using simple tools looked fabulously realistic because of deep learning. Basic pen strokes could be translated into beautiful pictures because of the power of neural networks. But most of that machine learning is in the background; it’s just doing the work that needs to be done to create the images, lighting and physical reactions that go to make up convincing and realistic images.

The Experience Hall
The Experience Hall was really great because no one was trying to sell me anything. It felt much more like an art gallery than a trade show. There were long waits for some of the exhibits (although not for the golf swing improver that I tried), and it was all really fascinating. I didn't want to take part in the experiment that recorded your retina scan and made some art out of it because, well, you know, it's my retina scan. I also felt a little reluctant to check out the booth that made light-based animated artwork derived from your date, time and location of birth. But maybe all of these worries are because I've just finished watching the Netflix documentary The Great Hack. I can't help but think that a better source of the data might be something a little less sinister.

The walls of posters back in the main hall described research projects that hadn’t yet made it into full production and gave more insight into what the future might bring. It was all about refinement, creating better algorithms, creating more realistic results. These uses of deep learning and virtual reality were applied to subjects as diverse as translating verbal descriptions into character design, virtual reality therapy for post-stroke patients, relighting portraits and haptic feedback anesthesia training for dental students. The range of the projects was wide. Yet everyone started from the same place, analyzing vast datasets to give more useful results. That brings me back to where I started. We’re all the same, but we’re all different.

Main Image Credit: Mike Tosti


Andy Brown is a Flame artist and creative director of Jogger Studios, a visual effects studio with offices in Los Angeles, New York, San Francisco and London.

Behind the Title: Compadre’s Mika Saulitis

This creative started writing brand campaigns for his favorite oatmeal at eight years old.

NAME: Mika Saulitis

COMPANY: Culver City, California’s Compadre

CAN YOU DESCRIBE YOUR COMPANY?
We’re a creative marketing agency. I could get into the nuts and bolts of our process and services, but what we specialize in is pretty simple: building a brand’s story, telling that story and spreading that story everywhere people can fall in love with it.

WHAT’S YOUR JOB TITLE?
Director of Creative Strategy

WHAT DOES THAT ENTAIL?
The short answer is that I oversee brand strategy and integrated marketing campaigns. The longer answer is that our creative strategy team’s primary goal is to take complex insights and business challenges and develop simple, clear creative solutions. Sometimes that’s renaming a company or developing a new brand position, or conceiving a big “hook” for a 360 marketing campaign and rippling it out across on-air, social, experiential, and brand partnerships.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Identifying the unique differentiator of a brand or product and figuring out how to express that in a succinct, unexpected way.

WHAT’S YOUR LEAST FAVORITE?
Proofreading 150-page presentations.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
9am. Once that coffee hits, I’m off to the races.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I always wanted to be a garbage man growing up, so if they still ride along on the back of the truck, probably that.

WHY DID YOU CHOOSE THIS PROFESSION?
I started writing ads for my favorite oatmeal to convert my classmates when I was eight years old, so I’ve been a marketer at heart for pretty much my whole life.

Freeform

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We just developed a brand campaign for Freeform that rejects dated societal norms. Our concept, “It’s Not Us, It’s You,” was a breakup letter to society; we shot real people, as well as the network’s talent, and empowered them to speak their piece and break up with all the things that suck about societal standards.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Noise-canceling headphones, Apple TV and my bike. That was technology at one point, right?

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
I’m not ashamed to admit that Flo Rida gets my creative juices flowing.

THIS IS A HIGH STRESS JOB WITH DEADLINES AND CLIENT EXPECTATIONS. WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Golf is my ultimate stress reliever. Being surrounded by trees, chirping birds and the occasional “fore” puts me at ease.

Maxon intros Cinema 4D R21, consolidates versions into one offering

By Brady Betzel

At SIGGRAPH 2019, Maxon introduced the next release of its graphics software, Cinema 4D R21. Maxon also announced a subscription-based pricing structure as well as a very welcomed consolidation of its Cinema 4D versions into a single version, aptly titled Cinema 4D.

That’s right, no more Studio, Broadcast or BodyPaint. It all comes in one package at one price, and that pricing will now be subscription-based — but don’t worry, the online anxiety over this change seems to have been misplaced.

The cost of Cinema 4D R21 has been dropped substantially, kicking off what Maxon is calling its "3D for the Real World" initiative. Maxon wants Cinema 4D to be the tool you choose for your graphics needs.

If you plan on upgrading every year or two, the new subscription-based model seems to be a great deal (see the cost comparison after the list):

– Cinema 4D subscription paid annually: $59.99/month
– Cinema 4D subscription paid monthly: $94.99/month
– Cinema 4D subscription with Redshift paid annually: $81.99/month
– Cinema 4D subscription with Redshift paid monthly: $116.99/month
– Cinema 4D perpetual pricing: $3,495 (upgradeable)
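
As a back-of-the-envelope check on those numbers, first-year costs under each plan work out as follows (computed from the list prices above):

```python
# First-year cost of each Cinema 4D R21 plan, from the prices above.
plans = {
    "Annual subscription": 59.99 * 12,
    "Monthly subscription": 94.99 * 12,
    "Annual + Redshift": 81.99 * 12,
    "Monthly + Redshift": 116.99 * 12,
    "Perpetual license": 3495.00,  # one-time, upgradeable
}
for name, cost in plans.items():
    print(f"{name:>21}: ${cost:,.2f}")
# Annual billing saves $420.00/year over monthly ($719.88 vs. $1,139.88),
# and the perpetual license costs roughly 4.9 years of the annual plan.
```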

Maxon did mention that if you have previously purchased Cinema 4D, there will be subscription-based upgrade/crossgrade deals coming.

The Updates
Cinema 4D R21 includes some great updates that will be welcomed by many users, both new and experienced. The new Field Force dynamics object allows the use of dynamic forces in modeling and animation within the MoGraph toolset. An all-new caps and bevels system not only allows the extrusion of 3D logos and text effects but also integrates caps and bevels on all spline-based objects.

Furthering Cinema 4D’s integration with third-party apps, there is an all-new Mixamo Control rig allowing you to easily control any Mixamo characters. (If you haven’t checked out the models from Mixamo, you should. It’s a great way to find character rigs fast.)

An all-new Intel Open Image Denoise integration has been added to R21 in what seems like part of a rendering revolution for Cinema 4D. From the acquisition of Redshift to this integration, Maxon is expanding its third-party reach and doesn't seem scared.

There is a new Node Space, which shows what materials are compatible with chosen render engines, as well as a new API available to third-party developers that allows them to integrate render engines with the new material node system. R21 has overall speed and efficiency improvements, with Cinema 4D supporting the latest processor optimizations from both Intel and AMD.

All this being said, my favorite update — or map toward the future — was actually announced last week. Unreal Engine added Cinema 4D .c4d file support via the Datasmith plugin, which is featured in the free Unreal Studio beta.

Today, Maxon is also announcing its integration with yet another game engine: Unity. In my opinion, the future lies in this mix of real-time rendering alongside real-world television and film production as well as gaming. With Cinema 4D, Maxon is bringing all sides to the table with a mix of 3D modeling, motion-graphics-building support, motion tracking, integration with third-party apps like Adobe After Effects via Cineware, and now integration with real-time game engines like Unreal Engine. Now I just have to learn it all.

Cinema 4D R21 will be available on both Mac OS and Windows on Tuesday, Sept. 3. In the meantime, watch out for some great SIGGRAPH presentations, including one from my favorite, Mike Winkelmann, better known as Beeple. You can find some past presentations on how he uses Cinema 4D to cover his “Everydays.”

Skywalker Sound’s audio post mix for Toy Story 4

By Jennifer Walden

Pixar’s first feature-length film, 1995’s Toy Story, was a game-changer for animated movies. There was no going back after that blasted onto screens and into the hearts of millions. Fast-forward 24 years to the franchise’s fourth installment — Toy Story 4 — and it’s plain to see that Pixar’s approach to animated fare hasn’t changed.

Visually, Toy Story 4 brings so much to the screen, with its near-photorealistic imagery, interesting camera angles and variations in depth of field. “It’s a cartoon, but not really. It’s a film,” says Skywalker Sound’s Oscar-winning re-recording mixer Michael Semanick, who handled the effects/music alongside re-recording mixer Nathan Nance on dialogue/Foley.

Nathan Nance

Here, Semanick and Nance talk about their approach to mixing Toy Story 4, how they use reverb and Foley to bring the characters to life, and how they used the Dolby Atmos surround field to make the animated world feel immersive. They also talk about mixing the stunning rain scene, the challenges of mixing the emotional carnival scenes near the end and mixing the Bo Peep and Woody reunion scene.

Is your approach to mixing an animated film different from how you’d approach the mix on a live-action film? Mix-wise, what are some things you do to make an animated world feel like a real place?
Nathan Nance: The approach to the mix isn’t different. No matter if it’s an animated movie or a live-action movie, we are interested in trying to complement the story and direct the viewer’s attention to whatever the director wants their attention to be on.

With animation, you’re starting with just the ADR, and the approach to the whole sound job is different because you have to pick and choose every single sound and really create those environments. Even with the dialogue, we’re creating spaces with reverb (or lack of reverb) and helping the emotions of the story in the mix. You might not have the same options in a live-action movie.

Michael Semanick

Michael Semanick: I don’t approach a film differently. Live action or animated, it comes down to storytelling. In today’s world, some of these live-action movies are like animated films. And the animated films are like live-action. I’m not sure which is which anymore.

Whether it’s live action or animation, the sound team is creating the environments. For live-action, they’re often shooting on a soundstage or they’re shooting on greenscreen, and the sound team creates those environments. For live-action films, they try to get the location to be as quiet as it can be to get the dialogue as clean as possible. So, the sound team is only working with dialogue and ADR.

It’s like an animation in that they need to recreate the entire environment. The production sound mixer is trying to capture the dialogue and not the extraneous sounds. The production sound mixer is there to capture the performance from the actors on that day at that time. Sometimes there are production effects, but the post sound team still preps the scene with sound effects, Foley and loop group. Then on the dub stage, we choose how much of that to put in.

For an animated film, they do the same thing. They prep a whole bunch of sounds and then on the dub stage we decide how busy we want the scene to be.

How do you use reverb to help define the spaces and make the animated world feel believable?
Semanick: Nathan really sets the tone when he’s doing the dialogue, defining how the environments and different spaces are going to sound. That works in combination with the background ambiences. It’s really the voice bouncing off objects that gives you the sense of largeness and depth of field. So reverb is really important in establishing the size of the room and also outdoors — how your voice slaps off a building versus how it slaps off of trees or mountains. Reverb is a really essential tool for creating the environments and spaces that you want to put your actors or characters in.

Nance: You can use reverb to try and make the spaces sound “real” — whatever that means for cinema. Or, you can use it to create something that’s more emotional or has a certain vibe. Reverb is really important for making the dry dialogue sound believable, especially in these Pixar films. They are all in on the environments they’ve created. They want it to sound real and really put the viewer there. But then, there are moments when we use reverb creatively to push the moment further and add to the emotional experience.
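
For readers curious about the mechanics: the standard tool for placing a dry recording in a space is convolution reverb, which convolves the signal with a room's impulse response. The toy numpy/scipy sketch below uses a synthetic impulse response purely to illustrate the operation; it is in no way the mixers' actual workflow.

```python
# Toy convolution reverb: convolve a dry signal with a room impulse
# response (IR). The IR here is synthetic decaying noise; real work uses
# IRs measured in (or modeled on) actual spaces.
import numpy as np
from scipy.signal import fftconvolve

sr = 48_000
rng = np.random.default_rng(1)

# Synthetic IR: exponentially decaying noise, roughly a 1.2-second tail.
t = np.arange(int(sr * 1.2)) / sr
ir = rng.standard_normal(t.size) * np.exp(-t / 0.3)
ir /= np.abs(ir).sum()  # crude normalization to keep levels sane

def add_room(dry: np.ndarray, wet_mix: float = 0.3) -> np.ndarray:
    """Blend the dry signal with its convolved (reverberant) version."""
    wet = fftconvolve(dry, ir)[: dry.size]
    return (1.0 - wet_mix) * dry + wet_mix * wet

# A longer decay and a higher wet_mix read as a bigger space; wet_mix
# near zero is the dry ADR starting point described in the interview.
```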

What are some other things you do mix-wise to help make this animated world feel believable?
Semanick: The addition of Foley helps ground a lot of the animation. Those natural sounds, like footsteps and movements, we take for granted — just walking down the street or sitting in a restaurant. Those become a huge part of these films. The Foley helps to ground the animation. It gives it life, something to hold onto.

Foley is a big part of making the animated world feel believable. You have Foley artists performing to the actual picture, and the way they put a cup down or how they come to a stop adds character to the sound. It can make it sound more human, more real. Really good Foley artists can become the character. They pick up on the nuances — like how the character drags their feet or puts down a cup. All those little things we take for granted but they are all part of our character. Maybe the way you hold a wine glass and set it down is different from how I would do it. So good Foley artists tune into that right away, and they’ll match it with their performance. They’ll put one edge of the cup down and then the other if that’s how the character does it. So Foley helps to ground a lot of the animation and the VFX to reality. It adds realism. Give it up for the Foley artists!

Nance: So many times the sounds that are in Foley are the ones we recognize and take for granted. You hear those little sounds and think, yeah, that’s exactly what that sounds like. It’s because the Foley artists perform it and these are sounds that you recognize from everyday life. That adds to the realism, like Michael said.

Mix-wise, it must have been pretty difficult to push the subtle sounds through a full mix, like the sounds of the little spork named Forky. What are some techniques and sound tools that help you to get these character sounds to cut through?
Semanick: Director Josh Cooley was very particular about the sounds Forky was going to make. Supervising sound editors Ren Klyce and Coya Elliott and their team went out and got a big palette of sounds for different things.

We weeded through them here with Josh and narrowed it down. Josh then kind of left it up to me. He said he just wanted to hear Forky when he needed to hear him and then not ever have to think about it. The problem with Forky is that if there’s too much sound for him then you’re constantly watching what he’s doing as opposed to listening to what he’s saying. I was very diligent about weeding things out a lot of the time and adding sounds in for the eye movements and other tiny, specific sounds. But there’s not much sound in there for him. It’s just the voice because often his sounds were getting in the way of the dialogue and being distracting. We were very diligent about choosing what to hear and not to hear. Josh was very particular about what those sounds should be. He had been working with Ren on those for a couple months.

In balancing a film (and particularly Toy Story 4 with so many characters and so much going on), you have to really pick and choose sounds. You don’t want to pull the audience away in a direction you don’t want. That was one of the main things for Forky — getting his sounds right.

The opening rain scene was stunning! What was your approach to mixing that scene? How did you use the Dolby Atmos surround field to enhance it?
Semanick: That was a tough scene to mix. There is a lot of rain coming down and the challenge was how to get clarity out of the scene and make sure the audience can follow what was happening. So the scene starts out with rain sounds, but during the action sequence there’s actually no rain in the track.

Amazingly, your human ears and your brain fill in that information. I establish the rain and then when the action starts I literally pull all of the rain out. But your mind puts the rain there still. You think you hear it but it’s actually not there. When the track gets quiet all of a sudden, I bring the rain back up so you never miss the rain. No one has ever said anything about not hearing the rain.

I love the sound of rain; don’t get me wrong. I love the sound of rain on windows, rain on cars, rain on metals… Ren and his team did such an amazing job with that. We had a huge palette of rain. But there’s a certain point in the scene where we need the audience to focus on all of the action that’s happening, what’s really going on.

There’s Woody and Slinky Dog being stretched and RC in the gutter, and all this. So when I put all of the sounds up there you couldn’t make out anything. It was confusing. So I pulled all of the rain out. Then we put in all of the specific sounds. We made sure all of the dialogue, music and sounds worked together so the audience could follow the action. Then I went back through and added the rain back in. When we didn’t need it, I drifted it out. And when we needed it, I brought it back in. It took a lot of time to do that and some careful balancing to make it work.

That was a fun thing to do, but it took time. We’re working on a movie that kids and adults are going to see. We didn’t want to make it too loud. We wanted to make it comfortable. But it’s an action scene, so you want it to be exciting. And it had to work with the music. We were very careful about how loud we made things. When things started to hurt, we pulled it all back. We were diligent about keeping control of the volume and getting those balances was very difficult. We don’t want to make it too quiet, but it’s exciting. If we make it too loud then that pushes you away and you don’t pay attention.

That scene was fun in Dolby Atmos. I had the rain all around the theater, in the ceiling. But it does go away and comes back in when needed. It was a fun thing to do.

Did you have a favorite scene for mixing in Atmos?
Semanick: One of my favorite scenes for Atmos was when Bo Peep takes Woody to the top of the carousel and she asks why Woody would ever want to stay with one kid when you can have all of this. I do a subtle thing with the music — there are a few times in the film where I do this — where I pull the music forward as they’re climbing to the top of the carousel. There’s no music in the surrounds or the tops. I pull it so far forward that it’s almost mono.

Then, as they pop up from atop the carousel and the camera sweeps around, I let the music open up. I bloom it into the surrounds and into the overheads. I bloom it really hard with the camera moves. If you’re paying attention, you will feel the music sweep around you. You’re just supposed to feel it, not to really know that it happened. That’s one of the mixing techniques that I learned over the years. The picture editor, Axel Geddes, would ask me to make it “magical” and put more “magic” into it. I started to interpret that as: fill up the surrounds more.
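
Semanick's bloom is a fader move rather than software, but the shape of it can be sketched. In the toy Python below (the channel layout and weights are invented for illustration), a bloom parameter of 0 keeps a music stem locked to the front, almost mono, and ramping it toward 1 spills energy into the surrounds and overheads.

import numpy as np

def bloom_mix(mono, bloom):
    # Distribute a mono music stem across a simplified Atmos-style bed:
    # bloom=0 is front-locked (almost mono), bloom=1 fills the room.
    weights = {
        "front":    1.0 - 0.5 * bloom,  # front stays anchored
        "surround": 0.8 * bloom,        # opens into the surrounds
        "height":   0.6 * bloom,        # ...and into the overheads
    }
    return {ch: mono * g for ch, g in weights.items()}

t = np.linspace(0, 4, 4 * 48000)        # 4 seconds of stand-in music
music = np.sin(2 * np.pi * 220 * t)
bloom = np.clip(t - 1.0, 0.0, 1.0)      # hold front-locked 1s, then bloom
bed = bloom_mix(music, bloom)           # per-channel stems to audition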

One of the best parts of Atmos is that you have surrounds that are the same as the front speakers so the sound doesn’t fall off. It’s more full-range because it has bass management toward the back. That helps me, mix-wise, to really bring the sound into the room and fill the room out when I need to do that. There are a few scenes like that and Nathan would look at me funny and say, “Wow, I really hear it.”

We’re so concentrated on the sound. I’m just hoping that the audience will feel it wrap around them and give them a good sense of warmth. I’m trying to help push the emotional content. The music was so good. Randy Newman did a great job on a lot of the music. It really helped the story and I wanted to help that be the best it could be emotionally. It was already there, but I just wanted to give that little extra. Pulling the music into the front and then pushing out into the whole theater gave the music an emotional edge.

Nance: There are a couple of fun Atmos moments for effects. When they’re in the dark closet and the sound is happening all around. Also, when Woody wakes up from his voice box removal surgery. Michael was bringing the sewing machine right up into the overheads. We have the pull string floating around the room and into the ceiling. Those two moments were a pretty cool use of the point-source and the enveloping capability of Atmos.

What was the most challenging scene to mix? Why?
Nance: The whole scene with the lost girl and Gabby all the way through the toys’ goodbyes. That was two full sections, but we get so quiet even though there’s a huge carnival happening. It was a huge cheat. It took a lot of work to get into these quiet, delicate moments where we take everything out, all the backgrounds, and it’s very simple. Michael pulled the music forward in some of those spots and the whole mix becomes very simple and quiet. You’re almost holding your breath in these different moments with the goodbyes. Sometimes we think of the really loud, bombastic scenes as being tough. And they were! The escape from the antique store took quite a lot of work to balance and shape. But I think the quiet, delicate scenes take more work because they take more shaping.

Semanick: I agree. Those areas were very difficult. There was a whole carnival going on and I had to strip it all down. I had my moments. When they’re together above the carnival, it looks beautiful up there. The carnival rides behind them are blurry and we didn’t need to hear the sounds. We heard them before. We know what they sound like. Plus, that moment was with the toys. We were just with them. The whole world has dissolved, and the sound of the world too. You see the carnival back there, but you’re not really paying attention to it. You’re paying attention to Woody and Bo Peep or Gabby and the lost girl.

Another interesting scene was when Woody and Forky first walk through the antique store. The tones in each place change, and the reverbs on the voices change in every single room. The challenge was how to establish the antique store. It's very quiet, so we were very specific on each cut. Where are they? What's around them? How high is the camera sitting? You start looking closely at the scene. I was able to do things with Atmos, put things in the ceiling.

What scene went through the most evolution mix-wise? What were some of the different ways you tried mixing it? Ultimately, why did you go with the way it’s mixed in the final?
Semanick: There’s a scene when Woody and Bo Peep reunite on the playground. A little girl picks up Woody and she has Bo Peep in her hands. They meet again for the first time. That scene went through changes musically and dialogue-wise. What do we hear? How much of the girl do we hear before we see Bo Peep and Woody looking at each other? We tried several different ways. There were many opinions that came in on that. When does the music bloom? When does it fill the room out? Is the score quite right? They recut the score. They had a different version.

That scene went through quite a bit of ups and downs. We weren’t sure which way to go. Ultimately, Josh was happy with it, and it plays well.

There was another version of Randy’s score that I liked. But, it’s not about what I like. It’s about how the overall room feels — if everybody feels like it’s the best that we can do. If that’s yes, then that’s the way it goes. I’ll always speak up if I have ideas. I’ll say, “Think about this. Think about that.”

That scene went through some changes, and I’m still on the fence. It works great, but I know there’s another version of the music that I preferred. I’ll just have to live with that.

Nance: We just kept trying things out on that scene until we had it feeling good, like it was hitting the right beats. We had to figure out what the timing was, what would have the most emotional impact. That’s why we tried out so many different versions.

Semanick: That’s a big moment in the film. It’s what starts the back half of the film. Woody gets reacquainted with Bo Peep and then we’re off to the races.

What console did you mix Toy Story 4 on and why?
Semanick: We both mixed on the Neve DFC. It's my console of choice. I love the console; I love the way it sounds. I love that it has separate automation. There's the editors' automation that they did, and I can change my automation without affecting theirs. It's the best of both worlds. It runs really smoothly. It's one of the best-sounding consoles around.

Nance: I really enjoy working on the Neve DFC. It’s my console of choice when there’s the option.

Semanick: There are a lot of different consoles and control surfaces you can use now, but I’m used to the DFC. I can really play the console as a musical instrument. It’s like a performance. I can perform these balances. I can grab knobs and change EQ or add reverb and pull things back. It’s like a performance and that console seems the most reliable one for me. I know it really well. It helps when you know your instrument.

Any final thoughts you’d like to share on mixing Toy Story 4?
Semanick: With these Pixar films, I get to benefit from the great storytelling and what they've done visually. In all aspects of these films — from the cinematography to the lighting to the character development, costumes and set design — Pixar has spent so many hours debating how things are going to look.

So, on the sound side, it's about matching what they've done. How can I help support it? It's amazing to me how much time they spend on these films. It's hardcore filmmaking. It's a cartoon, but not really. It's a film, and it's a really good film. You look at all the aspects of it, like how the camera moves. It's not a real camera, but you're watching through the lens, seeing the camera angles, where and how they place the camera. They have to debate all that.

One of the hardest scenes for them must have been when Bo Peep and Woody are in the antique store and they turn and look at all the chandeliers. It was gorgeous, a beautiful shot. I bloom the music out there, around the theater. That was a delicate scene. When you look at the filmmaking they’re doing there and the reflections of the lights, you know they’re good. They’re really good.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Dalet to acquire Ooyala Flex Media Platform business

Dalet, a provider of solutions and services for content pros and broadcasters, has signed a definitive agreement to acquire the Ooyala Flex Media Platform business. The asset deal includes the Ooyala Flex Media Platform, as well as Ooyala personnel across sales, marketing, engineering, professional services and support.

The Ooyala Flex Media Platform, which is primarily sold as a subscription/SaaS offering, services OTT and digital media distribution workflows. The acquisition of these assets, personnel, and customers will expand the Dalet solutions offering to more verticals and tiers beyond its traditional customer base in production and news workflows, and will accelerate Dalet’s strategic move to increase recurring revenue models, with a subscription/SaaS-based services offering.

“By acquiring Ooyala, Dalet widens the markets it can address in terms of verticals and their respective tiers of complexity. A perfect complement to our existing Dalet Galaxy five offering in our traditional markets, the Ooyala Flex Media Platform also opens opportunities for new customers such as corporate brands, telcos, leagues and sports teams, who are looking to simply manage their media assets. The modern metadata management and orchestration capabilities of the Ooyala Flex Media Platform bring what these organizations need to lower TCO, improve agility and reduce time to market,” says David Lasry, chief executive officer for Dalet.

Virtual Production Field Guide: Fox VFX Lab’s Glenn Derry

Just ahead of SIGGRAPH, Epic Games has published a resource guide called “The Virtual Production Field Guide” — a comprehensive look at how virtual production impacts filmmakers, from directors to the art department to stunt coordinators to VFX teams and more. The guide is workflow-agnostic.

The use of realtime game engine technology has the potential to impact every aspect of traditional filmmaking, and the trend is increasingly being used in productions ranging from films like Avengers: Endgame and the upcoming Artemis Fowl to TV series like Game of Thrones.

The Virtual Production Field Guide offers an in-depth look at different types of techniques from creating and integrating high-quality CG elements live on set to virtual location scouting to using photoreal LED walls for in-camera VFX. It provides firsthand insights from award-winning professionals who have used these techniques – including directors Kenneth Branagh and Wes Ball, producers Connie Kennedy and Ryan Stafford, cinematographers Bill Pope and Haris Zambarloukos, VFX supervisors Ben Grossmann and Sam Nicholson, virtual production supervisors Kaya Jabar and Glenn Derry, editor Dan Lebental, previs supervisor Felix Jorge, stunt coordinators Guy and Harrison Norris, production designer Alex McDowell, and grip Kim Heath.

As mentioned, the guide is dense with information, so we decided to run an excerpt to give you an idea of what it covers.

Glenn Derry

Here is an interview with Glenn Derry, founder and VP of visual effects at Fox VFX Lab, which offers a variety of virtual production services with a focus on performance capture. Derry is known for his work as a virtual production supervisor on projects like Avatar, Real Steel and The Jungle Book.

Let’s find out more.

How has performance capture evolved since projects such as The Polar Express?
In those earlier eras, there was no realtime visualization during capture. You captured everything as a standalone piece, and then you did what they called the director layout. After the fact, you would assemble the animation sequences from the motion data captured. Today, we've got a combo platter where we're able to visualize in realtime.
When we bring a cinematographer in, he can start lining up shots with another device called the hybrid camera. It's a tracked reference camera that he can hand-hold. I can immediately toggle between an Unreal overview or a camera view of that scene.

The earlier process was minimal in terms of aesthetics. We did everything we could in MotionBuilder, and we made it look as good as it could. Now we can make a lot more mission-critical decisions earlier in the process because the aesthetics of the renders look a lot better.

What are some additional uses for performance capture?
Sometimes we’re working with a pitch piece, where the studio is deciding whether they want to make a movie at all. We use the capture stage to generate what the director has in mind tonally and how the project could feel. We could do either a short little pitch piece or, for something like Call of the Wild, we created 20 minutes and three key scenes from the film to show the studio we could make it work.

The second the movie gets greenlit, we flip over into preproduction. Now we’re breaking down the full script and working with the art department to create concept art. Then we build the movie’s world out around those concepts.

We have our team doing environmental builds based on sketches. Or in some cases, the concept artists themselves are in Unreal Engine doing the environments. Then our virtual art department (VAD) cleans those up and optimizes them for realtime.

Are the artists modeling directly in Unreal Engine?
The artists model in Maya, Modo, 3ds Max, etc. — we’re not particular about the application as long as the output is FBX. The look development, which is where the texturing happens, is all done within Unreal. We’ll also have artists working in Substance Painter and it will auto-update in Unreal. We have to keep track of assets through the entire process, all the way through to the last visual effects vendor.
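
As a sketch of that application-agnostic hand-off (a minimal illustration only, with the file paths and manifest format invented for the example), a pipeline gate might accept only FBX from an artist drop folder and log each asset so it can be tracked through to the last VFX vendor:

import json
from pathlib import Path

def ingest_assets(drop_folder, manifest_path="asset_manifest.json"):
    # Accept only .fbx files from a drop folder and append them to a
    # simple JSON manifest used to track assets through the pipeline.
    manifest = []
    if Path(manifest_path).exists():
        manifest = json.loads(Path(manifest_path).read_text())
    for f in sorted(Path(drop_folder).glob("*")):
        if f.suffix.lower() != ".fbx":
            print(f"rejected (not FBX): {f.name}")
            continue
        manifest.append({"asset": f.name, "bytes": f.stat().st_size})
        print(f"ingested: {f.name}")
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

# ingest_assets("/incoming/vad_drop")  # hypothetical drop folder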

How do you handle the level of detail decimation so realtime assets can be reused for visual effects?
The same way we would work on AAA games. We begin with high-resolution detail and then use combinations of texture maps, normal maps and bump maps. That allows us to get high-texture detail without a huge polygon count. There are also some amazing LOD [level of detail] tools built into Unreal, which enable us to take a high-resolution asset and derive something that looks pretty much identical unless you’re right next to it, but runs at a much higher frame rate.
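
Unreal's LOD tools do this work internally; the toy Python below only illustrates the underlying idea Derry describes: a decimated version of an asset is swapped in as its projected screen size shrinks, so it reads as identical unless you're right next to it. The thresholds and triangle budgets are invented for the example.

LODS = [  # (minimum screen fraction, triangle budget); invented numbers
    (0.50, 200_000),  # LOD0: hero close-up, full detail
    (0.15,  50_000),  # LOD1: mid-ground
    (0.02,  10_000),  # LOD2: background
    (0.00,   1_000),  # LOD3: distant speck
]

def select_lod(bounds_radius, distance, fov_tan=0.414):
    # Approximate the fraction of screen height the object covers,
    # then return the triangle budget of the LOD to render.
    screen_fraction = bounds_radius / (distance * fov_tan)
    for min_fraction, budget in LODS:
        if screen_fraction >= min_fraction:
            return budget
    return LODS[-1][1]

print(select_lod(1.0, 2.0))    # close to camera: full-detail budget
print(select_lod(1.0, 200.0))  # far away: heavily decimated budget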

Do you find there’s a learning curve for crew members more accustomed to traditional production?
We’re the team productions come to do realtime on live-action sets. That’s pretty much all we do. That said, it requires prep, and if you want it to look great, you have to make decisions. If you were going to shoot rear projection back in the 1940s or Terminator 2 with large rear projection systems, you still had to have all that material pre-shot to make it work.
It’s the same concept in realtime virtual production. If you want to see it look great in Unreal live on the day, you can’t just show up and decide. You have to pre-build that world and figure out how it’s going to integrate.

The visual effects team and the virtual production team have to be involved from day one. They can’t just be brought in at the last minute. And that’s a significant change for producers and productions in general. It’s not that it’s a tough nut to swallow, it’s just a very different methodology.

How does the cinematographer collaborate with performance capture?
There are two schools of thought: one is to work live with camera operators, shooting the tangible part of the action that’s going on, as the camera is an actor in the scene as much as any of the people are. You can choreograph it all out live if you’ve got the performers and the suits. The other version of it is treated more like a stage play. Then you come back and do all the camera coverage later. I’ve seen DPs like Bill Pope and Caleb Deschanel pick this right up.

How is the experience for actors working in suits and a capture volume?
One of the harder problems we deal with is eye lines. How do we assist the actors so that they're immersed in this and don't just look around at a bunch of gray-box material on a set? On any modern visual effects movie, you're going to be standing in front of a 50-foot-tall bluescreen at some point.

Performance capture is in some ways more actor-centric than a traditional set because there aren't all the other distractions in a volume, such as complex lighting and camera setup time. The director gets to focus in on the actors. The challenge is getting the actors to interact with something unseen. We'll project pieces of the set on the walls and use lasers for eye lines. The quality of the HMDs today is also excellent for showing the actors what they would be seeing.

How do you see performance capture tools evolving?
I think a lot of the stuff we're prototyping today will soon be available to consumers, home content creators, YouTubers, etc. A lot of what Epic develops also gets released in the engine. Money won't be the driver in terms of being able to use the tools; your creative vision will be.

My teenage son uses Unreal Engine to storyboard. He knows how to do fly-throughs and use the little camera tools we built — he’s all over it. As it becomes easier to create photorealistic visual effects in realtime with a smaller team and at very high fidelity, the movie business will change dramatically.

Something that used to cost $10 million to produce might be a million or less. It’s not going to take away from artists; you still need them. But you won’t necessarily need these behemoth post companies because you’ll be able to do a lot more yourself. It’s just like desktop video — what used to take hundreds of thousands of dollars’ worth of Flame artists, you can now do yourself in After Effects.

Do you see new opportunities arising as a result of this democratization?
Yes, there are a lot of opportunities. High-quality, good-looking CG assets are still expensive to produce and expensive to make look great. There are already stock sites like TurboSquid and CGTrader where you can purchase beautiful assets economically.

But with the final assembly and coalescing of environments and characters there’s still a lot of need for talented people to do it effectively. I can see companies emerging out of that necessity. We spend a lot of time talking about assets because it’s the core of everything we do. You need to have a set to shoot on and you need compelling characters, which is why actors won’t go away.

What’s happening today isn’t even the tip of the iceberg. There are going to be 50 more big technological breakthroughs along the way. There’s tons of new content being created for Apple, Netflix, Amazon, Disney+, etc. And they’re all going to leverage virtual production.
What’s changing is previs’ role and methodology in the overall scheme of production.
While you might have previously conceived of previs as focused on the pre-production phase of a project and less integral to production, that conception shifts with a realtime engine. Previs is also typically a hands-off collaboration. In a traditional pipeline, a previs artist receives creative notes and art direction then goes off to create animation and present it back to creatives later for feedback.

In the realtime model, because the assets are directly malleable and rendering time is not a limiting factor, creatives can be much more directly and interactively involved in the process. This leads to higher levels of agency and creative satisfaction for all involved. This also means that instead of working with just a supervisor you might be interacting with the director, editor and cinematographer to design sequences and shots earlier in the project. They’re often right in the room with you as you edit the previs sequence and watch the results together in realtime.

Previs imagery has continued to increase in visual fidelity. This means a closer relationship between previs and final-pixel image quality. When the assets you develop as a previs artist are of sufficient quality, they may form the basis of final models for visual effects. The line between pre and final will continue to blur.

The efficiency of modeling assets only once is evident to all involved. By spending the time early in the project to create models of a very high quality, post begins at the outset of a project. Instead of waiting until the final phase of post to deliver the higher-quality models, the production has those assets from the beginning. And the models can also be fed into ancillary areas such as marketing, games, toys and more.

Beecham House‘s VFX take viewers back in time

Cambridge, UK-based Vine FX was the sole visual effects vendor on Gurinder Chadha’s Beecham House, a new Sunday night drama airing on ITV in the UK. Set in the India of 1795, Beecham House is the story of John Beecham (Tom Bateman), an Englishman who resigned from military service to set up as an honorable trader of the East India Company.

The series was shot at Ealing Studios and at some locations in India, with the visual effects work focusing on the Port of Delhi, the emperor’s palace and Beecham’s house. Vine FX founder Michael Illingworth assisted during development of the series and supervised his team of artists, creating intricate set extensions, matte paintings and period assets.

To make the shots believable and true to the era, the Vine FX team consulted closely with the show’s production designer and researched the period thoroughly. All modern elements — wires, telegraph poles, cars and lamp posts — had to be removed from the shoot footage, but the biggest challenge for the team was the Port of Delhi itself, a key location in the series.

Vine FX created a digital matte painting to extend the port and added numerous 3D boats and 3D people working on the docks to create a busy working port of 1795 — a complex task achieved by the expert eye of the Vine team.

“The success of this type of VFX is in its subtlety. We had to create a Delhi of 1795 that the audience believed, and this involved a great deal of research into how this would have looked, which was essential to making it realistic,” says Illingworth. “Hopefully, we managed to do this. I'm particularly happy with the finished port sequences as originally there were just three boats.

“I worked very closely with on-set supervisor Oliver Milburn while he was on set in India, so I was very much part of the production process in terms of VFX,” he continues. “Oliver would send me reference material from the shoot; this is always fundamental to the outcome of the VFX, as it allows you to plan ahead and work out any potential upcoming challenges. I was working on the VFX in Cambridge while Oliver was on set in Delhi — perfect!”

Vine FX used Photoshop and Nuke as its main tools. The artists modeled assets with Maya and ZBrush and painted assets using Substance Painter. They rendered with Arnold.

Vine FX is currently working on War of the Worlds for Fox Networks and Canal+, due for release next year.

The Umbrella Academy‘s Emmy-nominated VFX supe Everett Burrell

By Iain Blair

If all ambitious TV shows with a ton of visual effects aspire to be cinematic, then Netflix's The Umbrella Academy has to be the gold standard. The acclaimed sci-fi, superhero, adventure mash-up was just Emmy-nominated for its season-ending episode “The White Violin,” which showcased a full range of spectacular VFX. This included everything from the fully CG Dr. Pogo to blowing up the moon and a mansion to the characters' varied superpowers. Those VFX, mainly created by movie powerhouse Weta Digital in New Zealand and Spin VFX in Toronto, indeed rival anything in cinema. This is partly thanks to Netflix's 4K pipeline.

The Umbrella Academy is based on the popular, Eisner Award-winning comics and graphic novels created and written by Gerard Way (“My Chemical Romance”), illustrated by Gabriel Bá, and published by Dark Horse Comics.

The story starts when, on the same day in 1989, 43 infants are born to unconnected women who had shown no signs of pregnancy the day before. Seven are adopted by Sir Reginald Hargreeves, a billionaire industrialist who creates The Umbrella Academy and prepares his “children” to save the world. But not everything goes according to plan. In their teenage years, the family fractures and the team disbands. Now, six of the surviving members reunite upon the news of Hargreeves' death. Luther, Diego, Allison, Klaus, Vanya and Number Five work together to solve a mystery surrounding their father's death. But the estranged family once again begins to come apart due to divergent personalities and abilities, not to mention the imminent threat of a global apocalypse.

The live-action series stars Ellen Page, Tom Hopper, Emmy Raver-Lampman, Robert Sheehan, David Castañeda, Aidan Gallagher, Cameron Britton and Mary J. Blige. It is produced by Universal Content Productions for Netflix. Steve Blackman (Fargo, Altered Carbon) is the executive producer and showrunner, with additional executive producers Jeff F. King, Bluegrass Television, and Mike Richardson and Keith Goldberg from Dark Horse Entertainment.

Everett Burrell

I spoke with senior visual effects supervisor and co-producer Everett Burrell (Pan’s Labyrinth, Altered Carbon), who has an Emmy for his work on Babylon 5, about creating the VFX and the 4K pipeline.

Congratulations on being nominated for the first season-ending episode “The White Violin,” which showcased so many impressive visual effects.
Thanks. We’re all really proud of the work.

Have you started season two?
Yes, and we’re already knee-deep in the shooting up in Canada. We shoot in Toronto, where we’re based, as well as Hamilton, which has this great period look. So we’re up there quite a bit. We’re just back here in LA for a couple of weeks working on editorial with Steve Blackman, the executive producer and showrunner. Our offices are in Encino, in a merchant bank building. I’m a co-producer as well, so I also deal a lot with editorial — more than normal.

Have you planned out all the VFX for the new season?
To a certain extent. We’re working on the scripts and have a good jump on them. We definitely plan to blow the first season out of the water in terms of what we come up with.

What are the biggest challenges of creating all the VFX on the show?
The big one is the sheer variety of VFX, which are all over the map. They go from a completely animated talking CG chimpanzee, Dr. Pogo, to a very unusual apocalyptic world, with scenes like blowing up the moon and, of course, all the superpowers. One of the hardest things we had to do — which no one will ever know just watching it — was a ton of leaf replacement on trees.

Digital leaves via Montreal’s Folks.

When we began shooting, it was winter and there were no leaves on the trees. When we got to editorial we realized that the story spans just eight days, so it wouldn’t make any sense if in one scene we had no leaves and in the next we had leaves. So we had to add every single leaf to the trees for all of the first five episodes, which was a huge amount of work. The way we did it was to go back to all the locations and re-shoot all the trees from the same angles once they were in bloom. Then we had to composite all that in. Folks in Montreal did all of it, and it was very complicated. Lola did a lot of great work on Hargreeves, getting his young look for the early 1900s and cleaning up the hair and wrinkles and making it all look totally realistic. That was very tricky too.

Netflix is ahead of the curve thanks to its 4K policy. Tell us about the pipeline.
For a start, we shoot with the ARRI Alexa 65, which is a very robust cinema camera that was used on The Revenant. With its 65mm sensor, it’s meant for big-scope, epic movies, and we decided to go with it to give our show that great cinema look. The depth of field is like film, and it can also emulate film grain for this fantastic look. That camera shoots natively at 5K — it won’t go any lower. That means we’re at a much higher resolution than any other show out there.

And you’re right, Netflix requires a 4K master as future-proofing for streaming and so on. Those very high standards then trickle down to us and all the VFX. We also use a very unique system developed by Deluxe and Efilm called Portal, which basically stores the entire show in the cloud on a server somewhere, and we can get background plates to the vendors within 10 minutes. It’s amazing. Back in the old days, you’d have to make a request and maybe within 24 or 48 hours, you’d get those plates. So this system makes it almost instantaneous, and that’s a lifesaver.

   
Method blows up the moon.

How closely do you work with Steve Blackman and the editors?
I think Steve said it best: “There's no daylight between the two of us.” We're linked at the hip pretty much all the time. He comes to my office if he has issues, and I go to his if we have complications; we resolve all of it together in probably the best creative relationship I've ever had. He relies on me and counts on me, and I trust him completely. Bottom line, if we need to write ourselves out of a sticky situation, he's also the head writer, so he'll just go off and rewrite a scene to help us out.

How many VFX do you average for each show?
We average between 150 and 200 per episode. Last season we did nearly 2,000 in total, so it’s a huge amount for a TV show, and there’s a lot of data being pushed. Luckily, I have an amazing team, including my production manager Misato Shinohara. She’s just the best and really takes care of all the databases, and manages all the shot data, reference, slates and so on. All that stuff we take on set has to go into this massive database, and just maintaining that is a huge job.

Who are the main VFX vendors?
The VFX are mainly created by Weta in New Zealand and Spin VFX in Toronto. Weta did all the Pogo stuff. Then we have Folks, Lola, Marz, Deluxe Toronto, DigitalFilm Tree in LA… and then Method Studios in Vancouver did great work on our end-of-the-world apocalyptic sequence. They blew up the moon and had a chunk of it hitting the Earth, along with all the surrounding imagery. We started R&D on that pretty early to get a jump on it. We gave them storyboards and they did previs. We used that as a cut to get iterations of it all. There were a lot of particle simulations, which was pretty intense.

Weta created Dr. Pogo.

What have been the most difficult VFX sequences to create?
Just dealing with Pogo is obviously very demanding, and we had to come up with fast shortcuts to get that photoreal look, as we just don't have the time or budget they have for the Planet of the Apes movies. The big thing is integrating him in the room as an actor with the live actors, and that was a huge challenge. We used just two witness cameras to capture our Pogo body performer. All the apocalyptic scenes were also very challenging because of the scale, and then those leaves were very hard to do and make look real. That alone took us a couple of months. And we might have the same problem this year, as we're shooting from summer through fall, and I'm praying that the leaves don't start falling before we wrap.

What have been the main advances in technology that have really helped you pull off some of the show’s VFX?
I think the rendering and the graphics cards are the big ones, and the hardware talks together much more efficiently now. Even just a few years ago, it might have taken weeks and weeks to render a Pogo. Now we can do it in a day. Weta developed new software for creating the texture and fabric of Pogo's clothes. They also refined their hair programs.

 

I assume as co-producer that you’re very involved with the DI?
I am… and I keep track of all that, making sure we keep pushing the envelope. We do the DI at Company 3 with colorist Jill Bogdanowicz, who's a partner in all of this. She brings so much to the show, and her work is a big part of why it looks so good. I love the DI. It's where all the magic happens, and I get in there early with Jill and take care of the VFX tweaks. Then Steve comes in and works on contrast and color tweaks. By the time Steve gets there, we're probably 80% of the way there already.

What can fans expect from season two?
Bigger, better visual effects. We definitely pay attention to the fans. They love the graphic novel, so we’re getting more of that into the show.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Masv now integrates with Premiere for fast file delivery

Masv, which sends large video files via the cloud, is offering a new extension for Adobe Premiere. The extension simplifies the delivery of data-heavy video projects by embedding Masv’s accelerated cloud transfer technology directly within the NLE.

The new extension is available for free at www.massive.io/premiere or the Adobe Exchange.

The new Masv Panel for Premiere reliably renders, uploads and sends large (20GB and higher) files that are typically too big for conventional cloud transfer services. Masv sends files over a high-performance global network, exploiting users’ maximum Internet bandwidth.

“Today’s video professionals are increasingly independent and distributed globally. They need to deliver huge projects faster, often from home studios or remote locations, while collaborating with teams that can change from project to project,” says Dave Horne. “This new production paradigm has broken traditional transfer methods, namely the shipping of hard drives and use of expensive on-prem transfer tools.

“By bringing MASV directly inside Premiere Pro, now even the largest Premiere project can be delivered via Cloud, streamlining the export process and tightly integrating digital project delivery within editors’ workflows.”

Key Features:
• The new Masv extension installs in a dockable panel, integrating perfectly into Premiere Pro CC 2018 and higher (macOS/Windows).
• Users can upload full projects, project sequences, files and folders from within Premiere Pro. The Masv Panel retries aggressively, sending files successfully even in poor networking conditions (see the sketch after this list).
• Users can render projects to any Adobe Media Encoder export preset and then send. Favorite export formats can be stored for quick use on future uploads.
• When exporting to Media Encoder, users can choose to automatically upload and send after rendering. Alternatively, they can opt to review their export before uploading.
• Users can monitor export and transfer progress, plus upload performance stats, in realtime.
• Users can distribute transfer notifications via email and Slack.
• Deliveries are secured by adding a password. Transfers are fully encrypted at rest and in flight and comply with GDPR standards.
• Users can set storage duration based on project requirements: a nearer delete date for maximum security or a longer one for convenience.
• Users can set download limits to protect sensitive content and manage transfer costs.
• Users can send files from Premiere and then use the Masv Web Application to review delivery status and costs and manage active deliveries easily.
• Users can send terabytes of data at very fast speeds without having to manage storage or deal with file size limits.
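
Masv hasn't published its retry logic; the snippet below is only a generic sketch of the kind of aggressive retry-with-backoff an upload tool might use to survive poor networking conditions. The upload_chunk callable and the timing constants are stand-ins.

import time

def upload_with_retry(upload_chunk, chunk, max_attempts=8, base_delay=0.5):
    # Retry a flaky chunk upload with capped exponential backoff.
    for attempt in range(1, max_attempts + 1):
        try:
            return upload_chunk(chunk)
        except ConnectionError as err:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            delay = min(base_delay * 2 ** (attempt - 1), 30.0)
            print(f"attempt {attempt} failed ({err}); retrying in {delay}s")
            time.sleep(delay)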

Masv launched a new version of the service in February, followed by a chain of significant product updates.

DP Chat: David Makes Man’s Todd A. Dos Reis, ASC

The series David Makes Man follows a 14-year-old boy attending a prestigious magnet school and his formerly drug-addicted mother, who is relying on him and his potential to get them out of the rough Miami neighborhood they live in. David is torn between the streets he grew up on and the life he’s capable of living.

Written by playwright Tarell Alvin McCraney, an Oscar winner for co-writing Moonlight, David Makes Man will premiere on Oprah Winfrey's OWN network on August 14. Along with McCraney, the show's producers include Nantale Corbett, Mike Kelley, Michael B. Jordan and Oprah Winfrey. Dee Harris-Lawrence is showrunner, along with McCraney.

The series depicts David’s two very different worlds — home and school — each of which  McCraney wanted to have different looks. He called on cinematographer Todd A. Dos Reis, ASC, to help create those two worlds. We reached out to Dos Reis to find out how he accomplished this and his workflow.

Tell us about David Makes Man. How would you describe the overarching look of the film?
Early on in pre-production, showrunner Tarell McCraney and I came up with the idea to have our young protagonist, David, live in two worlds and give each world its own distinct look.

One world was The Ville, the Miami housing project where he lived with his mother and younger brother. David's home life was unpredictable, and we wanted the viewer to be on edge just as David was on a daily basis. The Ville had low-income families and drug dealers who ran their business out of the projects. The Ville would not have the typical lushness dripping with color that everyone is used to seeing in Miami. Our Miami would have a desaturated, limited color palette leaning toward the cool blue side of the color wheel.

David’s other world was his middle school that encompassed a warmer tone, with natural lighting that you would see in the early morning and the late afternoon. David is a prodigy and excels in this world, so we wanted to make this environment more welcoming.

How did the director tell you about the look that was wanted?
In our initial meeting, Tarell McCraney, the EP, showrunner and writer, talked about Fresh (1994) and Juice (1992) being a good place to start when discussing the tone of the show. He said he wanted David Makes Man to be a 10-hour film versus 10 one-hour episodes.

We also discussed the works of artist Kerry James Marshall when looking at the blackness of a frame. In David Makes Man, we wanted to accept darkness as a point of expression versus a deficit. Director Michael Williams came in with an amazing look book that referenced images from Mother of George, Daughters of the Dust, Selma and Belly.

How early did you get involved in the production?
As soon as I got the call from producer Wayne Morris that I was their choice for DP, I made myself available for discussions with showrunners Tarell McCraney and Dee Harris-Lawrence. I had three weeks of unofficial prep in Los Angeles and three weeks of prep in Orlando.

It was shot in Orlando?
The story of David Makes Man takes place in Miami, but we filmed in Orlando. We were based at Universal Studios Orlando, where we built the interiors of The Ville housing project apartments (David’s family apartment and friend of the family Elijah’s apartment) and any swing sets that appeared in various episodes. There was one day of filming in Miami with a second unit.

How did you go about choosing the right camera and lenses?
There were many factors that I had to consider. First, how to visually create the two worlds of David. The Ville, where David lived, was going to be hand-held and subjective: wider lenses in your face, more intimate and chaotic.

His school and the world outside The Ville were going to be photographed on a stable platform, i.e., dollies, cranes and Steadicam. This world was going to have a natural, calming feel to offset his home life. I needed a camera that could be used hand-held, on a dolly and on a Steadicam, and switched back and forth quickly. I chose three ARRI Alexa Minis.

David’s two worlds were also enhanced by filming in both spherical and anamorphic. Discussions with the director of Episode 1, Michael Williams, led us to film The Ville with Cooke anamorphic lenses. Because many scenes in the story take place in David’s alternate reality, and I was going to be using the Lensbaby lenses to heighten David’s visions, the Cooke anamorphics created a great foundation to have under David’s visions. The spherical lenses, Cooke Panchro/i Classics, would be used to show the normalcy of David’s school and anything outside of The Ville.

Are there any scenes that you are particularly proud of or found most challenging?
Our most challenging scenes usually took place at The Ville. We built a two-story section of a housing project where some of the interior apartments were practical. For day exteriors in the ever-changing Florida sky and weather, we used a 40×40 quarter silk to cover the courtyard. We could have used an 80×80. Key grip Joel Wheatley and his crew managed the silk like a sail on a yacht, constantly trimming and adjusting for weather changes and shot selection.

Night exteriors at The Ville called for an array of lighting instruments. Working from the inner circle of the hallways, we built fluorescent housings to hang above the exterior hallways and hold two 4-foot cool white fluorescents with cyan 60. This would give our wide range of African American skin tones an unnatural and eerie color. The next circle of color is what lit The Ville courtyard and exterior.

Gaffer Marc Wostak bought safety lights at a local hardware store, and we gelled them with high sodium gel. We built four poles for inside the courtyard and hung the gelled safety lights on the outside corners of each housing project building. The final outside diameter of The Ville had a sprinkling of mercury vapor lighting (½ blue and ¼ plus green). To give moonlight ambience, we always used one or two helium balloons above the courtyard and parking area at The Ville. Because helium was a rare commodity on our budget, we usually hung the balloons without helium from 80-foot Condors.

Without giving any story points away, there were night interior scenes where there was no electricity and we were blocked out of any possible moonlight. Being a big fan of John Alcott, BSC, and the film Barry Lyndon, I took my impetus from there. Not having the fast T1.3 lens that Mr. Alcott used, I had the art department buy every three-wick and two-wick candle they could find in Orlando. I augmented the scenes with small china balls and LED Light Gear patches that I could tape to candles and hide behind objects in the room. In some scenes we had the luxury of a character carrying a flashlight, but that was rare.

The most challenging scene would have to be when two characters have a heated discussion with someone holding a Zippo lighter. We taped four dots of tungsten LED Light Gear to the back side of the Zippo and ran the cable down the actor’s wardrobe with my gaffer Marc Wostak walking and adjusting as the actor moved around the room. The choreography between camera operator Bob Scott, Marc Wostak and the actors was something out of a Bob Fosse film.

Can you talk about shooting anamorphic for The Ville housing project scenes?
We wanted to shoot David’s world at The Ville with anamorphic lenses because this is the place he did not want to be. One of David’s main goals in the story is to get out of this life at The Ville. I felt the anamorphic lenses would help isolate David from his surroundings and the drug dealers he didn’t want to be associated with.

The shallow depth of field that the lenses give you was a characteristic that we wanted to create visually. We wanted to show the emotions on his face that David was going through as well as heighten the tension of what was lurking around the dark corners of The Ville. The lenses also helped in giving us a more filmic quality and made all the episodes feel more like a feature film instead of 10 episodes.

How did you become interested in cinematography?
From an early age, I watched a great deal of TV and frequented the local movie theater to see any film that hit my small city of New Bedford, Massachusetts. It started with Disney films with my family, then went to Bruce Lee triple features and the Blaxploitation genre.

When I was 13, my grandparents bought me my first Canon still camera and I was fascinated. This led me to photography classes and running the TV studio at my high school. My love for the image grew, and I researched the best film schools for college. I ended up at USC Cinema. I started focusing on cinematography and learned that I could tell a story with just the visual image.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
I am inspired artistically by the journey to be original. I am constantly trying to never repeat myself, and I never want to imitate anyone else in this industry. I use other DPs and directors that I admire as inspirations.

I try to stay on top of advancing technology that serves my vision by always educating myself and surrounding myself with artists and craftsmen who are willing to take chances and are not afraid of failing.

What new technology has changed the way you work (looking back over the past few years)?
I think all the advancements in the LED lighting category have opened up amazing opportunities for filmmakers. In productions where space is always a factor, there is now always some nook or cranny from which to create beautiful, artistic or dramatic lighting.

What are some of your best practices or rules you try to follow on each job?
On every project, I use the script as my bible. What is the story? What is the auteur trying to convey? What is the emotion of each scene? My job is to visually collaborate with the director, showrunner or writer to get their vision to the screen. The rule I try to follow is that there are no rules in filmmaking. The more rules I can break, the more original I will be as an artist.

Explain your ideal collaboration with the director when setting the look of a project.
It starts with the script. I like to meet with a director as early and as often as possible. If a director is open to ideas that are not his/hers, then I know I am in a good place. Sharing ideas, watching films together and collaborating and experimenting on the set opens up my creativity.

What’s your go-to gear (camera, lens, mount/accessories) — things you can’t live without?
My most recent go-to gear is a set of Lensbaby lenses that my camera house, Otto Nemenz, created for me. I am also a big fan of Tiffen and Schneider streak filters. The lighting instrument that I can’t do without is a Source Four Leko. I would like to do a project with all Lekos, daylight and tungsten.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Audio houses Squeak E. Clean and Nylon Studios have merged

Music and sound studios Squeak E. Clean and Nylon Studios have merged to form Squeak E. Clean Studios. This union brings together a diverse roster of artists offering musical talent and exceptional audio production to agencies and brands. The company combines leadership from both former houses, with Nylon’s Hamish Macdonald serving as managing director and Nylon’s Simon Lister and Squeak E. Clean’s Sam Spiegel overseeing the company’s creative vision as co-executive creative directors. Nylon’s founding partner, David Gaddie, will become strategy partner.

The new Squeak E. Clean Studios has absorbed and operates all the existing studios of the former companies in Los Angeles, New York, Chicago, Austin, Sydney and Melbourne. Clients can now access a full range of services in every studio, including original composition, sound design and mix, music licensing, artist partnerships, experiential and spatial sound and sonic branding. Clients will also be able to license tracks from a vast, consolidated music catalog.

New York-based EP Christina Carlo is transferring to the West Coast to lead the Los Angeles studio alongside Amanda Patterson as senior producer. Deb Oh is executive producer of the New York studio, with Cindy Chao as head of sales. Squeak E. Clean Studios' Sydney studio is led by executive creative producer Karla Henwood, Ceri Davies is EP of the Melbourne studio, and Jocelyn Brown is leading the Chicago location. The company is deeply committed to strong support of the Free the Bid initiative, with three full-time female staff composers already on the roster.

“I always admired the ‘culture changing’ work that Squeak E. Clean Productions crafted — like the Adidas Hello Tomorrow spot with Karen O and Spike Jonze’s Kenzo World with Ape Drums (featuring Assassin),” says Lister. “These are truly the kind of jobs that are not just famous in advertising, but are part of our popular culture.”

“It’s exciting to be able to combine the revolutionary creativity of Squeak E. Clean with the outstanding post, creative music and exceptional client service that Nylon Studios has always offered at the highest level. We love what we do, and this collaboration is going to be an amazing opportunity for all of our artists and clients,” adds Spiegel. “As a combined force, we will make music and sound that people love.”

Main Image: (L-R) Hamish Macdonald, Simon Lister, Sam Spiegel
Image Credit: Shruti Ashok

 

Review: Dell UltraSharp 27 4K InfinityEdge monitor

By Sophia Kyriacou

The Dell UltraSharp U2718Q monitor did not disappoint. Getting started requires minimal effort. You are up and running in no time — from taking it out of the box to switching it on. The stand, the standard Dell mount, is simple to assemble and intuitive, so you barely need to look at any instructions. But if you do, there is a step-by-step guide to help you set up within minutes.

The monitor comes in a well-designed package, which ensures it gets to you safely and securely. The Dell stand is easily adjustable without fuss and remains in place to your liking, with a swivel of 45 degrees to the left or right, a 90-degree pivot clockwise and counterclockwise, and a maximum height adjustment of 130mm. This adjustability means it will certainly meet all your comfort and workflow needs, with the pivot being incredibly useful when working in portrait formats.

The InfinityEdge display not only makes the screen look attractive but, more importantly, gives you extra surface area. When working with more than one monitor, having the ultra-thin edge makes the viewing experience less of a distraction, especially when monitors are butted up together. For me, the InfinityEdge is what makes it … in addition to the image quality and resolution, of course!

The Dell UltraSharp U2718Q has a flicker-free screen, making it comfortable on the eyes. It also offers 3840×2160 resolution and boasts a color depth of 1.07 billion colors. The anti-glare coating works very well and meets all the needs of work environments with multiple and varied lighting conditions.

There are several connectors to choose from: one DP (v1.2), one mDP (v1.2), one HDMI (v2.0), one USB 3.0 port (upstream), four USB 3.0 ports (including two USB 3.0 BC 1.2) with charging capability at 2A (max), and an audio line out. You are certainly not going to be short of inputs. I found the on-screen navigation incredibly easy to use. The overall casing design is minimal and subtle, with tones of black and dark silver. With the addition of the InfinityEdge, this monitor looks attractive. There is also a matching keyboard and mouse available.

Summing Up
Personally, I like to set my main monitor at a comfortable distance, with the second monitor butted up to my left at an angle of -35 degrees. Being left-handed, this setup works for me ergonomically, keeping my browser, timeline and editing window on that side, so I’m free to focus on the larger-scale composition in front of me.

The two Dell UltraSharp U2718Q monitors I use are great, as they give me the breathing space to focus on creating without having to constantly move windows around, breaking my flow. And thanks to InfinityEdge, the overall experience feels seamless. I have both monitors set up exactly the same so the color matches and retains the same maximum quality perfectly.


Sophia Kyriacou is an award-winning conceptual creative motion designer and animator with over 22 years' experience in the broadcast design industry. She splits her time between working at the BBC in London and taking on freelance jobs. She is a full voting member of BAFTA and is currently working on a script for a 3D animated short film.

Post vet Chris Peterson joins NYC’s Chimney North

Chimney’s New York studio has hired Chris Peterson as its new EP of longform entertainment, building on the company’s longform credits, which include The Dead Don’t Die, Atomic Blonde, Chappaquiddick, The Wife and Her.

Chimney is a full-service company working in feature films, television, commercials, digital media, live events and business-to-business communications. The studio has offices in 11 cities and eight countries worldwide.

In his new role, Peterson will use his expertise in film finance, tax credit maximization and technical workflows to grow Chimney's feature film and television capabilities. He brings over 20 years of experience in production and post, including stints at Mechanism Digital and Post Factory NY.

Peterson’s resume is diverse and spans the television, film, technology, advertising, music, video game and XR industries. Projects include the Academy Award-winning feature Spotlight, the Academy Award-winning documentary OJ: Made in America, and the Grammy-nominated Roger Waters: The Wall. For E! Entertainment’s travel series Wild On, he produced shows in Argentina, Brazil, Trinidad and across the United States. He was also a post producer on Howard Stern on Demand.

“Chimney combines the best of both worlds: a boutique feel and global resources,” says Peterson. “Add to that the company’s expertise in financing and tax credits, and you have a unique resource for producers and filmmakers.”

For the past eight years, Peterson has been board secretary of the Post New York Alliance, which was co-founded by Chimney North America CEO Marcelo Gandola. The PNYA is a trade association that lobbied for and passed the first post-only tax credit, which was recently extended for two years. Peterson is also a member of SMPTE.

Behind the Title: Amazon senior post exec Frank Salinas

NAME: Frank Salinas

COMPANY: Amazon Studios

CAN YOU DESCRIBE YOUR COMPANY?
We’re Amazon.com….Look us up. Small e-commerce bookstore turned global marketplace, cloud storage services and content maker and broadcaster.

WHAT’S YOUR JOB TITLE?
Senior Post Production Executive

WHAT DOES THAT ENTAIL?
My core responsibility is to support and shepherd our series, specials and/or episodes in partnership with our production company from preproduction to delivery.

From the early stages of conceptualizing and planning our productions through color grading, mixing, QC, mastering, publishing and broadcast/launch, it's my responsibility to make sure our timelines are met and our commitments to our customers are kept.

Our customers expect the highest standards for quality. I work closely and in tandem with all the other departments to ensure that our content is ready for distribution on time, under budget and to the utmost standards. That means shooting at the highest quality, localizing (whether subtitles or dubbing) into all the languages we are distributing in, and upholding that quality throughout the process.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I’m making it a point of getting involved in the post production process before cameras are chosen or scripts are ever finalized to assure we have a clear runway and a set workflow for success.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Being on set or leading into that moment before going on set and having a plan and a strategy in motion and being able to watch it be executed. It almost never plays out as you predicted, but having the knowledge and the confidence to adjust, and being fluid in that moment, is my favorite part of the job.

WHAT’S YOUR LEAST FAVORITE?
My least favorite part of the job would have to be the extraneous meetings that go into making a series. It's part of the process, but I'm not a big fan of meetings.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
My most productive part of the day would likely be my 90-minute drive into the office. This is when I can create my to-do list for the day. Then there are the two to three hours I have in the morning before anyone arrives, which allow me to tackle the list without interruption. That, and the few times I have the opportunity to run in the morning. It's those times that allow me to clear my head and process my thoughts linearly.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
If I wasn’t a post executive, I’d likely be a real estate agent or TV/film agent. I get a lot of joy whenever I’m able to make someone happy by being able to pair them with something or someone that fits them perfectly — whatever it is that they are looking for. Finding that perfect marriage between that person and that thing they are needing or wanting brings me a lot of happiness.

WHY DID YOU CHOOSE THIS PROFESSION?
I’ve enjoyed television and the film medium for as long as I can remember. From the moment I saw my first episode of The Twilight Zone, I realized that you could really leave your audience asking “Is this real?” or “What if?” I thought there was something so powerful about that.

Lorena

CAN YOU NAME A RECENT PROJECT YOU HAVE WORKED ON?
The documentary Lorena; Last One Laughing Mexico; This Is Football, premiering in early August; Gymkhana; the Jonas Brothers film Chasing Happiness; the live Prime Day concert 2019; the series Carnival Row (launching 8/31); and the All or Nothing series, just to name a few.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I have a few, but most of them stem from my time at 25/7 Productions. Ultimate Beastmaster, The Briefcase and Strong all hold a special place in my heart, not only because I was able to work on them with people whom I consider family, but also because we created something that positively changed people’s lives and expanded their way of thinking.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
I’m going to list four since I’m a techy through and through…

My phone. It’s my safety blanket and my window to the world.

My laptop, which is just a larger window or blanket.

My car. Although it’s basic in nature and not pretentious at all, it keeps me mobile while still giving me a safe place to work. For the amount of time I spend in it, my car has really become my mobile office.

My headphones. Whether I’m running in my neighborhood or traveling on a plane, the joy I get from listening to music and podcasts is absolute. I love music.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Instagram and Facebook are the two I find myself on, and I tend to follow things that I’m passionate about: my sports teams (the Dodgers, Lakers and Kings), plus architecture and food, so I follow publications that showcase great photos of both.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
I love music… almost all of it. Classic rock, reggae, pop, hip-hop, rap, house, country, jazz, Latin, punk. Everything but Phish or the Grateful Dead. I just don’t get it.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
My love for running, cooking or eating great food, traveling and being with my family helps to remind me that it’s only TV. I constantly need to be reminded that what we are doing, while important, is also just entertainment.

UK’s Molinare adds two to its VFX team

Molinare has boosted its visual effects team with the addition of head of VFX production Kerrie Bryant and VFX supervisor Andy Tusabe.

Bryant comes to Molinare after working at DNeg TV and Technicolor, where she oversaw all projects within the studio and supervised the line producers and coordinators on those projects.

Tusabe joins Molinare with over 26 years’ experience across TV, film and commercials production and post production. He knows the Molinare VFX team well, having worked with them as a freelancer over the past two years, on titles such as Good Omens, The Crown, A Discovery of Witches, King Lear and Yardie.

So far this year, Molinare has completed VFX post on high-end dramas such as Good Omens, Strike Back: Silent War, Beecham House and the next series of The Crown, as well as Gurinder Chadha‘s new feature film Blinded by the Light, which will be released internationally in August.

HPA’s 2019 Engineering Excellence Award winners  

The Hollywood Professional Association (HPA) Awards Committee has announced the winners of the 2019 HPA Engineering Excellence Award. The winners were selected by a judging panel after a session held at IMAX on June 22. Honors will be bestowed on November 21 at the 14th annual HPA Awards gala at the Skirball Cultural Center in Los Angeles.

The HPA Awards were founded in 2005 to recognize creative artistry and innovation in the professional media content industry. A coveted honor, the Engineering Excellence Award rewards outstanding technical and creative ingenuity in media, content production, finishing, distribution and archive.

“Every year, it is an absolute pleasure and a privilege to witness the innovative work that is brewing in our industry,” says HPA Awards Engineering Committee chair Joachim Zell. “Judging by the number of entries, which was our largest ever, there is genuine excitement within our community to push our capabilities to the next level. It was a close race and shows us that the future is being plotted by the brilliance that we see in the Engineering Excellence Awards. Congratulations to the winners, and all the entrants, for impressive and inspiring work.”

Adobe After Effects

Here are the winners:
Adobe — Content-Aware Fill for Video in Adobe After Effects
Content-Aware Fill for video uses intelligent algorithms to automatically remove unwanted objects, like boom mics or distracting signs, from video. Using optical flow technology, Content-Aware Fill references frames before, next to or after an object and fills the area automatically, making it look as if the object was never there.
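Adobe hasn’t published Content-Aware Fill’s internals, but the flow-based idea described above can be illustrated. The sketch below is a stand-in, not Adobe’s algorithm: it uses OpenCV’s Farneback optical flow to warp a neighboring frame into alignment with the current one, then copies the warped pixels into the masked region. It assumes the background behind the object is visible in that neighboring frame.

```python
# Illustrative sketch only -- not Adobe's implementation.
import cv2
import numpy as np

def fill_from_neighbor(neighbor_frame, cur_frame, mask):
    """Replace masked pixels in cur_frame with motion-compensated
    pixels from a neighboring frame (mask: uint8, 255 over the object).
    Assumes the background behind the object is visible in the neighbor."""
    neighbor_gray = cv2.cvtColor(neighbor_frame, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    # Dense flow from the current frame to the neighbor
    # (args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
    flow = cv2.calcOpticalFlowFarneback(cur_gray, neighbor_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_gray.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + flow[..., 0]).astype(np.float32)
    map_y = (ys + flow[..., 1]).astype(np.float32)
    # Warp the neighbor frame into alignment with the current frame
    warped = cv2.remap(neighbor_frame, map_x, map_y, cv2.INTER_LINEAR)
    out = cur_frame.copy()
    out[mask == 255] = warped[mask == 255]  # composite into the hole
    return out
```

A production tool would do far more, for example extrapolating flow inside the hole from the surrounding pixels and blending pulls from several frames, but the warp-and-composite step above is the core of any flow-based fill.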

Epic Games — Unreal Engine 4
Unreal Engine is a flexible and scalable realtime visualization platform enabling animation, simulation, performance capture and photorealistic renders at unprecedented speeds. Filmmakers, broadcasters and beyond use Unreal Engine to scout virtual locations and sets, complete previsualization, achieve in-camera final-pixel VFX on set, deliver immersive live mixed reality broadcasts, edit CG characters and more in realtime. Unreal Engine dramatically streamlines content creation and virtual production, affording creators greater flexibility and freedom to achieve their visions.

Pixelworks — TrueCut Motion
TrueCut Motion is a cinematic video tool for finely tuning motion appearance. It draws on Pixelworks’ 20 years of experience in video processing, together with a new motion appearance model and motion dataset. Used as part of the creative process, TrueCut Motion enables filmmakers to explore a broader range of motion appearances than previously possible.

Portrait Displays and LG Electronics — CalMan LUT based Auto-Calibration Integration with LG OLED TVs
OLED televisions are commonly used in Hollywood for a variety of purposes: as client viewing monitors, as SDR BT.709 reference monitors and as QC monitors for consumer deliverables across broadcast, optical media and OTT. To be used in these professional settings, highly accurate color calibration is essential. Portrait Displays and LG Electronics partnered to bring 1D and 3D LUT-based, hardware-level CalMan AutoCal to 2018 and newer LG OLED televisions.

Honorable Mentions were awarded to Ambidio for Ambidio Looking Glass; Grass Valley for creative grading; and Netflix for Photon.

In addition to the honors for excellence in engineering, the HPA Awards will recognize excellence in 12 craft categories, including color grading, editing, sound and visual effects. The recipients of the Judges Award for Creativity and Innovation and the Lifetime Achievement Award will be announced in the coming weeks. Tickets for the 14th annual HPA Awards will be available for purchase later this summer.

DP and director talk about shooting indie Concrete Kids

Director Lije Sarki’s Concrete Kids tells the story of two nine-year-old boys from Venice, California, who set off on a mission to cross Los Angeles on skateboards at night, reaching the Staples Center by morning for a contest. Now streaming on Amazon Prime, the low-budget feature from The Orchard was shot by cinematographer Daron Keet on VariCam LTs in available light, mainly at night.

There were many challenges in shooting Concrete Kids, including working with child actors, who could only shoot for three hours per night. “Seventeen nights at three hours per night, that really only equals like four days of shooting,” explains Sarki. “The only way we could shoot and be that mobile and fast was if we used ambient light with Daron holding the camera and using the occasional sticks for wide shots. I also didn’t want to use kids that didn’t skate because I don’t like to cheat authenticity. I didn’t want to cheat the skateboarding and wanted it to feel real. I really wanted to make everything small — just a camera and someone recording sound.”

“When Lije said he didn’t want to have a crew,” says Keet, “I was a little [surprised] because I’m used to having a full crew, and I like using traditional film tools. I also don’t like making movies as documentaries. But I always like to push myself and have different challenges. One reason I didn’t hesitate in doing the film is that Lije is very organized. The more work you do up front, the easier the shoot will be.”

Keet shot the film with the VariCam LT. For Keet, the look was going to be determined by the actual environment, not influenced by his lighting. “We were working at such low light levels,” explains Keet. “We shot in alleys that, to your eye, looked black. I would just aim the VariCam down an alley and you would see something different to what your eye was seeing. It was amazing. We even had experiences where a traffic light would change from green to red and shift the illumination from a green ambiance to a red one. For me, it was an incredible challenge and a different way of working, where I’m not just looking for available light but relying on the camera to find those opportunities.

DP Daron Keet (in wheelchair) on set

“It’s really nice to have a tool where you can tell the story you want with the tools you have,” he continues. “A lot of people don’t like night shooting, but I actually love it because as a DP you have more control at night because everything is turned off and you can place lights where you want. Concrete Kids was a much different challenge because I was shooting at night, but I didn’t have much control. I had to be able to see things in a different way.”

With the VariCam LT, Keet captured 10-bit 4:2:2 4K (4096×2160) AVC-Intra files in V-Log while monitoring his footage with the Panasonic V-709 LUT. Since over 90% of the movie was shot at night in available light, Keet captured at the camera’s native 5000 ISO. For certain shots he even used the LT’s internal ND filters on night sequences where he wanted shallower depth of field.

Although he mainly stuck to available streetlights, Keet occasionally used a magnetized one-foot tube light that he kept in his back pocket. “There was one scene that was very well illuminated in the background, but the foreground was dark, so I wanted to balance it out,” explains Keet. “I stuck the light onto a stop sign and it balanced the light perfectly. With digital I’m pretty good at knowing what’s going to overexpose. It’s more about camera angles and always trying to have things backlit, because you’re always getting enough light to bounce around.”

For lenses, Keet employed a vintage set of Super Baltar primes, which the production received from Steve Gelb at LensWorksRentals.com. Keet loved how the lenses spread the light throughout the frame. “A lot of the new lenses will hold the flares,” explains Keet. “The Super Baltars spread the flares and make the image look really creamy, and they give you more of a rounded bokeh. Anamorphic, for example, would give you an oblong shape. With the Baltars, the iris is smooth and rounded. If you see an out-of-focus street lamp, the outer edge on newer glass might be sharp, but with older glass it will be much softer. A creamy look is also very forgiving on faces.”

Keet shot wide open most of the time, relying on his skills from working as a focus puller years prior. Sarki also had a wireless director’s monitor so he could check focus for Keet as well.

The film was edited by Pete Lazarus using Adobe Premiere Pro. Studio Unknown in Baltimore did the sound mix remotely, and Asa Fox at The Mill in LA color graded the film pro bono. Fox gave Sarki a few different options for the look, and Keet and Sarki would make adjustments so the film would feel consistent. Because they didn’t have a lot of time for the color grade, Keet relied on a trick he learned to keep things moving: he and Sarki would find their favorite moment that encapsulated a scene and work on that color before moving on to the next scene. “When you do that, you can work pretty quickly and then just walk away, leaving the colorist to do his job, since we didn’t want to waste any time.”

“So many people helped make this project work because of their contributions without financial benefit,” says Sarki. “I’m super happy with the end result.”

Main Image: Director Lije Sarki

KRK intros audio tools app to help Rokit G4 monitor setup

KRK Systems has introduced the KRK Audio Tools App for iOS and Android. This free suite of professional studio tools includes five professional analysis-based components that work with any monitor setup, and one tool (EQ Recommendation) that helps acclimate the new KRK Rokit G4 monitors to their individual acoustic environment.

In addition to the EQ Recommendation tool, the app includes a Spectrum Real Time Analyzer (RTA), a Level Meter, Delay and Polarity Analyzers, and a Monitor Align tool that helps users position their monitors more accurately relative to their listening area. Within the app is a sound generator offering analysis signals of sine, continuous sine sweep, white noise and pink noise, all of which can help the analysis process in different conditions.

“We wanted to build something game-changing for the new Rokit G4 line that enables our users to achieve better final mixes overall,” explains Rich Renken, product manager for the pro audio division of Gibson Brands, which owns KRK. “In terms of critical listening, the G4 monitors are completely different and a major upgrade from the previous G3 line. Our intentions with the EQ Recommendation tool are to suggest a flatter condition and help get the user to a better starting point. Ultimately, it still comes down to preference and using your musical ear, but it’s certainly great to have this feature available along with the others in the app.”

Five of the app’s tools work with any monitor setup. These include the Level Meter, which assists with monitor level calibration to ensure all monitors are at the same dB level, and the Delay Analysis feature, which helps calculate the travel time from each monitor to the user’s ears. The Polarity function is used to verify the correct wiring of monitors, minimizing the bass loss and incorrect stereo imaging that result from monitors being out of phase, while the Spectrum RTA and Sound Generator are made for finding nuances in any environment.

Also included is a Monitor Alignment feature, which is used to determine the best placement of multiple monitors in close proximity. This is accomplished by placing a smart device on each monitor separately and then rotating it to the correct angle. A sixth tool, exclusive to Rokit G4 users, is the EQ Recommendation tool, which helps acclimate monitors to an environment by analyzing app-generated pink noise and then suggesting the best EQ preset, which is set manually on the back of the G4 monitors.
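For readers curious about the arithmetic behind a delay-analysis tool like this, here is a minimal illustrative sketch; it is not KRK’s code, just the time-of-flight math. Sound travels roughly 343 m/s at room temperature, so each monitor’s arrival time is its distance divided by 343, and aligning monitors means delaying the closer ones by the difference.

```python
# Illustrative only -- not KRK's implementation.
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound at 20 degrees C

def arrival_delay_ms(distance_m: float) -> float:
    """Time (ms) for sound to travel distance_m to the listener."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

def alignment_delays_ms(distances_m: list[float]) -> list[float]:
    """Delay (ms) to add to each monitor so every arrival coincides
    with the farthest (latest-arriving) monitor."""
    latest = max(arrival_delay_ms(d) for d in distances_m)
    return [latest - arrival_delay_ms(d) for d in distances_m]

# Example: left monitor at 1.2 m, right at 1.5 m.
# The closer (left) monitor gets ~0.87 ms of delay; the right gets none.
print(alignment_delays_ms([1.2, 1.5]))  # [~0.87, 0.0]
```

In other words, a 0.3 m difference in monitor distance works out to a bit under a millisecond of misalignment, which is exactly the kind of offset such a tool measures and corrects.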

Meet the Artist: The Mill’s Anne Trotman

Anne Trotman is a senior Flame artist and VFX supervisor at The Mill in New York. She specializes in beauty and fashion work but gets to work on a variety of other projects as well.

A graduate of King’s College London, Trotman took on what she calls “a lot of very random temp jobs” before finally joining London’s Blue Post Production as a runner.

“In those days a runner did a lot of ‘actual’ running around SoHo, dropping off tapes and picking up lunches,” she says, admitting she was also sent out for extra green for color bars and warm sake at midnight. After being promoted to the machine room, she spent her time assisting all the areas of the company, including telecine grading, offline, online, VFX and audio. “This gave me a strong understanding of the post production process as a whole.”

Trotman then joined with the 2D VFX teams from Blue, Clear Post Production, The Hive and VTR to create a single team at Prime Focus London. There she moved into film compositing, heading up the 2D team as a senior Flame operator and overseeing projects, including shot allocation and VFX reviews. She then joined SFG-Technicolor’s commercials facility in Shanghai. After a year in China, she joined The Mill in New York, where she is today.

We reached out to Trotman to find out more about The Mill, a technology and visual effects studio, how she works and some recent projects. Enjoy.

Bumble

Can you talk about some recent high-profile projects you’ve completed?
The most recent high-profile project I’ve worked on was Bumble’s Super Bowl 2019 spot, the company’s first commercial ever. Because Bumble is a female-founded company, it was important for this project to celebrate female artists and empowerment, something I strongly support. I was thrilled to lead an all-female team for this project. The agency creatives and producers were all female, and so was almost the whole post team, including the editor, colorist and all the VFX artists.

How did you first learn Flame, and how has your use of it evolved over the years?
I had been assisting artists working on a Quantel Editbox at Blue. They then installed a Flame and hired a female artist who had worked on Gladiator. That’s when I knew I had found my calling. Working with technical equipment was very attractive to me, and in those days it was a dark art, and you had to work in a company to get your hands on one. I worked nights doing a lot of conforming and rotoscoping. I also started doing small jobs for clients I knew well. I remember assisting on an Adele pop video, which is where my love of beauty started.

When I first started using Flame, a whole job was usually completed by one artist. These days, jobs are much bigger, and with so many versions for social media, much of my day can be spent coordinating the team of artists. Workshare and remote artists are becoming a big part of our industry, so communicating with artists all over the world to bring everything together into the final film has become a big part of my job.

In addition to Flame, what other tools are used in your workflow?
Post production has changed so much in the past five years. My job is no longer just to press buttons on a Flame to get a commercial on television; that’s only a small part of it. My job is to help the director and/or the agency position a brand and connect it with the consumer.

My workflow usually starts with bidding on an agency’s or a director’s brief. Sometimes they need tests to sell an idea to a client, so I might supervise a previz artist on Maxon Cinema 4D to help them achieve the director’s vision. I attend most of the shoots, which gives me insight into the project while I assess the client’s goals and vision. I can take Flame on a laptop to my shoots to do tests for the director to help explain how certain shots will look after post. This process is helpful all around: I can see whether what we are shooting is correct, and the client can understand the director’s vision.

At The Mill, I work closely with the colorists who work on FilmLight Baselight before completing the work on Flame. All the artists at The Mill use Flame and Foundry Nuke, although my Flame skills are 100% better than my Nuke skills.

What are the most fulfilling aspects of the work you do?
I’m lucky to work with many directors and agency creatives that I now call friends. It still gives me a thrill when I’m able to interpret the vision of the creative or director to create the best work possible and convey the message of the brand.

I also love working with the next generation of artists. I especially love being able to work alongside the young female talent at The Mill. This is the first company I’ve worked at where I’ve not been “the one and only female Flame artist.”

At The Mill NY, we currently have 11 full-time female 2D artists on our team, which has a 30/70 female-to-male ratio. There’s still a way to go to get to 50/50, so if I can inspire another female intern or runner who is thinking of becoming a VFX artist or colorist, then it’s a good day. Helping the cycle continue for female artists is so important to me.

What is the greatest challenge you’ve faced in your career?
Moving to Shanghai. Not only did I have the language barrier to overcome but also the culture, from having lunch at noon to working with clients from a completely different background than mine. I had to learn all I could about Chinese culture to help me connect with my clients.

Covergirl with Issa Rae

Out of all of the projects you’ve worked on, which one are you the most proud of?
There are many, but one that stands out is the Covergirl brand relaunch (2018) for director Matt Lambert at Prettybird. As an artist working on high-profile beauty brands, what they stand for is very important to me. I know every young girl will want to use makeup to make themselves feel great, but it’s so important to make sure young women are using it for the right reason. The new tagline “I am what I make-up” — together with a very diverse group of female ambassadors — was such a positive message to put out into the world.

There was also 28 Weeks Later, a feature film from director Juan Carlos Fresnadillo. My first time working on a feature was an amazing experience. I got to make lifelong friends working on this project. My technical abilities as an artist grew so much that year, from learning the patience needed to work on the same shot for two months to discovering the technical difficulties in compositing fire to be able to blow up parts of London. Such fun!

Finally, there was also a spot for the Target Summer 2019 campaign. It was directed by Whitelabel’s Lacey, with whom I collaborate on a lot of projects. Tristan Sheridan was the DP, and the agency was Mother NY.

Target Summer Campaign

What advice do you have for a young professional trying to break into the industry?
Try everything. Don’t get pigeonholed into one area of the industry too early on. Learn about every part of the post process; it will be so helpful to you as you progress through your career.

I was lucky my first boss in the industry (Dave Cadle) was patient and gave me time to find out what I wanted to focus on. I try to be a positive mentor to the young runners and interns at The Mill, especially the young women. I was so lucky to have had female role models throughout my career, from the person that employed me to the first person that started training me on Flame. I know how important it is to see someone like you in a role you are thinking of pursuing.

Outside of work, how do you enjoy spending your free time?
I travel as much as I can. I love learning about new cultures; it keeps me grounded. I live in New York City, which is a bubble, and if you stay here too long, you start to forget what the real world looks like. I also try to give back when I can. I’ve been helping a director friend of mine with some films focusing on the issue of female homelessness around the world. We collaborated on some lovely films about women in LA and are currently working on some London-based ones.

You can find out more here.

Anne Trotman Image: Photo by Olivia Burke

Point.360 adds senior colorist Patrick Woodard

Senior colorist Patrick Woodard has joined the creative team at Point.360 in Burbank. He was most recently at Hollywood’s DigitalFilm Tree, where he colored dozens of television shows, including ABC’s American Housewife, CBS’ NCIS: Los Angeles, NBC’s Great News and TBS’ Angie Tribeca. Over the years, he also worked on Weeds, Everybody Hates Chris, Cougar Town and Sarah Silverman: We Are Miracles.

Woodard joins Point.360 senior colorist Charlie Tucker, whose recent credits include the final season of Netflix’s Orange Is the New Black, The CW’s Legacies and Roswell, New Mexico, YouTube’s Cobra Kai and the Netflix comedy Medical Police.

“Patrick is an exceptional artist with an extensive background in photography,” says Point.360’s SVP of episodic Jason Kavner. “His ability to combine his vast technical expertise with his creative vision to quickly create a highly developed aesthetic has won the loyalty of many DPs and creatives alike.”

Point.360 has four color suites at its Burbank facility. “Although we have the feel of a boutique episodic facility, we are able to offer a robust end-to-end pipeline thanks to our long history as a premier mastering company,” reports Kavner. “We are currently servicing 4K Dolby Vision projects for Netflix, such as the upcoming Jenji Kohan series currently being called Untitled Vigilante Project, as well as the UHD SDR Sony-produced YouTube series Cobra Kai. We also continue to offer the same end-to-end service to our traditional studio and network clients on series such as Legacies for The CW; Fresh Off the Boat, Family Guy and American Dad for 20th Century Fox; and Drunk History and Robbie for Comedy Central.”

Woodard, who will be working on Resolve at Point.360, was also a recent subject of our Behind the Title series. You can read that here.