SGO Mistika Boutique at IBC with Dolby Vision, color workflows

At IBC, SGO will be showing enhancements and upgrades of its subscription-based finishing solution, Mistika Boutique. The company will demo color management solutions as well as HDR content delivery workflows with recently integrated Dolby Vision support.

The professional color grading toolset, combined with Mistika Boutique’s finishing functionality, will be showcased running on a Mac Pro workstation with Tangent Arc control panels and output to a Canon 4K HDR reference display through Blackmagic Design DeckLink I/O.

Mistika Boutique is hardware-agnostic and runs on both Windows and macOS.

At its stand, SGO is offering a variety of sessions highlighting trending topics in the content creation industry, featuring Mistika Boutique as well as Mistika Workflows and Mistika VR.

While at the show, SGO is offering a special IBC promotion for Mistika Boutique. Anyone who subscribes by September 30, 2019, will get the Professional Immersive Edition for €99/month or €990/year (or whatever your bank’s conversion rate is), a savings of more than 65% off the normal price. The special IBC promotional price will be maintained as long as the subscription remains active and is not canceled.

DP Chat: Peaky Blinders’ Si Bell ramps up the realism for Season 5

By Randi Altman

UK-based cinematographer Si Bell is known for his work on the critically acclaimed feature films Electricity (2015), In Darkness (2019) and Tiger Raid (2016), as well as high-profile TV shows such as Fortitude, Hard Sun, Britannia and Ripper Street. He is currently working on the new Steven Knight drama special, A Christmas Carol.

Si Bell

He also shot the new season of Peaky Blinders, which begins airing on BBC One on August 25 and then makes its way to Netflix on October 4. Peaky Blinders takes place in Birmingham, England, not long after World War I, and follows the Shelby family and its mafia-like business. The show is often dark, brutally violent and completely compelling. It stars Cillian Murphy as Thomas Shelby.

We recently reached out to Bell to ask him about his work on this current season of the edgy crime drama, followed by a look at his career in cinematography.

Tell us about Peaky Blinders Season 5. How early did you get involved in planning for the season? What direction did the showrunners give you about the look they wanted this season?
I got involved pretty early on and ended up having over 10 weeks of prep, which is a long time for a TV show. I worked closely with Anthony Byrne, our director, whom I know very well. As the scripts came in, we began to discuss and plan how we were going to tackle the story.

I met with the showrunners early on as well, and they really loved the work Anthony and I had done in the past together on the movie In Darkness and on Ripper Street. Anthony is a very visual director and they trusted us both, so that was really amazing. They wanted us to do Peaky but also to bring our own style and way of working to the table. We were massive fans of the show and had big respect for what the previous directors and cinematographers had done. We knew we had big shoes to fill!

How would you describe the look?
I would describe the Peaky Blinders look as very stylized and larger than life. Lighting-wise, it’s known for beams of light, smoke and atmosphere, and an almost theatrical look with overcranked camera moves and speed ramps. I wanted to push some realism into the show and not make things quite as theatrical this season, yet still keep that Peaky vibe. Tommy (Cillian Murphy) is battling with himself and his own demons more than anyone else in our story.

I wanted to try and show this with the lighting and the camera style. We also tried to use more developing shots in certain scenes to put the audience right in the center of the action and create this sense of visceral realism. We tried to motivate every decision based on how to tell the story in the best and most powerful way to bring out the emotional aspects and really connect with the audience.

How did you work with the directors and colorist to achieve the intended look?
I used my DIT James Shovlar to create a look on set for the offline edit and we used that as a starting point for the grade. Then Anthony and I worked with grader Paul Staples at Deluxe in London, whom we had worked with on Ripper Street, and from the reference grade Paul created the finished look. Paul really understood where we wanted to take it, and I’m really pleased with how it turned out. We didn’t want it to feel too pushed but we still wanted it to look like Peaky Blinders.

Where was it shot, and how long was the shoot?
We shot around the northwest of England. We were based mainly in Manchester where we built a number of sets, including the Garrison, Houses of Parliament and Shelby HQ. We also shot in Birmingham, Liverpool, Rochdale and Bradford. We shot 16 five-day weeks in total.

How did you go about choosing the right camera and lenses for this project?
We had to shoot 4K, so the standard ARRI Alexa was off the table. A friend of mine, Sam McCurdy, BSC, had mentioned he had been shooting on the new Red Monstro and said he was really blown away by the images. I tested it and thought it was perfect for us. We coupled that with Cooke Anamorphic lenses and delivered in a 2:1 ratio.

Can you describe the lighting?
The lighting is a big part of Peaky Blinders, and it had to be right. My gaffer Oliver Whickman and I used our prep time to draw up detailed lighting plans, which included all of our machine and rigging requirements. We had 91 different lighting diagrams, and because we were scouting and planning the whole six episodes, it was very important that everything be written down in a clear, accurate way that could be passed on to our rigging crews.

We were scouting in September 2018, but some of the locations we weren’t shooting until January 2019 and we weren’t going to come back to them because we were so busy shooting. Oliver used the Shot Designer app to make the plans and we made printed books for the rigging gaffer and our best boy Alan Millar. It was certainly the most technically difficult job I have ever done in terms of planning, but everything went very smoothly.

Are there any scenes that you are particularly proud of or found most challenging?
There were many challenging scenes and sets. I’m really pleased how the opening sequence in Chinatown turned out. Also, there’s a big sequence set around a ballet, and I loved how that came together. I thought the design was great, with all the practicals that our designer Nicole Northridge installed in the set. There’s so much in this series, it’s hard to mention one thing.

I’m very proud of all our team. Everyone worked so hard and put so much into it, and I really think it shows. My camera operator Andrew Fletcher, focus puller Tom Finch and key grip Paul Kemp brought exceptional talent to the project. Not only are they great friends, they are the best of the best at what they do, and I’m very proud of everything they did on Peaky.

Now let’s dig into some general DP questions. How did you become interested in cinematography?
I used to make skate videos, and then I studied photography in college and started to get interested in the idea of making films. I studied film production at university, and then started to work as a camera trainee once I left. At first I thought I wanted to be a director and made some short films, but after training under some great DPs — Sam McCurdy, BSC, and Lol Crawley, BSC — I realized that’s what I wanted to do, so I started shooting as much as I could and went from there.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology?
I am inspired by watching movies or TV with great stories. I’m also inspired by working with talented people, great directors, great producers and people with a great passion for what they do. Peaky Blinders was massively inspiring as we got to work with some of the greatest actors of our age who are at the top of their game. Working at that level, you need to up your game and that also was massively inspiring.

I always stay on top of new technology by going to trade shows and reading trade magazines.

What new technology has changed the way you work?
I think the camera getting smaller has been the biggest change, as we can use drones, Trinity rigs and other gimbals to move the camera in ways we could never even have dreamed of five years ago.

What are some of your best practices you try to follow on each job?
I always try to bring all my own crew if I can. We have a tight team, and it’s so much easier if I can bring all of my guys onto a job, as we all have a shorthand with each other. Additionally, I always do detailed lighting diagrams with my gaffer and put lots of prep time into planning the lighting so we can move quickly and adapt on the day. I also try to build a good relationship with the director as much as I can before shooting.

Explain your ideal collaboration with the director or showrunner when starting a new project.
For me it’s ideal when you work with someone who wants to hear your ideas and bounces off you creatively. It should be a collaboration, and you should be able to talk openly about ideas and feel like you’re valued. That connection is very important — sometimes you click, and sometimes you don’t — it’s about chemistry.

What’s your go-to gear? Things you can’t live without?
Things change depending on the show, but I love a Technocrane and a good remote head. If the show has the budget, they are such brilliant tools to move a camera and find the shot quickly.

On Peaky Blinders we used the ARRI Trinity camera stabilizer quite a lot, which is especially great if you have operator Andrew Fletcher, who is a master!


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Review: Dell’s Precision T5820 workstation

By Brady Betzel

Multimedia creators are looking for faster, more robust computer systems and seeing an increase in computing power among all brands and products. Whether it’s an iMac Pro with a built-in 5K screen or a Windows-based, Nvidia-powered PC workstation, there are many options to consider. Many of today’s content creation apps are operating-system-agnostic, but that’s not necessarily true of hardware — mainly GPUs. So for those looking at purchasing a new system, I am going to run through one of Dell’s Windows-based offerings: the Dell Precision T5820 workstation.

The most important distinction between a “standard” computer system and a workstation is the enterprise-level quality and durability of internal parts. While you might build or order a custom-built system for less money, you will most likely not get the same back-end assurances that “workstations” bring to the party. Workstations aren’t always the fastest, but they are built with zero downtime and hardware/software functionality in mind. So while non-workstations might use high-quality components, like an Nvidia RTX 2080 Ti (a phenomenal graphics card), they aren’t necessarily meant to run 24 hours a day, 365 days a year. On the other hand, the Nvidia Quadro series GPUs are enterprise-level graphics cards that are meant to run constantly with low failure rates. This is just one example, but I think you get the point: Workstations run constantly and are warrantied against breakdowns — typically.

Dell Precision T5820
Dell has a long track record of building everyday computer systems that work. Even more impressive are its next-level workstation computers that not only stand up to constant use and abuse but are also certified with independent software vendors (ISVs). ISV certification is a designation indicating that Dell has not only tested but supports the end-user’s primary software choices. For instance, in the nonlinear editing software space, I found out that Dell had tested the Precision T5820 workstation with Adobe Premiere Pro 13.x in Windows 10 and has certified the AMD Radeon Pro WX 2100 and 3100 GPUs with 18.Q3.1 drivers.

You can see for yourself here. Dell also has driver suggestions from some recent versions of Avid Media Composer, as well as other software packages. That being said, Dell not only tests but will support hardware configurations in the approved software apps.

Beyond the ISV certifications and the included three-year hardware warranty with on-site/in-home service after remote diagnostics, how does the Dell Precision T5820 perform? Well, it’s fast and well-built.

The specs are as follows:
– Intel Xeon W-2155 3.3GHz, 4.5GHz Turbo, 10-core, 13.75MB cache with hyperthreading
– Windows 10 Pro for Workstations (four-plus cores; this is an additional cost)
– Precision 5820 Tower with 950W chassis
– Nvidia Quadro P4000, 8GB, four DisplayPorts (5820T)
– 64GB (8x8GB) 2666MHz DDR4 RDIMM ECC memory
– Intel vPro technology enabled
– Dell Ultra-Speed Drive Duo PCIe SSD x8 Card, 1 M.2 512GB PCIe NVMe class 50 Solid State Drive (boot drive)
– 3.5-inch 2TB 7200rpm SATA hard drive (secondary drive)
– Wireless keyboard and mouse
– 1Gb network interface card
– USB 3.1 Gen 2 PCIe card (two Type C ports, one DisplayPort)
– Three-year hardware warranty with onsite/in-home service after remote diagnosis

All of this costs around $5,200 without tax or shipping and not including any sale prices.

The Dell Precision T5820 is the mid-level workstation offering from Dell that finds the balance between affordability, performance and reliability — kind of the "better, cheaper, faster" concept. It is one of the quietest Dell workstations I have tested. Besides the spinning hard drive that was included on the model I was sent, there aren’t many loud cards or fans that distract me when I turn on the system. Dell is touting the new multichannel thermal design for advanced cooling and acoustics.

The actual 5820 case is about the size of a mid-sized tower system but feels much slimmer. I even cracked open the case to tinker around with the internal components. The inside fans and multichannel cooling are sturdy and even a little hard to remove without some force — not necessarily a bad thing. You can tell that Dell made it so that when something fails, it is a relatively simple replacement. The insides are very modular. The front of the 5820 has an optical drive, some USB ports (including two USB-C ports) and an audio port. If you get fancy, you can order the systems with what Dell calls “Flex Bays” in the front. You can potentially add up to six 2.5-inch or five 3.5-inch drives and front-accessible storage of up to four M.2 or U.2 PCIe NVMe SSDs. The best part about the front Flex Bays is that, if you choose to use M.2 or U.2 media, they are hot-swappable. This is great for editing projects that you want to archive to an M.2 or save to your Blackmagic DaVinci Resolve cache and remove later.

In the back of the workstation, you get audio in/out, one serial port, PS/2, Ethernet and six USB 3.1 Gen 1 Type A ports. This particular system was outfitted with an optional USB 3.1 Gen 2 10Gb/s Type C card with one DisplayPort passthrough. This is used for the Dell UltraSharp 32-inch 4K (UHD) USB-C monitor that I received along with the T5820.

The large Dell UltraSharp 32-inch monitor (U3219Q) offers a slim footprint and a USB-C connection that is very intriguing, but they aren’t giving them away. They cost $879.99 if ordered through Dell.com. With the ultra-minimal InfinityEdge bezel, 400 nits of brightness for HDR content, up to UHD (3840×2160) resolution, a 60Hz refresh rate and multiple input/output connections, you can see all of your work in one large IPS panel. For those of you who want to run two computers off one monitor, this Dell UltraSharp has a built-in KVM switch function. Anyone with a MacBook Pro featuring USB-C/Thunderbolt 3 ports can in theory use one USB-C cable to connect and charge. I say "in theory" only because I don’t have a new MacBook Pro to test it on. But for PCs, you can still use the USB-C as a hub.

The monitor comes equipped with DisplayPort 1.4, HDMI, four USB 3.0 Type A ports and a USB-C port. Because I use my workstation mainly for video and photo editing, I am always concerned with proper calibration. The U3219Q is purported by Dell to be 99% sRGB-, 95% DCI-P3- and 99% Rec. 709-accurate, so if you are using Resolve and outputting through a DeckLink, you will be able to get some decent accuracy and even use it for HDR. Over the years, I have really fallen in love with Dell monitors. They don’t break the bank, and they deliver crisp and accurate images, so there is a lot to love. Check out more of this monitor here.

Performance
Working in media creation, I jump between a bunch of apps and plugins, from Media Composer to Blackmagic’s DaVinci Resolve and from Adobe After Effects to Maxon’s Cinema 4D. So I need a system that can handle not only CPU-focused apps like After Effects but also GPU-weighted apps like Resolve. With the Intel Xeon and Nvidia Quadro components, this system should work just fine. I ran some tests in Premiere Pro, After Effects and Resolve. In fact, I used Puget Systems’ benchmarking tool with Premiere and After Effects projects. You can find one for Premiere here. In addition, I used the classic 3D benchmark Cinebench R20 from Maxon and even did some of my own benchmarks.

In Premiere, I was able to play 4K H.264 (50MB and 100MB 10-bit) and ProRes files (HQ and 4444) in realtime at full resolution. Red Raw 4K was able to play back at full-quality debayer. But as the Puget Systems Premiere benchmark shows, 8K (as well as heavily effected clips) started to bog the system down. With 4K, the addition of Lumetri color correction slowed down playback and export a little bit — just a few frames under realtime. It was close though. At half quality, I was essentially playing in realtime. According to the Puget Systems benchmark, the overall CPU score was much higher than the GPU score. Adobe uses a lot of single-core processing. While certain effects, like resizes and blurs, will open up the GPU pipes, I saw the CPU (single-core) kicking in here.

In the Premiere Pro tests, the T5820 really shined when working with mezzanine-codec media like ProRes (HQ and 4444) and even with Red 4K raw media. The T5820 seemed to slow down when multiple layers of effects, such as color correction and blurs, were stacked on top of each other.

In After Effects, I again used Puget Systems’ benchmark — this time the After Effects-specific version. Overall, the After Effects scoring was a B or B-, which isn’t terrible considering it was up against the prosumer powerhouse Nvidia RTX 2080. (Puget Systems uses the 2080 as the 100% reference score.) The tracking score on the Dell T5820 was around 90%, while the render and preview scores were around 80%. While this is just what it says — a benchmark — it’s a great way to compare machines against the benchmark-standard configuration of an Intel i9, an RTX 2080 GPU and 64GB of memory.

In Resolve 16 Beta 7, I ran multiple tests on the same 4K (UHD), 29.97fps Red Raw media that Puget Systems used in its benchmarks. I created four 10-minute sequences:
Sequence 1: no effects or LUTs
Sequence 2: three layers of Resolve OpenFX Gaussian blurs on adjustment layers in the Edit tab
Sequence 3: five serial nodes of Blur Radius (at 1.0) created in the Color tab
Sequence 4: in the Color tab, spatial noise reduction was set at 25 radius to medium, blur set to 1.0 and sharpening in the Blur tab set to zero (it starts at 0.5).

Sequence 1, without any effects, would play at full debayer quality in realtime and export at a few frames above realtime, averaging about 33fps. Sequence 2, with Resolve’s OpenFX Gaussian blur applied three times to the entire frame via adjustment layers in the Edit tab, would play back in realtime and export at between 21.5fps and 22.5fps. Sequence 3, with five serial nodes of blur radius set at 1.0 in the Color tab, would play in realtime and export at about 23fps. Once I added a sixth serial blur node, the system would no longer lock onto realtime playback. Sequence 4 — with spatial noise reduction set at 25 radius to medium, blur set to 1.0 and sharpening in the Blur tab set to zero in the Color tab — would play back at 1fps to 2fps and export at 6.5fps.
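
For context, here is a minimal sketch (my own Python, with the export figures rounded from the tests above) converting each rate into a multiple of the timeline’s 29.97fps realtime rate:

```python
REALTIME_FPS = 29.97  # frame rate of the UHD Red test timelines

# Export rates from the tests above, rounded (Sequence 2's range is
# taken as ~22fps for illustration).
exports = {
    "Seq 1 (no effects)": 33.0,
    "Seq 2 (3x Gaussian blur)": 22.0,
    "Seq 3 (5 serial blur nodes)": 23.0,
    "Seq 4 (spatial noise reduction)": 6.5,
}

for name, fps in exports.items():
    print(f"{name}: {fps / REALTIME_FPS:.2f}x realtime")
```

Anything at or above 1.0x exports faster than the footage plays; the noise-reduction pass drags the export down to roughly 0.22x realtime.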

All of these exports were QuickTime-based H.264s exported using the Nvidia encoder (the native encoder would slow things down by 10 frames or so). The settings were UHD resolution; "automatic — best" quality; disabled frame reordering; force sizing to highest quality; force debayer to highest quality; and no audio. Once I stacked two layers of raw Red 4K media, I started to drop below realtime playback, even without color correction or effects. I even tried to play back some 8K media: about 14fps at full-res premium debayer, 14 to 16fps at half-res premium, 25fps at half-res good, and 29.97fps (realtime) at quarter-res good.

Using the recently upgraded Maxon Cinebench R20 benchmark, I found the workstation performing adequately, around the fourth-place spot. Keep in mind, there are thousands of possible result combinations depending on CPU, GPU, memory and more; these are only sample results that 3D artists can compare against their own. The Cinebench R20 results were CPU: 4682, CPU (single-core): 436, and MP ratio: 10.73x. If you Google or check out some threads of Cinebench R20 result comparisons, you will eventually find results to measure mine against. My results are a B to B+. A much higher-end Intel Xeon or i9 or an AMD Threadripper processor would really punch this system up a weight class.
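
For anyone unfamiliar with Cinebench’s MP (multiprocessor) ratio, it is simply the multi-core score divided by the single-core score. A quick check of the figures above:

```python
cpu_multi = 4682   # Cinebench R20 multi-core score
cpu_single = 436   # Cinebench R20 single-core score

print(f"MP ratio: {cpu_multi / cpu_single:.2f}x")  # ~10.74x vs. the reported 10.73x
```

A ratio just above the W-2155’s 10 physical cores suggests hyperthreading gains roughly offsetting the lower all-core clocks.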

Summing Up
The Dell Precision T5820 workstation comes with a lot of enterprise-level benefits that simply don’t come with your average consumer system. The components are meant to be run constantly, and Dell has tested its systems against current industry applications using the hardware in these systems to identify the best optimizations and driver packages with these ISVs. Should anything fail, Dell’s three-year warranty (which can be upgraded) will get you up and running fast. Before taxes and shipping, the Dell T5820 I was sent for review would retail for just under $5,200 (maybe even a little more with the DVD drive, recovery USB drive, keyboard and mouse). This is definitely not the system to look at if you are a DIYer or an everyday user who does not need to be running 24 hours a day, seven days a week.

But in a corporate environment, where time is money and no one wants to be searching for answers, the Dell T5820 workstation with accompanying three-year ProSupport with next-day on-site service will be worth the $5,200. Furthermore, it’s invaluable that optimization with applications such as the Adobe Creative Suite is built-in, and Dell’s ProSupport team has direct experience working in those professional apps.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Nigel Bennett upped to managing director at UK’s Molinare

Molinare has promoted Nigel Bennett to the role of managing director. He joined the studio earlier this year from Pinewood Studios, where over a 20-year period he worked his way up from re-recording mixer to group director of creative services, a position he had held since 2014.

Bennett’s responsibilities include growing revenue across feature film, TV drama, feature documentaries and reality TV. Over the coming months he will work with the existing senior team at Molinare to implement a new business growth and investment plan with the full support of Molinare’s shareholders, Saphir Capital and Next Wave Partners.

Bennett replaces Julie Parmenter, who has left the company after seven years. While at Molinare, Parmenter was integral to maintaining the successful Molinare brand, the subsequent acquisition of Hackenbacker and the company’s expansion into Hoxton.

Official Secrets director Gavin Hood talks workflow on this real-life thriller

By Iain Blair

South African writer/director Gavin Hood burst onto the international scene when he wrote and directed 2005’s Academy Award-winning Tsotsi. The film, which was also nominated for a Golden Globe and a BAFTA, won the People’s Choice Award at the Toronto International Film Festival.

Gavin Hood

Hood followed up that success with the harrowing political drama Rendition (Reese Witherspoon, Meryl Streep), X-Men Origins: Wolverine (Hugh Jackman), the sci-fi offering Ender’s Game (with Asa Butterfield, Harrison Ford, Ben Kingsley) and the thriller Eye in the Sky (Helen Mirren, Aaron Paul, Alan Rickman).

For his new film, Official Secrets, Hood returns to the murky world of government secrets and political double-dealing with a true but largely forgotten story that could have prevented the disaster that was the Iraq invasion and war. It tells the gripping story of Katharine Gun (Keira Knightley), a British intelligence specialist whose job involves routine handling of classified information. In 2003, in the lead-up to the Iraq War, Gun receives a memo from the NSA with a shocking directive: the United States is enlisting Britain’s help in collecting compromising information on United Nations Security Council members in order to blackmail them into voting in favor of an invasion of Iraq. Unable to stand by and watch the world be rushed into an illegal war, Gun defies her government and leaks the memo to the press. So begins an explosive chain of events that ignites an international firestorm, exposes a vast political conspiracy and puts Gun and her family directly in harm’s way.

I recently spoke with Hood about making the film — which co-stars Ralph Fiennes as Gun’s lawyer and Matt Smith as journalist Martin Bright, who helped break the story — and his workflow.

To be honest, I’d never heard of Katharine Gun and her amazing story. Had you?
No, I knew nothing about it either. My producer Ged Doherty, who did Eye in the Sky with me, told me about this incredible true story and suggested I Google Katharine. Two hours later, having done a deep dive into this truly fascinating story, I realized it was this way of getting into the Iraq War and all that convoluted history through a very personal story.

What attracted you to this project?
That personal angle. Here’s a person who’s not a big political figure, but just someone going about her job. She comes across something that just smells rotten and decides she must say so. I thought, this could be any of us, in any organization, and who among us would be brave enough to become a whistleblower and risk losing our job in order to reveal the truth? She also risked losing her freedom, so whatever you think politically, she was very brave in following her conscience. I was intrigued right away by this character but not sure I actually wanted to do it.

I flew to London to meet Katharine. I sat down with her for five days, and each day we’d just talk and work for four or five hours. I’d take all these notes and over those five days, I think I won her trust. The main thing was, I just let her tell me about the events and what really happened without trying to make it into something more “Hollywood” or more exciting in terms of a movie. After that, I felt, “OK, we can do this.”

Lack of government transparency seems more timely than ever.
Absolutely, and that’s why this story is so important.

What sort of film did you set out to make?
A political thriller that’s also an understated personal drama. But making these kinds of films is far more difficult than making non-controversial entertainment fare, and it’s always so difficult getting financing.

Do you feel more responsibility when it’s based on real events and characters?
I do, and though I’ve made these kinds of films before, this was the first time for me that all the main people were still alive, so it brings with it certain restrictions. You don’t want to fall short and have them scoff at your efforts, and you can’t take liberties with the narrative and the facts. Then this had the challenge of being a true story that doesn’t follow the conventional Hollywood "hero whistleblower" tale. It didn’t change the world. She’s just an ordinary person who did something extraordinary, and it’s about common decency and dignity.

What did Keira bring to the lead role, as well as Ralph Fiennes as her lawyer and Matt Smith as journalist Martin Bright?
They were all so committed and did a lot of research into their characters. Keira told me it was great to play a strong woman without having to wear a corset, and she really inhabits the role and makes you feel what it was like to be in Katharine’s shoes. She shows so much with just her eyes, so we used a lot of close-up work with a 75mm lens. Ralph shot all his scenes in just six days because of our tight 34-day schedule.

Your DP was Florian Hoffmeister, and you shot with the new Sony 6K Venice camera. Can you talk about how you collaborated on the look and how that affected the DI?
Yes, we were actually the first feature film to use it, so we did a lot of tests. It has incredible dynamic range, not only in moving from highlights to shadow but also in its nuanced control of color. I’ve always loved shooting on film, but this camera’s so amazing that I’m now totally comfortable going all digital. In terms of post and the DI, we were really able to play with the footage.

When I shoot, I never want to push the look too much in-camera, as then you’re really limited in your choices in the DI. So my goal in shooting is always to get as much really detailed raw footage as I can, so I can then manipulate it in the DI. I don’t like to shoot with lots of filters and toys on the lens.

Where did you post?
At Technicolor in London and LA, and we did all the sound at Tribeca West in LA.

Do you like the post process?
I love it. I love writing, I love shooting, but post is where you actually make the film.

Talk about editing with your go-to editor, Megan Gill, who’s cut almost all of your films since 2005’s Tsotsi. How did that work?
She visits the set once or twice, but she doesn’t like to see how the sausage is made. She was on location just to look at the dailies and do her assembly, and I’d drop by and we’d discuss it. Then she started cutting in London, and we finished in LA.

What were the big editing challenges?
It was basically a meticulous search for the most nuanced performances and trusting that we could then let them play out. There’s a scene where Katharine’s visited at home by a detective who tells her she can’t talk to a lawyer or anyone without clearing it with the authorities first. Instead of cutting back and forth between them as you’d usually do, we kept it on Keira and you see her slow burn, and it was far more effective that way. So, often it’s more important where you don’t cut rather than where you do.

VFX play a role. How many were there and what did they entail?
Technicolor VFX did them all, and they were mostly comps for scenes shot in places like rooftops in Manchester and Liverpool, which doubled for London. So it was live shots augmented with matte paintings and VFX for the London skyline. And we had a lot of television comps, but not nearly as many VFX as I had on my last film, Eye in the Sky.

Can you talk about the importance of sound and music? Again, you recruited composers Paul Hepker and Mark Kilian, who have scored a number of your films, including Tsotsi and Rendition.
We go way back, and they’re both amazing South African composers who have such a range — from classical to jazz and world music. That range works so well with my films, which are often multicultural. So in this film we have Britain, but it’s also about Iraq. And Katharine’s husband is Kurdish-Turkish, so we had to build a soundtrack that vibrates with the sound and emotional resonance of all these different places and cultures.

They crafted a great score that did exactly that. We did most of the sound work at Technicolor. Then sound editor Craig Mann, who won the Oscar for Whiplash and who did Eye in the Sky, did all the mixing at Tribeca West… most of it in a very small room. And he worked closely with Paul and Mark and is so good at building atmosphere and tension.

Where did you do the DI, and how important is it to you?
Also at Tribeca West, and it’s extremely important to me, as I have a background in photography. Florian and I worked very closely with colorist Doug Delaney, and it’s a period piece so we wanted a dusty, slightly period feel without pushing it too far.

What’s next?
I’m developing several projects, so whatever comes together first.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Facilis, ATTO partner on 25Gb adapters for new Macs

Facilis, which makes high-performance shared storage solutions, has partnered with ATTO Technology to integrate ATTO’s new ThunderLink NS 3252 Thunderbolt 3 to 25GbE adapter within the Facilis Hub shared storage platform. The solution provides flexible, scalable, high-bandwidth connectivity for Apple’s new Mac Pro, iMac Pro and Mac mini.

At IBC in Amsterdam, Facilis will demonstrate 4K and 8K editing workflows featuring its Hub shared storage platform with ATTO Celerity 32Gb and 16Gb Fibre Channel HBAs and FastFrame 25Gbps Ethernet. In addition, Facilis servers include 10GigE optical and copper ATTO HBAs as well as ATTO 12Gb SAS internal and external interface cards. These technologies allow Facilis to create powerful solutions that fulfill a diverse set of customer connectivity needs and workflow demands.

Facilis has been beta testing the soon-to-be released ATTO 360 tuning, monitoring and analytics application, an Ethernet network optimization tool designed for creative professionals looking to unlock the potential of ATTO FastFrame and ThunderLink adapters.

“We’re very happy to expand our longstanding partnership with Facilis,” says ATTO CEO Jeff Lowe. “The new Facilis Hub Shared Storage platform is a powerful storage solution for media professionals working in compressed and uncompressed high-resolution video finishing formats utilizing Ethernet, Fibre Channel or both.”

At the IBC show, Facilis will also show the newly shipped Facilis Hub shared storage system and preview version 8.0 of the Hub management software. Built as an entirely new platform, Facilis Hub represents the evolution of the Facilis shared file system, with the block-level virtualization and multi-connectivity performance required for demanding media production workflows. Version 7.2 of the Facilis system software and FastTracker 3.0 are available now and included in all Hub systems.

The sounds of HBO’s Divorce: Keeping it real

HBO’s Divorce, which stars Sarah Jessica Parker and Thomas Haden Church, focuses on a long-married couple who just can’t do it anymore. It follows them from divorce through their efforts to move on with their lives, and what that looks like. The show deftly tackles a very difficult subject with a heavy dose of humor mixed in with the pain and angst. The story takes place in various Manhattan locations and a nearby suburb. And as you can imagine, the sounds of the neighborhoods vary.

Eric Hirsch and David Briggs

Sound post production for the third season of HBO’s comedy Divorce was completed at Goldcrest Post in New York City. Supervising sound editor David Briggs and re-recording mixer Eric Hirsch worked together to capture the ambiances of upscale Manhattan neighborhoods that serve as the backdrop for the story of the tempestuous breakup between Frances and Robert.

As is often the case with comedy series, the imperative for Divorce’s sound team was to support the narrative by ensuring that the dialogue is crisp and clear, and jokes are properly timed. However, Briggs and Hirsch go far beyond that in developing richly textured soundscapes to achieve a sense of realism often lacking in shows of the genre.

“We use sound to suggest life is happening outside the immediate environment, especially for scenes that are shot on sets,” explains Hirsch. “We work to achieve the right balance, so that the scene doesn’t feel empty but without letting the sound become so prominent that it’s a distraction. It’s meant to work subliminally so that viewers feel that things are happening in suburban New York, while not actually thinking about it.”

Season three of the show introduces several new locations and sound plays a crucial role in capturing their ambience. Parker’s Frances, for example, has moved to Inwood, a hip enclave on the northern tip of Manhattan, and background sound effects help to distinguish it from the woodsy village of Hastings-on-Hudson, where Haden Church’s Robert continues to live. “The challenge was to create separation between those two worlds, so that viewers immediately understand where we are,” explains series producer Mick Aniceto. “Eric and David hit it. They came up with sounds that made sense for each part of the city, from the types of cars you hear on the streets to the conversations and languages that play in the background.”

Meanwhile, Frances’ friend Diane (Molly Shannon) has taken up residence in a Manhattan high-rise, and it, too, required a specific sonic treatment. “The sounds that filter into a high-rise apartment are much different from those in a street-level structure,” Aniceto notes. “The hum of traffic is more distant, while you hear things like the whirl of helicopters. We had a lot of fun exploring the different sonic environments. To capture the flavor of Hastings-on-Hudson, our executive producer and showrunner came up with the idea of adding distant construction sounds to some scenes.”

A few scenes from the new season are set inside a prison. Aniceto says the sound team was able to help breathe life into that environment through the judicious application of very specific sound design. “David Briggs had just come off of Escape at Dannemora, so he was very familiar with the sounds of a prison,” he recalls. “He knew the kind of sounds that you hear in communal areas, not only physical sounds like buzzers and bells, but distant chats among guards and visitors. He helped us come up with amusing bits of background dialogue for the loop group.”

Most of the dialogue came directly from the production tracks, but the sound team hosted several ADR sessions at Goldcrest for crowd scenes. Hirsch points to an episode from the new season that involves a girls’ basketball team. ADR mixer Krissopher Chevannes recorded groups of voice actors (provided by Dann Fink and Bruce Winant of Loopers Unlimited) to create background dialogue for a scene on a team bus and another that happens during a game.

“During the scene on the bus, the girls are talking normally, but then the action shifts to slo-mo. At that point the sound design goes away and the music drives it,” Hirsch recalls. “When it snaps back to reality, we bring the loop-group crowd back in.”

The emotional depth of Divorce marks it as different from most television comedies, and it also creates more interesting opportunities for sound. “The sound portion of the show helps take it over the line and make it real for the audience,” says Aniceto. “Sound is a big priority for Divorce. I get excited by the process and the opportunities it affords to bring scenes to life. So I surround myself with smart and talented people like Eric and David, who understand how to do that and give the show the perfect feel.”

All three seasons of Divorce are available on HBO Go and HBO Now.

Building a massive editing storage setup on a budget

By Mike McCarthy

This year, I oversaw the editing process for a large international film production. This involved setting up a collaborative editing facility in the US, at Vasquez Saloon, with a large amount of high-speed storage for the source footage. While there was “only” 6.5TB of offline DNxHR files, they shot around 150TB of Red footage that we needed to have available for onsite VFX, conform, etc. Once we finished the edit, we were actually using 40TB of that footage in the cut, which we needed at another location for further remote collaboration. So I was in the market for some large storage solutions.

Our last few projects have been small enough to fit on eight-bay desktop eSAS arrays, which are quiet and relatively cheap. (Act of Valor was on a 24TB array of 3TB drives in 2010, while 6 Below was on 64TB arrays of 8TB drives.) Now that we have 12TB drives available, that allows those to go to 96TB, but we needed more capacity than that. With that much data on a single spindle, you lose more capacity to maintain redundancy, with RAID-6 dropping the raw space to 72TB.
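
As a sanity check on those capacity figures, here is a minimal sketch of the RAID-6 math (my own illustration, not any vendor’s tool): usable space is the drive count minus two parity drives, times the drive size.

```python
def raid6_usable_tb(num_drives: int, drive_tb: float) -> float:
    """RAID-6 reserves two drives' worth of capacity for parity."""
    if num_drives < 4:
        raise ValueError("RAID-6 requires at least four drives")
    return (num_drives - 2) * drive_tb

print(raid6_usable_tb(8, 12))   # 72.0 -> an eight-bay array of 12TB drives
print(raid6_usable_tb(24, 8))   # 176.0 -> 24x 8TB, before filesystem overhead
```

The parity cost is fixed at two drives, so it consumes 25% of an eight-bay array but only about 8% of a 24-bay one, which is the more efficient redundancy described below.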

Large numbers of smaller drives offer better performance and more efficient redundancy, as well as being cheaper per TB, at least for the drives. But once you get into large rack-mounted arrays, they are much louder and need to be located farther from the creative space, requiring different interconnects than direct-attached SAS. My initial quotes were for a 24x 8TB solution offering 192TB of storage, which RAID-6 and such reduced to 160 usable terabytes, for around $15K.

I was in the process of ordering one of those from ProAvio when they folded last Thanksgiving, resetting my acquisition process. I looked into building one myself, with a SAS storage chassis and bare drives, when I stumbled across refurbished servers on eBay. There are numerous companies selling used servers that include storage chassis, backplanes and RAID cards for less than the chassis alone costs new.

The added benefit is that these include a fully functioning Xeon-level computer system as well. At the very least, this allows you to share the storage over a 10GbE network, and in our case we were also able to use it as a render node and eventually a user workstation. That solution worked well enough that we will be using similar systems for future artist stations, even without that type of storage requirement. I have set up two separate systems so far, for different needs, and learned a lot in the process. I thought I would share some of those details here.

Why use refurbished systems for top-end work? Most of the CPU advances in the last few years have come in the form of increased core counts and energy efficiency. This means that in lightly threaded applications, CPUs from a few years ago will perform nearly as well as brand-new ones. And previous-generation DDR3 RAM is much cheaper than DDR4. PCIe 3.0 has been around for many generations, but older systems won’t have Thunderbolt 3 and may not even have USB 3. USB 3 can be added with an expansion card, but Thunderbolt will require a current-generation system. The other primary limitation is finding systems that have drivers for running Windows 10, since those systems are usually designed for Linux and Windows Server. Make sure you verify the motherboard will support Windows 10 before you make a selection. (Unfortunately, Windows 7 is finally dying, with no support from Microsoft or current application releases.)

Workstations and servers are closely related at the hardware level, but have a few design differences. They use the same chipsets and Xeon processors, but servers are designed for remote administration in racks while workstations are designed to be quieter towers with more graphics capability. But servers can be used for workstation tasks with a few modifications, and used servers can be acquired very cheaply. Also, servers frequently have the infrastructure for large drive arrays, while workstations are usually designed to connect to separate storage for larger datasets.

Recognizing these facts, I set out to build a large repository for my 150TB of Red footage on a system that could also run my Adobe applications and process the data. While 8TB drives are currently the optimal size for storing the most data at the lowest total price, that will change over time. And 150TB of data required more than 16 drives, so I focused on 4U systems with 24 drive bays. Starting from 192TB of raw storage, subtracting two drives for RAID-6 (16TB) and about 10% for Windows overhead leaves me with 160TB of storage space reported in Windows.
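
That "10% for Windows overhead" is, I would assume, mostly the decimal-versus-binary unit mismatch: drives are sold in decimal terabytes, while Windows counts in binary units but labels them "TB". A rough check under that assumption:

```python
TB = 10**12   # decimal terabyte, the unit drives are sold in
TIB = 2**40   # binary tebibyte, the unit Windows counts (but labels "TB")

usable_after_raid6 = (24 - 2) * 8 * TB   # 176TB left after two parity drives
print(usable_after_raid6 / TIB)          # ~160.07 -> the 160TB Windows reports
```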

4U chassis also allow for full-height PCIe cards, which is important for modern GPUs. Finding support for full-height PCIe slots is probably the biggest challenge in selecting a chassis, as most server cards are low profile. A 1U chassis can fit a dual-slot GPU if it’s designed to accept one horizontally, but cooling may be an issue for workstation cards. A 2U chassis has the same issue, so you must have a 3U or 4U chassis to install full-height PCIe cards vertically, and the extra space will help with cooling and acoustics as well.

Dell and HP offer options as well, but I went with Supermicro since its design fit my needs the best. I got a 4U chassis with a 24-port pass-through SAS backplane for maximum storage performance and an X9DRi-LNF4+ motherboard that was supposed to support Windows 7 and Windows 10. The pass-through backplane gave full-speed access to 24 drives over six quad-channel SFF-8643 ports but required a 24-port RAID card and more cables. The other option is a port-multiplying backplane, which has a single or dual SFF-8643 connection to the RAID card. This allows for further expansion at the expense of potential complexity and latency. And 12G SAS is 1.5GB/s per lane, so in theory a single SFF-8643 cable can pass up to 6GB/s, which should be as much as most RAID controllers can handle anyway.
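
Using the article’s 1.5GB/s-per-lane figure (12Gb/s divided by eight bits per byte, ignoring encoding overhead), the backplane bandwidth works out as follows:

```python
lane_gbytes = 12 / 8      # 12Gb/s SAS lane ~= 1.5GB/s (encoding overhead ignored)
lanes_per_port = 4        # an SFF-8643 connector carries four SAS lanes
ports = 6                 # the pass-through backplane's six connectors

print(lane_gbytes * lanes_per_port)          # 6.0GB/s per SFF-8643 cable
print(lane_gbytes * lanes_per_port * ports)  # 36.0GB/s aggregate across 24 drives
```

In practice the RAID controller, not the backplane, sets the ceiling, which is why a single SFF-8643 cable into a port-multiplying backplane can still feed most controllers at full speed.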

The system cost about $2K, plus $5K for the 24 drives, which is less than half of what I was looking at paying for a standalone external SAS array, and it included a full computer with 20 CPU cores and 128GB of RAM. I considered it a bit of a risk, as I had never done something at that scale and there was no warranty, but we decided that the cost savings were worth a try. It wasn’t without its challenges, but it is definitely a viable solution for a certain type of customer. (One with more skills than money.)

Putting it to Use
The machine ran loud, as was to be expected with 24 drives and five fans, but it was installed in a machine room with our rackmount UPS and network switches, so the noise wasn’t a problem. I ran 30-foot USB and HDMI cables to the user station in the next room and frequently controlled it via VNC. I added an Nvidia Pascal Quadro card, a 10GbE card and a USB 3 card, as well as a SATA SSD for the OS in an optional 2.5-inch drive tray. Once I got the array set up and initialized, it benchmarked at over 3,000MB/s transfer rate. This was far more than I needed for Red files, but I won’t turn down excess speed for future use with uncompressed 8K frames or 40GbE network connections.

Initially, I had trouble with Windows 10. I was getting bluescreen ACPI BIOS errors on boot, but Windows 7 worked flawlessly. I used Win7 for a month, but I knew I would need to move to Win10 within the year and was looking at building more systems, so I needed to confirm that Win10 could work successfully. I eventually determined that Windows Update — which has always been the bane of my existence when using Win10 — was causing the problem. It was automatically updating one of the chipset drivers to a version that prevented the system from booting. The only solution was to keep Win10 off the Internet until after the current driver was successfully installed, and the only way to disable Windows Update during install is to totally disconnect the system from the network. Once I did that, everything worked great, and I ordered another system.

The second time, I didn’t need as much data, so I went with a 16-bay 3U chassis… which was a mistake. It ran hotter and louder with less case space, and it doesn’t fit GPUs with top-mounted power plugs or full-sized CPU coolers. So regardless of how many drive bays you need, I recommend buying a 24-bay 4U system for the space it gives you. (The Supermicro 36-bay systems look the same from the front but have less space available, since the extra 12 bays in the rear constrain the motherboard much like a 2U case.) The extra space also gives you more options for cards and cooling solutions.

I also tried an NVMe drive in a PCIe slot, and while it works, booting from it is not an option without modding the BIOS, which I was not about to experiment with. So I installed the OS on a SATA SSD again and was able to adapt it to one of the 16 standard drive bays, as I only needed eight of them for my 64TB array. This system had a pass-through backplane with 16 single-port SATA connectors, which is much messier than the SFF-8643 connectors. But it works, and it’s simpler to mix the drives between the RAID card and the motherboard, which is a plus.

When I received the unit, it was FAR louder than the previously ordered 4U one, for a number of reasons. It had 800W power supplies — instead of the 920W-SQ (Super-quiet) ones in my first one — and the smaller case had different airflow limitations. I needed this one to be quieter than the first system, as I was going to be running it next to my desk instead of in a machine room. So I set about redesigning the cooling system, which was the source of 90% of the noise. I got the power supplies replaced with 920SQ ones, although the 700W ones are supposed to be quiet as well, and much cheaper.

I replaced the five 80mm 5,000rpm jet-engine system fans with Noctua 1,800rpm fans, which made the system quiet but didn’t provide enough airflow for the passively cooled CPUs. I then ordered two large CPU coolers with horizontally mounted 92mm fans to cool the Xeon chips, replacing the default passive heatsinks that use case airflow for cooling. I also installed a 40x20mm fan on the RAID card, which had been overheating even with the default jet-engine fans. Once I had those eight Noctua fans installed, the system was whisper quiet and could render at 100% CPU usage without throttling or overheating. So I was able to build a system with 16 cores and 128GB RAM for about $1,500, not counting the 64TB of storage, which doubles that price, and the GPU, which I already had. (Although a GTX 1660 can be had for $300 and would be a good fit in that budget range.) The first one I built had 20 cores at 3GHz and 128GB RAM for about $2,000, plus $5,000 for the 192TB of storage. I was originally looking at getting just the 192TB external arrays for twice that price, so by comparison this was half the cost with a high-end computer tossed in as a bonus.

Looking Ahead
The things I plan to do differently in the future include:
Always getting the 4U chassis for maximum flexibility,
making sure to get quiet power supplies ($50 to $150) and
budgeting to replace all the fans and CPU coolers if noise is going to be an issue ($200).

But at the end of the day, you should be able to get a powerful dual-socket system ready to support massive storage volume for around $2,000. This solution makes the most sense when you need large capacity storage, as well as the editing system. Otherwise some of what you are paying for is going to waste.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

3D Design + Motion Tour to visit 26 cities, 3D artists presenting

The 3D Design + Motion Tour will hit the road September 2 through December 10, making stops in 26 cities throughout North America and Europe. Artists who want to break into high-end 3D digital production will have the chance to learn from VIPs in the motion design and visual effects industry, including Andrew Kramer from Video Copilot, Nick Campbell from Greyscalegorilla, EJ Hassenfratz with Eyedesyn, Chris Schmidt with Rocket Lasso and Thanassis Pozantzis with Noseman. The tour is sponsored by Maxon, Adobe, Nvidia and Dell and produced by Future Media Conferences with partnership support from NAB Show.

At each 3D Design + Motion Tour stop, attendees will learn how to create a state-of-the-art production company vanity logo animation. Event organizers have tapped the creative team at Perception — the motion graphics design studio recognized for redesigning the Marvel Studios logo and opening animation and visual effects for Black Panther, Ant-Man and the Wasp, Thor: Ragnarok and Batman v. Superman: Dawn of Justice — to design and execute an exclusive project for the tour. Industry-leading guest artists will break down key project elements and share motion design techniques that highlight the integration and performance between Maxon’s Cinema 4D application, Adobe’s Creative Suite and the Redshift GPU-accelerated renderer for unlimited creative capabilities.

The $95 cost to attend the 3D Design + Motion Tour includes software apps to help artists sharpen their 3D skillsets:
• Adobe CC 30-day license
• Cinema 4D 90-day license
• Redshift 90-day license
• Sketchfab 90-day license
• Project files and resources
• Inclusion in event drawings

Tour registration also includes networking with local artists and VIP presenters, access to tutorials and project files, and follow-up webinars. Space is limited. Registration details are here.

Additional details on the 3D Design + Motion Tour, including tour stops and guest presenters, are available here.

Danielle Katvan joins 1stAveMachine’s directorial roster

Film and commercial director Danielle Katvan has joined the roster at Brooklyn-based production company 1stAveMachine. Her work includes the Clio-winning spot for Vogue and Free People, as well as commercials for The Venetian Resort Las Vegas and ServiceNow’s The Future of Work. Her short film, The Foster Portfolio, premiered at the 2017 Tribeca Film Festival.

Katvan grew up in her parents’ photography studio in New York City, so she was exposed to the art of storytelling from a young age. She began by taking 35mm photographs, developing the film in their home’s darkroom. This fascination evolved into an interest in moving images, and she bought her first video camera at age 12.

Katvan’s style includes adding offbeat humor into highly stylized, cinematic worlds. “It’s like our world, but with the volume turned up a bit,” she explains. “The beauty of filmmaking is that you can escape to another place but still feel emotionally connected to what you’re watching – and good performances are such a huge part of making that connection.”

“We have been big fans of Danielle’s work for some time. Her eye for authentic performances and beautiful cinematography, set against thoughtful art direction, have made for some incredibly compelling films,” says Sam Penfield, a partner at 1stAveMachine.

Behind the Title: Element EP Kristen Kearns

NAME: Kristen Kearns

COMPANY: Boston’s Element Productions

CAN YOU DESCRIBE YOUR COMPANY?
Element has been in business for 20 years. We handle production and post production for video content on all platforms.

WHAT’S YOUR JOB TITLE?
Executive Producer / COO

WHAT DOES THAT ENTAIL?
I oversee the office operations and company culture, and I work with clients on their production and post projects. I handle sales and bidding and work with our post and production talent to keep growing and expanding their creative goals.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I wear a lot of hats. I think people are always surprised by how much I have to juggle, from hiring employees and approving bills to bidding projects and collaborating with directors on treatments.

WHAT TOOLS DO YOU USE?
We love Slack, Box and Google Apps. Collaboration is such a big part of what we do, and we could not function as seamlessly as we do without these awesome tools.

WHAT’S YOUR FAVORITE PART OF THE JOB?
The people. I love who I work with.

WHAT’S YOUR LEAST FAVORITE?
When we work really hard on bidding a project and we don’t win. I understand this is a competitive business, but it is still really hard to lose after you put so much time and energy into a bid.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I love the mornings. I like the quiet before everyone comes in. I get into the office early and take that time to think through my day and my priorities. Or, sometimes I use the time to brainstorm and think through business challenges or business goals for the overall growth of the company.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I am a bit obsessed with The Home Edit. If you don’t follow them on Instagram, you should. Their stories are hilarious. Anyway, I would want to work for them. Crazy lives all wrapped up in tidy cabinets.

Alzheimer’s Association

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We recently launched a project for a local bank that featured a Yeti, a unicorn and a Sasquatch. Projects like this are what keep my job interesting and challenging. I had to do a bunch of research on costumes and prosthetics.

We also just wrapped on a short film for the Alzheimer’s Association. Giving back is a really important part of our company culture. We were so moved by the story of this couple and their struggles with this debilitating disease. I was really proud to be a part of this production.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I am proud of a lot of the work that we do, but I would say most recently we worked on a multi-platform project with Dunkin’ that really stretched our producing skills. The idea was very innovative, with the goal being to power a home entirely on coffee grounds.

We connected all the dots of the project, from finding a biofuel manufacturer to the builder in Nashville, and documented the entire process. The project culminated in a live event in New York City before the home traveled to the coast of Massachusetts to be listed as an Airbnb.

Dunkin

NAME PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
I recently went to Washington, DC, with my family, and the National Museum of American History had an exhibit called “Within These Walls.” It highlighted the evolution of one home, and with it the changing technology. I remember being really taken aback by the laundry exhibit. I think we all take for granted the time and convenience it saves us. Can you imagine if we had to spend hours dunking and wringing out clothes? It has actually given us more freedom and convenience to pursue passions and interests. I could live without my phone or a television, but trap me with a bucket and a clothesline and I would lose my mind.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
I grew up in a dance studio, so I actually find that I work better with some sort of music in the background. The office has a Sonos system, so we all take turns playing music.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Immersing myself in art and culture. Whether it is going to a museum to view artwork, seeing a band or heading to a movie to truly appreciate other people’s creativity. It is the best way for me to unwind as I enjoy the talent and art of others.

Dialects, guns and Atmos mixing: Tom Clancy’s Jack Ryan

By Jennifer Walden

Being an analyst is supposed to be a relatively safe job. A paper cut is probably the worst job-related injury you’d get… maybe, carpal tunnel. But in Amazon Studios/Paramount’s series Tom Clancy’s Jack Ryan, CIA analyst Jack Ryan (John Krasinski) is hauled away from his desk at CIA headquarters in Langley, Virginia, and thrust into an interrogation room in Syria where he’s asked to extract info from a detained suspect. It’s a far cry from a sterile office environment and the cuts endured don’t come from paper.

Benjamin Cook

Four-time Emmy award-winning supervising sound editor Benjamin Cook, MPSE — at 424 Post in Culver City — co-supervised Tom Clancy’s Jack Ryan with Jon Wakeham. Their sound editorial team included sound effects editors Hector Gika and David Esparza, MPSE, dialogue editor Tim Tuchrello, music editor Alex Levy, Foley editor Brett Voss, and Foley artists Jeff Wilhoit and Dylan Tuomy-Wilhoit.

This is Cook’s second Emmy nomination this season; he is also nominated for sound editing on HBO’s Deadwood: The Movie.

Here, Cook talks about the aesthetic approach to sound editing on Jack Ryan and breaks down several scenes from the Emmy-nominated “Pilot” episode in Season 1.

Congratulations on your Emmy nomination for sound editing on Tom Clancy’s Jack Ryan! Why did you choose the first episode for award consideration?
Benjamin Cook: It has the most locations, establishes the CIA involvement, and has a big battle scene. It was a good all-around episode. There were a couple other episodes that could have been considered, such as Episode 2 because of the Paris scenes and Episode 6 because it’s super emotional and had incredible loop group and location ambience. But overall, the first episode had a little bit better balance between disciplines.

The series opens up with two young boys in Lebanon, 1983. They’re playing and being kids; it’s innocent. Then the attack happens. How did you use sound to help establish this place and time?
Cook: We sourced a recordist to go out and record material in Syria and Turkey. That was a great resource. We also had one producer who recorded a lot of material while he was in Morocco. Some of that could be used and some of it couldn’t because the dialect is different. There was also some pretty good production material recorded on-set and we tried to use that as much as we could as well. That helped to ground it all in the same place.

The opening sequence ends with explosions and fire, which makes an interesting juxtaposition to the tranquil water scene that follows. What sounds did you use to help blend those two scenes?
Cook: We did a muted effect on the water when we first introduced it and then it opens up to full fidelity. So we were going from the explosions and that concussive blast to a muted, filtered sound of the water and rowing. We tried to get the rhythm of that right. Carlton Cuse (one of the show’s creators) actually rows, so he was pretty particular about that sound. Beyond that, it was filtering the mix and adding design elements that were downplayed and subtle.

The next big scene is in Syria, when Sheikh Al Radwan (Jameel Khoury) comes to visit Sheikh Suleiman (Ali Suliman). How did you use sound to help set the tone of this place and time?
Cook: It was really important that we got the dialects right. Whenever we were in the different townships and different areas, one of the things that the producers were concerned about was authenticity with the language and dialect. There are a lot of regional dialects in Arabic, but we also needed Kurdish, Turkish — Kurmanji, Chechen and Armenian. We had really good loop group, which helped out tremendously. Caitlan McKenna, our group leader, cast several multilingual voice actors who were familiar with the area and could give us a couple of different dialects; that really helped to sell location for sure. The voices — probably more than anything else — are what helped to sell the location.

Another interesting juxtaposition of sound was going from the sterile CIA office environment to this dirty, gritty, rattley world of Syria.
Cook: My aesthetic for this show — besides going for the authenticity that the showrunners were after — was trying to get as much detail into the sound as possible (when appropriate). So, even when we’re in the thick of the CIA bullpen there is lots of detail. We did an office record where we set mics around an office and moved papers and chairs and opened desk drawers. This gave the office environment movement and life, even when it is played low.

That location seems sterile compared to the dirty, gritty black-ops site in Yemen, with its sand gusts blowing, metal shacks rattling and tents flapping in the wind. You also have on- and off-screen vehicles and helicopters. Those textures were really helpful in differentiating those two worlds.

Tell me about Jack Ryan’s panic attack at 4:47am. It starts with that distant siren and then an airplane flyover before flashing back to the kid in Syria. What went into building that sequence?
Cook: A lot of that was structured by the picture editor, and we tried to augment what they had done and keep their intention. We changed out a few sounds here and there, but I can’t take credit for that one. Sometimes that’s just the nature of it. They already have an idea of what they want to do in the picture edit and we just augment what they’ve done. We made it wider, spread things out, added more elements to expand the sound more into the surrounds. The show was mixed in Dolby Atmos for the home, so we created extra tracks to play in the Atmos sound field. The soundtrack still has a lot of detail in the 5.1 and 7.1 mixes, but the Atmos mix sounds really good.

Those street scenes in Syria, as we’re following the bank manager through the city, must have been a great opportunity to work with the Atmos surround field.
Cook: That is one of my favorite scenes in the whole show. The battles are fun but the street scene is a great example of places where you can use Atmos in an interesting way. You can use space to your advantage to build the sound of a location and that helps to tell the story.

At one point, they’re in the little café and we have glass rattles and discrete sounds in the surround field. Then it pans across the street to a donkey pulling a cart and a Vespa zips by. We use all of those elements as opportunities to increase the dynamics of the scene.

Going back to the battles, what were your challenges in designing the shootout near the end of this episode? It’s a really long conflict sequence.
Cook: The biggest challenge was that it was so long and we had to keep it interesting. You start off by building everything, you cut everything, and then you have to decide what to clear out. We wanted to give the different sides — the areas inside and outside — a different feel. We tried to do that as much as possible but the director wanted to take it even farther. We ended up pulling the guns back, perspective-wise, making them even farther than we had. Then we stripped out some to make it less busy. That worked out well. In the end, we had a good compromise and everyone was really happy with how it plays.

The guns were those original recordings or library sounds?
Cook: There were sounds in there that are original recordings, and also some library sounds. I’ve gotten material from sound recordist Charles Maynes — he is my gun guru. I pretty much copy his gun recording setups when I go out and record. I learned everything I know from Charles in terms of gun recording. Watson Wu had a great library that recently came out and there is quite a bit of that in there as well. It was a good mix of original material and library.

We tried to do as much recording as we could, schedule permitting. We outsourced some recording work to a local guy in Syria and Turkey. It was great to have that material, even if it was just to use as a reference for what that place should sound like. Maybe we couldn’t use the whole recording but it gave us an idea of how that location sounds. That’s always helpful.

Locally, for this episode, we did the office shoot. We recorded an MRI machine and Greer’s car. Again, we always try to get as much as we can.

There are so many recordists out there who are a great resource, who are good at recording weapons, like Charles, Watson and Frank Bry (at The Recordist). Frank has incredible gun sounds. I use his libraries all the time. He’s up in Idaho and can capture these great long tails that are totally pristine and clean. The quality is so good. These guys are recording on state-of-the-art, top-of-the-line rigs.

Near the end of the episode, we’re back in Lebanon, 1983, with the boys coming to after the bombing. How did you use sound to help enhance the tone of that scene?
Cook: In the Avid track, they had started with a tinnitus ringing and we enhanced that. We used filtering on the voices and delays to give it more space and add a haunting aspect. When the older boy really wakes up and snaps to, we’re playing up the wailing of the younger kid as much as possible. Even when the older boy lifts the burning log off the younger boy’s legs, we really played up the creak of the wood and the fire. You hear the gore of charred wood pulling the skin off his legs. We played those elements up to make a very visceral experience in that last moment.

The music there is very emotional, and so is seeing that young boy in pain. Those kids did a great job and that made it easy for us to take that moment further. We had a really good source track to work with.

What was the most challenging scene for sound editorial? Why?
Cook: Overall, the battle was tough. It was a challenge because it was long and it was a lot of cutting and a lot of material to get together and go through in the mix. We spent a lot of time on that street scene, too. Those two scenes were where we spent the most time for sure.

For the opening sequence, with the bombs, there was debate on whether we should hear the bomb sounds in sync with the explosions happening visually, or whether the sound should be delayed. That always comes up. It’s weird when the sound doesn’t match the visual, when in reality you’d hear the sound of an explosion that happens miles away much later than you’d see it.

Again, those are the compromises you make. One of the great things about this medium is that it’s so collaborative. No one person does it all… or rarely is it one person. It does take a village, and we had great support from the producers. They were very intentional about sound. They wanted sound to be a big player. Right from the get-go they gave us the tools and support that we needed, and that was really appreciated.

What would you want other sound pros to know about your sound work on Tom Clancy’s Jack Ryan?
Cook: I’m really big into detail on the editing side, but the mix on this show was great too. It’s unfortunate that the mixers didn’t get an Emmy nomination for mixing. I usually don’t get recognized unless the mixing is really done well.

There’s more to this series than the pilot episode. There are other super good sounding episodes; it’s a great sounding season. I think we did a great job of finding ways of using sound to help tell the story and have it be an immersive experience. There is a lot of sound in it and as a sound person, that’s usually what we want to achieve.

I highly recommend that people listen to the show in Dolby Atmos at home. I’ve been doing Atmos shows now since Black Sails. I did Lost in Space in Atmos, and we’re finishing up Season 2 in Atmos as well. We did Counterpart in Atmos. Atmos for home is here, and we’re going to see more and more projects mixed in Atmos. You can play something off your phone in Atmos now. It’s incredible how much the technology has changed. It’s another tool to help us tell the story. Look at Roma (my favorite mix last year). That film really used the Atmos sound field, with extreme panning at times. In my honest opinion, it made the film more interesting and brought another level to the story.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Company 3 buys Sixteen19, offering full-service post in NYC

Company 3 has acquired Sixteen19, a creative editorial, production and post company based in New York City. The deal includes Sixteen19’s visual effects wing, PowerHouse VFX, and a mobile dailies operation with international reach.

The acquisition helps Company 3 further serve NYC’s booming post market for feature film and episodic TV. As part of the acquisition, industry veterans and Sixteen19 co-founders Jonathan Hoffman and Pete Conlin, along with their longtime collaborator, EVP of business development and strategy Alastair Binks, will join Company 3’s leadership team.

“With Sixteen19 under the Company 3 umbrella, we significantly expand what we bring to the production community, addressing a real unmet need in the industry,” says Company 3 president Stefan Sonnenfeld. “This infusion of talent and infrastructure will allow us to provide a complete suite of services for clients, from the start of production through the creative editing process to visual effects, final color, finishing and mastering. We’ve worked in tandem with Sixteen19 many times over the years, so we know that they have always provided strong client relationships, a best-in-class team and a deeply creative environment. We’re excited to bring that company’s vision into the fold at Company 3.”

Sonnenfeld will continue to serve as president of Company 3 and will oversee operations of Sixteen19. As a subsidiary of Deluxe, Company 3 is part of a broad portfolio of post services. Bringing together the complementary services and geographic reach of Company 3, Sixteen19 and PowerHouse VFX will expand Company 3’s overall portfolio of post offerings and reach new markets in the US and internationally.

Sixteen19’s New York location includes 60 large editorial suites; two 4K digital cinema grading theaters; and a number of comfortable spaces, open environments and common areas. Sixteen19’s mobile dailies services will be a perfect companion to Company 3’s existing offerings in that arena. PowerHouse VFX includes dedicated teams of experienced supervisors, producers and artists in 2D and 3D visual effects and compositing.

“The New York film community initially recognized the potential for a Company 3 and Sixteen19 partnership,” says Sixteen19’s Hoffman. “It’s not just the fact that a significant majority of the projects we work on are finished at Company 3, it’s more that our fundamental vision about post has always been aligned with Stefan’s. We value innovation; we’ve built terrific creative teams; and above all else, we both put clients first, always.”

Sixteen19 and PowerHouse VFX will retain their company names.

Scratch 9.1 now supports AJA Kona 5, Red 8K workflows

Assimilate’s Scratch 9.1 now supports AJA Kona 5 audio and video I/O cards, enabling users to output 8K 60p video via 12G-SDI. Scratch 9.1 also now supports AJA’s Io 4K Plus I/O box with Thunderbolt 3 connectivity. The product also works with AJA’s T-Tap, Io 4K, Kona 1 and Kona 4.

Scratch support for Kona 5 allows for a smooth dailies and finishing workflow for Red 8K footage. Scratch handles the decoding and deBayering of 8K Red RAW in realtime at full resolution and can now natively output 8K over SDI through Kona 5, facilitating a full end-to-end 8K workflow.

Available immediately, Scratch 9.1 starts at $89 per month or $695 per year. AJA Kona 5 and Io 4K Plus are available now through AJA’s reseller network for $2,995 and $2,495, respectively.

ADR, loop groups, ad-libs: Veep‘s Emmy-nominated audio team

By Jennifer Walden

HBO wrapped up its seventh and final season of Veep back in May, so sadly, we had to say goodbye to Julia Louis-Dreyfus’ morally flexible and potty-mouthed Selina Meyer. And while Selina’s political career was a bit rocky at times, the series was rock-solid — as evidenced by its 17 Emmy wins and 68 nominations over the show’s seven-year run.

For re-recording mixers William Freesh and John W. Cook II, this is their third Emmy nomination for Sound Mixing on Veep. This year, they entered the series finale — Season 7, Episode 7 “Veep” — for award consideration.

L-R: William Freesh, Sue Cahill, John W. Cook, II

Veep post sound editing and mixing was handled at NBCUniversal Studio Post in Los Angeles. In the midst of Emmy fever, we caught up with re-recording mixer Cook (who won a past Emmy for the mix on Scrubs) and Veep supervising sound editor Sue Cahill (winner of two past Emmys for her work on Black Sails).

Here, Cook and Cahill talk about how Veep’s sound has grown over the years, how they made the rapid-fire jokes crystal clear, and the challenges they faced in crafting the series’ final episode — like building the responsive convention crowds, mixing the transitions to and from the TV broadcasts, and cutting that epic three-way argument between Selina, Uncle Jeff and Jonah.

You’ve been with Veep since 2016? How has your approach to the show changed over the years?
John W. Cook II: Yes, we started when the series came to the states (having previously been posted in England with series creator Armando Iannucci).

Sue Cahill: Dave Mandel became the showrunner, starting with Season 5, and that’s when we started.

Cook: When we started mixing the show, production sound mixer Bill MacPherson and I talked a lot about how together we might improve the sound of the show. He made some tweaks, like trying out different body mics and negotiating with our producers to allow for more boom miking. Notwithstanding all the great work Bill did before Season 5, my job got consistently easier over Seasons 5 through 7 because of his well-recorded tracks.

Also, some of our tools have changed in the last three years. We installed the Avid S6 console. This, along with a handful of new plugins, has helped us work a little faster.

Cahill: In the dialogue editing process this season, we started using a tool called Auto-Align Post from Sound Radix. It’s a great tool that allowed us to cut both the boom and the ISO mics for every clip throughout the show and put them in perfect phase. This allowed John the flexibility to mix both together to give it a warmer, richer sound throughout. We lean heavily on the ISO mics, but being able to mix in the boom more helped the overall sound.

Cook: You get a bit more depth. Body mics tend to be more flat, so you have to add a little bit of reverb and a lot of EQing to get it to sound as bright and punchy as the boom mic. When you can mix them together, you get a natural reverb on the sound that gives the dialogue more depth. It makes it feel like it’s in the space more. And it requires a little less EQing on the ISO mic because you’re not relying on it 100%. When the Auto-Align Post technology came out, I was able to use both mics together more often. Before Auto-Align, I would shy away from doing that if it was too much work to make them sound in-phase. The plugin makes it easier to use both, and I find myself using the boom and ISO mics together more often.
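For readers curious about the mechanics: Sound Radix doesn’t publish Auto-Align Post’s exact algorithm, but the core idea, estimating the time offset between two mics and shifting one so the pair sums in phase, can be sketched in a few lines of Python. This is a simplified illustration only, assuming mono NumPy arrays at a common sample rate; the shipping plugin also corrects sub-sample and drifting offsets.

```python
import numpy as np
from scipy.signal import correlate

def align_lav_to_boom(boom: np.ndarray, lav: np.ndarray, max_shift: int = 4800):
    """Estimate the delay between two mics via cross-correlation and
    shift the lav so it sums in phase with the boom.
    max_shift caps the search at +/-100 ms for 48kHz material."""
    n = min(len(boom), len(lav))
    xcorr = correlate(boom[:n], lav[:n], mode="full")
    lags = np.arange(-n + 1, n)
    plausible = np.abs(lags) <= max_shift        # ignore absurd offsets
    lag = lags[plausible][np.argmax(xcorr[plausible])]
    # Positive lag means the lav arrived early (it sits closer to the
    # mouth), so delay it; np.roll wraps, so real code would zero-pad.
    return np.roll(lav, lag), lag
```

Once the two tracks line up, mixing in the boom adds natural room depth without the comb filtering that out-of-phase mics would cause.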

The dialogue on the show has always been rapid-fire, and you really want to hear every joke. Any tools or techniques you use to help the dialogue cut through?
Cook: In my chain, I’m using FabFilter Pro-Q 2 a lot, EQing pretty much every single line in the show. FabFilter’s built-in spectrum analyzer helps get at that target EQ that I’m going for, for every single line in the show.

In terms of compression, I’m doing a lot of gain staging. I have five different points in the chain where I use compression. I’m never trying to slam it too much, just trying to tap it at different stages. It’s a music technique that helps the dialogue to never sound squashed. Gain staging allows me to get a little more punch and a little more volume after each stage of compression.
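Cook’s chain lives in console processing and plugins, but the gain-staging idea he describes, several gentle compressors in series rather than one heavy one, can be roughly sketched as follows. The thresholds and ratios below are hypothetical, and a real compressor would add attack and release envelopes; this static version just shows how small amounts of reduction accumulate across stages.

```python
import numpy as np

def compress(x: np.ndarray, threshold_db: float, ratio: float,
             makeup_db: float) -> np.ndarray:
    """Heavily simplified static compressor: anything above the
    threshold is reduced by the ratio, then makeup gain is applied."""
    level_db = 20 * np.log10(np.abs(x) + 1e-10)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio) + makeup_db
    return x * 10 ** (gain_db / 20)

def gain_staged(x: np.ndarray) -> np.ndarray:
    # Five gentle taps in series instead of one hard squash; each
    # stage trims a little and adds a little volume back.
    stages = [(-24, 1.5, 1.0), (-20, 1.8, 1.0), (-18, 2.0, 1.0),
              (-15, 2.0, 1.5), (-12, 2.5, 1.5)]
    for threshold, ratio, makeup in stages:
        x = compress(x, threshold, ratio, makeup)
    return x
```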

Cahill: On the editing side, it starts with digging through the production mic tracks to find the cleanest sound. The dialogue assembly on this show is huge. It’s 13 tracks wide for each clip, and there are literally thousands of clips. The show is very cutty, and there are tons of overlaps. Weeding through all the material to find the best lav mics, in addition to the boom, really takes time. It’s not necessarily the character’s lav mic that’s the best for a line. They might be speaking more clearly into the mic of the person that is right across from them. So, listening to every mic choice and finding the best lav mics requires a couple days of work before we even start editing.

Also, we do a lot of iZotope RX work in editing before the dialogue reaches John’s hands. That helps to improve intelligibility and clear up the tracks before John works his magic on it.

Is it hard to find alternate production takes due to the amount of ad-libbing on the show? Do you find you do a lot of ADR?
Cahill: Exactly, it’s really hard to find production alts in the show because there is so much improv. So, yeah, it takes extra time to find the cleanest version of the desired lines. There is a significant amount of ADR in the show. In this episode in particular, we had 144 lines of principal ADR. And, we had 250 cues of group. It’s pretty massive.

There must’ve been so much loop group in the “Veep” episode. Every time they’re in the convention center, it’s packed with people!
Cook: There was the larger convention floor to consider, and the people that were 10 to 15 feet away from whatever character was talking on camera. We tried to balance that big space with the immediate space around the characters.

This particular Veep episode has a chaotic vibe. The main location is the nomination convention. There are huge crowds, TV interviews (both in the convention hall and also playing on Selina’s TV in her skybox suite and hotel room) and a big celebration at the end. Editorially, how did you approach the design of this hectic atmosphere?
Cahill: Our sound effects editor Jonathan Golodner had a lot of recordings from prior national conventions. So those recordings are used throughout this episode. It really gives the convention center that authenticity. It gave us the feeling of those enormous crowds. It really helped to sell the space, both when they are on the convention floor and from the skyboxes.

The loop group we talked about was a huge part of the sound design. There were layers and layers of crafted walla. We listened to a lot of footage from past conventions and found that there is always a speaker on the floor giving a speech to ignite the crowd, so we tried to recreate that in loop group. We did some speeches that we played in the background so we would have these swells of the crowd and crowd reactions that gave the crowd some movement so that it didn’t sound static. I felt like it gave it a lot more life.

We recreated chanting in loop group. There was a chant for Tom James (Hugh Laurie), which was part of production. They were saying, “Run Tom Run!” We augmented that with group. We changed the start of that chant from where it was in production. We used the loop group to start that chant sooner.

Cook: The Tom James chant was one instance where we did have production crowd. But most of the time, Sue was building the crowds with the loop group.

Cahill: I used casting director Barbara Harris for loop group, and throughout the season we had so many different crowds and rallies — both interior and exterior — that we built with loop group because there wasn’t enough from production. We had to hit on all the points that they are talking about in the story. Jonah (Timothy Simons) had some fun rallies this season.

Cook: Those moments of Jonah’s were always more of a “call-and-response”-type treatment.

The convention location offered plenty of opportunity for creative mixing. For example, the episode starts with Congressman Furlong (Dan Bakkedahl) addressing the crowd from the podium. The shot cuts to a CBSN TV broadcast of him addressing the crowd. Next the shot cuts to Selina’s skybox, where they’re watching him on TV. Then it’s quickly back to Furlong in the convention hall, then back to the TV broadcast, and back to Selina’s room — all in the span of seconds. Can you tell me about your mix on that sequence?
Cook: It was about deciding on the right reverb for the convention center and the right reverbs for all the loop group and the crowds and how wide to be (how much of the surrounds we used) in the convention space. Cutting to the skybox, all of that sound was mixed to mono, for the most part, and EQ’d a little bit. The producers didn’t want to futz it too much. They wanted to keep the energy, so mixing it to mono was the primary way of dealing with it.

Whenever there was a graphic on the lower third, we talked about treating that sound like it was news footage. But we decided we liked the energy of it being full fidelity for all of those moments we’re on the convention floor.

Another interesting thing was the way that Bill Freesh and I worked together. Bill was handling all of the big cut crowds, and I was handling the loop group on my side. We were trying to walk the line between a general crowd din on the convention floor, where you always felt like it was busy and crowded and huge, along with specific reactions from the loop group reacting to something that Furlong would say, or later in the show, reacting to Selina’s acceptance speech. We always wanted to play reactions to the specifics, but on the convention floor it never seems to get quiet. There was a lot of discussion about that.

Even though we cut from the convention center into the skybox, those considerations about crowd were still in play — whether we were on the convention floor or watching the convention through a TV monitor.

You did an amazing job on all those transitions — from the podium to the TV broadcast to the skybox. It felt very real, very natural.
Cook: Thank you! That was important to us, and certainly important to the producers. All the while, we tried to maintain as much energy as we could. Once we got the sound of it right, we made sure that the volume was kept up enough so that you always felt that energy.

It feels like the backgrounds never stop when they’re in the convention hall. In Selina’s skybox, when someone opens the door to the hallway, you hear the crowd as though the sound is traveling down the hallway. Such a great detail.
Cook and Cahill: Thank you!

For the background TV broadcasts feeding Selina info about the race — like Buddy Calhoun (Matt Oberg) talking about the transgender bathrooms — what was your approach to mixing those in this episode? How did you decide when to really push them forward in the mix and when to pull back?
Cook: We thought about panning. For the most part, our main storyline is in the center. When you have a TV running in the background, you can pan it off to the side a bit. It’s amazing how you can keep the volume up a little more without it getting in the way and masking the primary characters’ dialogue.

It’s also about finding the right EQ so that the TV broadcast isn’t sharing the same EQ bandwidth as the characters in the room.

Compression plays a role too, whether that’s via a plugin or me riding the fader. I can manually do what a side-chained compressor can do by just riding the fader and pulling the sound down when necessary or boosting it when there’s a space between dialogue lines from the main characters. The challenge is that there is constant talking on this show.
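What Cook does by riding the fader is what a side-chained compressor does automatically: the dialogue level drives the gain of the background TV. A rough sketch of the automatic version, assuming mono NumPy arrays and a simple fast-attack, slow-release envelope follower, might look like this; the depth and release values are purely illustrative.

```python
import numpy as np

def duck(background: np.ndarray, dialogue: np.ndarray, sr: int = 48000,
         depth_db: float = 9.0, release_s: float = 0.4) -> np.ndarray:
    """Duck the background whenever the dialogue is active, easing
    back up in the gaps between lines (like riding a fader)."""
    alpha = np.exp(-1.0 / (release_s * sr))      # one-pole release
    env = np.empty(len(dialogue))
    prev = 0.0
    for i, sample in enumerate(np.abs(dialogue)):
        prev = max(sample, alpha * prev)         # fast attack, slow release
        env[i] = prev
    env /= env.max() + 1e-10                     # normalize 0..1
    gain = 10 ** (-depth_db * env / 20)          # up to depth_db of dip
    return background * gain
```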

Going back to what has changed over the last three years, one of the big differences is that we have more time per episode to mix the show. We got more and more time from the first mix to the last; by the end, we had twice as much time to mix the show.

Even with all the backgrounds happening in Veep, you never miss the dialogue lines. Except, there’s a great argument that happens when Selina tells Jonah he’s going to be vice president. His Uncle Jeff (Peter MacNicol) starts yelling at him, and then Selina joins in. And Jonah is yelling back at them. It’s a great cacophony of insults. Can you tell me about that scene?
Cahill: Those 15 seconds of screen time took us several hours of work in editorial. Dave (Mandel) said he couldn’t understand Selina clearly enough, but he didn’t want to loop the whole argument. Of course, all three characters are overlapped — you can hear all of them on each other’s mics — so how do you just loop Selina?

We started with an extensive production alt search that went back and forth through the cutting room a few times. We decided that we did need to ADR Selina. So we ended up using a combination of mostly ADR for Selina’s side with a little bit of production.

For the other two characters, we wanted to save their production lines, so our dialogue editor Jane Boegel (she’s the best!) did an amazing job using iZotope RX’s De-bleed feature to clear Selina’s voice out of their mics, so we could preserve their performances.

We didn’t loop any of Uncle Jeff, and it was all because of Jane’s work cleaning out Selina. We were able to save all of Uncle Jeff. It’s mostly production for Jonah, but we did have to loop a few words for him. So it was ADR for Selina, all of Uncle Jeff and nearly all of Jonah from set. Then, it was up to John to make it match.

Cook: For me, in moments like those, it’s about trying to get equal volumes for all the characters involved. I tried to make Selina’s yelling and Uncle Jeff’s yelling at the exact same level so the listener’s ear can decide what it wants to focus on rather than my mix telling you what to focus on.

Another great mix sequence was Selina’s nomination for president. There’s a promo video of her talking about horses that’s playing back in the convention hall. There are multiple layers of processing happening — the TV filter, the PA distortion and the convention hall reverb. Can you tell me about the processing on that scene?
Cook: Oftentimes, when I do that PA sound, it’s a little bit of futzing, like rolling off the lows and highs, almost like you would do for a small TV. But then you put a big reverb on it, with some pre-delay on it as well, so you hear it bouncing off the walls. Once you find the right reverb, you’re also hearing it reflecting off the walls a little bit. Sometimes I’ll add a little bit of distortion as well, as if it’s coming out of the PA.

When Selina is backstage talking with Gary (Tony Hale), I rolled off a lot more of the highs on the reverb return on the promo video. Then, in the same way I’d approach levels with a TV in the room, I was riding the level on the promo video to fit around the main characters’ dialogue. I tried to push it in between little breaks in the conversation, pulling it down lower when we needed to focus on the main characters.
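As a rough illustration of the kind of chain Cook describes (band-limit like a small speaker, a touch of distortion, then a big pre-delayed reverb), here is a sketch in Python with SciPy. Every parameter here is a guess for illustration, not a setting from the show, and the synthetic noise-tail reverb stands in for a proper hall impulse response.

```python
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

def pa_futz(x: np.ndarray, sr: int = 48000) -> np.ndarray:
    """Rough 'convention hall PA' treatment for a mono signal."""
    # 1. Roll off the lows and highs, like a small speaker
    sos = butter(4, [300, 4000], btype="bandpass", fs=sr, output="sos")
    y = sosfilt(sos, x)
    # 2. A little distortion, as if the PA is being pushed
    y = np.tanh(3.0 * y) / 3.0
    # 3. Big reverb: 80 ms pre-delay, then a 2 s decaying tail so the
    #    sound reads as bouncing off distant walls
    pre_delay = np.zeros(int(0.08 * sr))
    t = np.arange(int(2.0 * sr)) / sr
    tail = np.random.default_rng(0).standard_normal(len(t)) * np.exp(-3.0 * t)
    ir = np.concatenate([pre_delay, [1.0], 0.05 * tail])  # dry spike + wet tail
    return fftconvolve(y, ir, mode="full")[: len(x)]
```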

What was the most challenging scene for you to mix?
Cook: I would say the Tom James chanting was challenging because we wanted to hear the chant from inside the skybox to the balcony of the skybox and then down on the convention floor. There was a lot of conversation about the microphones from Mike McLintock’s (Matt Walsh) interview. The producers decided that since there was a little bit of bleed in the production already, they wanted Mike’s microphone to be going out to the PA speakers in the convention hall. You hear a big reverb on Tom James as well. Then there was the level of all the loop group specifics and chanting — the ramp up of the chant from zero to full volume — which we negotiated with the producers. That was one of the more challenging scenes.

The acceptance speech was challenging too, because of all of the cutaways. There is that moment with Gary getting arrested by the FBI; we had to decide how much of that we wanted to hear.
There was the Billy Joel song “We Didn’t Start the Fire” that played over all the characters’ banter following Selina’s acceptance speech. We had to balance the dialogue with the desire to crank up that track as much as we could.

There were so many great moments this season. How did you decide on the series finale episode, “Veep,” for Emmy consideration for Sound Mixing?
Cook: It was mostly about story. This is the end of a seven-year run (a three-year run for Sue and me), but the fact that every character gets a moment — a wrap-up on their character — makes me nostalgic about this episode in that way.

It also had some great sound challenges that came together nicely, like all the different crowds and the use of loop group. We’ve been using a lot of loop group on the show for the past three years, but this episode had a particularly massive amount of loop group.

The producers were also huge fans of this episode. When I talked to Dave Mandel about which episode we should put up, he recommended this one as well.

Any other thoughts you’d like to add on the sound of Veep?
Cook: I’m going to miss Veep a lot. The people on it, like Dave Mandel, Julia Louis-Dreyfus and Morgan Sackett … everyone behind the credenza. They were always working to create an even better show. It was a thrill to be a team member. They always treated us like we were in it together to make something great. It was a pleasure to work with people that recognize and appreciate the time and the heart that we contribute. I’ll miss working with them.

Cahill: I agree with John. On that last playback, no one wanted to leave the stage. Dave brought champagne, and Julia brought chocolates. It was really hard to say goodbye.

Roger and Big Machine merge, operate as Roger

Creative agency Roger and full-service production company Big Machine have merged — a move that will expand the creative capabilities for their respective agency, brand and entertainment clients. The studios will retain the Roger name and operate at Roger’s newly renovated facility in Los Angeles.

The combined management team includes CD Terence Lee, CD Dane Macbeth, EP Josh Libitsky, director Steve Petersen, CD Ken Carlson and Sean Owolo, who focuses on business development.

Roger now offers expanded talent and resources for projects that require branding, design, animation, VFX, VR/AR, live action and content development. Roger uses Adobe Creative Cloud for most of its workflows. The tools vary from project to project, but outside of the Adobe suite, they also use Maxon Cinema 4D, Autodesk Maya, Blackmagic DaVinci Resolve and Foundry Nuke.

Since the merger, the studio has already embarked on a number of projects, including major creative campaigns for Disney and Sony Pictures.

Roger’s new 6,500-square-foot studio includes four private offices, three editing suites, two conference rooms, an empty shooting space for greenscreen work, a kitchen and a lounge.

Behind the Title: Mission’s head of digital imaging, Pablo Garcia Soriano

NAME: Pablo Garcia Soriano (@pablo.garcia.soriano)

COMPANY: UK-based Mission (@missiondigital)

CAN YOU DESCRIBE YOUR COMPANY?
Mission is a provider of DIT and digital lab services based in London, with additional offices in Cardiff, Rome, Prague and Madrid. We process and manage media and metadata, producing rich deliverables with as much captured metadata as possible — delivering consistency and creating efficiencies in VFX and post production.

WHAT’S YOUR JOB TITLE?
Head of Digital Imaging

WHAT DOES THAT ENTAIL?
I work with cinematographers to preserve their vision from the point of capture until the final deliverable. This means supporting productions through camera tests, pre-production and look design. I also work with manufacturers, which often means I get an early look at new products.

Mission

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
It sounds like a very technical job, but it’s so much more than engineering — it’s creative engineering. It’s problem solving and making technical complexities seem easy to a creative person.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I love working with cinematographers to help them achieve their vision and make sure it is preserved through post. I also enjoy being able to experiment with the latest technology and have an influence on products. Recently, I’ve been involved with growing Mission’s international presence with our Madrid office, which is particularly close to my heart.

WHAT’S YOUR LEAST FAVORITE?
Sometimes I get to spend hours in a dark room with a probe calibrating monitors. It’s dull but necessary!

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
In the early to mid-morning after two coffees. Also at the end of the day when the office is quieter.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Gardening… or motor racing.

WHY DID YOU CHOOSE THIS PROFESSION?
I feel like it chose me. I’m an architect by training, but I was a working musician until around the age of 28, when I stepped down from the stage and started as a freelancer doing music promos. I was doing a bit of everything on those: directing, editing, finishing, etc. Then I was asked to be the assistant editor on two films by a colleague with whom I was sharing an office.

After this experience (and due to the changes the music industry was going through), I decided to focus fully on editing, cutting several documentaries and short films. I then ended up on a weekly TV show where I was in charge of the final assembly. This is where I started paying attention to continuity and the overall look. I was using Apple Final Cut and Apple Color, which I loved. All of this happened in a very organic way, and I was always self-taught.

I didn’t take studying seriously until I met the DP Rafa Roche, AEC, on our first film together, around the age of 31. Rafa mentored me, teaching me all about cameras, lenses and filters, and filling my brain with curiosity about all the technical stuff (signal, codecs, workflows). From there to now, it has all been a bit of a rollercoaster, with some moments of real vertigo caused by how fast everything has developed.

Downton Abbey

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We work on a lot of features and television in the UK and Europe — recent projects include Cats, Downton Abbey, Cursed and Criminal.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
In 2018, I was the HDR image supervisor for the World Cup in Moscow. Knowing the popularity of football and working on a project that would be seen by so many people around the world was truly an honor, despite the pressure!

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
A good reference monitor, a good set of speakers and Spotify.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Yes, music is a huge part of my life. I have very varied taste. For example, I enjoy Wilco, REM and Black Sabbath.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I like to walk by the River Thames in Hammersmith, London, near where I live.

Game of Thrones’ Emmy-nominated visual effects

By Iain Blair

Once upon a time, only glamorous movies could afford the time and money it took to create truly imaginative and spectacular visual effects. Meanwhile, television shows either tried to avoid them altogether or had to rely on hand-me-downs. But the digital revolution changed all that, with technological advances and new tools quickly leveling the playing field. Today, television is giving the movies a run for their money when it comes to sophisticated visual effects, as evidenced by HBO’s blockbuster series Game of Thrones.

Mohsen Mousavi

This fantasy series was recently Emmy-nominated a record-busting 32 times for its eighth and final season — including one for its visually ambitious VFX in the penultimate episode, “The Bells.”

The epic mass destruction presented Scanline’s VFX supervisor, Mohsen Mousavi, and his team with many challenges. But his expertise in high-end visual effects, and his reputation for constant innovation in advanced methodology, made him a perfect fit to oversee Scanline’s VFX for the crucial last three episodes of the final season of Game of Thrones.

Mousavi started his VFX career in the field of artificial intelligence and advanced physics-based simulations. He spearheaded the design and development of many proprietary toolsets and pipelines for crowd, fluid and rigid-body simulation, including FluidIT, BehaveIT and CardIT, a node-based crowd choreography toolset.

Prior to joining Scanline VFX Vancouver, Mousavi rose through the ranks of top visual effects houses, working in jobs that ranged from lead effects technical director to CG supervisor and, ultimately, VFX supervisor. He’s been involved in such high-profile projects as Hugo, The Amazing Spider-Man and Sucker Punch.

In 2012, he began working with Scanline, acting as digital effects supervisor on 300: Rise of an Empire, for which Scanline handled almost 700 water-based sea battle shots. He then served as VFX supervisor on San Andreas, helping develop the company’s proprietary city-generation software. That software and pipeline were further developed and enhanced for scenes of destruction in director Roland Emmerich’s Independence Day: Resurgence. In 2017, he served as the lead VFX supervisor for Scanline on the Warner Bros. shark thriller, The Meg.

I spoke with Mousavi about creating the VFX and their pipeline.

Congratulations on being Emmy-nominated for “The Bells,” which showcased so many impressive VFX. How did all your work on Season 4 prepare you for the big finale?
We were heavily involved in the finale of Season 4; however, the scope was far smaller. What we learned was the collaborative nature of the show, what the expectations were in terms of the quality of the work, and what HBO wanted.

You were brought onto the project by lead VFX supervisor Joe Bauer, correct?
Right. Joe was the “client VFX supervisor” on the HBO side and was involved since Season 3. Together with my producer, Marcus Goodwin, we also worked closely with HBO’s lead visual effects producer, Steve Kullback, who I’d worked with before on a different show and in a different capacity. We all had daily sessions and conversations, a lot of back and forth, and Joe would review the entire work, give us feedback and manage everything between us and other vendors, like Weta, Image Engine and Pixomondo. This was done both technically and creatively, so no one stepped on each other’s toes if we were sharing a shot and assets. But it was so well-planned that there wasn’t much overlap.

[Editor’s Note: Here is the full list of those nominated for their VFX work on Game of Thrones — Joe Bauer, lead visual effects supervisor; Steve Kullback, lead visual effects producer; Adam Chazen, visual effects associate producer; Sam Conway, special effects supervisor; Mohsen Mousavi, visual effects supervisor; Martin Hill, visual effects supervisor; Ted Rae, visual effects plate supervisor; Patrick Tiberius Gehlen, previz lead; and Thomas Schelesny, visual effects and animation supervisor.]

What were you tasked with doing on Season 8?
We were involved as one of the lead vendors on the last three episodes and covered a variety of sequences. In Episode 4, “The Last of the Starks,” we worked on the confrontation between Daenerys and Cersei in front of the King’s Landing gate, which included a full CG environment of the city gate and the landscape around it, as well as Missandei’s death sequence, which featured a full CG Missandei. We also did the animated Drogon outside the gate while the negotiations took place.

Then for “The Bells” we were responsible for most of the Battle of King’s Landing, which included the full digital city, Daenerys’ army campsite outside the walls of King’s Landing, the gathering of soldiers in front of the King’s Landing walls, Dany’s attack on the scorpions, the city gate, streets and the Red Keep, which had some very close-up set extensions, close-up fire and destruction simulations and a full CG crowd of various factions — armies and civilians. We also did the iconic Cleganebowl fight between The Hound and The Mountain and Jaime Lannister’s fight with Euron at the beach underneath the Red Keep. In Episode 5, we received raw animation caches of the dragon from Image Engine and did the full look-dev, lighting and rendering of the final dragon in our composites.

For the final episode, “The Iron Throne,” we were responsible for the entire Daenerys speech sequence, which included a full 360-degree digital environment of the city aftermath and the Red Keep plaza filled with digital Unsullied, Dothraki and CG horses, leading into the majestic confrontation between Jon and Drogon, where the dragon reveals itself from underneath a huge pile of snow outside the Red Keep. We were also responsible for the iconic throne-melt sequence, which included some advanced simulation of highly viscous fluid and destruction of the area around the throne, and we finished the dramatic sequence with Drogon carrying Dany out of the throne room and away from King’s Landing into the unknown.

Where was all this work done?
The majority of the work was done here in Vancouver, which is the biggest Scanline office. Additionally we had teams working in our Munich, Montreal and LA offices. We’re a 100% connected company, all working under the same infrastructure in the same pipeline. So if I work with the team in Munich, it’s like they’re sitting in the next room. That allows us to set up and attack the project with a larger crew and get the benefit of the 24/7 scenario; as we go home, they can continue working, and it makes us far more productive.

How many VFX did you have to create for the final season?
We worked on over 600 shots across the final three episodes, which gave us over an hour of screen time of high-end, consistent visual effects.

Isn’t that hour length unusual for 600 shots?
Yes, but we had a number of shots that were really long, including some ground coverage shots of Arya in the streets of King’s Landing that were over four or five minutes long. So we had the complexity along with the long duration.

How many people were on your team?
At the height, we had about 350 artists on the project, and we began in March 2018 and didn’t wrap till nearly the end of April 2019 — so it took us over a year of very intense work.

Tell us about the pipeline specific to Game of Thrones.
Scanline has an industry-wide reputation for delivering very complex, full CG environments combined with complex simulation scenarios of all sorts of fluid dynamics and destruction, based on our simulation framework, Flowline. We had a high-end digital character and hero creature pipeline that gave the final three episodes a boost up front. What was new were the additions to our procedural city-generation pipeline for the recreation of King’s Landing, making sure it could deliver both in wide-angle shots and in some extreme close-up set extensions.

How did you do that?
We used a framework we developed for Independence Day: Resurgence, which is a module-based procedural city generator leveraging some incredible scans of the historical city of Dubrovnik as a blueprint and foundation for King’s Landing. Instead of doing the modeling conventionally, you model a lot of small modules, kind of like Lego blocks. You create various windows, stones, doors, shingles and so on, and once they’re encoded in the system, you can semi-automatically generate variations of buildings on the fly. The same goes for texturing. We had procedurally generated layers of facade textures, which gave us a lot of flexibility in texturing the entire city, with full control over the level of aging and damage. We could easily decide to make a block look older without going back to square one. That’s how we could create King’s Landing with its hundreds of thousands of unique buildings.
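Scanline’s generator is proprietary, but the Lego-block idea Mousavi describes, encoding small facade modules once and then assembling endless building variants with a controllable aging parameter, can be sketched in miniature. The module names and parameters below are invented purely for illustration.

```python
import random

# Encoded once: small interchangeable facade modules (the "Lego blocks")
MODULES = {
    "wall":   ["stone_block", "plaster", "brick"],
    "roof":   ["clay_shingles", "slate_shingles"],
    "door":   ["plank_door", "iron_gate"],
    "window": ["arched_window", "shuttered_window", "narrow_window"],
}

def generate_building(rng: random.Random, floors: int, bays: int,
                      aging: float) -> dict:
    """Assemble one building variant from the module library. 'aging'
    (0..1) drives a procedural weathering/damage layer, so a whole
    block can be made to look older without remodeling anything."""
    return {
        "wall": rng.choice(MODULES["wall"]),
        "roof": rng.choice(MODULES["roof"]),
        "door": rng.choice(MODULES["door"]),
        "floors": [[rng.choice(MODULES["window"]) for _ in range(bays)]
                   for _ in range(floors)],
        "aging": aging,
    }

# Semi-automatically generate a city block of unique buildings
rng = random.Random(42)
block = [generate_building(rng, floors=rng.randint(2, 4),
                           bays=rng.randint(3, 6), aging=0.3)
         for _ in range(12)]
```

Scaled up from a dozen buildings to hundreds of thousands, with real geometry and procedural texture layers behind each module name, this is the kind of combinatorial variety that makes a full CG King’s Landing tractable.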

The same technology was applied to the aftermath of the city in Episode 6. We took the intact King’s Landing and ran a number of procedural collapsing simulations on the buildings to get the correct weight based on references from the bombed city of Dresden during WWII, and then we added procedurally created CG snow on the entire city.

It didn’t look like the usual matte paintings were used at all.
You’re right, and there were a lot of shots that normally would be done that way, but to Joe’s credit, he wanted to make sure the environments weren’t cheated in any way. That was a big challenge, to keep everything consistent and accurate. Even if we used traditional painting methods, it was all done on top of an accurate 3D representation with correct lighting and composition.

What other tools did you use?
We use Autodesk Maya for all our front-end departments, including modeling, layout, animation, rigging and creature effects, and we bridge the results to Autodesk 3ds Max, which encapsulates our look-dev/FX and rendering departments, powered by Flowline and Chaos Group’s V-Ray as our primary render engine, followed by Foundry’s Nuke as our main compositing package.

At the heart of our crowd pipeline we use Massive, and our creature department is driven with Ziva muscles, a collaboration we started with Ziva Dynamics for the creation of the hero megalodon in The Meg.

Fair to say that your work on Game of Thrones was truly cutting-edge?
Game of Thrones has pushed the limit above and beyond and has effectively erased the TV/feature line. In terms of environment and effects and the creature work, this is what you’d do for a high-end blockbuster for the big screen. No difference at all.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

MovieLabs, film studios release ‘future of media creation’ white paper

MovieLabs (Motion Pictures Laboratories), a nonprofit technology research lab that works jointly with member studios Sony, Warner Bros., Disney, Universal and Paramount, has published a new white paper presenting an industry vision for the future of media creation technology by 2030.

The paper, co-authored by MovieLabs and technologists from Hollywood studios, paints a bold picture of future technology and discusses the need for the industry to work together now on innovative new software, hardware and production workflows to support and enable new ways to create content over the next 10 years. The white paper is available today for free download on the MovieLabs website.

The 2030 Vision paper lays out key principles that will form the foundation of this technological future, with examples and a discussion of the broader implications of each. The key principles envision a future in which:

1. All assets are created or ingested straight to the cloud and do not need to move.
2. Applications come to the media.
3. Propagation and distribution of assets is a “publish” function.
4. Archives are deep libraries with access policies matching speed, availability and security to the economics of the cloud.
5. Preservation of digital assets includes the future means to access and edit them.
6. Every individual on a project is identified and verified and their access permissions are efficiently and consistently managed.
7. All media creation happens in a highly secure environment that adapts rapidly to changing threats.
8. Individual media elements are referenced, tracked, interrelated and accessed using a universal linking system.
9. Media workflows are non-destructive and dynamically created using common interfaces, underlying data formats and metadata.
10. Workflows are designed around realtime iteration and feedback.

Rich Berger

“The next 10 years will bring significant opportunities, but there are still major challenges and inherent inefficiencies in our production and distribution workflows that threaten to limit our future ability to innovate,” says Richard Berger, CEO of MovieLabs. “We have been working closely with studio technology leaders and strategizing how to integrate new technologies that empower filmmakers to create ever more compelling content with more speed and efficiency. By laying out these principles publicly, we hope to catalyze an industry dialog and fuel innovation, encouraging companies and organizations to help us deliver on these ideas.”

The publication of the paper will be supported by a panel discussion at the IBC Conference in Amsterdam. The panel, “Hollywood’s Vision for the Future of Production in 2030,” will include senior technology leaders from the five major Hollywood motion picture studios. It will take place on Sunday, September 15, at 2:15pm in the Forum room of the RAI. postPerspective’s Randi Altman will moderate the panel, made up of Sony’s Bill Baggelaar, Disney’s Shadi Almassizadeh, Universal’s Michael Wise and Paramount’s Anthony Guarino. More details can be found here.

“Sony Pictures Entertainment has a deep appreciation for the role that current and future technologies play in content creation,” says CTO of Sony Pictures Don Eklund. “As a subsidiary of a technology-focused company, we benefit from the power of Sony R&D and Sony’s product groups. The MovieLabs 2030 document represents the contribution of multiple studios to forecast and embrace the impact that cloud, machine learning and a range of hardware and software will have on our industry. We consider this a living document that will evolve over time and provide appreciated insight.”

According to Wise, SVP/CTO at Universal Pictures, “With film production experiencing unprecedented growth, and new innovative forms of storytelling capturing our audiences’ attention, we’re proud to be collaborating across the industry to envision new technological paradigms for our filmmakers so we can efficiently deliver worldwide audiences compelling entertainment.”

For those not familiar with MovieLabs, their stated goal is “to enable member studios to work together to evaluate new technologies and improve quality and security, helping the industry deliver next-generation experiences for consumers, reduce costs and improve efficiency through industry automation, and derive and share the appropriate data necessary to protect and market the creative assets that are the core capital of our industry.”

Technicolor adds Patrick Smith, Steffen Wild to prepro studio

Technicolor has added Patrick Smith to head its visualization department, partnering with filmmakers to help them realize their vision in a digital environment before they hit the set. By helping clients define lensing, set dimensions, asset placement and even precise on-set camera moves, Smith and his team will play a vital role in helping clients plan their shoots in the virtual environment in ways that feel completely natural and intuitive to them. He reports to Kerry Shea, who heads Technicolor’s Pre-Production Studio.

“By enabling clients to leverage the latest visualization technologies and techniques while using hardware similar to what they are already familiar with, Patrick and his team will empower filmmakers by ensuring their creative visions are clearly defined at the very start of their projects — and remain at the heart of everything they do from their first day on set to take their stories to the next level,” explains Shea. “Bringing visualization and the other areas of preproduction together under one roof removes redundancy from the filmmaking process which, in turn, reduces stress on the storytellers and allows them as much time as possible to focus on telling their story. Until now, the process of preproduction has been a divided and inefficient process involving different vendors and repeated steps. Bringing those worlds together and making it a seamless, start-to-finish process is a game changer.”

Smith has held a number of senior positions within the industry, including most recently as creative director/senior visualization supervisor at The Third Floor. He has worked on titles such as Bumblebee, Avengers: Infinity War, Spider-Man: Homecoming, Guardians of the Galaxy Vol. 2 and The Secret Life of Walter Mitty.

“Visualization used to involve deciding roughly what you plan to do on set. Today, you can plan out precisely how to achieve your vision on set down to the inch – from the exact camera lens to use, to exactly how much dolly track you’ll need, to precisely where to place your actors,” he says. “Visualization should be viewed as the director’s paint brush. It’s through the process of visualization that directors can visually explore and design their characters and breathe life into their story. It’s a sandbox where they can experiment, play and perfect their vision before the pressure of being on set.”

In other Technicolor news, last week the studio announced that Steffen Wild has joined as head of its virtual production department. “As head of virtual production, Wild will help drive the studio’s approach to efficient filmmaking by bringing previously separate departments together into a single pipeline,” says Shea. “We currently see what used to be separate departments merge together. For example, previz, techviz and postviz, which were all separate ways to find answers to production questions, are now in the process of collaborating together in virtual production.”

Wild has over 20 years of experience, including 10 years spearheading Jim Henson’s Creature Shop’s expanding efforts in innovative animation technologies, virtual studio productions and new ways of visual storytelling. As SVP of digital puppetry and visual effects at the Creature Shop, Wild crafted new production techniques using proprietary game engine technologies. He brings with him in-depth knowledge of global and local VFX and animation production, rapid prototyping and cloud-based entertainment projects. In addition to his role in the development of next-generation cinematic technologies, he has set up VFX/animation studios in the US, China and southeast Europe.

Main Image: (L-R) Patrick Smith and Steffen Wild

FilmLight sets speakers for free Color On Stage seminar at IBC

At this year’s IBC, FilmLight will host a free two-day seminar series, Color On Stage, on September 14 and 15, featuring live presentations and discussions with colorists and other creative professionals. Sessions will cover topics ranging from the role of the colorist today to color management and next-generation grading tools.

“Color on Stage offers a good platform to hear about real-world interaction between colorists, directors and cinematographers,” explains Alex Gascoigne, colorist at Technicolor and one of this year’s presenters. “Particularly when it comes to large studio productions, a project can take place over several months and involve a large creative team and complex collaborative workflows. This is a chance to find out about the challenges involved with big shows and demystify some of the more mysterious areas in the post process.”

This year’s IBC program includes colorists from broadcast, film and commercials, as well as DITs, editors, VFX artists and post supervisors.

Program highlights include:
•    Creating the unique look for Mindhunter Season 2
Colorist Eric Weidt will talk about his collaboration with director David Fincher — from defining the workflow to creating the look and feel of Mindhunter. He will break down scenes and run through color grading details of the masterful crime thriller.

•    Realtime collaboration on the world’s longest-running continuing drama, ITV Studios’ Coronation Street
The session will address improving production processes and enhancing pictures with efficient renderless workflows, with colorist Stephen Edwards, finishing editor Tom Chittenden and head of post David Williams.

•    Looking to the future: Creating color for the TV series Black Mirror
Colorist Alex Gascoigne of Technicolor will explain the process behind grading Black Mirror, including the interactive episode Bandersnatch and the recent fifth season.

•    Bollywood: A World of Color
This session will delve into the Indian film industry with CV Rao, technical general manager at Annapurna Studios in Hyderabad. In this talk, CV will discuss grading and color as exemplified by the hit film Baahubali 2: The Conclusion.

•    Joining forces: Strengthening VFX and finishing with the BLG workflow
Mathieu Leclercq, head of post at Mikros Image in Paris, will be joined by colorist Sebastian Mingam and VFX supervisor Franck Lambertz to showcase their collaboration on recent projects.

•    Maintaining the DP’s creative looks from set to post
Meet with French DIT Karine Feuillard, ADIT — who worked on the latest Luc Besson film Anna as well as the TV series The Marvelous Mrs Maisel — and FilmLight workflow specialist Matthieu Straub.

•    New color management and creative tools to make multi-delivery easier
The latest and upcoming Baselight developments, including a host of features aimed at simplifying delivery for emerging technologies such as HDR. With FilmLight’s Martin Tlaskal, Daniele Siragusano and Andy Minuth.

Color On Stage will take place in Room D201 on the second floor of the Elicium Centre (Entrance D), close to Hall 13. The event is free to attend but spaces are limited. Registration is available here.

DP Chat: Dopesick Nation cinematographer Greg Taylor

By Randi Altman

Dopesick Nation is a documentary series on Vice Media’s Viceland that follows two recovering heroin addicts, Frankie and Allie, in South Florida as they try to help others while taking a look at corruption and exploitation in the rehab industry. The series was inspired by the feature film American Relapse.

Dopesick Nation

As you might imagine, the shoot was challenging, often taking place at night and in dubious locales, but cinematographers Greg Taylor and Mike Goodman were up to the task. Both had worked with series co-creator/executive producer Patrick McGee previously and were happy to collaborate once more.

We reached out to DP Taylor to talk about working with McGee and Goodman and the show’s workflow.

Tell us about Dopesick Nation. How early did you get involved in this series, and how did you work with the director?
Pat McGee tapped Mike Goodman and me to shoot American Relapse. We were just coming off another show and had a finely tuned team ready to spend long nights on this new project. The movie turned out to have the familiar gritty feel you see in the show, but in a feature documentary format.

I imagine it was a natural progression to use us again once the TV show was greenlit by Viceland. Pat would keep on our heels to find the best moments for every story and would push us to go out and produce intimate moments with the subjects on the fly. He and producer Adam Linkenhelt (American Relapse) were with us almost every step of the way, offering advice, watching our backs and looking out for incoming storylines. Honestly, I can’t say enough good things about that whole crew.

(L-R) Mike Goodman, supervising producer Adam Linkenhelt and showrunner Pat McGee (Photo by Greg Taylor)

How did you work with fellow DP Mike Goodman? How did you divvy up the shots?
Mike and I have worked long enough together that we have an efficient shorthand. A gesture or look can set up an entire scene sometimes, and I often can’t tell my shots from his. We both put a lot of effort into creativity in our imagery and pushing the bar as much as we can handle. During rare downtimes, we might brainstorm on a new way to shoot b-roll or decide what “gritty” should look and feel like.

Covering the often late and challenging days took a bit of baton-passing back and forth. Some days, we would split up and shoot single camera as well. It was decided at some point that I would cover more of Frankie’s story, while Mike would cover Allie. When the two met up at the end of the day, we would cover them together. Most of the major scenes we shot together, but there were times when too much was happening to cover it all. We were really in the addicts’ world, so some events were completely unexpected.

How would you describe the look of the doc?
I’d say gritty would be the best single word, but that can be nuanced quite a bit. There was an overall aim to keep some frames dirty during dialogue scenes to achieve a slightly voyeuristic feel but still leave lots of room for intimate, in-your-face, bam-type moments when the story dictated. We always paid attention to our backgrounds, and there was a focus on the contrast between beautiful southeast Florida and the dark underbelly lurking just next to it. The show had to be so real that no one would ever question the legitimacy of what we were showing. No-filter, behind-the-veil type thinking in every shot.

Dopesick Nation

How does your process change when shooting a documentary versus a fictional piece? Or does it not?
Story is king, and I’d say character arcs for the feature American Relapse were different from the TV version. In the film, we gave an overview of the treatment industry told through the eyes of our two main characters, Allie and Frank. It is structured somewhat around their typical day and sit-down interviews.

The TV show did not have formal interviews but did allow us to dig deeper into accounts from individuals with addiction, the world they live in and the hosts themselves. The 10 one-hour episodes and three-plus months spent shooting gave us a little more time to build up a library of transition pieces and specialty b-roll.

Where was it shot?
Almost all of the shooting took place in and around southeast Florida. A few short scenes were picked up in Ohio and LA.

How did you go about choosing the right camera and lenses for this project? Can you talk about camera tests?
It’s funny because Mike and I both independently came up with using the Panasonic VariCam LT after the director came to us asking what we wanted to shoot with. We chatted and decided that we needed solutions for potentially tougher nighttime setups than we were used to. When we gathered for a meeting and started on the gear list, Mike and I both had the LT at the top of our requests.

Dopesick Nation

I think that signaled to the preproduction team that we were unanimous on the best system to use, and production manager Keith Plant made it happen. I had seen the camera in action at NAB and watched some tests a friend had shot on it a few months before. I was easily sold on its rich blacks and dual native ISO. That camera could see into the dark and wasn’t so heavy that we would collapse at the end of the day; it worked out very well.

Can you talk about the lighting and how the camera worked while grabbing shots when you could?
Lighting on this show was minimal, but we did use fills and background illumination to enhance some scenes. Working mostly at night — in dubious surroundings — often meant we couldn’t light scenes. Lights bring unwanted attention to the crew and subjects, and we found it changed the feel of the scene in a negative way.

Using the available light at each location quickly became fundamentally important to maintain the unfiltered nature of the show. Every bright spot in the darkness was carefully considered, and if we could pull subjects slightly toward a source, even to get a third of a stop more, we would take it.

Any scenes that you are particularly proud of or found most challenging?
There were a lot of scenes that were challenging to shoot technically, but that happens on any project. You don’t always want to see what you are standing next to, but the story needs to be told. There are a lot of people out there really struggling with addiction, and it can be really painful to watch. Being present with everyone and being real with them had to be in your mind constantly. I kept thinking the whole time, “Am I doing them justice? What can I do better? How can I help?”

DPs Mike Goodman and Greg Taylor shoot Ally interviewing one of the subjects (Photo by Tara Sarzen)

Let’s move on to some more general questions. How did you become interested in cinematography?
I’ve always loved working with celluloid and photography and was brought up with a darkroom in the house. I remember taking a filmmaking summer camp when I was 14 in Oxford, Mississippi, and was basically blown away. I’ve been aiming for a career in cinematography ever since.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
Artistically, I love Dali, Picasso and the works of Caravaggio and Rembrandt. The way light plays in a chiaroscuro painting is really something worth studying and it isn’t easy to replicate.

I like to try and pay homage to the films I enjoy and the artworks I’ve seen by incorporating some of their ideas into my own work. With film cameras, things changed more slowly over the years, and it was often the film stock that became the technological advancement of its day. Granular structures turned to crystal structures, higher ISO/ASA ratings were achieved, color reproduction improved. The same is true of the new camera systems coming out. Sensors are the new film stock. You pick what is appropriate to the story.

What new technology has changed the way you work?
I rarely go anywhere nowadays without a drone. The advancements in drone technology have changed the aerial world entirely, and I’m happy to see these new angles open up in an increasingly responsible and licensed way.

DP Greg Taylor shooting in SE Florida. (Photo by Evan Parquette)

Gimbals are a game changer in the way the Steadicam came onto the scene, and I don’t expect them to go anywhere. Also motion-control devices and newer, more sensitive sensors are certainly fitting the bill of ever-evolving and improving tech.

What are some of your best practices or rules you try to follow on each job?
Be aware of and attentive to your surroundings and safety. Treat others with respect. Maintain a professional attitude under stress. If you are five minutes early, you’re late.

Explain your ideal collaboration with the director when setting the look of a project.
I love discussing what the heart of the script or concept really means and trying to find the deeper connection with how it can be told visually. Referencing other films/art/TV we both have experience with and finding a common language that makes sense for the vision.

What’s your go-to gear — things you can’t live without?
I have an old Nikkor 55mm f/1.2 lens I love, and I often shoot personal projects on vintage prime glass. The edges aren’t quite as sharp as on modern lenses, so in the case of the 55mm you get a lovely yet subtle sharpness vignette along with a warm overall feel.

It’s great for interviews because it softens the digital crispness newer sensors exhibit without the noticeable changes you might see with certain filtration. The Hip Shot belt has been one of my best friends over the past while, and it saves you on the long days and low, long dialogue scenes when handholding seated subjects.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Harbor expands to LA and London, grows in NY

New York-based Harbor has expanded into Los Angeles and London and has added staff and locations in New York. Industry veteran Russ Robertson joins Harbor’s new Los Angeles operation as EVP of sales, features and episodic after a 20-year career with Deluxe and Panavision. Commercial director James Corless and operations director Thom Berryman will spearhead Harbor’s new UK presence following careers with Pinewood Studios, where they supported clients such as Disney, Netflix, Paramount, Sony, Marvel and Lucasfilm.

Harbor’s LA-based talent pool includes color grading from Yvan Lucas, Elodie Ichter, Katie Jordan and Billy Hobson. Some of the team’s projects include Once Upon a Time … in Hollywood, The Irishman, The Hunger Games, The Maze Runner, Maleficent, The Wolf of Wall Street, Snow White and the Huntsman and Rise of the Planet of the Apes.

Paul O’Shea, formerly of MPC Los Angeles, heads the visual effects teams, tapping lead CG artist Yuichiro Yamashita for 3D out of Harbor’s Santa Monica facility and 2D creative director Q Choi out of Harbor’s New York office. The VFX artists have worked with brands such as Nike, McDonald’s, Coke, Adidas and Samsung.

Harbor’s Los Angeles studio supports five grading theaters for feature film, episodic and commercial productions, offering private connectivity to Harbor NY and Harbor UK, with realtime color-grading sessions, VFX reviews and options to conform and final-deliver in any location.

The new UK operation, based out of London and Windsor, will offer in-lab and near-set dailies services along with automated VFX pulls and delivery through Harbor’s Anchor system. The UK locations will draw from Harbor’s US talent pool.

Meanwhile, the New York operation has grown its talent roster and Soho footprint to six locations, with a recently expanded offering for creative advertising. Veteran artists on the commercial team include editors Bruce Ashley and Paul Kelly, VFX supervisor Andrew Granelli, colorist Adrian Seery, and sound mixers Mark Turrigiano and Steve Perski.

Harbor’s feature and episodic offering continues to expand, with NYC-based artists available in Los Angeles and London.

Goosing the sound for Allstate’s action-packed ‘Mayhem’ spots

By Jennifer Walden

While there are some commercials you’d rather not hear, there are some you actually want to turn up, like those of Leo Burnett Worldwide’s “Mayhem” campaign for Allstate Insurance.

John Binder

The action-packed and devilishly hilarious ads have been going strong since April 2010. Mayhem (played by actor Dean Winters) is a mischievous guy who goes around breaking things that cut-rate insurance won’t cover. Fond of your patio furniture? Too bad for all that wind! Been meaning to fix that broken front porch step? Too bad the dog walker just hurt himself on it! Parked your car in the driveway and now it’s stolen? Too bad — and the thief hit your mailbox and motorcycle too!

Leo Burnett Worldwide’s go-to for “Mayhem” is award-winning post sound house Another Country, based in Chicago and Detroit. Sound designer/mixer John Binder (partner of Cutters Studios and managing director of Another Country) has worked on every single “Mayhem” spot to date. Here, he talks about his work on the latest batch: Overly Confident Dog Walker, Car Thief and Bunch of Wind. And Binder shares insight on a few of his favorites over the years.

In Overly Confident Dog Walker, Mayhem is walking an overwhelming number of dogs. He can barely see where he’s walking. As he’s going up the front stairs of a house, a brick comes loose, causing Mayhem to fall and hit his head. As Mayhem delivers his message, one of the dogs comes over and licks Mayhem’s injury.

Overly Confident Dog Walker

Sound-wise, what were some of your challenges or unique opportunities for sound on this spot?
A lot of these “Mayhem” spots have the guy put in ridiculous situations. There’s often a lot of noise happening during production, so we have to do a lot of cleanup in post using iZotope RX 7. When we can’t get the production dialogue to sound intelligible, we hook up with a studio in New York to record ADR with Dean Winters. For this spot, we had to ADR quite a bit of his dialogue while he was walking the dogs.

For the dog sounds, I added my own dog in there. I recorded his panting (he pants a lot), the dog chain and straining sounds. I also recorded his licking for the end of the spot.

For when Mayhem falls and hits his head, we had a really great sound for him hitting the brick. It was wonderful. But we sent it to the networks, and they felt it was too violent. They said they couldn’t air it because of both the visual and the sound. So, instead of changing the visuals, it was easier to change the sound of his head hitting the brick step. We had to tone it down. It’s neutered.

What’s one sound tool that helped you out on Overly Confident Dog Walker?
In general, there’s often a lot of noise from location in these spots. So we’re cleaning that up. iZotope RX 7 is key!


In Bunch of Wind, Mayhem represents a windy rainstorm. He lifts the patio umbrella and hurls it through the picture window. A massive tree falls on the deck behind him. After Mayhem delivers his message, he knocks over the outdoor patio heater, which smashes on the deck.

Bunch of Wind

Sound-wise, what were some of your challenges or unique opportunities for sound on Bunch of Wind?
What a nightmare for production sound. This one, understandably, was all ADR. We did a lot of Foley work, too, for the destruction to make it feel natural. If I’m doing my job right, then nobody notices what I do. When we’re with Mayhem in the storm, all that sound was replaced. There was nothing from production there. So, the rain, the umbrella flapping, the plate-glass window, the tree and the patio heater, that was all created in post sound.

I had to build up the storm every time we cut to Mayhem. When we see him through the phone, it’s filtered with EQ. As we cut back and forth between on-scene and through the phone, it had to build each time we’re back on him. It had to get more intense.

What are some sound tools that helped you put the ADR into the space on screen?
Sonnox’s Oxford EQ helped on this one. That’s a good plugin. I also used Audio Ease’s Altiverb, which is really good for matching ambiences.


In Car Thief, Mayhem steals cars. He walks up onto a porch, grabs a decorative flagpole and uses it to smash the driver-side window of a car parked in the driveway. Mayhem then hot-wires the car and peels out, hitting a motorcycle and mailbox as he flees the scene.

Car Thief

Sound-wise, what were some of your challenges or unique opportunities for sound on Car Thief?
The location sound team did a great job of miking the car window break. When Mayhem puts the wooden flagpole through the car window, they really did that on-set, and the sound team captured it perfectly. It’s amazing. If you hear safety glass break, it’s not like a glass shatter. It has this texture to it. The car window break was the location sound, which I loved. I saved the sound for future reference.

What’s one sound tool that helped you out on Car Thief?
Jeff, the car owner in the spot, is at a sports game. You can hear the stadium announcer behind him. I used Altiverb on the stadium announcer’s line to help bring that out.

What have been your all-time favorite “Mayhem” spots in terms of sound?
I’ve been on this campaign since the start, so I have a few. There’s one called Mayhem is Coming! that was pretty cool. I did a lot of sound design work on the extended key scrape against the car door. Mayhem is in an underground parking garage, and so the key scrape reverberates through that space as he’s walking away.

Deer

Another favorite is Fast Food Trash Bag. The edit of that spot was excellent; the timing was so tight. Just when you think you’ve got the joke, there’s another joke and another. I used the Sound Ideas library for the bear sounds. And for the sound of Mayhem getting dragged under the cars, I can’t remember how I created that, but it’s so good. I had a lot of fun playing perspective on this one.

Often on these spots, the sounds we used were too violent, so we had to tone them down. On the first campaign, there was a spot called Deer. There’s a shot of Mayhem getting hit by a car as he’s standing there on the road like a deer in headlights. I had an excellent sound for that, but it was deemed too violent by the network.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Rob Legato to receive HPA’s Lifetime Achievement Award 

The Hollywood Professional Association (HPA) will honor renowned visual effects supervisor and creative Robert Legato with its Lifetime Achievement Award at the HPA Awards at the Skirball Cultural Center in Los Angeles on November 21. Now in its 14th year, the HPA Awards recognize creative artistry, innovation and engineering excellence in the media content industry. The Lifetime Achievement Award honors the recipients’ dedication to the betterment of the industry.

Legato is an iconic figure in the visual effects industry with multiple Oscar, BAFTA and Visual Effects Society nominations and awards to his credit. He is a multi-hyphenate on many of his projects, serving as visual effects supervisor, VFX director of photography and second unit director. From his work with studios and directors and in his roles at Sony Pictures Imageworks and Digital Domain, he has developed a variety of digital workflows.

He has enjoyed collaborations with leading directors including James Cameron, Jon Favreau, Martin Scorsese and Robert Zemeckis. Legato’s career in VFX began in television at Paramount Pictures, where he supervised visual effects on two Star Trek series, which earned him two Emmy awards. He left Paramount to join the newly formed Digital Domain where he worked with founders James Cameron, Stan Winston and Scott Ross. He remained at Digital Domain until he segued to Sony Imageworks.

Legato began his feature VFX career on Neil Jordan’s Interview with the Vampire. He then served as VFX supervisor and DP for the VFX unit on Ron Howard’s Apollo 13, which earned him his first Academy Award nomination and a win at the BAFTAs. His work with James Cameron on Titanic earned him his first Academy Award. Legato continued to work with Cameron, conceiving and creating the virtual cinematography pipeline for Cameron’s visionary Avatar.

Legato has also enjoyed a long collaboration with Martin Scorsese that began with his consultation on Kundun and continued with the multi-award-winning film The Aviator, on which he served as co-second unit director/cameraman and VFX supervisor. Legato’s work on The Aviator won him three VES Awards. He returned to work with the director on the Oscar Best Picture winner The Departed as second unit director/cameraman and VFX supervisor. Legato and Scorsese collaborated once again on Shutter Island, on which he was both VFX supervisor and second unit director/cameraman. He continued on to Scorsese’s 3D film Hugo, which was nominated for 11 Oscars and 11 BAFTAs, including Best Picture and Best Visual Effects. Legato won his second Oscar for Hugo, as well as three more VES Awards. His collaboration with Scorsese continued with The Wolf of Wall Street as well as with non-theatrical and advertising projects such as the Clio award-winning Freixenet: The Key to Reserva, a 10-minute commercial project, and the Rolling Stones feature documentary, Shine a Light.

Legato worked with director Jon Favreau on Disney’s The Jungle Book (second unit director/cinematographer and VFX supervisor) for which he received his third Academy Award, a British Academy Award, five VES Awards, an HPA Award and the Critics’ Choice Award for Best Visual Effects for 2016. His latest film with Favreau is Disney’s The Lion King, which surpassed $1 billion in box office after fewer than three weeks in theaters.

Legato’s extensive credits include serving as VFX supervisor on Chris Columbus’ Harry Potter and the Sorcerer’s Stone, as well as on two Robert Zemeckis films, What Lies Beneath and Cast Away. He was senior VFX supervisor on Michael Bay’s Bad Boys II, which was nominated for a VES Award for Outstanding Supporting Visual Effects, and for Digital Domain he worked on Bay’s Armageddon.

Legato is a member of ASC, BAFTA, DGA, AMPAS, VES, and the Local 600 and Local 700 unions.

GLOW’s DP and colorist adapt look of new season for Vegas setting

By Adrian Pennington

Netflix’s Gorgeous Ladies of Wrestling (GLOW) are back in the ring for a third round of the dramatic comedy, but this time the girls are in Las Vegas. The glitz and glamour of Sin City seems tailor-made for the 1980s-set GLOW and provided the main creative challenge for Season 3 cinematographer Chris Teague (Russian Doll, Broad City).

DP Chris Teague

“Early on, I met with Christian Sprenger, who shot the first season and designed the initial look,” says Teague, who was recently nominated for an Emmy for his work on Russian Doll. “We still want GLOW to feel like GLOW, but the story and character arc of Season 3 and the new setting led us to build on the look and evolve elements like lighting and dynamic range.”

The GLOW team is headlining at the Fan-Tan Hotel & Casino, one of two main sets built for the series (the other being the hotel), featuring the distinctive Vegas skyline as a backdrop.

“We discussed compositing actors against greenscreen, but that would have turned every shot into a VFX shot and would have been too costly, not to mention time-intensive on a TV schedule like ours,” he says. “Plus, working with a backdrop just felt aesthetically right.”

In that vein, production designer Todd Fjelsted built a skyline using miniatures, a creative decision in keeping with the handcrafted look of the show. That decision, though, required extensive testing of lenses, lighting and look prior to shooting. This testing was done in partnership with post house Light Iron.

“There was no overall shift in the look of the show, but together with Light Iron, we felt the baseline LUT needed to be built on, particularly in terms of how we lit the sets,” explains Teague.

“Chris was clear early on that he wanted to build upon the look of the first two seasons,” says Light Iron colorist Ian Vertovec. “We adjusted the LUT to hold a little more color in the highlights than in past seasons. Originally, the LUT was based on a film emulation and adjusted for HDR. In Season 1, we created a period film look and transformed it for HDR to get a hybrid film emulation LUT. For Season 3, for HDR and standard viewing, we made tweaks to the LUT so that some of the colors would pop more.”

The show was also finished in Dolby Vision HDR. “There was some initial concern about working with backdrops and stages in HDR,” Teague says. “We are used to the way film treats color over its exposure range — it tends to desaturate as it gets more overexposed — whereas HDR holds a lot more color information in overexposure. However, Ian showed how it can be a creative tool.”
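
Neither Light Iron’s LUT math nor the Dolby Vision transforms are public, but the film behavior Teague describes can be modeled as a saturation rolloff driven by luminance past a knee point. The sketch below is purely illustrative; the knee and strength values, and the function itself, are assumptions rather than anything from the show’s actual pipeline.

```python
def film_style_desat(rgb, knee=0.8, strength=1.0):
    """Toy model of film's desaturating highlights: blend a pixel
    toward its own luminance once it rises past a knee. The knee
    and strength values are arbitrary assumptions."""
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luma weights
    if luma <= knee:
        return rgb
    t = min((luma - knee) / (1.0 - knee), 1.0) * strength
    return tuple(c + (luma - c) * t for c in rgb)

# strength=1.0 washes hot pixels toward white, like film;
# strength=0.0 keeps full chroma in overexposure, as HDR allows.
print(film_style_desat((1.5, 1.0, 0.8)))
```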

Colorist Ian Vertovec

“The goal was to get the 1980s buildings in the background and out the hotel windows to look real — emulating marquees with flashing lights,” adds Vertovec. “We also needed it to be a believable Nevada sky and skyline. Skies and clouds look different in HDR. So, when dialing this in, we discussed how they wanted it to look. Did it feel real? Is the sky in this scene too blue? Information from testing informed production, so everything was geared toward these looks.”

“Ian has been on the first two seasons, so he knows the look inside and out and has a great eye,” Teague continues. “It’s nice to come into a room and have his point of view. Sometimes when you are staring at images all day, it’s easy to lose your objectivity, so I relied on Ian’s insight.” Vertovec grades the show on FilmLight’s Baselight.

As with Season 2, GLOW Season 3 was a Red Helium shoot using Red’s IPP2 color pipeline in conjunction with Vertovec’s custom LUTs all the way through post. Teague shot at full 8K resolution to accommodate his choice of Cooke anamorphic lenses, desqueezed and finished in a 2:1 ratio.

“For dailies I used an iPad with Moxion, which is perhaps the best dailies viewing platform I’ve ever worked with. I feel like the color is more accurate than other platforms, which is extremely useful for checking out contrast and shadow level. Too many times with dailies you get blacks washed out and highlights blown and you can’t judge anything critical.”

Teague sat in on the grade of the first three of the 10 episodes and then used the app to pull stills and make notes remotely. “With Ian I felt like we were both on the same page. We also had a great DIT [Peter Brunet] who was doing on-set grading for reference and was able to dial in things at a much higher level than I’ve been able to do in the past.”

The most challenging but also rewarding work was shooting the wrestling performances. “We wanted to do something that felt a little bigger, more polished, more theatrical,” Teague says. “The performance space had tiered seating, which gave us challenges and options in terms of moving the cameras. For example, we could use telescoping crane work to reach across the room and draw characters in as they enter the wrestling ring.”

He commends gaffer Eric Sagot for inspiring lighting cues and building them into the performance. “The wrestling scenes were the hardest to shoot but they’re exciting to watch — dynamic, cinematic and deliberately a little hokey in true ‘80s Vegas style.”


Adrian Pennington is a UK-based journalist, editor and commentator in the film and TV production space. He has co-written a book on stereoscopic 3D and edited several publications.

Review: iZotope’s Neutron 3 Advanced with Mix Assistant

By Tim Wembly

iZotope has been doing more to elevate and simplify the workflows of this generation’s audio pros than any of its competitors. It’s a bold statement, but I stand behind it. From its range of audio restoration tools within RX to its measurement and visualization tools in Ozone to its creative approach to VST effects and instruments like Iris, Breaktweaker and DDLY, iZotope has shown time and time again that it knows what audio post pros need.

iZotope breaks its products out into Essential, Standard and Advanced tiers aimed at different levels of professionalism. This lowers the barrier to entry for users who can’t rationalize the Advanced price tag but still want some of its features. In the newest edition of Neutron 3 Advanced, iZotope has added a tool that might make the extra investment a little more attractive. It’s called Mix Assistant, and for some users this feature will cut session prep time considerably.

iZotope Neutron 3 Advanced ($279) is a collection of six modules — Sculptor, Exciter, Transient Shaper, Gate, Compressor and Equalizer — aimed at making the mix process less of a daunting technical task and more of a fun, creative endeavor. In addition to the modules, there is the new Mix Assistant, which has two modes: Track Enhance and Balance. Track Enhance analyzes a track’s audio content and, based on the instrument profile you select, uses its modules to make the track sound like the best version of that instrument. This can be useful if you don’t want to spend time tweaking the sound of an instrument to get it to sound like itself. The philosophy behind the feature seems to be that the creative energy you would otherwise spend tweaking can be reserved for other tasks that complete your sonic vision.

The Balance mode is a virtual mix prep technician, and for some engineers it will be a revolutionary tool in the preliminary stages of their mix. Using machine learning, it analyzes every track containing iZotope’s Relay plugin and sets a trim gain at the appropriate level based on what you choose as your “Focus.” For example, if you’re mixing an R&B song with a strong vocal, you would choose your main vocal track as your Focus.

Alternatively, if you were mixing a virtuosic guitar song à la Al Di Meola or Santana, you might choose your guitar track as your Focus. Once Neutron analyzes your tracks, it sets the level of each track and then provides you with five groups (Focus, Voice, Bass, Percussion, Musical) that you can further adjust at a macro level. Once you’ve got everything to your preference, you simply click “Accept” and you’re left with a much more manageable session. Depending on your workflow, getting gain staging set up correctly can be an arduous and repetitive task; this tool streamlines and simplifies it considerably.
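
iZotope hasn’t published how Mix Assistant computes its levels, so the following is only a rough analogue of the Balance workflow described above: measure every track, then trim each one relative to the chosen Focus. The RMS loudness metric and the per-group dB offsets are assumptions made for the sketch, not iZotope’s tuning.

```python
import numpy as np

# Assumed target offsets (dB) for each group relative to the Focus.
GROUP_OFFSETS_DB = {"focus": 0.0, "voice": -3.0, "bass": -6.0,
                    "percussion": -6.0, "musical": -9.0}

def rms_dbfs(samples: np.ndarray) -> float:
    """Crude loudness estimate: RMS level in dBFS."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(float(rms), 1e-9))

def balance(tracks: dict, groups: dict, focus: str) -> dict:
    """Return a trim gain (dB) per track so each group sits at a
    fixed offset below the measured level of the Focus track."""
    focus_level = rms_dbfs(tracks[focus])
    trims = {}
    for name, samples in tracks.items():
        target = focus_level + GROUP_OFFSETS_DB[groups[name]]
        trims[name] = target - rms_dbfs(samples)
    return trims
```

In the plugin itself the measurement comes from Relay instances on each track, and the five groups remain adjustable after analysis; here they are fixed inputs to keep the sketch short.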

As you may have noticed, the categories you’re given in the penultimate step of the process target engineers mixing a music session. Since this is a giant portion of the market, it makes sense that the geniuses over at iZotope give people mixing music their attention, but that doesn’t mean you can’t use Neutron for other post audio scenarios.

For example, if someone delivers a commercial with stems for music, a VO track and several sound effect tracks, you can still use the Balance feature; you’ll just have to be a little creative with how you classify each track. Perhaps you can set the VO as your Focus and divide the sound effects among the other categories as you see fit, considering their timbre.

Since this process happens at the beginning of the mix, you are left with a session whose gain staging is already prepped, so you can start making creative decisions. You can still tweak to your heart’s content; you’ll just have one of the more time-intensive processes simplified considerably. Neutron 3 Advanced is available from iZotope.


Tim Wembly is an audio post pro and connoisseur of fine and obscure cheeses working at New York City’s Silver Sound Studios.

Digital Arts expands team, adds Nutmeg Creative talent

Digital Arts, an independently owned New York-based post house, has added several former Nutmeg Creative talent and production staff members to its roster — senior producer Lauren Boyle, sound designer/mixers Brian Beatrice and Frank Verderosa, colorist Gary Scarpulla, finishing editor/technical engineer Mark Spano and director of production Brian Donnelly.

“Growth of talent, technology, and services has always been part of the long-term strategy for Digital Arts, and we’re fortunate to welcome some extraordinary new talent to our staff,” says Digital Arts owner Axel Ericson. “Whether it’s long-form content for film and television, or working with today’s leading agencies and brands creating dynamic content, we have the talent and technology to make all of our clients’ work engaging, and our enhanced services bring their creative vision to fruition.”

Brian Donnelly, Lauren Boyle and Mark Spano.

As part of this expansion, Digital Arts will unveil additional infrastructure featuring an ADR stage/mix room. The current facility boasts several state-of-the-art audio suites, a 4K finishing theater/mixing dubstage, four color/finishing suites and expansive editorial and production space, which is spread over four floors.

The former Nutmeg team has hit the ground running, working with their long-time ad agency, network, animation and film studio clients. Gary Scarpulla worked on color for HBO’s Veep and Los Espookys, while Frank Verderosa has been working with agency Ogilvy on several Ikea campaigns. Beatrice mixed spots for Tom Ford’s cosmetics line.

In addition, Digital Arts’ in-house theater/mixing stage has proven to be a valuable resource for some of the most popular TV productions, including recording recent commentary sessions for the legendary HBO series Game of Thrones and the final season of Veep.

Especially noteworthy is the collaboration of colorist Ericson and finishing editor Mark Spano with Oscar-nominated directors Karim Amer and Jehane Noujaim to bring to fruition the Netflix documentary The Great Hack.

Digital Arts also recently expanded its offerings to include production services. The company has already delivered projects for agencies Area 23, FCB Health and TCA.

“Digital Arts’ existing infrastructure was ideally suited to leverage itself into end-to-end production,” Donnelly says. “Now we can deliver from shoot to post.”

Tools employed across post include Avid Pro Tools with D-Control ES and S3 consoles for audio, and Avid Media Composer, Adobe Premiere and Blackmagic DaVinci Resolve for editing. Color grading is via Resolve.

Main Image: (L-R) Frank Verderosa, Brian Beatrice and Gary Scarpulla

 

Cabin adds two editors, promotes another

LA-based editorial studio Cabin Editing Company has grown its editing staff with the addition of Greg Scruton and Debbie Berman. The studio has also promoted Scott Butzer to editor. The trio will work on commercials, music videos, branded content and other short-form projects.

Scruton, who joins Cabin from Arcade Edit, has worked on dozens of high-profile commercials and music videos throughout his career, including Pepsi’s 2019 Grammys spot Okurrr, starring Cardi B; Palms Casino Resort’s star-filled Unstatus Quo; and Kendrick Lamar’s iconic Humble music video, for which he earned an AICE Award. Scruton has worked with high-profile ad agencies and directors, including Anomaly; Wieden + Kennedy; 72andSunny; Goodby, Silverstein & Partners; Dave Meyers; and Nadia Lee Cohen. He uses Avid Media Composer and Adobe Premiere.

Feature film editor Berman joins Cabin on the heels of her successful run with Marvel Studios, having recently served as an editor on Spider-Man: Homecoming, Black Panther and Captain Marvel. Her work extends across mediums, with experience editing everything from PSAs and documentaries to animated features. Now expanding her commercial portfolio with Cabin, Berman is currently at work on a Toyota campaign through Saatchi & Saatchi. She will continue to work in features as well. She mostly uses Media Composer but can also work on Premiere.

Cabin’s Butzer was recently promoted to editor after joining the company in 2017 and honing his talent across many platforms, including commercials, music videos and documentaries. His strengths include narrative and automotive work. Recent credits include Every Day Is Your Day for Gatorade celebrating the 2019 Women’s World Cup, The Professor for Mercedes Benz and Vince Staples’ Fun! music video. Butzer has worked with ad agencies and directors, including TBWA\Chiat\Day; Wieden + Kennedy; Goodby, Silverstein & Partners; Team One; Marcus Sonderland; Ryan Booth; and Rachel McDonald. Butzer previously held editorial positions at Final Cut and Whitehouse Post. He studied film at the University of Colorado at Boulder. He also uses Media Composer and Premiere.

Signiant update simplifies secure content exchanges

Signiant will be at IBC this year showing off new capabilities of its SDCX (Software-Defined Content Exchange) SaaS platform, designed to simplify secure content exchange between companies. These capabilities will appear first in the company’s newest product, Signiant Jet, which makes it easy to automate and accelerate the transfer of large files between geographically dispersed locations.

Targeted at “lights-out” use cases, Jet meets the growing need to replace scripted FTP and legacy transfer tools with a faster, more reliable and more secure alternative. First introduced at the 2019 NAB Show, Jet is built on Signiant’s SDCX SaaS platform, which also underpins the company’s widely deployed Media Shuttle solution for sending and sharing large files around the world.

At IBC, Jet will include the new content exchange capabilities, offering a secure cloud handshake mechanism that simplifies intercompany transfers. The new functionality enables Jet customers to make storage endpoints private, discoverable to all or discoverable only to select partners in their supply chain. Via a secure web interface, companies can request a connection with a partner. Once both sides accept, specific jobs can be configured and mutually approved to allow for secure, automated transfers between the companies. Jet’s predictable pricing model makes it accessible to companies of all sizes and enables easy cost sharing for intercompany content exchange.
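
Signiant hasn’t published a public API for this mechanism, but the flow described above (endpoints that are private, discoverable to all, or visible only to select partners, plus a request/accept step before any job can run) maps onto a small state machine. Every class and method name below is hypothetical, included only to illustrate the sequence.

```python
from enum import Enum, auto

class Visibility(Enum):
    PRIVATE = auto()          # hidden from all partners
    DISCOVERABLE = auto()     # visible to everyone on the platform
    SELECT_PARTNERS = auto()  # visible only to named partners

class Endpoint:
    """Hypothetical model of a storage endpoint in a content exchange."""
    def __init__(self, owner, visibility=Visibility.PRIVATE, partners=()):
        self.owner = owner
        self.visibility = visibility
        self.partners = set(partners)

    def discoverable_by(self, company):
        if self.visibility is Visibility.DISCOVERABLE:
            return True
        return (self.visibility is Visibility.SELECT_PARTNERS
                and company in self.partners)

class Exchange:
    """Jobs between two companies require a connection that has been
    requested and then accepted: the 'secure handshake' step."""
    def __init__(self):
        self.requested = set()
        self.accepted = set()

    def request(self, requester, target):
        self.requested.add((requester, target))

    def accept(self, requester, target):
        if (requester, target) in self.requested:
            self.accepted.add((requester, target))

    def can_configure_job(self, requester, target):
        return (requester, target) in self.accepted
```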

Behind the Title: MPC Senior Compositor Ruairi Twohig

After studying hand-drawn animation, this artist found his way to visual effects.

NAME: NYC-based Ruairi Twohig

COMPANY: Moving Picture Company (MPC)

CAN YOU DESCRIBE YOUR COMPANY?
MPC is a global creative and visual effects studio with locations in London, Los Angeles, New York, Shanghai, Paris, Bangalore and Amsterdam. We work with clients and brands across a range of different industries, handling everything from original ideas through to finished production.

WHAT’S YOUR JOB TITLE?
I work as a 2D lead/senior compositor.

Cadillac

WHAT DOES THAT ENTAIL?
The tasks and responsibilities can vary depending on the project. My involvement with a project can begin before there’s even a script or storyboard, and we need to estimate how much VFX will be involved and how long it will take. As the project develops and the direction becomes clearer, with scripts and storyboards and concept art, we refine this estimate and schedule and work with our clients to plan the shoot and make sure we have all the information and assets we need.

Once the commercial is shot and we have an edit, the bulk of the post work begins. This can involve anything from compositing fully CG environments, dragons or spaceships to beauty and product/pack-shot touch-ups or rig removal. So, my role involves a combination of overall project management and planning, but I also get into the detailed shot work and ultimately delivering the final picture. The majority of the work I do requires a large team of people with different specializations, and those are usually the projects I find the most fun and rewarding due to the collaborative nature of the work.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think the variety of the work would surprise most people unfamiliar with the industry. In a single day, I could be working on two or three completely different commercials with completely different challenges while also bidding future projects or reviewing prep work in the early stages of a current project.

HOW LONG HAVE YOU BEEN WORKING IN VFX?
I’ve been working in the industry for over 10 years.

HOW HAS THE VFX INDUSTRY CHANGED IN THE TIME YOU’VE BEEN WORKING?
The VFX industry is always changing. I find it exciting to see how quickly the technology is advancing and becoming more widely accessible, cost-effective and faster.

I still find it hard to comprehend the idea of using optical printers for VFX back in the day … before my time. Some of the most interesting areas for me at the moment are the developments in realtime rendering from engines such as Unreal and Unity, and the implementation of AI/machine learning tools that might be able to automate some of the more time-consuming tasks in the future.

DID A PARTICULAR FILM INSPIRE YOU ALONG THIS PATH IN ENTERTAINMENT?
I remember when I was 13, my older brother — who was studying architecture at the time — introduced me to 3ds Max, and I started playing around with some very simple modeling and rendering.

I would buy these monthly magazines like 3D World, which came with demo discs for different software and some CG animation compilations. One of the issues included the short CG film Fallen Art by Tomek Baginski. At the time I was mostly familiar with Pixar’s feature animation work like Toy Story and A Bug’s Life, so watching this short film created using similar techniques but with such a dark, mature tone and story really blew me away. It was this film that inspired me to pursue animation and, ultimately, visual effects.

DID YOU GO TO FILM SCHOOL?
I studied traditional hand-drawn animation at the Dun Laoghaire Institute of Art, Design and Technology in Dublin. This was a really fun course in which we spent the first two years focusing on the craft of animation and the fundamental principles of art and design, followed by another two years in which we had a lot of freedom to make our own films. It was during these final two years of experimentation that I started to move away from traditional animation and focus more on learning CG and VFX.

I really owe a lot to my tutors, who were really supportive during that time. I also had the opportunity to learn from visiting animation masters such as Andreas Deja, Eric Goldberg and John Canemaker. Although on the surface the work I do as a compositor is very different to animation, understanding those fundamental principles has really helped my compositing work; any additional disciplines or skills you develop in your career that require an eye for detail and aesthetics will always make you a better overall artist.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Even after 10 years in the industry, I still get satisfaction from the problem-solving aspect of the job, even on the smaller tasks. I love getting involved on the more creative projects, where I have the freedom to develop the “look” of the commercial/film. But, day to day, it’s really the team-based nature of the work that keeps me going. Working with other artists, producers, directors and clients to make a project look great is what I find really enjoyable.

WHAT’S YOUR LEAST FAVORITE?
Sometimes even if everything is planned and scheduled accordingly, a little hiccup along the way can easily impact a project, especially on jobs where you might only have a limited amount of time to get the work done. So it’s always important to work in such a way that allows you to adapt to sudden changes.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I used to draw all day, every day as a kid. I still sketch occasionally, but maybe I would have pursued a more traditional fine art or illustration career if I hadn’t found VFX.

Tiffany & Co.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Over the past year, I’ve worked on projects for clients such as Facebook, Adidas, Samsung and Verizon. I also worked on the Tiffany & Co. campaign “Believe in Dreams” directed by Francis Lawrence, as well as the company’s holiday campaign directed by Mark Romanek.

I also worked on Cadillac’s “Rise Above” campaign for the 2019 Oscars, which was challenging since we had to deliver four spots within a short timeframe. But it was a fun project. There was also the Michelob Ultra Robots Super Bowl spot earlier this year. That was an interesting project, as the work was completed between our LA, New York and London studios.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Last year, I had the chance to work with my friend and director Sofia Astrom on the music video for the song “Bone Dry” by Eels. It was an interesting project since I’d never done visual effects for a stop-motion animation before. This had its own challenges, and the style of the piece was very different compared to what I’m used to working on day to day. It had a much more handmade feel to it, and the visual effects design had to reflect that, which was such a change to the work I usually do in commercials, which generally leans more toward photorealistic visual effects work.

WHAT TOOLS DO YOU USE DAY TO DAY?
I mostly work with Foundry Nuke for shot compositing. When leading a job that requires a broad overview of the project and timeline management/editorial tasks, I use Nuke Studio or Autodesk Flame, depending on the requirements of the project. I also use ftrack daily for project management.

WHERE DO YOU FIND INSPIRATION NOW?
I follow a lot of incredibly talented concept artists and photographers/filmmakers on Instagram. Viewing these images/videos on a tiny phone doesn’t always do justice to the work, but the platform is so active that it’s a great resource for inspiration and finding new artists.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I like to run and cycle around the city when I can. During the week it can be easy to get stuck in a routine of sitting in front of a screen, so getting out and about is a much-needed break for me.

Blackmagic: Resolve 16.1 in public beta, updates Pocket Cinema Camera

Blackmagic Design has announced DaVinci Resolve 16.1, an updated version of its edit, color, visual effects and audio post software that features updates to the new cut page, further speeding up the editing process.

With Resolve 16, introduced at NAB 2019, now in final release, the Resolve 16.1 public beta is available for download from the Blackmagic Design website. The public beta lets Blackmagic continue to develop new ideas while collaborating with users to ensure those ideas are refined for real-world workflows.

The Resolve 16.1 public beta features changes to the bin that now make it possible to place media in various folders and isolate clips from being used when viewing them in the source tape, sync bin or sync window. Clips will appear in all folders below the current level, and as users navigate around the levels in the bin, the source tape will reconfigure in real time. There’s even a menu for directly selecting folders in a user’s project.
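
One way to picture that behavior: the source tape gathers clips from the current bin folder and every folder below it, skipping anything the editor has isolated. The sketch below uses hypothetical types, not Resolve’s actual scripting API.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    name: str
    isolated: bool = False  # flagged to be left out of the source tape

@dataclass
class Folder:
    clips: list = field(default_factory=list)
    subfolders: list = field(default_factory=list)

def source_tape(folder: Folder) -> list:
    """Clips a source-tape view would show from this level down,
    skipping isolated clips (illustrative model only)."""
    visible = [c for c in folder.clips if not c.isolated]
    for sub in folder.subfolders:
        visible.extend(source_tape(sub))
    return visible
```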

Also new in this public beta is the smart indicator. The new cut page in DaVinci Resolve 16 introduced multiple new smart features, which work by estimating where the editor wants to add an edit or transition and then applying it without the editor having to waste time placing exact in and out points. The software guesses what the editor wants to do and just does it — it adds the insert edit or transition to the edit point closest to where the editor has placed the CTI.

But a problem can arise in complex edits, where it is hard to know what the software would do and which edit it would place the effect or clip into. That’s the reason for the beta version’s new smart indicator. The smart indicator provides a small marker in the timeline so users get constant feedback and always know where DaVinci Resolve 16.1 will place edits and transitions. The new smart indicator constantly live-updates as the editor moves around the timeline.
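
Blackmagic doesn’t document how the smart features pick their target, but the behavior described reduces to a nearest-neighbor search over the timeline’s cut points, with the indicator drawn at the winner. A minimal sketch under that assumption:

```python
def nearest_edit_point(cut_points, cti):
    """Return the timeline edit point closest to the CTI (playhead).
    This is where a smart insert or transition would land, and where
    a smart-indicator-style marker would be drawn."""
    return min(cut_points, key=lambda t: abs(t - cti))

# Cuts at 0s, 4.2s and 9.8s with the playhead parked at 8.1s:
assert nearest_edit_point([0.0, 4.2, 9.8], 8.1) == 9.8
```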

One of the most common items requested by users was a faster way to cut clips in the timeline, so now DaVinci Resolve 16.1 includes a “cut clip” icon in the user interface. Clicking on it will slice the clips in the timeline at the CTI point.

Multiple changes have also been made to the new DaVinci Resolve Editor Keyboard, including a new adaptive scroll feature on the search dial, which will automatically slow down the jog when editors are hunting for an in point. The live trimming buttons have been renamed to match the labels of the functions in the edit page, and they have been changed to trim in, trim out, transition duration, slip in and slip out. The function keys along the top of the keyboard are now used for various editing functions.

There are additional edit modes on the function keys, allowing users to access more types of edits directly from dedicated keys on the keyboard. There’s also a new transition window on the F4 key, and pressing and rotating the search dial allows instant selection from all the transition types in DaVinci Resolve. Users who need quick picture-in-picture effects can use F5 to apply them instantly.

Sometimes when editing projects with tight deadlines, there is little time to keep replaying the edit to see where it drags. DaVinci Resolve 16.1 features something called a Boring Detector that highlights the timeline where any shot is too long and might be boring for viewers. The Boring Detector can also show jump cuts, where shots are too short. This tool allows editors to reconsider their edits and make changes. The Boring Detector is helpful when using the source tape. In that case, editors can perform many edits without playing the timeline, so the Boring Detector serves as an alternative live source of feedback.
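
Blackmagic hasn’t said how the Boring Detector scores a shot, but flagging clips that run too long, or short enough to read as jump cuts, can be expressed as a single pass over clip durations. The thresholds below are placeholder assumptions:

```python
def flag_clips(durations, boring_after=30.0, jump_cut_under=0.5):
    """Label each clip duration (in seconds) the way a boring
    detector might. Both thresholds are made-up values."""
    labels = []
    for d in durations:
        if d >= boring_after:
            labels.append("boring")
        elif d <= jump_cut_under:
            labels.append("jump cut")
        else:
            labels.append("ok")
    return labels

print(flag_clips([42.0, 3.5, 0.2]))  # ['boring', 'ok', 'jump cut']
```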

Another one of the most requested features of DaVinci Resolve 16.1 is the new sync bin. The sync bin is a digital assistant editor that constantly sorts through thousands of clips to find only what the editor needs and then displays them synced to the point in the timeline the editor is on. The sync bin will show the clips from all cameras on a shoot stacked by camera number. Also, the viewer transforms into a multi-viewer so users can see their options for clips that sync to the shot in the timeline. The sync bin uses date and timecode to find and sync clips, and by using metadata and locking cameras to time of day, users can save time in the edit.

According to Blackmagic, the sync bin changes how multi-camera editing can be completed. Editors can scroll off the end of the timeline and keep adding shots. When using the DaVinci Resolve Editor Keyboard, editors can hold the camera number and rotate the search dial to “live overwrite” the clip into the timeline, making editing faster.

The closeup edit feature has been enhanced in DaVinci Resolve 16.1. It now does face detection and analysis and will zoom the shot based on face positioning to ensure the person is nicely framed.

If pros are using shots from cameras without timecode, the new sync window lets them sort and sync clips from multiple cameras. The sync window supports sync by timecode and can also detect audio and sync clips by sound. These clips will display a sync icon in the media pool so editors can tell which clips are synced and ready for use. Manually syncing clips using the new sync window allows workflows such as multiple action cameras to use new features such as source overwrite editing and the new sync bin.
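Detecting sync by sound, as the sync window does, is classically a cross-correlation problem: slide one clip’s audio against another and take the offset where the waveforms line up best. A rough sketch of the technique with NumPy/SciPy — a generic illustration, not Blackmagic’s algorithm:

```python
import numpy as np
from scipy.signal import correlate

def audio_offset(ref: np.ndarray, other: np.ndarray, sample_rate: int) -> float:
    """Shift, in seconds, to apply to 'other' so it lines up with 'ref'.

    A negative result means 'other' starts late and must slide earlier.
    """
    corr = correlate(ref, other, mode="full", method="fft")
    lag = int(np.argmax(corr)) - (len(other) - 1)
    return lag / sample_rate

# Toy example: the same two clicks, with 'other' recorded 0.5 seconds late.
sr = 48000
ref = np.zeros(2 * sr)
ref[[1000, 30000]] = 1.0
other = np.zeros(2 * sr)
other[[1000 + sr // 2, 30000 + sr // 2]] = 1.0
print(audio_offset(ref, other, sr))  # -0.5
```

Real tools refine this with filtering and confidence scoring, but the correlation peak is the core of any sync-by-sound feature.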

Blackmagic Pocket Cinema Camera
Besides releasing the DaVinci Resolve 16.1 public beta, Blackmagic also updated the Blackmagic Pocket Cinema Camera. Blackmagic not only upgraded the camera from 4K to 6K resolution, but it also changed the mount to the widely used Canon EF style. Previous iterations of the Pocket Cinema Camera used a Micro Four Thirds mount, but many users chose to purchase a Micro Four Thirds-to-Canon EF adapter, which easily runs over $500 new. Because of the mount change in the Pocket Cinema Camera 6K, users can skip the adapter and — if they shoot with Canon EF — use the same lenses.

London’s Cheat expands with color and finishing suites

London-based color and finishing house Cheat has expanded, adding three new grading and finishing suites, a production studio and a client lounge/bar space. Cheat now has four large broadcast color suites and also services two color suites at Jam VFX and No.8 — in Fitzrovia and Soho, respectively — studios with which it has a creative partnership.

Located in the Arthaus building in Hackney, all four of Cheat’s color suites have calibrated projection or broadcast monitoring and are equipped with cutting-edge hardware for HDR and 8K work. Cheat was the first color house to complete an 8K TV series, Netflix’s The End of The F***ing World, in 2017. Having since invested in improved storage and network infrastructure, the facility is well-equipped to take on 8K and HDR projects.

Cheat uses Autodesk Flame for finishing and Blackmagic DaVinci Resolve for color grading.

The new HDR grading suite offers HDR mastering above 2,000 nits with a Flanders Scientific XM310K reference monitor that can master up to 3,000 nits. Cheat is also now a full-fledged Dolby Vision-certified mastering facility.

“Improving client experience was, of course, a key consideration in shaping the design of the renovation,” says Toby Tomkins, founder of Cheat. “The new color suite is our largest yet and comfortably seats up to 10 people. We designed it from the ground up with a raised client platform and a custom-built bias wall. This allows everyone to look at the same single monitor while grading and maintains the spacious and relaxed feel of our other suites. The new lounge and bar also offer a relaxing space for clients to feel at home.”

Dick Wolf’s television empire: his production and post brain trust

By Iain Blair

The TV landscape is full of scripted police procedurals and true crime dramas these days, but the indisputable and legendary king of that crowded landscape is Emmy-winning creator/producer Dick Wolf, whose name has become synonymous with high-quality drama.

Arthur Forney

Since it burst onto the scene back in 1990, his Law & Order show has spawned six dramas and four international spinoffs, while his “Chicago” franchise gave birth to another four series — the hugely popular Chicago Med, Chicago Fire and Chicago P.D., plus Chicago Justice, which was canceled after one season.

Then there are his “FBI” shows, as well as the more documentary-style Cold Justice. If you’ve seen Cold Justice — and you should — you know that this is the real deal, focusing on real crimes. It’s all the more fascinating and addictive because of it.

Produced by Wolf and Magical Elves, the real-life crime series follows veteran prosecutor Kelly Siegler, who gets help from seasoned detectives as they dig into small-town murder cases that have lingered for years without answers or justice for the victims. Together with local law enforcement from across the country, the Cold Justice team has helped secure 45 arrests and 20 convictions. No case is too cold for Siegler, as the new season delves into new unsolved homicides while also bringing updates to previous cases. No wonder Wolf calls it “doing God’s work.” Cold Justice airs on true crime network Oxygen.

I recently spoke with Emmy-winning Arthur Forney, executive producer of all Wolf Entertainment’s scripted series (he’s also directed many episodes), about posting those shows. I also spoke with Cold Justice showrunner Liz Cook and EP/head of post Scott Patch.

Chicago Fire

Dick Wolf has said that, as head of post, you are “one of the irreplaceable pieces of the Wolf Films hierarchy.” How many shows do you oversee?
Arthur Forney: I oversee all of Wolf Entertainment’s scripted series, including Law & Order: Special Victims Unit, Chicago Fire, Chicago P.D., Chicago Med, FBI and FBI: Most Wanted.

Where is all the post done?
Forney: We do it all at NBCUniversal StudioPost in LA.

How involved is Dick Wolf?
Forney: Very involved, and we talk all the time.

How does the post pipeline work?
Forney: All film is shot on location and then sent back to the editing room and streamed into the lab. From there we do all our color corrections, and then everything is downloaded into Avid Media Composer.

What are the biggest challenges of the post process on the shows?
Forney: Delivering high-quality programming with a shortened post schedule.

Chicago Med

What are the editing challenges involved?
Forney: Trying to find the right way of telling the story, finding the right performances, shaping the show and creating intensity that results in high-quality television.

What about VFX? Who does them?
Forney: All of our visual effects are done by Spy Post in Santa Monica. All of the action is enhanced and done by them.

Where do you do the color grading?
Forney: Coloring/grading is all done at NBCUniversal StudioPost.

Now let’s talk to Cook and Patch about Cold Justice:

Liz and Scott, I recently saw the finale to Season 5 of Cold Justice. That was a long season.
Liz Cook: Yes, we did 26 episodes, so it was a lot of very long days and hard work.

It seems that there’s more focus than ever on drug-related cases now.
Cook: I don’t think that was the intention going in, but as we’ve gone on, you can’t help but recognize the huge drug problem in America now. Meth and opioids pop up in a lot of cases, and it’s obviously a crisis, and even if they aren’t the driving force in many cases, they’re definitely part of many.

L-R: Kelly Siegler, Dick Wolf, Scott Patch and Liz Cook. Photo by Evans Vestal Ward

How do you go about finding cases for the show?
Cook: We have a case-finding team, and they get the cases various ways, including cold-calling. We have a team dedicated to that, calling every day, and we get most of them that way. A lot come through agencies and sheriff’s departments that have worked with us before and want to help us again. And we get some from family members and some from hits on the Facebook page we have.

I assume you need to work very closely with local law agencies as you need access to their files?
Cook: Exactly. That’s the first part of the whole puzzle. They have to invite us in. The second part is getting the family involved. I don’t think we’d ever take on a case that the family didn’t want us to do.

What’s involved for you, and do you like being a showrunner?
Cook: It’s a tough job and pretty demanding, but I love it. We go through a lot of steps and stuff to get a case approved, and to get the police and family on board, and then we get the case read by one of our legal readers to evaluate it and see if there’s a possibility that we can solve it. At that point we pitch it to the network, and once they approve it and everyone’s on board, then if there are certain things like DNA and evidence that might need testing, we get all that going, along with ballistics that need researching, and stuff like phone records and so on. And it actually moves really fast – we usually get all these people on board within three weeks.

How long does it take to shoot each show?
Cook: It varies, as each show is different, but around seven or eight days, sometimes longer. We have a case coming up with cadaver dogs, and that stuff will happen before we even get to the location, so it all depends. And some cases will have 40 witnesses, while others might have over 100. So it’s flexible.

Cold Justice

Where do you post, and what’s the schedule like?
Scott Patch: We do it all at the Magical Elves offices here in Hollywood — the editing, sound and color correction. The online editor and colorist is Pepe Serventi. We have it all on one floor, and it’s really convenient to have all the post in-house. The schedule is roughly two months from raw footage to getting it all locked and ready to air, which is quite a long time.

Dailies come back to us, and the story team and editors do the first pass, whittling all the footage down. It takes us a couple of weeks just to look at all the footage — we usually have about 180 hours of it — and it takes a while to turn all that into something the editors can deal with. Then it goes through about three network passes with notes.

What about dealing with all the legal aspects?
Patch: That makes it a different kind of show from most of the others, so we have legal people making sure all the content is fine, and then sometimes we’ll also get notes from local law agencies, as well as internal notes from our own producers. That’s why it takes two months from start to finish.

Cook: We vet it through local law, and they see the cuts before it airs to make sure there are no problems. The biggest priority for us is that we don’t hurt the case at all with our show, so we always check it all with the local D.A. and police. And we don’t sensationalize anything.

Cold Justice

Patch: That’s another big part of editing and post – making sure we keep it authentic. That can be a challenge, but these are real cases with real people being accused of murder.

Cook: Our instinct is to make it dramatic, but you can’t do that. You have to protect the case, which might go to trial.

Talk about editing. You have several editors, I assume because of the time factor. How does that work?
Patch: Some of these cases have been cold for 25 or 30 years, so when the field team gets there, they really stand back and let the cops talk about the case, and we end up with a ton of stuff that you couldn’t fit into the time slot however hard you tried. So we have to decide what needs to be in, what doesn’t.

Cook: On day one, our “war room” day, we meet with the local law and everyone involved in the case, and that’s eight hours of footage right there.

Patch: And that gets cut down to just four or five minutes. We have a pretty small but tight team, with 10 editors who split up the episodes. Once in a while they’ll cross over, but we like to have each team and the producers stay with each episode as long as they can, as it’s so complicated. When you see the finished show, it doesn’t seem that complicated, but there are so many ways you could handle the footage that it really helps for each team to really take ownership of that particular episode.

How involved is Dick Wolf in post?
Cook: He loves the whole post process, and he watches all the cuts and has input.

Patch: He’s very supportive and obviously so experienced, and if we’re having a problem with something, he’ll give notes. And for the most part, the network gives us a lot of flexibility to make the show.

What about VFX on the show?
Patch: We have some, but nothing too fancy, and we use an outside VFX/graphics company, LOM Design. We have a lot of legal documents on the show, and that stuff gets animated, and we’ll also have some 3D crime scene VFX. The only other outside vendor is our composer, Robert ToTeras.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Speed controls now available in Premiere Rush V.1.2

Adobe has added a new panel in Premiere Rush called Speed, which allows users to manipulate the speed of their footage while maintaining control over the audio pitch, range, ramp speed and duration of the edited clip. Adobe’s Premiere Rush teams say speed control has been the most requested feature by users.

Basic speed adjustments: A clip’s speed is displayed as a percentage value, with 100% being realtime. Values below 100% result in slow motion, and values above 100% create fast motion. To adjust the speed, users simply open the speed panel, select “Range Speed” and drag the slider. Or they can tap on the speed percentage next to the slider and enter a specific value.

Speed ranges: Speed ranges allow users to adjust the speed within a specific section of a clip. To create a range, users drag the blue handles on the clip in the timeline or in the speed panel under “Range.” The speed outside the range is 100%, while speed inside the range is adjustable.

Ramping: Rush’s adjustable speed ramps make it possible to progressively speed up or slow down into or out of a range. Ramping helps smooth out speed changes that might otherwise seem jarring.

Duration adjustments: For precise control, users can manually set a clip’s duration. After setting the duration, Rush does the math and adjusts the clip speed to the appropriate value (the simple arithmetic behind this is sketched below) — a feature that is especially useful for time lapses.

Maintain Pitch: Typically, speeding up footage will raise the audio’s pitch (think mouse voice), while slowing down footage will lower it (think deep robot voice). Maintain Pitch in the speed panel takes care of the problem by preserving the original pitch of the audio at any speed.

As with everything in Rush, speed adjustments will transfer seamlessly when opening a Rush project in Premiere Pro.
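The math behind that duration feature is simple proportionality: the new speed percentage is the clip’s original duration divided by the target duration. A quick illustration (a hypothetical helper, not Rush’s actual code):

```python
def speed_for_duration(original_secs: float, target_secs: float) -> float:
    """Speed percentage that stretches/squeezes a clip to a target length."""
    return 100.0 * original_secs / target_secs

# A 60-second clip forced into 10 seconds plays at 600% (fast motion);
# stretched to 120 seconds, it plays at 50% (slow motion).
print(speed_for_duration(60, 10))   # 600.0
print(speed_for_duration(60, 120))  # 50.0
```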

Bluefish444 adds edit-while-record, REST API to IngeSTore

Bluefish444, makers of uncompressed UltraHD SDI, ASI, video over IP and HDMI I/O cards, and mini converters, has released IngeSTore version 1.1.2. This free update of IngeSTore adds support for new codecs, edit-while-record workflows and a REST API.

Bluefish444 developed IngeSTore software as a complementary multi-channel ingest tool enabling Bluefish444 hardware to capture multiple independent format SDI sources simultaneously.

In IngeSTore 1.1.2, Bluefish444 has expanded codec support to include the popular formats OP1A MPEG-2 and DNxHD within the BlueCodecPack license. Edit-while-record workflows are supported through both industry standard growing files and through Bluefish444’s BlueRT plug-in for Adobe Premiere Pro and Avid Media Composer. BlueRT allows Adobe and Avid NLEs to access media files as they are still being recorded by IngeSTore multi-channel capture software, increasing production efficiency via immediate access to recorded media during live workflows.
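Bluefish444 doesn’t detail the REST endpoints in this announcement, so the snippet below is purely hypothetical — the host, paths and JSON fields are invented for illustration. It only shows the general shape of what a REST API buys an ingest tool: automation systems can start, stop and poll recordings without anyone touching the UI.

```python
import requests

BASE = "http://ingest-host:8080/api"  # hypothetical host and path

# Start a recording on channel 1 (hypothetical endpoint and payload).
resp = requests.post(f"{BASE}/channels/1/record",
                     json={"codec": "DNxHD", "container": "OP1A"})
resp.raise_for_status()

# Poll the channel's status (hypothetical endpoint).
print(requests.get(f"{BASE}/channels/1/status").json())

# Stop the recording when the event wraps (hypothetical endpoint).
requests.post(f"{BASE}/channels/1/stop").raise_for_status()
```

Consult Bluefish444’s IngeSTore documentation for the real endpoint names and payloads.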

Review: LaCie Mobile, high-speed 1TB SSD

By Brady Betzel

With the flood of internal and external hard drives hitting the market at relatively low prices, it is sometimes hard to wade through the swamp and find the drive that is right for your workflow. In terms of external drives, do you need a RAID? USB-C? Is Thunderbolt 3 the same as USB-C? Should I save money and go with a spinning drive? Are spinning drives even cheaper than SSD drives these days? All of these questions are valid and, hopefully, I will answer them.

For this review, I’m taking a look at the LaCie Mobile SSD, which comes in three versions: 500GB, 1TB and 2TB, costing around $129.95, $219.95 and $399.95, respectively. According to LaCie’s website, the Mobile SSD drives are exclusive to Apple, but with some searching on Amazon you can find all three available at lower prices than those listed. The 1TB version I’m seeing for $152.95 is sold on Amazon through LaCie, so I assume the warranty still holds up.

I was sent the 1TB version of the LaCie Mobile SSD for review and testing. Along with the drive itself, you get two connection cables: a (USB 3.0 speed) USB-A to USB-C cable, as well as a (USB 3.1 Gen 2 speed) USB-C to USB-C cable. For clarity, USB-C is the type of connection — the oval-like shape and technology used to transfer data. While USB-C devices will work on Thunderbolt 3 ports, Thunderbolt 3-only devices will not work on plain USB-C ports. Yes, that is super-confusing considering they look the same. But in the real world, Thunderbolt 3 is more Mac OS-based while USB-C is more Windows-based. You can find the occasional Thunderbolt 3 port on a Windows-based PC, but you are more likely to find USB-C. That being said, the LaCie Mobile SSD is compatible with USB-C and Thunderbolt 3, as well as USB 3.0. Keep in mind you will not get the high transfer speed with the USB-A to USB-C cable; you will only get that with the (USB 3.1 Gen 2) USB-C to USB-C cable. The drive comes formatted as exFAT, which is immediately compatible with both Mac OS and Windows.

So, are spinning drives worth the cheaper price? In my opinion, no. Spinning drives are more fragile when moved around a lot, and they transfer at much slower speeds. Advertised speeds run from about 130MB/s for spinning drives to 540MB/s for SSDs, so what amounts to roughly $100 more today buys you a significant speed increase.

A very valuable part of the LaCie Mobile SSD purchase is the limited three-year warranty and three years of free data recovery services. No matter how your data becomes corrupted, Seagate — LaCie’s parent company — will try to recover it. Each product is eligible for one in-lab data recovery attempt, which can be turned around in as little as two days, depending on the type of recovery. The recovered media is then sent back to you on a storage device and is also available from a cloud-based account hosted online for 60 days. This is a great feature that’s included in the price.

The drive itself is small, measuring approximately 0.35” x 3” x 3.8” and weighing only 0.22 lbs. The outside has sharp lines, much in the vein of a faceted diamond, and it feels solid and great to carry. The aluminum enclosure is about the same space gray color as a MacBook Pro.

Transfer Speeds
Alright, let’s get to the nitty-gritty: transfer speeds. I tested the LaCie Mobile SSD on both a Windows-based PC with USB-C and an iMac Pro with Thunderbolt 3/USB-C. On the Windows PC, I initially connected the drive to a port on the front of my system and was only getting around 150MB/s write speeds (about the speed of USB 3.0). I knew immediately something was wrong, so I connected to a USB-C port on a PCIe card in the rear of my PC. On that port I was getting 440.9MB/s write and 516.3MB/s read speeds. Moral of the story: make sure your USB-C ports aren’t charge-only, or simply USB-C connectors running at USB 3.0 speeds.

On the iMac Pro, I was getting write speeds of 487.2MB/s and read speeds of 523.9MB/s — definitely on par with the Windows PC speeds from the proper port. The retail packaging on the LaCie Mobile SSD states a 540MB/s speed (it doesn’t differentiate between read and write), but much like the miles-per-gallon figures in car sales brochures, you have to take those numbers with a few grains of salt. And while I have previously tested drives (not from LaCie) that would initially transfer at a high rate and then drop down, the LaCie Mobile SSD sustained its high transfer rates.
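If you want to sanity-check numbers like these yourself, a crude sequential-write test is easy to script: write a big file in large chunks and time it. Here’s a rough sketch — the mount path is whatever your drive shows up as, and dedicated benchmarks (Blackmagic Disk Speed Test, CrystalDiskMark) control for caching far more carefully:

```python
import os
import time

def write_speed_mb_s(path: str, total_mb: int = 1024, chunk_mb: int = 8) -> float:
    """Crude sequential-write benchmark: returns MB/s for a streaming write."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the OS cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

print(f"{write_speed_mb_s('/Volumes/LaCie/speedtest.bin'):.1f} MB/s")
```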

Summing Up
In the end, the size and design of the LaCie Mobile SSD will be one of the larger factors in determining whether you buy this drive. It’s small — like, real small — but it feels sturdy. No one can argue that the LaCie Rugged drives (the orange-rubber-encased ones) aren’t a staple of the post industry, and I really wish LaCie had kept that tradition and added a tiny little orange rubberized edge. Not only does it feel safer for some reason, but it’s a trademark that immediately says, “I’m a professional.”

Besides the appearance, the $152.95 price tag for a 1TB SSD drive that can easily fit into your shirt pocket without being noticed is pretty reasonable. At $219.95 I might say keep looking around. In addition, if you aren’t already an Adobe Creative Cloud subscriber you will get a free 30-day trial (normally seven days) included with purchase.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Shipping + Handling adds Jerry Spivack, Mike Pethel, Matthew Schwab

VFX creative director Jerry Spivack and colorists Michael Pethel and Matthew Schwab have joined LA’s Shipping + Handling, Spot Welders‘ VFX, color grading, animation, and finishing arm/sister company.

Alongside executive producer Scott Friske and current creative director Casey Price, Spivack will help lead the company’s creative team. As creative director/co-founder at Ring of Fire, Spivack was responsible for crafting and spearheading VFX on commercials for brands including FedEx, Nike and Jaguar; episodic television work including Netflix’s Wormwood and 12 seasons of FX’s It’s Always Sunny in Philadelphia; promos for NBC’s The Voice and The Titan Games; and feature films such as Sony Pictures’ Spider-Man 2, Bold Films’ Drive and Warner Bros.’ The Bucket List.

Colorist Pethel was a founding partner of Company 3; for the past five years he has served clients and directors under his BeachHouse Color brand, which he will continue to maintain. Pethel’s body of work includes campaigns for Carl’s Jr., Chase, Coke, Comcast/Xfinity, Hyundai, Jeep, Netflix and Southwest Airlines.

Commenting on the move, Pethel says, “I’m thrilled to be joining such a fantastic group of highly regarded and skilled professionals at Shipping + Handling. There is so much creativity here; the people are awesome to work with and the technology they are able to offer clientele at the facility is top-notch.”

Schwab formally joins the Shipping + Handling roster after working closely with the company over the past two years on multiple campaigns for Apple, Acura, QuickBooks and many others. Aside from his role at Shipping + Handling, Schwab will also continue his work through Roving Picture Company. Having worked with a number of internationally recognized brands, Schwab has collaborated on projects for Amazon, Honda, Mercedes-Benz, National Geographic, Netflix, Nike, PlayStation and Smirnoff.

“It’s exciting to be part of a team that approaches every project with such energy. This partnership represents a shared commitment to always deliver outstanding color and technical results for our clients,” says Schwab.

“Pethel is easily amongst the best colorists in our industry. As a longtime client of his, I have a real understanding of the professionalism he brings to every session. He is a delight in the room and wickedly talented. Schwab’s talent has just been realized in the last few years, and we are pleased to offer his skill to our clients. If our experience working with him over the last couple of years is any indication, we’re going to make a lot of clients happy he’s on our roster,” adds Friske.

Spivack, Pethel and Schwab will operate out of Shipping + Handling’s West Coast office on the creative campus it shares with its sister company, editorial post house Spot Welders.

Image: (L-R) Mike Pethel, Matthew Schwab, Jerry Spivack


Matthew Bristowe joins Jellyfish as COO

UK-based VFX and animation studio Jellyfish Pictures has hired Matthew Bristowe as director of operations. With a career spanning over 20 years, Bristowe joins Jellyfish Pictures after a stint as head of production at Technicolor.

During his 20 years in the industry, Bristowe has overseen hundreds of productions, including Aladdin (Disney), Star Wars: The Last Jedi (Lucasfilm/Disney), Avengers: Age of Ultron (Marvel) and Guardians of the Galaxy (Marvel). In 2014, he was honored with the Advanced Imaging Society’s Lumiere Award for his work on Alfonso Cuarón’s Academy Award-winning Gravity.

Bristowe led the One Of Us VFX team to success in the category of Special, Visual and Graphic Effects at the BAFTAs and Best Digital Effects at the Royal Television Society Awards for The Crown Season 1. Another RTS award and BAFTA nomination followed in 2018 for The Crown Season 2. Prior to working with Technicolor and One of Us, Bristowe held senior positions at MPC and Prime Focus.

“Matt joining Jellyfish Pictures is a substantial hire for the company,” explains CEO Phil Dobree. “2019 has seen us focus on our growth, following the opening of our newest studio in Sheffield, and Matt’s extensive experience of bringing together creativity and strategy will be instrumental in our further expansion.”

Quick Chat: Bonfire Labs’ Mary Mathaisell

Over the course of nearly 30 years, San Francisco’s Bonfire Labs has embraced change, evolving from an editorial and post house into a design and creative content studio that leverages the best aspects of the agency and production company models without adhering to either one.

This hybrid model has worked well for product launches for Google, Facebook, Salesforce, Logitech and many others.

The latest change is in the company’s ownership, with the last of the original founders stepping down and a new management partnership taking over — led by executive producer Mary Mathaisell, managing director Jim Bartel and head of strategy and creative Chris Weldon.

We spoke with Mathaisell to get a better sense of Bonfire Labs’ past, present and future.

Can you give us some history of Bonfire Labs? When did you join the company? How/why did you first get into producing?
I’ve been with Bonfire Labs for seven years. I started here as head of production. After being at several large digital agencies working on campaigns and content for brands like Target, Gap, LG and PayPal, I wanted to build something more sustainable than just another campaign and was thrilled that Bonfire was interested in growing into a full-service creative company with integrated production.

Prior to working at AKQA and Publicis, I worked in VFX and production as well as design for products and interfaces, but my primary focus and love has always been commercial production.

The studio has evolved from a traditional post studio to creative strategy and content company. What were the factors that drove those changes?
Bonfire Labs has always been smart about staying small and strategic about the kind of work and clients to focus on. We have been able to change based on both the kind of work we want to be doing and what the market needs. With a giant need for content, especially video content, we have decided to staff and service clients as experts across all phases of creative development, production and finishing. Instead of going to an agency, a production company and post houses, our clients can work directly with us on everything from concept to finishing.

Silicon Valley is clearly a big client base for you. What are they generally coming to you for? Are the content needs in high tech different from other business sectors?
Our clients usually have a new product, feature or brand that they want the world to know about. We work on product launches, brand awareness campaigns, product education, event content and social content. Most of our work is for technology companies, but every company these days has a technology component. I would say that speed to market is one key differentiator for our clients. We are often building stories as we are in production, so we get a lot done with our clients through creative collaboration and by not following the traditional rules of an agency or a production company.

Any specific trends that you’re seeing recently from your clients? New areas that Bonfire is looking to explore, either new markets for your talents or technology you’re looking to explore further?
Rapid brand prototyping is a new service we are offering to much excitement. Because we have experience across so many technology brands and work closely with our clients, we can develop a language and brand voice faster than most traditional agencies. Technology brands are evolving so quickly that we often start working on content creation before a brand has defined itself or transitioned to its next phase. Rapid brand prototyping allows brands to test content and grow the brand simultaneously.

Blade Shadow

Can you talk about some projects that you have done recently that challenged you and the team?
We rolled out a launch film for a new start-up client called Blade Shadow. We are working with Salesforce to develop trailblazer stories and anthem films for its .org branch, which focuses on NGOs, education and philanthropy.

The company is undergoing a transition with some of the original partners. Can you talk about that a bit as well?
The original founders have passed the torch to the group of people who have been managing and producing the work over the past five to 15 years. We have six new owners, three managing partners and three associate partners. Jim Bartel is the managing director; Chris Weldon is the head of strategy and creative, and I’m the executive producer in charge of content development and production. The three of us make up the management team.

Sheila Smith (head of production), Robbie Proctor (head of editorial) and Phil Spitler (creative technology lead) are associate partners; they contribute to and lead so much of our work and process and have each been part of the company for over 10 years.


Avid’s new control surfaces for Pro Tools, Media Composer, other apps

By Mel Lambert

During a recent come-and-see MPSE Sound Advice evening at Avid’s West Coast offices in Burbank, MPSE members and industry colleagues were treated to an exclusive look at two new control surfaces for editorial suites and film/TV post stages.

The S1 and S4 controllers join the current S3 and larger S6 control surfaces. Session files from all S Series surfaces are fully compatible with one another, enabling edit and mix session data to move freely from facility to facility. All surfaces provide comprehensive control of Eucon-enabled software, including Pro Tools, Cubase, Nuendo, Logic Pro, Media Composer and other apps to create and record tracks, write automation, control plugins, set up routing and a host of other essential operations via assignable faders, buttons and rotary controls.

S1

Jeff Komar, one of Avid’s pro audio solutions specialists, served as our guide during the evening’s demo sessions of the new surfaces for fully integrated sample-accurate editing and immersive mixing. Expected to ship toward the end of the year, the S1 is said to offer full software integration with Avid’s high-end consoles in a portable, slim-line surface, while the S4 — which reportedly begins shipping in September — is said to bring workstation control to small- to mid-sized post facilities in an ergonomic and compact package.

Pro-user prices start at $24,000 for a three-foot S4 with eight faders; a five-foot configuration with 24 on-surface faders and post-control sections should retail for around $50,000. The S1’s expected end-user price will be approximately $1,200.

The S4 provides extensive visual feedback — switchable displays for channel meters, groups, EQ curves and automation data, in addition to scrolling Pro Tools waveforms that can be edited from the surface. The semi-modular architecture accommodates between eight and 24 assignable faders in eight-fader blocks, with add-on displays, joysticks, PEC/direct paddles and all-knob attention modules. The S4 also features assignable talkback, listen-back and speaker sources/levels for Foley/ADR recording, plus monitoring for Dolby Atmos and other immersive audio formats. The unit can command two connected playback/record workstations. In essence, the S4 replaces the current S6 M10 system.

Avid’s Jeff Komar

From recording and editing tracks to mixing and monitoring in stereo or surround, the smaller S1 surface provides comprehensive control and visual feedback with full-on Eucon compatibility for Pro Tools and Media Composer. There is also native support for third-party applications, such as Apple Logic Pro, Steinberg Cubase, Adobe Premiere Pro and others. Users can connect up to four units — and also add a Pro Tools|Dock — to create an extended controller. Each S1 has an upper shelf designed to hold an iOS- or Android-compatible tablet running the Pro Tools|Control app. With assignable motorized faders and knobs, as well as fast-access touchscreen workflows and programmable Soft Keys, the S1 is said to offer the speed and versatility needed to accelerate post and video projects.

Reaching deeper into the S4’s semi-modular topology, the surface can be configured with up to three Channel Strip Modules (offering a maximum of 24 faders), four Display Modules to provide visual feedback of each session, and up to three optional modules. The Display Module features a high-resolution TFT screen to show channel names, channel meters, routing, groups, automation data and DAW settings, as well as scrolling waveforms and master meters.

Eucon connectivity can be used to control two different software applications simultaneously, with single-key access to plugin editing, session automation writing and other complex tasks. Adding joysticks, PEC/direct paddles and attention panels enables more functions to be controlled simultaneously from the modular control surface to handle various editing and mixing workflows.

S4

The Master Touch Module (MTM) provides fast access to mix and control parameters through a tilting 12.1-inch multipoint touchscreen, with eight programmable rotary encoders and dedicated knobs and keys. The Master Automation Module (MAM) streamlines session navigation plus project automation and features a comprehensive transport control section with shuttle/jog wheel, a Focus Fader, automation controls and a numeric keypad. The Channel Strip Module (CSM) controls track levels, plugins and other parameters through eight channel faders and 32 top-lit knobs (four per channel), plus other programmable keys and switches.

For mixing and panning surround and immersive audio projects, including Atmos and Ambisonics, the Joystick Module features a pair of controllers with TFT and OLED displays. The Post Module enables switching between live and recorded tracks/stems through two rows of 10 PEC/direct paddles, while the Attention Knob Module features 32 top-lit knobs — or up to 64 via two modules — to provide extra assignable controls and feedback for plugins, EQ, dynamics, panning and more.

Depending on the number of Channel Strip Modules and other options, a customized S4 surface can be housed in a three-, four- or five-foot pre-assembled frame. As a serving suggestion, the S4-3_CB_Top includes one CSM, one MTM, one MAM and filler panels/plates in a three-foot frame, reaching up to a 24-fader, five-foot base system that includes three CSMs, one MTM, one MAM and filler panels/plates.

My sincere thanks to members of Avid’s Burbank crew, including pro audio solutions specialists Tony Joy and Gil Gowing, together with Richard McKernan, professional console sales manager for the western region, for their hospitality and patience with my probing questions.


LA-based Mel Lambert is principal of Content Creators. He can be reached at mel.lambert@content-creators.com. Follow him on Twitter @MelLambertLA.

An artist’s view of SIGGRAPH 2019

By Andy Brown

While I’ve been lucky enough to visit NAB and IBC several times over the years, this was my first SIGGRAPH. Of course, there are similarities. There are lots of booths, lots of demos, lots of branded T-shirts, lots of pairs of black jeans and a lot of beards. I fit right in. I know we’re not all the same, but we certainly looked like it. (The stats regarding women and diversity in VFX are pretty poor, but that’s another topic.)

Andy Brown

You spend your whole career in one industry and I guess you all start to look more and more like each other. That’s partly the problem for the people selling stuff at SIGGRAPH.

There were plenty of compositing demos of all sorts of software. (Blackmagic was running a hands-on class for 20 people at a time.) I’m a Flame artist, so I think Autodesk’s offering is best, obviously. Everyone’s compositing tool can play back large files and color correct, composite, edit, track and deliver, so in the midst of a buzzy trade show, the differences feel far fewer than the similarities.

Mocap
Take the world of tracking and motion capture as another example. There were more booths demonstrating tracking and motion capture than anything in the main hall, and all that tech came in different shapes and sizes and an interesting mix of hardware and software.

The motion capture solution required for a Hollywood movie isn’t the same as the one to create a live avatar on your phone, however. That’s where it gets interesting. There are solutions that can capture and translate the movement of everything from your fingers to your entire body using hardware from an iPhone X to a full 360-camera array. Some solutions used tracking ball markers, some used strips in the bodysuit and some used tiny proximity sensors, but the results were all really impressive.

Vicon

Some tracking solution companies had different versions of their software and hardware. If you don’t need all of the cameras and all of the accuracy, then there’s a basic version for you. But if you need everything to be perfectly tracked in real time, then go for the full-on pro version with all the bells and whistles. I had a go at live-animating a monkey using just my hands, and apart from ending with him licking a banana in a highly inappropriate manner, I think it worked pretty well.

AR/VR
AR and VR were everywhere, too. You couldn’t throw a peanut across the room without hitting someone wearing a VR headset. They’d probably be able to bat it away whilst thinking they were Joe Root or Max Muncy (I had to Google him), with the real peanut being replaced with a red or white leather projectile. Haptic feedback made a few appearances, too, so expect to be able to feel those virtual objects very soon. Some of the biggest queues were at the North stand, where the company was showing glasses that look like the ones everyone was already wearing (like mine, obviously), except with a head-up display built in. I have mixed feelings about this. Google Glass didn’t last very long for a reason, although I don’t think North’s glasses have a camera in them, which makes things feel a bit more comfortable.

Nvidia

Data
One of the central themes for me was data, data and even more data. Whether you are interested in how to capture it, store it, unravel it, play it back or distribute it, there was a stand for you. This mass of data was being managed by really intelligent components and software. I was expecting to be writing all about artificial intelligence and machine learning from the show, and it’s true that there was a lot of software that used machine learning and deep neural networks to create things that looked really cool. Environments created using simple tools looked fabulously realistic because of deep learning. Basic pen strokes could be translated into beautiful pictures because of the power of neural networks. But most of that machine learning is in the background; it’s just doing the work that needs to be done to create the images, lighting and physical reactions that go to make up convincing and realistic images.

The Experience Hall
The Experience Hall was really great because no one was trying to sell me anything. It felt much more like an art gallery than a trade show. There were long waits for some of the exhibits (although not for the golf swing improver that I tried), and it was all really fascinating. I didn’t want to take part in the experiment that recorded your retina scan and made some art out of it because, well, you know, it’s my retina scan. I also felt a little reluctant to check out the booth that made light-based animated artwork derived from your date, time and location of birth. But maybe all of these worries are because I’ve just finished watching the Netflix documentary The Great Hack. I can’t help but think that a better source of the data might be something a little less sinister.

The walls of posters back in the main hall described research projects that hadn’t yet made it into full production and gave more insight into what the future might bring. It was all about refinement, creating better algorithms, creating more realistic results. These uses of deep learning and virtual reality were applied to subjects as diverse as translating verbal descriptions into character design, virtual reality therapy for post-stroke patients, relighting portraits and haptic feedback anesthesia training for dental students. The range of the projects was wide. Yet everyone started from the same place, analyzing vast datasets to give more useful results. That brings me back to where I started. We’re all the same, but we’re all different.

Main Image Credit: Mike Tosti


Andy Brown is a Flame artist and creative director of Jogger Studios, a visual effects studio with offices in Los Angeles, New York, San Francisco and London.

Behind the Title: Compadre’s Mika Saulitis

This creative started writing brand campaigns for his favorite oatmeal at eight years old.

NAME: Mika Saulitis

COMPANY: Culver City, California’s Compadre

CAN YOU DESCRIBE YOUR COMPANY?
We’re a creative marketing agency. I could get into the nuts and bolts of our process and services, but what we specialize in is pretty simple: building a brand’s story, telling that story and spreading that story everywhere people can fall in love with it.

WHAT’S YOUR JOB TITLE?
Director of Creative Strategy

WHAT DOES THAT ENTAIL?
The short answer is that I oversee brand strategy and integrated marketing campaigns. The longer answer is that our creative strategy team’s primary goal is to take complex insights and business challenges and develop simple, clear creative solutions. Sometimes that’s renaming a company or developing a new brand position, or conceiving a big “hook” for a 360 marketing campaign and rippling it out across on-air, social, experiential, and brand partnerships.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Identifying the unique differentiator of a brand or product and figuring out how to express that in a succinct, unexpected way.

WHAT’S YOUR LEAST FAVORITE?
Proofreading 150-page presentations.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
9am. Once that coffee hits, I’m off to the races.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I always wanted to be a garbage man growing up, so if they still ride along on the back of the truck, probably that.

WHY DID YOU CHOOSE THIS PROFESSION?
I started writing ads for my favorite oatmeal to convert my classmates when I was eight years old, so I’ve been a marketer at heart for pretty much my whole life.

Freeform

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
We just developed a brand campaign for Freeform that rejects dated societal norms. Our concept, “It’s Not Us, It’s You,” was a breakup letter to society; we shot real people, as well as the network’s talent, and empowered them to speak their piece and break up with all the things that suck about societal standards.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Noise-canceling headphones, Apple TV and my bike. That was technology at one point, right?

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
I’m not ashamed to admit that Flo Rida gets my creative juices flowing.

THIS IS A HIGH-STRESS JOB WITH DEADLINES AND CLIENT EXPECTATIONS. WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Golf is my ultimate stress reliever. Being surrounded by trees, chirping birds and the occasional “fore” puts me at ease.

Maxon intros Cinema 4D R21, consolidates versions into one offering

By Brady Betzel

At SIGGRAPH 2019, Maxon introduced the next release of its graphics software, Cinema 4D R21. Maxon also announced a subscription-based pricing structure as well as a very welcomed consolidation of its Cinema 4D versions into a single version, aptly titled Cinema 4D.

That’s right, no more Studio, Broadcast or BodyPaint. It all comes in one package at one price, and that pricing will now be subscription-based — but don’t worry, the online anxiety over this change seems to have been misplaced.

The cost of Cinema 4D R21 has dropped substantially, kicking off what Maxon calls its “3D for the Real World” initiative. Maxon wants Cinema 4D to be the tool you choose for your graphics needs.

If you plan on upgrading every year or two, the new subscription-based model seems to be a great deal (see the quick cost comparison after the price list):

– Cinema 4D subscription paid annually: $59.99/month
– Cinema 4D subscription paid monthly: $94.99/month
– Cinema 4D subscription with Redshift paid annually: $81.99/month
– Cinema 4D subscription with Redshift paid monthly: $116.99/month
– Cinema 4D perpetual pricing: $3,495 (upgradeable)
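To put those numbers in perspective, here’s the back-of-the-envelope arithmetic, using only the list prices above (promotions and upgrade pricing will change the totals):

```python
ANNUAL_PLAN = 59.99 * 12   # paid annually: $719.88 per year
MONTHLY_PLAN = 94.99 * 12  # paid monthly: $1,139.88 per year
PERPETUAL = 3495.00        # one-time purchase, upgradeable

for years in (1, 2, 3, 5):
    print(f"{years} yr on the annual plan: ${ANNUAL_PLAN * years:,.2f} "
          f"vs perpetual ${PERPETUAL:,.2f}")
```

At $719.88 a year, the annual subscription doesn’t overtake the old perpetual price until roughly year five — which is why frequent upgraders come out ahead.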

Maxon did mention that if you have previously purchased Cinema 4D, there will be subscription-based upgrade/crossgrade deals coming.

The Updates
Cinema 4D R21 includes some great updates that will be welcomed by many users, both new and experienced. The new Field Force dynamics object allows the use of dynamic forces in modeling and animation within the MoGraph toolset. Caps and bevels have an all-new system that not only allows the extrusion of 3D logos and text effects but also means caps and bevels are integrated on all spline-based objects.

Furthering Cinema 4D’s integration with third-party apps, there is an all-new Mixamo Control rig allowing you to easily control any Mixamo characters. (If you haven’t checked out the models from Mixamo, you should. It’s a great way to find character rigs fast.)

An all-new Intel Open Image Denoise integration has been added to R21 in what seems like part of a rendering revolution for Cinema 4D. From the acquisition of Redshift to this integration, Maxon is expanding its third-party reach and doesn’t seem scared.

There is a new Node Space, which shows what materials are compatible with chosen render engines, as well as a new API available to third-party developers that allows them to integrate render engines with the new material node system. R21 has overall speed and efficiency improvements, with Cinema 4D supporting the latest processor optimizations from both Intel and AMD.

All this being said, my favorite update — or map toward the future — was actually announced last week. Unreal Engine added Cinema 4D .c4d file support via the Datasmith plugin, which is featured in the free Unreal Studio beta.

Today, Maxon is also announcing its integration with yet another game engine: Unity. In my opinion, the future lies in this mix of real-time rendering alongside real-world television and film production as well as gaming. With Cinema 4D, Maxon is bringing all sides to the table with a mix of 3D modeling, motion-graphics-building support, motion tracking, integration with third-party apps like Adobe After Effects via Cineware, and now integration with real-time game engines like Unreal Engine. Now I just have to learn it all.

Cinema 4D R21 will be available on both Mac OS and Windows on Tuesday, Sept. 3. In the meantime, watch out for some great SIGGRAPH presentations, including one from my favorite, Mike Winkelmann, better known as Beeple. You can find some past presentations on how he uses Cinema 4D to cover his “Everydays.”

Skywalker Sound’s audio post mix for Toy Story 4

By Jennifer Walden

Pixar’s first feature-length film, 1995’s Toy Story, was a game-changer for animated movies. There was no going back after that blasted onto screens and into the hearts of millions. Fast-forward 24 years to the franchise’s fourth installment — Toy Story 4 — and it’s plain to see that Pixar’s approach to animated fare hasn’t changed.

Visually, Toy Story 4 brings so much to the screen, with its near-photorealistic imagery, interesting camera angles and variations in depth of field. “It’s a cartoon, but not really. It’s a film,” says Skywalker Sound’s Oscar-winning re-recording mixer Michael Semanick, who handled the effects/music alongside re-recording mixer Nathan Nance on dialogue/Foley.

Nathan Nance

Here, Semanick and Nance talk about their approach to mixing Toy Story 4, how they use reverb and Foley to bring the characters to life, and how they used the Dolby Atmos surround field to make the animated world feel immersive. They also talk about mixing the stunning rain scene, the challenges of mixing the emotional carnival scenes near the end and mixing the Bo Peep and Woody reunion scene.

Is your approach to mixing an animated film different from how you’d approach the mix on a live-action film? Mix-wise, what are some things you do to make an animated world feel like a real place?
Nathan Nance: The approach to the mix isn’t different. No matter if it’s an animated movie or a live-action movie, we are interested in trying to complement the story and direct the viewer’s attention to whatever the director wants their attention to be on.

With animation, you’re starting with just the ADR, and the approach to the whole sound job is different because you have to pick and choose every single sound and really create those environments. Even with the dialogue, we’re creating spaces with reverb (or lack of reverb) and helping the emotions of the story in the mix. You might not have the same options in a live-action movie.

Michael Semanick

Michael Semanick: I don’t approach a film differently. Live action or animated, it comes down to storytelling. In today’s world, some of these live-action movies are like animated films. And the animated films are like live-action. I’m not sure which is which anymore.

Whether it’s live action or animation, the sound team is creating the environments. For live-action, they’re often shooting on a soundstage or they’re shooting on greenscreen, and the sound team creates those environments. For live-action films, they try to get the location to be as quiet as it can be to get the dialogue as clean as possible. So, the sound team is only working with dialogue and ADR.

It’s like an animation in that they need to recreate the entire environment. The production sound mixer is trying to capture the dialogue and not the extraneous sounds. The production sound mixer is there to capture the performance from the actors on that day at that time. Sometimes there are production effects, but the post sound team still preps the scene with sound effects, Foley and loop group. Then on the dub stage, we choose how much of that to put in.

For an animated film, they do the same thing. They prep a whole bunch of sounds and then on the dub stage we decide how busy we want the scene to be.

How do you use reverb to help define the spaces and make the animated world feel believable?
Semanick: Nathan really sets the tone when he’s doing the dialogue, defining how the environments and different spaces are going to sound. That works in combination with the background ambiences. It’s really the voice bouncing off objects that gives you the sense of largeness and depth of field. So reverb is really important in establishing the size of the room and also outdoors — how your voice slaps off a building versus how it slaps off of trees or mountains. Reverb is a really essential tool for creating the environments and spaces that you want to put your actors or characters in.

Nance: You can use reverb to try and make the spaces sound “real” — whatever that means for cinema. Or, you can use it to create something that’s more emotional or has a certain vibe. Reverb is really important for making the dry dialogue sound believable, especially in these Pixar films. They are all in on the environments they’ve created. They want it to sound real and really put the viewer there. But then, there are moments when we use reverb creatively to push the moment further and add to the emotional experience.
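For readers curious about the mechanics, the standard digital tool for putting a dry voice “in a room” is convolution reverb: take an impulse response that characterizes a space, convolve the dry dialogue with it and blend to taste. A bare-bones sketch of the technique with NumPy/SciPy — a generic illustration, not the specific plugins used on the film:

```python
import numpy as np
from scipy.signal import fftconvolve

def convolution_reverb(dry: np.ndarray, ir: np.ndarray, wet_mix: float = 0.3) -> np.ndarray:
    """Blend dry dialogue with its convolution against a room impulse response."""
    wet = fftconvolve(dry, ir)[: len(dry)]
    out = (1 - wet_mix) * dry + wet_mix * wet
    return out / np.max(np.abs(out))  # normalize to avoid clipping

# Synthetic impulse response: exponentially decaying noise, roughly a small room.
sr = 48000
t = np.arange(int(0.8 * sr)) / sr
ir = np.random.randn(t.size) * np.exp(-6.0 * t)

dry = np.random.randn(sr)  # stand-in for one second of dry dialogue
processed = convolution_reverb(dry, ir, wet_mix=0.25)
```

A longer, darker impulse response reads as a bigger space, and easing the wet mix up or down is one simple way to push a voice deeper into a room or pull it closer to the viewer.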

What are some other things you do mix-wise to help make this animated world feel believable?
Semanick: The addition of Foley helps ground a lot of the animation. Those natural sounds, like footsteps and movements, we take for granted — just walking down the street or sitting in a restaurant. Those become a huge part of these films. The Foley helps to ground the animation. It gives it life, something to hold onto.

Foley is a big part of making the animated world feel believable. You have Foley artists performing to the actual picture, and the way they put a cup down or how they come to a stop adds character to the sound. It can make it sound more human, more real. Really good Foley artists can become the character. They pick up on the nuances — like how the character drags their feet or puts down a cup. All those little things we take for granted but they are all part of our character. Maybe the way you hold a wine glass and set it down is different from how I would do it. So good Foley artists tune into that right away, and they’ll match it with their performance. They’ll put one edge of the cup down and then the other if that’s how the character does it. So Foley helps to ground a lot of the animation and the VFX to reality. It adds realism. Give it up for the Foley artists!

Nance: So many times the sounds that are in Foley are the ones we recognize and take for granted. You hear those little sounds and think, yeah, that’s exactly what that sounds like. It’s because the Foley artists perform it and these are sounds that you recognize from everyday life. That adds to the realism, like Michael said.

Mix-wise, it must have been pretty difficult to push the subtle sounds through a full mix, like the sounds of the little spork named Forky. What are some techniques and sound tools that help you to get these character sounds to cut through?
Semanick: Director Josh Cooley was very particular about the sounds Forky was going to make. Supervising sound editors Ren Klyce and Coya Elliott and their team went out and got a big palette of sounds for different things.

We weeded through them here with Josh and narrowed it down. Josh then kind of left it up to me. He said he just wanted to hear Forky when he needed to hear him and then not ever have to think about it. The problem with Forky is that if there’s too much sound for him then you’re constantly watching what he’s doing as opposed to listening to what he’s saying. I was very diligent about weeding things out a lot of the time and adding sounds in for the eye movements and other tiny, specific sounds. But there’s not much sound in there for him. It’s just the voice because often his sounds were getting in the way of the dialogue and being distracting. We were very diligent about choosing what to hear and not to hear. Josh was very particular about what those sounds should be. He had been working with Ren on those for a couple months.

In balancing a film (and particularly Toy Story 4, with so many characters and so much going on), you have to really pick and choose sounds. You don’t want to pull the audience’s attention in a direction you don’t intend. That was one of the main things for Forky — getting his sounds right.

The opening rain scene was stunning! What was your approach to mixing that scene? How did you use the Dolby Atmos surround field to enhance it?
Semanick: That was a tough scene to mix. There is a lot of rain coming down and the challenge was how to get clarity out of the scene and make sure the audience can follow what was happening. So the scene starts out with rain sounds, but during the action sequence there’s actually no rain in the track.

Amazingly, your human ears and your brain fill in that information. I establish the rain and then when the action starts I literally pull all of the rain out. But your mind puts the rain there still. You think you hear it but it’s actually not there. When the track gets quiet all of a sudden, I bring the rain back up so you never miss the rain. No one has ever said anything about not hearing the rain.

I love the sound of rain; don’t get me wrong. I love the sound of rain on windows, rain on cars, rain on metals… Ren and his team did such an amazing job with that. We had a huge palette of rain. But there’s a certain point in the scene where we need the audience to focus on all of the action that’s happening, what’s really going on.

There’s Woody and Slinky Dog being stretched and RC in the gutter, and all this. So when I put all of the sounds up there you couldn’t make out anything. It was confusing. So I pulled all of the rain out. Then we put in all of the specific sounds. We made sure all of the dialogue, music and sounds worked together so the audience could follow the action. Then I went back through and added the rain back in. When we didn’t need it, I drifted it out. And when we needed it, I brought it back in. It took a lot of time to do that and some careful balancing to make it work.
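
What Semanick describes is, mechanically, a gain ride on the rain stem: establish it, pull it to zero under the action while the brain fills it in, then drift it back as the track quiets. Here is a minimal, self-contained Python sketch of that automation shape; the stem and the breakpoint timings are invented for illustration, not taken from the actual session.

    # Piecewise gain envelope for the "establish, pull out, restore" rain ride.
    import numpy as np

    RATE = 48000

    def envelope(breakpoints, num_samples):
        """Piecewise-linear gain curve from (time_in_seconds, gain) breakpoints."""
        times, gains = zip(*breakpoints)
        t = np.arange(num_samples) / RATE
        return np.interp(t, times, gains)

    rain = 0.1 * np.random.randn(RATE * 60)  # stand-in for a 60-second rain stem

    gain = envelope([(0, 1.0),                # establish the rain fully
                     (8, 1.0),
                     (10, 0.0),               # action starts: rain is gone,
                     (40, 0.0),               # but the brain keeps "hearing" it
                     (43, 1.0),               # track quiets: drift the rain back
                     (60, 1.0)], len(rain))

    automated_rain = rain * gain              # what actually reaches the mix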

That was a fun thing to do, but it took time. We were working on a movie that kids and adults are going to see, so we didn’t want to make it too loud. We wanted to make it comfortable. But it’s an action scene, so you want it to be exciting, and it had to work with the music. We were very careful about how loud we made things; when things started to hurt, we pulled it all back. Keeping control of the volume and getting those balances was very difficult. You don’t want it too quiet, because the scene is exciting, but if you make it too loud, that pushes the audience away and they stop paying attention.

That scene was fun in Dolby Atmos. I had the rain all around the theater and in the ceiling, but it goes away and comes back in when needed.

Did you have a favorite scene for mixing in Atmos?
Semanick: One of my favorite scenes for Atmos was when Bo Peep takes Woody to the top of the carousel and she asks why Woody would ever want to stay with one kid when you can have all of this. I do a subtle thing with the music — there are a few times in the film where I do this — where I pull the music forward as they’re climbing to the top of the carousel. There’s no music in the surrounds or the tops. I pull it so far forward that it’s almost mono.

Then, as they pop up from atop the carousel and the camera sweeps around, I let the music open up. I bloom it into the surrounds and into the overheads. I bloom it really hard with the camera moves. If you’re paying attention, you will feel the music sweep around you. You’re just supposed to feel it, not to really know that it happened. That’s one of the mixing techniques that I learned over the years. The picture editor, Axel Geddes, would ask me to make it “magical” and put more “magic” into it. I started to interpret that as: fill up the surrounds more.
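
One way to model that bloom is a constant-power crossfade between the front array and the surround and overhead feeds: the front gain follows a cosine and the surround gain a sine, so total energy stays steady while the image widens. The Python sketch below illustrates the panning law with an invented five-second ramp; it is not Semanick’s console automation.

    # Constant-power "bloom": widen the music from the front speakers into the
    # surrounds without changing total energy (cos^2 + sin^2 = 1).
    import numpy as np

    RATE = 48000
    music = np.random.randn(RATE * 5)              # stand-in for the score stem

    # Ramp from front-only (theta = 0) to an even front/surround spread (pi/4).
    theta = np.linspace(0.0, np.pi / 4, len(music))
    front_feed = music * np.cos(theta)
    surround_feed = music * np.sin(theta)          # blooms up as the camera sweeps

Stopping the ramp at pi/4 leaves the energy split evenly rather than sweeping the music entirely behind the audience.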

One of the best parts of Atmos is that the surrounds are the same as the front speakers, so the sound doesn’t fall off. They’re full-range, with bass management toward the back of the room. That helps me, mix-wise, to really bring the sound into the room and fill it out when I need to. There are a few scenes like that, and Nathan would look at me funny and say, “Wow, I really hear it.”

We’re so concentrated on the sound. I’m just hoping the audience will feel it wrap around them and give them a good sense of warmth. I’m trying to help push the emotional content. Randy Newman’s music was so good and really helped the story, and I wanted to help it be the best it could be emotionally. It was already there; I just wanted to give that little extra. Pulling the music into the front and then pushing it out into the whole theater gave it an emotional edge.

Nance: There are a couple of fun Atmos moments for effects. When they’re in the dark closet, the sound is happening all around you. Also, when Woody wakes up from his voice box removal surgery, Michael brought the sewing machine right up into the overheads, and we have the pull string floating around the room and into the ceiling. Those two moments were a pretty cool use of the point-source and enveloping capabilities of Atmos.

What was the most challenging scene to mix? Why?
Nance: The whole scene with the lost girl and Gabby, all the way through the toys’ goodbyes. Those were two full sections, but we get so quiet even though there’s a huge carnival happening. It was a huge cheat. It took a lot of work to get into those quiet, delicate moments where we take everything out, all the backgrounds, and it’s very simple. Michael pulled the music forward in some of those spots, and the whole mix becomes very simple and quiet. You’re almost holding your breath in those moments with the goodbyes. Sometimes we think of the really loud, bombastic scenes as being tough. And they were! The escape from the antique store took quite a lot of work to balance and shape. But I think the quiet, delicate scenes take more work because they take more shaping.

Semanick: I agree. Those areas were very difficult. There was a whole carnival going on and I had to strip it all down. I had my moments. When they’re together above the carnival, it looks beautiful up there. The carnival rides behind them are blurry and we didn’t need to hear the sounds. We heard them before. We know what they sound like. Plus, that moment was with the toys. We were just with them. The whole world has dissolved, and the sound of the world too. You see the carnival back there, but you’re not really paying attention to it. You’re paying attention to Woody and Bo Peep or Gabby and the lost girl.

Another interesting scene was when Woody and Forky first walk through the antique store. The tones in each place change, and the reverbs on the voices change in every single room. The challenge was how to establish the antique store. It’s very quiet, so we were very specific on each cut. Where are they? What’s around them? How high is the camera sitting? You start looking closely at the scene. I was also able to do things with Atmos, like putting sounds in the ceiling.

What scene went through the most evolution mix-wise? What were some of the different ways you tried mixing it? Ultimately, why did you go with the way it’s mixed in the final?
Semanick: There’s a scene when Woody and Bo Peep reunite on the playground. A little girl picks up Woody and she has Bo Peep in her hands. They meet again for the first time. That scene went through changes musically and dialogue-wise. What do we hear? How much of the girl do we hear before we see Bo Peep and Woody looking at each other? We tried several different ways. There were many opinions that came in on that. When does the music bloom? When does it fill the room out? Is the score quite right? They recut the score. They had a different version.

That scene went through quite a few ups and downs. We weren’t sure which way to go. Ultimately, Josh was happy with it, and it plays well.

There was another version of Randy’s score that I liked. But, it’s not about what I like. It’s about how the overall room feels — if everybody feels like it’s the best that we can do. If that’s yes, then that’s the way it goes. I’ll always speak up if I have ideas. I’ll say, “Think about this. Think about that.”

That scene went through some changes, and I’m still on the fence. It works great, but I know there’s another version of the music that I preferred. I’ll just have to live with that.

Nance: We just kept trying things out on that scene until we had it feeling good, like it was hitting the right beats. We had to figure out what the timing was, what would have the most emotional impact. That’s why we tried out so many different versions.

Semanick: That’s a big moment in the film. It’s what starts the back half of the film. Woody gets reacquainted with Bo Peep and then we’re off to the races.

What console did you mix Toy Story 4 on and why?
Semanick: We both mixed on the Neve DFC. It’s my console of choice. I love the console; I love the way it sounds. And I love that it has separate automation: the editors’ automation stays intact, and I can change my own without affecting theirs. It’s the best of both worlds. It runs really smoothly, and it’s one of the best-sounding consoles around.

Nance: I really enjoy working on the Neve DFC. It’s my console of choice when there’s the option.

Semanick: There are a lot of different consoles and control surfaces you can use now, but I’m used to the DFC. I can really play the console as a musical instrument. It’s like a performance. I can perform these balances. I can grab knobs and change EQ or add reverb and pull things back. It’s like a performance and that console seems the most reliable one for me. I know it really well. It helps when you know your instrument.

Any final thoughts you’d like to share on mixing Toy Story 4?
Semanick: With these Pixar films, I get to benefit from the great storytelling and what they’ve done visually. With every aspect of these films — the cinematography, the lighting, the character development, the costumes and set design — Pixar spends so many hours debating how things are going to look.

So, on the sound side, it’s about matching what they’ve done. How can I help support it? It’s amazing to me how much time they spend on these films. It’s hardcore filmmaking. It’s a cartoon, but not really. It’s a film, and it’s a really good film. You look at all the aspects of it, like how the camera moves. It’s not a real camera, but you’re watching through the lens, seeing the camera angles, where and how they place the camera. They have to debate all of that.

One of the hardest scenes for them must have been when Bo Peep and Woody are in the antique store and they turn and look at all the chandeliers. It was gorgeous, a beautiful shot. I bloom the music out there, around the theater. That was a delicate scene. When you look at the filmmaking they’re doing there and the reflections of the lights, you know they’re good. They’re really good.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Dalet to acquire Ooyala Flex Media Platform business

Dalet, a provider of solutions and services for content pros and broadcasters, has signed a definitive agreement to acquire the Ooyala Flex Media Platform business. The asset deal includes the Ooyala Flex Media Platform, as well as Ooyala personnel across sales, marketing, engineering, professional services and support.

The Ooyala Flex Media Platform, which is primarily sold as a subscription/SaaS offering, serves OTT and digital media distribution workflows. The acquisition of these assets, personnel and customers will expand the Dalet solutions offering to more verticals and tiers beyond its traditional customer base in production and news workflows, and will accelerate Dalet’s strategic shift toward recurring, subscription/SaaS-based revenue.

“By acquiring Ooyala, Dalet widens the markets it can address in terms of verticals and their respective tiers of complexity. A perfect complement to our existing Dalet Galaxy five offering in our traditional markets, the Ooyala Flex Media Platform also opens opportunities for new customers such as corporate brands, telcos, leagues and sports teams, who are looking to simply manage their media assets. The modern metadata management and orchestration capabilities of the Ooyala Flex Media Platform bring what these organizations need to lower TCO, improve agility and reduce time to market,” says David Lasry, chief executive officer for Dalet.

Virtual Production Field Guide: Fox VFX Lab’s Glenn Derry

Just ahead of SIGGRAPH, Epic Games has published a resource guide called “The Virtual Production Field Guide” — a comprehensive look at how virtual production impacts filmmakers, from directors to the art department to stunt coordinators to VFX teams and more. The guide is workflow-agnostic.

The use of realtime game engine technology has the potential to impact every aspect of traditional filmmaking, and the trend is increasingly being used in productions ranging from films like Avengers: Endgame and the upcoming Artemis Fowl to TV series like Game of Thrones.

The Virtual Production Field Guide offers an in-depth look at different types of techniques from creating and integrating high-quality CG elements live on set to virtual location scouting to using photoreal LED walls for in-camera VFX. It provides firsthand insights from award-winning professionals who have used these techniques – including directors Kenneth Branagh and Wes Ball, producers Connie Kennedy and Ryan Stafford, cinematographers Bill Pope and Haris Zambarloukos, VFX supervisors Ben Grossmann and Sam Nicholson, virtual production supervisors Kaya Jabar and Glenn Derry, editor Dan Lebental, previs supervisor Felix Jorge, stunt coordinators Guy and Harrison Norris, production designer Alex McDowell, and grip Kim Heath.

As mentioned, the guide is dense with information, so we decided to run an excerpt to give you an idea of what it covers.

Glenn Derry

Here is an interview with Glenn Derry, founder and VP of visual effects at Fox VFX Lab, which offers a variety of virtual production services with a focus on performance capture. Derry is known for his work as a virtual production supervisor on projects like Avatar, Real Steel and The Jungle Book.

Let’s find out more.

How has performance capture evolved since projects such as The Polar Express?
In those earlier eras, there was no realtime visualization during capture. You captured everything as a standalone piece, and then you did what they called the director layout. After the fact, you would assemble the animation sequences from the motion data captured. Today, we’ve got a combo platter where we’re able to visualize in realtime.
When we bring a cinematographer in, he can start lining up shots with another device called the hybrid camera. It’s a tracked reference camera that he can handhold, and I can immediately toggle between an Unreal overview and a camera view of that scene. The earlier process was minimal in terms of aesthetics. We did everything we could in MotionBuilder, and we made it look as good as it could. Now we can make a lot more mission-critical decisions earlier in the process because the aesthetics of the renders look a lot better.

What are some additional uses for performance capture?
Sometimes we’re working with a pitch piece, where the studio is deciding whether they want to make a movie at all. We use the capture stage to show what the director has in mind tonally and how the project could feel. That might be a short little pitch piece, or something bigger: for Call of the Wild, we created 20 minutes and three key scenes from the film to show the studio we could make it work.

The second the movie gets greenlit, we flip over into preproduction. Now we’re breaking down the full script and working with the art department to create concept art. Then we build the movie’s world out around those concepts.

We have our team doing environmental builds based on sketches. Or in some cases, the concept artists themselves are in Unreal Engine doing the environments. Then our virtual art department (VAD) cleans those up and optimizes them for realtime.

Are the artists modeling directly in Unreal Engine?
The artists model in Maya, Modo, 3ds Max, etc. — we’re not particular about the application as long as the output is FBX. The look development, which is where the texturing happens, is all done within Unreal. We’ll also have artists working in Substance Painter and it will auto-update in Unreal. We have to keep track of assets through the entire process, all the way through to the last visual effects vendor.

How do you handle the level of detail decimation so realtime assets can be reused for visual effects?
The same way we would work on AAA games. We begin with high-resolution detail and then use combinations of texture maps, normal maps and bump maps. That allows us to get high-texture detail without a huge polygon count. There are also some amazing LOD [level of detail] tools built into Unreal, which enable us to take a high-resolution asset and derive something that looks pretty much identical unless you’re right next to it, but runs at a much higher frame rate.
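
For a rough sense of how such a ladder works, each LOD level typically halves the triangle count and kicks in when the asset occupies a smaller fraction of the screen. The Python sketch below puts invented numbers on that pattern; Unreal’s built-in LOD tools derive their own reduction settings.

    # Back-of-envelope LOD ladder: each level halves the triangle count and is
    # used when the asset occupies a smaller fraction of the screen height.
    BASE_TRIANGLES = 2_000_000   # hypothetical hero-asset polygon budget
    NUM_LEVELS = 4

    for lod in range(NUM_LEVELS):
        triangles = BASE_TRIANGLES // (2 ** lod)
        screen_size = 1.0 / (2 ** lod)   # swap threshold as a screen fraction
        print(f"LOD{lod}: ~{triangles:,} triangles, used below screen size {screen_size:.2f}")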

Do you find there’s a learning curve for crew members more accustomed to traditional production?
We’re the team productions come to when they want to do realtime on live-action sets. That’s pretty much all we do. That said, it requires prep, and if you want it to look great, you have to make decisions. If you were shooting rear projection back in the 1940s, or the large rear-projection systems on Terminator 2, you still had to have all that material pre-shot to make it work.
It’s the same concept in realtime virtual production. If you want to see it look great in Unreal live on the day, you can’t just show up and decide. You have to pre-build that world and figure out how it’s going to integrate.

The visual effects team and the virtual production team have to be involved from day one. They can’t just be brought in at the last minute. And that’s a significant change for producers and productions in general. It’s not a tough pill to swallow; it’s just a very different methodology.

How does the cinematographer collaborate with performance capture?
There are two schools of thought: one is to work live with camera operators, shooting the tangible part of the action that’s going on, as the camera is an actor in the scene as much as any of the people are. You can choreograph it all out live if you’ve got the performers and the suits. The other version of it is treated more like a stage play. Then you come back and do all the camera coverage later. I’ve seen DPs like Bill Pope and Caleb Deschanel pick this right up.

How is the experience for actors working in suits and a capture volume?
One of the harder problems we deal with is eye lines. How do we assist the actors so that they’re immersed in this and don’t just look around at a bunch of gray-box material on a set? On any modern visual effects movie, you’re going to be standing in front of a 50-foot-tall bluescreen at some point.

Performance capture is in some ways more actor-centric than a traditional set because there aren’t all the other distractions in a volume, such as complex lighting and camera setup time. The director gets to focus in on the actors. The challenge is getting the actors to interact with something unseen. We’ll project pieces of the set on the walls and use lasers for eye lines. The quality of today’s HMDs is also excellent for showing the actors what they would be seeing.

How do you see performance capture tools evolving?
I think a lot of the stuff we’re prototyping today will soon be available to consumers, home content creators, YouTubers, etc. A lot of what Epic develops also gets released in the engine. Money won’t be the driver in terms of being able to use the tools; your creative vision will be.

My teenage son uses Unreal Engine to storyboard. He knows how to do fly-throughs and use the little camera tools we built — he’s all over it. As it becomes easier to create photorealistic visual effects in realtime with a smaller team and at very high fidelity, the movie business will change dramatically.

Something that used to cost $10 million to produce might be a million or less. It’s not going to take away from artists; you still need them. But you won’t necessarily need these behemoth post companies because you’ll be able to do a lot more yourself. It’s just like desktop video — what used to take hundreds of thousands of dollars’ worth of Flame artists, you can now do yourself in After Effects.

Do you see new opportunities arising as a result of this democratization?
Yes, there are a lot of opportunities. High-quality, good-looking CG assets are still expensive to produce and expensive to make look great. There are already stock sites like TurboSquid and CGTrader where you can purchase beautiful assets economically.

But with the final assembly and coalescing of environments and characters there’s still a lot of need for talented people to do it effectively. I can see companies emerging out of that necessity. We spend a lot of time talking about assets because it’s the core of everything we do. You need to have a set to shoot on and you need compelling characters, which is why actors won’t go away.

What’s happening today isn’t even the tip of the iceberg. There are going to be 50 more big technological breakthroughs along the way. There’s tons of new content being created for Apple, Netflix, Amazon, Disney+, etc. And they’re all going to leverage virtual production.

What’s changing is previs’ role and methodology in the overall scheme of production.
While you might have previously conceived of previs as focused on the preproduction phase of a project and less integral to production, that conception shifts with a realtime engine. Previs is also typically a hands-off collaboration. In a traditional pipeline, a previs artist receives creative notes and art direction, then goes off to create animation and presents it back to creatives later for feedback.

In the realtime model, because the assets are directly malleable and rendering time is not a limiting factor, creatives can be much more directly and interactively involved in the process. This leads to higher levels of agency and creative satisfaction for all involved. This also means that instead of working with just a supervisor you might be interacting with the director, editor and cinematographer to design sequences and shots earlier in the project. They’re often right in the room with you as you edit the previs sequence and watch the results together in realtime.

Previs image quality has continued to increase in visual fidelity, which means a closer relationship between previs and final-pixel image quality. When the assets you develop as a previs artist are of sufficient quality, they may form the basis of final models for visual effects. The line between previs and final will continue to blur.

The efficiency of modeling assets only once is evident to all involved. By spending the time early in the project to create models of a very high quality, post begins at the outset of a project. Instead of waiting until the final phase of post to deliver the higher-quality models, the production has those assets from the beginning. And the models can also be fed into ancillary areas such as marketing, games, toys and more.

Beecham House‘s VFX take viewers back in time

Cambridge, UK-based Vine FX was the sole visual effects vendor on Gurinder Chadha’s Beecham House, a new Sunday night drama airing on ITV in the UK. Set in the India of 1795, Beecham House is the story of John Beecham (Tom Bateman), an Englishman who resigned from military service to set up as an honorable trader of the East India Company.

The series was shot at Ealing Studios and at some locations in India, with the visual effects work focusing on the Port of Delhi, the emperor’s palace and Beecham’s house. Vine FX founder Michael Illingworth assisted during development of the series and supervised his team of artists, creating intricate set extensions, matte paintings and period assets.

To make the shots believable and true to the era, the Vine FX team consulted closely with the show’s production designer and researched the period thoroughly. All modern elements — wires, telegraph poles, cars and lamp posts — had to be removed from the shoot footage, but the biggest challenge for the team was the Port of Delhi itself, a key location in the series.

Vine FX created a digital matte painting to extend the port and added numerous 3D boats and 3D people working on the docks to create a busy working port of 1795 — a complex task achieved by the expert eye of the Vine team.

“The success of this type of VFX is in its subtlety. We had to create a Delhi of 1795 that the audience believed, and that involved a great deal of research into how it would have looked, which was essential to making it realistic,” says Illingworth. “Hopefully, we managed to do this. I’m particularly happy with the finished port sequences, as originally there were just three boats.

“I worked very closely with on-set supervisor Oliver Milburn while he was on set in India, so I was very much part of the production process in terms of VFX,” he continues. “Oliver would send me reference material from the shoot; this is always fundamental to the outcome of the VFX, as it allows you to plan ahead and work out any potential upcoming challenges. I was working on the VFX in Cambridge while Oliver was on set in Delhi — perfect!”

Vine FX used Photoshop and Nuke as its main tools. The artists modeled assets with Maya and ZBrush and painted assets using Substance Painter. They rendered with Arnold.

Vine FX is currently working on War of the Worlds for Fox Networks and Canal+, due for release next year.

The Umbrella Academy‘s Emmy-nominated VFX supe Everett Burrell

By Iain Blair

If all ambitious TV shows with a ton of visual effects aspire to be cinematic, then Netflix’s The Umbrella Academy has to be the gold standard. The acclaimed sci-fi, superhero, adventure mash-up was just Emmy-nominated for its season-ending episode “The White Violin,” which showcased a full range of spectacular VFX. This included everything from the fully-CG Dr. Pogo to blowing up the moon and a mansion to the characters’ varied superpowers. Those VFX, mainly created by movie powerhouse Weta Digital in New Zealand and Spin VFX in Toronto, indeed rival anything in cinema. This is partly thanks to Netflix’s 4K pipeline.

The Umbrella Academy is based on the popular, Eisner Award-winning comics and graphic novels created and written by Gerard Way (“My Chemical Romance”), illustrated by Gabriel Bá, and published by Dark Horse Comics.

The story starts when, on the same day in 1989, 43 infants are born to unconnected women who showed no signs of pregnancy the day before. Seven are adopted by Sir Reginald Hargreeves, a billionaire industrialist, who creates The Umbrella Academy and prepares his “children” to save the world. But not everything went according to plan. In their teenage years, the family fractured and the team disbanded. Now, six of the surviving members reunite upon the news of Hargreeves’ death. Luther, Diego, Allison, Klaus, Vanya and Number Five work together to solve a mystery surrounding their father’s death. But the estranged family once again begins to come apart due to divergent personalities and abilities, not to mention the imminent threat of a global apocalypse.

The live-action series stars Ellen Page, Tom Hopper, Emmy Raver-Lampman, Robert Sheehan, David Castañeda, Aidan Gallagher, Cameron Britton and Mary J. Blige. It is produced by Universal Content Productions for Netflix. Steve Blackman (Fargo, Altered Carbon) is the executive producer and showrunner, with additional executive producers Jeff F. King, Bluegrass Television, and Mike Richardson and Keith Goldberg from Dark Horse Entertainment.

Everett Burrell

I spoke with senior visual effects supervisor and co-producer Everett Burrell (Pan’s Labyrinth, Altered Carbon), who has an Emmy for his work on Babylon 5, about creating the VFX and the 4K pipeline.

Congratulations on being nominated for the first season-ending episode “The White Violin,” which showcased so many impressive visual effects.
Thanks. We’re all really proud of the work.

Have you started season two?
Yes, and we’re already knee-deep in the shooting up in Canada. We shoot in Toronto, where we’re based, as well as Hamilton, which has this great period look. So we’re up there quite a bit. We’re just back here in LA for a couple of weeks working on editorial with Steve Blackman, the executive producer and showrunner. Our offices are in Encino, in a merchant bank building. I’m a co-producer as well, so I also deal a lot with editorial — more than normal.

Have you planned out all the VFX for the new season?
To a certain extent. We’re working on the scripts and have a good jump on them. We definitely plan to blow the first season out of the water in terms of what we come up with.

What are the biggest challenges of creating all the VFX on the show?
The big one is the sheer variety of VFX, which are all over the map in terms of the various types. They go from a completely animated talking CG chimpanzee Dr. Pogo to creating a very unusual apocalyptic world, with scenes like blowing up the moon and, of course, all the superpowers. One of the hardest things we had to do — which no one will ever know just watching it — was a ton of leaf replacement on trees.

Digital leaves via Montreal’s Folks.

When we began shooting, it was winter and there were no leaves on the trees. When we got to editorial we realized that the story spans just eight days, so it wouldn’t make any sense if in one scene we had no leaves and in the next we had leaves. So we had to add every single leaf to the trees for all of the first five episodes, which was a huge amount of work. The way we did it was to go back to all the locations and re-shoot all the trees from the same angles once they were in bloom. Then we had to composite all that in. Folks in Montreal did all of it, and it was very complicated. Lola did a lot of great work on Hargreeves, getting his young look for the early 1900s and cleaning up the hair and wrinkles and making it all look totally realistic. That was very tricky too.

Netflix is ahead of the curve thanks to its 4K policy. Tell us about the pipeline.
For a start, we shoot with the ARRI Alexa 65, which is a very robust cinema camera that was used on The Revenant. With its 65mm sensor, it’s meant for big-scope, epic movies, and we decided to go with it to give our show that great cinema look. The depth of field is like film, and it can also emulate film grain for this fantastic look. That camera shoots natively at 5K — it won’t go any lower. That means we’re at a much higher resolution than any other show out there.

And you’re right, Netflix requires a 4K master as future-proofing for streaming and so on. Those very high standards then trickle down to us and all the VFX. We also use a very unique system developed by Deluxe and Efilm called Portal, which basically stores the entire show in the cloud on a server somewhere, and we can get background plates to the vendors within 10 minutes. It’s amazing. Back in the old days, you’d have to make a request and maybe within 24 or 48 hours, you’d get those plates. So this system makes it almost instantaneous, and that’s a lifesaver.

Method blows up the moon.

How closely do you work with Steve Blackman and the editors?
I think Steve said it best: “There’s no daylight between the two of us.” We’re linked at the hip pretty much all the time. He comes to my office if he has issues, and I go to his if we have complications; we resolve all of it together in probably the best creative relationship I’ve ever had. He relies on me and counts on me, and I trust him completely. Bottom line: if we need to write ourselves out of a sticky situation, he’s also the head writer, so he’ll just go off and rewrite a scene to help us out.

How many VFX do you average for each show?
We average between 150 and 200 per episode. Last season we did nearly 2,000 in total, so it’s a huge amount for a TV show, and there’s a lot of data being pushed. Luckily, I have an amazing team, including my production manager Misato Shinohara. She’s just the best and really takes care of all the databases, and manages all the shot data, reference, slates and so on. All that stuff we take on set has to go into this massive database, and just maintaining that is a huge job.

Who are the main VFX vendors?
The VFX are mainly created by Weta in New Zealand and Spin VFX in Toronto. Weta did all the Pogo stuff. Then we have Folks, Lola, Marz, Deluxe Toronto, DigitalFilm Tree in LA… and then Method Studios in Vancouver did great work on our end-of-the-world apocalyptic sequence. They blew up the moon and had a chunk of it hitting the Earth, along with all the surrounding imagery. We started R&D on that pretty early to get a jump on it. We gave them storyboards and they did previz. We used that as a cut to get iterations of it all. There were a lot of particle simulations, which was pretty intense.

Weta created Dr. Pogo.

What have been the most difficult VFX sequences to create?
Just dealing with Pogo is obviously very demanding, and we had to come up with a fast shortcut to dealing with the photo-real look as we just don’t have the time or budget they have for the Planet of the Apes movies. The big thing is integrating him in the room as an actor with the live actors, and that was a huge challenge. We used just two witness cameras to capture our Pogo body performer. All the apocalyptic scenes were also very challenging because of the scale, and then those leaves were very hard to do and make look real. That alone took us a couple of months. And we might have the same problem this year, as we’re shooting in the summer through fall, and I’m praying that the leaves don’t start falling before we wrap.

What have been the main advances in technology that have really helped you pull off some of the show’s VFX?
I think the rendering and the graphics cards are the big ones, and the hardware talks together much more efficiently now. Even just a few years ago, it might have taken weeks and weeks to render a Pogo. Now we can do it in a day. Weta developed new software for creating the texture and fabric of Pogo’s clothes. They also refined their hair programs.


I assume as co-producer that you’re very involved with the DI?
I am… and keeping track of all that and making sure we keep pushing the envelope. We do the DI at Company 3 with colorist Jill Bogdanowicz, who’s a partner in all of this. She brings so much to the show, and her work is a big part of why it looks so good. I love the DI. It’s where all the magic happens, and I get in there early with Jill and take care of the VFX tweaks. Then Steve comes in and works on contrast and color tweaks. By the time Steve gets there, we’re probably 80% of the way there already.

What can fans expect from season two?
Bigger, better visual effects. We definitely pay attention to the fans. They love the graphic novel, so we’re getting more of that into the show.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Masv now integrates with Premiere for fast file delivery

Masv, which sends large video files via the cloud, is offering a new extension for Adobe Premiere. The extension simplifies the delivery of data-heavy video projects by embedding Masv’s accelerated cloud transfer technology directly within the NLE.

The new extension is available for free at www.massive.io/premiere or the Adobe Exchange.

The new Masv Panel for Premiere reliably renders, uploads and sends large (20GB and higher) files that are typically too big for conventional cloud transfer services. Masv sends files over a high-performance global network, exploiting users’ maximum Internet bandwidth.

“Today’s video professionals are increasingly independent and distributed globally. They need to deliver huge projects faster, often from home studios or remote locations, while collaborating with teams that can change from project to project,” says Dave Horne. “This new production paradigm has broken traditional transfer methods, namely the shipping of hard drives and use of expensive on-prem transfer tools.

“By bringing MASV directly inside Premiere Pro, now even the largest Premiere project can be delivered via Cloud, streamlining the export process and tightly integrating digital project delivery within editors’ workflows.”

Key Features:
• The new Masv extension installs in a dockable panel, integrating perfectly into Premiere Pro CC 2018 and higher (MacOS/Windows)
• Users can upload full projects, project sequences, files and folders from within Premiere Pro. The Masv Panel retries aggressively, sending files successfully even in poor networking conditions (see the sketch after this list).
• Users can render projects to any Adobe Media Encoder export preset and then send. Favorite export formats can be stored for quick use on future uploads.
• When exporting to Media Encoder, users can choose to automatically upload and send after rendering. Alternatively, they can opt to review the export before uploading.
• Users can monitor export and transfer progress, plus upload performance stats, in realtime.
• Users can distribute transfer notifications via email and Slack.
• Deliveries are secured by adding a password. Transfers are fully encrypted at rest and in flight and comply with GDPR standards.
• Users can set storage duration based on project requirements: a nearer delete date for maximum security, or a longer one for convenience.
• Users can set download limits to protect sensitive content and manage transfer costs.
• Users can send files from Premiere and then use the Masv Web Application to review delivery status, delivery costs and manage active deliveries easily.
• Users can send terabytes of data at very fast speeds without having to manage storage or deal with file size limits.
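
The retry behavior noted in the list is, generically, retry with exponential backoff. Masv has not published its client internals, so the Python sketch below is only an illustration of the pattern, with a simulated flaky call standing in for the real transfer.

    # Generic retry-with-backoff sketch; upload_chunk simulates a flaky network
    # call and is a hypothetical stand-in, not part of any Masv API.
    import random
    import time

    def upload_chunk(chunk: bytes) -> None:
        """Hypothetical stand-in for a transfer call that fails intermittently."""
        if random.random() < 0.5:
            raise ConnectionError("simulated network drop")

    def upload_with_retries(chunk: bytes, max_attempts: int = 8) -> None:
        for attempt in range(max_attempts):
            try:
                upload_chunk(chunk)
                return                    # success: stop retrying
            except ConnectionError:
                if attempt == max_attempts - 1:
                    raise
                # Exponential backoff with jitter, so a poor link is not hammered.
                time.sleep(min(60, 2 ** attempt) + random.random())

    upload_with_retries(b"example payload")

Capping the sleep at 60 seconds keeps worst-case waits bounded, while the jitter spreads out simultaneous retries.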

Masv launched a new version of the service in February, followed by a series of significant product updates.