
Category Archives: post production

The 71st NATAS Technology & Engineering Emmy Award winners

The National Academy of Television Arts & Sciences (NATAS) has announced the recipients of the 71st Annual Technology & Engineering Emmy Awards. The event will take place in partnership with the National Association of Broadcasters, during the NAB Show on Sunday, April 19 in Las Vegas.

The Technology & Engineering Emmy Awards are presented to a living individual, a company or a scientific or technical organization for developments and/or standardization in engineering technologies that either represent so extensive an improvement on existing methods or are so innovative in nature that they have materially affected television.

A Committee of engineers working in television considers technical developments in the industry and determines which, if any, merit an award.

“The Technology & Engineering Emmy Award was the first Emmy Award issued in 1949 and it laid the groundwork for all the other Emmys to come,” says Adam Sharp, CEO/president of NATAS. “We are especially excited to be honoring Yvette Kanouff with our Lifetime Achievement Award in Technology & Engineering.”

Kanouff has held CTO and president roles at various companies in the cable and media industry. Over the years, she has spearheaded transformational technologies, such as video on demand, cloud DVR, digital and on-demand advertising, streaming security and privacy.

And now the Awards recipients:

2020 Technical / Engineering Achievement Awards

Pioneering System for Live Performance-Based Animation Using Facial Recognition
– Adobe

HTML5 Development and Deployment of a Full TV Experience on Any Device
– Apple
– Google
– LG
– Microsoft
– Mozilla
– Opera
– Samsung

Pioneering Public Cloud-Based Linear Media Supply Chains
– AWS
– Discovery
– Evertz
– Fox Neo (Walt Disney Television)
– SDVI

Pioneering Development of Large Scale, Cloud Served, Broadcast Quality,
Linear Channel Transmission to Consumers
– Sling TV
– Sony PlayStation Vue
– Zattoo

Early Development of HSM Systems that Created a Pivotal Improvement in Broadcast Workflows
– Dell (Isilon)
– IBM
– Masstech
– Quantum

Pioneering Development and Deployment of Hybrid Fiber Coax Network Architecture
– CableLabs

Pioneering Development of the CCD Image Sensor
– Bell Labs
– Michael Tompsett

VoCIP (Video over Bonded Cellular Internet)
– AVIWEST
– Dejero
– LiveU
– TVU Networks

Ultra-High Sensitivity HDTV Camera
– Canon
– Flovel

Development of Synchronized Multi-Channel Uncompressed Audio Transport Over IP Networks
– ALC NetworX
– Audinate
– Audio Engineering Society
– Kevin Gross
– QSC
– Telos Alliance
– Wheatstone

Emmy Statue image courtesy of ATAS/NATAS

Review: HP’s ZBook G6 mobile workstation

By Brady Betzel

In a year that’s seen AMD reveal an affordable 64-core processor with its Threadripper 3, it appears as though we are picking up steam toward next-level computing.

Apple finally released its much-anticipated Mac Pro (which comes with a hefty price tag for the 1.5TB upgrade), and custom-build workstation companies — like Boxx and Puget Systems — can customize good-looking systems to fit any need you can imagine. Additionally, over the past few months, I have seen mobile workstations leveling the playing field with their desktop counterparts.

HP is well-known in the M&E community for its powerhouse workstations. Since I started my career, I have worked on either a Mac Pro or an HP. Both have their strong points. However, for workstation users who must be able to travel with their systems, there have always been some technical abilities you had to give up in exchange for a smaller footprint. That is, until now.

The newly released HP ZBook 15 G6 has become the rising tide that will float all the boats in the mobile workstation market. I know I’ve said it before, but the classification of “workstation” is technically much more than a term companies just throw around. Systems with workstation-level classification (at least from HP) are meant to be powered on and run at high levels 24 hours a day, seven days a week, 365 days a year.

They are built with high-quality, enterprise-level components, such as ECC (error correcting code) memory. ECC memory will self-correct errors that it sees, preventing things like blue screens of death and other screen freezes. ECC memory comes at a cost, and that is why these workstations are priced a little higher than a standard computer system. In addition, the warranties are a little more inclusive — the HP ZBook 15 G6 comes with a standard three-year/on-site service warranty.
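To make that self-correction concrete, here is a minimal sketch of the principle behind ECC, assuming a textbook Hamming(7,4) code rather than the wider SECDED codes real server DIMMs use: redundant parity bits let the memory controller locate a single flipped bit and silently flip it back.

```python
# Illustration of the ECC principle with a Hamming(7,4) code:
# four data bits are stored with three parity bits, and any
# single-bit error can be located and corrected on read.
# (Real ECC DIMMs use wider codes; the idea is the same.)

def encode(nibble):
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # codeword positions 1..7

def correct(bits):
    # Recomputed parities form a "syndrome" that spells out the
    # 1-based position of the corrupted bit (0 means no error).
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 + (s2 << 1) + (s3 << 2)
    if syndrome:
        bits[syndrome - 1] ^= 1  # flip the bad bit back
    return bits

word = encode(0b1011)
word[4] ^= 1                            # simulate a single-bit memory error
assert correct(word) == encode(0b1011)  # error found and fixed
```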

Beyond the “workstation” classification, the ZBook 15 G6 is amazingly powerful, brutally strong and incredibly colorful and bright. But what really matters is under the hood. I was sent the HP ZBook 15 G6 that retails for $4,096 and contains the following specs:
– Intel Xeon E-2286M (eight cores/16 threads — 2.4GHz base/5GHz Turbo)
– Nvidia Quadro RTX 3000 (6GB VRAM)
– 15.6-inch UHD HP DreamColor display, anti-glare, WLED backlit, 600 nits, 100% DCI-P3
– 64GB DDR4 2667MHz
– 1TB PCIe Gen 3 x4 NVMe SSD TLC
– FHD webcam 1080p plus IR camera
– HP collaboration keyboard with dual point stick
– Fingerprint sensor
– Smart Card reader
– Intel Wi-Fi 6 AX200, 802.11ax 2×2 + BT 4.2 combo adapter (vPro)
– HP long-life battery four-cell 90 Wh
– Three-year limited warranty

The ZBook 15 G6 is a high-end mobile workstation with a price that reflects it. However, as I said earlier, true workstations are built to withstand constant use and, in this case, abuse. The ZBook 15 G6 has been designed to pass up to 21 extensive MIL-STD 810G tests, which is essentially worst-case-scenario testing: drop testing from around four feet, sand and dust testing, solar radiation testing (the sun beating down on the laptop for an extended period) and much more.

The exterior of the G6 is made of aluminum and built to withstand abuse. The latest G6 is a little bulky/boxy, in my opinion, but I can see why it would hold up to some bumps and bruises, all while working at blazingly fast speeds, so bulk isn’t a huge issue for me. Because of that bulk, you can imagine that this isn’t the lightest laptop either. It weighs in at 5.79 pounds for the lowest end and measures 1 inch by 14.8 inches by 10.4 inches.

On the bottom of the workstation is an easy-to-access panel for performing repairs and upgrades yourself. I really like the bottom compartment. I opened it and noticed I could throw in an additional NVMe drive and an SSD if needed. You can also access memory here. I love this because not only can you perform easy repairs yourself, but you can perform upgrades or part replacements without voiding your warranty on the original equipment. I’m glad to see that HP kept this in mind.

The keyboard is smaller than a full-size version but has a number keypad, which I love using when typing in timecodes. It is such a time-saver for me. (I credit entering repair order numbers when I fixed computers at Best Buy as a teenager.) On the top of the keyboard are some handy shortcuts if you do web conferences or calls on your computer, including answering and ending calls. The Bang & Olufsen speakers are some of the best laptop speakers I’ve heard. While they aren’t quite monitor-quality, they do have some nice sound on the low end that I was able to fine-tune in the Bang & Olufsen audio control app.

Software Tests
All right, enough of the technical specs. Let’s get on to what people really want to know — how the HP ZBook 15 G6 performs while using apps like Blackmagic’s DaVinci Resolve and Adobe Premiere Pro. I used sample Red and Blackmagic Raw footage that I use a lot in testing. You can grab the Red footage here and the BRaw footage here. Keep in mind you will need to download the Blackmagic Raw software to edit with BRaw inside of Adobe products, which you can find here.

Performance monitor while exporting in Resolve with VFX.

For testing in Resolve and Premiere, I strung out one minute each of 4K, 6K and 8K Red media in one sequence and the 4608×2592 4K and 6K BRaw media in another. In the middle of my testing, Resolve got a major Red SDK upgrade that allows for better realtime playback of Red Raw files if you have an Nvidia CUDA-based GPU.

First up is Resolve 16.1.1 and then Resolve 16.1.2. Both sequences are set to UHD (3840×2160) resolution. One sequence of each codec contains just color correction, while another of each codec contains effects and color correction. The Premiere sequence with color and effects contains basic Lumetri color correction, noise reduction (50) and a Gaussian blur with settings of 0.4. In Resolve, the only difference in the color and effects sequence is that the noise reduction is spatial and set to Enhanced, Medium and 25/25.

In Resolve, the 4K Red media would play in realtime while the 6K (RedCode 3:1) would jump down to about 14fps to 15fps, and the 8K (RedCode 7:1) would play at 10fps at full resolution with just color correction. With effects, the 4K media would play at 20fps, 6K at 3fps and 8K at 10fps. The Blackmagic Raw video would play at real time with just color correction and around 3fps to 4fps with effects.

This is where I talk about just how loud the fans in the ZBook 15 G6 can get. When running exports and benchmarks, the fans are noticeable and a little distracting. Obviously, we are running some high-end testing with processor- and GPU-intensive tests but still, the fans were noticeable. However, the bottom of the mobile workstation was not terribly hot, unlike the MacBook Pros I’ve tested before. So my lap was not on fire.

In my export testing, I used those same sequences as before, exporting from Adobe Premiere Pro 2020. I exported UHD files using Adobe Media Encoder in different containers and codecs: H.264 (MOV), H.265 (MOV), ProRes HQ, DPX, DCP and MXF OP1a (XDCAM). The MXF OP1a was a 1920x1080p export.
Here are my results:

Red (4K, 6K, 8K)
– Color Only: H.264 – 5:27; H.265 – 4:45; ProRes HQ – 4:29; DPX – 3:37; DCP – 10:38; MXF OP1a – 2:31
– Color, Noise Reduction (50), Gaussian Blur 0.4: H.264 – 4:56; H.265 – 4:56; ProRes HQ – 4:36; DPX – 4:02; DCP – 8:20; MXF OP1a – 2:41

Blackmagic Raw
– Color Only: H.264 – 2:05; H.265 – 2:19; ProRes HQ – 2:04; DPX – 3:33; DCP – 4:05; MXF OP1a – 1:38
– Color, Noise Reduction (50), Gaussian Blur 0.4: H.264 – 1:59; H.265 – 2:22; ProRes HQ – 2:07; DPX – 3:49; DCP – 3:45; MXF OP1a – 1:51

What is surprising is that when adding effects like noise reduction and a Gaussian blur in Premiere, the export times stayed similar. While using the ZBook 15 G6, I noticed my export times improved when I upgraded driver versions, so I redid my tests with the latest Nvidia drivers to make sure I was consistent. The drivers also solved an issue in which Resolve wasn’t reading BRaw properly, so remember to always research drivers.

The Nvidia Quadro RTX 3000 really pulled its weight when editing and exporting in both Premiere and Resolve. In fact, in previous versions of Premiere, I noticed that the GPU was not really being used as well as it should have been. With the Premiere Pro 2020 upgrade it seems like Adobe really upped its GPU usage game — at some points I saw 100% GPU usage.

In Resolve, I performed similar tests, but instead of ProRes HQ I exported a DNxHR QuickTime file, and instead of a DCP I exported an IMF package. For the most part, these are stock exports in Resolve’s Deliver page, except that I forced Video Levels and set Force Debayer and Resizing to Highest Quality. Here are my results from Resolve versions 16.1.1 and 16.1.2 (16.1.2 in parentheses):

– Red (4K, 6K, 8K) Color Only: H.264 – 2:17 (2:31); H.265 – 2:23 (2:37); DNxHR – 2:59 (3:06); IMF – 6:37 (6:40); DPX – 2:48 (2:45); MXF OP1a – 2:45 (2:33)

– Red Color, Noise Reduction (Spatial, Enhanced, Medium, 25/25), Gaussian Blur 0.4: H.264 – 5:00 (5:15); H.265 – 5:18 (5:21); DNxHR – 5:25 (5:02); IMF – 5:28 (5:11); DPX – 5:23 (5:02); MXF OP1a – 5:20 (4:54)

– Blackmagic Raw Color Only: H.264 – 0:26 (0:25); H.265 – 0:31 (0:30); DNxHR – 0:50 (0:50); IMF – 3:51 (3:36); DPX – 0:46 (0:46); MXF OP1a – 0:23 (0:22)

– Blackmagic Raw Color, Noise Reduction (Spatial, Enhanced, Medium, 25/25), Gaussian Blur 0.4: H.264 – 7:51 (7:53); H.265 – 7:45 (8:01); DNxHR – 7:53 (8:00); IMF – 8:13 (7:56); DPX – 7:54 (8:18); MXF OP1a – 7:58 (7:57)

Interesting to note: Exporting Red footage with color correction only was significantly faster from Resolve, but for Red footage with effects applied, export times were similar between Resolve and Premiere. With the CUDA Red SDK update to Resolve in 16.1.2, I thought I would see a large improvement, but I didn’t. I saw an approximate 10% increase in playback but no improvement in export times.

Puget

Puget Systems has some great benchmarking tools, so I reached out to Matt Bach, Puget Systems’ senior labs technician, about my findings. He suggested that the mobile Xeon could possibly still be the bottleneck for Resolve. In his testing he saw a larger increase in speed with AMD Threadripper 3 and Intel i9-based systems. Regardless, I am kind of going deep on realtime playback of 8K Red Raw media on a mobile workstation — what a time we are in. Nonetheless, Blackmagic Raw footage was insanely fast when exporting out of Resolve, while export time for the Blackmagic Raw footage with effects was higher than I expected. There was a consistent use of the GPU and CPU in Resolve much like in the new version of Premiere 2020, which is a trend that’s nice to see.

In addition to Premiere and Resolve testing, I ran some common benchmarks that provide a good 30,000-foot view of the HP ZBook 15 G6 when comparing it to other systems. I decided to use the Puget Systems benchmarking tools. Unfortunately, at the time of this review, the tools were only working properly with Premiere and After Effects 2019, so I ran the After Effects benchmark using the 2019 version. The ZBook 15 G6 received an overall score of 802, render score of 79, preview score of 75.2 and tracking score of 86.4. These are solid numbers that beat out some desktop systems I have tested.
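As a quick cross-check, those Puget numbers hang together: the overall score works out to the mean of the three sub-scores scaled by ten. (That is an observation from the figures above, not Puget’s documented formula.)

```python
# Checking the quoted Puget After Effects scores for internal consistency.
render, preview, tracking = 79, 75.2, 86.4
overall = (render + preview + tracking) / 3 * 10
print(round(overall))  # 802, matching the reported overall score
```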

Corona

To test some 3D applications, I ran Cinebench R20, which gave a CPU score of 3243, a CPU (single-core) score of 470 and an M/P ratio of 6.90x. I recently began running the Gooseberry benchmark scene in Blender to get a better sense of 3D rendering performance; it took 29:56 to render. Using the Corona benchmark, it took 2:33 to render 16 passes at 3,216,368 rays/s. In OctaneBench, the ZBook 15 G6 received a score of 139.79. In the V-Ray CPU benchmark it received 9,833 ksamples, and in the V-Ray GPU test, 228 mpaths. I’m not going to lie; I really don’t know a lot about what these benchmarks are trying to tell me, but they might help you decide whether this is the mobile workstation for your work.
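One of those figures is easy to decode: Cinebench’s M/P ratio is simply the multi-core score divided by the single-core score, a rough measure of how well the Xeon’s eight cores/16 threads scale when fully loaded.

```python
# Cinebench R20 M/P ratio = multi-core score / single-core score.
multi_core, single_core = 3243, 470
print(f"M/P ratio: {multi_core / single_core:.2f}x")  # 6.90x
```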

Cinebench

One benchmark I thought was interesting between driver updates for the Nvidia Quadro RTX 3000 was NeatBench, from Neat Video — the noise reduction plugin for video. It measures whether your system should use the CPU, the GPU or a combination of the two to run Neat Video. Initially, the best combination result was to use the CPU only (seven cores) at 11.5fps.

After updating to the latest Nvidia drivers, the best combination result was to use the CPU (seven cores) and GPU (Quadro RTX 3000) at 24.2fps. A pretty incredible jump just from a driver update. Moral of the story: Make sure you have the correct drivers always!
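Conceptually, what a tool like NeatBench does is simple: run the same denoise workload on each device combination, measure frames per second and recommend the fastest. Here is a hedged sketch of that selection logic; denoise_frame is a hypothetical stand-in for the real filter, not Neat Video’s API.

```python
import time

# Sketch of benchmark-style device selection: time one workload per
# device combination and pick the fastest. denoise_frame() is a
# hypothetical stand-in for the actual noise-reduction filter.

def measure_fps(denoise_frame, frames, devices):
    start = time.perf_counter()
    for frame in frames:
        denoise_frame(frame, devices)
    return len(frames) / (time.perf_counter() - start)

def best_combination(denoise_frame, frames, combos):
    scores = {name: measure_fps(denoise_frame, frames, devs)
              for name, devs in combos.items()}
    return max(scores, key=scores.get), scores

# e.g. combos = {"CPU only (7 cores)": ["cpu"],
#                "CPU (7 cores) + Quadro RTX 3000": ["cpu", "gpu"]}
```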

Summing Up
Overall, the HP ZBook 15 G6 is a powerful mobile workstation that will work well across the board. From 3D to color correction apps, the Xeon processor in combination with the Quadro RTX 3000 will get you running 4K video without a problem. With the HP DreamColor anti-glare display using up to 600 nits of brightness and covering 100% of the DCI-P3 color space, coupled with the HDR option, you can rely on the built-in display for color accuracy if you don’t have your output monitor attached. And with features like two USB Type-C ports (Thunderbolt 3 plus DP 1.4 plus USB 3.1 Gen 2), you can connect external monitors for a larger view of your work.

The HP Fast Charge will get you out of a dead battery fiasco with the ability to go from 0% to 50% charge in 45 minutes. All of this for around $4,000 seems to be a pretty low price to pay, especially because it includes a three-year on-site warranty and because the device is certified to work seamlessly with many apps that pros use with HP’s independent software vendor verifications.

If you are looking for a mobile workstation upgrade, are moving from desktop to mobile or want an alternative to a MacBook Pro, you should price a system out online.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


The Mill opens boutique studio in Berlin

Technicolor’s The Mill has officially launched in Berlin. This new boutique studio is located in the heart of Berlin, situated in the creative hub of Mitte, near many of Germany’s agencies, production companies and brands.

The Mill has been working with German clients for years. Recent projects include the Mercedes’ Bertha Benz spot with director Sebastian Strasser; Netto’s The Easter Surprise, directed in-house by The Mill; and BMW The 8 with director Daniel Wolfe. The new studio will bring The Mill’s full range of creative services from color to experiential and interactive, as well as visual effects and design.

The Mill Berlin crew

Creative director Greg Spencer will lead the creative team. He is a multi-award-winning creative, having won several VES, Cannes Lions and British Arrow awards. His recent projects include Carlsberg’s The Lake, PlayStation’s This Could Be You and Eve Cuddly Toy. Spencer also played a role in some of Mill Film’s major titles: he was the 2D supervisor for Les Misérables and also worked on the Lord of the Rings trilogy. His resume also includes campaigns for brands such as Nike and Samsung.

Executive producer Justin Stiebel moves from The Mill London, where he has been since early 2014, to manage client relationships and new business. Since joining the company, Stiebel has produced spots such as Audi’s Next Level and the Mini’s “The Faith of a Few” campaign. He has also collaborated with directors such as Sebastian Strasser, Markus Walter and Daniel Wolfe while working on brands like Mercedes, Audi and BMW.

Sean Costelloe is managing director of The Mill London and The Mill Berlin.

Main Image Caption: (L-R) Justin Stiebel and Greg Spencer


Quantum F1000: a lower-cost NVMe storage option

Quantum is now offering the F1000, a lower-priced addition to the Quantum F-Series family of NVMe storage appliances. Using the software-defined architecture introduced with the F2000, the F1000 offers “ultra-fast streaming” performance and response times at a lower entry price. The F-Series can be used to accelerate the capture, edit and finishing of high-definition content and to accelerate VFX and CGI render speeds up to 100 times for developing augmented and virtual reality.

The Quantum F-Series was designed to handle content such as HD video used for movie, TV and sports production, advertising content or image-based workloads that require high-speed processing. Pros are using F-Series NVMe systems as part of Quantum’s StorNext scale-out file storage cluster and leveraging the StorNext data management capabilities to move data between NVMe storage pools and other storage pools. Users can take advantage of the performance boost NVMe provides for workloads that require it, while continuing to use lower-cost storage for data where performance is less critical.
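The tiering idea itself is straightforward, even though StorNext’s policy engine is far more sophisticated. Below is a minimal sketch assuming a simple last-access rule; the pool paths, threshold and function names are invented for illustration and are not Quantum’s API.

```python
import shutil
import time
from pathlib import Path

# Toy tiering policy: keep recently touched media on the NVMe pool,
# demote everything else to cheaper capacity storage. StorNext does
# this with policies inside the file system; this only shows the idea.

HOT_AGE = 24 * 3600  # files touched within the last day stay on NVMe

def place(file: Path, nvme_pool: Path, capacity_pool: Path) -> Path:
    age = time.time() - file.stat().st_atime
    target = nvme_pool if age < HOT_AGE else capacity_pool
    if file.parent != target:
        file = Path(shutil.move(str(file), str(target / file.name)))
    return file
```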

Quantum F-Series NVMe appliances accelerate pro workloads and also help customers move from Fibre Channel networks to less expensive IP-based networks. User feedback has shown that pros need a lower cost of entry into NVMe technology, which is what led Quantum to develop the F1000. According to Quantum, the F1000 offers performance that is five to 10 times faster than an equivalent SAS SSD storage array at a similar price.

The F1000 is available in two capacity points: 39TB and 77TB. It offers the same connectivity options as the F2000 — 32Gb Fibre Channel or iSER/RDMA using 100Gb Ethernet — and is designed to be deployed as part of a StorNext scale-out file storage cluster.


DP Chat: The Grudge’s Zachary Galler

By Randi Altman

Being on set is like coming home for New York-based cinematographer Zachary Galler, who as a child would tag along with his father while he directed television and film projects. The younger Galler started in the industry as a lighting technician and quickly worked his way up to shooting various features and series.

His first feature as a cinematographer, The Sleepwalker, premiered at Sundance in 2014 and was later distributed by IFC. His second feature, She’s Lost Control, was awarded the C.I.C.A.E. Award at the Berlin International Film Festival later that year. Television credits include all eight episodes of Discovery’s scripted series Manhunt: Unabomber, Hulu’s The Act and USA’s Briarpatch (coming in February). He recently completed the Nicolas Pesce-directed thriller The Grudge, which stars John Cho and Betty Gilpin and is in theaters now.

Tell us about The Grudge. How early did you get involved in planning, and what direction were you given by the director about the look he wanted?
Nick and I worked together on a movie he directed called Piercing. That was our first collaboration, but we discovered that we had very similar ideas and working styles, and we formed a special relationship. Shortly after that project, we started talking about The Grudge, and about a year later we were shooting. We talked a lot about how this movie should feel and how we could achieve something new and different from anything either of us had done before. We used a lot of look-books and movie references to communicate, so when it came time to shoot, we had the visual language down fluently, and that allowed us to keep each other consistent in execution.

How would you describe the look?
Nick really liked the bleach-bypass look from David Fincher’s Se7en, and I thought about a mix of that and (photographer) Bill Henson. We also knew that we had to differentiate between the different storyline threads in the movie, so we had lots to figure out. One of the threads is darker and looks very yellow, while another is warmer and more classic. Another is slightly more desaturated and darker. We did keep the same bleach-bypass look throughout but adjusted our color temperature, contrast and saturation accordingly. For a horror movie like this, I really wanted to be able to control where the shadow detail turned into black, because some of our scare scenes relied on that, so we made sure to light accordingly and were able to fine-tune most of that in-camera.

How did you work with the director and colorist to achieve that look?
We worked with FotoKem colorist Kostas Theodosiou (who used Blackmagic Resolve). I was shooting a TV show during the main color pass, so I only got to check in to set looks and approve final color, but Nick and Kostas did a beautiful job. Kostas is a master of contrast control and very tastefully helped us ride that line of where there should be detail and where there should not. He was definitely an important part of the collaboration and helped make the movie better.

Where was it shot and how long was the shoot?
We shot the movie in 35 days in Winnipeg, Canada.

How did you go about choosing the right camera and lenses for this project and why these tools?
Nick decided early on that he wanted to shoot this film anamorphic. Panavision has been an important partner for me on most of my projects, and I knew that I loved their glass. We got a range of different lenses from Panavision Toronto to help us differentiate our storylines — we shot one on T Series, one on Primo anamorphics and one on G Series anamorphics. The Alexa Mini was the camera of choice because of its low light sensitivity and more natural feel.

Now more general questions…

How did you become interested in cinematography?
My father was a director, so I would visit him on set a lot when I was growing up. I didn’t know quite what I wanted to do when I was young, but I knew that it involved being on set. After dropping out of film school, I got a job working in a lighting rental warehouse and started driving trucks and delivering lights to sets in New York. I had always loved taking pictures as a kid, and as I worked more and learned more, I realized that what I wanted to do was be a DP. I was very lucky in that I found some great collaborators early in my career who both pushed me and allowed me to fail. This is the greatest job in the world.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
Artistically, I am inspired by painters, photographers and other DPs. There are so many people doing such amazing work right now. As far as technology is concerned, I’m a bit slow with adopting, as I need to hold something in my hands or see what it does before I adopt it. I have been very lucky to get to work with some great crews, and often a camera assistant, gaffer or key grip will bring something new to the table. I love that type of collaboration.

 

DP Zachary Galler (right) and director Nicolas Pesce on the set of Screen Gems’ The Grudge.

What new technology has changed the way you work?
For some reason, I was resistant to using LUTs for a long time. The Grudge was actually the first time I relied on something that wasn’t close to just plain Rec 709. I always figured that if I could get the 709 feeling good when I got into color I’d be in great shape. Now, I realize how helpful they can be, and that you can push much further. I also think that the Astera LED tubes are amazing. They allow you to do so much so fast and put light in places that would be very hard to do with other traditional lighting units.

What are some of your best practices or rules you try to follow on each job?
I try to be pretty laid back on set, and I can only do that because I’m very picky about who I hire in prep. I try and let people run their departments as much as possible and give them as much information as possible — it’s like cooking, where you try and get the best ingredients and don’t do much to them. I’ve been very lucky to have worked with some great crews over the years.

What’s your go-to gear — things you can’t live without?
I really try and keep an open mind about gear. I don’t feel romantically attached to anything, so that I can make the right choices for each project.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 


Directing Olly’s ‘Happy Inside Out’ campaign

How do you express how vitamins make you feel? Well, production company 1stAveMachine partnered with independent creative agency Yard NYC to develop the stylized “Happy Inside Out” campaign for Olly multivitamin gummies to show just that.

Beauty

The directing duo of Erika Zorzi and Matteo Sangalli, known as Mathery, highlighted the brand’s products and benefits by using rich textures, colors and lighting. They shot on an ARRI Alexa Mini. “Our vision was to tell a cohesive narrative, where each story of the supplements spoke the same visual language,” Mathery explains. “We created worlds where everything is possible and sometimes took each product’s concept to the extreme and other times added some romance to it.”

Each spot imagines various benefits of taking Olly products. The side-scrolling Energy, which features a green palette, shows a woman jumping and doing flips through life’s everyday challenges, including going from home to work, doing laundry and going to the movies. Beauty, with its pink color palette, features another woman “feeling beautiful” while turning the heads of a parliament of owls. Meanwhile, Stress, with its purple/blue palette, features a woman tied up in a giant ball of yarn; as she unspools herself, the things that were tying her up spin away. In the purple-shaded Sleep, a lady lies in bed pulling off layer after layer of sleep masks until she just happily sleeps.

Sleep

The spots were shot with minimal VFX, other than a few greenscreen moments, and the team found itself making decisions on the fly, constantly managing logistics for stunt choreography, animal performances and wardrobe. Jogger Studios provided the VFX using Autodesk Flame for conform, cleanup and composite work. Adobe After Effects was used for all of the end tag animation. Cut+Run edited the campaign.

According to Mathery, “The acrobatic moves and obstacle pieces in the Energy spot were rehearsed on the same day of the shoot. We had to be mindful because the action was physically demanding on the talent. With the Beauty spot, we didn’t have time to prepare with the owls. We had no idea if they would move their heads on command or try to escape and fly around the whole time. For the Stress spot, we experimented with various costume designs and materials until we reached a look that humorously captured the concept.”

The campaign marks Mathery’s second collaboration with Yard NYC and Olly, who brought the directing team into the fold very early on, during the initial stages of the project. This familiarity gave everyone plenty of time to let the ideas breathe.


Recreating the Vatican and Sistine Chapel for Netflix’s The Two Popes

The Two Popes, directed by Fernando Meirelles, stars Anthony Hopkins as Pope Benedict XVI and Jonathan Pryce as current pontiff Pope Francis in a story about one of the most dramatic transitions of power in the Catholic Church’s history. The film follows a frustrated Cardinal Bergoglio (the future Pope Francis) who in 2012 requests permission from Pope Benedict to retire because of his issues with the direction of the church. Instead, facing scandal and self-doubt, the introspective Benedict summons his harshest critic and future successor to Rome to reveal a secret that would shake the foundations of the Catholic Church.

London’s Union was approached in May 2017 and supervised visual effects on location in Argentina and Italy over several months. A large proportion of the film takes place within the walls of Vatican City. The Vatican was not involved in the production and the team had very limited or no access to some of the key locations.

Under the direction of production designer Mark Tildesley, the production replicated parts of the Vatican at Rome’s Cinecittà Studios, including a life-size, open-ceiling Sistine Chapel, which took two months to build.

The team LIDAR-scanned everything available and set about amassing as much reference material as possible — photographing from a permitted distance, scanning the set builds and buying every photographic book they could lay their hands on.

From this material, the team set about building 3D models — created in Autodesk Maya — of St. Peter’s Square, the Basilica and the Sistine Chapel. The environments team was tasked with texturing all of these well-known locations using digital matte painting techniques, including recreating Michelangelo’s masterpiece on the ceiling of the Sistine Chapel.

The story centers on two key changes of pope in 2005 and 2013. Those events attracted huge attention, filling St. Peter’s Square with people eager to discover the identity of the new pope and celebrate his ascension. News crews from around the world also camp out to provide coverage for the billions of Catholics all over the world.

To recreate these scenes, the crew shot at a school in Rome (Ponte Mammolo) that has the same pattern on its floor as St. Peter’s Square. A cast of 300 extras was shot in blocks, in different positions, at different times of day, with costume tweaks including the addition of umbrellas, to build a library that would provide enough flexibility during post to recreate these moments at different times of day and in different weather conditions.

Union also called on Clear Angle Studios to individually scan 50 extras to provide additional options for the VFX team. This was an ambitious crowd project: the team couldn’t shoot in the actual location, and the end result had to stand up at 4K in very close proximity to the camera. Union designed a Houdini-based system to deal with the number of assets and costumes in such a way that the studio could easily art-direct them as individuals, allow the director to choreograph them and deliver a believable result.
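Union’s actual system was built in Houdini, but the kind of per-agent variation it manages can be sketched in a few lines (all names below are invented for illustration): each agent gets a scanned body, a costume variant and an animation cycle from seeded randomness, with an override table so individuals stay art-directable.

```python
import random

# Toy crowd-variation setup: deterministic random assignment of scan,
# costume and mocap cycle per agent, with per-agent overrides so the
# director can still art-direct individuals. Names are illustrative.

SCANS = [f"extra_scan_{i:02d}" for i in range(50)]  # the 50 scanned extras
COSTUMES = ["suit", "raincoat", "cassock", "nun_habit"]
CYCLES = ["idle", "cheer", "wave", "applaud"]       # mocap cycles

def build_crowd(positions, overrides=None, seed=2013):
    rng = random.Random(seed)   # seeded, so every render is repeatable
    overrides = overrides or {}
    agents = []
    for i, pos in enumerate(positions):
        agent = {"position": pos,
                 "scan": rng.choice(SCANS),
                 "costume": rng.choice(COSTUMES),
                 "cycle": rng.choice(CYCLES)}
        agent.update(overrides.get(i, {}))  # art direction wins
        agents.append(agent)
    return agents

# e.g. a hero extra near camera forced into a specific look:
crowd = build_crowd([(x, 0) for x in range(300)],
                    overrides={0: {"costume": "suit", "cycle": "cheer"}})
```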

Union conducted several motion capture shoots in-house to provide specific animation cycles that married with the occasions being recreated. This provided even more authentic-looking crowds for the post team.

Union worked on a total of 288 VFX shots, including greenscreens, set extensions, window reflections, muzzle flashes, fog and rain and a storm that included a lightning strike on the Basilica.

In addition, the team did a significant amount of de-aging work to accommodate the film’s eight-year main narrative timeline as well as a long period in Pope Francis’ younger years.


A Beautiful Day in the Neighborhood director Marielle Heller

By Iain Blair

If you are of a certain age, the red cardigan, the cozy living room and the comfy sneakers can only mean one thing — Mister Rogers! Sony Pictures’ new film, A Beautiful Day in the Neighborhood, is a story of kindness triumphing over cynicism. It stars Tom Hanks and is based on the real-life friendship between Fred Rogers and journalist Tom Junod.

Marielle Heller

In the film, jaded writer Lloyd Vogel (Matthew Rhys), whose character is loosely based on Junod, is assigned a profile of Rogers. Over the course of his assignment, he overcomes his skepticism, learning about empathy, kindness and decency from America’s most beloved neighbor.

A Beautiful Day in the Neighborhood is helmed by Marielle Heller, who most recently directed the film Can You Ever Forgive Me? and whose feature directorial debut was 2015’s The Diary of a Teenage Girl. Heller has also directed episodes of Amazon’s Transparent and Hulu’s Casual.

Behind the scenes, Heller collaborated with DP Jody Lee Lipes, production designer Jade Healy, editor Anne McCabe, ACE, and composer Nate Heller.

I recently spoke with Heller about making the film, which is generating a lot of Oscar buzz, and her workflow.

What sort of film did you set out to make?
I didn’t want to make a traditional biopic, and part of what I loved about the script was it had this larger framing device — that it’s a big episode of Mister Rogers for adults. That was very clever, but it’s also trying to show who he was deep down and what it was like to be around him, rather than just rattling off facts and checking boxes. I wanted to show Fred in action and his philosophy. He believed in authenticity and truth and listening and forgiveness, and we wanted to embody all that in the filmmaking.

It couldn’t be more timely.
Exactly, and it’s weird since it’s taken eight years to get it made.

Is it true Tom Hanks had turned this down several times before, but you got him in a headlock and persuaded him to do it?
(Laughs) The headlock part is definitely true. He had turned it down several times, but there was no director attached. He’s the type of actor who can’t imagine what a project will be until he knows who’s helming it and what their vision is.

We first met at his grandkid’s birthday party. We became friends, and when I came on board as director, the producers told me, “Tom Hanks was always our dream for playing Mister Rogers, but he’s not interested.” I said, “Well, I could just call him and send him the script,” and then I told Tom I wasn’t interested in doing an imitation or a sketch version, and that I wanted to get to his essence right and the tone right. It would be a tightrope to walk, but if we could pull it off, I felt it would be very moving. A week later he was like, “Okay, I’ll do it.” And everyone was like, “How did you get him to finally agree?” I think they were amazed.

What did he bring to the role?
Maybe people think he just breezed into this — he’s a nice guy, Fred’s a nice guy, so it’s easy. But the truth is, Tom’s an incredibly technically gifted actor and one of the hardest-working ones I’ve ever worked with. He does a huge amount of research, and he came in completely prepared, and he loves to be directed, loves to collaborate and loves to do another take if you need it. He just loves the work.

Any surprises working with him?
I just heard that he’s actually related to Fred, and that’s another weird thing. But he truly had to transform for the role because he’s not like Fred. He had to slow everything down to a much slower pace than is normal for him and find Fred’s deliberate way of listening and his stillness and so on. It was pretty amazing considering how much coffee Tom drinks every day.

What did Matthew Rhys bring to his role?
It’s easy to forget that he’s actually the protagonist and the proxy for all the cynicism and neuroticism that many of us feel and carry around. This is what makes it so hard to buy into a Mister Rogers world and philosophy. But Matthew’s an incredibly complex, emotional person, and you always know how much he’s thinking. He’s always three steps ahead of you, he’s very smart, and he’s not afraid of his own anger and exploring it on screen. I put him through the wringer, as he had to go through this major emotional journey as Lloyd.

How important was the miniature model, which is a key part of the film?
It was a huge undertaking, but also the most fun we had on the movie. I grew up building miniatures and little cities out of clay, so figuring it all out — What’s the bigger concept behind it? How do we make it integrate seamlessly into the story? — fascinated me. We spent months figuring out all the logistics of moving between Fred’s set and home life in Pittsburgh and Lloyd’s gritty, New York environment.

While we shot in Pittsburgh, we had a team of people spend 12 weeks building the detailed models that included the Pittsburgh and Manhattan skylines, the New Jersey suburbs, and Fred’s miniature model neighborhood. I’d visit them once a week to check on progress. Our rule of thumb was we couldn’t do anything that Fred and his team couldn’t do on the “Neighborhood,” and we expanded a bit beyond Fred’s miniatures, but not outside of the realm of possibility. We had very specific shots and scenes all planned out, and we got to film with the miniatures for a whole week, which was a delight. They really help bridge the gap between the two worlds — Mister Rogers’ and Lloyd’s worlds.

I heard you shot with the same cameras the original show used. Can you talk about how you collaborated with DP Jody Lee Lipes, to get the right look?
We tracked down original Ikegami HK-323 cameras, which were used to film the show, and shipped them in from England and brought them to the set in Pittsburgh. That was huge in shooting the show and making it even more authentic. We tried doing it digitally, but it didn’t feel right, and it was Jody who insisted we get the original cameras — and he was so right.

Where did you post?
We did it in New York — the editing at Light Iron, the sound at Harbor and the color at Deluxe.

Do you like the post process?
I do, as it feels like writing. There’s always a bit of a comedown from production for me, which is so fast-paced. You really slow down for post; it feels a bit like screeching to a halt for me, but the plus is you get back to the deep critical thinking needed to rewrite in the edit, and to retell the story with the sound and the DI and so on.

I feel very strongly that the last 10% of post is the most important part of the whole process. It’s so tempting to just give up near the end. You’re tired, you’ve lost all objectivity, but it’s critical you keep going.

Talk about editing with Anne McCabe. What were the big editing challenges?
She wasn’t on the set. We sent dailies to her in New York, and she began assembling while we shot. We have a very close working relationship, so she’d be on the phone immediately if there were any concerns. I think finding the right tone was the biggest challenge, and making it emotionally truthful so that you can engage with it. How are you getting information and when? It’s also playing with audiences’ expectations. You have to get used to seeing Tom Hanks as Mister Rogers, so we decided it had to start really boldly and drop you in the deep end — here you go, get used to it! Editing is everything.

There are quite a few VFX. How did that work?
Obviously, there’s the really big VFX sequence when Lloyd goes into his “fever dreams” and imagines himself shrunk down on the set of the neighborhood and inside the castle. We planned that right from the start and did greenscreen — my first time ever — which I loved. And even the practical miniature sets all needed VFX to integrate them into the story. We also had seasonal stuff, period-correct stuff, cleanup and so on. Phosphene in New York did all the VFX.

Talk about the importance of sound and music.
My composer’s also my brother, and he starts very early on so the music’s always an integral part of post and not just something added at the end. He’s writing while we shoot, and we also had a lot of live music we had to pre-record so we could film it on the day. There’s a lot of singing too, and I wanted it to sound live and not overly produced. So when Tom’s singing live, I wanted to keep that human quality, with all the little mouth sounds and any mistakes. I left all that in purposely. We never used a temp score since I don’t like editing to temp music, and we worked closely with the sound guys at Harbor in integrating all of the music, the singing, the whole sound design.

How important is the DI to you?
Hugely important, and we finessed a lot with colorist Sam Daley. When you’re doing a period piece, color is crucial; it has to feel authentic to that world. Jody and Sam have worked together for a long time, and they worked very hard on the LUT before we began. Every department was aware of the color palette and how we wanted it to look and feel.

What’s next?
I just started a new company called Defiant By Nature, where I’ll be developing and producing TV projects by other people. As for movies, I’m taking a little break.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Behind the title: Cutters editor Steve Bell

“I’ve always done a fair amount of animation design, music rearranging and other things that aren’t strictly editing, but most editors are expected to play a role in aspects of the post process that aren’t strictly editing.”

Name: Steve Bell

What’s your job title?
Editor

Company: Cutters Editorial

Can you describe your company?
Cutters is part of a global group of companies offering offline editing, audio engineering, VFX and picture finishing, production and design – all of which fall under Cutters Studios. Here in New York, we do traditional broadcast TV advertising and online content, as well as longer format work and social media content for brands, directors and various organizations that hire us to develop a concept, shoot and direct.

Cutters New York

What’s your favorite part of the job?
There’s a stage to pretty much every project where I feel I’ve gotten a good enough grasp of the material that I can connect the storytelling dots and see it come to life. I like problem solving and love the feeling you get when you know you’ve “figured it out.”

Depending on the scale of the project, it can start a few hours in, a few days in or a few weeks in, but once it hits you can’t stop until you see the piece finished. It’s like reading a good page-turner; you can’t put it down. That’s the part of the creative process I love and what I like most about my job.

What’s your least favorite?
It’s those times when it becomes clear that I’ve/we’ve probably looked at something too many times to actually make it better. That certainly doesn’t happen on many jobs, but when it does, it’s probably because too many voices have had a say; too many cooks in the kitchen, as they say.

What is your most productive time of the day?
Early in the morning. I’m most clearheaded at the very beginning of the day, and then sometimes toward the very end of a long day. But those times also happen to be when I’m most likely to be alone with what I’m working on and free from other distractions.

If you didn’t have this job, what would you be doing instead? 
Baseball player? Astronaut? Joking. But let’s face it, we all fantasize about fulfilling the childhood dreams that are completely different from what we do. To be truthful I’m sure I’d be doing some kind of writing, because it was my desire to be a writer, particularly of film, that indirectly led me to be an editor.

Why did you choose this profession? How early on did you know this would be your path?
Well, the simple answer is probably that I had opportunities to edit professionally at a relatively young age, which forced me to get better at editing way before I had a chance to get better at writing. If I keep editing, I may never know if I can write!

Stella Artois

Can you name some recent projects you have worked on?
The Dwyane Wade Budweiser retirement film, Stella Artois holiday spots, a few films for the Schott/Hamilton watch collaboration. We did some fun work for Rihanna’s Savage X Fenty release. Early in the year I did a bunch of lovely spots for Hallmark Hall of Fame programming.

Do you put on a different hat when cutting for a specific genre?
For sure. There are overlapping tasks, but I do believe it takes a different set of skills to do good dramatic storytelling than it takes to do straight comedy, or doc or beauty. Good “Storytelling” (with a capital ‘S’) is helpful in all of it — I’d probably say crucial. But it comes down to the important element that’s used to create the story: emotion, humor, rhythm, etc. And then you need to know when it needs to be raw versus formal, broad versus subtle and so forth. Different hats are needed to get that exactly right.

What is the project that you are most proud of and why?
I’m still proud of the NHL’s No Words spot I worked on with Cliff Skeete and Bruce Jacobson. We’ve become close friends as we’ve collaborated on a lot of work since then for the NHL and others. I love how effective that spot is, and I’m proud that it continues to be referenced in certain circles.

NHL No Words

In a very different vein, I think I’m equally proud of the work I’ve done for the UN General Assembly meetings, especially the film that accompanied Kathy Jetnil-Kijiner’s spoken word performance of her poem “Dear Matafele Peinem” during the opening ceremonies of the UN’s first Climate Change conference. That’s an issue that’s very important to me and I’m grateful for the chance to do something that had an impact on those who saw it.

What do you use to edit?
I’m a Media Composer editor, and it probably goes back to the days when I did freelance work for Avid and had to learn it inside out. The interface at least is second nature to me. Also, the media sharing and networking capabilities of Avid make it indispensable. That said, I appreciate that Premiere has some clear advantages in other ways. If I had to start over I’m not sure I wouldn’t start with Premiere.

What is your favorite plugin?
I use a lot of Boris FX plugins for stabilization, color correction and so forth. I used to use After Effects often, and Boris FX offers a way of achieving some of what I once did exclusively in After Effects.

Are you often asked to do more than edit? If so, what else are you asked to do?
I’ve always done a fair amount of animation design, music rearranging and other things that aren’t strictly editing, but most editors are expected to play a role in aspects of the post process that aren’t strictly “film editing.”

Many of my clients know that I have strong opinions about those things, so I do get asked to participate in music and animation quite often. I’m also sometimes asked to help with the write-ups of what we’ve done in the edit because I like talking about the process and clarifying what I’ve done. If you can explain what you’ve done you’re probably that much more confident about the reasons you did it. It can be a good way to call “bullshit” on yourself.

This is a high stress job with deadlines and client expectations. What do you do to de-stress from it all?
Yeah, right?! It can be stressful, especially when you’re occasionally lucky enough to be busy with multiple projects all at once. I take decompressing very seriously. When I can, I spend a lot of time outdoors — hiking, biking, you name it — not just for the cardio and exercise, which is important enough, but also because it’s important to give your eyes a chance to look off into the distance. There are tremendous physical and psychological benefits to looking to the horizon.

Ford v Ferrari’s co-editors discuss the cut

By Oliver Peters

After a failed attempt to acquire European carmaker Ferrari, an outraged Henry Ford II sets out to trounce Enzo Ferrari on his own playing field — automobile endurance racing. That is the plot of 20th Century Fox’s Ford v Ferrari, directed by James Mangold. When Ford’s initial effort falls short, he turns to independent car designer Carroll Shelby (Matt Damon). Shelby’s outspoken lead test driver, Ken Miles (Christian Bale), complicates the situation by making an enemy of Ford senior VP Leo Beebe.

Michael McCusker

Nevertheless, Shelby and his team are able to build one of the greatest race cars ever — the GT40 MkII — setting up a showdown between the two auto legends at the 1966 24 Hours of Le Mans.

The challenge of bringing this clash of personalities to the screen was taken on by director James Mangold (Logan, Wolverine, 3:10 to Yuma) and his team of long-time collaborators.

I recently spoke with film editors Michael McCusker, ACE, (Walk the Line, 3:10 to Yuma, Logan) and Andrew Buckland (The Girl On the Train) — both of whom were recently nominated for an Oscar and ACE Eddie Award for their work on the film — about what it took to bring Ford v Ferrari together.

The post team for this film has worked with James Mangold on quite a few films. Tell me a bit about the relationship.
Michael McCusker: I cut my very first movie, Walk the Line, for Jim 15 years ago and have since cut his last six movies. I was the first assistant editor on Kate & Leopold, which was shot in New York in 2001. That’s where I met Andrew, who was hired as one of the local New York film assistants. We became fast friends. Andrew moved to LA in 2009, and I hired him to assist me on Knight & Day.

Andrew Buckland

I always want to keep myself available for Jim — he chooses good material, attracts great talent and is a filmmaker who works across multiple genres. Since I’ve worked with him, I’ve cut a musical movie, a western, a rom-com, an action movie, a straight-up superhero movie, a dystopian superhero movie and now a racing film.

As a film editor, it must be great not to get typecast for any particular cutting style.
McCusker: Exactly. I worked for David Brenner for years as his first. He was able to cross genres, and that’s what I wanted to do. I knew even then that the most important decisions I would make would be choosing projects. I couldn’t have foreseen that Jim was going to work across all these genres — I simply knew that we worked well together and that the end product was good.

In preparing for Ford v Ferrari, did you study any other recent racing films, like Ron Howard’s Rush?
McCusker: I saw that movie, and liked it. Jim was aware of it, too, but I think he wanted to do something a little more organic. We watched a lot of older racing films, like Steve McQueen’s Le Mans and John Frankenheimer’s Grand Prix.

Jim’s original intention was to play the racing in long takes and bring the audience along for the ride. As he was developing the script, and we were in preproduction, it became clear that there was more drama for him to portray during the racing sequences than he anticipated. So the races took on more of an energized pace.

Energized in what way? Do you mean in how you cut it or in a change of production technique, like more stunt cameras and angles?
McCusker: I was fortunate to get involved about two-and-a-half months prior to the start of production. We were developing the Le Mans race in previs. This required a lot of editing and discussions about shot design and figuring out what the intercutting was going to be during that sequence, which is like the fourth act of the movie.

You’re dealing with Mollie and Peter [Miles’ wife and son] at home watching the race, the pit drama, what’s going on with Shelby and his crew, with Ford and Leo Beebe and also, of course, what’s going on in the car with Ken. It’s a three-act movie unto itself, so Jim was trying to figure out how it was all going to work before he had to shoot it. That’s where I came in. The frenetic pace of Le Mans was more a part of the writing process — and part of the writing process was the previs. The trick was how to make sure we weren’t just following cars around a track. That’s where redundancy can tend to beleaguer an audience in racing movies.

What was the timeline for production and post?
McCusker: I started at the end of May 2018. Production began at the beginning of August and went all the way through to the end of November. We started post in earnest at the beginning of November of last year, took some time off for the holidays, and then showed the film to the studios around February or March.

When did you realize you were going to need help?
McCusker: The challenge was that there was going to be a lot of racing footage, which meant there was going to be a lot of footage overall. I knew I was going to need a strong co-editor, so Andrew was the natural choice. He had been cutting on his own and cutting with me over the years. We share a common approach to editing and have a similar aesthetic.

There was a point when things got really intense and we needed another pair of hands, so I brought in Dirk Westervelt to help out for a couple of months. That kept our noses above water, but the process was really enjoyable. We were never in a crisis mode. We got a great response from preview audiences and, of course, that calms everybody down. At that point it was just about quality control and making sure we weren’t resting on our laurels.

How long was your initial cut, and what was your process for trimming the film down to the present run time?
McCusker: We’re at 2:30:00 right now and I think the first cut was 3:10 or 3:12. The Le Mans section was longer. The front end of the movie had more scenes in it. We ended up lifting some scenes and rearranging others. Plus, the basic trimming of scenes brought the length down.

But nothing was the result of a panic, like, “Oh my God, we’ve got to get to 2:30!” There were no demands by the studio or any pressures we placed upon ourselves to hit a particular running time. I like to say that there’s real time and there’s cinematic time. You can watch Once Upon a Time in America, which is 3:45, and feels like it’s an hour. Or you can watch an 89-minute movie and feel like it’s drudgery. We just wanted to make sure we weren’t overstaying our welcome.

How extensively did you rearrange scenes during the edit? Or did the structure of the film stay pretty much as scripted?
McCusker: To a great degree it stayed as scripted. We had some scenes in the beginning that we felt were a little bit tangential and weren’t serving the narrative directly, and those were cut.

The real endeavor of this movie starts the moment that these two guys [Shelby and Miles] decide to tackle the challenge of developing this car. There’s a scene where Miles sees the car for the first time at LAX. We understood that we had to get to that point in a very efficient way, but also set up all the other characters — their motives and their desires.

It’s an interesting movie, because it starts off with a lot of characters. But then it develops into a movie about two guys and their friendship. So it goes from an ensemble piece to being about Ken and Carroll, while at the same time the scope of the movie is opening up and becoming larger as the racing is going on. For us, the trickiest part was the front end — to make sure we spent enough time with each character so that we understood them, but not so much time that audience would go, “Enough already! Get on with it!”

Did that help inform your cutting style for this film?
McCusker: I don’t think so. Where it helped was knowing the sound of the broadcasters and race announcers. I liked Chris Economaki and Jim McKay — guys who were broadcasting the races when I was a kid. I was intrigued about how they gave us the narrative of the race. It came in handy while we were making this movie, because we were able to get our hands on some of Jim McKay’s actual coverage of Le Mans and used it in the movie. That brings so much authenticity.

Let’s talk sound. I would imagine the sound design was integral to your rough cuts. How did you tackle that?
Andrew Buckland: We were fortunate to have the sound team on very early during preproduction. We were cutting in a 5.1 environment, so we wanted to create sound design early. The engine sounds might not have been the exact sounds that would end up in the final, but they were adequate to let you experience the scenes as intended. Because we needed to get Jim’s response early, some of the races were cut with the production sound — from the live mics during filming. This allowed Jim and us to quickly see how the scenes would flow.

Other scenes were cut strictly MOS because the sound design would have been way too complicated for the initial cut of the scene. Once the scene was cut visually, we’d hand over the scene to sound supervisor Don Sylvester, who was able to provide us with a set of 5.1 stems. That was great, because we could recut and repurpose those stems for other races.

McCusker: We had developed a strategy with Don to split the sound design into four or five stems to give us enough discrete channels to recut these sequences. The stems were a palette of interior perspectives, exterior perspectives, crowds, car-bys, and so on. By employing this strategy, we didn’t need to continually turn over the cut to sound for patch-up work.

Then, as Don went out and recorded the real cars and was developing the actual sounds for what was going to be used in the mix, he’d generate new stems and we would put them into the Media Composer. This was extremely informative to Jim, because he could experience our Avid temp mix in 5.1 and give notes, which ultimately informed the final sound design and the mix.

What about temp music? Did you also weave that into your rough cuts?
McCusker: Ted Caplan, our music editor, has also worked with Jim for 15 years. He’s a bit of a renaissance man — a screenwriter, a novelist, a one-time musician and a sound designer in his own right. When he sits down to work with music, he’s coming at it from a story point of view. He has a very instinctual knowledge of where music should start, and it happens to dovetail with the aesthetic that Jim, Andrew and I are working toward. None of us likes music to lead scenes in a way that anticipates what the scene is going to be about before you experience it.

For this movie, it was challenging to develop what the musical tone of the movie would be. Ted was developing the temp track along with us from a very early stage. We found over time that not one particular musical style was going to work. This is a very complex score. It includes a kind of surf-rock sound with Carroll Shelby in LA, an almost jaunty, lounge jazz sound for Detroit and the Ford executives, and then the hard-driving rhythmic sound for the racing.

The final score was composed by Marco Beltrami and Buck Sanders.

I presume you were housed in multiple cutting rooms at a central facility.
McCusker: We cut at 20th Century Fox, where Jim has a large office space. We cut Logan and Wolverine there before this movie. It has several cutting spaces and I was situated between Andrew and Don. Ted was next to Don and John Berri, our additional editor. Assistants were right around the corner. It makes for a very efficient working environment.

Since the team was cutting with Avid Media Composer, did any of its features stand out to you for this film?
Both: FluidMorph! (laughing)

McCusker: FluidMorph, speed-ramping — we often had to manipulate the shot speeds to communicate the speed of the cars. A lot of these cars were kit cars that could drive safely at a certain speed for photography, but not at race speed. So we had to manipulate the speed a lot to get the sense of action that these cars have.

What about Avid’s ScriptSync? I know a lot of narrative editors love it.
McCusker: I used ScriptSync once a few years ago and I never cut a scene faster. I was so excited. Then I watched it, and it was terrible. To me there’s so much more to editing than hitting the next line of dialogue. I’m more interested in the lines between the lines — subtext. I do understand the value of it in certain applications. For instance, I think it’s great on straight comedy. It’s helpful to get around and find things when you are shooting tons of coverage for a particular joke. But for me, it’s not something I lean on. I mark up my own dailies and find stuff that way.

Tell me a bit more about your organizational process. Do you start with a Kem roll or stringouts of selected takes?
McCusker: I don’t watch dailies, at least in a traditional sense. I don’t start in the morning, watch the dailies and then cut. And I don’t ask my assistants to organize any of my dailies in bins. I come in and grab the scene that I have in front of me. I’ll look at the last take of every set-up quickly and then I spend an enormous amount of time — particularly on complex scenes — creating a bin structure that I can work with.

Sometimes it’s the beats in a scene, sometimes I organize by shot size, sometimes by character — it depends on what’s driving the scene. I learn my footage by organizing it. I remember shot sizes. I remember what was shot from set-up to set-up. I have a strong visual memory of where things are in a bin. So, if I ask an assistant to do that, then I’m not going to remember it. If there are a lot of resets or restarts in a take, I’ll have the assistant mark those up. But, I’ll go through and mark up beats or pivotal points in a scene, or particularly beautiful moments, and then I’ll start cutting.

Buckland: I’ve adopted a lot of Mike’s methodology, mainly because I assisted Mike on a few films. But it actually works for me, as well. I have a similar aesthetic to Mike.

Was this shot digitally?
McCusker: It was primarily shot with ARRI Alexa 65 LFs, plus some other small-format cameras. A lot of it was shot with old anamorphic lenses on the Alexa that allowed them to give it a bit of a vintage feeling. It’s interesting that as you watch it, you see the effect of the old lenses. There’s a fall-off on the edges, which is kind of cool. There were a couple of places where the subject matter was framed into the curve of the lens, which affects the focus. But we stuck with it, because it feels “of the time.”

Since the film takes place in the 1960s and has a lot of racing sequences, I assume there were a lot of VFX?
McCusker: The whole movie is a period film and we would temp certain things in the Avid for the rough cuts. John Berri was wrangling visual effects. He’s a master in the Avid and also Adobe After Effects. He has some clever ways of filling in backgrounds or greenscreens with temp elements to give the director an idea of what’s going to go there. We try to do as much temp work in the Avid as we are capable of doing, but there’s so much 3D visual effects work in this movie that we weren’t able to do that all of the time.

The racing is real. The cars are real. The visual effects work was for a lot of the backgrounds. The movie was shot almost entirely in Los Angeles, with some second-unit footage shot in Georgia. The modern-day Le Mans track isn’t at all representative of what Le Mans was in 1966, so there was no way to shoot that. Everything had to be doubled and then augmented with visual effects. In addition to Georgia, where they shot most of the actual racing for Le Mans, they went to France to get some shots of the actual town of Le Mans. I think only about four of those shots are left. (laughs)

Any final thoughts about how this film turned out?
McCusker: I’m psyched that people seem to like the film. Our concern was that we had a lot of story to tell. Would we wear audiences out? We continually have people tell us, “That was two and a half hours? We had no idea.” That’s humbling for us and a great feeling. It’s a movie about these really great characters with great scope and great racing. You can put all the big visual effects in a film that you want to, but it’s really about people.

Buckland: I agree. It’s more of a character movie with racing. Also, because I am not a racing fan per se, the character drama really pulled me into the film while working on it.


Oliver Peters is an experienced film and commercial editor/colorist. In addition, he regularly interviews editors for trade publications. He may be contacted through his website at oliverpeters.com.

Company 3 ups Jill Bogdanowicz to co-creative head, feature post  

Company 3 senior colorist Jill Bogdanowicz will now share the title of creative head, feature post with senior colorist Stephen Nakamura. In this new role, she will collaborate with Nakamura to foster communication among artists, operations and management and to design and implement workflows that meet the ever-changing needs of feature post clients.

“Company 3 has been and will always be guided by artists,” says senior colorist/president Stefan Sonnenfeld. “As we continue to grow, we have been formalizing our intra-company communication to ensure that our artists communicate among themselves and with the company as a whole. I’m excited that Jill will be joining Stephen as a representative of our feature colorists. Her years of excellent work and her deep understanding of color science make her a perfect choice for this position.”

Among the kinds of issues Bogdanowicz and Nakamura will address: mentorship within the company, artist recruitment and training, and adapting to emerging workflows and client expectations.

Says Bogdanowicz, “As the company continues to expand, both in size and workload, I think it’s more important than ever to have Stephen and me in a position to provide guidance to help the features department grow efficiently while also maintaining the level of quality our clients expect. I intend to listen closely to clients and the other artists to make sure that their ideas and concerns are heard.”

Bogdanowicz has been a leading feature film colorist since the early 2000s. Recent work includes Joker, Spider-Man: Far From Home and Doctor Sleep, to name a few.

Storage for Color and Post

By Karen Moltenbrey

At nearly every phase of the content creation process, storage is at the center. Here we look at two post facilities whose projects continually push boundaries in terms of data, but through it all, their storage solution remains fast and reliable. One, Light Iron, juggles an average of 20 to 40 data-intensive projects at a time and must have a robust storage solution to handle its ever-growing work. Another, Final Frame, recently took on a project whose storage requirements were literally out of this world.

Amazon’s The Marvelous Mrs. Maisel

Light Iron
Light Iron provides a wide range of services, from dailies to post on feature films, indies and episodic shows, to color/conform/beauty work on commercials and short-form projects. The facility’s clients include Netflix, Amazon Studios, Apple TV+, ABC Studios, HBO, Fox, FX, Paramount and many more. Light Iron has been committed to evolving digital filmmaking techniques over the past 10 years and understands the importance of data availability throughout the pipeline. Having a storage solution that is reliable, fast and scalable is paramount to successfully servicing data-centric projects with an ever-growing footprint.

More than 100 full-time employees located at Light Iron’s Los Angeles and New York locations regularly access the company’s shared storage solutions. Both facilities are equipped for dailies and finishing, giving clients an option between its offices based on proximity. In New York, where space is at a premium, the company also offers offline editorial suites.

The central storage solution used at both locations is a Quantum StorNext file system along with a combination of network-attached and direct-attached storage. On the archive end, both sites use LTO-7 tapes for backing up before moving the data off the spinning-disk storage.

Lance Hayes, senior post production systems engineer, says the facility segments its storage into three tiers. “We structured our storage environment in a three-tiered model, with redundancy, flexibility and security in mind. We have our fast disks (tier one), which are fast volumes used primarily for playbacks in the rooms. Then there are deliverable volumes (tier two), where the focus is on the density of the storage. These are usually the destination for rendered files. And then, our nearline network-attached storage (tier three) is more for the deep storage, a holding pool before output to tape,” he explains.
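
As a rough illustration of how that kind of tier routing can be expressed, here is a minimal sketch in Python. The tier names, mount points and placement rules are hypothetical, not Light Iron's actual tooling.

```python
# Minimal sketch of three-tier placement logic, loosely following the model
# Hayes describes. Mount points and rules are hypothetical.
from pathlib import Path

TIERS = {
    "tier1_fast": Path("/mnt/fast"),             # playback volumes for the rooms
    "tier2_deliverables": Path("/mnt/deliver"),  # dense volumes for rendered files
    "tier3_nearline": Path("/mnt/nearline"),     # deep storage, holding pool before tape
}

def place_asset(kind: str) -> Path:
    """Pick a destination volume based on how the asset will be used."""
    if kind == "playback":      # media that must sustain realtime playback
        return TIERS["tier1_fast"]
    if kind == "deliverable":   # rendered files headed to clients
        return TIERS["tier2_deliverables"]
    return TIERS["tier3_nearline"]  # everything else parks here before LTO output

print(place_asset("deliverable"))  # /mnt/deliver
```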

Light Iron has been using Quantum as its de facto standard for the past several years. Founded in 2009, Light Iron has been on an aggressive growth trajectory and has evolved its storage strategy in response to client needs and technological advancement. Before installing its StorNext system, it managed with JBOD (“just a bunch of disks”) direct-attached storage on a very limited number of systems to service its staff of then-30-some employees, says Keenan Mock, senior media archivist at Light Iron. Light Iron, though, grew quickly, “and we realized we needed to invest in a full infrastructure,” he adds.

Lance Hayes

At Light Iron, work often starts with dailies, so the workflow teams interact with production to determine the cameras being used, the codecs being shot, the number of shoot days, the expected shooting ratio and so forth. Based on that information, the group determines which generation of LTO stock makes the most sense for the project (LTO-6 or LTO-7, with LTO-8 soon to be an option at the facility). “The industry standard, and our recommendation as well, is to create two LTO tapes per shoot day,” says Mock. Then, those tapes are geographically separated for safety.
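
To make the sizing arithmetic concrete, here is a hypothetical back-of-the-envelope sketch. The native capacities are published LTO figures (2.5TB for LTO-6, 6TB for LTO-7, 12TB for LTO-8); the camera data rate and recording hours are invented inputs of the kind a workflow team would gather from production.

```python
import math

# Native (uncompressed) capacities in TB, per published LTO specifications.
LTO_NATIVE_TB = {"LTO-6": 2.5, "LTO-7": 6.0, "LTO-8": 12.0}

def tapes_per_shoot_day(data_rate_mb_s: float, recorded_hours: float,
                        generation: str) -> int:
    """Tapes needed for one shoot day, including the duplicate set."""
    tb_per_day = data_rate_mb_s * 3600 * recorded_hours / 1e6  # MB -> TB
    per_set = math.ceil(tb_per_day / LTO_NATIVE_TB[generation])
    return per_set * 2  # two copies per shoot day, geographically separated

# e.g., ~280 MB/s of camera data actually recorded for 3 hours a day:
print(tapes_per_shoot_day(280, 3, "LTO-6"))  # 4 (two sets of two)
print(tapes_per_shoot_day(280, 3, "LTO-7"))  # 2 (one tape per set)
```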

In terms of working materials, the group generally restores only what is needed for each individual show from LTO tape, as opposed to keeping the entire show on spinning disk. “This allows us to use those really fast disks in a cost-effective way,” Hayes says.

Following the editorial process, Light Iron restores only the needed shots plus handles from tape directly to the StorNext SAN, so online editors can have immediate access. The material stays on the system while the conform and DI occur, followed by the creation of final deliverables, which are sent to the tier two and tier three spinning-disk storage. If the project needs to be archived to tape, Mock’s department takes care of that; if it needs to be uploaded, that usually happens from the spinning disks.
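
The “shots plus handles” restore can be illustrated with a small sketch; the frame-based fields and the 24-frame handle below are illustrative, not a description of Light Iron's actual conform tooling.

```python
from dataclasses import dataclass

HANDLE_FRAMES = 24  # one second of trim room per side at 24fps (illustrative)

@dataclass
class Shot:
    tape: str     # source reel/tape the shot lives on
    src_in: int   # first frame used in the cut
    src_out: int  # last frame used in the cut

def restore_range(shot: Shot) -> tuple[int, int]:
    """Expand the editorial range by handles so online editors have trim room."""
    return max(0, shot.src_in - HANDLE_FRAMES), shot.src_out + HANDLE_FRAMES

pull = Shot(tape="A043_C012", src_in=1440, src_out=1680)
print(restore_range(pull))  # (1416, 1704)
```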

Light Iron’s FilmLight Baselight systems have local storage, which is used mainly as cache volumes to ensure sustained playback in the color suite. In addition, Blackmagic Resolve color correctors play back content directly to the SAN using tier two storage.

Keenan Mock

Light Iron continually analyzes its storage infrastructure and reviews its options in terms of the latest technologies. Currently, the company considers its existing storage solution to be highly functional, though it is reviewing options for the latest versions of flash solutions from Quantum in 2020.

Based on the facility’s storage workflow, there’s minimal danger of maxing out the storage space anytime soon.

While Light Iron is religious about creating a duplicate set of tapes for backup, “it’s a very rare occurrence [for the duplicate to be needed],” notes Mock. “But it can happen, and in that circumstance, Light Iron is prepared.”

As for the shared storage, the datasets used in post, compared to other industries, are very large, “and without shared storage and a clustered file system, we wouldn’t be able to do the jobs we are currently doing,” Hayes notes.

Final Frame
With offices in New York City and London, Final Frame is a full-featured post facility offering a range of services, including DI of every flavor, 8mm to 70mm film scanning and restoration, offline editing, VFX, sound editing (theatrical and home Dolby Atmos) and mastering. Its work spans feature films, documentaries and television. The facility’s recent work on the documentary film Apollo 11, though, tested its infrastructure like no other, including the amount of storage space it required.

Will Cox

“A long time ago, we decided that for the backbone of all our storage needs, we were going to rely on fiber. We have a total of 55 edit rooms, five projection theaters and five audio mixing rooms, and we have fiber connectivity between all of those,” says Will Cox, CEO/supervising colorist. So, for the past 20 years, ever since 1Gb fiber became available, Final Frame has relied on this setup, though every five years or so, the shop has upgraded to the next level of fiber and is currently using 16Gb fiber.

“Storage requirements have increased because image data has increased and audio data has increased with Atmos. So, we’ve needed more storage and faster storage,” Cox says.

While the core of the system is fiber, the facility uses a variety of storage arrays, the bulk of which are 16Gb 4000 Series SAN offerings from Infortrend, totaling approximately 2PB of space. In addition, the studio uses 8Gb Promise Technology VTrak arrays, totaling about 1PB, along with some 8Gb JetStor offerings. For SAN management, Final Frame uses Tiger Technology’s Tiger Store.

Foremost in Cox’s mind when looking for a storage solution is interoperability, since Final Frame uses Linux, Mac and Windows platforms; reliability and fault tolerance are important as well. “We run RAID-6 and RAID-60 for pretty much everything,” he adds. “We also focus on how good the remote management is. We’ve brought online so much storage, we need the storage vendors to provide good interfaces so that our engineers and IT people can manage and get realtime feedback about the performance of the arrays and any faults that are creeping in, whether it’s due to failed drives or drives that are performing less than we had anticipated.”

Final Frame has also brought on a good deal more SSD storage. “We manage projects a bit differently now than we used to, where we have more tiered storage,” Cox adds. “We still do a lot of spinning disks, but SSD is moving in, and that is changing our workflows somewhat in that we don’t have to render as many files and as many versions when we have really fast storage. As a result, there’s some cost savings on personnel at the workflow level when you have extremely fast storage.”

When working with clients who are doing offline editing, Final Frame will build an isolated SAN for them, and when it comes time to finish the project, whether it’s a picture or audio, the studio will connect its online and mixing rooms to that SAN. This setup is beneficial to security, Cox contends, as it accelerates the workflow since there’s no copying of data. However, aside from that work, everyone generally has parallel access to the storage infrastructure and can access it at any time.

More recently, in addition to other projects, Final Frame began working on Apollo 11, a film directed by Todd Douglas Miller. Miller wanted to rescan all the original negatives and all the original elements available from the Apollo 11 moon landing for a documentary film using audio and footage (16mm and 35mm) from NASA during that extraordinary feat. “He asked if we could make a movie just with the archival elements of what existed,” says Cox.

While ramping up and determining a plan of attack — Final Frame was going to scan the data at 4K resolution — NASA and NARA (National Archives and Records Administration) discovered a lost cache of archives containing 65mm and 70mm film.

“At that point, we decided that existing scanning technology wasn’t sufficient, and we’d need a film scanner to scan all this footage at 16K,” Cox adds, noting the company had to design and build an entirely new 16K film scanner and then build a pipeline that could handle all that data. “If you can imagine how tough 4K is to deal with, then think about 16K, with its insanely high data rates. And 8K is four times larger than 4K, and 16K is four times larger than 8K, so you’re talking about orders-of-magnitude increases in data.”
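
The scaling Cox describes is straightforward pixel arithmetic: each doubling of resolution quadruples the pixel count, and uncompressed storage grows with it. A quick sketch, assuming illustrative frame dimensions and 16-bit RGB samples (actual scan formats vary):

```python
def frame_size_gb(width: int, height: int,
                  channels: int = 3, bytes_per_sample: int = 2) -> float:
    """Uncompressed frame size in GB (16-bit RGB assumed)."""
    return width * height * channels * bytes_per_sample / 1e9

for label, w, h in [("4K", 4096, 2160), ("8K", 8192, 4320), ("16K", 16384, 8640)]:
    gb = frame_size_gb(w, h)
    print(f"{label}: {gb:.2f} GB/frame, ~{gb * 24:.1f} GB/s at 24fps")

# 4K:  0.05 GB/frame, ~1.3 GB/s
# 8K:  0.21 GB/frame, ~5.1 GB/s
# 16K: 0.85 GB/frame, ~20.4 GB/s
```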

Adding to the complexity, the facility had no idea how much footage it would be using. Alas, Final Frame ultimately considered its storage structure and the costs needed to take it to the next level for 16K scanning and determined that amount of data was just too much to move and too much to store. “As it was, we filled up a little over a petabyte of storage just scanning the 8K material. We were looking at 4PB, quadrupling the amount of storage infrastructure needed. Then we would have had to run backups of everything, which would have increased it by another 4PB.”

Considering these factors, Final Frame changed its game plan and decided to scan at 8K. “So instead of 2PB to 2.5PB, we would have been looking at 8PB to 10PB of storage if we continued with our earlier plan, and that was really beyond what the production could tolerate,” says Cox.

Even scanning at 8K, the group had to have the data held in the central repository. “We were scanning in, doing what were essentially dailies, restoration and editorial, all from the same core set of media. Then, as editorial was still going on, we were beginning to conform and finish the film so we could make the Sundance deadline,” recalls Cox.

In terms of scans, copies and so forth, Final Frame stored about 2.5PB of data for that project. But in terms of data created and then destroyed, the amount was between 12PB and 15PB. To handle this load, the facility needed storage that was fast, highly redundant and large. This led the company to bring on an additional 1PB of Fibre Channel SAN storage, dedicated to just the Apollo 11 project, to add to the 1.5PB already in place. “We almost had to double the amount of storage infrastructure in the whole facility just to run this one project,” Cox points out. The additional storage was added in half-petabyte array increments, all connected to the SAN, all at 16Gb fiber.

While storage is important to any project, it was especially true for the Apollo 11 project due to the aggressive deadlines and excessively large storage needs. “Apollo 11 was a unique project. We were producing imagery that was being returned to the National Archives to be part of the historic record. Because of the significance of what we were scanning, we had to be very attentive to the longevity and accuracy of the media,” says Cox. “So, how it was being stored and where it was being stored were important factors on this project, more so than maybe any other project we’ve ever done.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Storage for UHD and 4K

By Peter Collins

Over the past few years, we have seen a huge audience uptake of UHD and 4K technologies. The increase in resolution offers more detailed imagery, and the adoption of HDR brings bigger, brighter colors.

UHD technologies are a significant selling point and are quickly becoming the “new normal” for many commissioners. VOD providers, in particular, are behind the wheel and pushing things forward rapidly — it’s not just a creative decision, but one that is now required for delivery. Essentially, something cinematographers used to have to fight for is now being mandated by those commissioning the content.

This is all very exciting, but what does this mean for productions in general? There are wide-ranging implications and questions of logistics — timescales for data transfer and processing increase, post production infrastructure and workflows must be adapted, and archiving and retrieval times are extended (to say the least).

With these UHD and 4K productions having storage requirements into the hundreds of terabytes between various stages of the supply chain, the need to store the data in an accessible, secure and affordable manner is critical.

The majority of production, VFX, post and mastering facilities are currently still working the traditional way — from physical on-premises storage (on-prem for those who like to shave off a couple of syllables) such as NAS, local storage, LTO and SANs to distributed data stores spread across different buildings of a facility.

With UHD and 4K projects sometimes generating north of half a petabyte of data (which needs to stick around until delivery is complete and beyond), it’s not a simple problem to ensure that large chunks of that data are available and accessible for everyone involved in the project who needs it — at least not in the most time-effective way. And as sure as death and taxes, no matter how much storage you have on hand, you will miraculously start running out far sooner than you anticipated. Since this affects all stages of the supply chain, doesn’t it make sense to have some central store of data for everyone to access what they need, when they need it?

Across all areas of the industry, we are seeing the adoption of cloud storage over the traditional on-premises solution and are starting to see opportunities where a cloud-based solution might save money, time or, even better, both! There are numerous cloud “types” out there and below is my overview of the four most widely adopted.

Public: The public cloud can offer large amounts of storage for as long as it’s required (i.e., paid for) and stop charging you for it when it’s not (which is a nice change from having to buy storage with a lengthy support contract). The physical infrastructure of a public cloud is shared with other customers of the cloud provider (this is known as multi-tenancy); however, all the resources allocated to you are invisible to other customers. Your data may be spread across several different areas of the data center (or beyond), depending on where the provider’s infrastructure has the most availability.

Private: Private clouds (from a storage perspective) are useful for those needing finer grained control over their data. Private clouds are those in which companies build their own infrastructure to support the services they want to offer and have complete control over where their data physically resides.

The downside to private clouds is cost, as the business is effectively paying to be their own cloud provider and maintaining the systems over their lifetime. With this in mind, many of the bigger public cloud providers offer “virtual private clouds,” in which a chunk of their resources are dedicated solely to a single customer (single-tenancy). This of course comes at a slightly higher cost than the plain public cloud offering, but does allow more finely grained control for those consumers who need it.

Hybrid: Hybrid clouds are, as the name suggests, a mixture of the two cloud approaches outlined above (public and private). This offers the best of both worlds and can be a useful approach when flexibility is required, or when certain data accessing processes are not practical to run from an off-site public cloud (at time of writing, a 50fps realtime stream of uncompressed 4K raw to a grade, for example, is unlikely to happen from a vanilla public cloud agreement without some additional bandwidth discussions — and costs).

Having the flexibility to migrate data between a virtual private cloud and a local private cloud while continuing to work could help minimize the impact on existing local infrastructure, and could also enable workflows and interchange between local and “cloud-native” applications. Certain processes that take up a lot of resources locally could be relocated to a virtual private cloud at a lower cost, freeing up local resources for more time-sensitive applications.

Community: Here’s where the cloud could shine as a prospect from a production standpoint. This cloud model is based on businesses and those with a stake in the process pooling their resources and collaborating, coming up with a system and overarching set of processes that they all operate under — in effect offering a completely customized set of cloud services for any given project.

From a storage perspective, this could mean a production company running a virtual private cloud with the cost being distributed across all stakeholders accessing that data. Original camera files, for example, may be transferred to this virtual private cloud during the shoot, with post, VFX, marketing and reversioning houses downloading and uploading their work in turn. As all data transfers are monitored and tracked, the billing from a production standpoint on a per-vendor (or departmental) basis becomes much easier — everyone just pays for what they use.
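
A sketch of that per-vendor accounting idea follows; the log entries and the per-GB rate below are invented purely for illustration.

```python
from collections import defaultdict

# Invented transfer log: in practice this would come from the cloud
# provider's metering of tracked uploads and downloads.
transfer_log = [
    {"vendor": "post", "gb": 1200.0},
    {"vendor": "vfx", "gb": 850.5},
    {"vendor": "marketing", "gb": 42.0},
    {"vendor": "vfx", "gb": 410.0},
]

RATE_PER_GB = 0.02  # illustrative per-GB rate in dollars

usage = defaultdict(float)
for entry in transfer_log:
    usage[entry["vendor"]] += entry["gb"]

for vendor, gb in sorted(usage.items()):
    print(f"{vendor}: {gb:,.1f} GB -> ${gb * RATE_PER_GB:,.2f}")
```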

MovieLabs’ “Envisioning Production in 2030” white paper goes deeper into production-related applications of cloud technologies over the coming decade (among other sharp insights) and is well worth absorbing over a cup of coffee or two.

As production technologies progress, we are only ever going to generate more and more data. For storage professionals, those managing systems, or project managers looking to improve timeframes and reduce costs, solutions may not be purely financial or logistical. They may also factor in how easily a system facilitates collaboration and interchange and fosters closer working relationships. On that question, the cloud may well be the best fit.

Studio Images: Goldcrest Post Production / Neil Harrison


Peter Collins is a post professional with experience working in film and television globally. He has worked at the forefront of new production technologies and consults on workflows, project management and industry best practices. He can be contacted on Twitter at @PCPostPro or via email at pcpostpro@icloud.com.

Storage for Editors

By Karen Moltenbrey

Whether you are a small-, medium- or large-size facility, storage is at the heart of your workflow. Consider, for instance, the one-person shop Fin Film Company, which films and edits footage for branding and events, often on water. Then there’s Uppercut, a boutique creative/post studio where collaborative workflow is the key to pushing boundaries on commercials and other similar projects.

Let’s take a look at Uppercut’s workflow first…

Uppercut
Uppercut is a creative editorial boutique shop founded by Micah Scarpelli in 2015 and offering a range of post services. Based in New York and soon Atlanta, the studio employs five editors with their own suites along with an in-house Flame artist who has his own suite.

Taylor Schafer

In contrast to Uppercut’s size, its storage needs are quite large, with five editors working on as many as five projects at a time. Although most of it is commercial work, some of those projects can get heavy in terms of the generated media, which is stored on-site.

So, for its storage needs, the studio employs an EditShare RAID system. “Sometimes we have multiple editors working on one large campaign, and then usually an assistant is working with an editor, so we want to make sure they have access to all the media at the same time,” says Taylor Schafer, an assistant editor at Uppercut.

Additionally, Uppercut uses a Supermicro nearline server to store some of its VFX data, as the Flame artist cannot access the EditShare system on his CentOS operating system. Furthermore, the studio uses LTO-6 archive media in a number of ways. “We use EditShare’s Ark to LTO our partitions once the editors are done with them for their projects. It’s wonderfully integrated with the whole EditShare system. Ark is easy to navigate, and it’s easy to swap LTO tapes in and out, and everything is in one location,” says Schafer.

The studio employs the EditShare Ark to archive its editors’ working files, such as Premiere and Avid projects, graphics, transcodes and so forth. Uppercut also uses BRU (Backup Restore Utility) from Tolis Group to archive larger files that only live on LaCie hard drives and not on EditShare, such as a raw grade. “Then we’re LTO’ing the project and the whole partition with all the working files at the end through Ark,” Schafer explains.

The importance of having a system like this was punctuated over the summer when Uppercut underwent a renovation and had to move into temporary office space at Light Iron, New York — without the EditShare system. As a result, the team had to work off of hard drives and Light Iron’s Avid Nexis for some limited projects. “However, due to storage limits, we mainly worked off of the hard drives, and I realized how important a file storage system that has the ability to share data in real time truly is,” Schafer recalls. “It was a pain having to copy everything onto a hard drive, hand it back to the editor to make new changes, copy it again and make sure all the files were up to date, as opposed to using a storage system like ours, where everything is instantly up to date. You don’t have to worry whether something copied over correctly or not.”

She continues: “Even with Nexis, we were limited in our ability to restore old projects, which lived on EditShare.”

When a new project comes in at Uppercut, the first thing Schafer and her colleagues do is create a partition on EditShare and copy over the working template, whether it’s for Avid or Premiere, on that partition. Then they get their various working files and start the project, copying over the transcodes they receive. As the project progresses, the artists will get graphics and update the partition size as needed. “It’s so easy to change on our end,” notes Schafer. And once the project is completed, she or another assistant will make sure all the files they would possibly need, dating back to day one of the project, are on the EditShare, and that the client files are on the various hard drives and FTP links.

Reebok

“We’ll LTO the partition on EditShare through Ark onto an LTO-6 tape, and once that is complete, then generally we will take the projects or partition off the EditShare,” Schafer continues. The studio has approximately 26TB of RAID storage but, due to the large size of the projects, cannot retain everything on the EditShare long term. Nevertheless, the studio has a nearline server that hosts its masters and generics, as well as any other file the team might need to send to a client. “We don’t always need to restore. Generally the only time we try to restore is when we need to go back to the actual working files, like the Premiere or Avid project,” she adds.

Uppercut avoids keeping data locally on workstations due to the collaborative workflow.

According to Schafer, the storage setup is easy to use. Recently, Schafer finished a Reebok project she and two editors had been working on. The project initially started in Avid Media Composer, which was preferred by one of the editors. The other editor prefers Premiere but is well-versed on the Avid. After they received the transcodes and all the materials, the two editors started working in tandem using the EditShare. “It was great to use Avid on top of it, having Avid bins to open separately and not having to close out of the project and sharing through a media browser or closing out of entire projects, like you have to do with a Premiere project,” she says. “Avid is nice to work with in situations where we have multiple editors because we can all have the project open at once, as opposed to Premiere projects.”

Later, after the project was finished, the editor who prefers Premiere did a director’s cut in that software. As a result, Schafer had to re-transcode the footage, “which was more complicated because it was shot on 16mm, so it was also digitized and on one large video reel instead of many video files — on top of everything else we were doing,” she notes. She re-transcoded for Premiere and created a Premiere project from scratch, then added more storage on EditShare to make sure the files were all in place and that everything was up to date and working properly. “When we were done, the client had everything; the director had his director’s cut and everything was backed up to our nearline for easy access. Then it was LTO’d through Ark on LTO-6 tapes and taken off EditShare, as well as LTO’d on BRU for the raw and the grade. It is now done, inactive and archived.”

Without question, says Schafer, storage is important in the work she and her colleagues do. “It’s not so much about the storage itself, but the speed of the storage, how easily I’m able to access it, how collaborative it allows me to be with the other people I’m working with. Storage is great when it’s accessible and easy for pretty much anyone to use. It’s not so good when it’s slow or hard to navigate and possibly has tech issues and failures,” Schafer says. “So, when I’m looking for storage, I’m looking for something that is secure, fast and reliable, and most of all, easy to understand, no matter the person’s level of technical expertise.”

Chris Aguilar

Fin Film Company
People can count themselves fortunate when they can mix business with pleasure and integrate their beloved hobby with their work. Such is the case for solo producer/director/editor Chris Aguilar of Fin Film Company in Southern California, which he founded a decade ago. As Aguilar says, he does it all, as does Fin Film, which produces everything from conferences to music videos and commercial/branded content. But his real passion involves outdoor adventure paddle sports, from stand-up paddleboarding to pro paddleboarding.

“That’s been pretty much my niche,” says Aguilar, who got his start doing in-house production (photography, video and so forth) for a paddleboard company. Since then, he has been able to turn his passion and adventures into full-time freelance work. “When someone wants an event video done, especially one involving paddleboard races, I get the phone call and go!”

Like many videographers and editors, Aguilar got his start filming weddings. Always into surfing himself, he would shoot surfing videos of friends “and just have fun with it,” he says of augmenting that work. Eventually, this allowed him to move into areas he is more passionate about, such as surfing events and outdoor sports. Now, Aguilar finds that a lot of his time is spent filming paddleboard events around the globe.

Today, there are many one-person studios with solo producers, directors and editors. And as Aguilar points out, their storage needs might not be on the level of feature filmmakers or even independent TV cinematographers, but that doesn’t negate their need for storage. “I have some pretty wide-ranging storage needs, and it has definitely increased over the years,” he says.

In his work, Aguilar has to avoid cumbersome and heavy equipment, such as Atomos recorders, because of their weight on board the watercraft he uses to film paddleboard events. “I’m usually on a small boat and don’t have a lot of room to haul a bunch of gear around,” he says. Rather, Aguilar uses Panasonic’s AG-CX350 as well as Panasonic’s EVA1 and GH5, and on a typical two-day shoot (the event and interviews), he will fill five to six 64GB cards.

“Because most paddleboard races are long-distance, we’re usually on the water for about five to eight hours,” says Aguilar. “Although I am not rolling cameras the whole time, the weight still adds up pretty quickly.”

As for storage, Aguilar offloads his video onto SSD drives or other kinds of external media. “I call it my ‘working drive’ for editing and that kind of thing,” he says. “Once I am done with the edit and other tasks, I have all those source files somewhere.” He relies on G-Technology’s 1TB G-Drive Mobile SSD in the field and for some editing, the company’s ev RAW portable drive for backups and some editing, and Glyph’s Atom SSD for field work.

For years, that “somewhere” has been a cabinet that was filled with archived files. Indeed, that cabinet is currently holding, in Aguilar’s estimate, 30TB of data, if not more. “That’s just the archives. I have 10 or 11 years of archives sitting there. It’s pretty intense,” he adds. But, as soon as he gets an opportunity, those will be ported to the same cloud backup solution he is using for all his current work.

Yes, he still uses the source cards, but for a typical project involving an end-to-end shoot, Aguilar will use at least a 1TB drive to house all the source cards and all the subsequent work files. “Things have changed. Back in the day, I used hard drives – you should see the cabinet in my office with all these hard drives in it. Thank God for SSDs and other options out there. It’s changed our lives. I can get [some brands of] 1TB SSD for $99 or a little more right now. My workflow has me throwing all the source cards onto something like that that’s dedicated to all those cards, and that becomes my little archive,” explains Aguilar.

He usually uploads the content as fast as possible to keep the data secure. “That’s always the concern, losing it, and that’s where Backblaze comes in,” Aguilar says. Backblaze is a cloud backup solution that is easily deployed across desktops and laptops and managed centrally — a solution Aguilar recently began employing. He also uses Iconik Solutions’ digital management system, which eases the task of looking up video files or pulling archived files from Backblaze. The digital management system sits on top of Backblaze and creates little offline proxies of the larger content, allowing Aguilar to view the entire 10-year archive online in one interface.
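
The proxy idea generalizes well: before a master goes to cloud archive, a small offline copy is generated for browsing. Here is a sketch of one way to do that with ffmpeg; the paths, resolution and encode settings are hypothetical, and this is not a description of Iconik's internal mechanism.

```python
import subprocess
from pathlib import Path

def make_proxy(master: Path, proxy_dir: Path) -> Path:
    """Create a small H.264 browsing proxy of a master clip (requires ffmpeg)."""
    proxy_dir.mkdir(parents=True, exist_ok=True)
    proxy = proxy_dir / (master.stem + "_proxy.mp4")
    subprocess.run([
        "ffmpeg", "-i", str(master),
        "-vf", "scale=-2:540",            # 540p is plenty for archive browsing
        "-c:v", "libx264", "-crf", "28",  # small files, acceptable preview quality
        "-c:a", "aac", "-b:a", "96k",
        str(proxy),
    ], check=True)
    return proxy

# make_proxy(Path("races/2016_finish_line.mov"), Path("proxies"))
```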

According to Aguilar, his archived files are an important aspect of his work. Since he works so many paddleboard events, he often receives requests for clips from specific racers or races, some dating back years. Prior to using Backblaze, if someone requested footage, it was a challenge to locate it because he’d have to pull that particular hard drive and plug it into the computer, “and if I had been organized that year, I’ll know where that piece of content is because I can find it. If I wasn’t organized that year, I’d be in trouble,” he explains. “At best, though, it would be an hour and a half or more of looking around. Now I can locate and send it in 15 minutes.”

Aguilar says the Iconik digital management system allows him to pull up the content on the interface and drill down to the year of the race, click on it, download it and send it off or share it directly through his interface to the person requesting the footage.

Aguilar went live with this new Backblaze and digital management system storage workflow this year and has been fully on board with it for just the past two to three months. He is still uncovering all the available features and the power underneath the hood. “Even for a guy who’s got a technical background, I’m still finding things I didn’t know I could do,” and as such, Aguilar is still fine-tuning his workflow. “The neat thing with Iconik is that it could actually support online editing straight up, and that’s the next phase of my workflow, to accommodate that.”

Fortunately or unfortunately, at this time Aguilar is just starting to come off his busy season, so now he can step back and explore the new system. And transfer onto the new system all the material on the old source cards in that cabinet of his.

“[The new solution] is more efficient and has reduced costs since I am not buying all these drives anymore. I can reuse them now. But mostly, it has given me peace of mind that I know the data is secure,” says Aguilar. “I have been lucky in my career to be present for a lot of cool moments in the sport of paddling. It’s a small community and a very close-knit group. The peace of mind knowing that this history is preserved, well, that’s something I greatly appreciate. And I know my fellow paddlers also appreciate it.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Storage Roundtable

By Randi Altman

Every year in our special Storage Edition, we poll those who use storage and those who make storage. This year is no different. The users we’ve assembled for our latest offering weigh in on how they purchase gear and how they employ storage and cloud-based solutions. Storage makers talk about what’s to come from them, how AI and ML are affecting their tools, NVMe growth and more.

Enjoy…

Periscope Post & Audio, GM, Ben Benedetti

Periscope Post & Audio is a full-service post company with facilities in Hollywood and Chicago’s Cinespace. Both facilities provide a range of sound and picture finishing services for TV, film, spots, video games and other media.

Ben Benedetti

What types of storage are you using for your workflows?
For our video department, we have a large, high-speed Quantum media array supporting three color bays, two online edit suites, a dailies operation, two VFX suites and a data I/O department. The 15 systems in the video department are connected via 16Gb fiber.

For our sound department, we are using an Avid Nexis System via 6e Ethernet supporting three Atmos mix stages, two sound design suites, an ADR room and numerous sound-edit bays. All the CPUs in the facility are securely located in two isolated machine rooms (one for video on our second floor and one for audio on the first). All CPUs in the facility are tied via an IHSE KVM system, giving us incredible flexibility to move and deliver assets however our creatives and clients need them. We aren’t interested in being the biggest. We just want to provide the best and most reliable services possible.

Cloud versus on-prem – what are the pros and cons?
We are blessed with a robust pipe into our facility in Hollywood and are actively discussing potential cloud-based storage solutions with our engineering staff. We are already using some cloud-based solutions for our building’s security and CCTV systems, as well as the management of our firewall. But the concept of placing client intellectual property in the cloud sparks some interesting conversations. We always need immediate access to the raw footage and sound recordings of our client productions, so I sincerely doubt we will ever completely rely on a cloud-based solution for the storage of our clients’ original footage. We have many redundancy systems in place to avoid slowdowns in production workflows. This is so critical. Any potential interruption in connectivity that is beyond our control gives me great pause.

How often are you adding or upgrading your storage?
Obviously, we need to be as proactive as we can so that we are never caught unready to take on projects of any size. It involves continually ensuring that our archive system is optimized correctly and requires our data management team to constantly analyze available space and resources.

How do you feel about the use of ML/AI for managing assets?
Any AI or ML automated process that helps us monitor our facility is vital. Technology advancements over the past decade have allowed us to achieve amazing efficiencies. As a result, we can give the creative executives and storytellers we service the time they need to realize their visions.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
As we have facilities in both Chicago and Hollywood, our ability to take advantage of Google cloud-based services for administration has been a real godsend. It’s not glamorous, but it’s extremely important to keeping our facilities running at peak performance.

The level of coordination we have achieved in that regard has been tremendous. Those low-tiered storage systems provide simple and direct solutions to our administrative and accounting needs, but when it comes to the high-performance requirements of our facility’s color bays and audio rooms, we still rely on the high-speed on-premises storage solutions.

For simple archiving purposes, a cloud-based solution might work very well, but for active work currently in production … we are just not ready to make that leap … yet. Of course, given Moore’s Law and the exponential advancement of technology, our position could change rapidly. The important thing is to remain open and willing to embrace change as long as it makes practical sense and never puts your client’s property at risk.

Panasas, Storage Systems Engineer, RW Hawkins

RW Hawkins

Panasas offers a scalable high-performance storage solution. Its PanFS parallel file system, delivered on the ActiveStor appliance, accelerates data access for VFX feature production, Linux-based image processing, VR/AR and game development, and multi-petabyte-sized active media archives.

What kind of storage are you offering, and will that be changing in the coming year?
We just announced that we are now shipping the next generation of the PanFS parallel file system on the ActiveStor Ultra turnkey appliance, which is already in early deployment with five customers.

This new system offers unlimited performance scaling in 4GB/s building blocks. It uses multi-tier intelligent data placement to maximize storage performance by placing metadata on low-latency NVMe SSDs, small files on high IOPS SSDs and large files on high-bandwidth HDDs. The system’s balanced-node architecture optimizes networking, CPU, memory and storage capacity to prevent hot spots and bottlenecks, ensuring high performance regardless of workload. This new architecture will allow us to adapt PanFS to the ever-changing variety of workloads our customers will face over the next several years.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Absolutely. However, too many tiers can lead to frustration around complexity, loss of productivity and poor reliability. We take a hybrid approach, whereby each server contains multiple types of storage media. Using intelligent data placement, we put data on the most appropriate tier automatically. Using this approach, we can often replace a performance tier and a tier two active archive with one cost-effective appliance. Our standard file-based client makes it easy to gateway to an archive tier such as tape or an object store like S3.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI/ML is so widespread, it seems to be all-encompassing. Media tools will benefit greatly because many of the mundane production tasks will be optimized, allowing for more creative freedom. From a storage perspective, machine learning is really pushing performance in new directions; low latency and metadata performance are becoming more important. Large amounts of unstructured data with rich metadata are the norm, and today’s file systems need to adapt to meet these requirements.

How has NVMe advanced over the past year?
Everyone is taking notice of NVMe; it is easier than ever to build a fast array and connect it to a server. However, there is much more to making a performant storage appliance than just throwing hardware at the problem. My customers are telling me they are excited about this new technology but frustrated by the lack of scalability, the immaturity of the software and the general lack of stability. The proven way to scale is to build a file system on top of these fast boxes and connect them into one large namespace. We will continue to augment our architecture with these new technologies, all the while keeping an eye on maintaining our stability and ease of management.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Today’s NAS can take on all the tasks that historically could be done only with SAN. The main thing holding back traditional NAS has been the client access protocol. With network-attached parallel clients, like Panasas’ DirectFlow, customers get advanced client caching, full POSIX semantics and massive parallelism over standard Ethernet.

Regarding cloud, my customers tell me they want all the benefits of cloud (data center consolidation, inexpensive power and cooling, ease of scaling) without the vendor lock-in and metered data access of the “big three” cloud providers. A scalable parallel file system forms the core of a private cloud model that yields the benefits without the drawbacks. File-based access to the namespace will continue to be required for most non-web-based applications.

Goldcrest Post, New York, Technical Director, Ahmed Barbary

Goldcrest Post is an independent post facility, providing solutions for features, episodic TV, docs, and other projects. The company provides editorial offices, on-set dailies, picture finishing, sound editorial, ADR and mixing, and related services.

Ahmed Barbary

What types of storage are you using for your workflows?
Storage performance in the post stage is tremendously demanding. We are using multiple SAN systems in office locations that provide centralized storage and easy access to disk arrays, servers, and other dedicated playout applications to meet storage needs throughout all stages of the workflow.

While backup refers to duplicating the content for peace of mind, short-term retention, and recovery, archival signifies transferring the content from the primary storage location to long-term storage to be preserved for weeks, months, and even years to come. Archival storage needs to offer scalability, flexible and sustainable pricing, as well as accessibility for individual users and asset management solutions for future projects.

LTO has been a popular choice for archival storage for decades because of its affordable, high-capacity solutions with low write/high read workloads that are optimal for cold storage workflows. The increased need for instant access to archived content today, coupled with the slow roll-out of LTO-8, has made tape a less favorable option.

Cloud versus on-prem – what are the pros and cons?
The fact is each option has its positives and negatives, and understanding that and determining how both cloud and on-premises software fit into your organization are vital. So, it’s best to be prepared and create a point-by-point comparison of both choices.

When looking at the pros and cons of cloud vs. on-premises solutions, everything starts with an understanding of how these two models differ. With a cloud deployment, the vendor hosts your information and offers access through a web portal. This enables more mobility and flexibility of use for cloud-based software options. When looking at an on-prem solution, you are committing to local ownership of your data, hardware, and software. Everything is run on machines in your facility with no third-party access.

How often are you adding or upgrading your storage?
We keep track of new technologies and continuously upgrade our systems, but when it comes to storage, it’s a huge expense. When deploying a new system, we do our best to future-proof and ensure that it can be expanded.

How do you feel about the use of ML/AI for managing assets?
For most M&E enterprises, the biggest potential of AI lies in automatic content recognition, which can drive several path-breaking business benefits. For instance, most content owners have thousands of video assets.

Cataloging, managing, processing and repurposing this content typically requires extensive manual effort. Advancements in AI and ML algorithms have now made it possible to drastically cut down the time taken to perform many of these tasks. But there is still a lot of work to be done — especially as ML algorithms need to be trained, using the right kind of data and solutions, to achieve accurate results.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
Data sets have unique lifecycles. Early in the lifecycle, people access some data often, but the need for access drops drastically as the data ages. Some data stays idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation, while other data sets are actively read and modified throughout their lifetimes.
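
One concrete way to express that lifecycle is with object-storage lifecycle rules. As a sketch, here is how such a policy might look on AWS S3 via boto3; the bucket name, prefix and day counts are illustrative only, not Goldcrest's configuration.

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-show-archive",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "age-out-finishing-media",
            "Status": "Enabled",
            "Filter": {"Prefix": "online/"},
            "Transitions": [
                # Access drops sharply once delivery is complete...
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                # ...and rarely happens at all after six months.
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
            ],
            # Some data simply expires; other data sets would omit this rule.
            "Expiration": {"Days": 2555},
        }]
    },
)
```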

Rohde & Schwarz, Product Manager, Storage Solutions, Dirk Thometzek

Rohde & Schwarz offers broadcast and media solutions to help companies grow in media production, management and delivery in the IP and wireless age.

Dirk Thometzek

What kind of storage are you offering, and will that be changing in the coming year?
The industry is constantly changing, so we monitor market developments and key demands closely. We will be adding new features to the R&S SpycerNode in the next few months that will enable our customers to get their creative work done without focusing on complex technologies. The R&S SpycerNode will be extended with JBODs, which will allow seamless integration with our erasure coding technology, guaranteeing complete resilience and performance.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Each workflow is different, so almost no two systems are alike. The real artistry is to tailor storage systems to actual requirements without over-provisioning hardware or over-stressing budgets. Using different tiers can be very helpful in building effective systems, but tiers might introduce additional difficulties into the workflows if the system isn’t properly designed.

Rohde & Schwarz has developed R&S SpycerNode in a way that its performance is linear and predictable. Different tiers are aggregated under a single namespace, and our tools allow seamless workflows while complexity remains transparent to the users.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
Machine learning and artificial intelligence can be helpful to automate certain tasks, but they will not replace human intervention in the short term. It might not be helpful to enrich media with too much data because doing so could result in imprecise queries that return far too much content.

However, clearly defined changes in sequences or reoccurring objects — such as bugs and logos — can be used as a trigger to initiate certain automated workflows. Certainly, we will see many interesting advances in the future.

How has NVMe advanced over the past year?
NVMe has very interesting aspects. Data rates and reduced latencies are admittedly quite impressive and are garnering a lot of interest. Unfortunately, we do see a trend inside our industry to be blinded by pure performance figures and exaggerated promises without considering hardware quality, life expectancy or proper implementation. Additionally, if well-designed and proven solutions exist that are efficient enough, then it doesn’t make sense to embrace a technology just because it is available.

R&S is dedicated to bringing high-end devices to the M&E market. We think that reliability and performance build the foundation for user-friendly products. Next year, we will update the market on how NVMe can be used in the most efficient way within our products.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We definitely see a trend away from classic Fibre Channel to Ethernet infrastructures for various reasons. For many years, NAS systems have been replacing central storage systems based on SAN technology for a lot of workflows. Unfortunately, standard NAS technologies will not support all the necessary workflows and applications in our industry. Public and private cloud storage systems play an important role in overall concepts, but they can’t fulfill all media production requirements or simplify workflows by default. Plus, when it comes to subscription models, [sometimes there could be unexpected fees]. In fact, we do see quite a few customers returning to their previous services, including on-premises storage systems such as archives.

When it comes to the very high data rates necessary for high-end media productions, NAS will relatively quickly reach its technical limits. Only block-level access can deliver the reliable performance necessary for uncompressed productions at high frame rates.

That does not necessarily mean Fibre Channel is the only solution. The R&S SpycerNode, for example, features a unified 100Gb/s Ethernet backbone, wherein clients and the redundant storage nodes are attached to the same network. This allows the clients to access the storage over industry-leading NAS technology or native block level while enabling true flexibility using state-of-the-art technology.

MTI Film, CEO, Larry Chernoff

Hollywood’s MTI Film is a full-service post facility, providing dailies, editorial, visual effects, color correction, and assembly for film, television, and commercials.

Larry Chernoff

What types of storage are you using for your workflows?
MTI uses a mix of spinning disks and SSDs. Our volumes range from 700TB to 1,000TB and are assigned to projects depending on the volume of expected camera files. The SSD volumes are substantially smaller and are used to play back ultra-large-resolution files when several users need the same file at once.

Cloud versus on-prem — what are the pros and cons?
MTI only uses on-prem storage at the moment due to the real-time, full-resolution nature of our playback requirements. There is certainly a place for cloud-based storage but, as a finishing house, it does not apply to most of our workflows.

How often are you adding or upgrading your storage?
We are constantly adding storage to our facility and have added or replaced storage every year for the last five. We now have more than 8PB, with plans for more in the future.

How do you feel about the use of ML/AI for managing assets?
Sounds like fun!

What role might the different tiers of cloud storage play in the lifecycle of an asset?
For a post house like MTI, we consider cloud storage useful only for “deep storage,” since our bandwidth needs are very high. The amount of Internet connectivity we would require to replicate our current on-prem workflows would be prohibitively expensive for a facility such as MTI. Speed and ease of access are critical to fulfilling our customers’ demanding schedules.

OWC, Founder/CEO, Larry O’Connor

Larry O’Connor

OWC offers storage, connectivity, software, and expansion solutions designed to enhance, accelerate, and extend the capabilities of Mac- and PC-based technology. Their products range from the home desktop to the enterprise rack to the audio recording studio to the motion picture set and beyond.

What kind of storage are you offering, and will that be changing in the coming year?
OWC will be expanding our Jupiter line of NAS storage products in 2020 with an all-new external flash-based array. We will also be launching the OWC ThunderBay Flex 8, a three-in-one Thunderbolt 3 storage, docking and PCIe expansion solution for digital imaging, VFX, video production and video editing.

Are certain storage tiers more suitable for different asset types, workflows etc?
Yes. SSDs and NVMe are better for on-set storage and editing. Once you are finished and looking to archive, HDDs are a better solution for long-term storage.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
We see U.2 SSDs as a trend that can help storage in this space, along with solutions that allow external docking of U.2 drives across different workflow needs.

How has NVMe advanced over the past year?
We have seen NVMe technology become higher in capacity, higher in performance and substantially lower in power draw. Yet even with all the improving performance, costs are lower today than they were 12 months ago.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
I see both still having their place — I can’t speak to whether one will overtake the other. SANs provide other services that typically go hand in hand with M&E needs.

As for cloud, I can see more cloud coming in, but for M&E on-site needs, it doesn’t come anywhere near meeting the data-rate demands of editing and the like. Everything independently has its place.

EditShare, VP of Product Management, Sunil Mudholkar

EditShare offers a range of media management solutions, from ingest to archive with a focus on media and entertainment.

Sunil Mudholkar

What kind of storage are you offering and will that be changing in the coming year?
EditShare currently offers RAID- and SSD-based storage, along with our nearline SATA HDD-based storage. We are on track to deliver NVMe- and cloud-based solutions in the first half of 2020. The latest major upgrade of our file system and management console, EFS2020, enables us to migrate to emerging technologies, including cloud deployment and NVMe hardware.

EFS can manage and use multiple storage pools, enabling clients to use the most cost-effective tiered storage for their production, all while keeping that single namespace.

Are certain storage tiers more suitable for different asset types, workflows etc?
Absolutely. It’s clearly financially advantageous to have varying performance tiers of storage in line with the workflows the business requires. This also extends to the cloud, where we are seeing public cloud-based solutions augment or replace both high-performance and long-term storage needs. Tiered storage enables clients to be at their most cost-effective by including parking storage and cloud storage for DR, while keeping SSD and NVMe storage ready and primed for their high-end production.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI and ML have somewhat of an advantage for storage when it comes to things like algorithms that are designed to automatically move content between storage tiers to optimize costs. This has been commonplace in the distribution side of the ecosystem for a long time with CDNs. ML and AI have a great ability to impact the Opex side of asset management and metadata by helping to automate very manual, repetitive data entry tasks through audio and image recognition, as an example.

AI can also assist by removing mundane human-centric repetitive tasks, such as logging incoming content. AI can assist with the growing issue of unstructured and unmanaged storage pools, enabling the automatic scanning and indexing of every piece of content located on a storage pool.

How has NVMe advanced over the past year?
Like any other storage medium, when NVMe was first introduced there were limited use cases that made sense financially, and only a certain few could afford to deploy it. As the technology scales, form factors change and pricing becomes more competitive and in line with other storage options, it can become more mainstream. This is what we are starting to see with NVMe.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Yes, NAS has overtaken SAN. It’s easier technology to deal with — this is fairly well acknowledged. It’s also easier to find people/talent with experience in NAS. Cloud will start to replace more NAS workflows in 2020, as we are already seeing today. For example, our ACL media spaces project options within our management console were designed for SAN clients migrating to NAS. They liked the granular detail that SAN offered, but wanted to migrate to NAS. EditShare’s ACL enables them to work like a SAN but in a NAS environment.

Zoic Studios CTO Saker Klippsten

Zoic Studios is an Emmy-winning VFX company based in Culver City, California, with sister offices in Vancouver and NYC. It creates computer-generated special effects for commercials, films, television and video games.

Saker Klippsten

What types of projects are you working on?
We work on a range of projects for series, film, commercial and interactive games (VR/AR). Most of the live-action projects are mixed with CG/VFX and some full-CG animated shots. In addition, there is typically some form of particle or fluid effects simulation going on, such as clouds, water, fire, destruction or other surreal effects.

What types of storage are you using for those workflows?
Cryogen – Off-the-shelf tape/disk/chip. Access time: more than a day. Mostly tape-based and completely offline, which requires human intervention to load tapes or restore from drives.
Freezing – Tape robot library. Access time: less than half a day. Tape-based and in the robot; no human intervention required.
Cold – Spinning disk. Access time: slow (online). Disaster recovery and long-term archiving.
Warm – Spinning disk. Access time: medium (online). Data that still needs to be accessed promptly and transferred quickly (asset depot).
Hot – Chip-based. Access time: fast (online). Generic SSD active production storage.
Blazing – Chip-based. Access time: uber fast (online). Dedicated NVMe storage for 4K and 8K playback, databases and specific simulation workflows.
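Klippsten’s taxonomy reads naturally as a lookup table. Purely as an illustration, here is that hierarchy expressed as a small Python structure (names and descriptions taken from the list above):

    # Zoic-style storage tiers: medium and rough access time for each.
    TIERS = {
        "cryogen":  ("offline tape/disk/chip", "> 1 day, human intervention"),
        "freezing": ("tape robot library",     "< 0.5 day, automated"),
        "cold":     ("spinning disk",          "slow (online), DR/archive"),
        "warm":     ("spinning disk",          "medium (online), asset depot"),
        "hot":      ("SSD",                    "fast (online), production"),
        "blazing":  ("NVMe",                   "uber fast, 4K/8K playback"),
    }

    for name, (medium, access) in TIERS.items():
        print(f"{name:9s} {medium:24s} {access}")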

Cloud versus on-prem – what are the pros and cons?
The great debate! I tend not to look at it as pro vs. con, but as a question of where you are as a company. Many factors are involved, and there is no one size that fits all, as many are led to believe; neither cloud nor on-prem alone can solve all your workflow and business challenges.

Cinemax’s Warrior (Credit: HBO/David Bloomer)

There are workflows that are greatly suited to the cloud and others that are potentially cost-prohibitive for a number of reasons, such as the size of the data set being generated. Dynamics cache simulations are a good example; they can quickly generate tens or sometimes hundreds of terabytes. If the workflow requires you to transfer this data on premises for review, it could take a very long time. Other workflows, such as 3D CG-generated data, can take better advantage of the cloud: they typically have small source-file payloads that need to be uploaded and then only require final frames to be downloaded, which is much more manageable. And depending on the size of your company and the level of technical people on hand, the cloud itself can be a problem.

What triggers buying more storage in your shop?
Storage tends to be one of the largest and most significant purchases at many companies. End users do not have a clear concept of what happens at the other end of the wire from their workstation.

All they know is that there is never enough storage and it’s never fast enough. Not investing in the right storage can not only be detrimental to the delivery and production of a show, but also to the mental focus and health of the end users. If artists are constantly having to stop and clean up/delete, it takes them out of their creative rhythm and slows down task completion.

If the storage is not performing properly and is slow, this will not only have an impact on delivery, but end users might also fear being perceived as slow themselves. So what goes into buying more storage? What type of impact will buying more storage have on the various workflows and pipelines? Remember, if you are a mature company, you are buying 2TB of storage for every 1TB required for DR purposes, so you have a complete up-to-the-hour backup.

Do you see ML/AI as important to your content strategy?
We have been using various layers of ML and heuristics sprinkled throughout our content workflows and pipelines. As an example, we look at the storage platforms we use to understand what’s on our storage, how and when it’s being used, what it’s being used for and how it’s being accessed. We look at the content to see what it contains and its characteristics. What are the overall costs to create that content? What insights can we learn from it for similarly created content? How can we reuse assets to be more efficient?

Dell Technologies, CTO, Media & Entertainment, Thomas Burns

Thomas Burns

Dell offers technologies across workstations, displays, servers, storage, networking and VMware, and partnerships with key media software vendors to provide media professionals the tools to deliver powerful stories, faster.

What kind of storage are you offering, and will that be changing in the coming year?
Dell Technologies offers a complete range of storage solutions from Isilon all-flash and disk-based scale-out NAS to our object storage, ECS, which is available as an appliance or a software-defined solution on commodity hardware. We have also developed and open-sourced Pravega, a new storage type for streaming data (e.g. IoT and other edge workloads), and continue to innovate in file, object and streaming solutions with software-defined and flexible consumption models.

Are certain storage tiers more suitable for different asset types, workflows etc?
Intelligent tiering is crucial to building a post and VFX pipeline. Today’s global pipelines must include software that distinguishes between hot data on the fastest tier and cold or versioned data on less performant tiers, especially in globally distributed workflows. Bringing applications to the media rather than unnecessarily moving media into a processing silo is the key to an efficient production.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
New developments in storage class memory (SCM) — including the use of carbon nanotubes to create a nonvolatile, standalone memory product with speeds rivaling DRAM without needing battery backup — have the potential to speed up media workflows and eliminate AI/ML bottlenecks. New protocols such as NVMe allow much deeper I/O queues, overcoming today’s bus bandwidth limits.

GPUDirect enables direct paths between GPUs and network storage, bypassing the CPU for lower latency access to GPU compute — desirable for both M&E and AI/ML applications. Ethernet mesh, a.k.a. Leaf/Spine topologies, allow storage networks to scale more flexibly than ever before.

How has NVMe advanced over the past year?
Advances in I/O virtualization make NVMe useful in hyper-converged infrastructure, by allowing different virtual machines (VMs) to share a single PCIe hardware interface. Taking advantage of multi-stream writes, along with vGPUs and vNICs, allows talent to operate more flexibly as creative workstations start to become virtualized.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
IP networks scale much better than any other protocol, so NAS allows on-premises workloads to be managed more efficiently than SAN. Object stores (the basic storage type for cloud services) support elastic workloads extremely well and will continue to be an integral part of public, hybrid and private cloud media workflows.

ATTO, Manager, Products Group, Peter Donnelly

ATTO network and storage connectivity products are purpose-made to support all phases of media production, from ingest to final archiving. ATTO offers an ecosystem of high-performance connectivity adapters, network interface cards and proprietary software.

Peter Donnelly

What kind of storage are you offering, and will that be changing in the coming year?
ATTO designs and manufactures storage connectivity products, and although we don’t manufacture storage, we are a critical part of the storage ecosystem. We regularly work with our customers to find the best solutions to their storage workflow and performance challenges.

ATTO designs products that use a wide variety of storage protocols. SAS, SATA, Fibre Channel, Ethernet and Thunderbolt are all part of our core technology portfolio. We’re starting to see more interest in NVMe solutions. While NVMe has already seen some solid growth as an “inside-the-box” storage solution, scalability, cost and limited management capabilities continue to limit its adoption as an external storage solution.

Data protection is still an important criterion in every data center. We are seeing a shift from traditional hardware RAID and parity RAID to software RAID and parity code implementations. Disk capacity has grown so quickly that it can take days to rebuild a RAID group with hardware controllers. Instead, we see our customers taking advantage of rapidly dropping storage prices and using faster, reliable software RAID implementations with basic HBA hardware.

How has NVMe advanced over the past year?
For inside-the-box storage needs, we have absolutely seen adoption skyrocket. It’s hard to beat the price-to-performance ratio of NVMe drives for system boot, application caching and similar use cases.

ATTO is working independently and with our ecosystem partners to bring those same benefits to shared, networked storage systems. Protocols such as NVMe-oF and FC-NVMe are enabling technologies that are starting to mature, and we see these getting further attention in the coming year.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We see customers looking for ways to more effectively share storage resources. Acquisition and ongoing support costs, as well as the ability to leverage existing technical skills, seem to be important factors pulling people toward Ethernet-based solutions.
However, there is no free lunch, and these same customers aren’t able to compromise on performance and latency concerns, which are important reasons why they used SANs in the first place. So there’s a lot of uncertainty in the market today. Since we design and market products in both the NAS and SAN spaces, we spend a lot of time talking with our customers about their priorities so that we can help them pick the solutions that best fit their needs.

Masstech, CTO, Mike Palmer

Masstech creates intelligent storage and asset lifecycle management solutions for the media and entertainment industry, focusing on broadcast and video content storage management with IT technologies.

Mike Palmer

What kind of storage are you offering, and will that be changing in the coming year?
Masstech products are used to manage any combination of storage types, from on-prem disk and tape to multiple tiers of cloud. Masstech allows content to move without friction across and through all of these technologies, most often using automated workflows and unified interfaces that hide the complexity otherwise required to directly manage content across so many different types of storage.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
One of the benefits of having such a wide range of storage technologies to choose from is that we have the flexibility to match application requirements with the optimum performance characteristics of different storage technologies in each step of the lifecycle. Users now expect that content will automatically move to storage with the optimal combination of speed and price as it progresses through workflow.

In the past, HSM was designed to handle this task for on-prem storage. The challenge is much wider now with the addition of a plethora of storage technologies and services. Rather than moving between just two or three tiers of on-prem storage, content now often needs to flow through a hybrid environment of on-prem and cloud storage, often involving multiple cloud services, each with three or four sub-tiers. Making that happen in a seamless way, both to users and to integrated MAMs and PAMs, is what we do.

What do you see are the big technology trends that can help storage for M&E?
Cloud storage pricing continues to drop, along with advances in storage density in both spinning disk and solid state. All of these are interrelated and have the general effect of lowering costs for the end user. For those who have specific business requirements that drive on-prem storage, the availability of higher-density tape and optical disks is enabling petabytes of very efficient cold storage in less space than a single rack.

How has NVMe advanced over the past year?
In addition to the obvious application of making media available more quickly, the greatest value of NVMe within M&E may be found in enabling faster search of both structured and unstructured metadata associated with media. Yes, we need faster access to media, but in many cases we must first find the media before it can be accessed. NVMe can make that search experience, particularly for large libraries, federated data sets and media lakes, lightning quick.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
Just as AWS, Azure and Wasabi, among other large players, have replaced many instances of on-prem NAS, so have Box, Dropbox, Google Drive and iCloud replaced many (but not all) of the USB drives gathering dust in the bottom of desk drawers. As NAS is built on top of faster and faster performing technologies, it is also beginning to put additional pressure on SAN – particularly for users who are sensitive to price and the amount of administration required.

Backblaze, Director of Product Marketing, M&E, Skip Levens

Backblaze offers easy-to-use cloud backup, archive and storage services. With over 12 years of experience and more than 800 petabytes of customer data under management, Backblaze provides cloud storage to anyone looking to create, distribute and preserve their content forever.

What kind of storage are you offering and will that be changing in the coming year?
At Backblaze, we offer a single class, or tier, of storage where everything’s active and immediately available wherever you need it, and it’s protected better than it would be on spinning disk or RAID systems.

Skip Levens

Are certain storage tiers more suitable for different asset types, workflows, etc?
Absolutely. For example, animators need different storage than a team of editors all editing a 4K project at the same time. And keeping your entire content library on your shared storage could get expensive indeed.

We’ve found that users can give up all that unneeded complexity and cost that gets in the way of creating content in two steps:
– Step one is getting off of the “shared storage expansion treadmill” and buying just enough on-site shared storage that fits your team. If you’re delivering a TV show every week and need a SAN, make it just large enough for your work in process and no larger.

– Step two is to get all of your content into active cloud storage. This not only frees up space on your shared storage, but also makes all of your content highly protected and highly available at the same time. Since most of your team probably uses a MAM to find and discover content, the storage that assets actually live on is completely transparent.

Now life gets very simple for creative support teams managing that workflow: your shared storage stays fast and lean, and you can stop paying for storage that doesn’t fit that model. This could include getting rid of LTO, big JBODs or anything with a limited warranty and a maintenance contract.

What do you see are the big technology trends that can help storage for M&E?
For shooters and on-set data wranglers, the new class of ultra-fast flash drives dramatically speeds up collecting massive files with extremely high resolution. Of course, raw content isn’t safe until it’s ingested, so even after moving shots to two sets of external drives or a RAID cart, we’re seeing cloud archive on ingest. Uploading files from a remote location, before you get all the way back to the editing suite, unlocks a lot of speed and collaboration advantages — the content is protected faster, and your ingest tools can start making proxy versions that everyone can start working on, such as grading, commenting, even rough cuts.

We’re also seeing cloud-delivered workflow applications. The days of buying and maintaining a server and storage in your shop to run an application may seem old-fashioned. Especially when that entire experience can now be delivered from the cloud and on-demand.

Iconik, for example, is a complete, personalized deployment of a project collaboration, asset review and management tool – but it lives entirely in the cloud. When you log in, your app springs to life instantly in the cloud, so you only pay for the application when you actually use it. Users just want to get their creative work done and can’t tell it isn’t a traditional asset manager.

How has NVMe advanced over the past year?
NVMe means flash storage can completely ditch legacy storage controllers like the ones on traditional SATA hard drives. When you can fit 2TB of storage on a stick that’s only 22 millimeters by 80 millimeters — not much larger than a stick of gum — that is 20 times faster than an external spinning hard drive and draws only about 3.5 watts, that’s a game changer for data wrangling and camera-cart offload right now.

And that’s on PCIe 3. The PCI Express standard is evolving faster and faster, too. PCIe 4 motherboards are starting to come online now, PCIe 5 was finalized in May, and PCIe 6 is already in development. When every generation doubles the available bandwidth that can feed that NVMe storage, the future is very, very bright for NVMe.
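To put numbers on that doubling: a typical NVMe drive uses four PCIe lanes, and the per-lane line rates are published figures. A quick calculation (the usable-throughput figures ignore protocol overhead beyond the 128b/130b encoding):

    # Rough usable bandwidth of an x4 NVMe link per PCIe generation.
    # Line rates: PCIe 3 = 8 GT/s, PCIe 4 = 16 GT/s, PCIe 5 = 32 GT/s per lane.
    def pcie_x4_gb_per_s(gt_per_lane):
        return gt_per_lane * 4 * (128 / 130) / 8   # GB/s after encoding

    for gen, gt in [(3, 8), (4, 16), (5, 32)]:
        print(f"PCIe {gen} x4: ~{pcie_x4_gb_per_s(gt):.1f} GB/s")
    # PCIe 3 x4: ~3.9 GB/s; PCIe 4 x4: ~7.9 GB/s; PCIe 5 x4: ~15.8 GB/s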

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
For users who work in widely distributed teams, the cloud is absolutely eating NAS. When the solution driving your team’s projects and collaboration is the dashboard and focus of the team — and active cloud storage seamlessly supports all of the content underneath — it no longer needs to be on a NAS.

But for large teams that do fast-paced editing and creation, the answer to “what is the best shared storage for our team” is still usually a SAN, or tightly-coupled, high-performance NAS.

Either way, by moving content and project archives to the cloud, you can keep SAN and NAS costs in check and have a more productive workflow, and more opportunities to use all that content for new projects.

Behind the Title: Matter Films president Matt Moore

Part of his job is finding talent and production partners. “We want the most innovative and freshest directors, cinematographers and editors from all over the world.”

NAME: Matt Moore

COMPANY: Phoenix and Los Angeles’ Matter Films
and OH Partners

CAN YOU DESCRIBE YOUR COMPANY?
Matter Films is a full-service production company that takes projects from script to screen — doing both pre-production and post in addition to producing content. We are joined by our sister company OH Partners, a full-service advertising agency.

WHAT’S YOUR JOB TITLE?
President of Matter Films and CCO of OH Partners.

WHAT DOES THAT ENTAIL?
I’m lucky to be the only person in the company who gets to serve on both sides of the fence. Knowing that, I think that working with Matter and OH gives me a unique insight into how to meet our clients’ needs best. My number one job is to push both teams to be as innovative and outside of the box as possible. A lot of people do what we do, so I work on our points of differentiation.

Gila River Hotels and Casinos – Sports Partnership

I spend a lot of time finding talent and production partners. We want the most innovative and freshest directors, cinematographers and editors from all over the world. That talent must push all of our work to be the best. We then pair that partner with the right project and the right client.

The other part of my job is figuring out where the production industry is headed. We launched Matter Films because we saw a change within the production world — many production companies weren’t able to respond quickly enough to the need for social and digital work, so we started a company able to address that need and then some.

My job is to always be selling ideas and proposing different avenues we could pursue with Matter and with OH. I instill trust in our clients by using our work as a proof point that the team we’ve assembled is the right choice to get the job done.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
People assumed when we started Matter Films that we would keep everything in-house and have no outside partners, and that’s just not the case. Matter actually gives us even more resources to find those innovators from across the globe. It allows us to do more.

The variation in budget size that we accept at Matter Films would also surprise people. We’ll take on projects with budgets anywhere from $1,000 to $1 million-plus. We’ve staffed ourselves in such a way that even small projects can be profitable.

WHAT’S YOUR FAVORITE PART OF THE JOB?
It sounds so cliché, but I would have to say the people. I’m around people that I genuinely want to see every single day. I love when we all get together for our meetings, because while we do discuss upcoming projects, we also goof off and just hang out. These are the people I go into battle with every single day. I choose to go into the battle with people that I whole-heartedly care about and enjoy being with. It makes life better.

WHAT’S YOUR LEAST FAVORITE?
What’s tough is how fast this business changes. Every day there’s a new conference or event, and just when you think an idea you’ve had is cutting edge and brand new, you realize you have to keep going and push to be more innovative. Just when you get caught up, you’re already behind. The big challenge is how you’re going to constantly step up your game.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
I’m an early morning person. I can get more done if I start before everybody else.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I was actually pre-med for two years in college with the desire to be a surgeon. When I was an undergrad, I got an abysmal grade on one of our exams and the professor pulled me aside and told me that a score that low proved that I truly did not care about learning the material. He allowed me to withdraw from the class to find something I was more passionate about, and that was life changing.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I found out in college. I genuinely just loved making a product that either entertained or educated people. I started in the news business, so every night I would go home after work and people could tell me about the news of the day because of what I’d written, edited and put on TV.

People knew about what was going on because of the stories we told. I have a great love for telling stories and having others engage with them. If you’re good at the job, people’s lives will be different as a result of what you create.

Barbuda Ocean Club

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We just wrapped a large shoot in Maryland for Live Casino, and a different tourism project for a luxury property in Barbuda. We’re currently developing our work with Virgin, and we have a shoot for a technology company focused on developing autonomous driving and green energy upcoming as well. We’re all over the map with the range of work that we have in the pipeline.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
One of my favorite projects actually took place before Matter Films was officially around, but we had a lot of the same team. We did an environmentally sensitive project for Sedona, Arizona, called Sedona Secret 7. Our campaign told the millions of tourists who arrive there how to find other equally beautiful destinations in and around Sedona instead of just the ones everyone already knew.

It was one of those times when advertising wasn’t about selling something, but about saving something.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, a pair of AirPods and a laptop. The Matter Films team gave me AirPods for my birthday, so those are extra special!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
My usage on Instagram is off the charts; it’s embarrassing. While I do look at everyone’s vacation photos or what workout they did that day, I also use Instagram as a talent sourcing tool for a lot of work purposes: I follow directors, animation studios and tons of artists that I either get inspiration from or want to work with.

A good percentage of people I follow are creatives that I want to work with at some point. I also reach out to people all the time for potential collaborations.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I love outdoor adventures. Some days I’ll go on a crazy hike here in Arizona or rent a four-wheeler and explore the desert or mountains. I also love just hanging out with my kids — they’re a great age.

Alaina Zanotti rejoins Cartel as executive producer

Santa Monica-based editorial and post studio Cartel has named Alaina Zanotti as executive producer to help with business development and to oversee creative operations along with partner and executive producer Lauren Bleiweiss. Additionally, Cartel has bolstered its roster with the signing of comedic editor Kevin Zimmerman.

Kevin Zimmerman

With more than 15 years of experience, Zanotti joins Cartel after working for clients that include BBDO, Wieden+Kennedy, Deutsch, Google, Paramount and Disney. Zanotti most recently served as senior executive producer at Method Studios, where she oversaw business development for global VFX and post. Prior to that stint, she joined Cartel in 2016 to assist the newly established post and editorial house’s growth. Previously, Zanotti spent more than a decade driving operations and raising brand visibility for Method and Company 3.

Editor Zimmerman joins Cartel following a tenure as a freelance editor, during which his comedic timing and entrepreneurial spirit earned him commercial work for Avocados From Mexico and Planters that aired during 2019’s Super Bowl.

Throughout his two-decade career in editorial, Zimmerman has held positions at Spot Welders, NO6, Whitehouse Post and FilmCore, with recent work for Sprite, Kia, hotels.com, Microsoft and Miller Lite, and a PSA for Girls Who Code. Zimmerman has previously worked with Cartel partners Adam Robinson and Leo Scott.

Object Matrix and Arvato partner for managing digital archives

Object Matrix and Arvato Systems have partnered to help companies instantly access, manage, browse and edit clips from their digital archives.

Using Arvato’s production asset management platform, VPMS EditMate along with the media-focused object storage solution from Object Matrix, MatrixStore, the companies report that organizations can significantly reduce the time needed to manage media workflows, while making content easily discoverable. The integration makes it easy to unlock assets held in archive, enable creative collaboration and monetize archived assets.

MatrixStore is a media-focused private and hybrid cloud storage platform that provides instant access to all media assets. Built upon object-based storage technology, MatrixStore provides digital content governance through an integrated and automated storage platform supporting multiple media-based workflows while providing a secure and scalable solution.

VPMS EditMate is a toolkit built for managing and editing projects in a streamlined, intuitive and efficient manner, all from within Adobe Premiere Pro. From project creation and collecting media, to the export and storage of edited material, users benefit from a series of features designed to simplify the spectrum of tasks involved in a modern and collaborative editing environment.

Alkemy X adds Albert Mason as head of production

Albert Mason has joined VFX house Alkemy X as head of production. He comes to Alkemy X with over two decades of experience in visual effects and post production. He has worked on projects directed by such industry icons as Peter Jackson on the Lord of the Rings trilogy, Tim Burton on Alice in Wonderland and Robert Zemeckis on The Polar Express. In his new role at Alkemy X, he will use his experience in feature films to target the growing episodic space.

A large part of Alkemy X’s work has been for episodic visual effects, with credits that include Amazon Prime’s Emmy-winning original series, The Marvelous Mrs. Maisel, USA’s Mr. Robot, AMC’s Fear the Walking Dead, Netflix’s Maniac, NBC’s Blindspot and Starz’s Power.

Mason began his career at MTV’s on-air promos department, sharpening his production skills on top series promo campaigns and as a part of its newly launched MTV Animation Department. He took an opportunity to transition into VFX, stepping into a production role for Weta Digital and spending three years working globally on the Lord of the Rings trilogy. He then joined Sony Pictures Imageworks, where he contributed to features including Spider-Man 3 and Ghost Rider. He has also produced work for such top industry shops as Logan, Rising Sun Pictures and Greymatter VFX.

“[Albert’s] expertise in constructing advanced pipelines that embrace emerging technologies will be invaluable to our team as we continue to bolster our slate of VFX work,” says Alkemy X president/CEO Justin Wineburgh.

2019 HPA Award winners announced

The industry came together on November 21 in Los Angeles to celebrate its own at the 14th annual HPA Awards. Awards were given to individuals and teams working in 12 creative craft categories, recognizing outstanding contributions to color grading, sound, editing and visual effects for commercials, television and feature film.

Rob Legato receiving Lifetime Achievement Award from presenter Mike Kanfer. (Photo by Ryan Miller/Capture Imaging)

As was previously announced, renowned visual effects supervisor and creative Robert Legato, ASC, was honored with this year’s HPA Lifetime Achievement Award; Peter Jackson’s They Shall Not Grow Old was presented with the HPA Judges Award for Creativity and Innovation; acclaimed journalist Peter Caranicas was the recipient of the very first HPA Legacy Award; and special awards were presented for Engineering Excellence.

The winners of the 2019 HPA Awards are:

Outstanding Color Grading – Theatrical Feature

WINNER: “Spider-Man: Into the Spider-Verse”
Natasha Leonnet // Efilm

“First Man”
Natasha Leonnet // Efilm

“Roma”
Steven J. Scott // Technicolor

Natasha Leonnet (Photo by Ryan Miller/Capture Imaging)

“Green Book”
Walter Volpatto // FotoKem

“The Nutcracker and the Four Realms”
Tom Poole // Company 3

“Us”
Michael Hatzer // Technicolor

 

Outstanding Color Grading – Episodic or Non-theatrical Feature

WINNER: “Game of Thrones – Winterfell”
Joe Finley // Sim, Los Angeles

 “The Handmaid’s Tale – Liars”
Bill Ferwerda // Deluxe Toronto

“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”
Steven Bodner // Light Iron

“I Am the Night – Pilot”
Stefan Sonnenfeld // Company 3

“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”
Paul Westerbeck // Picture Shop

“The Man in The High Castle – Jahr Null”
Roy Vasich // Technicolor

 

Outstanding Color Grading – Commercial  

WINNER: Hennessy X.O. – “The Seven Worlds”
Stephen Nakamura // Company 3

Zara – “Woman Campaign Spring Summer 2019”
Tim Masick // Company 3

Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”
James Tillett // Moving Picture Company

Palms Casino – “Unstatus Quo”
Ricky Gausis // Moving Picture Company

Audi – “Cashew”
Tom Poole // Company 3

 

Outstanding Editing – Theatrical Feature

Once Upon a Time… in Hollywood

WINNER: “Once Upon a Time… in Hollywood”
Fred Raskin, ACE

“Green Book”
Patrick J. Don Vito, ACE

“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”
David Tedeschi, Damian Rodriguez

“The Other Side of the Wind”
Orson Welles, Bob Murawski, ACE

“A Star Is Born”
Jay Cassidy, ACE

 

Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

VEEP

WINNER: “Veep – Pledge”
Roger Nygard, ACE

“Russian Doll – The Way Out”
Todd Downing

“Homecoming – Redwood”
Rosanne Tan, ACE

“Withorwithout”
Jake Shaver, Shannon Albrink // Therapy Studios

“Russian Doll – Ariadne”
Laura Weinberg

 

Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

WINNER: “Stranger Things – Chapter Eight: The Battle of Starcourt”
Dean Zimmerman, ACE, Katheryn Naranjo

“Chernobyl – Vichnaya Pamyat”
Simon Smith, Jinx Godfrey // Sister Pictures

“Game of Thrones – The Iron Throne”
Katie Weiland, ACE

“Game of Thrones – The Long Night”
Tim Porter, ACE

“The Bodyguard – Episode One”
Steve Singleton

 

Outstanding Sound – Theatrical Feature

WINNER: “Godzilla: King of the Monsters”
Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.
Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

“Shazam!”
Michael Keller, Kevin O’Connell // Warner Bros.
Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

“Smallfoot”
Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

“Roma”
Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

“Aquaman”
Tim LeBlanc // Warner Bros.
Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group

 

Outstanding Sound – Episodic or Non-theatrical Feature

WINNER: “The Haunting of Hill House – Two Storms”
Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

“Chernobyl – 1:23:45”
Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

“Deadwood: The Movie”
John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Colman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

“Game of Thrones – The Bells”
Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

“Homecoming – Protocol”
John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal

 

Outstanding Sound – Commercial 

WINNER: John Lewis & Partners – “Bohemian Rhapsody”
Mark Hills, Anthony Moore // Factory

Audi – “Life”
Doobie White // Therapy Studios

Leonard Cheshire Disability – “Together Unstoppable”
Mark Hills // Factory

New York Times – “The Truth Is Worth It: Fearlessness”
Aaron Reynolds // Wave Studios NY

John Lewis & Partners – “The Boy and the Piano”
Anthony Moore // Factory

 

Outstanding Visual Effects – Theatrical Feature

WINNER: “The Lion King”
Robert Legato
Andrew R. Jones
Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film
Tom Peitzman // T&C Productions

“Avengers: Endgame”
Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

“Spider-Man: Far From Home”
Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

“Alita: Battle Angel”
Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

“Pokemon Detective Pikachu”
Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore

 

Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

Game of Thrones

WINNER: “Game of Thrones – The Bells”
Steve Kullback, Joe Bauer, Ted Rae
Mohsen Mousavi // Scanline
Thomas Schelesny // Image Engine

“Game of Thrones – The Long Night”
Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

“The Umbrella Academy – The White Violin”
Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

“The Man in the High Castle – Jahr Null”
Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

“Chernobyl – 1:23:45”
Lindsay McFarlane
Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG

 

Outstanding Visual Effects – Episodic (Over 13 Episodes)

Team from The Orville – Outstanding VFX, Episodic, Over 13 Episodes (Photo by Ryan Miller/Capture Imaging)

WINNER: “The Orville – Identity: Part II”
Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX
Brandon Fayette, Brooke Noska // Twentieth Century FOX TV

“Hawaii Five-O – Ke iho mai nei ko luna”
Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

“9-1-1 – 7.1”
Jon Massey, Tony Pirzadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

“Star Trek: Discovery – Such Sweet Sorrow Part 2”
Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

“The Flash – King Shark vs. Gorilla Grodd”
Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

The 2019 HPA Engineering Excellence Awards were presented to:

Adobe – Content-Aware Fill for Video in Adobe After Effects

Epic Games — Unreal Engine 4

Pixelworks — TrueCut Motion

Portrait Displays and LG Electronics — CalMan LUT based Auto-Calibration Integration with LG OLED TVs

Honorable Mentions were awarded to Ambidio for Ambidio Looking Glass; Grass Valley, for creative grading; and Netflix for Photon.

IDC goes bicoastal, adds Hollywood post facility 


New York’s International Digital Centre (IDC) has opened a new 6,800-square-foot digital post facility in Hollywood, with Rosanna Marino serving as COO. She will manage the day-to-day operations of the West Coast post house. IDC LA will focus on serving the entertainment, content creation, distribution and streaming industries.

Rosanna Marino

Marino will manage sales, marketing, engineering and the day-to-day operations for the Hollywood location, while IDC founder/CEO Marcy Gilbert will lead the company’s overall activities and the New York headquarters.

IDC will provide finishing, color grading and editorial in Dolby Vision 4K HDR and UHD, as well as global QC. IDC LA features 11 bays and a DI theater, which includes Dolby 7.1 Atmos audio mixing, dubbing and audio description. They also provide subtitle and closed-caption timed-text creation and localization, ABS scripting and translations in over 40 languages.

To complete the end-to-end chain, they provide IMF and DCP creation, supplemental and all media fulfillment processing, including audio and timed text conforms for distribution. IDC is an existing Netflix Partner Program member — NP3 in New York and NPFP for the Americas and Canada.

IDC LA occupies the top two floors and rooftop deck of a vintage 1930s brick building on Santa Monica Boulevard.

Review: Nugen Audio’s VisLM2 loudness meter plugin

By Ron DiCesare

In 2010, President Obama signed the CALM Act (Commercial Advertisement Loudness Mitigation), regulating the audio levels of TV commercials. At that time, many “laypeople” complained to me about commercials being so much louder than the TV programs. Over the past 10 years, I have seen the rise of audio meter plugins built to meet the requirements of the CALM Act, and they have reduced this complaint dramatically.

A lot has changed since the 2010 FCC mandate of -24LKFS +/-2dB. LKFS was the scale name at the time, but we will get into that more later. Today, we have countless viewing options, such as cable networks, a large variety of streaming services, the internet and movie theaters using 7.1 or Dolby Atmos. Add to that new metering standards such as True Peak, and you have the likelihood of confusing and possibly even conflicting audio standards.

Nugen Audio has updated its VisLM to address today’s complex world of audio levels and audio metering. The VisLM2 is a Mac and Windows plugin compatible with Avid Pro Tools and any DAW that uses RTAS, AU, AAX, VST or VST3, and it can also be installed as a standalone application for Windows and macOS. With its many presets, Loudness History Mode and countless parameters to view and customize, the VisLM2 can help an audio mixer monitor a mix and see when a program is in or out of audio-level spec.

VisLM2

The Basics
The first thing I needed to see was how it handled the 2010 audio standard of -24LKFS, now known as LUFS. LKFS (Loudness K-weighted relative to Full Scale) was the term used in the United States. LUFS (Loudness Units relative to Full Scale) was the term used in Europe. The difference is in name only, and the audio level measurement is identical. Now all audio metering plugins use LUFS, including the VisLM2.

I work mostly on TV commercials, so it was pretty easy for me to fire up the VisLM2 and get my LUFS reading right away. Accessing the US audio standard dictated by the CALM Act is simple if you know the preset name for it: ITU-R BS.1770-4. I know, not a name that rolls off the tongue, but it is the current spec. The VisLM2 has four presets for ITU-R BS.1770: revisions 01, 02, 03 and the current 04. Accessing the presets is easy once you realize they are not in the preset section of the plugin, as one might think; presets are located in the options section of the meter.

While this was my first time using anything from Nugen Audio, I was immediately able to run my 30-second TV commercial and get my LUFS reading. The preset gave me a few important default readings to view while mixing. There are three numeric displays that show Short-Term, Loudness Range and Integrated, which is how the average loudness is determined for most audio level specs. There are two meters that show Momentary and Short-Term levels, which are helpful when trying to pinpoint any section that could be putting your mix out of audio spec. The difference is that Momentary is used for short bursts, such as an impact or gun shot, while Short-Term is used for the last three-second “window” of your mix. Knowing the difference between the two readings is important. Whether you work on short- or long-format mixes, knowing how to interpret both Momentary and Short-Term readings is very helpful in determining where trouble spots might be.
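For readers who want to see the mechanics, here is a simplified sketch of those two windowed readings in Python. Per BS.1770/EBU R128, Momentary uses a 400ms window and Short-Term a three-second window; this toy version measures sliding mean-square energy on a mono signal and omits the K-weighting filter and channel weighting that the real measurement applies first:

    import numpy as np

    def windowed_loudness(x, fs, window_s):
        # Sliding mean-square energy in dB; real meters K-weight first.
        n = int(window_s * fs)
        ms = np.convolve(x ** 2, np.ones(n) / n, mode="valid")
        return -0.691 + 10 * np.log10(ms + 1e-12)  # -0.691 offset per BS.1770

    fs = 48000
    x = 0.1 * np.random.randn(10 * fs)           # stand-in for a mono mix bus
    momentary = windowed_loudness(x, fs, 0.4)    # 400 ms window
    short_term = windowed_loudness(x, fs, 3.0)   # 3 s window
    print(momentary.max(), short_term.max())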

Have We Outgrown LUFS?
Most, if not all, deliverables now specify a True Peak reading. True Peak has slowly but firmly crept its way into audio spec and it can be confusing. For US TV broadcast, True Peak spec can range as high as -2dBTP and as low as -6dBTP, but I have seen it spec out even lower at -8dBTP for some of my clients. That means a TV network can reject or “bounce back” any TV programming or commercial that exceeds its LUFS spec, its True Peak spec or both.

VisLM2

In most cases, LUFS and True Peak readings work well together. I find that -24LUFS Integrated gives a mixer plenty of headroom for staying below the True Peak maximum. However, a few factors can work against you. The higher the LUFS Integrated spec (say, for an internet project) and/or the lower the True Peak spec (say, for a major TV network), the more difficult you might find it to manage both readings. For anyone like me — who often has a client watching over my shoulder telling me to make the booms and impacts louder — you always want to make sure you are not going to have a problem keeping your mix within spec for both measurements. This is where the VisLM2 can help you work within both True Peak and LUFS standards simultaneously.

To do that using the VisLM2, let’s first understand the difference between True Peak and LUFS. Integrated LUFS is an average reading over the duration of the program material. Whether the program is 15 seconds or two hours long, hitting -24LUFS Integrated, for example, is always about the average over time. That means a 10-second loud segment in a two-hour program can be much louder than the same segment in a 15-second commercial without pushing the mix out of spec; that loud 10 seconds can practically be averaged out of existence over a two-hour period with LUFS Integrated. Flawed logic? Possibly. Is that why TV networks are requiring True Peak? Well, maybe yes, maybe no.
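A quick back-of-the-envelope calculation shows why. Integrated loudness is, roughly, an energy-weighted average over time (the real measurement also applies gating, which this sketch omits):

    import numpy as np

    def integrated_lufs(levels_db, durations_s):
        # Energy-weighted average of segment loudness values.
        energy = 10 ** (np.asarray(levels_db, dtype=float) / 10)
        weights = np.asarray(durations_s, dtype=float)
        weights = weights / weights.sum()
        return 10 * np.log10((weights * energy).sum())

    # 10 loud seconds at -14 LUFS inside two hours of -24 LUFS material:
    print(integrated_lufs([-24, -14], [7190, 10]))  # ~ -23.9 LUFS: barely moves
    # The same 10 loud seconds inside a 15-second spot:
    print(integrated_lufs([-24, -14], [5, 10]))     # ~ -15.6 LUFS: dominates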

True Peak is forever. Once the highest True Peak is detected, it remains the final True Peak reading for the entire length of the program material. That means a loud segment in the last five minutes of a two-hour program will dictate the True Peak reading of the entire mix. Let’s say you have a two-hour show with dialogue only, and in the final minute a single loud gunshot is heard. That one-second gunshot sets the True Peak level for the other one hour, 59 minutes and 59 seconds of the program. Flawed logic? I can see how it could be. For comparison, Spotify’s recommended levels are -14LUFS and -2dBTP, which gives you a much smaller range for dynamics than, say, network TV.
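True Peak itself is measured on an oversampled version of the signal, so inter-sample peaks are caught, and the reported value is simply the running maximum. A minimal sketch (4x oversampling; a real BS.1770-compliant meter specifies the interpolation filter precisely):

    import numpy as np
    from scipy.signal import resample_poly

    def true_peak_dbtp(x, oversample=4):
        # Estimate inter-sample peaks by oversampling, then take the max.
        y = resample_poly(x, oversample, 1)
        return 20 * np.log10(np.max(np.abs(y)) + 1e-12)

    fs = 48000
    dialogue = 0.05 * np.random.randn(60 * fs)              # quiet material
    gunshot = 0.9 * np.random.randn(fs)                     # one loud second
    show = np.concatenate([dialogue, gunshot])
    print(true_peak_dbtp(dialogue), true_peak_dbtp(show))   # the shot sets it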

VisLM2

Here’s where the VisLM2 really excels. For those new to Nugen Audio, the clear standout for me is the detailed and large history graph display known as Loudness History Mode. It is a realtime updating, moving display of the mix levels. What it shows is up to you. There are multiple tabs to choose from, such as Integrated, True Peak, Short-Term, Momentary, Variance, Flags and Alerts, to name a few. Selecting any of these tabs will show, or hide, the corresponding line along the timeline of the history graph as the audio plays.

When any of the VisLM2’s presets is selected, a whole host of parameters comes along with it. All are customizable, but I like to start with the defaults. My thinking is that the default values were chosen for a reason, and I always want to know what that reason is before I start customizing anything.

For example, the target for the ITU-R BS.1770-4 preset is -24LUFS Integrated and -2dBTP. By default, both will show on the history graph. The history graph will also show default over and under audio levels based on the alerts you have selected, in the form of min and max LUFS. But, much to my surprise, the default alert max was not what I expected. It wasn’t -24LUFS, which seemed the logical choice to me. It was 4dB higher at -20LUFS, which is 2dB above the +/-2dB tolerance. That’s because these min and max alert values are not for Integrated or average loudness, as I had originally thought. These values are for Short-Term loudness.

The history graph lines, with their corresponding min and max alerts, are a visual cue to let the mixer know if he or she is in the right ballpark. This is not a hard-and-fast rule. Simply put, if your short-term value stays somewhere between -20 and -28LUFS throughout most of an entire project, then you have a good chance of meeting your target of -24LUFS for the overall integrated measurement. That is why this value range is often set up as a “green” zone on the loudness display.
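In code terms, the min/max alerts amount to a simple band check on the short-term trace: a heuristic, not a guarantee, that the integrated value will land near target. A sketch using the default thresholds described above:

    def short_term_alerts(short_term_lufs, lo=-28.0, hi=-20.0):
        # Indices of 3-second windows that leave the default "green zone".
        return [i for i, v in enumerate(short_term_lufs) if not lo <= v <= hi]

    trace = [-23.5, -25.0, -19.2, -24.1, -29.3]   # hypothetical readings
    print(short_term_alerts(trace))               # -> [2, 4]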

VisLM2

The folks at Nugen point out that it isn’t practically possible to set up an alert or “red zone” for integrated loudness because this value is measured over the entire program. For that, you have to simply view the main reading of your Integrated loudness. Even so, I will know if I am getting there or not by viewing my history graph while working. Compare that to the impractical approach of running the entire mix before having any idea of where you are going to net out. The VisLM2 max and min alerts help keep you working within audio spec right from the start.

Another nice feature of the large history graph window is the Macro tab. Selecting the Macro feature gives you the ability to move back and forth anywhere along the duration of your mix displayed in Loudness History Mode. That way you can check for problem spots long after they have happened. Easily accessing any part of the audio level display within the history graph is essential. Say you have a trouble spot somewhere within a 30-minute program; select the Macro feature and scroll through the history graph to spot any overages. If an overage turns out to be at, say, eight minutes in, then cue up your DAW to that same eight-minute mark to address changes in your mix.

Another helpful feature designed for this same purpose is the use of flags. Flags can be added anywhere in your history graph while the audio is running. Again, this can be helpful for spotting, or flagging, any problem spots. For example, you can flag a loud action scene in an otherwise quiet dialogue-driven program that you know will be tricky to balance properly. Once flagged, you will have the ability to quickly cue up your history graph to work with that section. Both the Macro and Flag functions are aided by tape-machine-like controls for cueing up the Loudness History Mode display to any problem spots you might want to view.

Presets, Presets, Presets
The VisLM2 comes with 34 presets for selecting what loudness spec you are working with. Here is where I need to rely on the knowledge of Nugen Audio to get me going in the right direction. I do not know all of the specs for all of the networks, formats and countries. I would venture a guess that very few audio mixers do either. So I was not surprised when I saw many presets that I was not familiar with. Common presets in addition to ITU-R BS.1770 are six versions of EBU R128 for European broadcast and two Netflix presets (stereo and 5.1), which we will dive into later on. The manual does its best to describe some of the presets, but it falls short: the descriptions lack any kind of real-world language, offering only techno-garble. I have no idea what AGCOM 219/9/CSP LU is and, after reading the manual, I still don’t! I hope a better source of what’s what regarding each preset becomes available soon.

MasterCheck

But why no preset for an Internet audio level spec? Could mixing for AGCOM 219/9/CSP LU really be more popular than mixing for the Internet? Unlikely. So let’s follow Nugen’s logic here. I have always been in the -18LUFS range for Internet-only mixes. However, ask 10 different mixers and you will likely get 10 different answers. That is why there is not an Internet preset included with the VisLM2 as I had hoped. Even so, Nugen offers its MasterCheck plugin for platforms such as Spotify and YouTube. MasterCheck is something I have been hoping for, and it makes a perfect companion to the VisLM2.

The folks at Nugen have pointed out a very important difference between broadcast TV and many Internet platforms: Most of the streaming services (YouTube, Spotify, Tidal, Apple Music, etc.) will perform their own loudness normalization after the audio is submitted. They do not expect audio engineers to mix to their standards. In contrast, Netflix and most TV networks will expect mixers to submit audio that already meets their loudness standards. VisLM2 is aimed more toward engineers who are mixing for platforms in the second category.
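The practical difference is easy to express. For a platform that normalizes on ingest, playback gain is just the distance between your measured Integrated loudness and the platform's target. A minimal sketch, setting aside True Peak limiting and using the target values only as examples:

```python
def normalization_gain_db(measured_lufs, target_lufs=-14.0):
    # Static gain a normalizing platform would apply on playback.
    # Simplified: real services also respect True Peak headroom,
    # so a quiet master may not be turned all the way up.
    return target_lufs - measured_lufs

print(normalization_gain_db(-18.0))                     # +4.0 dB turn-up
print(normalization_gain_db(-18.0, target_lufs=-24.0))  # -6.0 dB turn-down
```

In other words, mixing "hot" for one of these services buys you nothing: the platform simply turns the master back down.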

Streaming Services… the Wild West?
Streaming services are the new frontier, at least to me. I would call it the Wild West compared to broadcast TV. With so many streaming services popping up, particularly “off-brand” services, I have to ask whether we have gone back in time to the loudness wars of the late 2000s. Many streaming services do have an audio level spec, but I don’t know of any consensus among them the way there is with network TV.

That aside, one of the most popular streaming services is Netflix. So let’s look at the VisLM2’s Netflix preset in detail. Netflix is slightly different from broadcast TV because its spec is based on dialogue. In addition to -2dBTP, Netflix has an LUFS spec of -27 +/- 2dB Integrated Dialogue. That means the dialogue level is averaged out over time, rather than using all program material like music and sound effects. Remember my gunshot example? Netflix’s spec is more forgiving of that mixing scenario. This can lead to more dynamic or more cinematic mixes, which I can see as a nice advantage when mixing.
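A dialogue-gated measurement can be sketched the same way as the Integrated example earlier, except that only dialogue blocks enter the average. The speech mask here is hand-made for illustration; a real meter relies on a dialogue detector:

```python
import numpy as np

def dialogue_gated_lufs(block_loudness_db, is_dialogue):
    # Average only the blocks flagged as dialogue; music, effects
    # and that one loud gunshot simply never enter the measurement.
    power = 10 ** (np.asarray(block_loudness_db) / 10)
    mask = np.asarray(is_dialogue, dtype=bool)
    return 10 * np.log10(power[mask].mean())

blocks = [-27.0, -27.0, -8.0, -27.0]   # the third block is a gunshot
speech = [True, True, False, True]
print(dialogue_gated_lufs(blocks, speech))  # -27.0: the gunshot is ignored
```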

Netflix currently supports Dolby Atmos on selected titles, but word on the street is that Netflix deliverables will be requiring Atmos for all titles. I have not confirmed this, but I can only hope it will be backward-compatible for non-Atmos mixes. I was lucky enough to speak directly with Tomlinson Holman of THX fame (Tomlinson Holman eXperiment) about his 10.2 format that included height long before Atmos was available. In the case of 10.2, Holman said it was possible to deliver a single mono channel audio mix in 10.2 by simply leaving all other channels empty. I can only hope this is the same for Netflix’s Atmos deliverables so you can simply add or subtract the number of channels needed when you are outputting your final mix. Regardless, we can surely look to Nugen Audio to keep us updated with its Netflix preset in the VisLM2 should this become a reality.

True Peak within VisLM2

VisLM Updates
For anyone familiar with the original version of the VisLM, there are three updates that are worth looking at. First is the ability to resize and select what shows in the display. That helps with keeping the window active on your screen as you are working. It can be a small window so it doesn’t interfere with your other operations. Or you can choose to show only one value, such as Integrated, to keep things really small. On the flip side, you can expand the display to fill the screen when you really need to get the microscope out. This is very helpful with the history graph for spotting any trouble spots. The detail displayed in the Loudness History Mode is by far the most helpful thing I have experienced using the VisLM2.

Next is the ability to display both LUFS and True Peak meters simultaneously. Before, it was one or the other and now it is both. Simply select the + icon between the two meters. With the importance of True Peak, having that value visible at all times is extremely valuable.

Third is the ability to “punch in,” as I call it, to update your Integrated reading while you are working. Let’s say you have your overall Integrated reading, and you see one section that is making you go over. You can adjust your levels on your DAW as you normally would and then simply “punch in” that one section to calculate the new Integrated reading. Imagine how much time you save by not having to run a one-hour show every time you want to update your Integrated reading. In fact, this “punch in” feature is actually the VisLM2 constantly updating itself. This is just another example of how the VisLM2 helps keep you working within audio spec right from the start.
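Under the hood, that kind of update is just arithmetic: each section's contribution to the Integrated figure can be recombined without replaying the whole show. A simplified model (no gating), with made-up numbers:

```python
import math

def combined_integrated(sections):
    # sections is a list of (duration_seconds, lufs) pairs.
    # Average the energy in the linear domain, weighted by duration.
    total = sum(d for d, _ in sections)
    energy = sum(d * 10 ** (l / 10) for d, l in sections)
    return 10 * math.log10(energy / total)

show = [(1800, -24.0), (600, -20.5), (1200, -24.5)]
print(combined_integrated(show))  # about -23.3: the hot middle pushes it over

show[1] = (600, -24.0)            # punch in the re-mixed section only
print(combined_integrated(show))  # about -24.2: back on target
```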

Multi-Channel Audio Mixing
The one area I can’t test the VisLM2 on is multi-channel audio, such as 5.1 and Dolby Atmos. I work mostly on TV commercials, Internet programming, jazz records and the occasional indie film. So my world is all good old-fashioned stereo. Even so, the VisLM2 can measure 5.1, 7.1, and 7.1.2, which is the channel count for Dolby Atmos bed tracks. For anyone who works in multi-channel audio, the VisLM2 will measure and display audio levels just as I have described it working in stereo.

Summing Up
With the changing landscape of TV networks, streaming services and music-only platforms, the resulting deliverables have opened up the floodgates of audio specs like never before. Long gone are the days of -24LUFS being the one and only number you need to know.

When it comes to managing today’s complicated and varied deliverables, along with the audio specs that go with them, Nugen Audio’s VisLM2 absolutely delivers.


Ron DiCesare is a NYC-based freelance audio mixer and sound designer. His work can be heard on national TV campaigns, Vice and the Viceland TV network. He is also featured in the doc “Sing You A Brand New Song” talking about the making of Coleman Mellett’s record album, “Life Goes On.”

Report: Apple intros 16-inch MacBook Pro, previews new Mac Pro, display

By Pat Birk

At a New York City press event, Apple announced that it will begin shipping a new 16-inch MacBook Pro this week. This new offering will feature an updated 16-inch Retina display with a pixel density of 226ppi; 9th-generation Intel processors featuring up to 8 cores and up to 64GB of DDR4 memory; vastly expanded SSDs ranging from 512GB to a whopping 8TB; upgraded discrete AMD Radeon Pro 5000M series graphics; completely redesigned speakers and internal microphones; and an overhauled keyboard dubbed, of course, the “Magic Keyboard.”

The MacBook Pro’s new Magic Keyboard.

These MacBooks also feature a new cooling system, with wider vents and a 35 percent larger heatsink, along with a 100-watt-hour battery (which the company stressed is the maximum capacity allowed by the Federal Aviation Administration), contributing to an additional hour of battery life while web browsing or playing back video.

I had the opportunity to do a brief hands-on demo, and for the first time since Apple introduced the Touch Bar to the MacBook Pro, I have found myself wanting a new Mac. The keyboard felt great, offering far more give and a far less plasticky click than the divisive Butterfly keyboard. The Mac team has reintroduced a physical escape key, along with an inverted T-style cluster of arrow keys, both features that will be helpful for coders. Apple also previewed its upcoming Mac Pro tower and Pro Display XDR.

Sound Offerings
As an audio guy, I was naturally drawn to the workstation’s sound offerings and was happy when the company dedicated a good portion of the presentation to touting its enhanced speaker and microphone arrays. The six-speaker system features dual-opposed woofer drivers, which offer enhanced bass while canceling out problematic distortion-causing frequencies. When compared side by side with high-end offerings from other manufacturers, the MacBook offered a far more complete sonic experience than the competition, and I believe Apple is right in saying that they’ve achieved an extra half octave of bass range with this revision.

The all-new MacBook Pro features a 16-inch Retina display.

It’s really impressive for a laptop, but I honestly don’t see it replacing a good pair of headphones or a half-decent Bluetooth speaker for most users. I can see it being useful in the occasional pitch meeting, or for showing an idea or video to a friend when there’s no other option, but I feel it’s more of a nice touch than a major selling point.

The three-microphone array was impressive as well, and I can see it offering legitimate functionality for working creatives. When A/B’d with competing internal microphones, there was really no comparison. The MacBook’s mics deliver crisp, clean recordings with very little hiss and no noticeable digital artifacting, both of which were clearly present in competing PCs. I could realistically see this working for a small podcast, or for on-the-go musicians recording demos. We live in a world where Steve Lacy recorded and produced a beat for Kendrick Lamar on an iPhone. When Apple claims that the signal-to-noise ratio rivals or even surpasses that of digital mics like the Blue Yeti, they may very well be right. However, in an A/B comparison, I found the Blue to have more body and room ambience, while the MacBook sounded a bit thin and sterile.

Demos
The rest of the demo featured creative professionals — coders, animators, colorists and composers — pushing the spec’d-out Mac Pro and MacBook Pros to their limits. A coder demonstrated testing a program in realtime on eight emulations of iOS and iPadOS at once.

A video editor demonstrated the new Mac Pro (not the MacBook) running a project with six 8K video sources playing at once through an animation layer, with no rendering at all. We were also treated to a brief Blackmagic DaVinci Resolve demo on a Pro Display XDR. A VFX artist demonstrated making realtime lighting changes to an animation comprised of eight million polygons on the Mac Pro, again with no need for rendering.

The Mac Pro and Pro Display XDR, which Apple bills as the world’s best pro display, will be available in December.

Composers showed us a Logic Pro X session running a track produced for Lizzo by Oak Felder. The song had over 200 tracks, replete with plugins and instruments — Felder was able to accomplish this on a MacBook Pro. Also on the MacBook, they had a session loaded running multiple instances of MIDI instruments using sample libraries from Cinesamples, Spitfire Audio and Orchestral Tools. The result could easily have fooled me into believing it had been recorded with a live orchestra, and the fact that all of these massive, processor-intensive sample libraries could operate at the same time without making the MacBook Pro break a sweat had me floored.

Summing Up
Apple has delivered a very solid upgrade in the new 16-inch MacBook Pro, especially as a replacement for the earlier iterations of the Touch Bar MacBook Pros. They have begun taking orders, with prices starting at $2,399 for the 2.6GHz 6-core model, and $2,799 for the 2.3GHz 8-core model.
As for the new Mac Pro and Pro Display XDR, they’re coming in December, though company representatives remained tight-lipped on an exact release date.


Pat Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City.

postPerspective’s ‘SMPTE 2019 Live’ interview coverage

postPerspective was the official production team for SMPTE during its most recent conference in downtown Los Angeles this year. Taking place once again at the Bonaventure Hotel, the conference featured events and sessions all week. (You can watch those interviews here.)

These sessions ranged from “Machine Learning & AI in Content Creation” to “UHD, HDR, 4K, High Frame Rate” to “Mission Critical: Project Artemis, Imaging from the Moon and Deep Space Imaging.” The latter featured two NASA employees and a live talk with astronauts on the International Space Station. It was very cool.

postPerspective’s coverage was also cool and included many sit-down interviews with those presenting at the show (including former astronaut and One More Orbit director Terry Virts as well as Todd Douglas Miller, the director of the Apollo 11 doc), SMPTE executives and long-standing members of the organization.

In addition to the sessions, manufacturers had the opportunity to show their tools on the exhibit floor, where one of our crews roamed with camera and mic in hand reporting on the newest tech.

Whether you missed the conference or experienced it firsthand, these exclusive interviews will provide a ton of information about SMPTE, standards, and the future of our industry, as well as just incredibly smart people talking about the merger of technology and creativity.

Enjoy our coverage!

Blog: Making post deliverables simple and secure

By Morgan Swift

Post producers don’t have it easy. With an ever-increasing number of platforms for distribution and target languages to cater to, getting one’s content to the global market can be challenging to say the least. To top it all, given the current competitive landscape, producers are always under pressure to reduce costs and meet tight deadlines.

Having been in the creative services business for two decades, we’ve all seen it before — post coordinators and supervisors getting burnt out working late nights, often juggling multiple projects and being pushed to the breaking point. You can see it in their eyes. What adds to the stress is dealing with multiple vendors to get various kinds of post finishing work done — from color grading to master QC to localization.

Morgan Swift

Localization is not the least of these challenges. Different platforms specify different deliverables, including access services like closed captions (CC) and audio description (AD), along with as-broadcast scripts (ABS) and combined continuity spotting lists (CCSL). Each of these deliverables requires specialized teams and tools to execute. Needless to say, they also have a significant impact on the budget — usually at least tens of thousands of dollars (much more for a major release).

It is therefore critical to plan post deliverables well in advance to ensure that you are in complete control of turnaround time (TAT), expected spend and potential cost-saving opportunities. Let’s look at a few ways of streamlining the process of creating access services deliverables. To do this, we need to understand the various factors at play.

First of all, we need to consider the amount of effort involved in creating these deliverables. There is typically a lot of overlap, as deliverables like as-broadcast scripts and combined continuity spotting lists are often required for creating closed captions and audio description. This means that it is cheaper to combine the creation of all these deliverables instead of getting them done separately.

The second factor to think about is security. Given that pre-release content is extremely vulnerable to piracy, the days of getting an extra DVD with visible timecode for closed captions should be over. Even the days of sending a non-studio-approved link just to create the deliverables should be over. Why? Because today there exist tailor-made solutions designed to facilitate secure localization operations. They enable easy creation of a folder that can be used to send and receive files securely, even by external vendors. One such solution is Clear Media ERP, which was built from the ground up by Prime Focus Technologies to address these challenges.

There is no additional cost to send and receive videos or post deliverable files if you already have a system like this set up for a show. You can keep your pre-release content completely safe, leveraging the software’s advanced security features which include multi-factor authentication, Okta integration, bulk watermarking, burnt-in watermarks for downloads, secure script and document distribution and more.

With the right tech stack, you can get one beautifully organized and secure location to store all of your access services deliverables, which means your team can finally sit back and focus on what matters most — creating incredible content.


Morgan Swift is director of account management at Prime Focus Technologies in Los Angeles.

SMPTE 2019 Live: Gala Award Winners

postPerspective was invited by SMPTE to host the exclusive coverage of their 2019 Awards Gala. (Watch here!)

The annual event was hosted by Kasha Patel (a digital storyteller at NASA Earth Observatory by day and a science comedian by night!), and presenters included Steve Wozniak. Among this year’s honorees — Netflix’s Anne Aaron, Gary J. Sullivan, Michelle Munson and Sky’s Cristina Gomila Torres. Honorary Membership was bestowed on Roderick Snell (Snell & Wilcox) and Paul Kellar (Quantel).

If you missed this year’s SMPTE Awards Gala, or even if you were there, check out our backstage interviews with some of our industry’s luminaries. We hope you enjoy watching these interviews as much as we enjoyed shooting them.

Oh, and a big shout out to the team from AlphaDogs who shot and edited all of our 2019 SMPTE Live coverage!

James Norris joins Nomad in London as editor, partner

Nomad in London has added James Norris as editor and partner. A self-taught, natural editor, James started out running for the likes of Working Title, Partizan and Tomboy Films. He then moved to Whitehouse Post as an assistant where he refined his craft and rose through the ranks to become an editor.

Over the past 15 years, he’s worked across commercials, music videos, features and television. Norris edited Ikea’s Fly Robot Fly spot and Asda’s Get Possessed piece, and has recently cut a new project for Nike. Working within television and film, he also cut an episode of the BAFTA-nominated drama Our World War and feature film We Are Monster.

“I was attracted to Nomad for their vision for the future and their dedication to the craft of editing,” says Norris. “They have a wonderful history but are also so forward-thinking and want to create new, exciting things. The New York and LA offices have seen incredible success over the last few years, and now there’s Tokyo and London too. On top of this, Nomad feels like home already. They’re really lovely people — it really does feel like a family.”

Norris will be cutting on Avid Media Composer at Nomad.

 

Production and post boutique Destro opens in LA

Industry veterans Drew Neujahr, Sean McAllen, and Shane McAllen have partnered to form Destro, a live-action and post production boutique based in Los Angeles. Destro has already developed and produced an original documentary series, Seed, which profiles artists and innovators across a range of disciplines. In addition, the team has recently worked on projects for Google, Nintendo and Michelin.

Destro’s primary focus will be producing, directing and post on live-action projects. However, with the partners’ extensive background in motion and VFX, the team is adept at executing mixed-media pipelines when the occasion calls for it.

With the launch of original studio projects like Seed, Destro sees an opportunity not only to showcase its own voice, but also to present a case study for forging symbiotic relationships with brands that have real stories to tell about their teams, products, users and core values.

“Great ideas don’t always happen at conception,” says Neujahr. “When the weather changes during production or the client rethinks the concept in post, being able to improvise and adjust brings about the best work.”

Neujahr and the McAllen brothers bring a combined 45 years of experience spanning commercial and film production, post production and entertainment branding/marketing.

Neujahr’s experience includes features and marketing as both a producer and a creative. He has directed short films, commercials and the documentary series Western State. As a producer, head of production and executive producer at top motion graphics and visual effects studios in LA, he oversaw spots for Ford, Burger King, Walmart, Nickelodeon, FX and History.

Sean McAllen is a seasoned film and commercial editor who has crafted both short-form and long-form work for Ford, Chevy, Nissan, Toyota, Red Bull, Google and Samsung. He also co-wrote and edited the Emmy-nominated documentary feature Houston We Have a Problem. McAllen got his start co-founding a Tokyo/Los Angeles-based production company, where he directed commercials, broadcast documentaries and entertainment marketing content.

Shane McAllen is a veteran of the film and commercial industry. His feature editing credits include contributions to Iron Man 3 and Captain America: The Winter Soldier. On the commercial side, he has worked on campaigns for BMW, Apple and Nintendo. He is also an accomplished writer, producer and director who has worked on a bevy of projects for Google AR and two product reveals for the Nintendo Switch.

“We all got into this crazy world because we love telling stories,” concludes Sean McAllen. “And we share a mutual respect for each other’s craft. Ultimately, our strength is our approachability. We’re the ones who pick up the phone, answer the emails, make the coffee, and do the work.”

Main Image: (L-R) Sean McAllen, Drew Neujahr, and Shane McAllen

Dell intros new 4K monitors for creators

Dell Technologies is offering a new 4K monitor developed with creatives in mind. The Dell UltraSharp 27 4K PremierColor (UP2720Q) is a 27-inch 4K monitor with built-in colorimeter and Thunderbolt 3 for content creators who require color-critical performance and a fast connection to any dock or PC.

Creatives get optimal color performance by calibrating the UltraSharp 27 4K PremierColor monitor with the built-in colorimeter, and they can save time by scheduling automated color checks and calibrations with the Dell Calibration Assistant. This monitor works seamlessly with CalMan software (sold separately) to perform a variety of tasks, including calibrations with the built-in or an external colorimeter. An included shading hood snaps firmly to the monitor via magnets to reduce unwanted glare and reflections.

The UltraSharp 27 4K PremierColor monitor shows images in accurate color and sharp detail with 3840×2160 Ultra HD 4K resolution and a high pixel density of 163ppi. It features a high contrast ratio of 1,300:1. Each monitor is factory-calibrated for accurate color right out of the box. Plus, it supports a wide color coverage that includes 100% Adobe RGB, 80% BT.2020 and 98% DCI-P3.

Thunderbolt 3 offers speeds of up to 40Gbps, creating one compact port for a fast connection to devices. With Thunderbolt 3, users can connect a laptop to the monitor and charge up to 90W from a single cable while simultaneously transferring video and data signals. They can also daisy-chain up to two 4K monitors with Thunderbolt 3 for greater multitasking capabilities.

Terminator: Dark Fate director Tim Miller

By Iain Blair

He said he’d be back, and he meant it. Thirty-five years after he first arrived to menace the world in the 1984 classic The Terminator, Arnold Schwarzenegger has returned as the implacable killing machine in Terminator: Dark Fate, the latest installment of the long-running franchise.

And he’s not alone in his return. Terminator: Dark Fate also reunites the film’s producer and co-writer James Cameron with original franchise star Linda Hamilton for the first time in 28 years in a new sequel that picks up where Terminator 2: Judgment Day left off.

When the film begins, more than two decades have passed since Sarah Connor (Hamilton) prevented Judgment Day, changed the future and re-wrote the fate of the human race. Now, Dani Ramos (Natalia Reyes) is living a simple life in Mexico City with her brother (Diego Boneta) and father when a highly advanced and deadly new Terminator — a Rev-9 (Gabriel Luna) — travels back through time to hunt and kill her. Dani’s survival depends on her joining forces with two warriors: Grace (Mackenzie Davis), an enhanced super-soldier from the future, and a battle-hardened Sarah Connor. As the Rev-9 ruthlessly destroys everything and everyone in its path on the hunt for Dani, the three are led to a T-800 (Schwarzenegger) from Sarah’s past that might be their last best hope.

To helm all the on-screen mayhem, black humor and visual effects, Cameron handpicked Tim Miller, whose credits include the global blockbuster Deadpool, one of the highest-grossing R-rated films of all time (it grossed close to $800 million). Miller then assembled a close-knit team of collaborators that included director of photography Ken Seng (Deadpool, Project X), editor Julian Clarke (Deadpool, District 9) and visual effects supervisor Eric Barba (The Curious Case of Benjamin Button, Oblivion).

Tim Miller on set

I recently talked to Miller about making the film, its cutting-edge VFX, the workflow and his love of editing and post.

How daunting was it when James Cameron picked you to direct this?
I think there’s something wrong with me because I don’t really feel fear as normal people do. It just manifests as a sense of responsibility, and with this I knew I’d never measure up to Jim’s movies but felt I could do a good job. Jim was never going to tell this story, and I wanted to see it, so it just became more about the weight of that sense of responsibility, but not in a debilitating way. I felt pretty confident I could carry this off. But later, the big anxiety was not to let down Linda Hamilton. Before I knew her, it wasn’t a thing, but later, once I got to know her I really felt I couldn’t mess it up (laughs).

This is still Cameron’s baby even though he handed over the directing to you. How hands-on was he?
He was busy with Avatar, but he was there for a lot of the early meetings and was very involved with the writing and ideas, which was very helpful thematically. But he wasn’t overbearing on all that. Then later when we shot, he wanted to write a few of the key scenes, which he did, and then in the edit he was in and out, but he never came into my edit room. He’d give notes and let us get on with it.

What sort of film did you set out to make?
A continuation of Sarah’s story. I never felt it was John’s story. It was always about a mother’s love for a son, and I felt like there was a real opportunity here. And that story hadn’t been told — partly because the other sequels never had Linda. Once she wanted to come back, it was always the best possible story. No one else could be her or Arnold’s character.

Any surprises working with them?
Before we shot, people were telling me, “You got to be ready, we can’t mess around. When Arnold walks on set you’d better be rolling!” Sure enough, when he walked on he’d go, “And…” (Laughs) He really likes to joke around. With Linda — and the other actors — it was a love-fest. They’re both such nice, down-to-earth people, and I like a collegial atmosphere. I’m not a screamer. I’m very prepared, and I feel if you just show up on time, you’re already ahead of the game as a director.

What were the main technical challenges in pulling it all together?
They were all different for each big action set piece, and fitting it all into a schedule was tough, as we had a crazy amount of VFX. The C-5 plane sequence was far and away the biggest challenge. [SFX supervisor] Neil Corbould and his team designed and constructed all the effects rigs for the movie. The C-5 set was incredible, with two revolving sets, one vertical and one horizontal. It was so big you could put a bus in it, and it was able to rotate 360 degrees and tilt in either direction at the same time.

You just can’t simulate that reality of zero gravity on the actors. And then after we got it all in camera, which took weeks, our VFX guy Eric Barba finished it off. The other big one was the whole underwater scene, where the Humvee falls over the top of a dam and goes underwater as it’s swept down a river. For that, we put the Humvee on a giant scissor lift that could take it all the way under, so the water rushes in and fills it up. It’s really safe to do, but it feels frighteningly realistic for the actors.

This is only my second movie, so I’m still learning, but the advantage is I’m really willing to listen to any advice from the smart people around me on set on how best to do all this stuff.

How early on did you start integrating post and all the VFX?
Right from the start. I use previz a lot, as I come from that environment and I’m very comfortable with it, and that becomes the template for all of production to work from. Sometimes it’s too much of a template and treated like a bible, but I’m like, “Please keep thinking. Is there a better idea?” But it’s great to get everyone on the same page, so very early on you see what’s VFX, what’s live-action only, what’s a combination, and you can really plan your shoot. We did over 45 minutes of previz, along with storyboards. We did tons of postviz. My director’s cut had no blue/green at all. It was all postviz for every shot.

Tim Miller and Linda Hamilton

DP Ken Seng, who did Deadpool with you, shot it. Talk about how you collaborated on the look.
We didn’t really have time to plan shot lists that much since we moved so much and packed so much into every day. A lot of it was just instinctive run-and-gun, as the shoot was pretty grueling. We shot in Madrid and [other parts of] Spain, which doubled for Mexico. Then we did studio work in Budapest. The script was in flux a lot, and Jim wrote a few scenes that came in late, and I was constantly re-writing and tweaking dialogue and adjusting to the locations because there’s the location you think you’ll get and then the one you actually get.

Where did you post?
All at Blur, my company where we did Deadpool. The edit bays weren’t big enough for this though, so we spilled over into another building next door. That became Terminator HQ with the main edit bay and several assistant bays, plus all the VFX and compositing post teams. Blur also helped out with postviz and previz.

Do you like the post process?
I love post! I was an animator and VFX guy first, so it’s very natural to me, and I had a lot of the same team from Deadpool, which was great.

Talk about editing with Julian Clarke who cut Deadpool. How did that work?
It was the same setup. He’d be back here in LA cutting while we shot. He’s so fast; he’d be just one day behind me — I’ve never met anyone who works as hard. Then after the shoot, we’d edit all day and then I’d deal with VFX reviews for hours.

Can you talk about how Adobe Creative Cloud helped the post and VFX teams achieve their creative and technical goals?
I’m a big fan, and that started back on Deadpool as David Fincher was working closely with Adobe to make Premiere something that could beat Avid. We’re good friends — we’re doing our animated Netflix show Love, Death & Robots together — and he was like, “Dude, you gotta use this tool,” so we used it on Deadpool. It was still a little rocky on that one, but overall it was a great experience, and we knew we’d use it on this one. Adobe really helped refine it and the workflow, and it was a huge leap.

What were the big editing challenges?
(Laughs) We just shot too much movie. We had many discussions about cutting one or more of the action scenes, but in the end, we just took out some of the action from all of them, instead of cutting a particular set piece. But it’s tricky cutting stuff and still making it seamless, especially in a very heavily choreographed sequence like the C-5.

VFX plays a big role. How many were there?
Over 2,500 — a huge amount. The VFX on this were so huge it became a bit of a problem, to be honest.

L-R: Writer Iain Blair and director Tim Miller

How did you work with VFX supervisor Eric Barba?
He did a great job and oversaw all the vendors, including ILM, who did most of them. We tried to have them do all the character-based stuff, to keep it in one place, but in the end, we also had Digital Domain, Method, Blur, UPP, Cantina, and some others. We also brought on Jeff White from ILM since it was more than Eric could handle.

Talk about the importance of sound and music.
Tom Holkenborg, who scored Deadpool, did another great job. We also reteamed with sound designer and mixer Craig Henighan, and we did the mix at Fox. They’re both crucial in a film like this, but I’m the first to admit music’s not my strength. Luckily, Julian Clarke is excellent with that and very focused. He worked hard at pulling it all together. I love sound design, and we talked about all the spotting, and Julian managed a lot of that for me too because I was so busy with the VFX.

Where did you do the DI and how important is it to you?
It’s huge, and we did it at Company 3 with Tim Stipan, who did Deadpool. I like to do a lot of reframing, adding camera shake and so on. It has a subtle but important effect on the overall film.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Color Chat: Light Iron’s Corinne Bogdanowicz

Corinne Bogdanowicz, a colorist at Light Iron, joined the post house in 2010 after working as a colorist and digital compositor for Post Logic/Prime Focus, Pacific Title and DreamWorks Animation.

Bogdanowicz, who comes from a family of colorists/color scientists (sister and father), has an impressive credit list, including the features 42, Flight, Hell or High Water, Allied and Wonder. On the episodic side, she has colored all five seasons of Amazon’s Emmy-winning series Transparent, as well as many other shows, including FX’s Baskets and Boomerang for BET. Her most recent work includes Netflix’s Dolemite is My Name and HBO’s Mrs. Fletcher.

HBO’s Mrs. Fletcher

We reached out to find out more…

NAME: Corinne Bogdanowicz

COMPANY: Light Iron

CAN YOU DESCRIBE YOUR COMPANY?
Light Iron is a post production company owned by Panavision. We have studios in New York and Los Angeles.

AS A COLORIST, WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think that most people would be surprised that we are the last stop for all visuals on a project. We are where all of the final VFX come together, and we also manage the different color spaces for final distribution.

ARE YOU SOMETIMES ASKED TO DO MORE THAN JUST COLOR ON PROJECTS?
Yes, I am very often doing work that crosses over into visual effects. Beauty work, paint outs and VFX integration are all commonplace in the DI suite these days.

WHAT’S YOUR FAVORITE PART OF THE JOB?
The collaboration between myself and the creatives on a project is my favorite aspect of color correction. There is always a moment when we start color where I get “the look,” and everyone is excited that their vision is coming to fruition.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Maybe farming? (laughs) I’m not sure. I love being outdoors and working with animals.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I have an art background, and when I moved to Los Angeles years ago, I worked in VFX. I was quickly introduced to the world of color and found it was a great fit. I love the combination of art and technology, as well as constantly being introduced to new ideas by industry creatives.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Where’d You Go, Bernadette?, Sextuplets, Truth Be Told, Transparent, Mrs. Fletcher and Dolemite is My Name.

Transparent

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
This is a hard question because I feel like I leave a little piece of myself in everything that I work on.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, the coffee maker and FilmLight Baselight.

WHAT DO YOU DO TO DE-STRESS FROM THE PRESSURES OF THE JOB?
I have two small children at home, so I think I de-stress when I get to work (laughs)!

Review: Lenovo Yoga A940 all-in-one workstation

By Brady Betzel

While more and more creators are looking for alternatives to the iMac, iMac Pro and Mac Pro, there are few options with high-quality, built-in monitors: the Microsoft Surface Studio, HP Envy and Dell 7000 among them. There are even fewer choices if you want touch and pen capabilities. It’s with that need in mind that I decided to review the Lenovo Yoga A940, a 27-inch, UHD, pen- and touch-capable Intel Core i7 computer with an AMD Radeon RX 560 GPU.

While I haven’t done a lot of all-in-one system reviews like the Yoga A940, I have had my eyes on the Microsoft Surface Studio 2 for a long time. The only problem is its hefty price tag of around $3,500. The Lenovo’s most appealing feature — in addition to the tech specs I will go over — is its price point: It’s available at $2,200 and up. (I saw Best Buy selling a system similar to the one I reviewed for around $2,299. The insides of the Yoga and the Surface Studio 2 aren’t that far off from each other either, at least not enough to make up for the $1,300 disparity.)

Here are the parts inside the Lenovo Yoga A940:
– Intel Core i7-8700 3.2GHz processor (up to 4.6GHz with Turbo Boost), six cores (12 threads) and 12MB cache
– 27-inch 4K UHD IPS multitouch display with 100% Adobe RGB coverage
– 16GB DDR4 2666MHz (SODIMM) memory
– 1TB 5400RPM hard drive plus 256GB PCIe SSD
– AMD Radeon RX 560 4GB graphics processor
– 25-degree monitor tilt angle
– Dolby Atmos speakers
– Dimensions: 25 x 18.3 x 9.6 inches; weight: 32.2 pounds
– 802.11ac and Bluetooth 4.2 connectivity
– Side panel inputs: Intel Thunderbolt, USB 3.1, 3-in-1 card reader and audio jack
– Rear panel inputs: AC-in, RJ45, HDMI and four USB 3.0
– Bluetooth active pen (appears to be the Lenovo Active Pen 2)
– Qi wireless charging platform

Digging In
Right off the bat, I just happened to put my Android Galaxy phone on the odd little flat platform located on the right side of the all-in-one workstation, just under the monitor, and I saw my phone begin to charge wirelessly. Qi wireless charging is an amazing little addition to the Yoga; it really comes through in a pinch when I need my phone charged and don’t have the cable or charging dock around.

Other than that nifty feature, why would you choose a Lenovo Yoga A940 over any other all-in-one system? Well, as mentioned, the price point is very attractive, but you are also getting a near-professional-level system in a very tiny footprint — including Thunderbolt 3 and USB connections, an HDMI port, a network port and an SD card reader. While it would be incredible to have an Intel i9 processor inside the Yoga, the i7 clocks in at 3.2GHz with six cores. Not a beast, but enough to get the job done inside Adobe Premiere and Blackmagic’s DaVinci Resolve, though you may want to work with transcoded files instead of Red raw or the like.

The Lenovo Yoga A940 is outfitted with a front-facing Dolby Atmos audio speaker as well as Dolby Vision technology in the IPS display. The audio could use a little more low end, but it is good. The monitor is surprisingly great — the whites are white and the blacks are black; something not everyone can get right. It has 100% Adobe RGB color coverage and is Pantone-validated. The HDR is technically Dolby Vision and looks great at about 350 nits (not the brightest, but it won’t burn your eyes out either). The Lenovo BT active pen works well. I use Wacom tablets and laptop tablets daily, so this pen had a lot to live up to. While I still prefer the Wacom pen, the Lenovo pen, with 4,096 levels of sensitivity, will do just fine. I actually found myself using the touchscreen with my fingers way more than the pen.

One feature that sets the A940 apart from other all-in-one machines is the USB Content Creation dial. In the little time I had with the system, I only used it to adjust speaker volume when playing Spotify, but in time I can see myself customizing the dial to work in Premiere and Resolve. The dial has good action and resistance. To customize it, you can jump into the Lenovo Dial Customization Assistant.

Besides the Intel i7, there is an AMD Radeon RX 560 with 4GB of memory, two 3W and two 5W speakers, 32GB of DDR4 2666MHz memory, a 1TB 5400RPM hard drive for storage and a 256GB PCIe SSD. I wish the 1TB drive was also an SSD, but obviously Lenovo has to keep that price point somehow.

Real-World Testing
I use Premiere Pro, After Effects and Resolve all the time and can gauge the horsepower of a machine through these apps. Whether editing and/or color correcting, the Lenovo A940 is a good middle ground — it won’t run much more than 4K Red raw footage in realtime without cutting the debayering quality down to half, if not one-eighth. It would make a good “offline” edit system, where you transcode your high-res media to a mezzanine codec like DNxHR or ProRes for editing and then up-res your footage back to the highest resolution you have. Or, if you are in Resolve, you could use optimized media for 80% of the workflow until you color. You will really want a system with a higher-end GPU if you want to fluidly cut and color in Premiere and Resolve. That being said, you can make it work with some debayer tweaking and/or transcoding.

In my testing I downloaded some footage from Red’s sample library, which you can find here. I also used some BRAW clips to test inside of Resolve, which can be downloaded here. I grabbed 4K, 6K, and 8K Red raw R3D files and the UHD-sized Blackmagic raw (BRAW) files to test with.

Adobe Premiere
Using the same Red clips as above, I created two one-minute-long UHD (3840×2160) sequences. I also clicked “Set to Frame Size” for all the clips. Sequence 1 contained these clips with a simple contrast, brightness and color cast applied. Sequence 2 contained the same clips with the same color correction applied, but also a 110% resize, 100 sharpen and 20 Gaussian Blur. I then exported them to various codecs via Adobe Media Encoder using OpenCL for processing. Here are my results:

QuickTime (.mov) H.264, No Audio, UHD, 23.98 Maximum Render Quality, 10 Mb/s:
Color Correction Only: 24:07
Color Correction w/ 110% Resize, 100 Sharpen, 20 Gaussian Blur: 26:11

DNxHR HQX 10-bit UHD
Color Correction Only: 25:42
Color Correction w/ 110% Resize, 100 Sharpen, 20 Gaussian Blur: 27:03

ProRes HQ
Color Correction Only: 24:48
Color Correction w/ 110% Resize, 100 Sharpen, 20 Gaussian Blur: 25:34

As you can see, the export time is pretty long. And let me tell you, once the sequence with the Gaussian Blur and Resize kicked in, so did the fans. While it wasn’t like a jet was taking off, the sound of the fans definitely made me and my wife take a glance at the system. It was also throwing some heat out the back. Because of the way Premiere works, it relies heavily on the CPU over GPU. Not that it doesn’t embrace the GPU, but, as you will see later, Resolve takes more advantage of the GPUs. Either way, Premiere really taxed the Lenovo A940 when using 4K, 6K and 8K Red raw files. Playback in real time wasn’t possible except for the 4K files. I probably wouldn’t recommend this system for someone working with lots of higher-than-4K raw files; it seems to be simply too much for it to handle. But if you transcode the files down to ProRes, you will be in business.

Blackmagic Resolve 16 Studio
Resolve seemed to take better advantage of the AMD Radeon RX 560 GPU in combination with the CPU, as well as the onboard Intel GPU. In this test I added in Resolve’s amazing built-in spatial noise reduction, so other than the Red R3D footage, this test and the Premiere test weren’t exactly comparing apples to apples. Overall the export times will be significantly higher (or, in theory, they should be). I also added in some BRAW footage to test for fun, and that footage was way easier to work and color with. Both sequences were UHD (3840×2160) 23.98. I will definitely be looking into working with more BRAW footage. Here are my results:

Playback: 4K plays in realtime at half quality; 6K and 8K do not play in realtime

H.264 no audio, UHD, 23.98fps, force sizing and debayering to highest quality
Export 1 (Native Renderer)
Export 2 (AMD Renderer)
Export 3 (Intel QuickSync)

Color Only
Export 1: 3:46
Export 2: 4:35
Export 3: 4:01

Color, 110% Resize, Spatial NR: Enhanced, Medium, 25; Sharpening, Gaussian Blur
Export 1: 36:51
Export 2: 37:21
Export 3: 37:13

BRAW 4K (4608×2592) Playback and Export Tests

Playback: Full-res would play at about 22fps; half-res plays at realtime

H.264 No Audio, UHD, 23.98 fps, Force Sizing and Debayering to highest quality
Color Only
Export 1: 1:26
Export 2: 1:31
Export 3: 1:29
Color, 110% Resize, Spatial NR: Enhanced, Medium, 25; Sharpening, Gaussian Blur
Export 1: 36:30
Export 2: 36:24
Export 3: 36:22

DNxHR 10 bit:
Color Correction Only: 3:42
Color, 110% Resize, Spatial NR: Enhanced, Medium, 25; Sharpening, Gaussian Blur: 39:03

One takeaway from the Resolve exports is that the color-only export was much more efficient than in Premiere, taking roughly three to four times realtime for the intensive Red R3D files and about one and a half times realtime for BRAW.
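For anyone who wants to check that takeaway, the arithmetic is simple, assuming the Resolve sequences ran the same one-minute length as the Premiere test:

```python
def realtime_multiple(export_time, timeline_seconds=60):
    # Export duration ("mm:ss") divided by the timeline's run time.
    m, s = export_time.split(":")
    return (int(m) * 60 + int(s)) / timeline_seconds

print(realtime_multiple("3:46"))   # Resolve R3D, color only: ~3.8x
print(realtime_multiple("1:26"))   # Resolve BRAW, color only: ~1.4x
print(realtime_multiple("24:07"))  # Premiere R3D, color only: ~24x
```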

Summing Up
In the end, the Lenovo A940 is a sleek-looking all-in-one touchscreen- and pen-compatible system. While it isn’t jam-packed with the latest high-end AMD GPUs or Intel i9 processors, the A940 is a mid-level system with an incredibly good-looking IPS Dolby Vision monitor and Dolby Atmos speakers. It has some other features — like the IR camera, Qi wireless charger and USB dial — that you might not necessarily be looking for but will love to find.

The power adapter is like a large laptop power brick, so you will need somewhere to stash that, but overall the monitor has a really nice 25-degree tilt that is comfortable when using just the touchscreen or pen, or when using the wireless keyboard and mouse.

Because the Lenovo A940 starts at around $2,299, I think it really deserves a look when you are searching for a new system. If you are working primarily in HD video and/or graphics, this is the all-in-one system for you. Check out more at Lenovo’s website.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Bonfire adds Jason Mayo as managing director/partner

Jason Mayo has joined digital production company Bonfire in New York as managing director and partner. Industry veteran Mayo will be working with Bonfire’s new leadership lineup, which includes founder/Flame artist Brendan O’Neil, CD Aron Baxter, executive producer Dave Dimeola and partner Peter Corbett. Bonfire’s offerings include VFX, design, CG, animation, color, finishing and live action.

Mayo comes to Bonfire after several years building Postal, the digital arm of the production company Humble. Prior to that he spent 14 years at Click 3X, where he worked closely with Corbett as his partner. While there he also worked with Dimeola, who cut his teeth at Click as a young designer/compositor. Dimeola later went on to create The Brigade, where he developed the network and technology that now forms the remote, cloud-based backbone referred to as the Bonfire Platform.

Mayo says a number of factors convinced him that Bonfire was the right fit for him. “This really was what I’d been looking for,” he says. “The chance to be part of a creative and innovative operation like Bonfire in an ownership role gets me excited, as it allows me to make a real difference and genuinely effect change. And when you’re working closely with a tight group of people who are focused on a single vision, it’s much easier for that vision to be fully aligned. That’s harder to do in a larger company.”

O’Neil says that having Mayo join as partner/MD is a major move for the company. “Jason’s arrival is the missing link for us at Bonfire,” he says. “While each of us has specific areas to focus on, we needed someone who could handle the day-to-day of running the company while keeping an eye on our brand and our mission and introducing our model to new opportunities. And that’s exactly his strong suit.”

For the most part, Mayo’s familiarity with his new partners means he’s arriving with a head start. Indeed, his connection to Dimeola, who built the Bonfire Platform — the company’s proprietary remote talent network, nicknamed the “secret sauce” — continued as Mayo tapped Dimeola’s network for overflow and outsourced work while at Postal. Their relationship, he says, was founded on trust.

“Dave came from the artist side, so I knew the work I’d be getting would be top quality and done right,” Mayo explains. “I never actually questioned how it was done, but now that he’s pulled back the curtain, I’m blown away by the capabilities of the Platform and how dramatically it differentiates us.

“What separates our system is that we can go to top-level people around the world but have them working on the Bonfire Platform, which gives us total control over the process,” he continues. “They work on our cloud servers with our licenses and use our cloud rendering. The Platform lets us know everything they’re doing, so it’s much easier to track costs and make sure you’re only paying for the work you actually need. More importantly, it’s a way for us to feel connected – it’s like they’re working in a suite down the hall, except they could be anywhere in the world.”

Mayo stresses that while the cloud-based Platform is a huge advantage for Bonfire, it’s just one part of its profile. “We’re not a company riding on the backs of freelancers,” he points out. “We have great, proven talent in our core team who work directly with clients. What I’ve been telling my longtime client contacts is that Bonfire represents a huge step forward in terms of the services and level of work I can offer them.”

Corbett believes he and Mayo will continue to explore new ways of working now that he’s at Bonfire. “In the 14 years Jason and I built Click 3X, we were constantly innovating across both video and digital, integrating live action, post production, VFX and digital engagements in unique ways,” he observes. “I’m greatly looking forward to continuing on that path with him here.”

Technicolor Post opens in Wales 

Technicolor has opened a new facility in Cardiff, Wales, within Wolf Studios. This expansion of the company’s post production footprint in the UK is a result of the growing demand for more high-quality content across streaming platforms and the need to post these projects, as well as the growth of production in Wales.

The facility is connected to all of Technicolor’s locations worldwide through the Technicolor Production Network, giving creatives easy access to their projects no matter where they are shooting or posting.

The facility, an extension of Technicolor’s London operations, supports all Welsh productions and features a multi-purpose, state-of-the-art suite as well as space for VFX and front-end services including dailies. Technicolor Wales is working on Bad Wolf Production’s upcoming fantasy epic His Dark Materials, providing picture and sound services for the BBC/HBO show. Technicolor London’s recent credits include The Two Popes, The Souvenir, Chernobyl, Black Mirror, Gentleman Jack and The Spanish Princess.

Within this new Cardiff facility, Technicolor is offering 2K digital cinema projection, FilmLight Baselight color grading, realtime 4K HDR remote review, 4K OLED video monitoring, 5.1/7.1 sound, ADR recording/source connect, Avid Pro Tools sound mixing, dailies processing and Pulse cloud storage.

Bad Wolf Studios in Cardiff offers 125,000 square feet of stage space with five stages. There is flexible office space, as well as auxiliary rooms and costume and props storage.

Behind the Title: C&I Studios founder Joshua Miller

While he might run the company, founder/CEO Joshua Miller is happiest creating. He also says there is no job too small: “Nothing is beneath you.”

NAME: Joshua Otis Miller

COMPANY: C&I Studios

CAN YOU DESCRIBE YOUR COMPANY?
C&I Studios is a production company and advertising agency. We are located in New York City, Los Angeles, and Fort Lauderdale.

WHAT’S YOUR JOB TITLE?
Founder and CEO

WHAT DOES THAT ENTAIL?
Well, my job is a little weird. While I own and run the company, my passion has always been filmmaking… since I was four years old. I also run the video and film team at the studio, so my job means a lot of things. One day, I can be shooting on a mountain and the next day writing scripts and concepts, or editing, creating feature films or TV shows or managing post production. Since I’m the CEO, I spend a ton of time bringing in new business and adding technology to the company. Every day feels brand new to me, and that is the best part.

Black Violin

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think the thing that surprises most people is that when I’m on set working, I’m not sitting back drinking a mojito. I’m carrying the tripods and the sandbags and setting up the shots. I’m also the one signing everyone’s checks. One of our core beliefs at our company is “nothing is beneath you,” and that means you can do anything — including cleaning toilets — that helps the company grow, and it requires you to drop your ego. In the creative industry that’s a big deal.

WHAT’S YOUR FAVORITE PART OF THE JOB?
My favorite part of the job is working with my team. I got so sick of the freelance game — it’s so individualized, and everyone is out for themselves. I wanted to start C&I to work with people consistently, dream together, build together and create together. That is by far better than anything else.

WHAT’S YOUR LEAST FAVORITE?
My least favorite part of the job is firing people. That just sucks.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Between 4am and 5am. If you aren’t waking up earlier than everyone else, you aren’t doing it right.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be doing the exact same thing. I could be working at McDonald’s, but I’d be filming with my iPhone or Razer phone and editing. It’s not about the money; you can’t take this thing from me. It’s a part of me, and something I certainly didn’t choose. So, no matter where you put me, this is what will come out. And since Blackmagic DaVinci Resolve is free, this is something I could actually do… I could be working at McDonald’s and shooting for fun on my phone and editing in Resolve’s new cut page, which is magic. That actually sounds awesome. Well, except the McDonald’s part (laughs).

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
Again, I don’t feel like I chose it. It’s something that I always felt drawn to. I was interested in cameras from a very young age… tearing apart my parents’ VHS tapes to see how they worked. I was completely perplexed by the idea that a camera does something and then it goes on this tape, and I see what’s on that tape in this VHS player and on TV. That was something I had to learn and figure out. But the main reason I wanted to really dig into this field is because I remember being in my grandmother’s house watching those VHS tapes with my brothers and my family, and everyone is just sitting around, laughing, watching old memories. I can’t shake that feeling. People feel warm, vulnerable, close… that is the power you have with a camera and the ability to tell a story. It’s absolutely incredible.

Black Violin

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Right now, I’m working on an incredible music video with Black Violin. We are shooting it in Los Angeles and Miami, and I’m really excited about it.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Probably something I’m most proud of is our latest film Christmas Eve. We just poured everything into that film. It’s just magic. We have done a lot of amazing stuff, but that one is really close to me right now.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Camera, computer, speakers (for music — I can’t live without music). Those three things are a must for me to breathe.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I’m not really into social media — I’m not a big fan of what it has turned us into (off of my soapbox now) — but I do follow a ton of film companies and directors. I love following Shane Hurlbut, Blackmagic Design, SmallHD, Red Digital Cinema and Panavision, to name a few.

YOU MENTIONED LOVING MUSIC. DO YOU LISTEN WHILE YOU WORK?
Music is everything. It’s the oil to my car. Without that, I’m toast. Of course, I don’t listen to music when I’m editing, but when I’m on set I love to listen to music. Love the new Chance record. When I’m writing, it’s always either Bon Iver or Michael Giacchino. I love scores and composers.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
To de-stress, I love the moments in the studio when the staff and I just sit around and get to laugh and hang out. I have a beautiful family and two wonderful kids, so when I’m not stressing about work, I’m giving horsey-back rides to my son while my daughter tries to explain TikTok to me.

Quick Chat: Element’s Matthew O’Rourke on Vivian partnership

Recently, Boston-based production and post company Element launched Element Austin — a partnership with production studio Vivian. Element now represents a select directorial roster out of Austin.

We recently reached out to Element executive producer Matthew O’Rourke, who led the charge to get this partnership off the ground.

Can you talk a bit about your partnership with Vivian? How did that come about, and why was this important for Element to do?
I’ve had a relationship with Vivian’s co-owner, Buttons Pham, for almost 10 years. She was my go-to Texas-based resource while I was an executive producer at MMB working on Toyota. She is incredibly resourceful and a great human being. When I joined Element, she became a valued production service partner for our projects in the south (mostly based out of Texas and Atlanta). Our relationship with Vivian was always important to Element since it expands the production support we can offer for our directors and our clients.

Blue Cross Blue Shield

Expanding on that thought, what does Vivian offer that you guys don’t?
They let us have boots on the ground in Austin. They have a strong reputation there and deep resources to handle all levels of work.

How will this partnership work?
Buttons and her business partner Tim Hoppock have become additional executive producers for Element and lead the Element Austin office.

How does the Boston market differ from Austin?
Austin is a growing, vibrant market with tons of amazingly creative people and companies. Lots of production resources are coming in from Los Angeles, but are also developing locally.

Can you point to any recent jobs that resulted from this partnership?
Vivian has been a production services partner for several years, helping us with campaigns for Blue Cross Blue Shield, Subway and more. Since our launch a few weeks ago, we have entered into discussions with several agencies on upcoming work out of the Austin market.

What trends are you seeing overall for this part of the market?
Creative agencies are looking for reliable resources. Having a physical presence in Austin allows us to better support local clients, but also bring in projects from outside that market and produce efficient, quality work.

Good Company adds director Daniel Iglesias Jr.

Filmmaker Daniel Iglesias Jr., whose reel spans narrative storytelling to avant-garde fashion films with creativity and an eccentric visual style, has signed with full-service creative studio Good Company.

Iglesias’ career started while attending Chapman University’s renowned film school, where he earned a BFA in screen acting. At the same time, Iglesias and his friend Zack Sekuler began crafting images for the alt-rock band The Neighbourhood. Iglesias’ career took off after directing his first music video for the band’s breakout hit “Sweater Weather,” which reached over 310 million views. He continues working behind the camera for The Neighbourhood and other artists like X Ambassadors and AlunaGeorge.

Iglesias uses elements of surrealism and a blend of avant-garde and commercial compositions, often stemming from innovative camera techniques. His work includes projects for clients like Ralph Lauren, Steve Madden, Skyy Vodka and Chrysler and the Vogue film Death Head Sphinx.

One of his most celebrated projects was a two-minute promo for Margaux the Agency. Designed as a “living magazine,” Margaux Vol 1 merges creative blocking, camera movement and effects to create a kinetic visual catalog that is both classic and contemporary. The piece took home Best Picture at the London Fashion Film Festival, along with awards from the Los Angeles Film Festival, the International Fashion Film Awards and Promofest in Spain.

Iglesias’ first project since joining Good Company was Ikea’s Kama Sutra commercial for Ogilvy NY, a tongue-in-cheek exploration of the boudoir. Now he is working on a project for Paper Magazine and Tiffany.

“We all see the world through our own lens; through film, I can unscrew my lens and pop it onto other people and, in effect, change their point of view or even the depth of culture,” he says. “That’s why the medium excites me — I want to show people my lens.”

We reached out to Iglesias to learn a bit more about how he works.

How do you go about picking the people you work with?
I do have a couple of DPs and PDs I like to work with on the regular, depending on the job, and sometimes it makes sense to work with someone new. If it’s someone I haven’t worked with before, I typically look at three things to get a sense of how right they are for the project: image quality, taste and versatility. Then it’s a phone call or meeting to discuss the project in person so we can feel out chemistry and execution strategy.

Do you trust your people completely in terms of what to shoot on, or do you like to get involved in that process as well?
I’m a pretty hands-on and involved director, but I think it’s important to know what you don’t know and delegate/trust accordingly. I think it’s my job as a director to communicate, as detailed and effectively as possible, an accurate explanation of the vision (because nobody sees the vision of the project better than I do). Then I must understand that the DPs/PDs/etc. have a greater knowledge of their field than I do, so I must trust them to execute (because nobody understands how to execute in their fields better than they do).

Since Good Company also provides post, how involved do you get in that process?
I would say I edit 90% of my work. If I’m not editing it myself, then I still oversee the creative in post. It’s great to have such a strong post workflow with Good Company.

The editors of Ad Astra: John Axelrad and Lee Haugen

By Amy Leland

The new Brad Pitt film Ad Astra follows astronaut Roy McBride (Pitt) as he journeys deep into space in search of his father, astronaut Clifford McBride (Tommy Lee Jones). The elder McBride disappeared years before, and his experiments in space might now be endangering all life on Earth. Much of the film features Pitt’s character alone in space with his thoughts, creating a happy challenge for the film’s editing team, who have a long history of collaboration with each other and the film’s director James Gray.

L-R: Lee Haugen and John Axelrad

Co-editors John Axelrad, ACE, and Lee Haugen share credits on three previous films — Haugen served as Axelrad’s apprentice editor on Two Lovers, and the two co-edited The Lost City of Z and Papillon. Ad Astra’s director, James Gray, was also at the helm of Two Lovers and The Lost City of Z. A lot can be said for long-time collaborations.

When I had the opportunity to speak with Axelrad and Haugen, I was eager to find out more about how this shared history influenced their editing process and the creation of this fascinating story.

What led you both to film editing?
John Axelrad: I went to film school at USC and graduated in 1990. Like everyone else, I wanted to be a director. Everyone that goes to film school wants that. I then focused on studying cinematography, but several years into film school I realized I don’t like being on the set.

Not long ago, I spoke to Fred Raskin about editing Once Upon a Time… in Hollywood. He originally thought he was going to be a director, but then he figured out he could tell stories in an air-conditioned room.
Axelrad: That’s exactly it. Air conditioning plays a big role in my life; I can tell you that much. I get a lot of enjoyment out of putting a movie together and of being in my own head creatively and really working with the elements that make the magic. In some ways, there are a lot of parallels with the writer when you’re an editor; the difference is I’m not dealing with a blank page and words — I’m dealing with images, sound and music, and how it all comes together. A lot of people say the first draft is the script, the second draft is the shoot, and the third draft is the edit.

L-R: John and Lee at the Papillon premiere.

I started off as an assistant editor, working for some top editors for about 10 years in the ’90s, including Anne V. Coates. I was an assistant on Out of Sight when Anne Coates was nominated for the Oscar. Those 10 years of experience really prepped me for dealing with what it’s like to be the lead editor in charge of a department — dealing with the politics, the personalities and the creative content and learning how to solve problems. I started cutting on my own in the late ‘90s, and in the early 2000s, I started editing feature films.

When did you meet your frequent collaborator James Gray?
Axelrad: I had done a few horror features, and then I hooked up with James on We Own the Night, and that went very well. Then we did Two Lovers after that. That’s where Lee Haugen came in — and I’ll let him tell his side of the story — but suffice it to say that I’ve done five films for James Gray, and Lee Haugen rose up through the ranks and became my co-editor on The Lost City of Z. Then we edited the movie Papillon together, so it was just natural that we would do Ad Astra together as a team.

What about you, Lee? How did you wind your way to where we are now?
Lee Haugen: Growing up in Wisconsin, any time I had a school project, like writing a story or an article, I would turn it into a short video or short film instead. Back then I had to shoot on VHS tape and edit tape to tape by pushing play and hitting record and timing it. It took forever, but that was when I really found out that I loved editing.

So I went to school with a focus on wanting to be an editor. After graduating from Wisconsin, I moved to California and found my way into reality television. That was the mid-2000s and it was the boom of reality television; there were a lot of jobs that offered me the chance to get in the hours needed for becoming a member of the Editors Guild as well as more experience on Avid Media Composer.

After about a year of that, I realized working the night shift as an assistant editor on reality television shows was not my real passion. I really wanted to move toward features. I was listening to a podcast by Patrick Don Vito (editor of Green Book, among other things), and he mentioned John Axelrad. I met John on an interview for We Own the Night when I first moved out here, but I didn’t get the job. But a year or two later, I called him, and he said, “You know what? We’re starting another James Gray movie next week. Why don’t you come in for an interview?” I started working with John the day I came in. I could not have been more fortunate to find this group of people that gave me my first experience in feature films.

Then I had the opportunity to work on a lower-budget feature called Dope, and that was my first feature editing job by myself. The success of the film at Sundance really helped launch my career. Then things came back around. John was finishing up Krampus, and he needed somebody to go out to Northern Ireland to edit the assembly of The Lost City of Z with James Gray. So, it worked out perfectly, and from there, we’ve been collaborating.

Axelrad: Ad Astra is my third time co-editing with Lee, and I find our working as a team to be a naturally fluid and creative process. It’s a collaboration entailing many months of sharing perspectives, ideas and insights on how best to approach the material, and one that ultimately benefits the final edit. Lee wouldn’t be where he is if he weren’t a talent in his own right. He proved himself, and here we are together.

How has your collaborative process changed and grown from when you were first working together (John, Lee and James) to now, on Ad Astra?
Axelrad: This is my fifth film with James. He’s a marvelous filmmaker, and one of the reasons he’s so good is that he really understands the subtlety and power of editing. He’s very neoclassical in his approach, and he challenges the viewer since we’re all accustomed to faster cutting and faster pacing. But with James, it’s so much more of a methodical approach. James is very performance-driven. It’s all about the character, it’s all about the narrative and the story, and we really understand his instincts. Additionally, you need to develop a shorthand language and truly understand what the director wants.

Working with Lee, it was just a natural process to have the two of us cutting. I would work on a scene, and then I could say, “Hey Lee, why don’t you take a stab at it?” Or vice versa. When James was in the editing room working with us, he would often work intensely with one of us and then switch rooms and work with the other. I think we each really touched almost everything in the film.

Haugen: I agree with John. Our way of working is very collaborative — that includes John and me, but also our assistant editors and additional editors. It’s a process that we feel benefits the film as a whole; when we have different perspectives, it can help us explore different options that can raise the film to another level. And when James comes in, he’s extremely meticulous. And as John said, he and I both touched every single scene, and I think we’ve even touched every frame of the film.

Axelrad: To add to what Lee said about involving our whole editing team, I love mentoring, and I love having my crew feel very involved — not just in technical stuff, but creatively. We worked with a terrific guy, Scott Morris, who is our first assistant editor. Ultimately, he was bumped up during the course of the film and earned an additional editor credit on Ad Astra.

We involve everyone, even down to the post assistant. We want to hear their ideas and make them feel like a welcome part of a collaborative environment. They obviously have to focus on their primary tasks, but I think it just makes for a much happier editing room when everyone feels part of a team.

How did you manage an edit that was so collaborative? Did you have screenings of dailies or screenings of cuts?
Axelrad: During dailies it was just James, and we would send edits for him to look at. But James doesn’t really start until he’s in the room. He really wants to explore every frame of film and try all the infinite combinations, especially when you’re dealing with drama and dealing with nuance and subtlety and subtext. Those are the scenes that take the longest. When I put together the lunar rover chase, it was almost easier in some ways than some of the intense drama scenes in the film.

Haugen: As the dailies came in, John and I would each take a scene and do a first cut. And then, once we had something to present, we would call everybody in to watch the scene. We would get everybody’s feedback and see what was working, what wasn’t working. If there were any problems that we could address before moving to the next scene, we would. We liked to get the outside point of view, because once you get further and deeper into the process of editing a film, you do start to lose perspective. To be able to bring somebody else in to watch a scene and to give you feedback is extremely helpful.

One thing that John established with me on Two Lovers — my first editing job on a feature — was allowing me to come and sit in the room during the editing. After my work was done, I was welcome to sit in the back of the room and just observe the interaction between John and James. We continued that process with this film, just to give those people experience and to learn and to observe how an edit room works. That helped me become an editor.

John, you talked about how the action scenes are often easier to cut than the dramatic scenes. It seems like that would be even more true with Ad Astra, because so much of this film is about isolation. How does that complicate the process of structuring a scene when it’s so much about a person alone with his own thoughts?
Axelrad: That was the biggest challenge, but one we were prepared for. To James’ credit, he’s not precious about his written words; he’s not precious about the script. Some directors might say, “Oh no, we need to mold it to fit the script,” but he allows the actors to work within a space. The script is a guide for them, and they bring so much to it that it changes the story. That’s why I always say that we serve the ego of the movie. The movie, in a way, informs us what it wants to be, and what it needs to be. And in the case of this, Brad gave us such amazing nuanced performances. I believe you can sometimes shape the best performance around what is not said through the more nuanced cues of facial expressions and gestures.

So, as an editor, when you can craft something that transcends what is written and what is photographed and achieve a compelling synergy of sound, music and performance — to create heightened emotions in a film — that’s what we’re aiming for. In the case of his isolation, we discovered early on that having voiceover and really getting more interior was important. That wasn’t initially part of the cut, but James had written voiceover, and we began to incorporate that, and it really helped make this film into more of an existential journey.

The further he goes out into space, the deeper we go into his soul, and it’s really a dive into the subconscious. That sequence where he dives underwater in the cooling liquid of the rocket, he emerges and climbs up the rocket, and it’s almost like a dream. Like how in our dreams we have superhuman strength as a way to conquer our demons and our fears. The intent really was to make the film very hypnotic. Some people get it and appreciate it.

As an editor, sound often determines the rhythm of the edit, but one of the things that was fascinating with this film is how deafeningly quiet space likely is. How do you work with the material when it’s mostly silent?
Haugen: Early on, James established that he wanted to make the film as realistic as possible. Sound, or lack of sound, is a huge part of space travel. So the hard part is when you have, for example, the lunar rover chase on the moon, and you play it completely silent; it’s disarming and different and eerie, which was very interesting at first.

But then we started to explore how we could make this sound more realistic or find a way to amplify the action beats through sound. One way was, when things were hitting him or things were vibrating off of his suit, he could feel the impacts and he could hear the vibrations of different things going on.

Axelrad: It was very much part of our rhythm, of how we cut it together, because we knew James wanted to be as realistic as possible. We did what we could with the soundscapes that were allowable for a big studio film like this. And, as Lee mentioned, playing it from Roy’s perspective — being in the space suit with him. It was really just to get into his head and hear things how he would hear things.

Thanks to Max Richter’s beautiful score, we were able to hone the rhythms to induce a transcendental state. We had Gary Rydstrom and Tom Johnson mix the movie for us at Skywalker, and they were the ultimate creators of the balance of the rhythms of the sounds.

Did you work with music in the cut?
Axelrad: James loves to temp with classical music. In previous films, we used a lot of Puccini. In this film, there was a lot of Wagner. But Max Richter came in fairly early in the process and developed such beautiful themes, and we began to incorporate his themes. That really set the mood.

When you’re working with your composer and sound designer, you feed off each other. So things that they would do would inspire us, and we would change the edits. I always tell the composers when I work with them, “Hey, if you come up with something, and you think musically it’s very powerful, let me know, and I am more than willing to pitch changing the edit to accommodate.” Max’s music editor, Katrina Schiller, worked in-house with us and was hugely helpful, since Max worked out of London.

We tend not to want to cut with music because initially you want the edit not to have music as a Band-Aid to cover up a problem. But once we feel the picture is working, and the rhythm is going, sometimes the music will just fit perfectly, even as temp music. And if the rhythms match up to what we’re doing, then we know that we’ve done it right.

What is next for the two of you?
Axelrad: I’m working on a lower-budget movie right now, a Lionsgate feature film. The title is under wraps, but it stars Janelle Monáe, and it’s kind of a socio-political thriller.

What about you Lee?
Haugen: I jumped onto another film as well. It’s an independent film starring Zoe Saldana. It’s called Keyhole Garden, and it’s this very intimate drama that takes place on the border between Mexico and America. So it’s a very timely story to tell.


Amy Leland is a film director and editor. Her short film, Echoes, is now available on Amazon Video. She also has a feature documentary in post, a feature screenplay in development and a new doc in pre-production. She is an editor for CBS Sports Network and recently edited the feature Sundown. You can follow Amy on Twitter at @amy-leland and Instagram at @la_directora.

Review: Boxx’s Apexx A3 AMD Ryzen workstation

By Mike McCarthy

Boxx’s Apexx A3 is based on AMD’s newest Ryzen CPUs and the X570 chipset. Boxx has taken these elements and added liquid CPU cooling, professional GPUs and a compact, solid case to create an optimal third-generation Ryzen system configured for pros. It can support dual GPUs and two 3.5-inch hard drives, as well as the three M.2 slots on the board and anything that can fit into its five PCIe slots. The system I am reviewing came with AMD’s top CPU, the 12-core 3900X running at 3.8GHz, as well as 64GB of DDR4-2666 RAM and a Quadro RTX 4000 GPU. I also tested it with a 40GbE network card and a variety of other GPUs.

I have been curious about AMD’s CPU reboot with Ryzen architecture, but I haven’t used an AMD-based system since the 64-bit Opterons in the HP xw9300s that I had in 2006. That was also around the same time that I last used a system from Boxx, in the form of its HD Pro RT editing systems, based on those same AMD Opteron CPUs. At the time, Boxx systems were relatively unique in that they had large internal storage arrays with eight or 10 separate disks, and those arrays came in a variety of forms.

The three different locations where I worked during that time period had Boxx workstations with IDE-, SATA- and SCSI-based storage arrays. All three types of storage experienced various issues at the locations where I worked with them, but that might have been more a result of the unreliable hard drives and relatively new PCI RAID controllers available at that time than a reflection on Boxx.

Regardless, and for whatever reason, Boxx focused more on processing performance than storage over the next decade, marketing more toward 3D animation and VFX artists (among other users) who do lots of processing on small amounts of data, instead of video editors who do small amounts of processing on large amounts of data. At this point, most large data sets are stored on network appliances or external arrays, although my projects have recently been leaning the other way, using older server chassis with lots of internal drive slots.

Out of the Box
The Apexx system shipped from Boxx in a reasonably sized carton with good foam protection. Compared to the servers I have been using recently, it is tiny and feather-light at 25 pounds. The compact case is basically designed upside down from conventional layouts, with the power supply at the bottom and the card slots at the top. To save space, it fits the 750W power supply directly over the CPU, which is liquid-cooled with a radiator at the front of the case. There are two SATA hard drive bays at the top of the case. The system is based on the X570 Aorus Ultra motherboard, which has three full-length and two x1 PCIe slots, as well as three M.2 slots.

The system has no shortage of USB ports, with four USB 3.0 ports up front next to the headphone and mic connectors, and 10 on the back panel. Of those, three are USB 3.1 Gen2, including one that is a Type-C port. All the rest are Type-A, three more USB 3.0 ports and four USB 2.0 ports. The white USB 3.0 port allows you to update the BIOS from a USB stick if desired, which might come in handy when AMD’s fix to the Zen2 boost frequency issue becomes available. There are also 5.1 analog audio and SPDIF connectors on the board, as well as HDMI out and Wi-Fi antenna ports.

I hooked up my 8K monitor and connected the system to my network for initial config and setup. The simplest test I run is Maxon’s Cinebench 15, which returned a GPU score of 207 and a multi-core CPU score of 3169. Both of those values are the highest results I have ever gotten with that tool, including from dual-socket workstations, although I have not tested the newest generation of Intel Xeons. AMD’s CPUs are well-suited for that particular test, and this is the first true Nvidia Quadro card I have tested from the Turing-based RTX generation.

As this is an AMD X570 board, it supports PCIe 4.0, but that is of little benefit to current GPUs. The one case where the extra bandwidth could currently make a difference is NVMe SSDs playing back high-resolution frames. This system only came with a PCIe 3.0 SSD, but I am hoping to get a newer PCIe 4.0 one to run benchmarks on for a future article. In the meantime, this one is doing just fine for most uses, with over 3GB/sec of read and over 2GB/sec of write bandwidth. This is more than fast enough for uncompressed 4K work.
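
For context on what “fast enough for uncompressed 4K” means, here is a quick back-of-the-envelope check — a minimal sketch in Python, where the frame size, bit depth and frame rates are my own illustrative assumptions rather than figures from Boxx:

```python
# Back-of-the-envelope: data rate of uncompressed 4K playback vs. the
# drive's ~3GB/s read speed. Frame size, bit depth and frame rates below
# are assumptions chosen for illustration.
def data_rate_gbs(width, height, components, bits, fps):
    """Video data rate in GB/s (decimal gigabytes, matching drive specs)."""
    bits_per_frame = width * height * components * bits
    return bits_per_frame * fps / 8 / 1e9

# 4K DCI (4096x2160), 10-bit RGB
print(data_rate_gbs(4096, 2160, 3, 10, 24))  # ~0.80 GB/s at 24fps
print(data_rate_gbs(4096, 2160, 3, 10, 60))  # ~1.99 GB/s even at 60fps
```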

Using Adobe Tools
Next I installed both the 2018 and 2019 versions of Adobe Premiere Pro and Media Encoder so I could run tests with the same applications I had used for previous benchmarks on other systems, for more accurate comparisons. I have a standard set of sequences I export in AME, which are based on raw camera footage from Red Monstro, Sony Venice and ARRI Alexa LF cameras, exported to HEVC at 8K and 4K, testing both 8-bit and deep color render paths. Most of these renders were also completed faster than on any other system I have tested, and this is “only” a single-socket consumer-level architecture (compared to Threadripper and Epyc).

I did further tests after adding a Mellanox 40GbE network card and swapping out the Quadro RTX 4000 for more powerful GPUs. I tested a GeForce RTX 2080 TI, a Quadro RTX 6000, an older Quadro P6000 and an AMD Radeon Pro WX 8200. The 2080 TI and RTX 6000 did allow 8K playback in realtime from RedCineX, but the max-resolution, full-frame 8K files were right at the edge of smooth (around 23fps). Any smaller frame sizes were fine at 24p. The more powerful GeForce card didn’t improve my AME export times much, if at all, and got a 25% lower OpenGL score in Cinebench, revealing that Quadro drivers still make a difference for some 3D applications and that Adobe users don’t benefit much from investing in a GPU beyond a GeForce 2070. The AMD card did much better than in my earlier tests, showing that AMD drivers and software support have improved significantly since then.

Real-World Use
Where the system really stood out is when I started to do some real work with it. The 40GbE connection to my main workstation allowed me to seamlessly open projects that are stored on my internal 40TB array. I am working on a large feature film at the moment, so I used it to export a number of reels and guide tracks. These are 4K sequences of 7K anamorphic Red footage with layers of GPU effects, titles, labels and notes, with over 20 layers of audio as well. Rendering out a 4K DNxHR file of a 20-minute reel takes 140 minutes on my 16-core dual-socket workstation, but this “consumer-level” AMD system kicks them out in under 90 minutes. My watermarked DNxHD guides render out 20% faster than before as well, even over the network. This is probably due to the higher overall CPU frequency, as I have discovered that Premiere doesn’t multi-thread very well.

For AME Render times, lower is better and for Cinebench scores, higher is better.
Comparison system details:
Dell Precision 7910 with the GeForce 2080 TI
Supermicro X9DRi with Quadro P6000
HP Z4 10-core workstation with GeForce 2080 TI
Razer Blade 15 with GeForce 2080 TI Max-Q

I also did some test exports in Blackmagic DaVinci Resolve. I am less familiar with that program, so my testing was much more limited, but it exported nearly as fast as Premiere, and the Nvidia cards were only slightly faster than the AMD GPUs in that app. (But I have few previous Resolve tests to use as a point of comparison to other systems.)

As an AMD system, it has a few limitations compared to a similar Intel model. First of all, there is no support for the hardware encoding available in Intel’s Quick Sync integrated graphics hardware. This lack of support only matters if you have software that uses that particular functionality, such as my Adobe apps, but the system seems fast enough to accomplish those encode and decode tasks on its own. It also lacks a Thunderbolt port, as until recently that was an exclusively Intel technology. Now that Thunderbolt 3 is being incorporated into USB 4.0, it will be more important to have, but it will also become available in a wider variety of products. It might be possible to add a USB 4.0 card to this system when the time comes, which would alleviate this issue.

When I first received the system, it reported the CPU as an 800MHz chip, which was the result of a BIOS configuration issue. After fixing that, the only other problem I had was a conflict between my P6000 GPU and my 8K display, which usually work great together. But it won’t boot with that combo, which is a pretty obscure corner case. All other GPU and monitor combinations worked fine, and I tested a bunch. I worked with Boxx technical support on that and a few other minor issues, and they were very helpful, sending me spare parts to confirm that the issues weren’t caused by my own added hardware.

In the End
The system performed very well for me, and the configuration I received would meet the needs of most users. Even editing 8K footage no longer requires stepping up to a dual-socket system. The biggest variation will come with matching a GPU to your needs, as Boxx offers GeForce, Quadro and AMD options. Editors will probably be able to save some money, while those doing true 3D rendering might want to invest in an even more powerful GPU than the Quadro RTX 4000 that this system came with.

All of those options are available on the Boxx website, with the online configuration tool. The test model Boxx sent me retails for about $4,500. There are cheaper solutions available if you are a DIY person, but Boxx has assembled a well-balanced solution in a solid package, built and supported for you. They also sell much higher-end systems if you are in the market for that, but with recent advances, these mid-level systems probably meet the needs of most users. If you are interested in purchasing a system from them, using the code MIKEPOST at checkout will give you a discount.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Charlieuniformtango names company vets as new partners

Charlieuniformtango principal/CEO Lola Lott has named three of the full-service studio’s most veteran artists as new partners — editors Deedle LaCour and James Rayburn, and Flame artist Joey Waldrip. This is the first time in the company’s almost 25-year history that the partnership has expanded. All three will continue with their current jobs but have received the expanded titles of senior editor/partner and senior Flame artist/partner, respectively. Lott, who retains majority ownership of Charlieuniformtango, will remain principal/CEO, and Jack Waldrip will remain senior editor/co-owner.

“Deedle, Joey and James came to me and Jack with a solid business plan about buying into the company with their futures in mind,” explains Lott. “All have been with Charlieuniformtango almost from the beginning: Deedle for 20 years, Joey for 19 years and James for 18. Jack and I were very impressed and touched that they were interested and willing to come to us with funding and plans for continuing and growing their futures with us.”

So why now, after all these years? “Now is the right time because while Jack and I still have a passion for this business, we also have employees/talent — who have been with us for over 18 years — with a passion to be partners in this company,” says Lott. “While still young, they have invested and built their careers within the Tango culture and have the client bonds, maturity and understanding of the business to be able to take Tango to a greater level for the next 20 years. That was mine and Jack’s dream, and they came to us at the perfect time.”

Charlieuniformtango is a full-service creative studio that produces, directs, shoots, edits, mixes, animates and provides motion graphics, color grading, visual effects and finishing for commercials, short films, full-length feature films, documentaries, music videos and digital content.

Main Image: (L-R) Joey Waldrip, James Rayburn, Jack Waldrip, Lola Lott and Deedle LaCour

Review: Samsung’s 970 EVO Plus 500GB NVMe M.2 SSD

By Brady Betzel

It seems that SSD drives are dropping in price by the hour. (This might be a slight exaggeration, but you understand what I mean.) Over the last year or so, prices have fallen dramatically, including on high-speed NVMe SSD drives. One of those is the highly touted Samsung EVO Plus NVMe line.

In this review, I am going to go over the 500GB version of Samsung’s 970 EVO Plus NVMe M.2 SSD drive. The drive comes in four sizes — 250GB, 500GB, 1TB and 2TB — and retails (according to www.samsung.com) for $74.99, $119.99, $229.99 and $479.99, respectively. For what it’s worth, I really didn’t see much of a price difference on the other sites I visited, namely Amazon.com and Best Buy.

On paper, the EVO Plus line of drives can achieve speeds of up to 3,500MB/s read and 3,300MB/s write. Keep in mind that the smaller the drive, the lower the read/write speeds will be. For instance, the EVO Plus 250GB SSD can still hit up to 3,500MB/s in sequential reads, while its sequential write speeds dwindle to a max of 2,300MB/s. Comparatively, the “standard” EVO line gets 3,400MB/s to 3,500MB/s sequential reads and 1,500MB/s sequential writes on the 250GB EVO SSD. The standard EVO’s 500GB version costs just $89.99, but if you need more storage, you will have to pay more.

There is another SSD to compare the 970 EVO Plus to, and that is the 970 Pro, which only comes in 512GB and 1TB sizes — costing around $169.99 and $349.99, respectively. While the Pro version has similar read speeds to the Plus (up to 3,500MB/s) and actually slower write speeds (up to 2,700MB/s), the real selling point of the Samsung 970 Pro is its Terabytes Written (TBW) rating. Samsung warranties the 970 line of drives for five years or the rated Terabytes Written, whichever comes first. In the 500GB tier of 970 drives, the “standard” and Plus versions are rated for 300TBW, while the Pro covers a whopping 600TBW.
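
To put those TBW figures in perspective, a little arithmetic shows the average daily write budget over the five-year warranty window (a minimal sketch; only the TBW ratings and warranty length above feed into it):

```python
# Rough arithmetic: how much you could write per day, every day, for the
# five-year warranty before exhausting the TBW rating (numbers from above).
def daily_write_budget_gb(tbw, years=5):
    """Average write budget in GB/day over the warranty period (1TB = 1000GB)."""
    return tbw * 1000 / (years * 365)

print(daily_write_budget_gb(300))  # 970 EVO Plus 500GB: ~164 GB/day
print(daily_write_budget_gb(600))  # 970 Pro 512GB: ~329 GB/day
```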

Samsung says its use of the latest V-NAND technology, in addition to its Phoenix controller, gives the EVO NVMe drives their speed and power efficiency. Essentially, V-NAND stacks memory cells vertically instead of the previous method of laying them out on a single plane. Stacking vertically allows for more memory in the same space, in addition to longer life spans. You can read more about the Phoenix controller here.

If you are like me and want both a good warranty (or, really, faith in the product) and blazing speeds, check out the Samsung 970 EVO Plus line of drives. It hits a great price point with almost all of the features of the Pro line. The 970 line of NVMe M.2 SSD drives uses the 2280 form factor (meaning 22mm x 80mm) and an M key-style interface. It’s important to understand which interface your SSD is compatible with: either M key (or M) or B key. Cards in the Samsung 970 EVO line are all M key. Most newer motherboards will have at least one if not two M.2 ports to plug drives into. You can also find PCIe adapters for under $20 or $30 on Amazon that will give you essentially the same read/write speeds. External USB 3.1 Gen 2 (USB-C) enclosures can also be found that give you an easier way of swapping drives when needed without having to open your case.

One really amazing way to use these newly lower-priced drives: When color correcting, editing and/or performing VFX miracles in apps like Adobe Premiere Pro or Blackmagic Resolve, use NVMe drives exclusively for cache, still stores, renders and/or optimized media. With the low cost of these NVMe M.2 drives, you might be able to include the price of one when charging a client and throw it on the shelf when done, complete with the project and media. Not only will you have a super-fast way to access the media, but with an external enclosure you can easily swap another drive into the system.

Summing Up
In the end, the price points of the Samsung 970 EVO Plus NVMe M.2 drives are right in the sweet spot. There are, of course, competing drives that run a little bit cheaper, like the Western Digital Black SN750 NVMe SSDs (at around $99 for the 500GB model), but they come with slightly slower read/write speeds. So for my money, the Samsung 970 line of NVMe drives is a great combination of speed and value that can take your computer to the next level.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producer’s Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Foundry updates Nuke to version 12.0

Foundry has released Nuke 12.0, which kicks off the next cycle of releases for the Nuke family. The release brings improved interactivity and performance across the board, from additional GPU-enabled nodes for cleanup to a rebuilt playback engine in Nuke Studio and Hiero. Nuke 12.0 also integrates GPU-accelerated tools from Cara VR for camera solving, stitching and corrections, along with updates to the latest industry standards.

OpenEXR

New features of Nuke 12.0 include:
• UI interactivity and script loading – This release includes a variety of optimizations throughout the software to improve performance, especially when working at scale. One key improvement offers a much smoother experience, with noticeably better UI interactivity and reduced loading times when working in large scripts.
• Read and write performance – Nuke 12.0 includes focused improvements to OpenEXR read and write performance, including optimizations for several popular compression types (Zip1, Zip16, PIZ, DWAA, DWAB), improving render times and interactivity in scripts. Red and Sony camera formats also see additional GPU support.
• Inpaint and EdgeExtend – These GPU-accelerated nodes provide faster and more intuitive workflows for common tasks, with fine detail controls and contextual paint strokes.
• Grid Warp Tracker – Extending the Smart Vector toolset in NukeX, this node uses Smart Vectors to drive grids for match moving, warping and morphing images.
• Cara VR node integration – The majority of Cara VR’s nodes are now integrated into NukeX, including a suite of GPU-enabled tools for VR and stereo workflows and tools that enhance traditional camera solving and cleanup workflows.
• Nuke Studio, Hiero and HieroPlayer Playback – The timeline-based tools in the Nuke family see dramatic improvements in playback stability and performance as a result of a rebuilt playback engine optimized for the heavy I/O demands of color-managed workflows with multichannel EXRs.

Uppercut ups Tyler Horton to editor

After spending two years as an assistant at New York-based editorial house Uppercut, Tyler Horton has been promoted to editor. This is the first internal talent promotion for Uppercut.

Horton first joined Uppercut in 2017 after a stint as an assistant editor at Whitehouse Post. Stepping up as editor, he has cut notable projects, such as the recent Nike campaign “Letters to Heroes,” a series launched in conjunction with the US Open that highlights young athletes meeting their role models, including Serena Williams and Naomi Osaka. He has also cut campaigns for brands such as Asics, Hypebeast, Volvo and MoMA.

“From the beginning, Uppercut was always intentionally a boutique studio that embraced a collaboration of visions and styles — never just a one-person shop,” says Uppercut EP Julia Williams. “Tyler took initiative from day one to be as hands-on as possible with every project, and we’ve been proud to see him really grow and refine his own voice.”

Horton’s love of film was sparked by watching sports reels and highlight videos. He went on to study film editing, then hit the road to tour with his band for four years before returning to his passion for film.

Cinelab London adds sound mastering supervisor and colorist

Cinelab London, which provides a wide range of film and digital restoration services, has added two new creatives to its staff — sound mastering supervisor Jason Stevens and senior colorist Mike Davis.

Stevens brings with him over 20 years of experience in sound and film archive restoration. Prior to his new role, he was part of the archive and restoration team at Pinewood Studios. Having worked there his whole career, Stevens worked on many big films, including the recent Yesterday, Rocketman and Judy. His clients have included the BFI, Arrow Films, Studio Canal and Fabulous Films.

During his career, Stevens has also been involved in short films, commercials and broadcast documentaries, recently completing a three-year project for Adam Matthew, the award-winning digital publisher of unique primary source collections from archives around the world.

“We have seen Jason’s enviable skills and talents put to their best use over the six years we have worked together,” says Adrian Bull, co-founder and CEO of Cinelab London. “Now we’re thrilled to have him join our growing in-house team. Talents like Jason’s are rare. He brings a wealth of creative and technical knowledge, so we feel lucky to be able to welcome him to our film family.”

Colorist Mike Davis also joins from Pinewood Studios (following its recent closure) where he spent five years grading feature films and episodic TV productions and specializing in archive and restoration. He has graded over 100 restoration titles for clients such as BFI, Studio Canal and Arrow Films on projects such as A Fish Called Wanda, Rita, Sue & Bob Too and Waterworld.

Davis has worked with the world’s leading DPs, handling dailies and grading major feature films including Mission Impossible, Star Wars: Rogue One and Annihilation. He enjoys working on a variety of content including short films, commercials, broadcast documentaries and Independent DI projects. He recently worked on Adewale Akinnuoye-Agbaje’s Farming, which won Best British Film at the Edinburgh Film Festival in June.

Davis started his career at Ascent Media, assisting on film rushes, learning how to grade and operate equipment. By 2010, he segued into production, spending time on set and on location working on stereoscopic 3D projects and operating 3D rigs. Returning to grading film and TV at Company 3, Davis then strengthened his talents working in long format film at Pinewood Studios.

Main Image: (L-R) Stevens and Davis

Pace Pictures and ShockBox VFX formalize partnership

Hollywood post house Pace Pictures and bicoastal visual effects, animation and motion graphics specialist ShockBox VFX have formed a strategic alliance for film and television projects. The two specialist companies provide studios and producers with integrated services encompassing all aspects of post in order to finish any project efficiently, cost-effectively and with greater creative control.

The agreement formalizes a successful collaborative partnership that has been evolving over many years. Pace Pictures and ShockBox collaborated informally in 2015 on the independent feature November Rule. Since then, they have teamed up on numerous projects, including, most recently, the Hulu series Veronica Mars, Lionsgate’s 3 From Hell and Universal Pictures’ Grand-Daddy Day Care and Undercover Brother 2. Pace provided services including creative editorial, color grading, editorial finishing and sound mixing. ShockBox contributed visual effects, animation and main title design.

“We offer complementary services, and our staff have developed a close working rapport,” says Pace Pictures president Heath Ryan. “We want to keep building on that. A formal alliance benefits both companies and our clients.”

“In today’s world of shrinking budgets and delivery schedules, the time for creativity in the post process can often suffer,” adds ShockBox founder and director Steven Addair. “Through our partnership with Pace, producers and studios of all sizes will be able to maximize our integrated VFX pipeline for both quality and volume.”

As part of the agreement, ShockBox will move its West Coast operations to a new facility that Pace plans to open later this fall. The two companies have also set up an encrypted, high-speed data connection between Pace Pictures Hollywood and ShockBox New York, allowing them to exchange project data quickly and securely.

FotoKem expands post services to Santa Monica

FotoKem is now offering its video post services in Santa Monica. This provides an accessible location for those working on the west side of LA, as well as access to the talent from its Burbank and Hollywood studios.

Designed to support an entire pipeline of services, the FotoKem Santa Monica facility is housed just off the 10 freeway, above FotoKem’s mixing and recording studio Margarita Mix. For many projects, color grading, sound mixing and visual effects reviews often take place in multiple locations around town. This facility offers showrunners and filmmakers a new west side post production option. Additionally, the secure fiber network connecting all FotoKem-owned locations ensures feature film and episodic finishing work can take place in realtime among sites.

FotoKem Santa Monica features a DI color grading theater, an episodic and commercial color suite, an editorial conform bay and a visual effects team — all tied to the comprehensive offerings at FotoKem’s main Burbank campus, Keep Me Posted’s episodic finishing facility and Margarita Mix Hollywood’s episodic grading suites. FotoKem’s entire roster of colorists is available to collaborate with filmmakers to ensure their vision is supported throughout the process. Recent projects include Shazam!, Vice, Aquaman, The Dirt, Little and Good Trouble.

Review: Accusonus Era 4 Pro audio repair plugins

By Brady Betzel

With each passing year, it seems that the job title of “editor” changes. The editor is no longer just the person responsible for shaping the story of the show, but also for certain aspects of finishing, including color correction and audio mixing.

In the past, when I was offline editing more often, I learned just how important sending a properly mixed and leveled offline cut was. Whether it was a rough cut, fine cut or locked cut — the mantra to always put my best foot forward was constantly repeating in my head. I am definitely a “video” editor but, as I said, with editors becoming responsible for so many aspects of finishing, you have to know everything. For me this means finding ways to take my cuts from the middle of the road to polished with just a few clicks.

On the audio side, that means using tools like the Accusonus Era 4 Pro audio repair plugins. Accusonus advertises the Era 4 plugins as one-button solutions, and they really are as easy as one button, but you can also fine-tune the audio if you like. The Era 4 Pro plugins work not only with your typical DAW, like Pro Tools 12.x and higher, but also within nonlinear editors like Adobe Premiere Pro CC 2017 or higher, FCP X 10.4 or higher and Avid Media Composer 2018.12.

Digging In
Accusonus’ Era 4 Pro Bundle will cost you $499 for the eight plugins included in its audio repair offering. This includes De-Esser Pro, De-Esser, Era-D, Noise Remover, Reverb Remover, Voice Leveler, Plosive Remover and De-Clipper. There is also an Era 4 (non-pro) bundle for $149 that includes everything mentioned previously except for De-Esser Pro and Era-D. I will go over a few of the plugins in this review and why the Pro bundle might warrant the additional $350.

I installed the Era 4 Pro Bundle on a Wacom MobileStudio Pro tablet that is a few years old but can still run Premiere. I did this intentionally to see just how light the plugins would run. To my surprise, my system was able to toggle each plugin off and on without any issue. Playback was seamless when all plugins were applied. Granted, I wasn’t playing anything but video, but sometimes when I do an audio pass I turn off video monitoring to be extra sure I am concentrating on the audio only.

De-Esser
First up is the De-Esser, which tackles harsh sounds resulting from “s,” “z,” “ch,” “j” and “sh.” So if you run into someone who has some ear-piercing “s” pronunciations, apply the De-Esser plugin and choose from narrow, normal or broad. Once you find which mode helps remove the harsh sounds (otherwise known as sibilance), you can enable “intense” to add more processing power (but doing this can potentially require rendering). In addition, there is an output gain setting and a “Diff” mode that plays only the parts De-Esser is affecting. If you want to just try the “one button” approach, the Processing dial is really all you need to touch. In realtime, you can hear the sibilance diminish. I personally like a little reality in my work, so I might dial the processing to the “perfect” amount, then dial it back 5% or 10%.
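
For the curious, the general technique behind any de-esser — isolate the sibilant band, track its energy, duck it when it spikes — can be sketched in a few lines. This is a crude illustration of the concept, not Accusonus’ actual algorithm; the band edges, threshold and smoothing window are all assumptions:

```python
# Generic de-essing sketch (illustrative only): band-pass the sibilant
# range, follow its envelope and reduce just that band when it spikes.
import numpy as np
from scipy.signal import butter, sosfilt

def deess(x, sr, amount=0.5, band=(5000.0, 9000.0), thresh=0.02):
    """Duck the sibilant band by `amount` (0-1) wherever its energy spikes."""
    sos = butter(4, band, btype="bandpass", fs=sr, output="sos")
    sib = sosfilt(sos, x)                  # isolate the sibilant band
    env = np.abs(sib)                      # crude envelope follower
    k = max(1, int(0.005 * sr))            # ~5ms smoothing so gain doesn't click
    env = np.convolve(env, np.ones(k) / k, mode="same")
    gain = np.where(env > thresh, 1.0 - amount, 1.0)
    return x - sib + sib * gain            # attenuate only the ducked band
```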

De-Esser Pro
Next up is De-Esser Pro. This one is for the editor who wants the one-touch processing but also the ability to dive into the specific part of the audio spectrum being affected and see how the falloff is performed. In addition, there are presets such as male vocals, female speech, etc., to jump immediately to where you need help. I personally find the De-Esser Pro more useful than the De-Esser; I can really shape the plugin. However, if you don’t want to be bothered with the more intricate settings, the De-Esser is still a great solution. Is it worth the extra $350? I’m not sure, but combining it with the Era-D might make you want to shell out the cash for the Era 4 Pro bundle.

Era-D
Speaking of the Era-D, it’s the only plugin not described by its own title, funnily enough; it is a joint de-noise and de-reverberation plugin. However, Era-D goes way beyond simple hum or hiss removal. With Era-D, you get “regions” (I love saying that because of the audio mixers who constantly talk in regions and not timecode) that can not only be split at certain frequencies — with a different percentage of the plugin applied to each region — but can also have individual frequency cutoff levels.

Something I had never heard of before is the ability to use a second mic to help fix a suboptimal recording on the first, which can be done in the Era-D plugin. There is a signal path window that you can use to mix the amount of de-noise and de-reverb. It’s possible to use only one or the other, and you can even run the plugin in parallel or cascade. If that isn’t enough, there is an advanced window with artifact control and more. Era-D is really the reason for that extra $350 between the standard Era 4 bundle and the Era 4 Pro bundle — and it is definitely worth it if you find yourself removing tons of noise and reverb.
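
That parallel-versus-cascade routing is easier to see in code than in prose. In this toy sketch, `denoise` and `dereverb` are stand-ins for the plugin’s two stages, not real Accusonus APIs:

```python
def cascade(x, denoise, dereverb):
    # cascade: the output of one stage feeds the next
    return dereverb(denoise(x))

def parallel(x, denoise, dereverb, mix=0.5):
    # parallel: both stages process the original signal, outputs are blended
    return mix * denoise(x) + (1.0 - mix) * dereverb(x)
```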

Noise Remover
My second favorite plugin in the Era 4 Bundle Pro is the Noise Remover. Not only is the noise removal pretty high-quality (again, I dial it back to avoid robot sounds), but it is painless. Dial in the amount of processing and you are 80% done. If you need to go further, there are five buttons that let you focus where the processing occurs: all frequencies (flat), high frequencies, low frequencies, high and low frequencies, and mid frequencies. I love clicking the power button to hear the difference — with and without the noise removal — but also dialing the knob around to really get the noise removed without going overboard. Whether removing noise in video or audio, there is a fine art to noise reduction, and the Era 4 Noise Remover makes it easy… even for an online editor.
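
Generic noise removers are usually some flavor of spectral gating: estimate the noise floor per frequency bin, then attenuate bins that sit near it. A minimal sketch of that general idea follows — emphatically not Accusonus’ implementation; the frame size, threshold multiplier and the need for a noise-only clip are my assumptions:

```python
# Spectral-gating sketch (illustrative only): learn a per-bin noise floor
# from a noise-only clip, then turn down bins that don't rise above it.
import numpy as np
from scipy.signal import stft, istft

def denoise(x, sr, noise_clip, reduction=0.8):
    """Attenuate bins near the noise floor; `reduction` plays the processing dial."""
    _, _, X = stft(x, fs=sr, nperseg=1024)
    _, _, N = stft(noise_clip, fs=sr, nperseg=1024)
    floor = np.abs(N).mean(axis=1, keepdims=True)  # per-bin noise estimate
    keep = np.abs(X) > 2.0 * floor                 # bins well above the floor
    gain = np.where(keep, 1.0, 1.0 - reduction)
    _, y = istft(X * gain, fs=sr, nperseg=1024)
    return y
```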

Reverb Remover
The Reverb Remover operates very much like the Noise Remover, but instead of noise, it removes reverb. Have you ever gotten a line of ADR that was clearly recorded on an iPhone in a bathtub? I’ve worked on my fair share of reality, documentary, stage and scripted shows, and at some point, someone will send you this — and then the producers will wonder why it doesn’t match the professionally recorded interviews. With Era 4 Noise Remover, Reverb Remover and Era-D, you will get much closer to matching the audio between different recording devices than you would without plugins. Dial that Reverb Remover processing knob to taste and then level out your audio, and you will be surprised at how much better it sounds.

Voice Leveler
To level out your audio, Accusonus also has included the Voice Leveler, which does just what it says: It levels your audio so you won’t get one line blasting in your ears while the next one drops out because the speaker backed away from the mic. Much like the De-Esser, you get a waveform visual of what is being affected in your audio. In addition, there are two modes, tight and normal, that help normalize your dialog. Think of the tight mode as being much more focused than a normal interview conversation; Accusonus describes it as a more focused “radio” sound. The Emphasis button helps address issues when the speaker turns away from the microphone and introduces tonal problems, while Breath Control is a simple toggle that tames audible breaths between lines.
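Conceptually, leveling is gain riding: measure short-term loudness, then apply the inverse gain, smoothed so it doesn’t pump. Here is a naive sketch, with every parameter an illustrative assumption rather than anything from the plugin:

```python
# Naive gain-riding leveler -- the one-knob idea in miniature; the
# target level, window sizes and gain cap are illustrative assumptions.
import numpy as np

def level_voice(audio, sr, target_rms=0.1, win_s=0.4, max_gain=8.0):
    win = max(1, int(win_s * sr))
    # Short-term loudness estimate (moving RMS).
    rms = np.sqrt(np.convolve(audio**2, np.ones(win) / win, mode="same"))
    gain = np.clip(target_rms / np.maximum(rms, 1e-6), 0.0, max_gain)
    # Smooth the gain so it rides levels instead of pumping; reacting
    # faster here is loosely what a "tight" mode would do.
    smooth = max(1, int(0.1 * sr))
    gain = np.convolve(gain, np.ones(smooth) / smooth, mode="same")
    return audio * gain
```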

De-Clipper and Plosive Remover
The final two plugins in the Era 4 Bundle Pro are the De-Clipper and the Plosive Remover. De-Clipper is an interesting little plugin that tries to restore audio lost to clipping. If you recorded audio at too high a gain and it came out horribly distorted, it has probably been clipped. De-Clipper tries to salvage it by recreating the oversaturated audio segments. While it’s always better to monitor your audio recording on set and re-record if possible, sometimes it is just too late. That’s when you should try De-Clipper. There are two modes: one for normal/standard use and one for trickier cases that takes a little more processing power.
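The core de-clipping idea is simple to sketch: find samples pinned at full scale and re-draw them from the surviving neighbors. Real de-clippers (De-Clipper included, presumably) model the signal far more carefully than the cubic interpolation below, so take this only as the gist:

```python
# Simplistic de-clip sketch: interpolate through flat-topped samples.
import numpy as np
from scipy.interpolate import CubicSpline

def declip(audio, clip_level=0.99):
    idx = np.arange(len(audio))
    good = np.abs(audio) < clip_level   # samples we still trust
    if good.all() or good.sum() < 4:    # nothing to fix / too little data
        return audio.copy()
    spline = CubicSpline(idx[good], audio[good])
    out = audio.copy()
    out[~good] = spline(idx[~good])     # re-draw the clipped spans
    return out
```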

The final plugin, Plosive Remover, focuses on the artifacts typically caused by “p” and “b” sounds. These can happen if no pop screen is used and/or if the person being recorded is too close to the microphone. There are two modes: normal and extreme. Subtle pops are easily repaired in normal mode, but harsher pops will definitely need the extreme mode. Much like De-Esser, Plosive Remover has an audio waveform display to show what is being affected, while the “Diff” mode plays back only what is being affected. However, if you just want to stick to that “one button” mantra, the Processing dial is really all you need to mess with. The Plosive Remover is another amazing plugin that, when you need it, does a great job quickly and easily.

Summing Up
In the end, I downloaded all of the Accusonus audio demos found on the Era 4 website, which is also where you can grab the installers if you want to take part in the 14-day trial. I purposely limited my audio editing time to under one minute per clip and plugin to see what I could do. Check out my work with the Accusonus Era 4 Pro audio repair plugins on YouTube and see if anything jumps out at you. In my opinion, the Noise Remover, Reverb Remover and Era-D are worth the price of admission, but each plugin from Accusonus does great work.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

AJA adds HDR Image Analyzer 12G and more at IBC

AJA will soon offer the new HDR Image Analyzer 12G, bringing 12G-SDI connectivity to its realtime HDR monitoring and analysis platform developed in partnership with Colorfront. The new product streamlines 4K/Ultra HD HDR monitoring and analysis workflows by supporting the latest high-bandwidth 12G-SDI connectivity. The HDR Image Analyzer 12G will be available this fall for $19,995.

HDR Image Analyzer 12G offers waveform, histogram and vectorscope monitoring and analysis of 4K/Ultra HD/2K/HD, HDR and WCG content for broadcast and OTT production, post, QC and mastering. It also features HDR-capable monitor outputs that not only go beyond HD resolutions and offer color accuracy but also make it possible to configure layouts and place the preferred tool where needed.

“Since its release, HDR Image Analyzer has powered HDR monitoring and analysis for a number of feature and episodic projects around the world. In listening to our customers and the industry, it became clear that a 12G version would streamline that work, so we developed the HDR Image Analyzer 12G,” says Nick Rashby, president of AJA.

AJA’s video I/O technology integrates with HDR analysis tools from Colorfront in a compact 1-RU chassis to bring HDR Image Analyzer 12G users a comprehensive toolset to monitor and analyze HDR formats, including PQ (Perceptual Quantizer) and hybrid log gamma (HLG). Additional feature highlights include:

● Up to 4K/Ultra HD 60p over 12G-SDI inputs, with loop-through outputs
● Ultra HD UI for native resolution picture display over DisplayPort
● Remote configuration, updates, logging and screenshot transfers via an integrated web UI
● Remote Desktop support
● Support for display-referred SDR (Rec.709), HDR ST 2084/PQ and HLG analysis
● Support for scene-referred ARRI, Canon, Panasonic, Red and Sony camera color spaces
● Display and color processing lookup table (LUT) support
● Nit levels and phase metering
● False color mode to easily spot out-of-gamut or out-of-brightness pixels (a conceptual sketch of the idea follows this list)
● Advanced out-of-gamut and out-of-brightness detection with error tolerance
● Data analyzer with pixel picker
● Line mode to focus a region of interest onto a single horizontal or vertical line
● File-based error logging with timecode
● Reference still store
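For readers curious how a false color mode works in principle, the sketch below decodes PQ code values to nits with the SMPTE ST 2084 EOTF and paints each pixel by brightness band. The thresholds and palette are illustrative assumptions, not AJA’s implementation:

```python
# Conceptual false-color pass: PQ code values -> nits -> color bands.
# EOTF constants are from SMPTE ST 2084; bands/colors are assumptions.
import numpy as np

M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(code):
    """code: PQ-encoded values normalized to [0, 1]."""
    p = np.power(np.clip(code, 0.0, 1.0), 1.0 / M2)
    return 10000.0 * np.power(np.maximum(p - C1, 0.0) / (C2 - C3 * p), 1.0 / M1)

def false_color(luma_code):
    """luma_code: HxW array of PQ luma in [0, 1] -> HxWx3 RGB overlay."""
    nits = pq_to_nits(luma_code)
    img = np.zeros(nits.shape + (3,))
    img[nits < 100] = (0.2, 0.2, 0.8)                     # SDR-ish range
    img[(nits >= 100) & (nits < 1000)] = (0.2, 0.8, 0.2)  # mid HDR
    img[nits >= 1000] = (0.9, 0.1, 0.1)                   # speculars and up
    return img
```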

At IBC 2019, AJA also showed new products and updates designed to advance broadcast, production, post and pro AV workflows. On the stand were the Kumo 6464-12G for routing and the newly shipping Corvid 44 12G developer I/O models. AJA has also introduced the FS-Mini utility frame sync Mini-Converter and three new OpenGear-compatible cards: OG-FS-Mini, OG-ROI-DVI and OG-ROI-HDMI. Additionally, the company previewed Desktop Software updates for Kona, Io and T-Tap; Ultra HD support for IPR Mini-Converter receivers; and FS4 frame synchronizer enhancements.

Behind the Title: Chapeau CD Lauren Mayer-Beug

This creative director loves the ideation process at the start of a project when anything is possible, and saving some of those ideas for future use.

COMPANY: LA’s Chapeau Studios

CAN YOU DESCRIBE YOUR COMPANY?
Chapeau fluidly provides visual effects, editorial, design, photography and story development, with additional experience in web development and software and app engineering.

WHAT’S YOUR JOB TITLE?
Creative Director

WHAT DOES THAT ENTAIL?
It often entails seeing a job through from start to finish. I look at it like making a painting or a sculpture.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Perhaps just how hands-on the process actually is. And how analog I am, considering we work in such a tech-driven environment.

Beats

WHAT’S YOUR FAVORITE PART OF THE JOB?
Thinking. I’m always thinking, from big picture to small details. I love the ideation process at the start of a project when anything is possible, saving some of those ideas for future use and learning about what you want to do along the way. I always learn more about myself through every ideation session.

WHAT’S YOUR LEAST FAVORITE?
Letting go of the details that didn’t get addressed. Not everything is going to be perfect, and since it’s a learning process, there is inevitably something that will catch your eye.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
My mind goes to so many buckets. A published children’s book author with a kick-ass coffee shop. A coffee bean buyer so I could travel the world.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I always skewed in this direction. I’ve always had the mindset of an idea coaxer and gatherer. I was put in that position in my mid-20s and realized I liked it (with lots to learn, of course), and I’ve run with it ever since.

IS THERE A PROJECT YOU ARE MOST PROUD OF?
That’s hard to say. Every project is really so different. A lot of what I’m most proud of is behind the scenes… the process that will go into what I see as bigger things. With Chapeau, I will always love the Facebook projects, all the pieces that came together — both on the engineering side and the fun creative elements.

Facebook

What I’m most excited about is our future stuff. There’s a ton on the sticky board that we aim to accomplish in the very near future. Thinking about how much is actually being set in motion is mind-blowing, humbling and — dare I say — makes me outright giddy. That is why I’m here, to tell these new stories — stories that take part in forming the new landscape of narrative.

WHAT TOOLS DO YOU USE DAY TO DAY?
Anything Adobe. My most effective tool is good old pen and paper. It works clearly in conveying ideas and working out the knots.

WHERE DO YOU FIND INSPIRATION?
I’m always looking for inspiration and find it everywhere, as many other creatives do. However, nature is where I’ve always found my greatest inspiration. I’m constantly taking photos of interesting moments to save for later. Oftentimes I will refer back to those moments in my work. When I need a reset I hike, run or bike. Movement helps.

I’m always going outside to look at how the light interacts with the environment. Something I’ve become known for at work is going out of my way to see a sunset (or sunrise); my colleagues know me to be the first one on the roof for a particularly enchanting magic hour. I’m always staring at the clouds, taking in the subtle color combinations and how colors look the way they do only because of their context. All that said, I often have my nose in a graphic design book.

The overall mood realized from gathering and creating the ever-popular Pinterest board is so helpful. Seeing the mood, color-wise and texturally, never gets old. Suddenly, you have a fully formed example of where your mind is at, something you could never have talked your way through.

Then, of course, there are people. People/peers and what they are capable of will always amaze me.

Mavericks VFX provides effects for Hulu’s The Handmaid’s Tale

By Randi Altman

Season 3 episodes of Hulu’s The Handmaid’s Tale are available for streaming, and if you had any illusions that things would lighten up a bit for June (Elisabeth Moss) and the ladies of Gilead, I’m sorry to say you will be disappointed. What’s not disappointing is that, in addition to the amazing acting and storylines, the show’s visual effects once again play a heavy role.

Brendan Taylor

Toronto’s Mavericks VFX has created visual effects for all three seasons of the show, based on Margaret Atwood’s dystopian view of the not-too-distant future. Its work has earned two Emmy nominations.

We recently reached out to Mavericks’ founder and visual effects supervisor, Brendan Taylor, to talk about the new season and his workflow.

How early did you get involved in each season? What sort of input did you have regarding the shots?
The Handmaid’s Tale production is great because they involve us as early as possible. Back in Season 2, when we had to do the Fenway Park scene, for example, we were in talks in August but didn’t shoot until November. For this season, they called us in August for the big fire sequence in Episode 1, and the scene was shot in December.

There’s a lot of nice leadup and planning that goes into it. Our opinions are sought, and we’re able to provide input on the best methodology to achieve a shot. Showrunner Bruce Miller, along with the directors, has a way he’d like to see it, and they’re great at taking in our recommendations. It’s very collaborative, and we all approach the process with “what’s best for the show” in mind.

What are some things that the showrunners asked of you in terms of VFX? How did they describe what they wanted?
Each person has a different approach. Bruce speaks in story terms, providing a broader sense of what he’s looking for. He gave us the overarching direction of where he wants to go with the season. Mike Barker, who directed a lot of the big episodes, speaks in more specific terms. He really gets into the details, determining the moods of the scene and communicating how each part should feel.

What types of effects did you provide? Can you give examples?
Some standout effects were the CG smoke in the burning fire sequence and the aftermath of the house being burned down. For the smoke, we had to make it snake around corners in a believable yet magical way. We had a lot of fire going on set, and we couldn’t have any actors or stunt people near it due to its size, so we had to line up multiple shots and composite them together to make everything look realistic. We then had to recreate the whole house in 3D in order to create the aftermath of the fire, with the house completely burned down.

We also went to Washington, and since we obviously couldn’t destroy the Lincoln Memorial, we recreated it all in 3D. That took a lot of back and forth between Bruce, the director and our team. Different parts of Lincoln being chipped away mean different things, and Bruce definitely wanted the head to be off. It was really fun because we got to provide a lot of suggestions. On top of that, we also had to create CGI handmaids and all the details that came with them. We had to get the robes right and did cloth simulation to match what was shot on set. There were about a hundred handmaids on set, but we had to make it look like there were thousands.

Were you able to reuse assets from last season for this one?
We were able to reuse the handmaid asset from last season, but it needed a lot of upgrades. Because there were closer shots of the handmaids this season, we had to tweak it and make sure little things like the textures, shaders and cloth simulations were right.

Were you on set? How did that help?
Yes, I was on set, especially for the fire sequences. We spent a lot of time talking about what’s possible and testing different ways to make it happen. We want it to be as perfect as possible, so I had to make sure it was all done properly from the start. We sent another visual effects supervisor, Leo Bovell, down to Washington to supervise out there as well.

Can you talk about a scene or scenes where being on set played a part in doing something either practical or knowing you could do it in CG?
The fire sequence with the smoke going around the corner took a lot of on-set collaboration. We had tried doing it practically, but the smoke was moving too fast for what we wanted, and there was no way we could physically slow it down.

Having the special effects coordinator, John MacGillivray, there to give us real smoke that we could then match was invaluable. In most cases on this show, very few audibles were called. They want to go into the show knowing exactly what to expect, so we were prepared and ready.

Can you talk about turnaround time? Typically, series have short ones. How did that affect how you worked?
The average turnaround time was eight weeks. We began discussions in August, before shooting, and had to deliver by January. We worked with Mike to simplify things without diminishing the impact; we just wanted to make sure we had the chance to do it well given the time we had, and Mike was very receptive in asking what we needed to make it the best it could be in that timeframe. Take the fire sequence, for example: We could have done full-CGI fire, but that would have taken six months. So we did our research and testing to find the most efficient way to merge practical effects with CGI and presented the best version in a shorter period of time.

What tools were used?
We used Foundry Nuke for compositing. We used Autodesk Maya to build all the 3D houses, including the burned-down house, and to destroy the Lincoln Memorial. Then we used Side Effects Houdini to do all the simulations, which can range from the smoke and fire to crowd and cloth.

Is there a shot that you are most proud of or that was very challenging?
The shot where we reveal the crowd over June in Washington was incredibly challenging. The actual Lincoln Memorial, where we shot, is an active public park, so we couldn’t prevent people from visiting the site; the most we could do was hold them off for a few minutes. We ended up having to clean all of the tourists out of the shot, which is difficult with a moving camera and moving people. We had to reconstruct about 50% of the plate. Then, in order to get the CG people standing there, we had to create a replica of the ground in CG. There were some models we got from the US Geological Survey, but they didn’t completely line up, so we had to make a lot of decisions on the fly.

The cloth simulation in that scene had to be perfect. We had to match the damping and the movement of all the robes, and Stephen Wagner, our effects lead on it, nailed it. It was really exciting to see it all come together. It looked seamless, and when you saw it in the show, nobody believed that the foreground handmaids were all CG. We’re very proud.

What other projects are you working on?
We’re working on a movie called Queen & Slim, directed by Melina Matsoukas for Universal. It’s really great. We’re also doing YouTube Premium’s Impulse and Netflix’s Madam C.J. Walker series.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years.