
The 71st NATAS Technology & Engineering Emmy Award winners

The National Academy of Television Arts & Sciences (NATAS) has announced the recipients of the 71st Annual Technology & Engineering Emmy Awards. The event will take place in partnership with the National Association of Broadcasters, during the NAB Show on Sunday, April 19 in Las Vegas.

The Technology & Engineering Emmy Awards are awarded to a living individual, a company or a scientific or technical organization for developments and/or standardization involved in engineering technologies that either represent so extensive an improvement on existing methods or are so innovative in nature that they have materially affected television.

A Committee of engineers working in television considers technical developments in the industry and determines which, if any, merit an award.

“The Technology & Engineering Emmy Award was the first Emmy Award issued in 1949 and it laid the groundwork for all the other Emmys to come,” says Adam Sharp, CEO/president of NATAS. “We are especially excited to be honoring Yvette Kanouff with our Lifetime Achievement Award in Technology & Engineering.”

Kanouff has held CTO and president roles at various companies in the cable and media industry. Over the years, she has spearheaded transformational technologies, such as video on demand, cloud DVR, digital and on-demand advertising, streaming security and privacy.

And now, the award recipients:

2020 Technical / Engineering Achievement Awards

Pioneering System for Live Performance-Based Animation Using Facial Recognition
– Adobe

HTML5 Development and Deployment of a Full TV Experience on Any Device
– Apple
– Google
– LG
– Microsoft
– Mozilla
– Opera
– Samsung

Pioneering Public Cloud-Based Linear Media Supply Chains
– AWS
– Discovery
– Evertz
– Fox Neo (Walt Disney Television)
– SDVI

Pioneering Development of Large Scale, Cloud Served, Broadcast Quality, Linear Channel Transmission to Consumers
– Sling TV
– Sony PlayStation Vue
– Zattoo

Early Development of HSM Systems that Created a Pivotal Improvement in Broadcast Workflows
– Dell (Isilon)
– IBM
– Masstech
– Quantum

Pioneering Development and Deployment of Hybrid Fiber Coax Network Architecture
– CableLabs

Pioneering Development of the CCD Image Sensor
– Bell Labs
– Michael Tompsett

VoCIP (Video over Bonded Cellular Internet)
– AVIWEST
– Dejero
– LiveU
– TVU Networks

Ultra-High Sensitivity HDTV Camera
– Canon
– Flovel

Development of Synchronized Multi-Channel Uncompressed Audio Transport over IP Networks
– ALC NetworX
– Audinate
– Audio Engineering Society
– Kevin Gross
– QSC
– Telos Alliance
– Wheatstone

Emmy Statue image courtesy of ATAS/NATAS

Behind the Title: Design director Liron Eldar-Ashkenazi

NAME: Liron Eldar-Ashkenazi  (@iamlirona)

WHAT’S YOUR JOB TITLE?
Design Director

WHAT DOES THAT ENTAIL?
I help companies execute on their creative hopes and dreams, both hands-on and as a consultant and director.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Educating my clients about the lay of the land when it comes to getting what they want creatively. People typically think coming up with creative concepts is easy and quick. A big part of my job is helping companies see the full scope of taking a project from beginning to end with success while being mindful of timeline and budget.

HOW LONG HAVE YOU BEEN WORKING IN MOTION GRAPHICS?
I was accepted to the prestigious position of motion graphics artist in the Israel Defense Forces when I was 18 — in Israel, all women and men have to serve in the military. It’s now been about 12 years that I’ve been creating and animating.

HOW HAS THE INDUSTRY CHANGED IN THE TIME YOU’VE BEEN WORKING? WHAT’S BEEN GOOD, WHAT’S BEEN BAD?
I see a lot more women 3D artists and animators. It’s so refreshing! It used to be a man’s world, and I’m so thrilled to see the shift. Overall, it’s becoming a bit more challenging as screens are changing so fast and there are so many of them. Everything you create has to suit a thousand different use cases, and coming up with the right strategy for that takes longer than it did when we were only thinking in :15s and :30s at 16:9.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I love that there are so many facets to my work under one title. Coming up with concepts, designing, animating, creating prints and artwork, and working with typography is just so much more rewarding than in the days when you only had one job, whether lighting, texturing, animating or designing. Now an artist is free to do multiple things, and it’s well appreciated.

WHAT’S YOUR LEAST FAVORITE?
Long rendering times. I think computers are becoming stronger, but we also demand more and more from them. I still hate sitting and waiting for a computer to show me what I’m working on.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Morning! I’m a morning person who loves to start early and finish when there’s still light out.

WHY DID YOU CHOOSE THIS PROFESSION?
I didn’t really choose it; it chose me.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
At age 16 I knew I would never be great at sitting on my behind and just studying the text. I knew I needed to create in order to succeed. It’s my safe space and what I do best.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Some other form of visual artist, or a psychologist.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Right before I left The-Artery as a design director, where I’d been working the past three years, we created visuals for a really interesting documentary. All the content was created in 3D using Cinema 4D and Octane. We produced about 18 different spots explaining different concepts. My team and I did everything from concept to rendering. It’ll be amazing to see it when it comes out.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
At The-Artery, I was in charge of a really interesting branding project for a fintech company. We created an entire visual language in 3D for their everyday marketing, website and blog use. All content was designed and rendered using Cinema 4D, and it was so great combining a branding exercise with motion graphics to bring all the visuals to life.

YOU HAVE RECENTLY PRESENTED YOUR WORKFLOW AT TRADE SHOWS AND ROAD TOURS. TELL US ABOUT SHARING YOUR WORK PUBLICLY.
I was invited by Maxon, the developer of Cinema 4D, to give a live-demo presentation at SIGGRAPH 2019. It was an exceptional experience, and I received really lovely responses from the community and from artists looking to combine more graphic design into their motion graphics and 3D pipeline. I shared some cool methods I’ve developed in Cinema 4D for creating fine-art looks for renders.

PRESENTLY, YOU ARE WORKING AS ARTIST IN RESIDENCE AT FACEBOOK. HOW DID THIS COME ABOUT AND WHAT KIND OF WORK ARE YOU DOING?
Facebook somehow found me. I assume it was through my Instagram account, where I share my wild, creative experiments. The program is a six-week residency at their New York office, where I get to flex my analog muscles and create prints at their Analog lab. In the lab, they have all the art supplies you can ask for along with an amazing Risograph printer. I’ve been creating posters and zines from my 3D rendered illustrations.

WHAT SOFTWARE TOOLS DO YOU USE DAY-TO-DAY?
Maxon Cinema 4D is my primary tool. I design almost everything I create in it, including work that seems to be flat and graphic.

WHERE DO YOU FIND INSPIRATION NOW?
I find talking to people and brainstorming has always been the thing that sparks the most creativity in me. Solving problems is another way I tackle every design assignment. I always need to figure out what needs to be fixed, be better or change completely, and that’s what I find most inspires me to create.

THIS IS A HIGH-STRESS JOB WITH DEADLINES AND CLIENT EXPECTATIONS. WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Planning is critical for me to feel confident about projects and helps me avoid stress in general. Giving my work 100% and not setting false expectations for my clients also helps limit stress. It’s key to be honest from the get-go if I think something won’t work within the timeline, or if late changes would hurt the final product. If I do get to a point where I’m really stressed, I find that running, going out dancing or dancing to my favorite music at home, and generally listening to music are all helpful.

Directing bookend sequences for Portals, a horror anthology film

By Hasraf “HaZ” Dulull

Portals is a genre-bending feature film anthology focusing on a series of worldwide blackouts — after which millions of mysterious objects appear everywhere across the planet. While many flee from the sentient objects, some people are drawn toward and into them with horrifying consequences.

Portals

The film was in the final stages of post when writer/director Liam O’Donnell (Beyond Skyline and the upcoming Skylines film) called to see if I would like to get involved and direct some bookend sequences to add more scope and setup, which the producers felt was very much needed. I loved the premise and the world of the anthology, so I said yes. I pitched an idea for an ending that quickly evolved into an extra segment at the end of the film, which I directed. That’s why there are officially four directors on the show, with me getting executive producer and “end-segment created by” credits.

Two of the other sequences are around 20 to 25 minutes each, and O’Donnell’s sequence is around 35 minutes. The film is 85 minutes long. Eduardo Sanchez and Gregg Hale (The Blair Witch Project) co-directed their segments. So the anthology feature film is really three long segments plus my bookend sequences. The only connections among all the stories are the objects that appear, the event itself and the actual “portal,” but everything else was unique to each segment’s story. In terms of production, the only consistencies throughout the anthology were the camera language — that slight hand-held feel — and, of course, the music/sound.

I had to watch the latest cut of the entire anthology film to get my head into that world, but I was given freedom to bring my own style to my sequences. That is exactly the point of an anthology — for each director to bring his or her own sensibilities to the individual segments. Besides Liam, the main producers I worked closely with on this project were Alyssa Devine and Griffin Devine from Pigrat Productions. They are fans of my first feature film, The Beyond, so they really encouraged the grounded tone I had demonstrated in that film.

The portal in Portals.

I’ve been a huge advocate of Blackmagic cameras and technology for a long time. Additionally, I knew I had a lot to shoot in a very short time span (two days!), so I needed a camera that was light and flexible yet able to shoot 4K. I brought on cinematographer Colin Emerson, who shoots in a very loose way but always makes his stuff look cinematic. We watched the cut of the film and noticed the consistent loose nature of the cinematography on all the segments. Colin uses the Fig Rig a lot, and I love the way that rig works; the BMD Pocket Cinema Camera 4K fits nicely on it along with the DSLR lenses he likes to use. The other reason was to be able to use Blackmagic’s new BRaw format.

We also shot the segment with a skeleton crew, which comprised myself as director/producer; VFX supervisor/1st AD John Sellings, who also did some focus pulling; James De Taranto (sound recording); DP/camera operator Colin Emerson; FX makeup artists Kate Griffith and Jay James; and our two actors, Georgina Blackledge and Dare Emmanuel. I worked with both of them on my feature film The Beyond.

The Post
One thing I wanted to make sure of was that the post team at The Institution in LA was able to take my Resolve files and literally work from them for the picture post. One of the things I did during prep (before we even cast) was to shoot some tests to show what I had in mind in terms of look and feel. We also tested the BRaw and color workflow between my setup in London and the LA team. Colin and I did this during the location recce. This proved extremely useful in ensuring we set our camera to the exact specs the post house wanted. So we shot at 23.98fps, 4K (4096×1716) 2.39:1 cropped, in Blackmagic Design log color space.

HaZ’s segments were captured with the Blackmagic Pocket Cinema Camera.

During the test, I did some quick color passes to show the producers in LA the tone and mood I was going for and to make sure everyone was on board before I shot it. The look was very post-apocalyptic, as it’s set after the main events have happened. I wanted the locations to be a contrast with each other, one interior and one exterior with greens.

Colin is used to shooting most of his stuff on the Panasonic GH series, but he had the Pocket Cinema Camera and was looking for the right project to use it on. He found he could use all of his usual lenses because the Pocket Cinema Camera has the same mount. Lenses used were the Sigma 18-35mm f/1.8 with a Metabones Speed Booster, the Olympus 12mm f/2 and the Lumix 35-100mm f/2.8.

Colin used the onboard monitor screen on the Pocket Cinema Camera, while I used a tethered external monitor — the Ikan DH5e — for directing. We used a 1TB Samsung external SSD securely attached to the rig cage along with a 64GB CFast card. The resolution we shot in was determined by the tests we did. We set up the rushes for post after each of the two days of the shoot, so during the day we would swap out drives and back things up. At the end of the day, we would bring in all the picture and sound rushes and use the amazing autosync feature in Blackmagic DaVinci Resolve to set it all up. This way, when I headed back home I could start editing right away inside Resolve.
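For anyone wanting to replicate that end-of-day offload step, here is a minimal sketch in Python (my illustration only, not the tool HaZ used; the drive paths and the MD5-based verification are assumptions) of a checksummed copy of a day’s rushes before a drive gets wiped and reused:

import hashlib
import shutil
from pathlib import Path

def md5sum(path: Path, chunk_size: int = 8 * 1024 * 1024) -> str:
    """Hash a file in chunks so large camera originals don't fill memory."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def offload(source: Path, backup: Path) -> None:
    """Copy every file from the shoot drive to the backup and verify each copy."""
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = backup / src.relative_to(source)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)                      # copy, preserving timestamps
        if md5sum(src) != md5sum(dst):              # verify before reusing the SSD
            raise IOError(f"Checksum mismatch: {src}")

# Hypothetical paths for one shoot day's picture and sound rushes
offload(Path("/Volumes/SHOOT_SSD"), Path("/Volumes/BACKUP/day01"))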

Resolve

I have to admit, we were hesitant at first because I was shooting and capturing log in QuickTime ProRes 4444, and I had always avoided DNG raw because of the huge file sizes and data transfer. But the team at Blackmagic has always been so supportive and provided us with help right up to the end of the shoot, so after testing BRaw I was impressed. We had so much control, as all that information is accessible within Resolve. I was able to set the temp look during editing, and the colorist worked from there. Skin tones were of utmost importance; because of the intimate nature of the drama, I wanted a natural look to the skin tones. I am really happy with the way they came out in the end.

The LA team couldn’t believe how cinematic the footage was when we told them we shot using the Pocket Cinema Camera, since the other segments were shot on cameras like Red. We delivered to the same 4K deliverables spec as the other segments in the film.

HaZ on set, second from right.

I used the AMD Radeon RX Vega 56 version of the Blackmagic eGPU. The reason was that I wanted to edit on my MacBook Pro (late 2017) and needed the power to run 4K in realtime. I was so impressed with how much power it provided; it was like having a new MacBook Pro without having to buy one. The eGPU also gave me all the connectivity I needed (two Thunderbolt 3 and four USB 3 ports), which the MacBook Pro on its own lacks.

The beauty of keeping everything native was that there wasn’t much work to do when porting, as it’s just plug and play. And Resolve detects the eGPU, which you can then set as the default. The BRaw format makes it all so manageable to preview and play back in real time. Also, since it’s native, Resolve doesn’t need to do any transcoding in the background. I have always been a huge fan of the tracking in Resolve, and I was able to do eye effects very easily without them being budgeted or done as VFX shots. I was able to get the VFX render assets from the visual effects artist (Justin Martinez) in LA and do quick slap comps during editing. I love the idea that I can set looks and store them as memories, which I can then recall very quickly to apply to a bunch of shots. This allows me to have a slick-looking preview rough cut of the film.

Portals

I sent a hard drive containing all the organized rushes to the team in LA while I was doing the final tweaks to the edit. Once the edit was signed off, or if any last-minute notes came in, I would address them and email the team my updated Resolve file. It was super simple, and the colorist (Oliver Ojeil) and post team (Chad Van Horn and Danny Barone) in LA appreciated the simple workflow because there really wasn’t any conforming for them to do apart from a one-click relink of media location; they would just take my Resolve file and start working away with it.

We used practical effects to keep the horror as real and grounded as possible and used VFX to augment further. We were fortunate to be able to get special effects makeup artist Kate Griffiths. Given the tight schedule, she was able to create a terrifying effect, which I won’t give away. You need to watch the film to see it! We had to shoot those makeup-FX-heavy shots at the end of the day, which meant we had to be smart about how we scheduled the shoot given the hours-long makeup process. Kate was also on hand to provide effects like the liquid coming out of the eyes and sweat, etc. — every detail of which the camera picked up for us so we could bring it out in the grade.

The Skype-style shots at the start of the film (phone and computer monitor shots) had their VFX screen elements placed as a separate layer so the post team in LA could grade them separately and control the filters applied on them. For some of the wide shots showing our characters entering and leaving the portal, we keyframed some movement of the 4K shot along with motion blur to give the effect of in-camera movement. I also used the camera shake within Resolve, which comes with so many options to create bespoke movement on static frames.

Portals is now available on iTunes and other VOD platforms.


HaZ Dulull is known for his sci-fi feature films The Beyond and 2036 Origin Unknown, as well as for his television work directing the pilot and episodes of Disney’s Fast Layne. He is currently busy with projects at various stages of development and production at his production company, hazfilm.com.

Picture Shop VFX acquires Denmark’s Ghost VFX

Burbank’s Picture Shop VFX has acquired Denmark’s Ghost VFX. The Copenhagen-based studio, founded in 1999, provides high-end visual effects work for film, television and several streaming platforms. The move helps Picture Shop “increase its services worldwide and broaden its talent and expertise,” according to Picture Shop VFX president Tom Kendall.

Over the years, Ghost has contributed to more than 70 feature films and titles. Some of Ghost’s work includes Star Wars: The Rise of Skywalker, The Mandalorian, The Walking Dead, See, Black Panther and Star Trek: Discovery.

“As we continue to expand our VFX footprint into the international market, I am extremely excited to have Ghost join Picture Shop VFX,” says Bill Romeo, president of Picture Head Holdings.

Ghost’s Christensen says the studio takes up three floors and 13,000 square feet in a “vintage and beautifully renovated office building” in Copenhagen. The studio’s main tools are Autodesk Maya, Foundry Nuke and SideFX Houdini.

“We are really looking forward to a tight-knit collaboration with all the VFX teams in the Picture Shop group,” says Christensen. “Right now Ghost will continue servicing current clients and projects, but we’re really looking forward to exploring the massive potential of being part of a larger and international family.”

Picture Shop VFX is a division of Picture Head Holdings, which has locations in Los Angeles, Vancouver, the United Kingdom and Denmark.

Main Image: Ghost artists at work.

Sohonet beefs up offerings with Exchange acquisition

Sohonet, which provides connectivity, media services and network security for media and entertainment, has acquired Exchange Communications, which has been providing IT services to film and television productions for more than 20 years. The acquisition broadens the range of connectivity and collaboration solutions that each organization can offer its customers.

Sohonet has a global network of over 500 media companies as well as realtime collaboration, cloud-acceleration and file-transfer tools, while Exchange offers fixed production studio services for phones and video surveillance and rapidly available remote production communications. Together, the companies will serve the rapidly growing and changing production industry across features, episodic and advertising.

Sohonet will invest in the expansion of Exchange Communications services in other geographies, initially focusing on Canada and the UK.

Review: HP’s ZBook G6 mobile workstation

By Brady Betzel

In a year that’s seen AMD reveal an affordable 64-core processor with its Threadripper 3, it appears as though we are picking up steam toward next-level computing.

Apple finally released its much-anticipated Mac Pro (which comes with a hefty price tag for the 1.5TB upgrade), and custom-build workstation companies — like Boxx and Puget Systems — can customize good-looking systems to fit any need you can imagine. Additionally, over the past few months, I have seen mobile workstations leveling the playing field with their desktop counterparts.

HP is well-known in the M&E community for its powerhouse workstations. Since I started my career, I have worked on either a Mac Pro or an HP. Both have their strong points. However, for workstation users who must be able to travel with their systems, there have always been some technical capabilities you had to give up in exchange for a smaller footprint. That is, until now.

The newly released HP ZBook 15 G6 has become the rising tide that will float all the boats in the mobile workstation market. I know I’ve said it before, but the classification of “workstation” is technically much more than just a term companies throw around. The systems with workstation-level classification (at least from HP) are meant to be powered on and run at high levels 24 hours a day, seven days a week, 365 days a year.

They are built with high-quality, enterprise-level components, such as ECC (error correcting code) memory. ECC memory will self-correct errors that it sees, preventing things like blue screens of death and other screen freezes. ECC memory comes at a cost, and that is why these workstations are priced a little higher than a standard computer system. In addition, the warranties are a little more inclusive — the HP ZBook 15 G6 comes with a standard three-year/on-site service warranty.

Beyond the “workstation” classification, the ZBook 15 G6 is amazingly powerful, brutally strong and incredibly colorful and bright. But what really matters is under the hood. I was sent the HP ZBook 15 G6 that retails for $4,096 and contains the following specs:
– Intel Xeon E-2286M (eight cores/16 threads — 2.4GHz base/5GHz Turbo)
– Nvidia Quadro RTX 3000 (6GB VRAM)
– 15.6-inch UHD HP DreamColor display, anti-glare, WLED backlit, 600 nits, 100% DCI-P3
– 64GB DDR4 2667MHz
– 1TB PCIe Gen 3 x4 NVMe SSD TLC
– FHD webcam 1080p plus IR camera
– HP collaboration keyboard with dual point stick
– Fingerprint sensor
– Smart Card reader
– Intel Wi-Fi 6 AX200, 802.11ax 2×2 + BT 4.2 combo adapter (vPro)
– HP long-life battery four-cell 90 Wh
– Three-year limited warranty

The ZBook 15 G6 is a high-end mobile workstation with a price that reflects it. However, as I said earlier, true workstations are built to withstand constant use and, in this case, abuse. The ZBook 15 G6 has been designed to pass up to 21 extensive MIL-STD 810G tests, which is essentially worst-case-scenario testing: for instance, drop testing from around four feet, sand and dust testing, radiation testing (the sun beating down on the laptop for an extended period) and much more.

The exterior of the G6 is made of aluminum and built to withstand abuse. The latest G6 is a little bulky/boxy, in my opinion, but I can see why it would hold up to some bumps and bruises, all while working at blazingly fast speeds, so bulk isn’t a huge issue for me. Because of that bulk, you can imagine that this isn’t the lightest laptop either. It weighs in at 5.79 pounds for the lowest end and measures 1 inch by 14.8 inches by 10.4 inches.

On the bottom of the workstation is an easy-to-access panel for performing repairs and upgrades yourself. I really like the bottom compartment. I opened it and noticed I could throw in an additional NVMe drive and an SSD if needed. You can also access memory here. I love this because not only can you perform easy repairs yourself, but you can perform upgrades or part replacements without voiding your warranty on the original equipment. I’m glad to see that HP kept this in mind.

The keyboard is smaller than a full-size version but has a number keypad, which I love using when typing in timecodes. It is such a time-saver for me. (I credit entering repair order numbers when I fixed computers at Best Buy as a teenager.) At the top of the keyboard are some handy shortcuts if you do web conferences or calls on your computer, including answering and ending calls. The Bang & Olufsen speakers are some of the best laptop speakers I’ve heard. While they aren’t quite monitor-quality, they do have some nice sound on the low end that I was able to fine-tune in the Bang & Olufsen audio control app.

Software Tests
All right, enough of the technical specs. Let’s get on to what people really want to know — how the HP ZBook 15 G6 performs while using apps like Blackmagic’s DaVinci Resolve and Adobe Premiere Pro. I used sample Red and Blackmagic Raw footage that I use a lot in testing. You can grab the Red footage here and the BRaw footage here. Keep in mind you will need to download the BRaw software to edit with BRaw inside of Adobe products, which you can find here.

Performance monitor while exporting in Resolve with VFX.

For testing in Resolve and Premiere, I strung out one minute each of 4K, 6K and 8K Red media in one sequence and the 4608×2592 4K and 6K BRaw media in another. In the middle of my testing, Resolve received a major Red SDK upgrade that allows for better realtime playback of Red raw files if you have an Nvidia CUDA-based GPU.

First up is Resolve 16.1.1 and then Resolve 16.1.2. Both sequences are set to UHD (3840×2160) resolution. One sequence of each codec contains just color correction, while another of each codec contains effects and color correction. The Premiere sequence with color and effects contains basic Lumetri color correction, noise reduction (50) and a Gaussian blur with settings of 0.4. In Resolve, the only difference in the color and effects sequence is that the noise reduction is spatial and set to Enhanced, Medium and 25/25.

In Resolve, the 4K Red media would play in realtime, while the 6K (RedCode 3:1) would jump down to about 14fps to 15fps and the 8K (RedCode 7:1) would play at 10fps at full resolution with just color correction. With effects, the 4K media would play at 20fps, 6K at 3fps and 8K at 10fps. The Blackmagic Raw video would play in real time with just color correction and at around 3fps to 4fps with effects.

This is where I talk about just how loud the fans in the ZBook 15 G6 can get. When running exports and benchmarks, the fans are noticeable and a little distracting. Obviously, we are running some high-end testing with processor- and GPU-intensive tests but still, the fans were noticeable. However, the bottom of the mobile workstation was not terribly hot, unlike the MacBook Pros I’ve tested before. So my lap was not on fire.

In my export testing, I used those same sequences as before, exporting from Adobe Premiere Pro 2020. I exported UHD files using Adobe Media Encoder in different containers and codecs: H.264 (MOV), H.265 (MOV), ProResHQ, DPX, DCP and MXF OP1a (XDCAM). The MXF OP1a export was at 1920×1080p.
Here are my results:

Red (4K, 6K, 8K)
– Color Only: H.264 – 5:27; H.265 – 4:45; ProResHQ – 4:29; DPX – 3:37; DCP – 10:38; MXF OP1a – 2:31
– Color, Noise Reduction (50), Gaussian Blur 0.4: H.264 – 4:56; H.265 – 4:56; ProResHQ – 4:36; DPX – 4:02; DCP – 8:20; MXF OP1a – 2:41

Blackmagic Raw
– Color Only: H.264 – 2:05; H.265 – 2:19; ProResHQ – 2:04; DPX – 3:33; DCP – 4:05; MXF OP1a – 1:38
– Color, Noise Reduction (50), Gaussian Blur 0.4: H.264 – 1:59; H.265 – 2:22; ProResHQ – 2:07; DPX – 3:49; DCP – 3:45; MXF OP1a – 1:51

What is surprising is that when adding effects like noise reduction and a Gaussian blur in Premiere, the export times stayed similar. While using the ZBook 15 G6, I noticed my export times improved when I upgraded driver versions, so I re-did my tests with the latest Nvidia drivers to make sure I was consistent. The drivers also solved an issue in which Resolve wasn’t reading BRaw properly, so remember to always research drivers.

The Nvidia Quadro RTX 3000 really pulled its weight when editing and exporting in both Premiere and Resolve. In fact, in previous versions of Premiere, I noticed that the GPU was not really being used as well as it should have been. With the Premiere Pro 2020 upgrade it seems like Adobe really upped its GPU usage game — at some points I saw 100% GPU usage.

In Resolve, I performed similar tests, but I exported a DNxHR QuickTime file instead of ProResHQ and an IMF package instead of a DCP. For the most part, these are stock exports in the Deliver page of Resolve, except I forced Video Levels and set Debayer and Resizing to Highest Quality. Here are my results from Resolve versions 16.1.1 and 16.1.2 (16.1.2 results are in parentheses).

Red (4K, 6K, 8K)
– Color Only: H.264 – 2:17 (2:31); H.265 – 2:23 (2:37); DNxHR – 2:59 (3:06); IMF – 6:37 (6:40); DPX – 2:48 (2:45); MXF OP1a – 2:45 (2:33)
– Color, Noise Reduction (Spatial, Enhanced, Medium, 25/25), Gaussian Blur 0.4: H.264 – 5:00 (5:15); H.265 – 5:18 (5:21); DNxHR – 5:25 (5:02); IMF – 5:28 (5:11); DPX – 5:23 (5:02); MXF OP1a – 5:20 (4:54)

Blackmagic Raw
– Color Only: H.264 – 0:26 (0:25); H.265 – 0:31 (0:30); DNxHR – 0:50 (0:50); IMF – 3:51 (3:36); DPX – 0:46 (0:46); MXF OP1a – 0:23 (0:22)
– Color, Noise Reduction (Spatial, Enhanced, Medium, 25/25), Gaussian Blur 0.4: H.264 – 7:51 (7:53); H.265 – 7:45 (8:01); DNxHR – 7:53 (8:00); IMF – 8:13 (7:56); DPX – 7:54 (8:18); MXF OP1a – 7:58 (7:57)

Interesting to note: Exporting Red footage with color correction only was significantly faster from Resolve, but for Red footage with effects applied, export times were similar between Resolve and Premiere. With the CUDA Red SDK update to Resolve in 16.1.2, I thought I would see a large improvement, but I didn’t. I saw an approximate 10% increase in playback but no improvement in export times.
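To make those m:ss figures easier to compare, here is a small sketch in Python (mine, not part of the review’s testing; the numbers are simply copied from the Red color-only list above) that converts each time to seconds and prints the percentage change from Resolve 16.1.1 to 16.1.2, where a positive value means the export got slower:

def to_seconds(t: str) -> int:
    """Convert an 'm:ss' string such as '2:17' into seconds."""
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + int(seconds)

# Red (4K, 6K, 8K) color-only exports as (16.1.1, 16.1.2), from the list above.
red_color_only = {
    "H.264": ("2:17", "2:31"),
    "H.265": ("2:23", "2:37"),
    "DNxHR": ("2:59", "3:06"),
    "IMF": ("6:37", "6:40"),
    "DPX": ("2:48", "2:45"),
    "MXF OP1a": ("2:45", "2:33"),
}

for codec, (old, new) in red_color_only.items():
    change = (to_seconds(new) - to_seconds(old)) / to_seconds(old) * 100
    print(f"{codec}: {change:+.1f}% vs. 16.1.1")

The same conversion can be run against any of the other rows above.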

Puget

Puget Systems has some great benchmarking tools, so I reached out to Matt Bach, Puget Systems’ senior labs technician, about my findings. He suggested that the mobile Xeon could possibly still be the bottleneck for Resolve. In his testing he saw a larger increase in speed with AMD Threadripper 3 and Intel i9-based systems. Regardless, I am kind of going deep on realtime playback of 8K Red Raw media on a mobile workstation — what a time we are in. Nonetheless, Blackmagic Raw footage was insanely fast when exporting out of Resolve, while export time for the Blackmagic Raw footage with effects was higher than I expected. There was a consistent use of the GPU and CPU in Resolve much like in the new version of Premiere 2020, which is a trend that’s nice to see.

In addition to Premiere and Resolve testing, I ran some common benchmarks that provide a good 30,000-foot view of the HP ZBook 15 G6 when comparing it to other systems. I decided to use the Puget Systems benchmarking tools. Unfortunately, at the time of this review, the tools were only working properly with Premiere and After Effects 2019, so I ran the After Effects benchmark using the 2019 version. The ZBook 15 G6 received an overall score of 802, render score of 79, preview score of 75.2 and tracking score of 86.4. These are solid numbers that beat out some desktop systems I have tested.

Corona

To test some 3D applications, I ran the Cinebench R20, which gave a CPU score of 3243, CPU (single core) score of 470 and an M/P ratio of 6.90x. I recently began running the Gooseberry benchmark scene in Blender to get a better sense of 3D rendering performance, and it took 29:56 to export. Using the Corona benchmark, it took 2:33 to render 16 passes, 3,216,368 rays/s. Using Octane Bench the ZBook 15 G6 received a score of 139.79. In the Vray benchmark for CPU, it received 9833 Ksamples, and in the Vray GPU testing, 228 mpaths. I’m not going to lie; I really don’t know a lot about what these benchmarks are trying to tell me, but they might help you decide whether this is the mobile workstation for your work.

Cinebench

One benchmark I thought was interesting between driver updates for the Nvidia Quadro RTX 3000 was the Neat Bench from Neat Video — the noise reduction plugin for video. It measures whether your system should use the CPU, GPU or a combination thereof to run Neat Video. Initially, the best combination result was to use the CPU only (seven cores) at 11.5fps.

After updating to the latest Nvidia drivers, the best combination result was to use the CPU (seven cores) and GPU (Quadro RTX 3000) at 24.2fps. A pretty incredible jump just from a driver update. Moral of the story: Make sure you have the correct drivers always!

Summing Up
Overall, the HP ZBook 15 G6 is a powerful mobile workstation that will work well across the board. From 3D to color correction apps, the Xeon processor in combination with the Quadro RTX 3000 will get you running 4K video without a problem. With the HP DreamColor anti-glare display offering up to 600 nits of brightness and covering 100% of the DCI-P3 color space, coupled with the HDR option, you can rely on the attached display for color accuracy if you don’t have your output monitor attached. And with features like two USB Type-C ports (Thunderbolt 3 plus DP 1.4 plus USB 3.1 Gen 2), you can connect external monitors for a larger view of your work.

The HP Fast Charge will get you out of a dead-battery fiasco with the ability to go from 0% to 50% charge in 45 minutes. All of this for around $4,000 seems to be a pretty low price to pay, especially because it includes a three-year on-site warranty and because the device is certified, through HP’s independent software vendor verifications, to work seamlessly with many apps that pros use.

If you are looking for a mobile workstation upgrade, are moving from desktop to mobile or want an alternative to a MacBook Pro, you should price a system out online.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Conductor Companion app targets VFX boutiques and freelancers

Conductor Technologies has introduced Conductor Companion, a desktop app designed to simplify the use of the cloud-based rendering service. Tailored for boutique studios and freelance artists, Companion streamlines the Conductor on-ramp and rendering experience, allowing users to easily manage and download files, write commands and handle custom submissions or plug-ins from their laptops or workstations. Along with this release, Conductor has added initial support for Blender creative software.

“Conductor was originally designed to meet the needs of larger VFX studios, focusing our efforts on maximizing efficiency and scalability when many artists simultaneously leverage the platform and optimizing how Conductor hooks into those pipelines,” explains CEO Mac Moore. “As Conductor’s user base has grown, we’ve been blown away by the number of freelance artists and small studios that have come to us for help, each of which has their own unique needs. Conductor Companion is a nod to that community, bringing all the functionality and massive render resource scale of Conductor into a user-friendly app, so that artists can focus on content creation versus pipeline management. And given that focus, it was a no-brainer to add Blender support, and we are eager to serve the passionate users of that product.”

Moore reports that this app will be the foundation of Conductor’s Intelligence Hub in the near future, “acting as a gateway to more advanced functionality like Shot Analytics and Intelligent Bid Assist. These features will leverage AI and Conductor’s cloud knowledge to help owners and freelancers make more informed business decisions as it pertains to project-to-project rendering financials.”

Conductor Companion is currently in public beta. You can download the app here.

In addition to Blender, applications currently supported by Conductor include Autodesk Maya and Arnold; Foundry’s Nuke, Cara VR, Katana, Modo and Ocula; Chaos Group’s V-Ray; Pixar’s RenderMan; Isotropix’s Clarisse; Golaem; Ephere’s Ornatrix; Yeti; and Miarmy.

The Mill opens boutique studio in Berlin

Technicolor’s The Mill has officially launched in Berlin. The new boutique studio is located in the creative hub of Mitte, in the heart of the city, near many of Germany’s agencies, production companies and brands.

The Mill has been working with German clients for years. Recent projects include Mercedes’ Bertha Benz spot with director Sebastian Strasser; Netto’s The Easter Surprise, directed in-house by The Mill; and BMW’s The 8 with director Daniel Wolfe. The new studio will bring The Mill’s full range of creative services, from color to experiential and interactive, as well as visual effects and design.

The Mill Berlin crew

Creative director Greg Spencer will lead the creative team. He is a multi-award-winning creative, having won several VES, Cannes Lions and British Arrow awards. His recent projects include Carlsberg’s The Lake, PlayStation’s This Could Be You and Eve Cuddly Toy. Spencer also played a role in some of Mill Film’s major titles: he was the 2D supervisor for Les Misérables and also worked on the Lord of the Rings trilogy. His resume also includes campaigns for brands such as Nike and Samsung.

Executive producer Justin Stiebel moves from The Mill London, where he has been since early 2014, to manage client relationships and new business. Since joining the company, Stiebel has produced spots such as Audi’s Next Level and Mini’s “The Faith of a Few” campaign. He has also collaborated with directors such as Sebastian Strasser, Markus Walter and Daniel Wolfe while working on brands like Mercedes, Audi and BMW.

Sean Costelloe is managing director of The Mill London and The Mill Berlin.

Main Image Caption: (L-R) Justin Stiebel and Greg Spencer

Quantum F1000: a lower-cost NVMe storage option

Quantum is now offering the F1000, a lower-priced addition to the Quantum F-Series family of NVMe storage appliances. Using the software-defined architecture introduced with the F2000, the F1000 offers “ultra-fast streaming” performance and response times at a lower entry price. The F-Series can be used to accelerate the capture, edit and finishing of high-definition content and to accelerate VFX and CGI render speeds up to 100 times for developing augmented and virtual reality.

The Quantum F-Series was designed to handle content such as HD video used for movie, TV and sports production, advertising content or image-based workloads that require high-speed processing. Pros are using F-Series NVMe systems as part of Quantum’s StorNext scale-out file storage cluster and leveraging the StorNext data management capabilities to move data between NVMe storage pools and other storage pools. Users can take advantage of the performance boost NVMe provides for workloads that require it, while continuing to use lower-cost storage for data where performance is less critical.

Quantum F-Series NVMe appliances accelerate pro workloads and also help customers move from Fibre Channel networks to less expensive IP-based networks. User feedback has shown that pros need a lower cost of entry into NVMe technology, which is what led Quantum to develop the F1000. According to Quantum, the F1000 offers performance that is five to 10 times faster than an equivalent SAS SSD storage array at a similar price.

The F1000 is available in two capacity points: 39TB and 77TB. It offers the same connectivity options as the F2000 — 32Gb Fibre Channel or iSER/RDMA using 100Gb Ethernet — and is designed to be deployed as part of a StorNext scale-out file storage cluster.

DP Chat: The Grudge’s Zachary Galler

By Randi Altman

Being on set is like coming home for New York-based cinematographer Zachary Galler, who as a child would tag along with his father while he directed television and film projects. The younger Galler started in the industry as a lighting technician and quickly worked his way up to shooting various features and series.

His first feature as a cinematographer, The Sleepwalker, premiered at the Sundance Film Festival in 2014 and was later distributed by IFC. His second feature, She’s Lost Control, was awarded the C.I.C.A.E. Award at the Berlin International Film Festival later that year. His television credits include all eight episodes of Discovery’s scripted series Manhunt: Unabomber, Hulu’s The Act and USA’s Briarpatch (coming in February). He recently completed the Nicolas Pesce-directed thriller The Grudge, which stars John Cho and Betty Gilpin and is in theaters now.

Tell us about The Grudge. How early did you get involved in planning, and what direction were you given by the director about the look he wanted?
Nick and I worked together on a movie he directed called Piercing. That was our first collaboration, but we discovered that we had very similar ideas and working styles, and we formed a special relationship. Shortly after that project, we started talking about The Grudge, and about a year later we were shooting. We talked a lot about how this movie should feel, and how we could achieve something new and different from anything either of us had done before. We used a lot of look-books and movie references to communicate, so when it came time to shoot we had the visual language down fluently, and that allowed us to keep each other consistent in execution.

How would you describe the look?
Nick really liked the bleach-bypass look from David Fincher’s Se7en, and I thought about a mix of that and (photographer) Bill Henson. We also knew that we had to differentiate between the different storyline threads in the movie, so we had lots to figure out. One of the threads is darker and looks very yellow, while another is warmer and more classic. Another is slightly more desaturated and darker. We did keep the same bleach-bypass look throughout but adjusted our color temperature, contrast and saturation accordingly. For a horror movie like this, I really wanted to be able to control where the shadow detail turned into black, because some of our scare scenes relied on that, so we made sure to light accordingly and were able to fine-tune most of that in-camera.

How did you work with the director and colorist to achieve that look?
We worked with FotoKem colorist Kostas Theodosiou (who used Blackmagic Resolve). I was shooting a TV show during the main color pass, so I only got to check in to set looks and approve final color, but Nick and Kostas did a beautiful job. Kostas is a master of contrast control and very tastefully helped us ride that line of where there should be detail and where there shouldn’t be. He was definitely an important part of the collaboration and helped make the movie better.

Where was it shot and how long was the shoot?
We shot the movie in 35 days in Winnipeg, Canada.

How did you go about choosing the right camera and lenses for this project and why these tools?
Nick decided early on that he wanted to shoot this film anamorphic. Panavision has been an important partner for me on most of my projects, and I knew that I loved their glass. We got a range of different lenses from Panavision Toronto to help us differentiate our storylines — we shot one on T Series, one on Primo anamorphics and one on G Series anamorphics. The Alexa Mini was the camera of choice because of its low light sensitivity and more natural feel.

Now more general questions…

How did you become interested in cinematography?
My father was a director, so I would visit him on set a lot when I was growing up. I didn’t quite know what I wanted to do when I was young, but I knew it involved being on set. After dropping out of film school, I got a job working in a lighting rental warehouse and started driving trucks and delivering lights to sets in New York. I had always loved taking pictures as a kid, and as I worked more and learned more, I realized that what I wanted to do was be a DP. I was very lucky in that I found some great collaborators early in my career who both pushed me and allowed me to fail. This is the greatest job in the world.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
Artistically, I am inspired by painters, photographers and other DPs. There are so many people doing such amazing work right now. As far as technology is concerned, I’m a bit slow with adopting, as I need to hold something in my hands or see what it does before I adopt it. I have been very lucky to get to work with some great crews, and often a camera assistant, gaffer or key grip will bring something new to the table. I love that type of collaboration.

 

DP Zachary Galler (right) and director Nicolas Pesce on the set of Screen Gems’ The Grudge.

What new technology has changed the way you work?
For some reason, I was resistant to using LUTs for a long time. The Grudge was actually the first time I relied on something that wasn’t close to just plain Rec 709. I always figured that if I could get the 709 feeling good when I got into color I’d be in great shape. Now, I realize how helpful they can be, and that you can push much further. I also think that the Astera LED tubes are amazing. They allow you to do so much so fast and put light in places that would be very hard to do with other traditional lighting units.

What are some of your best practices or rules you try to follow on each job?
I try to be pretty laid back on set, and I can only do that because I’m very picky about who I hire in prep. I try and let people run their departments as much as possible and give them as much information as possible — it’s like cooking, where you try and get the best ingredients and don’t do much to them. I’ve been very lucky to have worked with some great crews over the years.

What’s your go-to gear — things you can’t live without?
I really try and keep an open mind about gear. I don’t feel romantically attached to anything, so that I can make the right choices for each project.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Directing Olly’s ‘Happy Inside Out’ campaign

How do you express how vitamins make you feel? Well, production company 1stAveMachine partnered with independent creative agency Yard NYC to develop the stylized “Happy Inside Out” campaign for Olly multivitamin gummies to show just that.

Beauty

The directing duo of Erika Zorzi and Matteo Sangalli, known as Mathery, highlighted the brand’s products and benefits by using rich textures, colors and lighting. They shot on an ARRI Alexa Mini. “Our vision was to tell a cohesive narrative, where each story of the supplements spoke the same visual language,” Mathery explains. “We created worlds where everything is possible and sometimes took each product’s concept to the extreme and other times added some romance to it.”

Each spot imagines various benefits of taking Olly products. The side-scrolling Energy, which features a green palette, shows a woman jumping and doing flips through life’s everyday challenges, including through her home to work, doing laundry and going to the movies. Beauty, with its pink color palette, features another woman “feeling beautiful” while turning the heads of a parliament of owls. Meanwhile, Stress, with its purple/blue palette, features a woman tied up in a giant ball of yarn, and as she unspools herself, the things that were tying her up spin away. In the purple-shaded Sleep, a lady lies in bed pulling off layer after layer of sleep masks until she just happily sleeps.

Sleep

The spots were shot with minimal VFX, other than a few greenscreen moments, and the team found itself making decisions on the fly, constantly managing logistics for stunt choreography, animal performances and wardrobe. Jogger Studios provided the VFX using Autodesk Flame for conform, cleanup and composite work. Adobe After Effects was used for all of the end tag animation. Cut+Run edited the campaign.

According to Mathery, “The acrobatic moves and obstacle pieces in the Energy spot were rehearsed on the same day of the shoot. We had to be mindful because the action was physically demanding on the talent. With the Beauty spot, we didn’t have time to prepare with the owls. We had no idea if they would move their heads on command or try to escape and fly around the whole time. For the Stress spot, we experimented with various costume designs and materials until we reached a look that humorously captured the concept.”

The campaign marks Mathery’s second collaboration with Yard NYC and Olly, who brought the directing team into the fold very early on, during the initial stages of the project. This familiarity gave everyone plenty of time to let the ideas breathe.

VES Awards: The Lion King and Alita earn five noms each

The Visual Effects Society (VES) has announced its nominees for the 18th Annual VES Awards, which recognize outstanding visual effects artistry and innovation in film, animation, television, commercials and video games, as well as the VFX supervisors, VFX producers and hands-on artists who bring this work to life. Alita: Battle Angel and The Lion King have five nominations each; Toy Story 4 is the top animated film contender with five nominations; and Game of Thrones and The Mandalorian tie to lead the broadcast field with six nominations each.

Nominees in 25 categories were selected by VES members via events hosted by 11 VES sections, including Australia, the Bay Area, Germany, London, Los Angeles, Montreal, New York, New Zealand, Toronto, Vancouver and Washington.

The VES Awards will be held on January 29 at the Beverly Hilton Hotel. The VES Lifetime Achievement Award will be presented to Academy, DGA and Emmy Award-winning director-producer-screenwriter Martin Scorsese. The VES Visionary Award will be presented to director-producer-screenwriter Roland Emmerich. And the VES Award for Creative Excellence will be given to visual effects supervisor Sheena Duggal. Award-winning actor-comedian-author Patton Oswalt will once again host the event.

The nominees for the 18th Annual VES Awards in 25 categories are:

 

Outstanding Visual Effects in a Photoreal Feature

 

ALITA: BATTLE ANGEL

Richard Hollander

Kevin Sherwood

Eric Saindon

Richard Baneham

Bob Trevino

 

AVENGERS: ENDGAME

Daniel DeLeeuw

Jen Underdahl

Russell Earl

Matt Aitken

Daniel Sudick

 

GEMINI MAN

Bill Westenhofer

Karen Murphy-Mundell

Guy Williams

Sheldon Stopsack

Mark Hawker

 

STAR WARS: THE RISE OF SKYWALKER

Roger Guyett

Stacy Bissell

Patrick Tubach

Neal Scanlan

Dominic Tuohy

 

THE LION KING

Robert Legato

Tom Peitzman

Adam Valdez

Andrew R. Jones

 

Outstanding Supporting Visual Effects in a Photoreal Feature

 

1917

Guillaume Rocheron

Sona Pak

Greg Butler

Vijay Selvam

Dominic Tuohy

 

FORD V FERRARI

Olivier Dumont

Kathy Siegel

Dave Morley

Malte Sarnes

Mark Byers

 

JOKER

Edwin Rivera

Brice Parker

Mathew Giampa

Bryan Godwin

Jeff Brink

 

THE AERONAUTS

Louis Morin

Annie Godin

Christian Kaestner

Ara Khanikian

Mike Dawson

 

THE IRISHMAN

Pablo Helman

Mitch Ferm

Jill Brooks

Leandro Estebecorena

Jeff Brink

 

Outstanding Visual Effects in an Animated Feature

 

FROZEN 2

Steve Goldberg

Peter Del Vecho

Mark Hammel

Michael Giaimo

 

KLAUS

Sergio Pablos

Matthew Teevan

Marcin Jakubowski

Szymon Biernacki

 

MISSING LINK

Brad Schiff

Travis Knight

Steve Emerson

Benoit Dubuc

 

THE LEGO MOVIE 2

David Burgess

Tim Smith

Mark Theriault

John Rix

 

TOY STORY 4

Josh Cooley

Mark Nielsen

Bob Moyer

Gary Bruins

 

Outstanding Visual Effects in a Photoreal Episode

 

GAME OF THRONES; The Bells

Joe Bauer

Steve Kullback

Ted Rae

Mohsen Mousavi

Sam Conway

 

HIS DARK MATERIALS; The Fight to the Death

Russell Dodgson

James Whitlam

Shawn Hillier

Robert Harrington

 

LADY AND THE TRAMP

Robert Weaver

Christopher Raimo

Arslan Elver

Michael Cozens

Bruno Van Zeebroeck

 

LOST IN SPACE – Episode: Ninety-Seven

Jabbar Raisani

Terron Pratt

Niklas Jacobson

Juri Stanossek

Paul Benjamin

 

STRANGER THINGS – Chapter Six: E Pluribus Unum

Paul Graff

Tom Ford

Michael Maher Jr.

Martin Pelletier

Andy Sowers

 

THE MANDALORIAN; The Child

Richard Bluff

Abbigail Keller

Jason Porter

Hayden Jones

Roy Cancinon

 

Outstanding Supporting Visual Effects in a Photoreal Episode

 

CHERNOBYL; 1:23:45

Max Dennison

Lindsay McFarlane

Clare Cheetham

Paul Jones

Claudius Christian Rauch

 

LIVING WITH YOURSELF; Nice Knowing You

Jay Worth

Jacqueline VandenBussche

Chris Wright

Tristan Zerafa

 

SEE; Godflame

Adrian de Wet

Eve Fizzinoglia

Matthew Welford

Pedro Sabrosa

Tom Blacklock

 

THE CROWN; Aberfan

Ben Turner

Reece Ewing

David Fleet

Jonathan Wood

 

VIKINGS; What Happens in the Cave

Dominic Remane

Mike Borrett

Ovidiu Cinazan

Tom Morrison

Paul Byrne

 

Outstanding Visual Effects in a Real-Time Project

 

Call of Duty Modern Warfare

Charles Chabert

Chris Parise

Attila Zalanyi

Patrick Hagar

 

Control

Janne Pulkkinen

Elmeri Raitanen

Matti Hämäläinen

James Tottman

 

Gears 5

Aryan Hanbeck

Laura Kippax

Greg Mitchell

Stu Maxwell

 

Myth: A Frozen Tale

Jeff Gipson

Nicholas Russell

Brittney Lee

Jose Luis Gomez Diaz

 

Vader Immortal: Episode I

Ben Snow

Mike Doran

Aaron McBride

Steve Henricks

 

Outstanding Visual Effects in a Commercial

 

Anthem Conviction

Viktor Muller

Lenka Likarova

Chris Harvey

Petr Marek

 

BMW Legend

Michael Gregory

Christian Downes

Tim Kafka

Toya Drechsler

 

Hennessy: The Seven Worlds

Carsten Keller

Selcuk Ergen

Kiril Mirkov

William Laban

 

PlayStation: Feel The Power of Pro

Sam Driscoll

Clare Melia

Gary Driver

Stefan Susemihl

 

Purdey’s: Hummingbird

Jules Janaud

Emma Cook

Matthew Thomas

Philip Child

 

Outstanding Visual Effects in a Special Venue Project

 

Avengers: Damage Control

Michael Koperwas

Shereif Fattouh

Ian Bowie

Kishore Vijay

Curtis Hickman

 

Jurassic World: The Ride

Hayden Landis

Friend Wells

Heath Kraynak

Ellen Coss

 

Millennium Falcon: Smugglers Run

Asa Kalama

Rob Huebner

Khatsho Orfali

Susan Greenhow

 

Star Wars: Rise of the Resistance

Jason Bayever

Patrick Kearney

Carol Norton

Bill George

 

Universal Sphere

James Healy

Morgan MacCuish

Ben West

Charlie Bayliss

 

Outstanding Animated Character in a Photoreal Feature

 

ALITA: BATTLE ANGEL; Alita

Michael Cozens

Mark Haenga

Olivier Lesaint

Dejan Momcilovic

 

AVENGERS: ENDGAME; Smart Hulk

Kevin Martel

Ebrahim Jahromi

Sven Jensen

Robert Allman

 

GEMINI MAN; Junior

Paul Story

Stuart Adcock

Emiliano Padovani

Marco Revelant

 

THE LION KING; Scar

Gabriel Arnold

James Hood

Julia Friedl

Daniel Fortheringham

 

Outstanding Animated Character in an Animated Feature

 

FROZEN 2; The Water Nøkk

Svetla Radivoeva

Marc Bryant

Richard E. Lehmann

Cameron Black

 

KLAUS; Jesper

Yoshimishi Tamura

Alfredo Cassano

Maxime Delalande

Jason Schwartzman

 

MISSING LINK; Susan

Rachelle Lambden

Brenda Baumgarten

Morgan Hay

Benoit Dubuc

 

TOY STORY 4; Bo Peep

Radford Hurn

Tanja Krampfert

George Nguyen

Becki Rocha Tower

 

Outstanding Animated Character in an Episode or Real-Time Project

 

LADY AND THE TRAMP; Tramp

Thiago Martins

Arslan Elver

Stanislas Paillereau

Martine Chartrand

 

STRANGER THINGS 3; Tom/Bruce Monster

Joseph Dubé-Arsenault

Antoine Barthod

Frederick Gagnon

Xavier Lafarge

 

THE MANDALORIAN; The Child; Mudhorn

Terry Bannon

Rudy Massar

Hugo Leygnac

 

THE UMBRELLA ACADEMY; Pilot; Pogo

Aidan Martin

Craig Young

Olivier Beierlein

Laurent Herveic

 

Outstanding Animated Character in a Commercial

 

Apex Legends; Meltdown; Mirage

Chris Bayol

John Fielding

Derrick Sesson

Nole Murphy

 

Churchill; Churchie

Martino Madeddu

Philippe Moine

Clement Granjon

Jon Wood

 

Cyberpunk 2077; Dex

Jonas Ekman

Jonas Skoog

Marek Madej

Grzegorz Chojnacki

 

John Lewis; Excitable Edgar; Edgar

Tim van Hussen

Diarmid Harrison-Murray

Amir Bazzazi

Michael Diprose

 

 

Outstanding Created Environment in a Photoreal Feature

 

ALADDIN; Agrabah

Daniel Schmid

Falk Boje

Stanislaw Marek

Kevin George

 

ALITA: BATTLE ANGEL; Iron City

John Stevenson-Galvin

Ryan Arcus

Mathias Larserud

Mark Tait

 

MOTHERLESS BROOKLYN; Penn Station

John Bair

Vance Miller

Sebastian Romero

Steve Sullivan

 

STAR WARS: THE RISE OF SKYWALKER; Pasaana Desert

Daniele Bigi

Steve Hardy

John Seru

Steven Denyer

 

THE LION KING; The Pridelands

Marco Rolandi

Luca Bonatti

Jules Bodenstein

Filippo Preti

 

 

Outstanding Created Environment in an Animated Feature

 

FROZEN 2; Giants’ Gorge

Samy Segura

Jay V. Jackson

Justin Cram

Scott Townsend

 

HOW TO TRAIN YOUR DRAGON: THE HIDDEN WORLD; The Hidden World

Chris Grun

Ronnie Cleland

Ariel Chisholm

Philippe Brochu

 

MISSING LINK; Passage to India Jungle

Oliver Jones

Phil Brotherton

Nick Mariana

Ralph Procida

 

TOY STORY 4; Antiques Mall

Hosuk Chang

Andrew Finley

Alison Leaf

Philip Shoebottom

 

 

Outstanding Created Environment in an Episode, Commercial, or Real-Time Project

 

GAME OF THRONES; The Iron Throne; Red Keep Plaza

Carlos Patrick DeLeon

Alonso Bocanegra Martinez

Marcela Silva

Benjamin Ross

 

LOST IN SPACE; Precipice; The Trench

Philip Engström

Benjamin Bernon

Martin Bergquist

Xuan Prada

 

THE DARK CRYSTAL: AGE OF RESISTANCE; The Endless Forest

Sulé Bryan

Charles Chorein

Christian Waite

Martyn Hawkins

 

THE MANDALORIAN; Nevarro Town

Alex Murtaza

Yanick Gaudreau

Marco Tremblay

Maryse Bouchard

 

Outstanding Virtual Cinematography in a CG Project

 

ALITA: BATTLE ANGEL

Emile Ghorayeb

Simon Jung

Nick Epstein

Mike Perry

 

THE LION KING

Robert Legato

Caleb Deschanel

Ben Grossmann

AJ Sciutto

 

THE MANDALORIAN; The Prisoner; The Roost

Richard Bluff

Jason Porter

Landis Fields IV

Baz Idoine

 

 

TOY STORY 4

Jean-Claude Kalache

Patrick Lin

 

Outstanding Model in a Photoreal or Animated Project

 

LOST IN SPACE; The Resolute

Xuan Prada

Jason Martin

Jonathan Vårdstedt

Eric Andersson

 

MISSING LINK; The Manchuria

Todd Alan Harvey

Dan Casey

Katy Hughes

 

THE MAN IN THE HIGH CASTLE; Rocket Train

Neil Taylor

Casi Blume

Ben McDougal

Chris Kuhn

 

THE MANDALORIAN; The Sin; The Razorcrest

Doug Chiang

Jay Machado

John Goodson

Landis Fields IV

 

Outstanding Effects Simulations in a Photoreal Feature

 

DUMBO; Bubble Elephants

Sam Hancock

Victor Glushchenko

Andrew Savchenko

Arthur Moody

 

SPIDER-MAN: FAR FROM HOME; Molten Man

Adam Gailey

Jacob Santamaria

Jacob Clark

Stephanie Molk

 

STAR WARS: THE RISE OF SKYWALKER

Don Wong

Thibault Gauriau

Goncalo Cababca

Francois-Maxence Desplanques

 

THE LION KING

David Schneider

Samantha Hiscock

Andy Feery

Kostas Strevlos

 

Outstanding Effects Simulations in an Animated Feature

 

ABOMINABLE

Alex Timchenko

Domin Lee

Michael Losure

Eric Warren

 

FROZEN 2

Erin V. Ramos

Scott Townsend

Thomas Wickes

Rattanin Sirinaruemarn

 

HOW TO TRAIN YOUR DRAGON: THE HIDDEN WORLD; Water and Waterfalls

Derek Cheung

Baptiste Van Opstal

Youxi Woo

Jason Mayer

 

TOY STORY 4

Alexis Angelidis

Amit Baadkar

Lyon Liew

Michael Lorenzen

 

Outstanding Effects Simulations in an Episode, Commercial, or Real-Time Project

 

GAME OF THRONES; The Bells

Marcel Kern

Paul Fuller

Ryo Sakaguchi

Thomas Hartmann

 

Hennessy: The Seven Worlds

Selcuk Ergen

Radu Ciubotariu

Andreu Lucio

Vincent Ullmann

 

LOST IN SPACE; Precipice; Water Planet

Juri Bryan

Hugo Medda

Kristian Olsson

John Perrigo

 

STRANGER THINGS 3; Melting Tom/Bruce

Nathan Arbuckle

Christian Gaumond

James Dong

Aleksandr Starkov

 

THE MANDALORIAN; The Child; Mudhorn

Xavier Martin Ramirez

Ian Baxter

Fabio Siino

Andrea Rosa

 

Outstanding Compositing in a Feature

 

ALITA: BATTLE ANGEL

Adam Bradley

Carlo Scaduto

Hirofumi Takeda

Ben Roberts

 

AVENGERS: ENDGAME

Tim Walker

Blake Winder

Tobias Wiesner

Joerg Bruemmer

 

CAPTAIN MARVEL; Young Nick Fury

Trent Claus

David Moreno Hernandez

Jeremiah Sweeney

Yuki Uehara

 

STAR WARS: THE RISE OF SKYWALKER

Jeff Sutherland

John Galloway

Sam Bassett

Charles Lai

 

THE IRISHMAN

Nelson Sepulveda

Vincent Papaix

Benjamin O’Brien

Christopher Doerhoff

 

 

Outstanding Compositing in an Episode

 

GAME OF THRONES; The Bells

Sean Heuston

Scott Joseph

James Elster

Corinne Teo

 

GAME OF THRONES; The Long Night; Dragon Ground Battle

Mark Richardson

Darren Christie

Nathan Abbott

Owen Longstaff

 

STRANGER THINGS 3; Starcourt Mall Battle

Simon Lehembre

Andrew Kowbell

Karim El-Masry

Miklos Mesterhazy

 

WATCHMEN; Pilot; Looking Glass

Nathaniel Larouche

Iyi Tubi

Perunika Yorgova

Mitchell Beaton

 

Outstanding Compositing in a Commercial

 

BMW Legend

Toya Drechsler

Vivek Tekale

Guillaume Weiss

Alexander Kulikov

 

Feeding America; I Am Hunger in America

Dan Giraldo

Marcelo Pasqualino

Alexander Koester

 

Hennessy; The Seven Worlds

Rod Norman

Guillaume Weiss

Alexander Kulikov

Alessandro Granella

 

PlayStation: Feel the Power of Pro

Gary Driver

Stefan Susemihl

Greg Spencer

Theajo Dharan

 

Outstanding Special (Practical) Effects in a Photoreal or Animated Project

 

ALADDIN; Magic Carpet

Mark Holt

Jay Mallet

Will Wyatt

Dickon Mitchell

 

GAME OF THRONES; The Bells

Sam Conway

Terry Palmer

Laurence Harvey

Alastair Vardy

 

TERMINATOR: DARK FATE

Neil Corbould

David Brighton

Ray Ferguson

Keith Dawson

 

THE DARK CRYSTAL: THE AGE OF RESISTANCE; She Knows All the Secrets

Sean Mathiesen

Jon Savage

Toby Froud

Phil Harvey

 

Outstanding Visual Effects in a Student Project

 

DOWNFALL

Matias Heker

Stephen Moroz

Bradley Cocksedge

 

LOVE AND FIFTY MEGATONS

Denis Krez

Josephine Roß

Paulo Scatena

Lukas Löffler

 

OEIL POUR OEIL

Alan Guimont

Thomas Boileau

Malcom Hunt

Robin Courtoise

 

THE BEAUTY

Marc Angele

Aleksandra Todorovic

Pascal Schelbli

Noel Winzen

 

 

Plugable intros three new connectivity products at CES

Plugable, a developer of USB devices, introduced three new products at CES 2020: the TBT3-UDZ Thunderbolt 3 and USB-C docking station with 100W power, the 2.5Gbps USB Ethernet adapter, and the USB-C DisplayPort 1.4 MST to Dual HDMI 2.0 Adapter.

The Plugable TBT3-UDZ allows users to connect up to two additional 4K displays using either HDMI or DisplayPort without the need of external adapters. The Plugable TBT3-UDZ will be available for $299 in the spring of 2020.

The TBT3-UDZ uses the latest Intel Titan Ridge chipset, making it compatible with both Thunderbolt 3 and USB-C laptops. Its 100W power delivery can drive even the new 16-inch MacBook Pro. The docking station features 14 ports, including video, USB, SD/microSD, Ethernet and audio.

The Plugable USB-C DisplayPort 1.4 MST to Dual HDMI 2.0 adapter lets users connect two 4K 60Hz HDMI displays at full native GPU performance to USB-C Windows systems like the new Surface Laptop 3 and Surface Pro 7. The Plugable USB-C DisplayPort 1.4 MST to dual HDMI 2.0 adapter will be available for $39.95 in Q2 2020.

The adapter can transmit at speeds up to 25.9Gbps over USB-C, surpassing HDMI 2.0’s 18Gbps, which makes higher-resolution dual displays over a single USB-C cable possible. Combining DisplayPort 1.4 with MST lets users maximize bandwidth and convert the signal to HDMI 2.0 outputs for broad compatibility with advanced 4K displays. The result is dual HDMI 2.0 displays with HDR from a single DisplayPort 1.4-capable USB-C port, at the full native performance of the system’s graphics processor.
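As a rough sanity check on those figures, the short Python sketch below compares the uncompressed payload of two 4K60 streams against the usable bandwidth of DisplayPort 1.4 and HDMI 2.0 links. The constants are approximate and the math ignores blanking intervals and compression, so treat it as an illustration rather than a spec:

```python
# Rough illustration of why one DisplayPort 1.4 link over USB-C can carry two
# 4K60 streams while a single HDMI 2.0 link cannot. Uncompressed 8-bit RGB,
# blanking intervals ignored, so the numbers are ballpark only.

def stream_gbps(width, height, fps, bits_per_pixel=24):
    """Approximate pixel payload of one display stream, in Gbit/s."""
    return width * height * fps * bits_per_pixel / 1e9

one_4k60 = stream_gbps(3840, 2160, 60)   # ~11.9 Gbps per display
two_4k60 = 2 * one_4k60                  # ~23.9 Gbps for both displays

DP_1_4_PAYLOAD = 25.92    # Gbps usable on a DisplayPort 1.4 HBR3 link
HDMI_2_0_PAYLOAD = 14.4   # Gbps usable on a single HDMI 2.0 link

print(f"Two 4K60 streams need ~{two_4k60:.1f} Gbps")
print("Fits one DP 1.4 link:", two_4k60 < DP_1_4_PAYLOAD)      # True
print("Fits one HDMI 2.0 link:", two_4k60 < HDMI_2_0_PAYLOAD)  # False
```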

The Plugable USB-C adapter targets creative professionals who need to expand their setups. Thanks to this specific combination of DisplayPort and HDMI technologies, users can extend their laptops or phones to two 4K displays at high resolution and refresh rates.

The Plugable 2.5Gb USB Ethernet adapter helps upgrade laptops and desktops with faster wired connection speeds. The adapter is a 2.5Gb USB solution with an attached USB-C to USB-A adapter, making it possible to upgrade any USB SuperSpeed (USB 3.0/3.1) laptop or desktop. The Plugable 2.5Gb USB Ethernet adapter will be available for $49.99 in Q2 2020.

The 2.5Gb Ethernet adapter more than doubles Gigabit performance over existing Cat 5e or better cabling, provided the supporting hardware (switches and routers) also supports 2.5Gb speeds. The adapter is backward compatible with earlier networking standards such as Gigabit (10/100/1000) networks, supports auto-negotiation and works on both full- and half-duplex networks.

Main Image: Plugable TBT3-UDZ

Behind the Title: Film Editor Edward Line

By Randi Altman

This British editor got his start at Final Cut in London, honing his craft and developing his voice before joining Cartel in Santa Monica.

NAME: Edward Line

COMPANY: Cartel

WHAT KIND OF COMPANY IS CARTEL?
Cartel is an editorial and post company based in Santa Monica. We predominantly service the advertising industry but also accommodate long-form projects and other creative content. I joined Cartel as one of the founding editors in 2015.

CAN YOU GIVE US SOME MORE DETAIL ABOUT YOUR JOB?
I assemble the raw material from a film shoot into a sequence that tells the story and communicates the idea of a script. Sometimes I am involved before the shoot and cut together storyboard frames to help the director decide what to shoot. Occasionally, I’ll edit on location if there is a technical element that requires immediate approval for the shoot to move forward.

Edward Line working on Media Composer

During the edit, I work closely with the directors and creative teams to realize their vision of the script or concept and bring their ideas to life. In addition to picture editing, I incorporate sound design, music, visual effects and graphics into the edit. It’s a collaboration between many departments and an opportunity to validate existing ideas and try new ones.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THE FILM EDITOR TITLE?
A big part of my job involves collaborating with others, working with notes and dealing with tricky situations in the cutting room. Part of being a good editor is having the ability to manage people and ideas while not compromising the integrity and craft of the edit. It’s a skill that I’m constantly refining.

WHAT’S YOUR FAVORITE PART OF THE JOB?
I love being instrumental in bringing creative visions together and seeing them realized on screen, while being able to express my individual style and craft.

WHAT’S YOUR LEAST FAVORITE?
Tight deadlines. Filming with digital formats has allowed productions to shoot more and specify more deliverables. However, providing the editor proportional time to process everything is not always a consideration and can add pressure to the process.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
I am a morning person so I tend to be most productive when I have fresh eyes. I’ve often executed a scene in the first few hours of a day and then spent the rest of the day (and night) fine-tuning it.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I have always had a profound appreciation for design and architecture, and in an alternate universe, I could see myself working in that world.

WHY DID YOU CHOOSE THIS PROFESSION?
I’ve always had ambitions to work in filmmaking and initially worked in TV production after I graduated college. After a few years, I became curious about working in post and found an entry-level job at the renowned editorial company Final Cut in London. I was inspired by the work Final Cut was doing, and although I’d never edited before, I was determined to give editing a chance.

CoverGirl

I spent my weekends and evenings at the office, teaching myself how to edit on Avid Media Composer and learning editing techniques with found footage and music. It was during this experimental process that I fell in love with editing, and I never looked back.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
In the past year I have edited commercials for CoverGirl, Sephora, Bulgari, Carl’s Jr. and Smartcar. I have also cut a short film called Dad Was, which will be submitted to festivals in 2020.

HOW HAVE YOU DEVELOPED NEW SKILLS WHEN CUTTING FOR A SPECIFIC GENRE OR FORMAT?
Cutting music videos allowed me to hone my skills to edit musical performance while telling visual stories efficiently. I learned how to create rhythm and pace through editing and how to engage an audience when there is no obvious narrative. The format provided me with a fertile place to develop my individual editing style and perfect my storytelling skills.

When I started editing commercials, I learned to be more disciplined in visual storytelling, as most commercials are rarely longer than 60 seconds. I learned how to identify nuances in performance and the importance of story beats, specifically when editing comedy. I’ve also worked on numerous films with VFX, animation and puppetry. These films have allowed me to learn about the potential for these visual elements while gaining an understanding of the workflow and process.

More recently, I have been enjoying cutting dialogue in short films. Unlike commercials, this format allows more time for story and character to develop. So when choosing performances, I am more conscious of the emotional signals they send to the audience and overarching narrative themes.

Sephora

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It’s tough to narrow this down to one project…

Recently, I worked on a commercial for the beauty retailer Sephora that promoted its commitment to diversity and inclusivity. The film Identify As We is a celebration of the non-binary community and features a predominantly transgender cast. The film champions being different and self-expression while challenging traditional perceptions of beauty. I worked tirelessly with the director and creative team to make sure we treated the cast and footage with respect while honoring the message of the campaign.

I’m also particularly proud of a short film that I edited called Wale. The film was selected for over 30 film festivals across the globe and won several awards. The culmination of the film’s success was receiving a BAFTA nomination and being shortlisted for the 91st Academy Awards for Best Live Action Short Film.

WHAT DO YOU USE TO EDIT?
I work on Avid Media Composer, but I have recently started to flirt with Adobe Premiere. I think it’s good to be adaptable, and I’d hate to restrict my ability to work on a project because of software.

Wale

ARE YOU OFTEN ASKED TO DO MORE THAN EDIT? IF SO, WHAT ELSE ARE YOU ASKED TO DO?
Yes, I usually incorporate other elements such as sound design, music and visual effects into my edits as they can be instrumental to the storytelling or communication of an idea. It’s often useful for the creative team and other film departments to see how these elements contribute to the final film, and they can sometimes inform decisions in the edit.

For example, sound can play a major part in accenting a moment or providing a transition to another scene, so I often spend time placing sound effects and sourcing music during the edit process. This helps me visualize the scene in a broader context and provides new perspective if I’ve become overfamiliar with the footage.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
No surprises, but my smartphone! Apart from the obvious functions, it’s a great place to review edits and source music when I’m on the move. I’ve also recently purchased a Bluetooth keyboard and Wacom tablet, which make for a tidy work area.

I’m also enjoying using my “smart thermostat” at home which learns my behavior and seems to know when I’m feeling too hot or cold.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
Once I have left the edit bay, I decompress by listening to music on the way home. Once home, I take great pleasure from cooking for myself, friends and family.

Maryann Brandon’s path, and editing Star Wars: The Rise of Skywalker

By Amy Leland

In the interest of full disclosure, I have been a fan of both the Star Wars world and the work of J.J. Abrams for a very long time. I saw Star Wars: Episode IV – A New Hope  in the theaters with my big brother when I was five years old, and we were hooked. I don’t remember a time in my life without Star Wars. And I have been a fan of all of Abrams’ work, starting with Felicity. Periodically, I go back and rewatch Felicity, Alias and Lost. I was, in fact, in the middle of Season 2 of Alias and had already purchased my ticket for The Rise of Skywalker when I was assigned this interview.

As a female editor, I have looked up to Maryann Brandon, ACE, and Mary Jo Markey, ACE — longtime Abrams collaborators — for years. A chance to speak with Brandon was more than a little exciting. After getting the fangirl out of my system at the start of the interview, we had a wonderful conversation about her incredible career and this latest Star Wars offering.

After NYU film school and years working in New York City’s indie film world, Brandon has not only been an important part of J.J. Abrams’ universe — serving as a primary editor on Alias, and then on Mission: Impossible III, Super 8 and two films each in the Star Trek and Star Wars franchises — but has also edited The Jane Austen Book Club, How to Train Your Dragon and Venom, among others.

Maryann Brandon

Let’s dig a bit deeper with Brandon…

How did your path to editing begin?
I started in college, but I wasn’t really editing. I was just a member of the film society. I was recruited by the NYU Graduate Film program in 1981 because they wanted women in the program. And I thought, it’s that or working on Wall Street, and I wasn’t really that great with the money or numbers. I chose film school.

I had no idea what it was going to be like because I don’t come from a film background or a film family. I just grew up loving films. I ended up spending three years just running around Manhattan, making movies with everyone, and everyone did every job. Then, when I got out of school, I had to finish my thesis film, and there was no one to edit it for me. So I ended up editing it myself. I started to meet people in the business because New York was very close. I got offered a paid position in editing, and I stayed.

I met and worked for some really incredible people along the way. I worked as a second assistant on the Francis Ford Coppola film The Cotton Club. I went from that to working as a first assistant on Richard Attenborough’s version of A Chorus Line. I was sent to London and got swept up in the editing part of it. I like telling stories. It became the thing I did. And that’s how it happened.

Who inspired you in those early days?
I was highly influenced by Dede Allen. She was this matriarch of New York at that time, and I was so blown away by her and her personality. I mean, her work spoke for itself, but she was also this incredible person. I think it’s my nature anyway, but I learned from her early on an approach of kindness and caring. I think that’s part of why I stayed in the cutting room.

On set, things tend to become quite fraught sometimes when you’re trying to make something happen, but the cutting room is this calm place of reality, and you could figure stuff out. She was very influential to me, and she was such a kind, caring person. She cared about everyone in the cutting room, and she took time to talk to everyone.

There was also John Bloom, who was the editor on A Chorus Line. We became very close, and he always used to call me over to see what he was doing. I learned tons from him. In those days, we cut on film, so it was running through your fingers.

The truth is everyone I meet influences me a bit. I am fascinated by each person’s approach and why they see things the way they do.

While your resume is eclectic, you’ve worked on many sci-fi and action films. Was that something you were aiming for, or did it happen by chance?
I was lucky enough to meet J.J. Abrams, and I was lucky enough to get on Alias, which was not something I thought I’d want to do. Then I did it because it seemed to suit me at the time. It was a bit of faith and a bit of, “Oh, that makes sense for you, because you grew up loving Twilight Zone and Star Trek.”

Of course, I’d love to do more drama. I did The Jane Austen Book Club and other films like that. One does tend to get sort of suddenly identified as, now I’m the expert on sci-fi and visual effects. Also, I think because there aren’t a lot of women who do that, it’s probably something people notice. But I’d love to do a good comedy. I’d love to do something like Jumanji, which I think is hilarious.

How did this long and wonderful collaboration with J.J. Abrams get started?
Well, my kids were getting older. It was getting harder and harder for me to go on location with the nanny, the dog, the nanny’s kids, my kids, set up a third grade class and figure out how to do it all. A friend of mine who was a producer on Felicity had originally tried to get me to work on that show. She said, “You’ll love J.J. You’ll love (series creator) Matt Reeves. Come and just meet us.” I just thought television is such hard work.

Then he was starting this new show, Alias. My friend said, “You’re going to love it. Just meet him.” And I did. Honestly, I went to an interview with him, and I spent an hour basically laughing at every joke he told me. I thought, “This guy’s never going to hire me.” But he said, “Okay, I’ll see you tomorrow.” That’s how it started.

What was that like?
Alias was so much fun. I didn’t work on Felicity, which was more of a straightforward drama about a college girl growing up. Alias was this crazy, complicated, action-filled show, but also a girl trying to grow up. It was all of those things. It was classic J.J. It was a challenge, and it was really fun because we all discovered it together. There were three other female editors who are amazing — Mary Jo Markey, Kristin Windell, and Virginia Katz — and there was J.J. and Ken Olin, who was a producer in residence there and director. We just found the show together, and that was really fun.

How has your collaboration with J.J. changed over time?
It’s changed in terms of the scope of a project and what we have to do. And, obviously, the level of conflict and communication is pretty easy because we’ve known each other for so long. There’s not a lot of barriers like, “Hey, I’m trying to get to know you. What do I…?” We just jump right in. Over the years, it’s changed a bit.

On The Rise of Skywalker, I cut this film with a different co-editor. Mary Jo [Markey, Brandon’s longtime co-editor] was doing something else at the time, so I ended up working with Stefan Grube. The way I had worked with Mary Jo was we would divide up the film. She’d do her thing and I’d do mine. But because these films are so massive, I prefer not to divide it up, but instead have both of us work on whatever needs working on at the time to get it done. I proposed this to J.J., and it worked out great. Everything got cut immediately and we got together periodically to ask him what he thought.

Another thing that changed was, because we needed to turn over our visual effects really quickly, I proposed that I cut on the set, on location, when they were shooting. At first J.J. was like, “We’ve never done this before.” I said, “It’s the only way I’m going to get your eyes on sequences,” because by the time the 12-hour day is over, everyone’s exhausted.

It was great and worked out well. I had this little mobile unit, and the joke was it was always within 10 feet of wherever J.J. was. It was also great because I felt like I was part of the crew, and they felt like they could talk to me. I had the DP asking me questions. I had full access to the visual effects supervisor. We worked out shots on the set. Given the fact that you could see what we already had, it really was a game-changer.

What are some of the challenges of working on films that are heavy on action, especially with the Star Wars and Star Trek films and all the effects and CGI?
There’s a scene where they arrive on Exegol, and they’re fighting with each other and more ships are arriving. All of that was in my imagination. It was me going, “Okay, that’ll be on the screen for this amount of time.” I was making up so much of it and using the performances and the story as a guide. I worked really closely with the visual effects people, describing what I thought was going to happen. They would then explain that what I thought was going to happen was way too much money to do.

Luckily I was on the set, so I could work it out with J.J. as we went. Sometimes it’s better for me just to build something that I imagine and work off of that, but it’s hard. It’s like having a blank page and then knowing there’s this one element, and then figuring out what the next one will be.

There are people who are incredibly devoted to the worlds of Star Trek and Star Wars and have very strong feelings about those worlds. Does that add more pressure to the process?
I’m a big fan of Star Trek and Star Wars, as is J.J. I grew up with Star Trek, and it’s very different because Star Trek was essentially a week-to-week serial that featured an adventure, and Star Wars is this world where they’re in one major war the whole time.

Sometimes I would go off on a tangent, and J.J. and my co-editor Stefan would be like, “That’s not in the lore,” and I’d have to pull it back and remember that we do serve a fan base that is loyal to it. When I edit anything, I really try to abandon any kind of preconceived thing I have so I can discover things.

I think there’s a lot of pressure to answer to the first two movies, because this is the third, and you can’t just ignore a story that’s been set up, right? We needed to stay within the boundaries of that world. So yeah, there’s a lot of pressure to do that, for sure. One of the things that Chris Terrio and J.J., as the writers, felt very strongly about was having it be Leia’s final story. That was a labor of love for sure. All of that was like a love letter to her.

I don’t know how much of that had been decided before Carrie Fisher (Leia) died. It was my understanding that you had to reconstruct based on things she shot for the other films.
She died before this film was even written, so all of the footage you see is from Episode 7. It’s all been repurposed, and scenes were written around it. Not just for the sake of writing around the footage, but they created scenes that actually work in the context of the film. A lot of what works is due to Daisy Ridley and the other actors who were in the scenes with her. I mean, they really brought her to life and really sold it. I have to say they were incredible.

With two editors co-editing on set during production, you must have needed an extensive staff of assistant editors. How do you work with assistant editors on something of this scale?
I’ve worked with an assistant editor named Jane Tones on the last couple of films. She is amazing. She was the one who figured out how to make the mobile unit work on set. She’s incredibly gifted, both technologically and story-wise. She was instrumental in organizing everything to do with the edit and getting us around. Stefan’s assistant was Warren Paeff, and he is very experienced. We also had a sound person we carried with us and a couple of other assistants. I had another assistant, Ben Cox, who was such a Star Wars fan. When I said, “I’m happy to hire you, but I only have a second assistant position,” he was like, “I’ll take it!”

What advice do you have for someone starting out or who would like to build the kind of career you’ve made?
I would say, try to get a PA job or a job in the cutting room where you really enjoy the people, and pay attention. If you have ideas, don’t be shy but figure out how to express your ideas. I think people in the cutting room are always looking for anyone with an opinion or reaction because you need to step back from it. It’s a love of film, a love of storytelling and a lot of luck. I work really hard, but I also had a lot of good fortune meeting the people I did.


Amy Leland is a film director and editor. Her short film, Echoes, is now available on Amazon Video. She also has a feature documentary in post, a feature screenplay in development, and a new doc in pre-production. She is an editor for CBS Sports Network and recently edited the feature “Sundown.” You can follow Amy on social media on Twitter at @amy-leland and Instagram at @la_directora.

CVLT adds Joe Simons as lead editor

Bi-coastal production studio CVLT, which offers full-service production and post, has added Joe Simons as lead editor. He will be tasked with growing CVLT’s editorial department. He edits on Adobe Premiere and will be based in the New York studio.

Simons joins CVLT after three years at The Mill, where he edited the “It’s What Connects Us” campaign for HBO, the “Top Artist of the Year” campaign for Spotify and several major campaigns for Ralph Lauren, among many others. Prior to The Mill, he launched his career at PS260 before spending four years at editing house Cut+Run.

Simons’ addition comes at a time when CVLT is growing into a full concept-to-completion creative studio, launching campaigns for top luxury and fashion brands, including Lexus, Peloton and Louis Vuitton.

“Having soaked up everything I could at The Mill and Cut+Run, it was time for me to take that learning and carve my own path,” says Simons.

Dell launches PCs and displays featuring AI and 5G

Dell Technologies has new products and software across its premium Latitude, XPS and displays that feature AI and 5G.

The Dell Latitude 9000 series laptops are smaller and thinner than before, with a larger display. The new Latitude 9510 offers longer battery life, a 5G-ready design, enhanced audio features and intelligent solutions designed to increase productivity.

Starting at only 3.2 pounds, the Latitude 9510 features a large screen size, along with Intel Wi-Fi 6 (Gig+) and 5G mobile broadband capabilities. The design incorporates 5G antennas into the speakers to retain the InfinityEdge display, while carbon blade fans and dual heat pipes make the laptop quiet and cool to the touch. Launching with up to the latest 10th Gen Intel Core i7 processors, the vPro-ready Latitude 9510 features a machined-aluminum finish with diamond cut edges.

Dell Latitude

The Latitude 9510 has new built-in and automated AI-based optimization technology called Dell Optimizer, which helps reduce lag, delays and frustration thanks to:
• ExpressResponse: Based on user preferences and machine learning with Intel Adaptix Technology, it launches frequently used applications faster, switches quickly between applications and improves overall application performance to boost productivity.
• ExpressCharge: AI and machine learning improve battery life utilization based on a given user’s battery charge patterns and typical power usage. When critically low on battery, the Latitude 9510 will subtly adjust settings to preserve resources, like dimming the screen. It will also choose the best charging policy, like ExpressCharge Boost, which provides up to a 35% charge in 20 minutes to get running in a crunch.
• ExpressSign-in: This senses a user’s presence, enabling faster log-in and enhanced security with Dell’s PC proximity sensor enabled by Intel Context Sensing Technology and Windows Hello.

Dell XPS 13 9000

In addition to the 9000 series, Dell has also redesigned the XPS 13 laptop to have a smaller, thinner profile and a larger screen with a four-sided, virtually borderless InfinityEdge display. Made from machined aluminum, carbon fiber, woven glass fiber and hardened Corning Gorilla Glass, the XPS 13 has narrow bezels on every side, reducing its InfinityEdge borders and creating a smaller and thinner form factor than its XPS predecessors. With a larger 16:10 display that spans from all four edges, the new 25% brighter XPS InfinityEdge display offers more screen space to multitask throughout the day. This new design offers a 13.4-inch display in an 11-inch form factor — think about it fitting neatly on an airplane tray.

The XPS 13 offers 10th Gen Intel Core processors and long battery life. It features a larger display and touchpad, edge-to-edge keyboard and one-handed opening. Available options include the traditional XPS 13 with Windows 10 or the Developer Edition featuring Ubuntu 18.04 LTS.

Dell has also introduced a range of new monitors. The Dell 86 4K Interactive Touch Monitor is designed to connect users and increase collaboration in real time, featuring 4K UHD resolution, 20-point multi-touch, USB-C connectivity and Dell’s own Screen Drop Feature, which helps improve accessibility and reachability for users of different heights.

The UltraSharp 43 4K USB-C monitor allows users to view content from up to four connected PCs simultaneously to maximize productivity. This 42.5-inch monitor is height-adjustable and features USB-C connectivity and up to 90W power. And the new UltraSharp 27 4K USB-C monitor with VESA DisplayHDR 400 offers wide color coverage for accurate color reproduction.

Pricing and availability:
• XPS 13, starting at $999.99, is available in the US, Canada, Sweden, UK, Germany and France now and globally in February.
• XPS 13 Developer Edition, starting at $1,199.99, will be available in the US, Canada and select European countries in February.
• Latitude 9510, starting at $1,799, will be available globally March 26.
• Dell 86 4K Interactive Touch Monitor will be available globally April 10. Pricing to be announced.
• Dell UltraSharp 43 4K USB-C, starting at $1,049.99, will be available globally January 30.
• Dell UltraSharp 27 4K USB-C Monitor, starting at $709.99, will be available globally January 30.

ASC’s feature film nominees include 1917, Joker

The American Society of Cinematographers (ASC) has nominated eight feature films in the Theatrical and Spotlight categories of the 34th ASC Outstanding Achievement Awards. Winners will be named at the ASC’s annual awards on January 25 at the Ray Dolby Ballroom at Hollywood & Highland.

This year’s nominees are:

Theatrical Release

Roger Deakins, ASC, BSC, for 1917
Phedon Papamichael, ASC, GSC, for Ford v Ferrari
Rodrigo Prieto, ASC, AMC, for The Irishman
Robert Richardson, ASC, for Once Upon a Time… in Hollywood
Lawrence Sher, ASC, for Joker

Spotlight Award
Jarin Blaschke for The Lighthouse
Natasha Braier, ASC, ADF, for Honey Boy
Jasper Wolf, NSC, for Monos

This is Deakins’ 16th nomination by the society, which has sent him home a winner four times (The Shawshank Redemption, The Man Who Wasn’t There, Skyfall, Blade Runner 2049). Richardson earns his 11th nomination, while Papamichael and Prieto have each been recognized three times in the past by the organization. Sher, Blaschke, Braier and Wolf are first-time nominees.

Last year’s Theatrical winner was Łukasz Żal, PSC, for Cold War, which was also Oscar-nominated for Best Cinematography.

The Spotlight Award, introduced in 2014, recognizes cinematography in features that may not receive wide theatrical release. The accolade went to Giorgi Shvelidze for Namme in 2019.

Recreating the Vatican and Sistine Chapel for Netflix’s The Two Popes

The Two Popes, directed by Fernando Meirelles, stars Anthony Hopkins as Pope Benedict XVI and Jonathan Pryce as current pontiff Pope Francis in a story about one of the most dramatic transitions of power in the Catholic Church’s history. The film follows a frustrated Cardinal Bergoglio (the future Pope Francis) who in 2012 requests permission from Pope Benedict to retire because of his issues with the direction of the church. Instead, facing scandal and self-doubt, the introspective Benedict summons his harshest critic and future successor to Rome to reveal a secret that would shake the foundations of the Catholic Church.

London’s Union was approached in May 2017 and supervised visual effects on location in Argentina and Italy over several months. A large proportion of the film takes place within the walls of Vatican City. The Vatican was not involved in the production and the team had very limited or no access to some of the key locations.

Under the direction of production designer Mark Tildesley, the production replicated parts of the Vatican at Rome’s Cinecittà Studios, including a life-size, open-ceiling Sistine Chapel, which took two months to build.

The team LIDAR-scanned everything available and set about amassing as much reference material as possible — photographing from a permitted distance, scanning the set builds and buying every photographic book they could lay their hands on.

From this material, the team set about building 3D models — created in Autodesk Maya — of St. Peter’s Square, the Basilica and the Sistine Chapel. The environments team was tasked with texturing all of these well-known locations using digital matte painting techniques, including recreating Michelangelo’s masterpiece on the ceiling of the Sistine Chapel.

The story centers on two key changes of pope in 2005 and 2013. Those events attracted huge attention, filling St. Peter’s Square with people eager to discover the identity of the new pope and celebrate his ascension. News crews from around the world also camp out to provide coverage for the billions of Catholics all over the world.

To recreate these scenes, the crew shot at a school in Rome (Ponte Mammolo) that has the same pattern on its floor. A cast of 300 extras was shot in blocks in different positions at different times of day, with costume tweaks including the addition of umbrellas to build a library that would provide enough flexibility during post to recreate these moments at different times of day and in different weather conditions.

Union also called on Clear Angle Studios to individually scan 50 extras, providing additional options for the VFX team. This was an ambitious crowd project: the team couldn’t shoot at the actual location, and the end result had to stand up at 4K in very close proximity to the camera. Union designed a Houdini-based system to handle the volume of assets and clothing in a way that let the studio easily art-direct agents as individuals, allow the director to choreograph them and deliver a believable result.
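Union’s actual Houdini setup isn’t public, but the general pattern it describes — deterministic per-agent variation that an artist can still override — can be sketched in a few lines of Python. Everything below (the library contents, the attribute names) is hypothetical and only illustrates the idea:

```python
import random

# Hypothetical variation libraries standing in for the shot blocks of extras
# and the 50 scanned digital doubles.
COSTUMES = ["suit", "raincoat", "cassock", "casual", "press"]
PROPS = [None, "umbrella", "camera", "flag"]
CYCLES = ["idle", "cheer", "wave", "phone", "point"]

def crowd_agent(agent_id, overrides=None, seed=2013):
    """Assign a repeatable look and animation cycle to one crowd agent.

    Seeding per agent keeps the crowd identical between iterations, while the
    overrides dict lets an artist hand-dress specific agents (art direction).
    """
    rng = random.Random(f"{seed}:{agent_id}")
    agent = {
        "id": agent_id,
        "costume": rng.choice(COSTUMES),
        "prop": rng.choice(PROPS),
        "cycle": rng.choice(CYCLES),
        "cycle_offset": rng.uniform(0.0, 1.0),  # de-sync identical cycles
    }
    agent.update(overrides or {})
    return agent

# Fill the square with a repeatable crowd, then art-direct one hero agent.
crowd = [crowd_agent(i) for i in range(5000)]
crowd[42] = crowd_agent(42, overrides={"costume": "cassock", "cycle": "wave"})
```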

Union also conducted several in-house motion capture shoots to provide specific animation cycles matched to the occasions being recreated. This resulted in even more authentic-looking crowds for the post team.

Union worked on a total of 288 VFX shots, including greenscreens, set extensions, window reflections, muzzle flashes, fog and rain and a storm that included a lightning strike on the Basilica.

In addition, the team did a significant amount of de-aging work to accommodate the film’s eight-year main narrative timeline as well as a long period in Pope Francis’ younger years.

Uncut Gems directors Josh and Benny Safdie

By Iain Blair

Filmmakers Josh and Benny Safdie have been on the verge of the big time since they started making their own distinctive brand of cinema: one full of anxiety, brashness, untamed egos and sweaty palms. They’ve finally done it with A24’s Uncut Gems.

Following their cinéma vérité Heaven Knows What — with its look at the New York City heroin subculture — and the crime thriller Good Time, the Safdies return to the mean streets of New York City with their latest, Uncut Gems. The film is a twisty, tense tale that explores the tragic sway of fortune, family and fate.

The Safdies on set.

It stars Adam Sandler in a career-defining performance as Howard Ratner, a profane, charismatic New York City jeweler who’s always on the lookout for the next big score. When he makes a series of high-stakes bets that could lead to the windfall of a lifetime, Howard must perform a high-wire act by balancing business, family and encroaching adversaries on all sides.

Uncut Gems combines relentless pacing with gritty visuals, courtesy of DP Darius Khondji, and a score from Brooklyn-based experimental composer Daniel Lopatin.

In the tradition of ‘70s urban thrillers by Sidney Lumet, William Friedkin and Martin Scorsese (who produced, along with Scott Rudin), the film creates an authentic tapestry of indelible faces, places, sounds and moods.

Behind the camera, the Safdies also assembled a stellar team of go-to collaborators that included co-editor Ronald Bronstein and production designer Sam Lisenco.

I recently sat down with the Safdies, whose credits include Daddy Longlegs, Lenny Cooke and The Pleasure of Being Robbed, to talk about making the film (which is generating awards buzz) and their workflow.

What sort of film did you set out to make?
Josh Safdie: The exact one you see on the screen. It changed a lot along the way, but the cosmic vibe of it and the mélange of characters who don’t seem to fit together but do on this journey, where we are all on this colorful rock that might as well be an uncut gem – it was all there in the original idea. It’s pulpy, cosmic, funny, tense, and it’s what we wanted to do.

Benny Safdie: We have veteran actors, first-time actors and non-professionals in the cast, working alongside people we love so much. It’s great to see it all come together like it did.

How tough was it getting Adam Sandler, as I heard he initially passed on it?
Josh: He did. We sent it to him back in 2012, and I’m not sure it even got past “the moat,” as we call it. But once he saw our work, he immediately responded; he called us right after seeing Good Time. The irony is, one of his favorites was Daddy Longlegs, which we’d tried to approach him with. Once we actually made contact and started talking, it was instantly a strong kinship.

Benny: Now it’s like a deep friendship. He really got our need to dig deep on who this character is, and he put in the time and the care.

Any surprises working with him?
Josh: What’s funny is, we had a bunch of jokes written for him, and he then ad-libbed so many little things. He made us all smile every day.

What did he bring to the role of Howard, who could easily be quite unlikeable?
Josh: Exactly, and Adam brought that likeability in a way only he can. We had the benefit of following up his 50-city standup tour, where he did three hours of material every night, and we had a script loaded with dialogue. His mind was so sharp, so by the time he did this — and we were giving him new pages over lunch sometimes — he could just ingest them and regurgitate them and go out on a limb and try out a new joke, and then come back to the dialogue. He was so well oiled in the character that it was second nature to him.

Benny: And you root for him. You want him to succeed. Adam pushed us on stuff, like the family and the kids. He knew it was important to show those relationships, that audiences would want to see and feel that. And he wanted to create a very complicated person. Yes, Howard’s doing some bad things, but you want him to get there.

Was it difficult getting former pro basketball player Kevin Garnett to act in the film?
Josh: It’s always tough when someone is very successful in their own field. When you try to convince them to do acting, they know it’s a lot of work and they don’t need the money, and you’re asking them to play a version of themselves — and there’s the huge time commitment. But Kevin really committed, and he came back a bunch to shoot scenes we needed.

You’re well known for your run-and-gun, guerilla-style of shooting. Was that how you shot this film?
Josh: Yeah, a lot of locations, but we built the office sets. And we got permits for everything.

Benny: But we kept the streets open and mixed in the 80 SAG actors in the background.

How does it work on the set in terms of co-directing?
Josh: On a technical level, we’ll work with our DP on placing the camera. It was a bit different this time since Benny wasn’t also acting, like he did in Good Time. We were co-directing and getting that much closer to the action; you see different parts of a performance that way, and we have each other’s backs. We are able to put our heads together and get a really full picture of what’s happening on set. And if one of us talks to somebody, it’s always coming from both of us. We’ve been working together since we were kids, and we have a huge amount of trust in each other.

The way characters talk over each other, and then all the background chatter, reminded me a lot of Robert Altman and his approach.
Benny: Thanks. He was a huge influence on us. It’s using sound as it’s heard in real life. We heard this great story about Altman and the film McCabe and Mrs. Miller. About 15 minutes into the premiere Warren Beatty turned to Altman and asked, “Does the whole movie sound like this?” And Altman replied excitedly, “Yeah!” He was so far ahead of his time, and that’s what we tried to emulate.

What’s so great about Altman is that he saw life as a film, and he tried to get the film to ride up parallel to life in that sense. We ended up writing 45 extra pages of dialogue recording — just for the background. Scott Rudin was like, “You wrote a whole other script for background people?” We’d have a character there just to say one line, but it added all these extra layers.

Josh: On top of the non-stop dialogue, Howard’s a real loudmouth; he hears everything. Our post sound team was very special, and it was very educational for us. We began with Oscar-winning re-recording mixer Tom Fleischman, but then he had to go off to do The Irishman, so Skip Lievsay took over. Then Warren Shaw came on, and we worked with the two of them for a very long time.

Thanks to our producers, we had the time to really get in there and go deeper and deeper. I’d say the soundscape they built in Dolby Atmos really achieved something like life, and it also had areas for music and sound design that are so meticulous and rich that we’d watch the movie without the dialogue.

Where did you do the post?
Benny: All in New York. For sound, we started off at Soundtrack and then went to Warner Bros. Sound. We edited at our company offices with co-editor Ronny Bronstein. Brainstorm Digital did most of the crazy visual effects. We worked closely with them and on the whole idea of, “What does the inside of a gemstone look like?”

How does editing work with Ronny?
Josh: He’s often on the set with us, but we didn’t cut a frame until we sat down after the shoot and watched it all. I think that kept it fresh for us. Our assistant editor developed binders with all the script and script supervisor notes, and we didn’t touch it once during the edit. I think coming from documentaries, and that approach to the material, has informed all our editing. You look at what’s in front of you, and that’s what you use to make your film. Who cares what the script says!

One big challenge was the sheer amount of material, even though we only shot for 35 days — that includes the African unit. We had so many setups and perspectives, in things like the auction and the Seder scenes, but the scene we spent the most time writing and editing was the scene between Howard and Kevin in the back room… and we had the least time to shoot it — just over three hours.

L-R: Benny Safdie, Iain Blair and Josh Safdie.

You have a great score by your go-to composer Daniel Lopatin, who records as Oneohtrix Point Never.
Josh: We did the score at his studio in Brooklyn. It’s really another main character, and he did a great job as usual.

The DI must have been vital?
Josh: Yes, and we did all the color at The Mill with colorist Damien van der Cruyssen, who’s a really great colorist and also ran our dailies. Darius likes to spend a lot of time in the DI experimenting and finding the look, so we ended up doing about a month on it. Usually, we get just four days.

What’s next? A big studio movie?
Josh: Maybe, but we don’t want to abandon what we’ve got going right now. We know what we want. People offer us scripts but I can’t see us doing that.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

VFX pipeline trends for 2020

By Simon Robinson

A new year, more trends — some burgeoning, and others that have been dominating industry discussions for a while. Underpinning each is the common sentiment that 2020 seems especially geared toward streamlining artist workflows, more so than ever before.

There’s an increasing push for efficiency; not just through hardware but through better business practices and solutions to throughput problems.

Exciting times lie ahead for artists and studios everywhere. I believe the trends below form the pillars of this key industry mission for 2020.

Machine Learning Will Make Better, Faster Artists
Machines are getting smarter. AI software is becoming more universally applied in the VFX industry, and with this come benefits and implications for artist workflows.

As adoption of machine learning increases, the core challenge for 2020 lies in artist direction and participation, especially since the M.O. of machine learning is its ability to solve entire problems on its own.

The issue is this: if you rely on something 99.9% of the time, what happens when it fails in that remaining 0.1%? Can you fix it? While ML means less room for human error, will people have the skills to fix something gone wrong if they don’t need them anymore?

This issue necessitates building a bridge between artist and algorithm: ML can do the hard work, giving artists the time to get creative and perfect their craft in the final stages.

Gemini Man

We’ve seen this pay off in the face of accessible and inexpensive deepfake technology giving rise to “quick and easy” deepfakes, which rely entirely on ML. In contrast to these, crossing the uncanny valley remains the realm of highly skilled artists, requiring thought, artistry and care to produce something that tricks the human eye. Weta Digital’s work on Gemini Man is a prime example.

As massive projects like these continue to emerge, studios strive for efficiency and the ability to produce at scale. Since ML and AI are all about data, smarter capture and manipulation of that data can unlock enormous potential for the speed and scale at which artists operate.

Foundry’s own efforts in this regard revolve around improving the persistence and availability of captured data. We’re figuring out how to deliver data in a more sensible way downstream, from initial capture to timestamping and synchronization, and then final arrangement in an easy, accessible format.

Underpinning our research into this is Universal Scene Description (USD), which you’ve probably heard about…

USD Becomes Uniform
Despite the legacy and prominence it gained from its development at Pixar, Universal Scene Description’s relatively recent open-sourcing and gradual adoption mean it is still maturing for wider pipelines and workflows.

New iterations of USD are now being released at a three-month cadence, up from every two months. Each new release brings improvements as growing pains and teething issues are ironed out, and the slower pace provides some respite for artists who rely on specific versions of USD.

But challenges still exist, namely mismatched USD pipelines and scattered documentation, which means solutions to these problems can’t be found easily. Currently, no one is officially rubber-stamping USD best practices.

Capturing volumetric datasets for future testing.

To solve this, the industry needs a uniform application of USD, one that exists in pipelines as an application-standard plugin, to prevent an explosion of USD variants and the confusion that would follow.

If this comes off, documentation could be made uniform, and information could be shared across software, teams and studios with even more ease and efficiency.

It’ll make Foundry’s life easier, too. USD is vital to us to power interoperability in our products, allowing clients to extend their software capabilities on top of what we do ourselves.

At Foundry, our lighting tool, Katana, uses USD Hydra tech as the basis for much improved viewer experiences. Most recently, its Advanced Viewport Technology aims at delivering a consistent visual experience across software.

This wouldn’t be possible without USD. Even in its current state, the benefits are tangible, and its core principles — flexibility, modularity, interoperability — underpin 2020’s next big trends.

Artist Pipelines Will Look More Iterative 
The industry is asking, “How can you be more iterative through everything?” Calls for this will only grow louder as we move into next year.

There’s an increasing push for efficiency as the common sentiment prevails: too much work, not enough people to do it. While maximizing hardware usage might seem like a go-to solution to this, the actual answer lies in solving throughput problems by improving workflows and facilitating sharing between studios and artists.

Increasingly, VFX pipelines no longer work well as a waterfall structure, where each stage is done, dusted and passed on to the next department in a rigid, sequential process.

Instead, artists are thinking about how data persists throughout their pipeline and how to make use of it in a smart way. The main aim is to iterate on everything simultaneously for a more fluid, consistent experience across teams and studios.

USD helps tremendously here, since it captures all of the data layers and iterations in one. Artists can go to any one point in their pipeline, change different aspects of it, and it’s all maintained in one neat “chunk.” No waterfalls here.
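To make that layered, non-destructive idea concrete, here is a minimal sketch using the open-source USD Python bindings. The file names and the toy sphere asset are hypothetical; this is not Foundry’s or any studio’s pipeline code, just an illustration of how opinions stack in layers:

```python
# A minimal sketch of USD's layered, non-destructive editing model. Assumes
# the open-source pxr Python bindings; file and prim names are hypothetical.
from pxr import Sdf, Usd, UsdGeom

# One department authors a base asset layer: a simple sphere.
base = Sdf.Layer.CreateNew("asset_base.usda")
stage = Usd.Stage.Open(base)
sphere = UsdGeom.Sphere.Define(stage, "/Asset/Ball")
sphere.GetRadiusAttr().Set(1.0)
base.Save()

# A downstream department sublayers that asset and records its change as an
# override in its own layer, instead of rewriting the original file.
shot_stage = Usd.Stage.CreateNew("shot_override.usda")
shot_stage.GetRootLayer().subLayerPaths.append("asset_base.usda")
ball_over = UsdGeom.Sphere(shot_stage.OverridePrim("/Asset/Ball"))
ball_over.GetRadiusAttr().Set(2.0)   # opinion lives only in the shot layer
shot_stage.GetRootLayer().Save()

# Composition resolves the strongest opinion at read time (2.0), while the
# base asset layer stays untouched and reusable by every other shot.
print(ball_over.GetRadiusAttr().Get())
```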

Compositing in particular benefits from this new style of working. Being able to easily review in context lends an immense amount of efficiency and creativity to artists working in post production.

That’s Just the Beginning
Other drivers for artist efficiency that may gain traction in 2020 include: working across multiple shots (currently featured in Nuke Studio), process automation, and volumetric-style workflows to let artists work with 3D representations featuring depth and volume.

The bottom line is that 2020 looks to be the year of the artist — and we can’t wait.


Simon Robinson is the co-founder and chief scientist at Foundry.

ILM’s Pablo Helman on The Irishman‘s visual effects

By Karen Moltenbrey

When a film stars Robert De Niro, Joe Pesci and Al Pacino, well, expectations are high. These are no ordinary actors, and Martin Scorsese is no ordinary director. These are movie legends. And their latest project, Netflix’s The Irishman, is no ordinary film. It features cutting-edge de-aging technology from visual effects studio Industrial Light & Magic (ILM) and earned the film’s VFX supervisor, Pablo Helman, an Oscar nomination.

The Irishman, adapted from the book “I Heard You Paint Houses,” tells the story of an elderly Frank “The Irishman” Sheeran (De Niro), whose life is nearing the end, as he looks back on his earlier years as a truck driver-turned-mob hitman for Russell Bufalino (Pesci) and family. While reminiscing, he recalls the role he played in the disappearance of his longtime friend, Jimmy Hoffa (Al Pacino), former president of the Teamsters, who famously disappeared in 1975 at the age of 62, and whose body has never been found.

The film contains 1,750 visual effects shots, most of which involve the de-aging of the three actors. In the film, the actors are depicted at various stages of their lives — mostly younger than their present age. Pacino is the least de-aged of the three, since he enters the story about a third of the way through, covering the years from the late 1950s to Hoffa’s disappearance in 1975. He was 78 at the time of filming, and he plays Hoffa at various ages, from 44 to 62. De Niro, who was 76 at the time of filming, plays Sheeran at certain points from age 20 to 80. Pesci plays Bufalino between age 53 and 83.

For the significantly older Sheeran, during his introspection, makeup was used. However, making the younger versions of all three actors was much more difficult. Indeed, current technology makes it possible to create believable younger digital doubles. But it typically requires actors to perform alone on a soundstage wearing facial markers and helmet cameras, or requires artists to enhance or create performances with CG animation. That simply would not do for this film. Neither the actors nor Scorsese wanted the tech to interfere with the acting process in any way. Recreating their performances was also off the table.

“They wanted a technology that was non-intrusive and one that would be completely separate from the performances. They didn’t want markers on their faces, they did not want to wear helmet cams and they did not want to wear the gray [markered] pajamas that we normally use,” says VFX supervisor Helman. “They also wanted to be on set with theatrical lighting, and there wasn’t going to be any kind of re-shoots of performances outside the set.”

In a nutshell, ILM needed a markerless approach that occurred on-set during filming. To this end, ILM spent two years developing Flux, a new camera system and software, whereby a three-camera rig would extract performance data from lighting and textures captured on set and translate that to 3D computer-generated versions of the actors’ younger selves.

The camera rig was developed in collaboration with The Irishman’s DP, Rodrigo Prieto, and camera maker ARRI. It included two high-resolution (3.8K) Alexa Mini witness cameras that were modified with infrared rings; the two cameras were attached to and synched up with the primary sensor camera (the director’s Red Helium 8K camera). The infrared light from the two cameras was necessary to help neutralize any shadows on the actors’ faces, since Flux does not handle shadows well, yet remained “unseen” by the production camera.

Flux, meanwhile, used that camera information and translated it into a deformable geometry mesh. "Flux takes that information from the three cameras and compares it to the lighting on set, deforms the geometry and changes the geometry and the shape of the actors on a frame-by-frame basis," says Helman.

In fact, ILM continued to develop the software as it was working on the film. “It’s kind of like running the Grand Prix while you’re building the Ferrari,” Helman adds. “Then, you get better and better, and faster and faster, and your software gets better, and you are solving problems and learning from the software. Yes, it took a long time to do, but we knew we had time to do it and make it work.”

Pablo Helman (right) on The Irishman set.

At the beginning of the project, prior to the filming, the actors were digitally scanned performing a range of facial movements using ILM’s Medusa system, as well as on a light stage, which captured texture info under different lighting conditions. All that data was then used to create a 3D contemporary digital double of each of the actors. The models were sculpted in Autodesk’s Maya and with proprietary tools running on ILM’s Zeno platform.

ILM applied the 3D models to the exact performance data of each actor captured on set with the special camera rig, so the physical performances were now digital. No keyframe animation was used. However, the characters were still contemporary to the actors’ ages.

As Helman explains, after the performance, the footage was returned to ILM, where an intense matchmove was done of the actors’ bodies and heads. “The first thing that got matchmoved was the three cameras that were documenting what the actor was doing in the performance, and then we matchmoved the lighting instruments that were lighting the actor because Flux needs that lighting information in order to work,” he says.

Helman likens Flux to a black box full of little drawers where various aspects are inserted, like the layout, the matchimation, the lighting information and so forth, and it combines all that information to come up with the geometry for the digital double.

The actual de-aging occurs in modeling, using a combination of libraries created for each actor and connected to and referenced by Flux. Later, modelers created the age variations, starting with the youngest version of each person. Variants were then generated gradually using a slider to move through life's timeline. This process was labor-intensive, as artists also had to erase the effects of time, such as wrinkles and age spots.

Because The Irishman is not an action movie, creating motion for decades-younger versions of the characters was not an issue. However, a motion analyst was on set to work with the actors as they played the younger versions of their characters. Also, some visual effects work helped thin out the younger characters.

Helman points out that Scorsese stressed that he did not want to see a younger version of the actors playing roles from the past; he wanted to see younger versions of these particular characters. “He did not want to rewind the clock and see Robert De Niro as Jimmy Conway in 1990’s Goodfellas. He wanted to see De Niro as a 30-year-younger Frank Sheeran,” he explains.

When asked which actor posed the most difficulty to de-age, Helman explains that once you crack the code of capturing the performance and then retargeting the performance to a younger variation of the character, there’s little difference. Nevertheless, De Niro had the most screen time and the widest age range.

Performance capture began about 15 years ago, and Helman sees this achievement as a natural evolution of the technology. “Eventually those [facial] markers had to go away because for actors, that’s a very interesting way to work, if you really think about it. They have to try to ignore the markers and not be distracted by all the other intrusive stuff going on,” Helman says. “That time is now gone. If you let the actors do what they do, the performances will be so much better and the shots will look so much better because there is eye contact and context with another actor.”

While this technology is a quantum leap forward, there are still improvements to be made. The camera rig needs to get smaller and the software faster — and ILM is working on both aspects, Helman says. Nevertheless, the accomplishment made here is impressive and groundbreaking — the first markerless system that captures performance on set with theatrical lighting, thanks to more than 500 artists working around the world to make this happen. As a result, it opens up the door for more storytelling and acting options — not only for de-aging, but for other types of characters too.

Commenting on his Oscar nomination, Helman said, “It was an incredible, surreal experience to work with Scorsese and the actors, De Niro, Pacino and Pesci, on this movie. We are so grateful for the trust and support we got from the producers and from Netflix, and the talent and dedication of our team. We’re honored to be recognized by our colleagues with this nomination.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Skywalker Sound and Cinnafilm create next-gen audio toolset

Iconic audio post studio Skywalker Sound and Cinnafilm, maker of the PixelStrings media conversion technology, are working together on a new audio tool expected to hit in the first quarter of 2020.

As the paradigms of theatrical, broadcast and online content begin to converge, the need to properly conform finished programs to specifications suitable for a variety of distribution channels has become more important than ever. To ensure high fidelity is maintained throughout the conversion process, it is important to implement high-quality tools to aid in time-domain, level, spatial and file-format processing for all transformed content intended for various audiences and playout systems.

“PixelStrings represents our body of work in image processing and media conversions. It is simple, scalable and built for the future. But it is not just about image processing, it’s an ecosystem. We recognize success only happens by working with other like-minded technology companies. When Skywalker approached us with their ideas, it was immediate validation of this vision. We plan to put as much enthusiasm and passion into this new sound endeavor as we have in the past with picture — the customers will benefit as they see, and hear, the difference these tools make on the viewer experience,” says Cinnafilm CEO/founder Lance Maurer.

To address this need, Skywalker Sound has created an audio tool set based on proprietary signal processing and orchestration technology. Skywalker Audio Tools will offer an intelligent, automated audio pipeline with features including sample-accurate retiming, loudness and standards analysis and correction, downmixing, channel mapping and segment creation/manipulation — all faster than realtime. These tools will be available exclusively within Cinnafilm’s PixelStrings media conversion platform.

Talking work and trends with Wave Studios New York

By Jennifer Walden

The ad industry is highly competitive by nature. Advertisers compete for consumers, ad agencies compete for clients and post houses compete for ad agencies. Now put all that in the dog-eat-dog milieu of New York City, and the market becomes more intimidating.

When you factor in the saturation level of the audio post industry in New York City — where audio facilities are literally stacked on top of each other (occupying different floors of the same building or located just down the hall from each other) — then the odds of a new post sound house succeeding seem dismal. But there’s always a place for those willing to work for it, as Wave Studios’ New York location is proving.

Wave Studios — a multi-national sound company with facilities in London and Amsterdam — opened its doors in NYC a little over a year ago. Co-founder/sound designer/mixer Aaron Reynolds worked on The New York Times “The Truth Is Worth It” ad campaign for Droga5 that earned two Grand Prix awards at the 2019 Cannes Lions International Festival of Creativity, and Reynolds’ sound design on the campaign won three Gold Lions. In addition, Wave Studios was recently named Sound Company of the Year 2019 at Germany’s Ciclope International Festival of Craft.

Here, Reynolds and Wave Studios New York executive producer Vicky Ferraro (who has two decades of experience in advertising and post) talk about what it takes to make it and what agency clients are looking for. They also share details on their creative approach to two standout spots they've done this year for Droga5.

How was your first year-plus in NYC? What were some challenges of being the new kid in town?
Vicky Ferraro: I joined Wave to help open the New York City office in May 2018. I had worked at Sound Lounge for 12 years, and I’ve worked on the ad agency side as well, so I’m familiar with the landscape.

One of the big challenges is that New York is quite a saturated market when it comes to audio. There are a lot of great audio places in the city. People have their favorite spots. So our challenges are to forge new relationships and differentiate ourselves from the competition, and figure out how to do that.

Also, the business model has changed quite a bit; a lot of agencies have in-house facilities. I used to work at Hogarth, so I’m quite familiar with how that side of the business works as well. You have a lot of brands that are working in-house with agencies.

So, opening a new studio was a little daunting despite all the success that Wave Studios in London and Amsterdam have had.
Aaron Reynolds: I worked in London, and we always had work from New York clients. We knew friends and people over here. Opening a facility in New York was something we had wanted to do since 2007. The challenge was to get out there and tell people that we're here. We were finally coming over from London and forging those relationships with clients we had worked with remotely.

New York has a slightly different work ethic in that they tend to do the sound design with us and then do the mix elsewhere. One challenge was to get across to our clients that we offer both, from start to finish.

Sound design and mixing are one and the same thing. When I’m doing my sound design, I’m thinking about how I want it to sound in the mix. It’s quite unique to do the sound design at one place and then do the mix somewhere else.

What are some trends you’re seeing in the New York City audio post scene? What are your advertising clients looking for?
Reynolds: On the work side, they come here for a creative sound design approach. They don’t want just a bit of sound here and a bit of sound there. They want something to be brought to the job through sound. That’s something that Wave has always done, and that’s been a bastion of our company. We have an idea, and we want to create the best sound design for the spot. It’s not just a case of, “bring me the sounds and we’ll do it for you.” We want to add a creative aspect to the work as well.

And what about format? Are clients asking for 5.1 mixes? Or stereo mixes still?
Reynolds: 99% of our work is done in stereo. Then, we’ll get the odd job mixed in 5.1 if it’s going to broadcast in 5.1 or play back in the cinema. But the majority of our mixes are still done in stereo.

Ferraro: That’s something that people might not be aware of, that most of our mixes are stereo. We deliver stereo and 5.1, but unless you’re watching in a 5.1 environment (and most people’s homes are not a 5.1 environment), you want to listen to a stereo mix. We’ve been talking about that with a lot of clients, and they’ve been appreciative of that as well.

Reynolds: If you tend to mix in 5.1 and then fold down to a stereo mix, you’re not getting a true stereo mix. It’s an artificial one. We’re saying, “Let’s do a stereo mix. And then let’s do a separate 5.1 mix. Then you’re getting the best of both.”

Most of what you're listening to is stereo, so you want to have the best possible stereo mix you can. You don't want a second-rate mix when 99% of the media will be played in stereo.
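For readers unfamiliar with what a fold-down actually does, here is a rough sketch of a conventional ITU-style 5.1-to-stereo fold-down, with coefficients in the spirit of ITU-R BS.775. It is my own illustration, not Wave's practice; their point is precisely that fixed coefficients sum decisions made for six speakers, which is why they prefer to mix a dedicated stereo version.

```python
import numpy as np

def fold_down_5_1(l, r, c, lfe, ls, rs, center_gain=0.707, surround_gain=0.707):
    """Sum 5.1 channel arrays into a stereo pair with fixed coefficients.

    Each argument is a NumPy array of samples for one channel; the LFE
    channel is typically discarded in a simple fold-down.
    """
    left = l + center_gain * c + surround_gain * ls
    right = r + center_gain * c + surround_gain * rs
    return left, right

# A test tone panned to the center channel ends up split equally (about -3 dB)
# into both stereo channels -- a fixed, "artificial" choice rather than a mix
# decision made for stereo playback.
t = np.linspace(0, 1, 48000, endpoint=False)
center = np.sin(2 * np.pi * 440 * t)
silence = np.zeros_like(center)
left, right = fold_down_5_1(silence, silence, center, silence, silence, silence)
```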

What are some of the benefits and challenges of having studios in three countries? Do you collaborate on projects?
Ferraro: We definitely collaborate! It’s been a great selling point, and a fantastic time-saver in a lot of cases. Sometimes we’ll get a project from London or Amsterdam, or vice versa. We have two sound studios in New York, and sometimes a job will come in and if we can’t accommodate it, we can send it over to London. (This is especially true for unsupervised work.) Then they’ll do the work, and our client has it the next morning. Based on the time zone difference, it’s been a real asset, especially when we’re under the gun.

Aaron has a great list of clients that he works with in London and Amsterdam who continue to work with him here in New York. It’s been very seamless. It’s very easy to send a project from one studio to another.

Reynolds: We all work on the same system — Steinberg Nuendo — so if I send a job to London, I can have it back the next morning, open it up, and have the clients review it with me. I can carry on working in the same session. It’s almost as if we can work on a 24-hour cycle.

All the Wave Studios use Steinberg Nuendo as their DAW?
Reynolds: It's audio post software designed with sound designers in mind. Pro Tools is more of a mixing package, good for recording music and live bands. It's good for mixing, but it's not particularly great for doing sound design. Nuendo, on the other hand, was built for sound design from the ground up. It has a lot of great built-in plugins. With Pro Tools, you need to get a lot of third-party plugins. Having all these built-in plugins makes the software really solid and reliable.

When it comes to third-party plugins, we really don’t need that many because Nuendo has so many built in. But some of the most-used third-party plugins are reverbs, like Audio Ease’s Altiverb and Speakerphone.

I think we’re one of the only studios that uses Nuendo as our main DAW. But Wave has always been a bit rogue. When we first set up years ago, we were using Fairlight, which no one else was using at the time. We’ve always had the desire to use the best tool that we can for the job, which is not necessarily the “industry standard.” When it came to upgrading all of our systems, we were looking into Pro Tools and Nuendo, but one of the partners at Wave, Johnnie Burn, uses Nuendo for the film side. He found it to be really powerful, so we made the decision to put it in all the facilities.

Why should agencies choose an independent audio facility instead of keeping their work in-house? What’s the benefit for them?
Ferraro: I can tell you from firsthand knowledge that there are several benefits to going out-of-house. The main thing that draws clients to Wave Studios — and away from in-house — is the high level of creativity and experience that comes with our engineers. We bring a different perspective than what you get from an in-house team. While there is a lot of talent in-house, those models often rely on freelancers who aren't as invested in the company, and that poses challenges in building the brand. It's a different approach to working and finishing a piece.

Those two aspects play into it — the creativity and having engineers dedicated to our studio. We’re not bringing in freelancers or working with an unknown pool of people. That’s important.

From my own experience, sometimes the approach can feel more formulaic. As an independent audio facility, our approach is very collaborative. There’s a partnership that we create with all of our clients as soon as they’re on board. Sometimes we get involved even before we have a job assigned, just to help them explore how to expand their ideas through sound, how they should be capturing the sound on-set, and how they should be thinking about audio post. It’s a very involved process.

Reynolds: What we bring is a creative approach. Elsewhere, that can be more formulaic, as Vicky said. Here, we want to be as creative as possible and treat jobs with attention and care.

Wave Studios is an international audio company. Is that a draw for clients?
Ferraro: One hundred percent. You’ve got to admit, it’s got a bit of cachet to it for sure. It’s rare to be a commercial studio with outposts in other countries. I think clients really like that, and it does help us bring a different perspective. Aaron’s perspective coming from London is very different from somebody in New York. It’s also cool because our other engineer is based in the New York market, and so his perspective is different from Aaron’s. In this way, we have a blend of both.

Some big commercial audio post houses have gone under, like Howard Schwartz and Nutmeg. What does it take for an audio post house in NYC to be successful in the long run?
Reynolds: The thing to do to maintain a good studio — whether in New York City or anywhere — is not to get complacent. Don’t ever rest on your laurels. Take every job you do as if it’s your first — have that much enthusiasm about it. Keep forging for the best, and that will always shine through. Keep doing the most creative work you can do, and that will make people want to come back. Don’t get tired. Don’t get lazy. Don’t get complacent. That’s the key.

Ferraro: I also think that you need to be able to evolve with the changing environment. You need to be aware of how advertising is changing, stay on top of the trends and move with it rather than resisting it.

What are some spots that you’ve done recently at Wave Studios NYC? How do they stand out, soundwise?
Reynolds: There’s a New York Times campaign that I have been working on for Droga5. A spot in there is called Fearlessness, which was all about a journalist investigating ISIS. The visuals tell a strong story, and so I wanted to do that in an acoustic sort of way. I wanted people to be able to close their eyes and hear all of the details of the journey the writer was taking and the struggles she came across. Bombs had blown up a derelict building, and they are walking through the rubble. I wanted the viewer to feel the grit of that environment.

There’s a distorted subway train sound that I added to the track that sets the tone and mood. We explored a lot of sounds for the piece. The soundscapes were created from different layers using sounds like twisting metals and people shouting in both English and Arabic, which we sourced from libraries like Bluezone and BBC, in particular. We wanted to create a tone that was uneasy and builds to a crescendo.

We’ve got a massive amount of sound libraries — about 500,000 sound effects — that are managed via Nuendo. We don’t need any independent search engine. It’s all built within the Nuendo system. Our sound effects libraries are shared across all of our facilities in all three countries, and it’s all accessed through Nuendo via a local server for each facility.

We did another interesting spot for Droga5 called Night Trails for Harley-Davidson’s electric motorcycle. In the spot, the guy is riding through the city at night, and all of the lights get drawn into his bike. Ringan Ledwidge, one of the industry’s top directors, directed the spot. Soundwise, we were working with the actual sound of the bike itself, and I elaborated on it to make it a little more futuristic. In certain places, I used the sound of hard drives spinning and accelerating to create an electric bike-by. I had to be quite careful with it because they do have an actual sound for the bike. I didn’t want to change it too much.

For the sound of the lights, I used whispers of people talking, which I stretched out. So as the bike goes past a streetlight, for example, you hear a vocal “whoosh” element as the light travels down into the bike. I wanted the sound of the lights not to be too electric, but more light and airy. That’s why I used whispers instead of buzzing electrical sounds. In one scene, the light bends around a telephone pole, and I needed the sound to be dynamic and match that movement. So I performed that with my voice, changing the pitch of my voice to give the sound a natural arc and bend.

Main Image: (L-R) Aaron Reynolds and Vicky Ferraro


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

UPM releases ‘Evolution’ from music label Icon

Universal Production Music has released “Evolution,” a new album from music label Icon. Featuring original work from Icon co-founder Frederik Wiedmann and other top cinema, trailer and game composers, “Evolution” contains expansive, experimental compositions for film, television, advertising, trailers and promos. “Evolution” is available now for sync and download through the UPM website.

The tracks on “Evolution” are a sharp departure from standard trailer music, going beyond typical durations and employing novel instrumentation and orchestral effects. “The composers had infinite room to explore, experiment and push the envelope,” says Wiedmann. “The results are spectacular and show what happens when you let great artists loose, free from restrictions.”

Wiedmann is an Emmy Award-winning composer and producer who recently scored the thriller Pure from the Hulu original series Into the Dark. His credits also include the features Hangman, Stoic and Day of the Dead: Bloodline; the epic Civil War drama Field of Lost Shoes; and the animated series Green Lantern and Son of Batman. Other contributors include British composer Gareth Coker, who scored the Xbox One game Ori and the Blind Forest (nominated for a BAFTA Game Award), as well as Ark: Survival Evolved and Minecraft. Also featured are Zach Lemmon (The Gallows, From Prada to Nada) and Axel Tenner (Looking for Mabel Normand, Die Muse).

The sound of “Evolution” is totally unexpected. “We eliminated the conventions and constraints that are usually applied to epic music releases,” notes Icon co-founder Joel Goodman, who produced the album with Andrew DeWitt. “We challenged the composers to create something that was first and foremost a fantastic listening experience.”

“This album delivers what editors are looking for in sync music,” adds DeWitt, “but musically it comes at it from a new direction. We wanted to make a splash and create something that is outside the norm.”

All of the tracks were recorded live with a string orchestra and vocal soloists. The latter included Tori Letzler, a former vocalist for Cirque du Soleil who has performed on more than 40 film scores for composers including Hans Zimmer, Brian Tyler, Lorne Balfe and Rupert Gregson-Williams.

Universal Production Music has maintained a licensing agreement with Icon since 2017. It has released dozens of albums produced by the label, whose distinctive epic sound has proved popular for trailers, advertising, promos and other media. NASA has employed Icon tracks on several of its publicly released videos, including Tour of the Moon 4K, which has been viewed nearly 3 million times on YouTube.

“Icon has developed a very loyal fan base among our clients,” says Universal Production Music director of marketing Andy Donahue. “They look out for new Icon releases and go to them immediately when they appear. People love the power of this music, with its ability to grab an audience and draw them into a story.”

Watch the trailer.

Maxon and Red Giant to merge

Maxon, developers of pro 3D software solutions, and Red Giant, makers of tools for editors, VFX artists, and motion designers, have agreed to merge under the media and entertainment division of Nemetschek Group. The transaction is expected to close in January 2020, subject to regulatory approval and customary closing conditions.

Maxon, best known for its 3D product Cinema 4D, was formed in 1986 to provide high-end yet accessible 3D software solutions. Artists across the globe rely on Maxon products to create high-end visuals. In April of this year, Maxon acquired Redshift, developer of the GPU-accelerated Redshift render engine.

Since 2002, Red Giant has built its brand through products such as Trapcode, Magic Bullet, Universe, PluralEyes and its line of visual effects software. Its tools are used in the fields of film, broadcast and advertising.

The two companies provide tools for companies including ABC, CBS, NBC, HBO, BBC, Sky, Fox Networks, Turner Broadcasting, NFL Network, WWE, Viacom, Netflix, ITV Creative, Discovery Channel, MPC, Digital Domain, VDO, Sony, Universal, The Walt Disney Company, Blizzard Entertainment, BMW, Facebook, Apple, Google, Vitra, Nike and many more.

Main Photo: L-R: Maxon CEO Dave McGavran and Red Giant CEO Chad Bechert

DP Chat: The Morning Show cinematographer Michael Grady

By Randi Altman

There have never been more options to stream content than right now. In addition to veterans Netflix, Amazon and Hulu, Disney+ and Apple TV+ have recently joined the fray.

In fact, not only did Apple TV+ just launch last month, but its The Morning Show — about what goes on behind the scenes on, well, a morning show — has earned three Golden Globe nominations. The show stars Jennifer Aniston, Reese Witherspoon, Steve Carell and Billy Crudup.

L-R: Mimi Leder and Michael Grady on set.

Veteran cinematographer Michael Grady (On the Basis of Sex, The Leftovers, Ozark) was called on by frequent collaborator and executive producer Mimi Leder to shoot the show. We reached out to Grady to find out more about the show and how he works.

How early did you get involved on The Morning Show, and what direction were you given about the shoot?
I have worked with Mimi Leder often over the last 15 years. We have done multiple projects, so we have a great shorthand. We were finishing a movie called On the Basis of Sex when she first mentioned The Morning Show. We spoke about the project even before she was certain that she would take it on as the executive producer and lead director. Ultimately, it is awesome to work with Mimi because she really creates an amazingly collaborative and open work environment. She really allows each person to bring something specific to a project while always staying at the wheel and gently guiding everyone toward a common goal. It’s a very different process from many directors. She knows how to maximize the talents of those around her while staying in control.

Mimi directed episodes 1 and 2. They are essentially the pilot and the setup of the show. After her two episodes, a very seasoned and talented group of directors handled episodes 3 through 9. On episode 4, I worked with Lynn Shelton, who is truly amazing at directing actors. She is one of the loveliest and most deeply collaborative directors I have ever worked with.

For episode 6, I had the pleasure of working with Tucker Gates. Tucker is a brilliant veteran director that I immediately felt at ease with. I adored working with him. We really saw eye to eye on the common ground of filmmaking. He is a director that is experienced enough to really understand all aspects of filmmaking and respects each person on the crew and what they are also trying to achieve. I thought that he directed the most technically challenging episode created this season.

Next, I had another decorated director. Michelle MacLaren has made some great TV in the past, and she did it again on her episode of The Morning Show. We had worked together on The Leftovers, and I loved the work we did together on that show and again on this one. She is a visionary director. Extremely driven.

How would you describe the look of show?
Well, Mimi and I looked at a lot of films as reference before we began. We settled on a clean, elegant, classical feel. The look of Michael Clayton, shot by Robert Elswit, was a key reference for the show. We used a motif of reflections: glass, mirrors, water, steel, hard surfaces and, of course, moving images on monitors. Both natural and man-made reflections of our cast served as signposts for framing the look of the show. It seemed an appropriately perfect motif for telling the story of how America's morning news programs function and the underbelly that we attempt to investigate.

How did you work with the producers and the colorist to achieve the intended look?
Siggy Ferstl at Company 3 is our colorist. Siggy and I have collaborated on well over 10 movies for the last 15 years. I think he is, without question, the best colorist in the movie business.

The show has a rich, reserved elegance about it. On a project like this, Siggy needs very little guidance or direction from me. He easily understands the narrative and what the look and feel should be on a show. We talk, and it evolves. He is amazing at identifying what you have created in the shoot and then expanding upon those concepts in an attempt to enhance and solidify what you were attempting in image acquisition.

Where was it shot, and how long was the shoot?
We shot in LA on the Sony lot, all over LA and then also in New York City and Las Vegas. We shot for five months. The shoot ran November through the middle of May. I began prep a month or so before.

How did you go about choosing the right camera and lenses for this project? Can you talk about camera tests?
We opted for the Panavision Millennium DXL2 with Primo 70 lenses — Apple’s specs required a 4K minimum. We tested a few systems and ended up choosing the Panavision for its awesome versatility. One camera does it all — Steadi, hand-held, studio, etc. At the time, there were few options for large format. There are many more now. We tested quite a few lenses with Jenn, Reese and Billy and ultimately chose the Primo 70s. We loved the clean but smooth look of these lenses. We shot them clean with zero filtration. I really liked the performance of the lenses for this show.

Can you talk about lighting?
Obviously, lighting is everything. Depth, contrast, color and the overall richness of the image are all achieved through lighting and art direction. Planning and previsualization are the key elements. We tried to take great care in each image, but this Panavision camera is groundbreaking in terms of shooting raw on the streets at night. The native 1600 ASA is insane. The images are so fast and clean, but our priority on this show was how the camera functioned within the realm of skin tones, texture, etc. We loved the natural look and feel of this camera and lens combo. The DXL really is an awesome addition to the large-format camera choices out there right now.

Any challenging scenes that you are particularly proud of or found most challenging?
We had an episode that took place in Las Vegas, and we shot for one long night. All mostly just grab and go. Very guerilla-style. Most of the sequences on Las Vegas Boulevard are all natural … no artificial light. Along the same lines as above, the camera performed on the streets of Las Vegas beautifully. The images were very clean, and the range and latitude of the DXL were amazing. I am shocked at how today’s cameras perform at low-light levels.

You’ve also shot feature films. Can you talk about differences in your process?
I don’t really see huge differences any more between features and high-end TV like this show. It used to be that features were given more time, and then more was expected. Well, you may still get more time, but everyone expects feature-quality, cinema-like images in dramatic television.

We have three huge international movie stars; I treat them no differently than if their images were to go up on the big screen. One-hour drama shows are the single most difficult and demanding projects to work on these days. They have all of the same demands as feature films, but they are created in a much tighter window. The expectations of quality seem very much the same today. Further, I think that the long grind of episodic TV also makes it tougher. It’s a long marathon, not a sprint.

Now for some more general questions …
How did you become interested in cinematography?
I was always an art student. I studied painting and drawing mostly. Later, I studied philosophy and business in college. I took private lessons from local artists growing up and also spent a lot of time playing sports (football). But I always loved movies. I found this to be the perfect job for me. It combines all of those elements. To me, telling stories with pictures requires the many skills that I learned from both sports and art.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
I used to be inspired by photos and art and other cinematographers’ work on films and shows. Of course, those things still inspire me, but people inspire me so much more now. The inspiration of reality seems far more interesting to me than abstractions today. The real emotions of people are what actually inspires movies to attempt to be art. Emotions are what movies are truly about. How can an emotion inspire an image and how can an image inspire an emotion? That’s the deal.

Technology and I don’t really get along very well. To me, it’s always only about storytelling. Technology has never been all that interesting to me. I stay somewhat aware of the new toys, but I’m not very obsessed. My crew keeps me informed also.

That being said, how has technology changed the way you work?
The film-to-video transition was obviously the biggest technological change in my career. Today, I think that the insanely sensitive cameras and their high native ASA ratings are the biggest technological advantage. These fast cameras allow us to work at such low light levels that it just seems a lot easier than it was 20 years ago.

The incredible advances in LED lights have also really altered the work process. They are now so powerful in such a compact footprint that it is increasingly easier to get a decent image today. It’s all smaller and not so cumbersome.

What are some of your best practices or rules you try to follow on each job?
Over the years, I have gone from obsessed to maniacal to relaxed to obsessed again. Today, I am really trying to be more respectful of all the artists working on the project and to just not let it all get to me. The pressure of it affects people differently. I really try to stay more even and calm. It really is all about the crew. DPs just direct traffic. Point people in the right way and direct them. Don’t give them line readings, but direct them. You just must be confident in why you are directing them a certain way. If you don’t believe, neither will they. In the end, the best practice a DP can ever have is to always get the best crew possible. People make movies. You need good people. You are only as good as your crew. It really is that simple.

Explain your ideal collaboration with the director when setting the look of a project.
Mimi Leder is my best example of a collaborative director. We have a shared taste, and she really understands the value of a talented crew working together on a clearly defined common goal. The best directors communicate well and share enough information and ideas with the crew so that the crew can go and execute those ideas, and ultimately, expand upon them.

If a director clearly understands the story that they are telling, then they can eloquently communicate the concepts that will bring that story to life. The artists around them can expand a director’s ideas and solidify and embed them into the images. We all bring multiple levels of detail to the story. Hopefully, we are all inserting the same thematic ideas in our storytelling decisions. Mimi really allows her department heads to explore the story and bring their aesthetic sensibilities to the project. In the end, real collaboration is synonymous with good directing.

What’s your go-to gear? Things you can’t live without?
Lots of iced lattes and cold brew. I really don’t have a constant accessory, short of coffee.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

A Beautiful Day in the Neighborhood director Marielle Heller

By Iain Blair

If you are of a certain age, the red cardigan, the cozy living room and the comfy sneakers can only mean one thing — Mister Rogers! Sony Pictures’ new film, A Beautiful Day in the Neighborhood, is a story of kindness triumphing over cynicism. It stars Tom Hanks and is based on the real-life friendship between Fred Rogers and journalist Tom Junod.

Marielle Heller

In the film, jaded writer Lloyd Vogel (Matthew Rhys), whose character is loosely based on Junod, is assigned a profile of Rogers. Over the course of his assignment, he overcomes his skepticism, learning about empathy, kindness and decency from America’s most beloved neighbor.

A Beautiful Day in the Neighborhood is helmed by Marielle Heller, who most recently directed the film Can You Ever Forgive Me? and whose feature directorial debut was 2015’s The Diary of a Teenage Girl. Heller has also directed episodes of Amazon’s Transparent and Hulu’s Casual.

Behind the scenes, Heller collaborated with DP Jody Lee Lipes, production designer Jade Healy, editor Anne McCabe, ACE, and composer Nate Heller.

I recently spoke with Heller about making the film, which is generating a lot of Oscar buzz, and her workflow.

What sort of film did you set out to make?
I didn’t want to make a traditional biopic, and part of what I loved about the script was it had this larger framing device — that it’s a big episode of Mister Rogers for adults. That was very clever, but it’s also trying to show who he was deep down and what it was like to be around him, rather than just rattling off facts and checking boxes. I wanted to show Fred in action and his philosophy. He believed in authenticity and truth and listening and forgiveness, and we wanted to embody all that in the filmmaking.

It couldn’t be more timely.
Exactly, and it’s weird since it’s taken eight years to get it made.

Is it true Tom Hanks had turned this down several times before, but you got him in a headlock and persuaded him to do it?
(Laughs) The headlock part is definitely true. He had turned it down several times, but there was no director attached. He’s the type of actor who can’t imagine what a project will be until he knows who’s helming it and what their vision is.

We first met at his grandkid’s birthday party. We became friends, and when I came on board as director, the producers told me, “Tom Hanks was always our dream for playing Mister Rogers, but he’s not interested.” I said, “Well, I could just call him and send him the script,” and then I told Tom I wasn’t interested in doing an imitation or a sketch version, and that I wanted to get to his essence right and the tone right. It would be a tightrope to walk, but if we could pull it off, I felt it would be very moving. A week later he was like, “Okay, I’ll do it.” And everyone was like, “How did you get him to finally agree?” I think they were amazed.

What did he bring to the role?
Maybe people think he just breezed into this — he’s a nice guy, Fred’s a nice guy, so it’s easy. But the truth is, Tom’s an incredibly technically gifted actor and one of the hardest-working ones I’ve ever worked with. He does a huge amount of research, and he came in completely prepared, and he loves to be directed, loves to collaborate and loves to do another take if you need it. He just loves the work.

Any surprises working with him?
I just heard that he’s actually related to Fred, and that’s another weird thing. But he truly had to transform for the role because he’s not like Fred. He had to slow everything down to a much slower pace than is normal for him and find Fred’s deliberate way of listening and his stillness and so on. It was pretty amazing considering how much coffee Tom drinks every day.

What did Matthew Rhys bring to his role?
It's easy to forget that he's actually the protagonist and the proxy for all the cynicism and neuroticism that many of us feel and carry around. This is what makes it so hard to buy into a Mister Rogers world and philosophy. But Matthew's an incredibly complex, emotional person, and you always know how much he's thinking. He's always three steps ahead of you, he's very smart, and he's not afraid of his own anger and exploring it on screen. I put him through the wringer, as he had to go through this major emotional journey as Lloyd.

How important was the miniature model, which is a key part of the film?
It was a huge undertaking, but also the most fun we had on the movie. I grew up building miniatures and little cities out of clay, so figuring it all out — What’s the bigger concept behind it? How do we make it integrate seamlessly into the story? — fascinated me. We spent months figuring out all the logistics of moving between Fred’s set and home life in Pittsburgh and Lloyd’s gritty, New York environment.

While we shot in Pittsburgh, we had a team of people spend 12 weeks building the detailed models that included the Pittsburgh and Manhattan skylines, the New Jersey suburbs, and Fred’s miniature model neighborhood. I’d visit them once a week to check on progress. Our rule of thumb was we couldn’t do anything that Fred and his team couldn’t do on the “Neighborhood,” and we expanded a bit beyond Fred’s miniatures, but not outside of the realm of possibility. We had very specific shots and scenes all planned out, and we got to film with the miniatures for a whole week, which was a delight. They really help bridge the gap between the two worlds — Mister Rogers’ and Lloyd’s worlds.

I heard you shot with the same cameras the original show used. Can you talk about how you collaborated with DP Jody Lee Lipes, to get the right look?
We tracked down original Ikegami HK-323 cameras, which were used to film the show, and shipped them in from England and brought them to the set in Pittsburgh. That was huge in shooting the show and making it even more authentic. We tried doing it digitally, but it didn’t feel right, and it was Jody who insisted we get the original cameras — and he was so right.

Where did you post?
We did it in New York — the editing at Light Iron, the sound at Harbor and the color at Deluxe.

Do you like the post process?
I do, as it feels like writing. There’s always a bit of a comedown from production for me, which is so fast-paced. You really slow down for post; it feels a bit like screeching to a halt for me, but the plus is you get back to the deep critical thinking needed to rewrite in the edit, and to retell the story with the sound and the DI and so on.

I feel very strongly that the last 10% of post is the most important part of the whole process. It’s so tempting to just give up near the end. You’re tired, you’ve lost all objectivity, but it’s critical you keep going.

Talk about editing with Anne McCabe. What were the big editing challenges?
She wasn’t on the set. We sent dailies to her in New York, and she began assembling while we shot. We have a very close working relationship, so she’d be on the phone immediately if there were any concerns. I think finding the right tone was the biggest challenge, and making it emotionally truthful so that you can engage with it. How are you getting information and when? It’s also playing with audiences’ expectations. You have to get used to seeing Tom Hanks as Mister Rogers, so we decided it had to start really boldly and drop you in the deep end — here you go, get used to it! Editing is everything.

There are quite a few VFX. How did that work?
Obviously, there’s the really big VFX sequence when Lloyd goes into his “fever dreams” and imagines himself shrunk down on the set of the neighborhood and inside the castle. We planned that right from the start and did greenscreen — my first time ever — which I loved. And even the practical miniature sets all needed VFX to integrate them into the story. We also had seasonal stuff, period-correct stuff, cleanup and so on. Phosphene in New York did all the VFX.

Talk about the importance of sound and music.
My composer’s also my brother, and he starts very early on so the music’s always an integral part of post and not just something added at the end. He’s writing while we shoot, and we also had a lot of live music we had to pre-record so we could film it on the day. There’s a lot of singing too, and I wanted it to sound live and not overly produced. So when Tom’s singing live, I wanted to keep that human quality, with all the little mouth sounds and any mistakes. I left all that in purposely. We never used a temp score since I don’t like editing to temp music, and we worked closely with the sound guys at Harbor in integrating all of the music, the singing, the whole sound design.

How important is the DI to you?
Hugely important, and we finessed a lot with colorist Sam Daley. When you're doing a period piece, color is crucial in making it feel authentic to that world. Jody and Sam have worked together for a long time, and they worked very hard on the LUT before we began; every department was aware of the color palette and how we wanted it to look and feel.

What’s next?
I just started a new company called Defiant By Nature, where I’ll be developing and producing TV projects by other people. As for movies, I’m taking a little break.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Atomos Shogun 7 updated to 10.4, offers multicam switching

Atomos has updated its Shogun 7 HDR monitor-recorder/switcher designed to make multicamera filmmaking available to a variety of content creators. The update adds touch-controlled switching, quad monitoring and ISO recording functionality to the Shogun 7. This allows users to accurately switch back and forth between four live HD SDI video streams up to 1080p/60fps.

Tapping on an input stream’s window in the quad-view will switch to that source as the program feed — which is then output via HDMI, SDI or both simultaneously. You can record all four streams, with the switched program output as a fifth stream. This is done asynchronously, without the need for genlocked sources.

The streams are all recorded to the same high-performance SSD drive and can be ready to edit right after the shoot ends. Each is recorded as a separate ISO in either Apple ProRes or Avid DNx. This helps post production with the convenience of having multiple, synchronized camera angles when they are in the edit suite.

All input switches are recorded in metadata with a choice of transition type. Once users have finished capturing the streams, they can import the resulting Apple Final Cut XML file, along with the ISOs, into an NLE and the timeline automatically populates with all the transitions in place.
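As a rough illustration of what that hand-off can look like downstream, the sketch below walks a generic FCPXML file and lists the clips and transitions on the main storyline. The element names follow Apple's published FCPXML schema in general terms; the exact file a Shogun 7 writes may differ, so treat this as an assumption rather than Atomos documentation.

```python
# Hypothetical sketch: list clips and transitions in a generic FCPXML file.
# Element names assume Apple's FCPXML schema; not based on Atomos's own files.
import xml.etree.ElementTree as ET

def summarize_fcpxml(path: str) -> None:
    """Print the clips and transitions found on each storyline (spine)."""
    root = ET.parse(path).getroot()            # <fcpxml> document root
    for spine in root.iter("spine"):           # a sequence's main storyline
        for item in spine:
            if item.tag == "asset-clip":
                print(f"clip:       {item.get('name')} (offset {item.get('offset')})")
            elif item.tag == "transition":
                print(f"transition: {item.get('name')} (duration {item.get('duration')})")

# Example use once the switcher's XML and ISO files are offloaded:
# summarize_fcpxml("switched_program.fcpxml")
```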

In terms of audio, every ISO can record the two-channel embedded digital audio from each source, as well as analog stereo channels coming into the Shogun 7 (via optional XLR breakout cable). The program stream always records the analog feed and can switch between audio inputs for the on-camera audio to match the switched feed.

On-screen, you can choose to view the four input streams at once in a quad-view or view any one of them full-screen. In full-screen you can access tools like waveforms, magnify the image or engage peaking to check focus for each angle, and make any adjustments to get the perfect HDR or SDR shots.

“This latest update takes the Shogun 7 — a product that costs less than $1,500 — to a whole new level by giving a new storytelling tool to our customers,” says Jeromy Young, CEO of Atomos. “With live switching now available to any filmmaker, on a simple-to-use, touchscreen device, we’re taking multicamera video out of the purely professional domain and giving every content creator access.”

Atomos 10.4 is available now as a free download from the Atomos website. It replaces the earlier limited beta, and users of that version should update to Atomos 10.4.

Behind the Title: Cutters editor Steve Bell

“I’ve always done a fair amount of animation design, music rearranging and other things that aren’t strictly editing, but most editors are expected to play a role in aspects of the post process that aren’t strictly editing.”

Name: Steve Bell

What’s your job title?
Editor

Company: Cutters Editorial

Can you describe your company?
Cutters is part of a global group of companies offering offline editing, audio engineering, VFX and picture finishing, production and design – all of which fall under Cutters Studios. Here in New York, we do traditional broadcast TV advertising and online content, as well as longer format work and social media content for brands, directors and various organizations that hire us to develop a concept, shoot and direct.

Cutters New York

What’s your favorite part of the job?
There’s a stage to pretty much every project where I feel I’ve gotten a good enough grasp of the material that I can connect the storytelling dots and see it come to life. I like problem solving and love the feeling you get when you know you’ve “figured it out.”

Depending on the scale of the project, it can start a few hours in, a few days in or a few weeks in, but once it hits you can’t stop until you see the piece finished. It’s like reading a good page-turner; you can’t put it down. That’s the part of the creative process I love and what I like most about my job.

What’s your least favorite?
It’s those times when it becomes clear that I’ve/we’ve probably looked at something too many times to actually make it better. That certainly doesn’t happen on many jobs, but when it does, it’s probably because too many voices have had a say; too many cooks in the kitchen, as they say.

What is your most productive time of the day?
Early in the morning. I’m most clearheaded at the very beginning of the day, and then sometimes toward the very end of a long day. But those times also happen to be when I’m most likely to be alone with what I’m working on and free from other distractions.

If you didn’t have this job, what would you be doing instead? 
Baseball player? Astronaut? Joking. But let's face it, we all fantasize about fulfilling the childhood dreams that are completely different from what we do. To be truthful, I'm sure I'd be doing some kind of writing, because it was my desire to be a writer, particularly of film, that indirectly led me to be an editor.

Why did you choose this profession? How early on did you know this would be your path?
Well, the simple answer is probably that I had opportunities to edit professionally at a relatively young age, which forced me to get better at editing way before I had a chance to get better at writing. If I keep editing, I may never know if I can write!

Stella Artois

Can you name some recent projects you have worked on?
The Dwyane Wade Budweiser retirement film, Stella Artois holiday spots, a few films for the Schott/Hamilton watch collaboration. We did some fun work for Rihanna’s Savage X Fenty release. Early in the year I did a bunch of lovely spots for Hallmark Hall of Fame programming.

Do you put on a different hat when cutting for a specific genre?
For sure. There are overlapping tasks, but I do believe it takes a different set of skills to do good dramatic storytelling than it takes to do straight comedy, or doc or beauty. Good “Storytelling” (with a capital ‘S’) is helpful in all of it — I’d probably say crucial. But it comes down to the important element that’s used to create the story: emotion, humor, rhythm, etc. And then you need to know when it needs to be raw versus formal, broad versus subtle and so forth. Different hats are needed to get that exactly right.

What is the project that you are most proud of and why?
I’m still proud of the NHL’s No Words spot I worked on with Cliff Skeete and Bruce Jacobson. We’ve become close friends as we’ve collaborated on a lot of work since then for the NHL and others. I love how effective that spot is, and I’m proud that it continues to be referenced in certain circles.

NHL No Words

In a very different vein, I think I’m equally proud of the work I’ve done for the UN General Assembly meetings, especially the film that accompanied Kathy Jetnil-Kijiner’s spoken word performance of her poem “Dear Matafele Peinem” during the opening ceremonies of the UN’s first Climate Change conference. That’s an issue that’s very important to me and I’m grateful for the chance to do something that had an impact on those who saw it.

What do you use to edit?
I’m a Media Composer editor, and it probably goes back to the days when I did freelance work for Avid and had to learn it inside out. The interface at least is second nature to me. Also, the media sharing and networking capabilities of Avid make it indispensable. That said, I appreciate that Premiere has some clear advantages in other ways. If I had to start over I’m not sure I wouldn’t start with Premiere.

What is your favorite plugin?
I use a lot of Boris FX plugins for stabilization, color correction and so forth. I used to use After Effects often, and Boris FX offers a way of achieving some of what I once did exclusively in After Effects.

Are you often asked to do more than edit? If so, what else are you asked to do?
I’ve always done a fair amount of animation design, music rearranging and other things that aren’t strictly editing, but most editors are expected to play a role in aspects of the post process that aren’t strictly “film editing.”

Many of my clients know that I have strong opinions about those things, so I do get asked to participate in music and animation quite often. I'm also sometimes asked to help with the write-ups of what we've done in the edit because I like talking about the process and clarifying what I've done. If you can explain what you've done, you're probably that much more confident about the reasons you did it. It can be a good way to call "bullshit" on yourself.

This is a high stress job with deadlines and client expectations. What do you do to de-stress from it all?
Yeah, right?! It can be stressful, especially when you’re occasionally lucky enough to be busy with multiple projects all at once. I take decompressing very seriously. When I can, I spend a lot of time outdoors — hiking, biking, you name it — not just for the cardio and exercise, which is important enough, but also because it’s important to give your eyes a chance to look off into the distance. There are tremendous physical and psychological benefits to looking to the horizon.

Shape+Light VFX boutique opens in LA with Trent, Lehr at helm


Visual effects and design boutique Shape+Light has officially launched in Santa Monica. At the helm are managing director/creative director Rob Trent and executive producer Cara Lehr. Shape+Light provides visual effects, design and finishing services for agency and brand-direct clients. The studio, which has been quietly operating since this summer, has already delivered work for Nike, Apple, Gatorade, Lexus and Procter & Gamble.

Gatorade

Trent is no stranger to running VFX boutiques. An industry veteran, he began his career as a Flame artist, working at studios including Imaginary Forces and Digital Domain, and then at Asylum VFX as a VFX supervisor/creative director before co-founding The Mission VFX in 2010. In 2015, he established Saint Studio. During his career he has worked on big campaigns, including the launch of the Apple iPhone with David Fincher, celebrating the NFL with Nike and Michael Mann, and honoring moms with Alma Har’el and P&G for the Olympics. He has also contributed to award-winning feature films such as The Curious Case of Benjamin Button, Minority Report, X-Men and Zodiac.

Lehr is an established VFX producer with over 20 years of experience in both commercials and features. She has worked for many of LA’s leading VFX studios, including Zoic Studios, Asylum VFX, Digital Domain, Brickyard VFX and Psyop. She most recently served as EP at Method Studios, where she was on staff since 2012. She has worked on ad campaigns for brands including Apple, Microsoft, Nike, ESPN, Coca Cola, Taco Bell, AT&T, the NBA, Chevrolet and more.

Frame.io platform is now on the iPad

Frame.io’s review and collaboration platform is now available on the iPad. The company acknowledges there are times when watching dailies, evaluating VFX shots or getting a better sense of composition and color benefits from a larger display than a smartphone can offer. Enter Frame.io for iPad. They also point to the tablet’s high-resolution screen, which allows users to view “true-to-life” color.

New features include a split view, which lets users keep Frame.io in view on one side of the iPad screen while using apps like Final Draft, Slack or FaceTime on the other. There is also the ability to draw detailed annotations with Apple Pencil. Users can fine-tune stills or moving images, create illustrations, or work in Photoshop and import assets into Frame.io right on the iPad.

In addition to this latest product news, Frame.io recently received $50 million in Series C funding led by Insight Partners. The company’s co-founders and brain trust say they will use this money to keep enhancing their product and further embrace cloud-based workflows.

In talking about these types of workflows, Frame.io co-founder Emery Wells says, “It’s not so much about how the Frame.io platform will continue to develop for the cloud, but how the development of the cloud will enable new platform capabilities.” He points to 5G as an example. “We’ve watched many industries go through the cloud adoption curve, and the filmmaking industry is at the precipice of that taking off. We just haven’t been able to get data into the cloud; that’s been the big blocker. Cameras can produce anywhere from 2TB to 5TB of footage per camera, per day — that is big data. Now that gigabit connections are becoming commonplace, we can start moving the data into the cloud.”
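
As a rough sanity check on those figures, here is a quick back-of-the-envelope calculation, with idealized assumptions (a fully saturated link and no protocol overhead), of how long a 2TB to 5TB camera day takes to move over different connections.

```python
# Idealized transfer-time estimate for a day of camera footage.
# Assumes a fully saturated link with no protocol overhead, so real-world
# times will be longer; the figures are illustrative only.

def transfer_hours(terabytes: float, link_gbps: float) -> float:
    bits = terabytes * 8e12                  # 1 TB (decimal) = 8e12 bits
    return bits / (link_gbps * 1e9) / 3600   # seconds -> hours

for tb in (2, 5):
    print(f"{tb} TB over 100 Mb/s: {transfer_hours(tb, 0.1):6.1f} hours")
    print(f"{tb} TB over 1 Gb/s:   {transfer_hours(tb, 1.0):6.1f} hours")
```

At gigabit speeds a 2TB day moves in roughly four and a half hours, which is what makes a camera-to-cloud workflow plausible where it wasn't before.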

Wells and company believe the video creation process is finally adopting the cloud. “The virtualization of post is upon us,” he reports. “There will be intense security requirements, but every signal has shown we’ve turned a corner. We’re right at the start of that adoption curve.”

L-R: Michael Cioni and Emery Wells

He points to the recent hire of post veteran Michael Cioni, who joined the Frame.io team to oversee a new initiative the company is calling “camera-to-cloud.” Wells says the goal is to integrate Frame.io right into the camera, allowing you to shoot from camera into Frame.io — into the cloud. “This transforms the way people will work because we’re already integrated into the creative tools, which means you’re shooting right into the editing tools — Premiere, Final Cut, Resolve — making it truly camera-to-cutting room.”

Cioni agrees, but realizes there is a learning curve. “Looking back at how other industry-changing technologies have evolved into Hollywood mainstream practices, we can predict the biggest challenge will likely be in the behavior changes required to join our plan. Blanketed resistance to change can be one of society’s worst characteristics, and we expect there will be a large amount of anxiety about relying more and more on the cloud and having less and less on-prem infrastructure.”

Cioni says the company has already begun to examine the value of education so they can work with the community to help reduce anxiety by listening to their concerns. “In order to do that, it’s imperative that we are able to engage in healthy conversations about what leaning more on the cloud means to each company, each department and each individual. We agree with the MovieLabs white paper, but the opportunity lies not only within who is willing to invest in cloud infrastructure, but who is willing to work with the community and strategically deploy each new iteration all within the right timing.”

Review: The Sensel Morph hardware interface

By Brady Betzel

As an online editor and colorist, I have tried a lot of hardware interfaces designed for apps like Adobe Premiere, Avid Media Composer, Blackmagic DaVinci Resolve and others. With the exception of professional color correction surfaces like the FilmLight Baselight, the Resolve Advanced Panel and Tangent’s Element color correction panels, it’s hard to get exactly what I need.

While they typically work well, there is always a drawback for my workflow; usually they are missing one key shortcut or feature. Enter Sensel Morph, a self-proclaimed morphable hardware interface. In reality, it is a pressure-sensitive trackpad that uses separately purchased magnetic rubber overlays and keys for a variety of creative applications. It can also be used as a pressure-sensitive trackpad without any overlays.

For example, inside of the Sensel app you can identify the Morph as a trackpad and click “Send Map to Morph,” and it will turn itself into a large trackpad. If you are a digital painter, you can turn the Morph into “Paintbrush Area” and use a brush and/or your fingers to paint! Once you understand how to enable the different mappings you can quickly and easily Morph between settings.

For this review, I am going to focus on how you can use the Sensel Morph with Adobe Premiere Pro. For the record, you can actually use it with any NLE by creating your own map inside of the Sensel app. The Morph essentially works with keyboard shortcuts for NLEs. With that in mind, if you customize your keyboard shortcuts you are going to want to enable the default mapping inside of Premiere or adjust your settings to match the Sensel Morph’s settings.
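
Purely as an illustration of why matching your shortcuts matters, here is a toy comparison of the keys an overlay might send against a customized Premiere layout. The control names and key assignments below are assumptions for the example, not Sensel's actual map files.

```python
# Toy example only: compare the shortcuts an overlay is assumed to send
# against a hypothetical customized Premiere keyboard layout, and flag any
# controls that would no longer do what the overlay's label says.

overlay_sends = {                 # control label -> assumed Premiere default
    "Ripple Trim Previous": "Q",
    "Ripple Trim Next": "W",
    "Add Edit": "Ctrl+K",
    "Play/Stop": "Space",
}

custom_layout = {                 # hypothetical user remapping
    "Ripple Trim Previous": "Q",
    "Ripple Trim Next": "E",      # remapped away from the default
    "Add Edit": "Ctrl+K",
    "Play/Stop": "Space",
}

for control, key in overlay_sends.items():
    if custom_layout.get(control) != key:
        print(f"'{control}': overlay sends {key}, "
              f"but your layout maps it to {custom_layout.get(control)}")
```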

Before you plug in your Morph, you will need to click over to https://sensel.com/pages/support, where you can get a quick-start guide in addition to the Sensel app you will need to install before you get working. After it’s downloaded and installed, you will want to plug in the Morph via USB and let it charge before using the Bluetooth connection. It took a while for the Morph to fully charge, about two hours, but once I installed the Sensel app, added the Video Editing Overlay and opened Adobe Premiere, I was up and working.

To be honest, I was a little dubious about the Sensel Morph. A lot of these hardware interfaces have come across my desk, and they usually have poor software implementation, or the hardware just doesn’t hold up. But the Sensel Morph broke through my preconceived ideas of hardware controllers for NLEs like Premiere, and for the first time in a long time, I was inspired to use Premiere more often.

It’s no secret that I learned professional editing in Avid Media Composer and Symphony. And most NLEs can’t quite match the professional experience I’ve had in Symphony. Part of that experience is how fluidly the keyboard and Wacom tablet work together. The first time I plugged in the Sensel Morph, overlaid the Video Editing Overlay on top of the Morph and opened Premiere, I began to have that same feeling but inside of Premiere!

While there are still things Premiere has issues with, the Sensel Morph really got me feeling good about how well this Adobe NLE worked. And to be honest, some of those issues relate to me not learning Premiere’s keyboard shortcuts like I did in Avid. The Sensel Morph felt like a natural addition to my Premiere editing workflow. It was the first time I started to feel that “flow state” inside of Premiere that I previously got into when using Media Composer or Symphony, and I started trimming and editing like a madman. It was kind of shocking to me.

You may be thinking that I am blowing this out of proportion, and maybe I am, a little, but the Morph immediately improved my lazy Premiere editing. In fact, I told someone that Adobe should package these with first-time Premiere users.

I really like the way the timeline navigation works (much like the touch bar). I also like the quick Ripple Left/Right commands, and I like how you can quickly switch timelines by pressing the “Timeline” button multiple times to cycle through them. I did feel like I needed a mouse and keyboard some of the time, but for about 60% of the time I could edit without them. Much like how I had to force myself to use a Wacom tablet for editing, if you try not to use a mouse I think you will get by just fine. I did try to use a Wacom stylus with the Sensel Morph and, unfortunately, it did not work.

What improvements could the Sensel Morph make? Specifically in Premiere, I wish they had a full-screen shortcut (“`”) labeled on the Morph. It’s one of those shortcuts I use all the time, whether I want to see my timeline full screen, the effects controls full screen or the Program feed full screen. And while I know I could program it using the Sensel app, the OCD in me wants to see that reflected on the keys. While we are on the subject of keys and overlays, I do find it a little hard to use when I customize the key presses. Maybe ordering a custom printed overlay could assuage this concern.

One thing I found odd was the GPU usage that the Sensel app needed. My laptop’s fans were kicking on, so I opened up Task Manager and saw that the Sensel app was taking 30% of my Nvidia RTX 2080. Luckily, you really only need it open when changing overlays or turning it into a trackpad, but I found myself leaving it open by accident, which could really hurt performance.

Summing Up
In the end, is the Sensel Morph really worth the $249? It does come with one free overlay of your choice and a one-year warranty, but if you want more overlays, those will set you back $35 to $59 each, depending on the overlay.

The Video Editing overlay is $35, while the new Buchla Thunder overlay is $59, and there are several other options to choose from, including traditional Keyboard, Piano Key, Music Production and Drum Pad overlays. If you are a one-person band that goes between Premiere and apps like Ableton, then it’s 100 percent worth it. If you use Premiere a lot, I still think it is worth it. The iPad Mini-like size and weight are really nice, and when using it over Bluetooth you feel untethered. Its sleek, thin design lets you bring this morphable hardware interface anywhere you take your laptop or tablet.

The Sensel Morph is not like any of the other hardware interfaces I have used. Not only is it extremely mobile, but it works well and is compatible with a lot of content creation apps that pros use daily. They really delivered on this one.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Ford v Ferrari’s co-editors discuss the cut

By Oliver Peters

After a failed attempt to acquire European carmaker Ferrari, an outraged Henry Ford II sets out to trounce Enzo Ferrari on his own playing field — automobile endurance racing. That is the plot of 20th Century Fox’s Ford v Ferrari, directed by James Mangold. When Ford’s in-house effort falls short, he turns to independent car designer Carroll Shelby (Matt Damon). Shelby’s outspoken lead test driver Ken Miles (Christian Bale) complicates the situation by making an enemy out of Ford senior VP Leo Beebe.

Michael McCusker

Nevertheless, Shelby and his team are able to build one of the greatest race cars ever — the GT40 MkII — setting up a showdown between the two auto legends at the 1966 24 Hours of Le Mans.

The challenge of bringing this clash of personalities to the screen was taken on by director James Mangold (Logan, Wolverine, 3:10 to Yuma) and his team of long-time collaborators.

I recently spoke with film editors Michael McCusker, ACE, (Walk the Line, 3:10 to Yuma, Logan) and Andrew Buckland (The Girl On the Train) — both of whom were recently nominated for an Oscar and ACE Eddie Award for their work on the film — about what it took to bring Ford v Ferrari together.

The post team for this film has worked with James Mangold on quite a few films. Tell me a bit about the relationship.
Michael McCusker: I cut my very first movie, Walk the Line, for Jim 15 years ago and have since cut his last six movies. I was the first assistant editor on Kate & Leopold, which was shot in New York in 2001. That’s where I met Andrew, who was hired as one of the local New York film assistants. We became fast friends. Andrew moved to LA in 2009, and I hired him to assist me on Knight & Day.

Andrew Buckland

I always want to keep myself available for Jim — he chooses good material, attracts great talent and is a filmmaker who works across multiple genres. Since I’ve worked with him, I’ve cut a musical movie, a western, a rom-com, an action movie, a straight-up superhero movie, a dystopian superhero movie and now a racing film.

As a film editor, it must be great not to get typecast for any particular cutting style.
McCusker: Exactly. I worked for David Brenner for years as his first. He was able to cross genres, and that’s what I wanted to do. I knew even then that the most important decisions I would make would be choosing projects. I couldn’t have foreseen that Jim was going to work across all these genres — I simply knew that we worked well together and that the end product was good.

In preparing for Ford v Ferrari, did you study any other recent racing films, like Ron Howard’s Rush?
McCusker: I saw that movie, and liked it. Jim was aware of it, too, but I think he wanted to do something a little more organic. We watched a lot of older racing films, like Steve McQueen’s Le Mans and John Frankenheimer’s Grand Prix.

Jim’s original intention was to play the racing in long takes and bring the audience along for the ride. As he was developing the script, and we were in preproduction, it became clear that there was more drama for him to portray during the racing sequences than he anticipated. So the races took on more of an energized pace.

Energized in what way? Do you mean in how you cut it or in a change of production technique, like more stunt cameras and angles?
McCusker: I was fortunate to get involved about two-and-a-half months prior to the start of production. We were developing the Le Mans race in previs. This required a lot of editing and discussions about shot design and figuring out what the intercutting was going to be during that sequence, which is like the fourth act of the movie.

You’re dealing with Mollie and Peter [Miles’ wife and son] at home watching the race, the pit drama, what’s going on with Shelby and his crew, with Ford and Leo Beebe and also, of course, what’s going on in the car with Ken. It’s a three-act movie unto itself, so Jim was trying to figure out how it was all going to work before he had to shoot it. That’s where I came in. The frenetic pace of Le Mans was more a part of the writing process — and part of the writing process was the previs. The trick was how to make sure we weren’t just following cars around a track. That’s where redundancy can tend to beleaguer an audience in racing movies.

What was the timeline for production and post?
McCusker: I started at the end of May 2018. Production began at the beginning of August and went all the way through to the end of November. We started post in earnest at the beginning of November of last year, took some time off for the holidays, and then showed the film to the studios around February or March.

When did you realize you were going to need help?
McCusker: The challenge was that there was going to be a lot of racing footage, which meant there was going to be a lot of footage. I knew I was going to need a strong co-editor, so Andrew was the natural choice. He had been cutting on his own and cutting with me over the years. We share a common approach to editing and have a similar aesthetic.

There was a point when things got really intense and we needed another pair of hands, so I brought in Dirk Westervelt to help out for a couple of months. That kept our noses above water, but the process was really enjoyable. We were never in a crisis mode. We got a great response from preview audiences and, of course, that calms everybody down. At that point it was just about quality control and making sure we weren’t resting on our laurels.

How long was your initial cut, and what was your process for trimming the film down to the present run time?
McCusker: We’re at 2:30:00 right now and I think the first cut was 3:10 or 3:12. The Le Mans section was longer. The front end of the movie had more scenes in it. We ended up lifting some scenes and rearranging others. Plus, the basic trimming of scenes brought the length down.

But nothing was the result of a panic, like, “Oh my God, we’ve got to get to 2:30!” There were no demands by the studio or any pressures we placed upon ourselves to hit a particular running time. I like to say that there’s real time and there’s cinematic time. You can watch Once Upon a Time in America, which is 3:45 and feels like it’s an hour. Or you can watch an 89-minute movie and feel like it’s drudgery. We just wanted to make sure we weren’t overstaying our welcome.

How extensively did you rearrange scenes during the edit? Or did the structure of the film stay pretty much as scripted?
McCusker: To a great degree it stayed as scripted. We had some scenes in the beginning that we felt were a little bit tangential and weren’t serving the narrative directly, and those were cut.

The real endeavor of this movie starts the moment that these two guys [Shelby and Miles] decide to tackle the challenge of developing this car. There’s a scene where Miles sees the car for the first time at LAX. We understood that we had to get to that point in a very efficient way, but also set up all the other characters — their motives and their desires.

It’s an interesting movie, because it starts off with a lot of characters. But then it develops into a movie about two guys and their friendship. So it goes from an ensemble piece to being about Ken and Carroll, while at the same time the scope of the movie is opening up and becoming larger as the racing is going on. For us, the trickiest part was the front end — to make sure we spent enough time with each character so that we understood them, but not so much time that audience would go, “Enough already! Get on with it!”

Did that help inform your cutting style for this film?
McCusker: I don’t think so. Where it helped was knowing the sound of the broadcasters and race announcers. I liked Chris Economaki and Jim McKay — guys who were broadcasting the races when I was a kid. I was intrigued about how they gave us the narrative of the race. It came in handy while we were making this movie, because we were able to get our hands on some of Jim McKay’s actual coverage of Le Mans and used it in the movie. That brings so much authenticity.

Let’s talk sound. I would imagine the sound design was integral to your rough cuts. How did you tackle that?
Andrew Buckland: We were fortunate to have the sound team on very early during preproduction. We were cutting in a 5.1 environment, so we wanted to create sound design early. The engine sounds might not have been the exact sounds that would end up in the final, but they were adequate to allow you to experience the scenes as intended. Because we needed to get Jim’s response early, some of the races were cut with the production sound — from the live mics during filming. This allowed Jim and us to quickly see how the scenes would flow.

Other scenes were cut strictly MOS because the sound design would have been way too complicated for the initial cut of the scene. Once the scene was cut visually, we’d hand over the scene to sound supervisor Don Sylvester, who was able to provide us with a set of 5.1 stems. That was great, because we could recut and repurpose those stems for other races.

McCusker: We had developed a strategy with Don to split the sound design into four or five stems to give us enough discrete channels to recut these sequences. The stems were a palette of interior perspectives, exterior perspectives, crowds, car-bys, and so on. By employing this strategy, we didn’t need to continually turn over the cut to sound for patch-up work.

Then, as Don went out and recorded the real cars and was developing the actual sounds for what was going to be used in the mix, he’d generate new stems and we would put them into the Media Composer. This was extremely informative to Jim, because he could experience our Avid temp mix in 5.1 and give notes, which ultimately informed the final sound design and the mix.

What about temp music? Did you also weave that into your rough cuts?
McCusker: Ted Caplan, our music editor, has also worked with Jim for 15 years. He’s a bit of a renaissance man — a screenwriter, a novelist, a one-time musician and a sound designer in his own right. When he sits down to work with music, he’s coming at it from a story point-of-view. He has a very instinctual knowledge of where music should start, and it happens to dovetail into the aesthetic that Jim, Andrew, and I are working toward. None of us like music to lead scenes in a way that anticipates what the scene is going to be about before you experience it.

For this movie, it was challenging to develop what the musical tone of the movie would be. Ted was developing the temp track along with us from a very early stage. We found over time that not one particular musical style was going to work. This is a very complex score. It includes a kind of surf-rock sound with Carroll Shelby in LA, an almost jaunty, lounge jazz sound for Detroit and the Ford executives, and then the hard-driving rhythmic sound for the racing.

The final score was composed by Marco Beltrami and Buck Sanders.

I presume you were housed in multiple cutting rooms at a central facility.
McCusker: We cut at 20th Century Fox, where Jim has a large office space. We cut Logan and Wolverine there before this movie. It has several cutting spaces and I was situated between Andrew and Don. Ted was next to Don and John Berri, our additional editor. Assistants were right around the corner. It makes for a very efficient working environment.

Since the team was cutting with Avid Media Composer, did any of its features stand out to you for this film?
Both: FluidMorph! (laughing)

McCusker: FluidMorph, speed-ramping — we often had to manipulate the shot speeds to communicate the speed of the cars. A lot of these cars were kit cars that could drive safely at a certain speed for photography, but not at race speed. So we had to manipulate the speed a lot to get the sense of action that these cars have.
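
As a simple illustration of the retime math involved (the speeds below are hypothetical, not figures from the production):

```python
# Hypothetical speed-ramp math: how much a shot must be retimed so a car
# photographed at a safe speed reads as race speed. Numbers are illustrative.

safe_speed_mph = 60        # assumed safe driving speed for photography
target_speed_mph = 180     # assumed apparent speed needed on screen

retime = target_speed_mph / safe_speed_mph
print(f"Playback speed: {retime:.0%}")   # 300% -> three source frames per output frame
# Integer speed-ups simply skip frames; non-integer ramps are where
# motion-interpolation tools help keep the movement smooth.
```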

What about Avid’s ScriptSync? I know a lot of narrative editors love it.
McCusker: I used ScriptSync once a few years ago and I never cut a scene faster. I was so excited. Then I watched it, and it was terrible. To me there’s so much more to editing than hitting the next line of dialogue. I’m more interested in the lines between the lines — subtext. I do understand the value of it in certain applications. For instance, I think it’s great on straight comedy. It’s helpful to get around and find things when you are shooting tons of coverage for a particular joke. But for me, it’s not something I lean on. I mark up my own dailies and find stuff that way.

Tell me a bit more about your organizational process. Do you start with a Kem roll or stringouts of selected takes?
McCusker: I don’t watch dailies, at least in a traditional sense. I don’t start in the morning, watch the dailies and then cut. And I don’t ask my assistants to organize any of my dailies in bins. I come in and grab the scene that I have in front of me. I’ll look at the last take of every set-up quickly and then I spend an enormous amount of time — particularly on complex scenes — creating a bin structure that I can work with.

Sometimes it’s the beats in a scene, sometimes I organize by shot size, sometimes by character — it depends on what’s driving the scene. I learn my footage by organizing it. I remember shot sizes. I remember what was shot from set-up to set-up. I have a strong visual memory of where things are in a bin. So, if I ask an assistant to do that, then I’m not going to remember it. If there are a lot of resets or restarts in a take, I’ll have the assistant mark those up. But, I’ll go through and mark up beats or pivotal points in a scene, or particularly beautiful moments, and then I’ll start cutting.

Buckland: I’ve adopted a lot of Mike’s methodology, mainly because I assisted Mike on a few films. But it actually works for me, as well. I have a similar aesthetic to Mike.

Was this shot digitally?
McCusker: It was primarily shot with ARRI Alexa 65 LFs, plus some other small-format cameras. A lot of it was shot with old anamorphic lenses on the Alexa that allowed them to give it a bit of a vintage feeling. It’s interesting that as you watch it, you see the effect of the old lenses. There’s a fall-off on the edges, which is kind of cool. There were a couple of places where the subject matter was framed into the curve of the lens, which affects the focus. But we stuck with it, because it feels “of the time.”

Since the film takes place in the 1960s and has a lot of racing sequences, I assume there are a lot of VFX?
McCusker: The whole movie is a period film and we would temp certain things in the Avid for the rough cuts. John Berri was wrangling visual effects. He’s a master in the Avid and also Adobe After Effects. He has some clever ways of filling in backgrounds or greenscreens with temp elements to give the director an idea of what’s going to go there. We try to do as much temp work in the Avid as we are capable of doing, but there’s so much 3D visual effects work in this movie that we weren’t able to do that all of the time.

The racing is real. The cars are real. The visual effects work was for a lot of the backgrounds. The movie was shot almost entirely in Los Angeles with some second unit footage shot in Georgia. The modern-day Le Mans track isn’t at all representative of what Le Mans was in 1966, so there was no way to shoot that. Everything had to be doubled and then augmented with visual effects. In addition to Georgia, where they shot most of the actual racing for Le Mans, they went to France to get some shots of the actual town of Le Mans. I think only about four of those shots are left. (laughs)

Any final thoughts about how this film turned out?
McCusker: I’m psyched that people seem to like the film. Our concern was that we had a lot of story to tell. Would we wear audiences out? We continually have people tell us, “That was two and a half hours? We had no idea.” That’s humbling for us and a great feeling. It’s a movie about these really great characters with great scope and great racing. You can put all the big visual effects in a film that you want to, but it’s really about people.

Buckland: I agree. It’s more of a character movie with racing. Also, because I am not a racing fan per se, the character drama really pulled me into the film while working on it.


Oliver Peters is an experienced film and commercial editor/colorist. In addition, he regularly interviews editors for trade publications. He may be contacted through his website at oliverpeters.com.

Colorfront’s Express Dailies 2020 for Mac Pro, new rental model

Coinciding with Apple’s launch of the latest Mac Pro workstation, Colorfront announced a new, annual rental model for Colorfront Express Dailies.

Launching in Q1 2020, Colorfront’s subscription service allows users to rent Express Dailies 2020 for an annual fee of $5,000, including maintenance support, updates and upgrades. Additionally, the availability of Apple’s brand-new Pro Display XDR, designed for use with the new Mac Pro, makes on-set HDR monitoring, enabled by Colorfront systems, more cost effective.

Express Dailies 2020 supports 6K HDR/SDR workflow along with the very latest camera and editorial formats, including Apple ProRes and Apple ProRes RAW, ARRI MXF-wrapped ProRes, ARRI Alexa LF and Alexa Mini LF ARRIRAW, Sony Venice 5.0, Blackmagic RAW 1.5, and Codex HDE (High Density Encoding).

Express Dailies 2020 is optimized for 6K HDR/SDR dailies processing on the new Mac Pro running MacOS Catalina, leveraging the performance of the Mac Pro’s Intel Xeon 28 core CPU processor and multi-GPU rendering.

“With the launch of the new Mac Pro and Apple Pro Display XDR, we identified a new opportunity to empower top-end DITs and dailies facilities to adopt HDR workflows on a wide range of high-end TV and motion picture productions,” says Aron Jaszberenyi, managing director of Colorfront. “When combined with the new Mac Pro and Pro Display XDR, the Express Dailies 2020 subscription model gives new and cost-effective options for filmmakers wanting to take full advantage of 6K HDR/SDR workflows and HDR on-set.”

 

The 70th annual ACE Eddie Award nominations

The American Cinema Editors (ACE), the honorary society of the world’s top film editors, has announced its nominations for the 70th Annual ACE Eddie Awards recognizing outstanding editing in 11 categories of film, television and documentaries.

For the first time in ACE’s history, three foreign language films are among the nominees, including The Farewell, I Lost My Body and Parasite, despite there not being a specific category for films predominantly in a foreign language.

Winners will be revealed during a ceremony on Friday, January 17 at the Beverly Hilton Hotel, presided over by ACE president Stephen Rivkin, ACE. Final ballots open December 16 and close on January 6.

Here are the nominees:

BEST EDITED FEATURE FILM (DRAMA):
Ford v Ferrari
Michael McCusker, ACE & Andrew Buckland

The Irishman
Thelma Schoonmaker, ACE

Joker 
Jeff Groth

Marriage Story
Jennifer Lame, ACE

Parasite
Jinmo Yang

BEST EDITED FEATURE FILM (COMEDY):
Dolemite is My Name
Billy Fox, ACE

The Farewell
Michael Taylor & Matthew Friedman

Jojo Rabbit
Tom Eagles

Knives Out
Bob Ducsay

Once Upon a Time in Hollywood
Fred Raskin, ACE

BEST EDITED ANIMATED FEATURE FILM:
Frozen 2
Jeff Draheim, ACE

I Lost My Body
Benjamin Massoubre

Toy Story 4
Axel Geddes, ACE

BEST EDITED DOCUMENTARY (FEATURE):
American Factory
Lindsay Utz

Apollo 11
Todd Douglas Miller

Linda Ronstadt: The Sound of My Voice
Jake Pushinsky, ACE & Heidi Scharfe, ACE

Making Waves: The Art of Cinematic Sound
David J. Turner & Thomas G. Miller, ACE

BEST EDITED DOCUMENTARY (NON-THEATRICAL):
Abducted in Plain Sight
James Cude

Bathtubs Over Broadway
Dava Whisenant

Leaving Neverland
Jules Cornell

What’s My Name: Muhammad Ali
Jake Pushinsky, ACE

BEST EDITED COMEDY SERIES FOR COMMERCIAL TELEVISION:
Better Things: “Easter”
Janet Weinberg, ACE

Crazy Ex-Girlfriend: “I Need To Find My Frenemy” 
Nena Erb, ACE

The Good Place: “Pandemonium” 
Eric Kissack

Schitt’s Creek: “Life is a Cabaret”
Trevor Ambrose

BEST EDITED COMEDY SERIES FOR NON-COMMERCIAL TELEVISION:
Barry: “berkman > block”
Kyle Reiter, ACE

Dead to Me: “Pilot”
Liza Cardinale

Fleabag: “Episode 2.1”
Gary Dollner, ACE

Russian Doll: “The Way Out”
Todd Downing

BEST EDITED DRAMA SERIES FOR COMMERCIAL TELEVISION:
Chicago Med: “Never Going Back To Normal”
David J. Siegel, ACE

Killing Eve: “Desperate Times”
Dan Crinnion

Killing Eve: “Smell Ya Later”
Al Morrow

Mr. Robot: “401 Unauthorized”
Rosanne Tan, ACE

BEST EDITED DRAMA SERIES FOR NON-COMMERCIAL TELEVISION:
Euphoria: “Pilot”
Julio C. Perez IV

Game of Thrones: “The Long Night”
Tim Porter, ACE

Mindhunter: “Episode 2”
Kirk Baxter, ACE

Watchmen: “It’s Summer and We’re Running Out of Ice”
David Eisenberg

BEST EDITED MINISERIES OR MOTION PICTURE FOR TELEVISION:
Chernobyl: “Vichnaya Pamyat”
Jinx Godfrey & Simon Smith

Fosse/Verdon: “Life is a Cabaret”
Tim Streeto, ACE

When They See Us: “Part 1”
Terilyn A. Shropshire, ACE

BEST EDITED NON-SCRIPTED SERIES:
Deadliest Catch: “Triple Jeopardy”
Ben Bulatao, ACE, Rob Butler, ACE, Isaiah Camp, Greg Cornejo, Joe Mikan, ACE

Surviving R. Kelly: “All The Missing Girls”
Stephanie Neroes, Sam Citron, LaRonda Morris, Rachel Cushing, Justin Goll, Masayoshi Matsuda, Kyle Schadt

Vice Investigates: “Amazon on Fire”
Cameron Dennis, Kelly Kendrick, Joe Matoske, Ryo Ikegami

Main Image: Marriage Story

Maya 2020 and Arnold 6 now available from Autodesk

Autodesk has released Autodesk Maya 2020 and Arnold 6 with Arnold GPU. Maya 2020 brings animators, modelers, riggers and technical artists a host of new tools and improvements for CG content creation, while Arnold 6 allows for production rendering on both the CPU and GPU.

Maya 2020 adds more than 60 new updates, as well as performance enhancements and new simulation features to Bifrost, the visual programming environment in Maya.

Maya 2020

Release highlights include:

— Over 60 animation features and updates to the graph editor and time slider.
— Cached Playback: New preview modes, layered dynamics caching and more efficient caching of image planes.
— Animation bookmarks: Mark, organize and navigate through specific events in time and frame playback ranges.
— Bifrost for Maya: Performance improvements, Cached Playback support and new MPM cloth constraints.
— Viewport improvements: Users can interact with and select dense geometry or a large number of smaller meshes faster in the viewport and UV editors.
— Modeling enhancements: New Remesh and Retopologize features.
— Rigging improvements: Matrix-driven workflows, nodes for precisely tracking positions on deforming geometry and a new GPU-accelerated wrap deformer.

The Arnold GPU is based on Nvidia’s OptiX framework and takes advantage of Nvidia RTX technology. Arnold 6 highlights include:

— Unified renderer: Toggle between CPU and GPU rendering.
— Lights, cameras and more: Support for OSL, OpenVDB volumes, on-demand texture loading, most LPEs, lights, shaders and all cameras.
— Reduced GPU noise: Comparable to CPU noise levels when using adaptive sampling, which has been improved to yield faster, more predictable results regardless of the renderer used.
— Optimized for Nvidia RTX hardware: Scale up rendering power when production demands it.
— New USD components: Hydra render delegate, Arnold USD procedural and USD schemas for Arnold nodes and properties are now available on GitHub.

Arnold 6

— Performance improvements: Faster creased subdivisions, an improved Physical Sky shader and dielectric microfacet multiple scattering.

Maya 2020 and Arnold 6 are available now as standalone subscriptions or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection. Monthly, annual and three-year single-user subscriptions of Arnold are available on the Autodesk e-store.

Arnold GPU is also available to try with a free 30-day trial of Arnold 6. Arnold GPU is available in all supported plug-ins for Autodesk Maya, Autodesk 3ds Max, SideFX Houdini, Maxon Cinema 4D and Foundry Katana.

Company 3 ups Jill Bogdanowicz to co-creative head, feature post  

Company 3 senior colorist Jill Bogdanowicz will now share the title of creative head, feature post with senior colorist Stephen Nakamura. In this new role, she will collaborate with Nakamura to foster communication among artists, operations and management and to design and implement workflows that meet the ever-changing needs of feature post clients.

“Company 3 has been and will always be guided by artists,” says senior colorist/president Stefan Sonnenfeld. “As we continue to grow, we have been formalizing our intra-company communication to ensure that our artists communicate among themselves and with the company as a whole. I’m excited that Jill will be joining Stephen as a representative of our feature colorists. Her years of excellent work and her deep understanding of color science make her a perfect choice for this position.”

Among the kinds of issues Bogdanowicz and Nakamura will address: mentorship within the company, artist recruitment and training, and adapting to emerging workflows and client expectations.

Says Bogdanowicz, “As the company continues to expand, both in size and workload, I think it’s more important than ever to have Stephen and me in a position to provide guidance to help the features department grow efficiently while also maintaining the level of quality our clients expect. I intend to listen closely to clients and the other artists to make sure that their ideas and concerns are heard.”

Bogdanowicz has been a leading feature film colorist since the early 2000s. Recent work includes Joker, Spider-Man: Far From Home and Dr. Sleep, to name a few.

Storage for Visual Effects

By Karen Moltenbrey

When creating visual effects for a live-action film or television project, the artist digs right in. But not before the source files are received and backed up. Of course, during the process, storage again comes into play, as the artist’s work is saved and composited into the live-action file and then saved (and stored) yet again. At mid-sized Artifex Studios and the larger Jellyfish Pictures, two visual effects studios, storage might not be the sexiest part of the work they do, but it is vital to a successful outcome nonetheless.

Artifex Studios
An independent studio in Vancouver, BC, Artifex Studios is a small- to mid-sized visual effects facility producing film and television projects for networks, film studios and streaming services. Founded in 1997 by VFX supervisor Adam Stern, the studio has grown over the years from a one- to two-person operation to one staffed by 35 to 45 artists. During that time it has built up a lengthy and impressive resume, from Charmed, Descendants 3 and The Crossing to Mission to Mars, The Company You Keep and Apollo 18.

To handle its storage needs, Artifex uses the Qumulo QC24 four-node storage cluster for its main storage system, along with G-Tech and LaCie portable RAIDs and Angelbird Technologies and Samsung portable SSD drives. “We’ve been running [Qumulo] for several years now. It was a significant investment for us because we’re not a huge company, but it has been tremendously successful for us,” says Stern.

“The most important things for us when it comes to storage are speed, data security and minimal downtime. They’re pretty obvious things, but Qumulo offered us a system that eliminated one of the problems we had been having with the [previous] system bogging down as concurrent users were moving the files around quickly between compositors and 3D artists,” says Stern. “We have 40-plus people hitting this thing, pulling in 4K, 6K, 8K footage from it, rendering and [creating] 3D, and it just ticks along. That was huge for us.”

Of course, speed is of utmost importance, but so is maintaining the data’s safety. To this end, the new system self-monitors, taking its own snapshots to maintain its own health and making sure there are constantly rotating levels of backups. Having the ability to monitor everything about the system is a big plus for the studio as well.

Because data safety and security is non-negotiable, Artifex uses Google Cloud services along with Qumulo for incremental storage, every night incrementally backing up to Google Cloud. “So while Qumulo is doing its own snapshots incrementally, we have another hard-drive system from Synology, which is more of a prosumer NAS system, whose only job is to do a local current backup,” Stern explains. “So in-house, we have two local backups between Qumulo and Synology, and then we have a third backup going to the cloud every night that’s off-site. When a project is complete, we archive it onto two sets of local hard drives, and one leaves the premises and the other is stored here.” At this point, the material is taken off the Qumulo system, and seven days later, the last of the so-called snapshots is removed.
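
For illustration only, here is a minimal sketch of the kind of nightly incremental pass described above — copy only what changed since the last run. The paths are hypothetical placeholders; Artifex's actual setup relies on Qumulo snapshots, a Synology NAS and Google Cloud, not this script.

```python
# Minimal sketch of a nightly incremental backup pass: copy only files that
# changed since the previous run. Paths are hypothetical; this stands in for
# the Synology/Google Cloud jobs described above, not their actual tooling.

import shutil, time
from pathlib import Path

SOURCE = Path("/mnt/qumulo/projects")       # assumed production volume
DEST = Path("/mnt/synology/nightly")        # assumed local backup target
STAMP = DEST / ".last_backup"

last_run = STAMP.stat().st_mtime if STAMP.exists() else 0.0

for src in SOURCE.rglob("*"):
    if src.is_file() and src.stat().st_mtime > last_run:
        dst = DEST / src.relative_to(SOURCE)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)               # preserves timestamps

STAMP.touch()                                 # record this run's time
print(f"Incremental pass complete at {time.ctime()}")
```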

As soon as data comes into Artifex — either via Aspera, Signiant’s Media Shuttle or hard disks — the material is immediately transferred to the Qumulo system, and then it is cataloged and placed into the studio’s ftrack database, which the studio uses for shot tracking. Then, as Stern says, the floodgates open, and all the artists, compositors, 3D team members and admin coordination team members access the material that resides on the Qumulo system.

Desktops at the studio have local storage, generally an SSD built into the machine, but as Stern points out, that is a temporary solution used by the artists while working on a specific shot, not to hold studio data.

Artifex generally works on a handful of projects simultaneously, including the Nickelodeon horror anthology Are You Afraid of the Dark? “Everything we do here requires storage, and we’re always dealing with high-resolution footage, and that project was no exception,” says Stern. For instance, the series required Artifex to simulate 10,000 CG cockroaches spilling out of every possible hole in a room — work that required a lot of high-speed caching.

“FX artists need to access temporary storage very quickly to produce those simulations. In terms of the Qumulo system, we need it to retrieve files at the speed our effects artists can simulate and cache, and make sure they are able to manage what can be thousands and thousands of files generated just within a few hours.”

Similarly, for Netflix’s Wu Assassins, the studio generated multiple simulations of CG smoke and fog within SideFX Houdini and again had to generate thousands and thousands of cache files for all the particles and volume information. Just as it did with the caching for the CG cockroaches, the current system handled caching for the smoke and fog quite efficiently.

At this point, Stern says the vendor is doing some interesting things that his company has not yet taken advantage of. For instance, today one of the big pushes is working in the cloud and integrating that with infrastructures and workflows. “I know they are working on that, and we’re looking into that,” he adds. There are also some new equipment features, “bleeding-edge stuff” Artifex has not explored yet. “It’s OK to be cutting-edge, but bleeding-edge is a little scary for us,” Stern notes. “I know they are always playing with new features, but just having the important foundation of speed and security is right where we are at the moment.”

Jellyfish Pictures
When it comes to big projects with big storage needs, Jellyfish Pictures is no fish out of water. The studio works on myriad projects, from Hollywood blockbusters like Star Wars to high-end TV series like Watchmen to episodic animation like Floogals and Dennis & Gnasher: Unleashed! Recently, it has embarked on an animated feature for DreamWorks and has a dedicated art department that works on visual development for substantial VFX projects and children’s animated TV content.

To handle all this work, Jellyfish has five studios across the UK: four in London and one in Sheffield, in the north of England. What’s more, in early December, Jellyfish expanded further with a brand-new virtual studio in London seating over 150 artists — increasing its capacity to over 300 people. In line with this expansion, Jellyfish is removing all on-site infrastructure from its existing locales and moving everything to a co-location. This means that all five present locations will be wholly virtual as well, making Jellyfish the largest VFX and animation studio in the world operating this way, contends CTO Jeremy Smith.

“We are dealing with shows that have very large datasets, which, therefore, require high-performance computing. It goes without saying, then, that we need some pretty heavy-duty storage,” says Smith.

Not only must the storage solution be able to handle Jellyfish’s data needs, it must also fit into its operational model. “Even though we work across multiple sites, we don’t want our artists to feel that. We need a storage system that can bring together all locations into one centralized hub,” Smith explains. “As a studio, we do not rely on one storage hardware vendor; therefore, we need to work with a company that is hardware-agnostic in addition to being able to operate in the cloud.”

Also, Jellyfish is a TPN-assessed studio and thus has to work with vendors that are TPN compliant — another serious, and vital, consideration when choosing its storage solution. TPN is an initiative between the Motion Picture Association of America (MPAA) and the Content Delivery and Security Association (CDSA) that provides a set of requirements and best practices around preventing leaks, breaches and hacks of pre-released, high-valued media content.

With all those factors in mind, Jellyfish uses PixStor from Pixit Media for its storage solution. PixStor is a software-defined storage solution that allows the studio to use various hardware storage from other vendors under the hood. With PixStor, data moves seamlessly through many tiers of storage — from fast flash and disk tiers to cost-effective, high-capacity object storage to the cloud. In addition, the studio uses NetApp storage within a different part of the same workflow on Dell R740 hardware and alternates between SSD and spinning disks, depending on the purpose of the data and the file size.

“We’ve future-proofed our studio with the Mellanox SN2100 switch for the heavy lifting, and for connecting our virtual workstations to the storage, we are using several servers from the Dell N3000 series,” says Smith.

As a wholly virtual studio, Jellyfish has no storage housed locally; it all sits in a co-location, which is accessed through remote workstations powered by Teradici’s PCoIP technology.

According to Smith, becoming a completely virtual studio is a new development for Jellyfish. Nevertheless, the facility has been working with Pixit Media since 2014 and launched its first virtual studio in 2017, “so the building blocks have been in place for a while,” he says.

Prior to moving all the infrastructure off-site, Jellyfish ran its storage system out of its Brixton and Soho studios locally. Its own private cloud from Brixton powered Jellyfish’s Soho and Sheffield studios. Both PixStor storage solutions in Brixton and Soho were linked with the solution’s PixCache. The switches and servers were still from Dell and Mellanox but were an older generation.

“Way back when, before we adopted this virtual world we are living in, we still worked with on-premises and inflexible storage solutions. It limited us in terms of the work we could take on and where we could operate,” says Smith. “With this new solution, we can scale up to meet our requirements.”

Now, however, using Mellanox SN2100, which has 100GbE, Jellyfish can deal with obscene amounts of data, Smith contends. “The way the industry is moving with 4K and 8K, even 16K being thrown around, we need to be ready,” he says.

Before the co-location, the different sites were connected through PixCache; now the co-location and public cloud are linked via Ngenea, which pre-caches files locally to the render node before the render starts. Furthermore, the studio is able to unlock true multi-tenancy with a single storage namespace, rapidly deploying logical TPN-accredited data separation and isolation and scaling up services as needed. “Probably two of the most important facets for us in running a successful studio: security and flexibility,” says Smith.

Artists access the storage via their Teradici Zero Clients, which, through the Dell switches, connect users to the standard Samba SMB network. Users who are working on realtime clients or in high resolution are connected to the Pixit storage through the Mellanox switch, where PixStor Native Client is used.

“Storage is a fundamental part of any VFX and animation studio’s workflow. Implementing the correct solution is critical to the seamless running of a project, as well as the security and flexibility of the business,” Smith concludes. “Any good storage system is invisible to the user. Only the people who build it will ever know the precision it takes to get it up and running — and that is the sign you’ve got the perfect solution.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Storage for Color and Post

By Karen Moltenbrey

At nearly every phase of the content creation process, storage is at the center. Here we look at two post facilities whose projects continually push boundaries in terms of data, but through it all, their storage solution remains fast and reliable. One, Light Iron, juggles an average of 20 to 40 data-intensive projects at a time and must have a robust storage solution to handle its ever-growing work. Another, Final Frame, recently took on a project whose storage requirements were literally out of this world.

Amazon’s The Marvelous Mrs. Maisel

Light Iron
Light Iron provides a wide range of services, from dailies to post on feature films, indies and episodic shows, to color/conform/beauty work on commercials and short-form projects. The facility’s clients include Netflix, Amazon Studios, Apple TV+, ABC Studios, HBO, Fox, FX, Paramount and many more. Light Iron has been committed to evolving digital filmmaking techniques over the past 10 years and understands the importance of data availability throughout the pipeline. Having a storage solution that is reliable, fast and scalable is paramount to successfully servicing data-centric projects with an ever-growing footprint.

More than 100 full-time employees located at Light Iron’s Los Angeles and New York locations regularly access the company’s shared storage solutions. Both facilities are equipped for dailies and finishing, giving clients an option between its offices based on proximity. In New York, where space is at a premium, the company also offers offline editorial suites.

The central storage solution used at both locations is a Quantum StorNext file system along with a combination of network-attached and direct-attached storage. On the archive end, both sites use LTO-7 tapes for backing up before moving the data off the spinning disc storage.

As Lance Hayes, senior post production systems engineer, explains, the facility segments the storage between three different types of options. “We structured our storage environment in a three-tiered model, with redundancy, flexibility and security in mind. We have our fast disks (tier one), which are fast volumes used primarily for playbacks in the rooms. Then there are deliverable volumes (tier two), where the focus is on the density of the storage. These are usually the destination for rendered files. And then, our nearline network-attached storage (tier three) is more for the deep storage, a holding pool before output to tape,” he explains.

Light Iron has been using Quantum as its de facto standard for the past several years. Founded in 2009, Light Iron has been on an aggressive growth trajectory and has evolved its storage strategy in response to client needs and technological advancement. Before installing its StorNext system, it managed with JBOD (“just a bunch of discs”) direct-attached storage on a very limited number of systems to service its staff of then-30-some employees, says Keenan Mock, senior media archivist at Light Iron. Light Iron, though, grew quickly, “and we realized we needed to invest in a full infrastructure,” he adds.

Lance Hayes

At Light Iron, work often starts with dailies, so the workflow teams interact with production to determine the cameras being used, the codecs being shot, the number of shoot days, the expected shooting ratio and so forth. Based on that information, the group determines which generation of LTO stock makes the most sense for the project (LTO-6 or LTO-7, with LTO-8 soon to be an option at the facility). “The industry standard, and our recommendation as well, is to create two LTO tapes per shoot day,” says Mock. Then, those tapes are geographically separated for safety.
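
A back-of-the-envelope version of that sizing decision might look like the sketch below. The native (uncompressed) tape capacities are approximate published figures for each LTO generation; the shoot parameters are hypothetical.

```python
# Back-of-envelope LTO sizing: how many tapes a shoot day needs per
# generation, doubled for the two geographically separated sets.

LTO_NATIVE_TB = {"LTO-6": 2.5, "LTO-7": 6.0, "LTO-8": 12.0}   # native capacities

hours_per_day = 4            # assumed recorded hours per shoot day
data_rate_gb_per_hour = 700  # assumed data rate for a heavy camera codec

daily_tb = hours_per_day * data_rate_gb_per_hour / 1000
print(f"~{daily_tb:.1f} TB per shoot day")

for gen, cap in LTO_NATIVE_TB.items():
    tapes = int(-(-daily_tb // cap)) * 2     # ceiling, then x2 for the duplicate set
    print(f"{gen}: {tapes} tapes per day")
```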

In terms of working materials, the group generally restores only what is needed for each individual show from LTO tape, as opposed to keeping the entire show on spinning disc. “This allows us to use those really fast discs in a cost-effective way,” Hayes says.

Following the editorial process, Light Iron restores only the needed shots plus handles from tape directly to the StorNext SAN, so online editors can have immediate access. The material stays on the system while the conform and DI occur, followed by the creation of final deliverables, which are sent to the tier two and tier three spinning disk storage. If the project needs to be archived to tape, Mock’s department takes care of that; if it needs to be uploaded, that usually happens from the spinning discs.

Light Iron’s FilmLight Baselight systems have local storage, which is used mainly as cache volumes to ensure sustained playback in the color suite. In addition, Blackmagic Resolve color correctors play back content directly from the SAN using tier two storage.

Keenan Mock

Light Iron continually analyzes its storage infrastructure and reviews its options in terms of the latest technologies. Currently, the company considers its existing storage solution to be highly functional, though it is reviewing options for the latest versions of flash solutions from Quantum in 2020.

Based on the facility’s storage workflow, there’s minimal danger of maxing out the storage space anytime soon.

While Light Iron is religious about creating a duplicate set of tapes for backup, “it’s a very rare occurrence [for the duplicate to be needed],” notes Mock, “But it can happen, and in that circumstance, Light Iron is prepared.”

As for the shared storage, the datasets used in post, compared to other industries, are very large, “and without shared storage and a clustered file system, we wouldn’t be able to do the jobs we are currently doing,” Hayes notes.

Final Frame
With offices in New York City and London, Final Frame is a full-featured post facility offering a range of services, including DI of every flavor, 8mm to 70mm film scanning and restoration, offline editing, VFX, sound editing (theatrical and home Dolby Atmos) and mastering. Its work spans feature films, documentaries and television. The facility’s recent work on the documentary film Apollo 11, though, tested its infrastructure like no other, including the amount of storage space it required.

Will Cox

“A long time ago, we decided that for the backbone of all our storage needs, we were going to rely on fiber. We have a total of 55 edit rooms, five projection theaters and five audio mixing rooms, and we have fiber connectivity between all of those,” says Will Cox, CEO/supervising colorist. So, for the past 20 years, ever since 1Gb fiber became available, Final Frame has relied on this setup, though every five years or so, the shop has upgraded to the next level of fiber and is currently using 16Gb fiber.

“Storage requirements have increased because image data has increased and audio data has increased with Atmos. So, we’ve needed more storage and faster storage,” Cox says.

While the core of the system is fiber, the facility uses a variety of storage arrays, the bulk of which are 16Gb 4000 Series SAN offerings from Infortrend, totaling approximately 2PB of space. In addition, the studio uses 8Gb Promise Technology VTrak arrays, also totaling about 1PB. Additionally installed at the facility are some JetStor 8Gb offerings. For SAN management, Final Frame uses Tiger Technology’s Tiger Store.

Foremost in Cox’s mind when looking for a storage solution is interoperability, since Final Frame uses Linux, Mac and Windows platforms; reliability and fault tolerance are important as well. “We run RAID-6 and RAID-60 for pretty much everything,” he adds. “We also focus on how good the remote management is. We’ve brought online so much storage, we need the storage vendors to provide good interfaces so that our engineers and IT people can manage and get realtime feedback about the performance of the arrays and any faults that are creeping in, whether it’s due to failed drives or drives that are performing less than we had anticipated.”
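
For context on what RAID-6 and RAID-60 cost in usable capacity, here is a quick sketch; the drive counts and sizes are hypothetical, not Final Frame's configuration.

```python
# Usable capacity under RAID-6 and RAID-60 with hypothetical drive counts.
# RAID-6 spends two drives per group on parity; RAID-60 stripes across
# several RAID-6 groups, so it spends two drives per group.

def raid6_usable_tb(drives: int, drive_tb: float) -> float:
    return (drives - 2) * drive_tb

def raid60_usable_tb(groups: int, drives_per_group: int, drive_tb: float) -> float:
    return groups * raid6_usable_tb(drives_per_group, drive_tb)

print(f"RAID-6, 16 x 12 TB drives:       {raid6_usable_tb(16, 12):5.0f} TB usable")
print(f"RAID-60, 4 groups of 12 x 12 TB: {raid60_usable_tb(4, 12, 12):5.0f} TB usable")
```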

Final Frame has also brought on a good deal more SSD storage. “We manage projects a bit differently now than we used to, where we have more tiered storage,” Cox adds. “We still do a lot of spinning discs, but SSD is moving in, and that is changing our workflows somewhat in that we don’t have to render as many files and as many versions when we have really fast storage. As a result, there’s some cost-savings on personnel at the workflow level when you have extremely fast storage.”

When working with clients who are doing offline editing, Final Frame will build an isolated SAN for them, and when it comes time to finish the project, whether it’s picture or audio, the studio will connect its online and mixing rooms to that SAN. This setup benefits security, Cox contends, and it also accelerates the workflow since there’s no copying of data. However, aside from that work, everyone generally has parallel access to the storage infrastructure and can access it at any time.

More recently, in addition to other projects, Final Frame began working on Apollo 11, a film directed by Todd Douglas Miller. Miller wanted to rescan all the original negatives and all the original elements available from the Apollo 11 moon landing for a documentary film using audio and footage (16mm and 35mm) from NASA during that extraordinary feat. “He asked if we could make a movie just with the archival elements of what existed,” says Cox.

While ramping up and determining a plan of attack — Final Frame was going to scan the data at 4K resolution — NASA and NARA (National Archives and Records Administration) discovered a lost cache of archives containing 65mm and 70mm film.

“At that point, we decided that existing scanning technology wasn’t sufficient, and we’d need a film scanner to scan all this footage at 16K,” Cox adds, noting the company had to design and build an entirely new 16K film scanner and then build a pipeline that could handle all that data. “If you can imagine how tough 4K is to deal with, then think about 16K, with its insanely high data rates. And 8K is four times larger than 4K, and 16K is four times larger than 8K, so you’re talking about orders-of-magnitude increases in data.”
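The scaling Cox describes can be put into rough numbers. The sketch below assumes an uncompressed RGB scan at 16 bits per channel and approximate pixel dimensions; these are illustrative assumptions, not the specifications of Final Frame's scanner or file format.

```python
# Rough per-frame and per-hour sizes for uncompressed film scans (assumed
# RGB, 16 bits per channel; pixel dimensions are approximate and illustrative).
BYTES_PER_PIXEL = 3 * 2  # three channels x 16 bits

RESOLUTIONS = {
    "4K":  (4096, 3112),
    "8K":  (8192, 6224),
    "16K": (16384, 12448),
}

for name, (width, height) in RESOLUTIONS.items():
    frame_mb = width * height * BYTES_PER_PIXEL / 1e6
    hour_tb = frame_mb * 24 * 3600 / 1e6  # one hour of footage at 24fps
    print(f"{name}: ~{frame_mb:,.0f}MB per frame, ~{hour_tb:,.1f}TB per hour of footage")
```

Each resolution step quadruples the pixel count, so per-hour totals climb from single-digit terabytes at 4K to triple digits at 16K, which is why the production's projections below run into multiple petabytes.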

Adding to the complexity, the facility had no idea how much footage it would be using. Ultimately, Final Frame considered its storage structure and the costs of taking it to the next level for 16K scanning and determined that amount of data was just too much to move and too much to store. “As it was, we filled up a little over a petabyte of storage just scanning the 8K material. We were looking at 4PB, quadrupling the amount of storage infrastructure needed. Then we would have had to run backups of everything, which would have increased it by another 4PB.”

Considering these factors, Final Frame changed its game plan and decided to scan at 8K. “So instead of 2PB to 2.5PB, we would have been looking at 8PB to 10PB of storage if we continued with our earlier plan, and that was really beyond what the production could tolerate,” says Cox.

Even scanning at 8K, the group had to have the data held in the central repository. “We were scanning in, doing what were essentially dailies, restoration and editorial, all from the same core set of media. Then, as editorial was still going on, we were beginning to conform and finish the film so we could make the Sundance deadline,” recalls Cox.

In terms of scans, copies and so forth, Final Frame stored about 2.5PB of data for that project. But in terms of data created and then destroyed, the amount of data was between 12PB and 15PB. To handle this load, the facility needed storage that was fast, highly redundant and large. This led the company to bring on an additional 1PB of Fibre Channel SAN storage to add to the 1.5PB already in place — dedicated to just the Apollo 11 project. “We almost had to double the amount of storage infrastructure in the whole facility just to run this one project,” Cox points out. The additional storage was added in half-petabyte array increments, all connected to the SAN, all at 16Gb fiber.

While storage is important to any project, it was especially true for the Apollo 11 project due to the aggressive deadlines and excessively large storage needs. “Apollo 11 was a unique project. We were producing imagery that was being returned to the National Archives to be part of the historic record. Because of the significance of what we were scanning, we had to be very attentive to the longevity and accuracy of the media,” says Cox. “So, how it was being stored and where it was being stored were important factors on this project, more so than maybe any other project we’ve ever done.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Storage Trends for M&E

By Tom Coughlin

Media and entertainment content is growing in size due to higher resolution, higher frame rates and more bits per pixel. In addition, the amount of digital content is growing as increasing numbers of creators provide unique content for online streaming channels and as the number of cameras used in a given project increases for applications such as sports 360-degree immersive video projects.

Projections on the growth of local (direct attached), local network and cloud storage for post apps from 2018 out to 2024.

More and larger content will require increasing amounts of digital storage and higher bandwidths to support modern workflows. In addition, in order to control the costs of video workflows, these projects must be cost-effective and make the most efficient use of physical and human resources possible. As a consequence of these opportunities and constraints, M&E workflows are using all types of storage technology to balance performance versus cost.

Hard disk drives (HDD), solid state drives (SSD), optical discs and magnetic tape technologies are increasing in storage capacity and performance and decreasing in cost. This makes it easier to capture and store content, keep data available in a modern workflow and, when used in a private or public cloud data center, provide readily available content for delivery and monetization. The NVMe interface for SSDs and NVMe over Fabrics (NVMe-oF) for storage systems are enabling very high-performance storage that can handle multi-stream 4K to 8K+ video projects with high frame rates, enabling more immersive video experiences.

Industry pros are turning to object-based digital storage to enable collaborative workflows and are using online cloud services for rendering, transcoding and other operations. This is becoming increasingly common because much content is now distributed online. Both small and large media houses are also moving toward private or public cloud archiving to help access and monetize valuable historical content.

Growth in Object Storage for various M&E applications over time.

Various artificial intelligence (AI) tools, such as machine learning (ML), are being used in M&E to extract metadata that allows more rapid search and use of media content. Increasingly, AI tools are also being used for media and storage management applications.

Let’s dig a little deeper…

Storage Device Evolution
HDDs and SSDs are currently the dominant storage technologies used in media and entertainment workflows. HDDs provide the best value per terabyte compared to SSDs, but NAND flash-based SSDs provide much greater performance, and Optane-based SSDs from Intel — and similar soon-to-be-released 3D XPoint SSDs from Micron — can provide 1,000 times the performance of NAND flash. Optical discs and magnetic tape are often used in library systems and therefore have much longer latency than HDDs between when data is requested and when it is delivered. As a consequence, these technologies are primarily used for cold storage and archive applications.

The highest capacity HDDs shipping in volume have capacities up to 16TB and are available from Western Digital, Seagate and Toshiba. However, Western Digital announced that it is sampling nine-disk, 3.5-inch form factor, helium-sealed 18TB drives using a form of energy-assisted magnetic recording, and that a 20TB drive will also be available that shingles recorded tracks on top of each other, resulting in higher effective track density — and thus higher areal density — on the disks.

Recently Introduced Western Digital 18TB and 20TB HDDs.

Seagate has also indicated that it would ship 20TB HDDs by 2020 using energy-assisted magnetic recording. These high-capacity drives are geared for enterprise applications, particularly in large (cloud) data centers. These drives should bring the price of HDD storage down to less than $0.02 per GB ($20/TB) when they are available in volume.

Both Sony and Panasonic are promoting the use of write-once Blu-ray optical discs for archival applications. These products are used for media archiving by some users, who are often attracted by the physical longevity of the inorganic optical storage media. The companies’ storage architectures for an optical library system differ, but they have worked together on standards for the underlying optical recording media.

According to Coughlin Associates’ 2019 Digital Storage for Media Professionals Survey, hard disk drives and magnetic tape are the most popular digital storage media. The most popular magnetic tape format in the industry is the LTO format.

Solid state drives using NAND flash — and, more recently, Intel Optane — are increasingly being used in modern media workflows. In post, there is a move to use SSDs for primary storage, particularly for facilities dealing with multiple streams of the highest resolution and frame-rate content. These SSDs are available in a wide range of storage capacities and form factors; interface options are traditional SATA, SAS, or the higher-performance Nonvolatile Memory Express (NVMe).

Samsung SSD form factors

Modern NAND flash SSDs use 3D flash memory in which memory storage cells are stacked on top of each other, up to 96 layers today, while 128 or more memory cell layers will be available in 2020. Research has shown that 500-plus layers of NAND flash cells might be possible, and the major NAND flash manufacturers will be introducing devices with ever higher NAND flash layer counts (as well as more bits per cell) over the next few years.

In 2018, NAND flash SSDs were expensive because of the shortage of NAND flash. In 2019, NAND flash memory is widely available due to additional production capacity. As a result, SSDs have been dropping in price, with a consequent reduction in their cost per gigabyte. Lower prices have increased demand for SSDs.

Modern Storage Systems
Modern storage systems used for post are usually file-oriented (with either a NAS or SAN architecture), although object storage (sometimes in the cloud) is beginning to find some uses. Let’s look at some examples using HDDs and SATA/SAS SSDs, as well as storage systems using NVMe SSDs and network storage using NVMe over Fabrics.

Avid Nexis E2 all-flash array

The latest generation of the Avid Nexis storage platform includes HDD as well as larger SSD all-flash storage array configurations. Nexis is Avid’s software-defined storage for storage virtualization in media applications. It can be integrated into Avid and third-party workflows as well as across Avid MediaCentral, and it scales from 9.6TB up to 6.4PB. It allows on-demand access to a shared pool of centralized storage. The product allows the use of up to 38.4TB of NAND flash SSD storage in its E2 SSD engine to accelerate 4K through 8K mastering workflows.

The E5 nearline storage engine is another option that can be used by itself or integrated with other enterprise-class Avid Nexis engines.

Facilis Hub

At IBC in September, ATTO announced a partnership with Facilis to integrate ATTO ThunderLink NS 3252 Thunderbolt 3 to 25GbE adapters within the Facilis Hub shared storage platform. The storage solution provides flexible, scalable, high-bandwidth connectivity for Apple’s new Mac Pro, iMac Pro and Mac mini. Facilis’ Hub shared storage platform uses ATTO Celerity 32Gb and 16Gb Fibre Channel HBAs and FastFrame 25Gb Ethernet NICs. Facilis Hub represents the evolution of the Facilis shared file system with block-level virtualization and multi-connectivity built for demanding media production workflows.

In addition, Facilis servers include ATTO 12Gb ExpressSAS HBAs. These technologies allow Facilis to create powerful solutions that fulfill a diverse set of customer connectivity needs and workflow demands.

With a new infusion of funding and the addition of many new managers, EditShare has a new next-generation file system and management console, the EFS 2020. The new EFS is designed to support collaborative workflows with up to a 20% performance improvement and an easy-to-use interface that also provides administrators and technicians with useful media management tools.

The EFS 2020 also has File Auditing, which offers a realtime, purpose-built content auditing platform for the entire production workflow. File Auditing tracks all content movement on the server, including deliberately obscured changes. According to EditShare, EFS 2020 File Auditing provides a complete, user-friendly activity report with a detailed trail back to the instigator.

EditShare EFS

Promise introduced its Pegasus32 series storage systems. It uses Intel’s latest Titan Ridge Thunderbolt 3 chip, can power hosts at up to 85W and offers up to 112TB of raw capacity with an eight-drive system. It supports Thunderbolt at up to 40Gbps or USB 3.2 at 10Gbps. It includes hardware RAID-5 protection with hot-swappable 7,200rpm HDDs and dual Thunderbolt 3 ports that allow daisy-chaining of peripheral devices.

Although Serial AT Attachment (SATA) and Serial Attached SCSI (SAS) HDDs and SSDs are widely used, these older interfaces — which were based upon the needs of HDDs when they were developed — can restrict the data rates and add latency compared with what SSDs are capable of. This has led to the wide use of an interface that brings more of the internal performance of the SSD to the computers it’s connected to. This new interface is called NVMe, and it can be extended over various fabric networks such as InfiniBand, Fibre Channel and, more recently, Ethernet.

NVMe SSDs are finding increased use as primary storage for many applications, including media post projects, since they can provide the performance that large high-data-rate projects require. NVMe SSDs also provide lower latency to content than HDDs, which is important for media pros. With the lower price of SSD storage, their total cost of ownership has declined, making them even more attractive for high-performance applications, such as post production and VFX.

At IBC 2019, Dell EMC was showing its new PowerMax storage system. This included dual-port Intel Optane SSDs as persistent storage and NVMe-oF using 32Gb Fibre Channel I/O modules, directors and 32Gb NVMe host adapters using Dell EMC PowerPath multipathing software.

Dell PowerMax 2000 storage system.

According to Dell EMC, this end-to-end NVMe and Intel Optane architecture provides customers with a faster, more efficient storage system that delivers the following performance improvements:
• Up to 15 million IOPS
• Up to 350GB/sec bandwidth
• Up to 50% better response times
• Sub-100µs read response times
The built-in machine learning engine uses predictive analytics and pattern recognition to automatically place data on the correct media type (Optane or Flash memory) based upon its I/O profile. It can analyze and forecast 40 million data sets in real time, driving 6 billion decisions per day. PowerMax works with several plugins for virtualization and container storage, as well as Ansible modules. It can also be part of a multi-cloud storage architecture with Dell EMC Cloud Storage Services.

Quantum introduced its F-Series NVMe storage system to help media professionals power their modern post workflows.

Quantum F2000 NVMe storage array

It features SSD storage capacities up to 184TB. High uptime is ensured by dual-ported SSDs, dual-node servers and redundant power supplies. The NVMe SSDs allow performance of about one million random reads per second, with latencies of under 20 microseconds. Quantum found that NVMe storage can deliver more than 10 times the read and write throughput performance with a single client compared with NFS and SMB attached clients.

The NVMe SSDs support a huge amount of parallel processing. The F-Series array uses Remote Direct Memory Access (RDMA) networking technology to provide direct access between workstations and the NVMe storage devices. The F-Series array was designed for video data. It is made to handle the performance requirements of multiple streams of 4K+, high-frame-rate data as well as other types of unstructured data.

These capabilities enable editors in several rooms to work on multiple streams of 4K and even 8K video using one storage volume. The higher performance of NVMe SSDs avoids the over-provisioning of storage often required with HDD-based storage systems.
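A rough way to reason about how many rooms a single volume can feed is to divide sustained array bandwidth by the per-stream data rate. The sketch below is illustrative only; the array bandwidth, formats and frame rates are assumptions, not Quantum's published F-Series figures.

```python
# Estimate how many uncompressed streams a storage volume can sustain.
def stream_gbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Per-stream data rate in gigabytes per second (decimal)."""
    return width * height * bits_per_pixel / 8 * fps / 1e9

def max_streams(array_gbps: float, *fmt: int) -> int:
    """Whole number of streams the assumed array bandwidth can carry."""
    return int(array_gbps // stream_gbps(*fmt))

if __name__ == "__main__":
    ARRAY_GBPS = 20  # assumed sustained bandwidth of the shared volume, GB/s
    print("4K DCI, 10-bit RGB, 24fps streams:", max_streams(ARRAY_GBPS, 4096, 2160, 30, 24))
    print("8K UHD, 10-bit RGB, 24fps streams:", max_streams(ARRAY_GBPS, 7680, 4320, 30, 24))
```

With HDD arrays, hitting a bandwidth target like this is often what forces the over-provisioning of capacity that flash avoids.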

Private and Public Cloud for M&E
Digital media workflows are increasingly using either on-premises or remote cloud storage (shared data center storage) of various types for project collaboration or for access to online services and tools, such as rendering and content delivery services. Below are a few recent developments in public and private cloud storage.

Avid’s Cloudspaces allows users to store projects and backup media in the cloud, freeing up on-site Avid Nexis workspaces. Avid’s preferred cloud-hosting platform is Microsoft Azure, which has been making major inroads in cloud storage for the M&E industry by providing valuable partnerships and services.

The Facilis Object Cloud virtualizes cloud and LTO storage into a cache volume on the server, available on the client desktops through the Facilis shared file system and providing a highly scalable object storage cache. Facilis also announced that it had partnered with Wasabi for cloud storage.

Cloudian HyperStore Xtreme

Cloudian makes private cloud storage for the M&E industry, and at IBC it announced its HyperStore Xtreme. HyperStore Xtreme is said to provide ready access to video content whenever and wherever needed and unlock its full value through AI and other analytics applications.

The Cloudian HyperStore Xtreme is built on an ultra-dense Seagate server platform. The solution enables users to store and manage over 55,000 hours of 4K video (XAVC-4K, Ultra HD format) within just 12U of rack space. The company says that this represents a 75% space savings over what it would take to achieve the same capacity with an LTO-8 tape library.
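Figures like “55,000 hours in 12U” are easiest to interpret by backing out the implied average bitrate. Below is a minimal sketch; the usable capacity is a stand-in number for illustration, not Cloudian's published specification.

```python
# What average bitrate does "55,000 hours of 4K in a given capacity" imply?
HOURS = 55_000
USABLE_PB = 4.0  # assumed usable capacity, for illustration only

implied_mbps = USABLE_PB * 1e15 * 8 / (HOURS * 3600) / 1e6
print(f"{HOURS:,} hours in {USABLE_PB}PB implies an average of ~{implied_mbps:.0f}Mb/s per stream")
```

That result lands in the range of common 4K acquisition codecs, which is a useful sanity check on capacity claims expressed in hours of content.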

Scality’s Ring 8 is a software-defined system that handles large-scale, on-prem storage of unstructured data. It is useful for petabyte-scale storage and beyond, and it works across multiple clouds as well as core and edge environments. The Extended Data Management (XDM) also allows integrating cloud data orchestration into the ring. The new version adds stringent security, multi-tenancy and cloud-native application support.

Summing Up
Media and entertainment storage and bandwidth demands are driving the use of more storage and new storage products, such as NVMe SSDs and NVMe-oF. While the use of NAND flash and other SSDs is growing, so is demand for HDDs for colder storage and the use of tape or cloud storage (which can be HDD or tape in the data center) for archiving. Cloud storage is growing to support collaborative work, cloud-based service providers and content distribution through online channels. Various types of AI tools are being used to generate metadata and even to manage storage and data resources, expanding upon standard media asset management tools.


Tom Coughlin, president of Coughlin Associates, is a digital storage analyst and business and technology consultant. He has over 37 years in the data storage industry, with engineering and management positions at several companies.

Storage for UHD and 4K

By Peter Collins

Over the past few years, we have seen a huge audience uptake of UHD and 4K technologies. The increase in resolution offers more detailed imagery, and the adoption of HDR brings bigger, brighter colors.

UHD technologies are a significant selling point and are quickly becoming the “new normal” for many commissioners. VOD providers, in particular, are behind the wheel and pushing things forward rapidly — it’s not just a creative decision, but one that is now required for delivery. Essentially, something the cinematographers used to have to fight for is now being mandated by those commissioning the content.

This is all very exciting, but what does this mean for productions in general? There are wide-ranging implications and questions of logistics — timescales for data transfer and processing increase, post production infrastructure and workflows must be adapted, and archiving and retrieval times are extended (to say the least).

With these UHD and 4K productions having storage requirements into the hundreds of terabytes between various stages of the supply chain, the need to store the data in an accessible, secure and affordable manner is critical.

The majority of production, VFX, post and mastering facilities are currently still working the traditional way — from physical on-premises storage (on-prem for those who like to shave off a couple of syllables) such as NAS, local storage, LTO and SANs to distributed data stores spread across different buildings of a facility.

With UHD and 4K projects sometimes generating north of half a petabyte of data (which needs to stick around until delivery is complete and beyond), it’s not a simple problem to ensure that large chunks of that data are available and accessible for everyone involved in the project who needs it — at least not in the most time-effective way. And as sure as death and taxes, no matter how much storage you have to hand, you will miraculously start running out far sooner than you anticipated. Since this affects all stages of the supply chain, doesn’t it make sense to have some central store of data for everyone to access what they need, when they need it?

Across all areas of the industry, we are seeing the adoption of cloud storage over the traditional on-premises solution and are starting to see opportunities where a cloud-based solution might save money, time or, even better, both! There are numerous cloud “types” out there and below is my overview of the four most widely adopted.

Public: The public cloud can offer large amounts of storage for as long as it’s required (i.e., paid for) and stop charging you for it when it’s not (which is a nice change from having to buy storage with a lengthy support contract). The physical infrastructure of a public cloud is shared with other customers of the cloud provider (this is known as multi-tenancy); however, all the resources allocated to you are invisible to other customers. Your data may be spread across several different areas of the data center (or beyond) depending on where the provider’s infrastructure has the most availability.

Private: Private clouds (from a storage perspective) are useful for those needing finer grained control over their data. Private clouds are those in which companies build their own infrastructure to support the services they want to offer and have complete control over where their data physically resides.

The downside to private clouds is cost, as the business is effectively paying to be their own cloud provider and maintaining the systems over their lifetime. With this in mind, many of the bigger public cloud providers offer “virtual private clouds,” in which a chunk of their resources are dedicated solely to a single customer (single-tenancy). This of course comes at a slightly higher cost than the plain public cloud offering, but does allow more finely grained control for those consumers who need it.

Hybrid: Hybrid clouds are, as the name suggests, a mixture of the two cloud approaches outlined above (public and private). This offers the best of both worlds and can be a useful approach when flexibility is required, or when certain data accessing processes are not practical to run from an off-site public cloud (at time of writing, a 50fps realtime stream of uncompressed 4K raw to a grade, for example, is unlikely to happen from a vanilla public cloud agreement without some additional bandwidth discussions — and costs).
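Putting numbers on that example shows why it strains a vanilla public cloud link. The sketch below assumes 4K DCI at 12 bits per channel RGB; the exact figure depends on the raw format in question.

```python
# Sustained data rate of one uncompressed 4K stream at 50fps
# (assumes 4K DCI, RGB, 12 bits per channel).
WIDTH, HEIGHT = 4096, 2160
BITS_PER_PIXEL = 3 * 12
FPS = 50

bytes_per_second = WIDTH * HEIGHT * BITS_PER_PIXEL / 8 * FPS
print(f"~{bytes_per_second / 1e9:.1f}GB/s, or ~{bytes_per_second * 8 / 1e9:.0f}Gb/s sustained")
```

Sustaining roughly 16Gb/s from an off-site data center to a grading suite, without dropped frames, is exactly the kind of requirement that pushes facilities toward a hybrid model.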

Having the flexibility to migrate data between a virtual private cloud and a local private cloud while continuing to work could help minimize the impact on existing infrastructure locally, and it could also enable workflows and interchange between local and “cloud-native” applications. Certain processes that take up a lot of resources locally could be relocated to a virtual private cloud for a lower cost, freeing up local resources for more time-sensitive applications.

Community: Here’s where the cloud could shine as a prospect from a production standpoint. This cloud model is based on businesses and those with a stake in the process pooling their resources and collaborating, coming up with a system and overarching set of processes that they all operate under — in effect offering a completely customized set of cloud services for any given project.

From a storage perspective, this could mean a production company running a virtual private cloud with the cost being distributed across all stakeholders accessing that data. Original camera files, for example, may be transferred to this virtual private cloud during the shoot, with post, VFX, marketing and reversioning houses downloading and uploading their work in turn. As all data transfers are monitored and tracked, the billing from a production standpoint on a per-vendor (or departmental) basis becomes much easier — everyone just pays for what they use.

MovieLabs’ “Envisioning Production in 2030” white paper goes deeper into production-related applications of cloud technologies over the coming decade (among other sharp insights), and is well worth absorbing over a cup of coffee or two.

As production technologies progress, we are only ever going to generate more and more data. For storage professionals, those managing systems, or project managers looking to improve timeframes and reduce costs, the considerations may not only be financial or logistical. They may also include how easily a solution facilitates collaboration, interchange and closer working relationships. On that question, the cloud may well be a clear best fit.

Studio Images: Goldcrest Post Production / Neil Harrison


Peter Collins is a post professional with experience working in film and television globally. He has worked at the forefront of new production technologies and consults on workflows, project management and industry best practices. He can be contacted on Twitter at @PCPostPro or by email at pcpostpro@icloud.com.

Reallusion’s Headshot plugin for realistic digi-doubles via AI

Reallusion has introduced a plugin for Character Creator 3 to help create realistic-looking digital doubles. According to the company, the Headshot plugin uses AI technology to automatically generate a digital human in minutes from one single photo, and those characters are fully rigged for voice lipsync, facial expression and full body animation.

Headshot allows game developers and virtual production teams to quickly funnel a cast of digital doubles into iClone, Unreal, Unity, Maya, ZBrush and more. The idea is to allow the digital humans to go anywhere they like and give creators a solution to rapidly develop, iterate and collaborate in realtime.

The plugin has two AI modes: Auto Mode and Pro Mode. Auto Mode is a one-click solution for creating mid-rez digital human crowds. This process allows one-click head and hair creation for realtime 3D head models. It also generates a separate 3D hair mesh with alpha mask to soften edge lines. The 3D hair is fully compatible with Character Creator’s conformable hair format (.ccHair). Users can add them into their hair library, and apply them to other CC characters.

Headshot Pro Mode offers full control of the 3D head generation process with advanced features such as Image Matching, Photo Reprojection and Custom Mask with up to 4,096-texture resolution.

The Image Matching Tool overlays an image reference plane for advanced head shape refinement and lens correction. With Photo Reprojection, users can easily fix the texture-to-mesh discrepancies resulting from face morph change.

Using high-rez source images and Headshot’s 1,000-plus morphs, users can get a scan-quality digital human face in 4K texture details. Additional textures include normal, AO, roughness, metallic, SSS and Micro Normal for more realistic digital human rendering.

The 3D Head Morph System is designed to achieve the professional and detailed look of 3D scan models. The 3D sculpting design allows users to hover over a control area and use directional mouse drags to adjust the corresponding mesh shape, from full head and face sculpting to individual features (head contour, face, eyes, nose, mouth and ears), with more than 1,000 head morphs. It is now free with the purchase of the Headshot plugin.

The Headshot plugin for Character Creator is $199 and comes with the content pack Headshot Morph 1,000+ ($99). Character Creator 3 Pipeline costs $199.

Storage for Editors

By Karen Moltenbrey

Whether you are a small-, medium- or large-size facility, storage is at the heart of your workflow. Consider, for instance, the one-person shop Fin Film Company, which films and edits footage for branding and events, often on water. Then there’s Uppercut, a boutique creative/post studio where collaborative workflow is the key to pushing boundaries on commercials and other similar projects.

Let’s take a look at Uppercut’s workflow first…

Uppercut
Uppercut is a creative editorial boutique shop founded by Micah Scarpelli in 2015 and offering a range of post services. Based in New York and soon Atlanta, the studio employs five editors with their own suites along with an in-house Flame artist who has his own suite.

Taylor Schafer

In contrast to Uppercut’s size, its storage needs are quite large, with five editors working on as many as five projects at a time. Although most of it is commercial work, some of those projects can get heavy in terms of the generated media, which is stored on-site.

So, for its storage needs, the studio employs an EditShare RAID system. “Sometimes we have multiple editors working on one large campaign, and then usually an assistant is working with an editor, so we want to make sure they have access to all the media at the same time,” says Taylor Schafer, an assistant editor at Uppercut.

Additionally, Uppercut uses a Supermicro nearline server to store some of its VFX data, as the Flame artist cannot access the EditShare system on his CentOS operating system. Furthermore, the studio uses LTO-6 archive media in a number of ways. “We use EditShare’s Ark to LTO our partitions once the editors are done with them for their projects. It’s wonderfully integrated with the whole EditShare system. Ark is easy to navigate, and it’s easy to swap LTO tapes in and out, and everything is in one location,” says Schafer.

The studio employs the EditShare Ark to archive its editors’ working files, such as Premiere and Avid projects, graphics, transcodes and so forth. Uppercut also uses BRU (Backup Restore Utility) from Tolis Group to archive larger files that only live on LaCie hard drives and not on EditShare, such as a raw grade. “Then we’re LTO’ing the project and the whole partition with all the working files at the end through Ark,” Schafer explains.

The importance of having a system like this was punctuated over the summer when Uppercut underwent a renovation and had to move into temporary office space at Light Iron, New York — without the EditShare system. As a result, the team had to work off of hard drives and Light Iron’s Avid Nexis for some limited projects. “However, due to storage limits, we mainly worked off of the hard drives, and I realized how important a file storage system that has the ability to share data in real time truly is,” Schafer recalls. “It was a pain having to copy everything onto a hard drive, hand it back to the editor to make new changes, copy it again and make sure all the files were up to date, as opposed to using a storage system like ours, where everything is instantly up to date. You don’t have to worry whether something copied over correctly or not.”

She continues: “Even with Nexis, we were limited in our ability to restore old projects, which lived on EditShare.”

When a new project comes in at Uppercut, the first thing Schafer and her colleagues do is create a partition on EditShare and copy over the working template, whether it’s for Avid or Premiere, on that partition. Then they get their various working files and start the project, copying over the transcodes they receive. As the project progresses, the artists will get graphics and update the partition size as needed. “It’s so easy to change on our end,” notes Schafer. And once the project is completed, she or another assistant will make sure all the files they would possibly need, dating back to day one of the project, are on the EditShare, and that the client files are on the various hard drives and FTP links.

Reebok

“We’ll LTO the partition on EditShare through Ark onto an LTO-6 tape, and once that is complete, then generally we will take the projects or partition off the EditShare,” Schafer continues. The studio has approximately 26TB of RAID storage but, due to the large size of the projects, cannot retain everything on the EditShare long term. Nevertheless, the studio has a nearline server that hosts its masters and generics, as well as any other file the team might need to send to a client. “We don’t always need to restore. Generally the only time we try to restore is when we need to go back to the actual working files, like the Premiere or Avid project,” she adds.

Uppercut avoids keeping data locally on workstations due to the collaborative workflow.

According to Schafer, the storage setup is easy to use. Recently, Schafer finished a Reebok project she and two editors had been working on. The project initially started in Avid Media Composer, which was preferred by one of the editors. The other editor prefers Premiere but is well-versed on the Avid. After they received the transcodes and all the materials, the two editors started working in tandem using the EditShare. “It was great to use Avid on top of it, having Avid bins to open separately and not having to close out of the project and sharing through a media browser or closing out of entire projects, like you have to do with a Premiere project,” she says. “Avid is nice to work with in situations where we have multiple editors because we can all have the project open at once, as opposed to Premiere projects.”

Later, after the project was finished, the editor who prefers Premiere did a director’s cut in that software. As a result, Schafer had to re-transcode the footage, “which was more complicated because it was shot on 16mm, so it was also digitized and on one large video reel instead of many video files — on top of everything else we were doing,” she notes. She re-transcoded for Premiere and created a Premiere project from scratch, then added more storage on EditShare to make sure the files were all in place and that everything was up to date and working properly. “When we were done, the client had everything; the director had his director’s cut and everything was backed up to our nearline for easy access. Then it was LTO’d through Ark on LTO-6 tapes and taken off EditShare, as well as LTO’d on BRU for the raw and the grade. It is now done, inactive and archived.”

Without question, says Schafer, storage is important in the work she and her colleagues do. “It’s not so much about the storage itself, but the speed of the storage, how easily I’m able to access it, how collaborative it allows me to be with the other people I’m working with. Storage is great when it’s accessible and easy for pretty much anyone to use. It’s not so good when it’s slow or hard to navigate and possibly has tech issues and failures,” Schafer says. “So, when I’m looking for storage, I’m looking for something that is secure, fast and reliable, and most of all, easy to understand, no matter the person’s level of technical expertise.”

Chris Aguilar

Fin Film Company
People can count themselves fortunate when they can mix business with pleasure and integrate their beloved hobby with their work. Such is the case for solo producer/director/editor Chris Aguilar of Fin Film Company in Southern California, which he founded a decade ago. As Aguilar says, he does it all, as does Fin Film, which produces everything from conferences to music videos and commercial/branded content. But his real passion involves outdoor adventure paddle sports, from stand-up paddleboarding to pro paddleboarding.

“That’s been pretty much my niche,” says Aguilar, who got his start doing in-house production (photography, video and so forth) for a paddleboard company. Since then, he has been able to turn his passion and adventures into full-time freelance work. “When someone wants an event video done, especially one involving paddleboard races, I get the phone call and go!”

Like many videographers and editors, Aguilar got his start filming weddings. Always into surfing himself, he would shoot surfing videos of friends “and just have fun with it,” he says of augmenting that work. Eventually, this allowed him to move into areas he is more passionate about, such as surfing events and outdoor sports. Now, Aguilar finds that a lot of his time is spent filming paddleboard events around the globe.

Today, there are many one-person studios with solo producers, directors and editors. And as Aguilar points out, their storage needs might not be on the level of feature filmmakers or even independent TV cinematographers, but that doesn’t negate their need for storage. “I have some pretty wide-ranging storage needs, and it has definitely increased over the years,” he says.

In his work, Aguilar has to avoid cumbersome and heavy equipment, such as Atomos recorders, because of their weight on board the watercraft he uses to film paddleboard events. “I’m usually on a small boat and don’t have a lot of room to haul a bunch of gear around,” he says. Rather, Aguilar uses Panasonic’s AG-CX350, EVA1 and GH5, and on a typical two-day shoot (the event and interviews), he will fill five to six 64GB cards.

“Because most paddleboard races are long-distance, we’re usually on the water for about five to eight hours,” says Aguilar. “Although I am not rolling cameras the whole time, the weight still adds up pretty quickly.”

As for storage, Aguilar offloads his video onto SSD drives or other kinds of external media. “I call it my ‘working drive’ for editing and that kind of thing,” he says. “Once I am done with the edit and other tasks, I have all those source files somewhere.” He calls on the G-Technology G-Drive Mobile SSD 1TB for the field and some editing, and on the company’s ev RaW portable drive for backups and some editing. He also calls on Glyph’s Atom SSD for the field.

For years, that “somewhere” has been a cabinet that was filled with archived files. Indeed, that cabinet is currently holding, in Aguilar’s estimate, 30TB of data, if not more. “That’s just the archives. I have 10 or 11 years of archives sitting there. It’s pretty intense,” he adds. But, as soon as he gets an opportunity, those will be ported to the same cloud backup solution he is using for all his current work.

Yes, he still uses the source cards, but for a typical project involving an end-to-end shoot, Aguilar will use at least a 1TB drive to house all the source cards and all the subsequent work files. “Things have changed. Back in the day, I used hard drives – you should see the cabinet in my office with all these hard drives in it. Thank God for SSDs and other options out there. It’s changed our lives. I can get [some brands of] 1TB SSD for $99 or a little more right now. My workflow has me throwing all the source cards onto something like that that’s dedicated to all those cards, and that becomes my little archive,” explains Aguilar.
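The pattern behind that habit is simple: copy each card to the working drive, then verify the copy before anything gets reused or wiped. Below is a minimal sketch of the idea; the paths and folder layout are hypothetical, not Fin Film's actual setup.

```python
# Copy a camera card to a working/archive folder and verify the copy with SHA-256.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file in 1MB chunks so large clips don't load into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def offload_card(card_root: Path, archive_root: Path) -> None:
    """Mirror every file on the card and confirm source and copy match."""
    for src in card_root.rglob("*"):
        if not src.is_file():
            continue
        dst = archive_root / src.relative_to(card_root)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copy with timestamps preserved
        assert sha256(src) == sha256(dst), f"checksum mismatch: {src}"

if __name__ == "__main__":
    # Hypothetical mount points for a camera card and the working SSD.
    offload_card(Path("/Volumes/CARD_A001"), Path("/Volumes/WorkSSD/2019_race/A001"))
```

Dedicated offload utilities do the same job with nicer reporting, but the checksum-then-trust principle is the same whether the destination is an SSD or a cloud bucket.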

He usually uploads the content as fast as possible to keep the data secure. “That’s always the concern, losing it, and that’s where Backblaze comes in,” Aguilar says. Backblaze is a cloud backup solution that is easily deployed across desktops and laptops and managed centrally — a solution Aguilar recently began employing. He also uses Iconik Solutions’ digital management system, which eases the task of looking up video files or pulling archived files from Backblaze. The digital management system sits on top of Backblaze and creates little offline proxies of the larger content, allowing Aguilar to view the entire 10-year archive online in one interface.

According to Aguilar, his archived files are an important aspect of his work. Since he works so many paddleboard events, he often receives requests for clips from specific racers or races, some dating back years. Prior to using Backblaze, if someone requested footage, it was a challenge to locate it because he’d have to pull that particular hard drive and plug it into the computer, “and if I had been organized that year, I’ll know where that piece of content is because I can find it. If I wasn’t organized that year, I’d be in trouble,” he explains. “At best, though, it would be an hour and a half or more of looking around. Now I can locate and send it in 15 minutes.”

Aguilar says the Iconik digital management system allows him to pull up the content on the interface and drill down to the year of the race, click on it, download it and send it off or share it directly through his interface to the person requesting the footage.

Aguilar went live with this new Backblaze and digital management system storage workflow this year and has been fully on board with it for just the past two to three months. He is still uncovering all the available features and the power underneath the hood. “Even for a guy who’s got a technical background, I’m still finding things I didn’t know I could do,” and as such, Aguilar is still fine-tuning his workflow. “The neat thing with Iconik is that it could actually support online editing straight up, and that’s the next phase of my workflow, to accommodate that.”

Fortunately or unfortunately, at this time Aguilar is just starting to come off his busy season, so now he can step back and explore the new system. And transfer onto the new system all the material on the old source cards in that cabinet of his.

“[The new solution] is more efficient and has reduced costs since I am not buying all these drives anymore. I can reuse them now. But mostly, it has given me peace of mind that I know the data is secure,” says Aguilar. “I have been lucky in my career to be present for a lot of cool moments in the sport of paddling. It’s a small community and a very close-knit group. The peace of mind knowing that this history is preserved, well, that’s something I greatly appreciate. And I know my fellow paddlers also appreciate it.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Storage Roundtable

By Randi Altman

Every year in our special Storage Edition, we poll those who use storage and those who make storage. This year is no different. The users we’ve assembled for our latest offering weigh in on how they purchase gear, how they employ storage and cloud-based solutions. Storage makers talk about what’s to come from them, how AI and ML are affecting their tools, NVMe growth and more.

Enjoy…

Periscope Post & Audio, GM, Ben Benedetti

Periscope Post & Audio is a full-service post company with facilities in Hollywood and Chicago’s Cinespace. Both facilities provide a range of sound and picture finishing services for TV, film, spots, video games and other media.

Ben Benedetti

What types of storage are you using for your workflows?
For our video department, we have a large, high-speed Quantum media array supporting three color bays, two online edit suites, a dailies operation, two VFX suites and a data I/O department. The 15 systems in the video department are connected via 16Gb fiber.

For our sound department, we are using an Avid Nexis System via 6e Ethernet supporting three Atmos mix stages, two sound design suites, an ADR room and numerous sound-edit bays. All the CPUs in the facility are securely located in two isolated machine rooms (one for video on our second floor and one for audio on the first). All CPUs in the facility are tied via an IHSE KVM system, giving us incredible flexibility to move and deliver assets however our creatives and clients need them. We aren’t interested in being the biggest. We just want to provide the best and most reliable services possible.

Cloud versus on-prem – what are the pros and cons?
We are blessed with a robust pipe into our facility in Hollywood and are actively discussing potential cloud-based storage solutions with our engineering staff for the future. We are already using some cloud-based solutions for our building’s security system and CCTV systems as well as the management of our firewall. But the concept of placing client intellectual property in the cloud sparks some interesting conversations. We always need immediate access to the raw footage and sound recordings of our client productions, so I sincerely doubt we will ever completely rely on a cloud-based solution for the storage of our clients’ original footage. We have many redundancy systems in place to avoid slowdowns in production workflows. This is so critical. Any potential interruption in connectivity that is beyond our control gives me great pause.

How often are you adding or upgrading your storage?
Obviously, we need to be as proactive as we can so that we are never caught unready to take on projects of any size. It involves continually ensuring that our archive system is optimized correctly and requires our data management team to constantly analyze available space and resources.

How do you feel about the use of ML/AI for managing assets?
Any AI or ML automated process that helps us monitor our facility is vital. Technology advancements over the past decade have allowed us to achieve amazing efficiencies. As a result, we can give the creative executives and storytellers we service the time they need to realize their visions.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
As we have facilities in both Chicago and Hollywood, our ability to take advantage of Google cloud-based services for administration has been a real godsend. It’s not glamorous, but it’s extremely important to keeping our facilities running at peak performance.

The level of coordination we have achieved in that regard has been tremendous. Those low-tiered storage systems provide simple and direct solutions to our administrative and accounting needs, but when it comes to the high-performance requirements of our facility’s color bays and audio rooms, we still rely on the high-speed on-premises storage solutions.

For simple archiving purposes, a cloud-based solution might work very well, but for active work currently in production … we are just not ready to make that leap … yet. Of course, given Moore’s Law and the exponential advancement of technology, our position could change rapidly. The important thing is to remain open and willing to embrace change as long as it makes practical sense and never puts your client’s property at risk.

Panasas, Storage Systems Engineer, RW Hawkins

RW Hawkins

Panasas offers a scalable high-performance storage solution. Its PanFS parallel file system, delivered on the ActiveStor appliance, accelerates data access for VFX feature production, Linux-based image processing, VR/AR and game development, and multi-petabyte sized active media archives.

What kind of storage are you offering, and will that be changing in the coming year?
We just announced that we are now shipping the next generation of the PanFS parallel file system on the ActiveStor Ultra turnkey appliance, which is already in early deployment with five customers.

This new system offers unlimited performance scaling in 4GB/s building blocks. It uses multi-tier intelligent data placement to maximize storage performance by placing metadata on low-latency NVMe SSDs, small files on high IOPS SSDs and large files on high-bandwidth HDDs. The system’s balanced-node architecture optimizes networking, CPU, memory and storage capacity to prevent hot spots and bottlenecks, ensuring high performance regardless of workload. This new architecture will allow us to adapt PanFS to the ever-changing variety of workloads our customers will face over the next several years.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Absolutely. However, too many tiers can lead to frustration around complexity, loss of productivity and poor reliability. We take a hybrid approach, whereby each server has multiple types of storage media internal to one server. Using intelligent data placement, we put data on the most appropriate tier automatically. Using this approach, we can often replace a performance tier and a tier two active archive with one cost-effective appliance. Our standard file-based client makes it easy to gateway to an archive tier such as tape or an object store like S3.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI/ML is so widespread, it seems to be all encompassing. Media tools will benefit greatly because many of the mundane production tasks will be optimized, allowing for more creative freedom. From a storage perspective, machine learning is really pushing performance in new directions; low latency and metadata performance are becoming more important. Large amounts of unstructured data with rich metadata are the norm, and today’s file systems need to adapt to meet these requirements.

How has NVMe advanced over the past year?
Everyone is taking notice of NVMe; it is easier than ever to build a fast array and connect it to a server. However, there is much more to making a performant storage appliance than just throwing hardware at the problem. My customers are telling me they are excited about this new technology but frustrated by the lack of scalability, the immaturity of the software and the general lack of stability. The proven way to scale is to build a file system on top of these fast boxes and connect them into one large namespace. We will continue to augment our architecture with these new technologies, all the while keeping an eye on maintaining our stability and ease of management.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Today’s modern NAS can take on all the tasks that historically could only be done with SAN. The main thing holding back traditional NAS has been the client access protocol. With network-attached parallel clients, like Panasas’ DirectFlow, customers get advanced client caching, full POSIX semantics and massive parallelism over standard ethernet.

Regarding cloud, my customers tell me they want all the benefits of cloud (data center consolidation, inexpensive power and cooling, ease of scaling) without the vendor lock-in and metered data access of the “big three” cloud providers. A scalable parallel file system forms the core of a private cloud model that yields the benefits without the drawbacks. File-based access to the namespace will continue to be required for most non-web-based applications.

Goldcrest Post, New York, Technical Director, Ahmed Barbary

Goldcrest Post is an independent post facility, providing solutions for features, episodic TV, docs, and other projects. The company provides editorial offices, on-set dailies, picture finishing, sound editorial, ADR and mixing, and related services.

Ahmed Barbary

What types of storage are you using for your workflows?
Storage performance in the post stage is tremendously demanding. We are using multiple SAN systems in office locations that provide centralized storage and easy access to disk arrays, servers, and other dedicated playout applications to meet storage needs throughout all stages of the workflow.

While backup refers to duplicating the content for peace of mind, short-term retention, and recovery, archival signifies transferring the content from the primary storage location to long-term storage to be preserved for weeks, months, and even years to come. Archival storage needs to offer scalability, flexible and sustainable pricing, as well as accessibility for individual users and asset management solutions for future projects.

LTO has been a popular choice for archival storage for decades because of its affordable, high-capacity solutions with low write/high read workloads that are optimal for cold storage workflows. The increased need for instant access to archived content today, coupled with the slow roll-out of LTO-8, has made tape a less favorable option.
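For a sense of scale, the tape count for a given archive follows directly from the native (uncompressed) capacity of the LTO generation in use. In the sketch below the archive size is hypothetical; the capacities are the published native figures for each generation.

```python
# Roughly how many tapes does an archive need, per LTO generation?
import math

LTO_NATIVE_TB = {"LTO-6": 2.5, "LTO-7": 6.0, "LTO-8": 12.0}  # native, uncompressed
ARCHIVE_TB = 500  # hypothetical archive size

for generation, capacity_tb in LTO_NATIVE_TB.items():
    tapes = math.ceil(ARCHIVE_TB / capacity_tb)
    print(f"{generation}: {tapes} tapes (before any duplicate set for redundancy)")
```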

Cloud versus on-prem – what are the pros and cons?
The fact is each option has its positives and negatives, and understanding that and determining how both cloud and on-premises software fit into your organization are vital. So, it’s best to be prepared and create a point-by-point comparison of both choices.

When looking at the pros and cons of cloud vs. on-premises solutions, everything starts with an understanding of how these two models differ. With a cloud deployment, the vendor hosts your information and offers access through a web portal. This enables more mobility and flexibility of use for cloud-based software options. When looking at an on-prem solution, you are committing to local ownership of your data, hardware, and software. Everything is run on machines in your facility with no third-party access.

How often are you adding or upgrading your storage?
We keep track of new technologies and continuously upgrade our systems, but when it comes to storage, it’s a huge expense. When deploying a new system, we do our best to future-proof and ensure that it can be expanded.

How do you feel about the use of ML/AI for managing assets?
For most M&E enterprises, the biggest potential of AI lies in automatic content recognition, which can drive several path-breaking business benefits. For instance, most content owners have thousands of video assets.

Cataloging, managing, processing, and re-purposing this content typically requires extensive manual effort. Advancements in AI and ML algorithms have now made it possible to drastically cut down the time taken to perform many of these tasks. But there is still a lot of work to be done — especially as ML algorithms need to be trained, using the right kind of data and solutions, to achieve accurate results.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
Data sets have unique lifecycles. Early in the lifecycle, people access some data often, but the need for access drops drastically as the data ages. Some data stays idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation, while other data sets are actively read and modified throughout their lifetimes.
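In public-cloud object storage, that lifecycle can be written down as policy so objects migrate to colder, cheaper tiers automatically as they age. The sketch below uses AWS S3 lifecycle rules purely as an illustration; the bucket name, prefixes and day counts are hypothetical, credentials are assumed to be configured, and other providers offer equivalent mechanisms.

```python
# Example: tier finished-project media to colder storage classes as it ages.
import boto3

lifecycle = {
    "Rules": [{
        "ID": "age-out-finished-masters",
        "Filter": {"Prefix": "finished-projects/"},  # hypothetical key prefix
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30,  "StorageClass": "STANDARD_IA"},   # infrequent access after a month
            {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # long-term archive after a year
        ],
    }]
}

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-post-archive",  # hypothetical bucket name
    LifecycleConfiguration=lifecycle,
)
```

The same logic applies in reverse: anything that is still touched regularly should stay on a tier with low retrieval latency and no restore fees.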

Rohde & Schwarz, Product Manager, Storage Solutions, Dirk Thometzek

Rohde & Schwarz offers broadcast and media solutions to help companies grow in media production, management and delivery in the IP and wireless age.

Dirk Thometzek

What kind of storage are you offering, and will that be changing in the coming year?
The industry is constantly changing, so we monitor market developments and key demands closely. We will be adding new features to the R&S SpycerNode in the next few months that will enable our customers to get their creative work done without focusing on complex technologies. The R&S SpycerNode will be extended with JBODs, which will allow seamless integration with our erasure coding technology, guaranteeing complete resilience and performance.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Each workflow is different, so, consequently, almost no two systems are alike. The real artistry is to tailor storage systems according to real requirements without over-provisioning hardware or over-stressing budgets. Using different tiers can be very helpful in building effective systems, but they might introduce additional difficulties to the workflows if the system isn’t properly designed.

Rohde & Schwarz has developed R&S SpycerNode in a way that its performance is linear and predictable. Different tiers are aggregated under a single namespace, and our tools allow seamless workflows while complexity remains transparent to the users.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
Machine learning and artificial intelligence can be helpful to automate certain tasks, but they will not replace human intervention in the short term. It might not be helpful to enrich media with too much data because doing so could result in imprecise queries that return far too much content.

However, clearly defined changes in sequences or reoccurring objects — such as bugs and logos — can be used as a trigger to initiate certain automated workflows. Certainly, we will see many interesting advances in the future.
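As a loose sketch of that trigger idea (the recognition model itself is assumed and out of scope here), an automation layer might simply debounce per-frame detections of a known bug or logo before firing a workflow:

```python
def find_trigger_frame(confidences, threshold=0.8, min_consecutive=48):
    """Return the frame index where a watched object (e.g. a channel bug)
    has been detected for `min_consecutive` frames in a row, or None.

    `confidences` is a per-frame detection score from some recognition
    model -- a hypothetical input, not any specific product's output.
    """
    run = 0
    for frame, score in enumerate(confidences):
        run = run + 1 if score >= threshold else 0
        if run >= min_consecutive:
            return frame - min_consecutive + 1
    return None

# A downstream system could use the returned frame to kick off an
# automated workflow, such as segment marking or compliance review.
```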

How has NVMe advanced over the past year?
NVMe has very interesting aspects. Data rates and reduced latencies are admittedly quite impressive and are garnering a lot of interest. Unfortunately, we do see a trend inside our industry to be blinded by pure performance figures and exaggerated promises without considering hardware quality, life expectancy or proper implementation. Additionally, if well-designed and proven solutions exist that are efficient enough, then it doesn’t make sense to embrace a technology just because it is available.

R&S is dedicated to bringing high-end devices to the M&E market. We think that reliability and performance build the foundation for user-friendly products. Next year, we will update the market on how NVMe can be used in the most efficient way within our products.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We definitely see a trend away from classic Fibre Channel to Ethernet infrastructures for various reasons. For many years, NAS systems have been replacing central storage systems based on SAN technology for a lot of workflows. Unfortunately, standard NAS technologies will not support all necessary workflows and applications in our industry. Public and private cloud storage systems play an important role in overall concepts, but they can’t fulfil all necessary media production requirements or ease up workflows by default. Plus, when it comes to subscription models, [sometimes there could be unexpected fees]. In fact, we do see quite a few customers returning to their previous services, including on-premises storage systems such as archives.

When it comes to the very high data rates necessary for high-end media productions, NAS will relatively quickly reach its technical limits. Only block-level access can deliver the reliable performance necessary for uncompressed productions at high frame rates.

That does not necessarily mean Fibre Channel is the only solution. The R&S SpycerNode, for example, features a unified 100Gb/s Ethernet backbone, wherein clients and the redundant storage nodes are attached to the same network. This allows the clients to access the storage over industry-leading NAS technology or native block level while enabling true flexibility using state-of-the-art technology.

MTI Film, CEO, Larry Chernoff

Hollywood’s MTI Film is a full-service post facility, providing dailies, editorial, visual effects, color correction, and assembly for film, television, and commercials.

Larry Chernoff

What types of storage are you using for your workflows?
MTI uses a mix of spinning and SSD disks. Our volumes range from 700TB to 1000TB and are assigned to projects depending on the volume of expected camera files. The SSD volumes are substantially smaller and are used to play back ultra-large-resolution files, where several users are using the file.

Cloud versus on-prem — what are the pros and cons?
MTI only uses on-prem storage at the moment due to the real-time, full-resolution nature of our playback requirements. There is certainly a place for cloud-based storage but, as a finishing house, it does not apply to most of our workflows.

How often are you adding or upgrading your storage?
We are constantly adding storage to our facility and have added or replaced storage every year for the last five. We now have more than 8PB, with plans for more in the future.

How do you feel about the use of ML/AI for managing assets?
Sounds like fun!

What role might the different tiers of cloud storage play in the lifecycle of an asset?
For a post house like MTI, we consider cloud storage to be used only for “deep storage” since our bandwidth needs are very high. The amount of Internet connectivity we would require to replicate the workflows we currently have using on-prem storage would be prohibitively expensive for a facility such as MTI. Speed and ease of access is critical to being able to fulfill our customers’ demanding schedules.

OWC, Founder/CEO, Larry O’Connor

Larry O’Connor

OWC offers storage, connectivity, software, and expansion solutions designed to enhance, accelerate, and extend the capabilities of Mac- and PC-based technology. Their products range from the home desktop to the enterprise rack to the audio recording studio to the motion picture set and beyond.

What kind of storage are you offering, and will that be changing in the coming year?
OWC will be expanding our Jupiter line of NAS storage products in 2020 with an all-new external flash-based array. We will also be launching the OWC ThunderBay Flex 8, a three-in-one Thunderbolt 3 storage, docking, and PCIe expansion solution for digital imaging, VFX, video production, and video editing.

Are certain storage tiers more suitable for different asset types, workflows etc?
Yes. SSD and NVMe are better for on-set storage and editing. Once you are finished and looking to archive, HDDs are a better solution for long-term storage.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
We see U.2 SSDs as a trend that can help storage in this space. Also, solutions that allow the use of external docking of U.2 across different workflow needs.

How has NVMe advanced over the past year?
We have seen NVMe technology become higher in capacity, higher in performance, and substantially lower in power draw. Yet even with all the improving performance, costs are lower today versus 12 months ago.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
I see both still having their place — I can’t speak to if one will take over the other. SANs provide other services that typically go hand in hand with M&E needs.

As for cloud, I can see some more cloud coming in, but for M&E on-site needs, it just doesn’t compete anywhere near with what the data rate demand is for editing, etc. Everything independently has its place.

EditShare, VP of Product Management, Sunil Mudholkar

EditShare offers a range of media management solutions, from ingest to archive with a focus on media and entertainment.

Sunil Mudholkar

What kind of storage are you offering and will that be changing in the coming year?
EditShare currently offers RAID and SSD, along with our nearline SATA HDD-based storage. We are on track to deliver NVMe- and cloud-based solutions in the first half of 2020. The latest major upgrade of our file system and management console, EFS2020, enables us to migrate to emerging technologies, including cloud deployment and using NVMe hardware.

EFS can manage and use multiple storage pools, enabling clients to use the most cost-effective tiered storage for their production, all while keeping that single namespace.

Are certain storage tiers more suitable for different asset types, workflows etc?
Absolutely. It’s clearly financially advantageous to have varying performance tiers of storage that are in line with the workflows the business requires. This also extends to the cloud, where we are seeing public cloud-based solutions augment or replace both high-performance and long-term storage needs. Tiered storage enables clients to be at their most cost-effective by including parking storage and cloud storage for DR, while keeping SSD and NVMe storage ready and primed for their high-end production.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI and ML have somewhat of an advantage for storage when it comes to things like algorithms that are designed to automatically move content between storage tiers to optimize costs. This has been commonplace in the distribution side of the ecosystem for a long time with CDNs. ML and AI have a great ability to impact the Opex side of asset management and metadata by helping to automate very manual, repetitive data entry tasks through audio and image recognition, as an example.

AI can also assist by removing mundane human-centric repetitive tasks, such as logging incoming content. AI can assist with the growing issue of unstructured and unmanaged storage pools, enabling the automatic scanning and indexing of every piece of content located on a storage pool.
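A minimal sketch of that scan-and-index step, assuming nothing beyond filesystem access (a real MAM would layer media probing and recognition on top of something like this):

```python
import hashlib
import json
import os

def index_storage_pool(root):
    """Walk a storage pool and build a basic index entry for every file.

    A real MAM would add probed media metadata and AI-derived tags on top;
    this sketch records only what the filesystem itself knows.
    """
    index = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            with open(path, "rb") as f:
                # Hash the first 1MB as a cheap fingerprint for duplicate spotting.
                fingerprint = hashlib.sha1(f.read(1 << 20)).hexdigest()
            index.append({
                "path": path,
                "bytes": stat.st_size,
                "modified": stat.st_mtime,
                "fingerprint": fingerprint,
            })
    return index

if __name__ == "__main__":
    # "/mnt/pool" is a stand-in path for an unmanaged storage pool.
    with open("pool_index.json", "w") as out:
        json.dump(index_storage_pool("/mnt/pool"), out, indent=2)
```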

How has NVMe advanced over the past year?
Like any other storage medium, when it's first introduced there are limited use cases that make sense financially, and only a certain few can afford to deploy it. As the technology scales and changes in form factor, and pricing becomes more competitive and in line with other storage options, it can then become more mainstream. This is what we are starting to see with NVMe.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Yes, NAS has overtaken SAN. It’s easier technology to deal with — this is fairly well acknowledged. It’s also easier to find people/talent with experience in NAS. Cloud will start to replace more NAS workflows in 2020, as we are already seeing today. For example, our ACL media spaces project options within our management console were designed for SAN clients migrating to NAS. They liked the granular detail that SAN offered, but wanted to migrate to NAS. EditShare’s ACL enables them to work like a SAN but in a NAS environment.

Zoic Studios CTO Saker Klippsten

Zoic Studios is an Emmy-winning VFX company based in Culver City, California, with sister offices in Vancouver and NYC. It creates computer-generated special effects for commercials, films, television and video games.

Saker Klippsten

What types of projects are you working on?
We work on a range of projects for series, film, commercial and interactive games (VR/AR). Most of the live-action projects are mixed with CG/VFX and some full-CG animated shots. In addition, there is typically some form of particle or fluid effects simulation going on, such as clouds, water, fire, destruction or other surreal effects.

What types of storage are you using for those workflows?
Cryogen – Off-the-shelf tape/disk/chip. Access time: more than one day. Mostly tape-based and completely offline, which requires human intervention to load tapes or restore from drives.
Freezing – Tape robot library. Access time: less than half a day. Tape-based and in the robot; no human intervention required.
Cold – Spinning disk. Access time: slow (online). Disaster recovery and long-term archiving.
Warm – Spinning disk. Access time: medium (online). Data that still needs to be accessed promptly and transferred quickly (asset depot).
Hot – Chip-based. Access time: fast (online). SSD generic active production storage.
Blazing – Chip-based. Access time: uber-fast (online). NVMe dedicated storage for 4K and 8K playback, databases and specific simulation workflows.
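Purely as an illustration of how a pipeline might route data across tiers like these (the access times below are placeholders, not Zoic's actual figures), the selection logic can be as simple as picking the slowest tier that still meets a workflow's latency needs:

```python
# Rough model of the tiers described above; access times are placeholders.
TIERS = [
    # (name, worst-case access time in seconds, typical use)
    ("blazing", 0.001, "NVMe for 4K/8K playback and simulation scratch"),
    ("hot",     0.01,  "SSD general active production storage"),
    ("warm",    1.0,   "spinning-disk asset depot"),
    ("cold",    60.0,  "spinning-disk DR and long-term archive"),
    ("freezing", 12 * 3600, "tape robot, no human intervention"),
    ("cryogen", 24 * 3600, "offline tape/disk, human intervention required"),
]

def cheapest_tier(max_wait_seconds):
    """Return the slowest (and usually cheapest) tier that still meets the
    workflow's access-time requirement."""
    suitable = [t for t in TIERS if t[1] <= max_wait_seconds]
    return suitable[-1][0] if suitable else None

print(cheapest_tier(5))      # -> "warm"
print(cheapest_tier(0.005))  # -> "blazing"
```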

Cloud versus on-prem – what are the pros and cons?
The great debate! I tend not to look at it as pro vs. con, but as a question of where you are as a company. Many factors are involved; despite what many are led to believe, there is no one size that fits all, and neither cloud nor on-prem alone can solve all your workflow and business challenges.

Cinemax’s Warrior (Credit: HBO/David Bloomer)

There are workflows that are greatly suited for the cloud and others that are potentially cost-prohibitive for a number of reasons, such as the size of the data set being generated. Dynamics cache simulations are a good example; they can quickly generate tens or sometimes hundreds of TBs. If the workflow requires you to transfer this data on premises for review, it could take a very long time. Other workflows, such as 3D CG-generated data, can take better advantage of the cloud. They typically have small source file payloads that need to be uploaded and then only require final frames to be downloaded, which is much more manageable. And depending on the size of your company and the level of technical people on hand, the cloud itself can be a problem.

What triggers buying more storage in your shop?
Storage tends to be one of the largest and most significant purchases at many companies. End users do not have a clear concept of what happens at the other end of the wire from their workstation.

All they know is that there is never enough storage and it’s never fast enough. Not investing in the right storage can not only be detrimental to the delivery and production of a show, but also to the mental focus and health of the end users. If artists are constantly having to stop and clean up/delete, it takes them out of their creative rhythm and slows down task completion.

If the storage is not performing properly and is slow, this will not only have an impact on delivery, but the end user might be afraid they are being perceived as being slow. So what goes into buying more storage? What type of impact will buying more storage have on the various workflows and pipelines? Remember, if you are a mature company you are buying 2TB of storage for every 1TB required for DR purposes, so you have a complete up-to-the-hour backup.
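That two-for-one rule is easy to overlook when budgeting. A trivial worked example (the multiplier is the one quoted above; the growth headroom is an arbitrary illustrative figure):

```python
def purchase_capacity_tb(required_tb, dr_multiplier=2.0, growth_headroom=0.2):
    """Capacity to actually buy: DR mirror plus a little growth headroom.

    dr_multiplier=2.0 reflects the 2TB-bought-per-1TB-needed rule above;
    the 20% headroom is an arbitrary illustrative figure.
    """
    return required_tb * dr_multiplier * (1 + growth_headroom)

# A show needing 500TB of working storage implies roughly 1.2PB purchased.
print(purchase_capacity_tb(500))  # 1200.0
```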

Do you see ML/AI as important to your content strategy?
We have been using various layers of ML and heuristics sprinkled throughout our content workflows and pipelines. As an example, we look at the storage platforms we use to understand what’s on our storage, how and when it’s being used, what it’s being used for and how it’s being accessed. We look at the content to see what it contains and its characteristics. What are the overall costs to create that content? What insights can we learn from it for similarly created content? How can we reuse assets to be more efficient?

Dell Technologies, CTO, Media & Entertainment, Thomas Burns

Thomas Burns

Dell offers technologies across workstations, displays, servers, storage, networking and VMware, and partnerships with key media software vendors to provide media professionals the tools to deliver powerful stories, faster.

What kind of storage are you offering, and will that be changing in the coming year?
Dell Technologies offers a complete range of storage solutions from Isilon all-flash and disk-based scale-out NAS to our object storage, ECS, which is available as an appliance or a software-defined solution on commodity hardware. We have also developed and open-sourced Pravega, a new storage type for streaming data (e.g. IoT and other edge workloads), and continue to innovate in file, object and streaming solutions with software-defined and flexible consumption models.

Are certain storage tiers more suitable for different asset types, workflows etc?
Intelligent tiering is crucial to building a post and VFX pipeline. Today’s global pipelines must include software that distinguishes between hot data on the fastest tier and cold or versioned data on less performant tiers, especially in globally distributed workflows. Bringing applications to the media rather than unnecessarily moving media into a processing silo is the key to an efficient production.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
New developments in storage class memory (SCM) — including the use of carbon nanotubes to create a nonvolatile, standalone memory product with speeds rivaling DRAM without needing battery backup — have the potential to speed up media workflows and eliminate AI/ML bottlenecks. New protocols such as NVMe allow much deeper I/O queues, overcoming today’s bus bandwidth limits.

GPUDirect enables direct paths between GPUs and network storage, bypassing the CPU for lower latency access to GPU compute — desirable for both M&E and AI/ML applications. Ethernet mesh, a.k.a. Leaf/Spine topologies, allow storage networks to scale more flexibly than ever before.

How has NVMe advanced over the past year?
Advances in I/O virtualization make NVMe useful in hyper-converged infrastructure, by allowing different virtual machines (VMs) to share a single PCIe hardware interface. Taking advantage of multi-stream writes, along with vGPUs and vNICs, allows talent to operate more flexibly as creative workstations start to become virtualized.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
IP networks scale much better than any other protocol, so NAS allows on-premises workloads to be managed more efficiently than SAN. Object stores (the basic storage type for cloud services) support elastic workloads extremely well and will continue to be an integral part of public, hybrid and private cloud media workflows.

ATTO, Manager, Products Group, Peter Donnelly

ATTO network and storage connectivity products are purpose-made to support all phases of media production, from ingest to final archiving. ATTO offers an ecosystem of high-performance connectivity adapters, network interface cards and proprietary software.

Peter Donnelly

What kind of storage are you offering, and will that be changing in the coming year?
ATTO designs and manufactures storage connectivity products, and although we don’t manufacture storage, we are a critical part of the storage ecosystem. We regularly work with our customers to find the best solutions to their storage workflow and performance challenges.

ATTO designs products that use a wide variety of storage protocols. SAS, SATA, Fibre Channel, Ethernet and Thunderbolt are all part of our core technology portfolio. We’re starting to see more interest in NVMe solutions. While NVMe has already seen some solid growth as an “inside-the-box” storage solution, scalability, cost and limited management capabilities continue to limit its adoption as an external storage solution.

Data protection is still an important criterion in every data center. We are seeing a shift from traditional hardware RAID and parity RAID to software RAID and parity code implementations. Disk capacity has grown so quickly that it can take days to rebuild a RAID group with hardware controllers. Instead, we see our customers taking advantage of rapidly dropping storage prices and using faster, reliable software RAID implementations with basic HBA hardware.
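To make the software-parity point concrete, here is a toy sketch of the XOR idea behind simple parity RAID; production software RAID and erasure-coded systems are far more involved than this:

```python
from functools import reduce

def xor_parity(blocks):
    """Byte-wise XOR of equal-sized blocks: the parity block in simple parity RAID."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data blocks and their parity.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_parity([d0, d1, d2])

# If d1 is lost, it can be rebuilt from the surviving data plus the parity.
rebuilt = xor_parity([d0, d2, parity])
assert rebuilt == d1
```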

How has NVMe advanced over the past year?
For inside-the-box storage needs, we have absolutely seen adoption skyrocket. It’s hard to beat the price-to-performance ratio of NVMe drives for system boot, application caching and similar use cases.

ATTO is working independently and with our ecosystem partners to bring those same benefits to shared, networked storage systems. Protocols such as NVMe-oF and FC-NVMe are enabling technologies that are starting to mature, and we see these getting further attention in the coming year.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We see customers looking for ways to more effectively share storage resources. Acquisition and ongoing support costs, as well as the ability to leverage existing technical skills, seem to be important factors pulling people toward Ethernet-based solutions.
However, there is no free lunch, and these same customers aren’t able to compromise on performance and latency concerns, which are important reasons why they used SANs in the first place. So there’s a lot of uncertainty in the market today. Since we design and market products in both the NAS and SAN spaces, we spend a lot of time talking with our customers about their priorities so that we can help them pick the solutions that best fit their needs.

Masstech, CTO, Mike Palmer

Masstech creates intelligent storage and asset lifecycle management solutions for the media and entertainment industry, focusing on broadcast and video content storage management with IT technologies.

Mike Palmer

What kind of storage are you offering, and will that be changing in the coming year?
Masstech products are used to manage a combination of any or all of these kinds of storage. Masstech allows content to move without friction across and through all of these technologies, most often using automated workflows and unified interfaces that hide the complexity otherwise required to directly manage content across so many different types of storage.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
One of the benefits of having such a wide range of storage technologies to choose from is that we have the flexibility to match application requirements with the optimum performance characteristics of different storage technologies in each step of the lifecycle. Users now expect that content will automatically move to storage with the optimal combination of speed and price as it progresses through workflow.

In the past, HSM was designed to handle this task for on-prem storage. The challenge is much wider now with the addition of a plethora of storage technologies and services. Rather than moving between just two or three tiers of on-prem storage, content now often needs to flow through a hybrid environment of on-prem and cloud storage, often involving multiple cloud services, each with three or four sub-tiers. Making that happen in a seamless way, both to users and to integrated MAMs and PAMs, is what we do.

What do you see are the big technology trends that can help storage for M&E?
Cloud storage pricing continues to drop, along with advances in storage density in both spinning disk and solid state. All of these are interrelated and have the general effect of lowering costs for the end user. For those with specific business requirements that drive on-prem storage, the availability of higher-density tape and optical disks is enabling petabytes of very efficient cold storage in less space than a single rack.

How has NVMe advanced over the past year?
In addition to the obvious application of making media available more quickly, the greatest value of NVMe within M&E may be found in enabling faster search of both structured and unstructured metadata associated with media. Yes, we need faster access to media, but in many cases we must first find the media before it can be accessed. NVMe can make that search experience, particularly for large libraries, federated data sets and media lakes, lightning quick.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
Just as AWS, Azure and Wasabi, among other large players, have replaced many instances of on-prem NAS, so have Box, Dropbox, Google Drive and iCloud replaced many (but not all) of the USB drives gathering dust in the bottom of desk drawers. As NAS is built on top of faster and faster performing technologies, it is also beginning to put additional pressure on SAN – particularly for users who are sensitive to price and the amount of administration required.

Backblaze, Director of Product Marketing, M&E, Skip Levens

Backblaze offers easy-to-use cloud backup, archive and storage services. With over 12 years of experience and more than 800 petabytes of customer data under management, Backblaze offers cloud storage to anyone looking to create, distribute and preserve their content forever.

What kind of storage are you offering and will that be changing in the coming year?
At Backblaze, we offer a single class, or tier, of storage where everything’s active and immediately available wherever you need it, and it’s protected better than it would be on spinning disk or RAID systems.

Skip Levens

Are certain storage tiers more suitable for different asset types, workflows, etc?
Absolutely. For example, animators need different storage than a team of editors all editing a 4K project at the same time. And keeping your entire content library on your shared storage could get expensive indeed.

We’ve found that users can give up all that unneeded complexity and cost that gets in the way of creating content in two steps:
– Step one is getting off of the “shared storage expansion treadmill” and buying just enough on-site shared storage that fits your team. If you’re delivering a TV show every week and need a SAN, make it just large enough for your work in process and no larger.

– Step two is to get all of your content into active cloud storage. This not only frees up space on your shared storage, but makes all of your content highly protected and highly available at the same time. Since most of your team probably use MAM to find and discover content, the storage that assets actually live on is completely transparent.

Now life gets very simple for creative support teams managing that workflow: your shared storage stays fast and lean, and you can stop paying for storage that doesn’t fit that model. This could include getting rid of LTO, big JBODs or anything with a limited warranty and a maintenance contract.

What do you see are the big technology trends that can help storage for M&E?
For shooters and on-set data wranglers, the new class of ultra-fast flash drives dramatically speeds up collecting massive files with extremely high resolution. Of course, raw content isn’t safe until it’s ingested, so even after moving shots to two sets of external drives or a RAID cart, we’re seeing cloud archive on ingest. Uploading files from a remote location, before you get all the way back to the editing suite, unlocks a lot of speed and collaboration advantages — the content is protected faster, and your ingest tools can start making proxy versions that everyone can start working on, such as grading, commenting, even rough cuts.
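A bare-bones sketch of that archive-on-ingest pattern, assuming a generic upload callable (no particular cloud SDK is implied) and ffmpeg for the proxy:

```python
import subprocess
from pathlib import Path

def archive_and_proxy(camera_file, upload):
    """Sketch of 'archive on ingest': push the raw file to cloud storage,
    then cut a lightweight proxy that editors can start working with.

    `upload` is a hypothetical callable (for example, a wrapper around your
    cloud provider's SDK); no specific service or API is assumed here.
    """
    src = Path(camera_file)
    upload(src)  # the original is protected off-site before anything else happens

    proxy = src.with_suffix(".proxy.mp4")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-vf", "scale=-2:720",        # 720p proxy, preserve aspect ratio
        "-c:v", "libx264", "-crf", "23",
        "-c:a", "aac",
        str(proxy),
    ], check=True)
    return proxy
```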

We’re also seeing cloud-delivered workflow applications. The days of buying and maintaining a server and storage in your shop to run an application may seem old-fashioned. Especially when that entire experience can now be delivered from the cloud and on-demand.

Iconik, for example, is a complete, personalized deployment of a project collaboration, asset review and management tool – but it lives entirely in the cloud. When you log in, your app springs to life instantly in the cloud, so you only pay for the application when you actually use it. Users just want to get their creative work done and can’t tell it isn’t a traditional asset manager.

How has NVMe advanced over the past year?
NVMe means flash storage can completely ditch legacy storage controllers like the ones on traditional SATA hard drives. When you can fit 2TB of storage on a stick that’s only 22 millimeters by 80 millimeters — not much larger than a stick of gum — and it’s 20 times faster than an external spinning hard drive while drawing only about 3.5 watts, that’s a game changer for data wrangling and camera cart offload right now.

And that’s on PCIe 3. The PCI Express standard is evolving faster and faster, too. PCIe 4 motherboards are starting to come online now, PCIe 5 was finalized in May, and PCIe 6 is already in development. When every generation doubles the available bandwidth that can feed that NVMe storage, the future is very, very bright for NVMe.
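Those doublings add up quickly. A back-of-the-envelope calculation, using the approximate usable per-lane throughput of PCIe 3.0 and assuming each later generation roughly doubles it:

```python
# Approximate usable bandwidth per PCIe lane, roughly doubling each generation.
PCIE3_PER_LANE_GBPS = 0.985  # ~985 MB/s per lane for PCIe 3.0

for gen in (3, 4, 5, 6):
    per_lane = PCIE3_PER_LANE_GBPS * 2 ** (gen - 3)
    print(f"PCIe {gen}.0 x4 (a typical NVMe slot): ~{per_lane * 4:.1f} GB/s")
# Prints roughly 3.9, 7.9, 15.8 and 31.5 GB/s for generations 3 through 6.
```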

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
For users who work in widely distributed teams, the cloud is absolutely eating NAS. When the solution driving your team’s projects and collaboration is the dashboard and focus of the team — and active cloud storage seamlessly supports all of the content underneath — it no longer needs to be on a NAS.

But for large teams that do fast-paced editing and creation, the answer to “what is the best shared storage for our team” is still usually a SAN, or tightly-coupled, high-performance NAS.

Either way, by moving content and project archives to the cloud, you can keep SAN and NAS costs in check and have a more productive workflow, and more opportunities to use all that content for new projects.

Behind the Title: Matter Films president Matt Moore

Part of his job is finding talent and production partners. “We want the most innovative and freshest directors, cinematographers and editors from all over the world.”

NAME: Matt Moore

COMPANY: Phoenix and Los Angeles’ Matter Films
and OH Partners

CAN YOU DESCRIBE YOUR COMPANY?
Matter Films is a full-service production company that takes projects from script to screen — doing both pre-production and post in addition to producing content. We are joined by our sister company OH Partners, a full-service advertising agency.

WHAT’S YOUR JOB TITLE?
President of Matter Films and CCO of OH Partners.

WHAT DOES THAT ENTAIL?
I’m lucky to be the only person in the company who gets to serve on both sides of the fence. Knowing that, I think that working with Matter and OH gives me a unique insight into how to meet our clients’ needs best. My number one job is to push both teams to be as innovative and outside of the box as possible. A lot of people do what we do, so I work on our points of differentiation.

Gila River Hotels and Casinos – Sports Partnership

I spend a lot of time finding talent and production partners. We want the most innovative and freshest directors, cinematographers and editors from all over the world. That talent must push all of our work to be the best. We then pair that partner with the right project and the right client.

The other part of my job is figuring out where the production industry is headed. We launched Matter Films because we saw a change within the production world — many production companies weren’t able to respond quickly enough to the need for social and digital work, so we started a company able to address that need and then some.

My job is to always be selling ideas and proposing different avenues we could pursue with Matter and with OH. I instill trust in our clients by using our work as a proof point that the team we’ve assembled is the right choice to get the job done.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
People assumed when we started Matter Films that we would keep everything in-house and have no outside partners, and that’s just not the case. Matter actually gives us even more resources to find those innovators from across the globe. It allows us to do more.

The variation in budget size that we accept at Matter Films would also surprise people. We’ll take on projects with anywhere from $1,000 to one million-plus budgets. We’ve staffed ourselves in such a way that even small projects can be profitable.

WHAT’S YOUR FAVORITE PART OF THE JOB?
It sounds so cliché, but I would have to say the people. I’m around people that I genuinely want to see every single day. I love when we all get together for our meetings, because while we do discuss upcoming projects, we also goof off and just hang out. These are the people I go into battle with every single day. I choose to go into the battle with people that I whole-heartedly care about and enjoy being with. It makes life better.

WHAT’S YOUR LEAST FAVORITE?
What’s tough is how fast this business changes. Every day there’s a new conference or event, and just when you think an idea you’ve had is cutting edge and brand new, you realize you have to keep going and push to be more innovative. Just when you get caught up, you’re already behind. The big challenge is how you’re going to constantly step up your game.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
I’m an early morning person. I can get more done if I start before everybody else.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I was actually pre-med for two years in college with the desire to be a surgeon. When I was an undergrad, I got an abysmal grade on one of our exams and the professor pulled me aside and told me that a score that low proved that I truly did not care about learning the material. He allowed me to withdraw from the class to find something I was more passionate about, and that was life changing.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I found out in college. I genuinely just loved making a product that either entertained or educated people. I started in the news business, so every night I would go home after work and people could tell me about the news of the day because of what I’d written, edited and put on TV.

People knew about what was going on because of the stories that we told. I have a great love for telling stories and having others engage with that story. If you’re good at the job, people’s lives will be different as a result of what you create.

Barbuda Ocean Club

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We just wrapped a large shoot in Maryland for Live Casino, and a different tourism project for a luxury property in Barbuda. We’re currently developing our work with Virgin, and we have an upcoming shoot for a technology company focused on developing autonomous driving and green energy. We’re all over the map with the range of work that we have in the pipeline.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
One of my favorite projects actually took place before Matter Films was officially around, but we had a lot of the same team. We did an environmentally sensitive project for Sedona, Arizona, called Sedona Secret 7. Our campaign told the millions of tourists who arrive there how to find other equally beautiful destinations in and around Sedona instead of just the ones everyone already knew.

It was one of those times when advertising wasn’t about selling something, but about saving something.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, a pair of AirPods and a laptop. The Matter Films team gave me AirPods for my birthday, so those are extra special!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
My usage on Instagram is off the charts; it’s embarrassing. While I do look at everyone’s vacation photos or what workout they did that day, I also use Instagram as a talent sourcing tool for a lot of work purposes: I follow directors, animation studios and tons of artists that I either get inspiration from or want to work with.

A good percentage of people I follow are creatives that I want to work with at some point. I also reach out to people all the time for potential collaborations.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I love outdoor adventures. Some days I’ll go on a crazy hike here in Arizona or rent a four-wheeler and explore the desert or mountains. I also love just hanging out with my kids — they’re a great age.

Redshift integrates Cinema 4D noises, nodes and more

Maxon and Redshift Rendering Technologies have released Redshift 3.0.12, which has native support for Cinema 4D noises and deeper integration with Cinema 4D, including the option to define materials using Cinema 4D’s native node-based material system.

Cinema 4D noise effects have been in demand within other 3D software packages because of their flexibility, efficiency and look. Native support in Redshift means that users of other DCC applications can now access Cinema 4D noises by using Redshift as their rendering solution. Procedural noise allows artists to easily add surface detail and randomness to otherwise perfect surfaces. Cinema 4D offers 32 different types of noise and countless variations based on settings. Native support for Cinema 4D noises means Redshift can preserve GPU memory while delivering high-quality rendered results.
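Cinema 4D's noise shaders are proprietary, but the underlying idea of procedural noise is straightforward to sketch. A minimal 1D value-noise example (in no way Maxon's implementation) hashes integer lattice points and smoothly blends between them:

```python
import math

def _lattice(i, seed=0):
    """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
    n = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return (n ^ (n >> 16)) / 0x100000000

def value_noise(x, seed=0):
    """Smoothly interpolated 1D value noise: surface detail from pure math."""
    i = math.floor(x)
    frac = x - i
    t = frac * frac * (3 - 2 * frac)  # smoothstep easing
    return _lattice(i, seed) * (1 - t) + _lattice(i + 1, seed) * t

def fbm(x, octaves=4):
    """Layer a few octaves for richer, fractal-looking detail."""
    return sum(value_noise(x * 2 ** o) / 2 ** o for o in range(octaves))
```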

Redshift 3.0.12 provides content creators deeper integration of Redshift within Cinema 4D. Redshift materials can now be defined using Cinema 4D’s nodal material framework, introduced in Release 20. As well, Redshift materials can use the Node Space system introduced in Release 21, which combines the native nodes of multiple render engines into a single material. Redshift is the first to take advantage of the new API in Cinema 4D to implement its own Node Spaces. Users can now also use any Cinema 4D view panel as a Redshift IPR (interactive preview render) window, making it easier to work within compact layouts and interact with a scene while developing materials and lighting.

Redshift 3.0.12 is immediately available from the Redshift website.

Maxon acquired Redshift in April 2019.

Creative Outpost buys Dolby-certified studios, takes on long-form

After acquiring the studio assets from now-closed Angell Sound, commercial audio house Creative Outpost is now expanding its VFX and audio offerings by entering the world of long-form audio. Already in picture post on its first Netflix series, the company is now open for long-form ADR, mix and review bookings.

“Space is at a premium in central Soho, so we’re extremely privileged to have been able to acquire four studios with large booths that can accommodate crowd sessions,” say Creative Outpost co-founders Quentin Olszewski and Danny Etherington. “Our new friends in the ADR world have been super helpful in getting the word out into the wider community, having seen the size, build quality and location of our Wardour Street studios and how they’ll meet the demands of the growing long-form SVOD market.”

With the Angell Sound assets in place, the team at Creative Outpost has completed a number of joint picture and sound projects for online and TV. Focusing two of its four studios primarily on advertising work, Creative Outpost has provided sound design and mix on campaigns including Barclays’ “Team Talk,” Virgin Mobile’s “Sounds Good,” Icee’s “Swizzle, Fizzle, Freshy, Freeze,” Green Flag’s “Who The Fudge Are Green Flag,” Santander’s “Antandec” and Coca Cola’s “Coaches.” Now, the team’s ambitions are to apply its experience from the commercial world to further include long-form broadcast and feature work. Its Dolby-approved studios were built by studio architect Roger D’Arcy.

The studios are running Avid Pro Tools Ultimate, Avid hardware controllers and Neumann U87 microphones. They are also set up for long-form/ADR work with EdiCue and EdiPrompt, Source-Connect Pro and ISDN capabilities, Sennheiser MKH 416 and DPA D:screet microphones.

“It’s an exciting opportunity to join Creative Outpost with the aim of helping them grow the audio side of the company,” says Dave Robinson, head of sound at Creative Outpost. “Along with Tom Lane — an extremely talented fellow ex-Angell engineer — we have spent the last few months putting together a decent body of work to build upon, and things are really starting to take off. As well as continuing to build our core short-form audio work, we are developing our long-form ADR and mix capabilities and have a few other exciting projects in the pipeline. It’s great to be working with a friendly, talented bunch of people, and I look forward to what lies ahead.”

 

Localization: Removing language barriers on global content

By Jennifer Walden

Foreign films aren’t just for cinephiles anymore. Streaming platforms are serving up international content to the masses. There are incredible series — like Netflix’s Spanish series Money Heist, Danish series The Rain and the German series Dark — that would have been otherwise unknown to American audiences. The same holds true for American content reaching foreign audiences. For instance, Starz series American Gods is available in French. Great stories are always worth sharing and language shouldn’t be the barrier that holds back the flood of global entertainment.

Now I know there are purists who feel a film or show should be experienced in its original language, but admit it, sometimes you just don’t feel like reading subtitles. (Or, if you do, you can certainly watch those aforementioned shows with subtitles and hear the original language.) So you pop on the audio for your preferred language and settle in.

Chris Carey in the Burbank studio

Dubbing used to be a poorly lipsynced affair, with bad voiceovers that didn’t fit the characters on screen in any capacity. Not so anymore. In fact, dubbing has evolved so much that it’s earned a new moniker — localization. The increased offering of globally produced content has dramatically increased the demand for localization. And as they say, practice makes perfect… or better, anyway.

Two major localization providers — BTI Studios and Iyuno Media Group — have recently joined forces under the Iyuno brand, which is now headquartered in London. Together, they have 40 studio facilities in 30 different countries and support 82 different languages, according to Chris Carey, the company’s chief revenue officer and managing director of the Americas.

Those are impressive numbers. But what does this mean for the localization end result?

Iyuno is able to localize audio locally. The language localization for a specific market is happening in that market. This means the language is current. The actors aren’t just fluent; they’re native speakers. “Dialects change really fast. Slang changes. Colloquialisms change. These things are changing all the time, and if you’re not in the market with the target audience you can miss a lot of things that a good geographically diverse network of performers can give you,” says Carey.

Language expertise doesn’t end with actor performance. There are also the scripts and subtitles to think about. Localization isn’t a straight translation. There’s the process of script adaptation in which words are chosen based on meaning (of course) but also on syllable count in order to match lipsync as closely as possible. It’s a feat that requires language fluency and creativity.

BTI France

“If you think about the Eastern languages, and the European and Eastern European languages, they use a lot of consonants and syllables to make a simple English word. So we’re rewriting the script to use a different word that means the same thing but will fit better with the actor on-screen. So when the actor says the line in Polish and it comes out of what appears to be the mouth of the American actor on-screen, the lipsync is better,” explains Carey.
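A crude way to picture that adaptation step (the syllable counter and the candidate phrasings below are stand-ins; real adapters work from full scripts and timed lip movements):

```python
def syllable_estimate(word):
    """Very rough syllable count based on runs of vowels; purely illustrative
    and a poor fit for many languages."""
    vowels = "aeiouyàéèêëïîôùûüö"
    count, prev = 0, False
    for ch in word.lower():
        is_vowel = ch in vowels
        if is_vowel and not prev:
            count += 1
        prev = is_vowel
    return max(count, 1)

def line_syllables(line):
    return sum(syllable_estimate(w) for w in line.split())

def best_adaptation(original_line, candidate_translations):
    """Pick the candidate whose syllable count is closest to the on-screen line,
    so the dubbed read fits the actor's mouth movements more naturally."""
    target = line_syllables(original_line)
    return min(candidate_translations,
               key=lambda c: abs(line_syllables(c) - target))

# Hypothetical example: choose among phrasings that mean roughly the same thing.
print(best_adaptation("I can't do this anymore",
                      ["Je n'en peux plus",
                       "Je ne peux plus faire ça maintenant"]))
```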

Iyuno doesn’t just do translations — dubbing and subtitles — to and from English. Of the 82 languages it covers, it can translate any one of those into another. This process requires a network of global linguists and a cloud-based infrastructure that can support tons of video streaming and asset sharing — including the “dubbing script” that’s been adapted into the destination language.

The magic of localization is 49% script adaptation, 49% dialogue editing and 2% processing in Avid Pro Tools, like time shifting and time compression/expansion to finesse the sync. “You’re looking at the actors on screen and watching their lip movement and trying to adjust this different language to come out of their mouth as close as possible,” says Carey. “There isn’t an automated-fit sound tool that would apply for localization. The actor, the director and the engineer are in the studio together working on the sync, adjusting the lines and editing the takes.”

As the voice record session is happening, “sometimes the actor will suggest a better way to say a line, too, and they’ll do an ‘as recorded script,’” says Carey. “They’ll make red lines and markups to the script, and all of that workflow we have managed into our technology platform, so we can deliver back to the customer the finished dub, the mix, and the ‘as recorded script’ with all of the adaptations and modifications that we had done.”

Darkest Hours is just one of the many titles they’ve worked on.

Iyuno’s technology platform (its cloud-based collaboration infrastructure) is custom-built. It can be modified and updated as needed to improve the workflow. “That backend platform does all the script management and file asset management; we’ve made the workflow very efficient. We break all the scripts down into line counts by actor, so he/she can do the entire session’s worth of lines throughout that show. Then we’ll bring in the next actor to do it,” says Carey.

Pro Tools is the de facto DAW for all the studios in the Iyuno Media Group. Having one DAW as the standard makes it easy to share sessions between facilities. When it comes to mic selection, Carey says the studios’ engineers make those choices based on what’s best for each project. He adds, “And then factor in the acoustic space, which can impart a character to the sound in a variety of different ways. We use good studios that we built with great acoustic properties and use great miking techniques to create a sound that is natural and sounds like the original production.”

Iyuno is looking to improve the localization process even further by building up a searchable database of actors’ voices. “We’re looking at a bit more sophisticated science around waveform analysis. You can do a Fourier transform on the audio to get a spectral analysis of somebody’s voice. We’re looking at how to do that to build a sound-alike library so that when we have a show, we can listen to the actor we are trying to replace and find actors in our database that have a voice match for that. Then we can pull those actors in to do a casting test,” says Carey.
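One way to picture that waveform-analysis idea (a simplified sketch, not Iyuno's system) is to reduce each voice sample to an FFT-based spectral fingerprint and compare fingerprints with cosine similarity:

```python
import numpy as np

def spectral_fingerprint(samples, rate, bands=32):
    """Average magnitude spectrum folded into a handful of frequency bands.

    `samples` is a mono float array; a real voice-matching system would use
    far richer features (formants, MFCCs, pitch statistics) than this.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    edges = np.linspace(0, rate / 2, bands + 1)
    energy = np.zeros(bands)
    for b in range(bands):
        mask = (freqs >= edges[b]) & (freqs < edges[b + 1])
        if mask.any():
            energy[b] = spectrum[mask].mean()
    return energy / (np.linalg.norm(energy) + 1e-12)

def similarity(fp_a, fp_b):
    """Cosine similarity between two normalized fingerprints; closer to 1.0 means more alike."""
    return float(np.dot(fp_a, fp_b))

# Ranking a (hypothetical) actor database against the voice to be replaced:
# ranked = sorted(database.items(),
#                 key=lambda kv: similarity(target_fp, kv[1]), reverse=True)
```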

Subtitles
As for subtitles, Iyuno is moving toward a machine-assisted workflow. According to Carey, Iyuno is inputting data on language pairs (source and destination) into software that trains on that combination. Once it “learns” how to do those translations, the software will provide a first pass “in a pretty automated fashion, much faster than a human would have done it. Then a human QCs it to make sure the words are right, makes some corrections, and fixes translations where the intent wasn’t literal and needs to be adjusted,” he says. “So we’re bringing a lot of advancement in with AI and machine learning to the subtitling world. We expect that to continue to move pretty dramatically toward an all-machine-based workflow.”
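The division of labor Carey describes could be sketched like this; `machine_translate` and `human_review` are placeholders for whatever trained model and review tooling a vendor actually uses:

```python
def subtitle_pass(events, machine_translate, human_review):
    """First pass by machine, second pass by a human QC.

    `events` are (timecode, source_text) pairs; `machine_translate` and
    `human_review` are placeholder callables supplied by the pipeline.
    """
    drafts = [(tc, machine_translate(text)) for tc, text in events]
    # The reviewer sees source and draft side by side and can rewrite
    # lines where the literal translation misses the intent.
    return [(tc, human_review(src, draft))
            for (tc, src), (_, draft) in zip(events, drafts)]
```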

But will machines eventually replace human actors on the performance side? Carey asks, “When were you moved by Google assistant, Alexa or Siri talking to you? I reckon we have another few turns of the technology crank before we can have a machine produce a really good emotional performance with a synthesized voice. It’s not there yet. We’re not going to have that too soon, but I think it’ll come eventually.”

Main Image: Starz’s American Gods – a localization client.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Framestore VFX will open in Mumbai in 2020

Oscar-winning creative studio Framestore will be opening a full-service visual effects studio in Mumbai in 2020 to target India’s booming creative industry. The studio will be located in the Nesco IT Park in Goregaon, in the center of Mumbai’s technology district. The news underscores Framestore’s continued interest in India, following its major investment in Jesh Krishna Murthy’s VFX studio, Anibrain, in 2017.

“Mumbai represents a rolling of wheels that were set in motion over two years ago,” says Framestore founder/CEO William Sargent. “Our investment in Anibrain has grown considerably, and we continue in our partnership with Jesh Krishna Murthy to develop and grow that business. Indeed, they will become a valued production partner to our Mumbai offering.”

Framestore looks to make considerable hires in the coming months, aiming to build an initial 500-strong team with existing Framestore talent combined with the best of local Indian expertise. Mumbai will work alongside the global network, including London and Montreal, to create a cohesive virtual team delivering high-quality international work.

“Mumbai has become a center of excellence in digital filmmaking. There’s a depth of talent that can deliver to the scale of Hollywood with the color and flair of Bollywood,” Sargent continues. “It’s an incredibly vibrant city and its presence on the international scene is holding us all to a higher standard. In terms of visual effects, we will set the standard here as we did in Montreal almost eight years ago.”