Colorfront’s Express Dailies 2020 for Mac Pro, new rental model

Coinciding with Apple’s launch of the latest Mac Pro workstation, Colorfront announced a new, annual rental model for Colorfront Express Dailies.

Launching in Q1 2020, Colorfront’s subscription service allows users to rent Express Dailies 2020 for an annual fee of $5,000, including maintenance support, updates and upgrades. Additionally, the availability of Apple’s brand-new Pro Display XDR, designed for use with the new Mac Pro, makes on-set HDR monitoring, enabled by Colorfront systems, more cost effective.

Express Dailies 2020 supports 6K HDR/SDR workflow along with the very latest camera and editorial formats, including Apple ProRes and Apple ProRes RAW, ARRI MXF-wrapped ProRes, ARRI Alexa LF and Alexa Mini LF ARRIRAW, Sony Venice 5.0, Blackmagic RAW 1.5, and Codex HDE (High Density Encoding).

Express Dailies 2020 is optimized for 6K HDR/SDR dailies processing on the new Mac Pro running macOS Catalina, leveraging the performance of the Mac Pro’s 28-core Intel Xeon processor and multi-GPU rendering.

The 70th annual ACE Eddie Award nominations

The American Cinema Editors (ACE), the honorary society of the world’s top film editors, has announced its nominations for the 70th Annual ACE Eddie Awards recognizing outstanding editing in 11 categories of film, television and documentaries.

For the first time in ACE’s history, three foreign language films are among the nominees, including The Farewell, I Lost My Body and Parasite, despite there not being a specific category for films predominantly in a foreign language.

Winners will be revealed during a ceremony on Friday, January 17 at the Beverly Hilton Hotel and will be presided over by ACE president Stephen Rivkin, ACE. Final ballots open December 16 and close on January 6.

Here are the nominees:

BEST EDITED FEATURE FILM (DRAMA):
Ford v Ferrari
Michael McCusker, ACE & Andrew Buckland

The Irishman
Thelma Schoonmaker, ACE

Joker 
Jeff Groth

Marriage Story
Jennifer Lame, ACE

Parasite
Jinmo Yang

BEST EDITED FEATURE FILM (COMEDY):
Dolemite is My Name
Billy Fox, ACE

The Farewell
Michael Taylor & Matthew Friedman

Jojo Rabbit
Tom Eagles

Knives Out
Bob Ducsay

Once Upon a Time in Hollywood
Fred Raskin, ACE

BEST EDITED ANIMATED FEATURE FILM:
Frozen 2
Jeff Draheim, ACE

I Lost My Body
Benjamin Massoubre

Toy Story 4
Axel Geddes, ACE

BEST EDITED DOCUMENTARY (FEATURE):
American Factory
Lindsay Utz

Apollo 11
Todd Douglas Miller

Linda Ronstadt: The Sound of My Voice
Jake Pushinsky, ACE & Heidi Scharfe, ACE

Making Waves: The Art of Cinematic Sound
David J. Turner & Thomas G. Miller, ACE

BEST EDITED DOCUMENTARY (NON-THEATRICAL):
Abducted in Plain Sight
James Cude

Bathtubs Over Broadway
Dava Whisenant

Leaving Neverland
Jules Cornell

What’s My Name: Muhammad Ali
Jake Pushinsky, ACE

BEST EDITED COMEDY SERIES FOR COMMERCIAL TELEVISION:
Better Things: “Easter”
Janet Weinberg, ACE

Crazy Ex-Girlfriend: “I Need To Find My Frenemy” 
Nena Erb, ACE

The Good Place: “Pandemonium” 
Eric Kissack

Schitt’s Creek: “Life is a Cabaret”
Trevor Ambrose

BEST EDITED COMEDY SERIES FOR NON-COMMERCIAL TELEVISION:
Barry: “berkman > block”
Kyle Reiter, ACE

Dead to Me: “Pilot”
Liza Cardinale

Fleabag: “Episode 2.1”
Gary Dollner, ACE

Russian Doll: “The Way Out”
Todd Downing

BEST EDITED DRAMA SERIES FOR COMMERCIAL TELEVISION:
Chicago Med: “Never Going Back To Normal”
David J. Siegel, ACE

Killing Eve: “Desperate Times”
Dan Crinnion

Killing Eve: “Smell Ya Later”
Al Morrow

Mr. Robot: “401 Unauthorized”
Rosanne Tan, ACE

BEST EDITED DRAMA SERIES FOR NON-COMMERCIAL TELEVISION:
Euphoria: “Pilot”
Julio C. Perez IV

Game of Thrones: “The Long Night”
Tim Porter, ACE

Mindhunter: “Episode 2”
Kirk Baxter, ACE

Watchmen: “It’s Summer and We’re Running Out of Ice”
David Eisenberg

BEST EDITED MINISERIES OR MOTION PICTURE FOR TELEVISION:
Chernobyl: “Vichnaya Pamyat”
Jinx Godfrey & Simon Smith

Fosse/Verdon: “Life is a Cabaret”
Tim Streeto, ACE

When They See Us: “Part 1”
Terilyn A. Shropshire, ACE

BEST EDITED NON-SCRIPTED SERIES:
Deadliest Catch: “Triple Jeopardy”
Ben Bulatao, ACE, Rob Butler, ACE, Isaiah Camp, Greg Cornejo, Joe Mikan, ACE

Surviving R. Kelly: “All The Missing Girls”
Stephanie Neroes, Sam Citron, LaRonda Morris, Rachel Cushing, Justin Goll, Masayoshi Matsuda, Kyle Schadt

Vice Investigates: “Amazon on Fire”
Cameron Dennis, Kelly Kendrick, Joe Matoske, Ryo Ikegami

Main Image: Marriage Story

Maya 2020 and Arnold 6 now available from Autodesk

Autodesk has released Autodesk Maya 2020 and Arnold 6 with Arnold GPU. Maya 2020 brings animators, modelers, riggers and technical artists a host of new tools and improvements for CG content creation, while Arnold 6 allows for production rendering on both the CPU and GPU.

Maya 2020 adds more than 60 new updates, as well as performance enhancements and new simulation features to Bifrost, the visual programming environment in Maya.

Maya 2020

Release highlights include:

— Over 60 animation features and updates to the graph editor and time slider.
— Cached Playback: New preview modes, layered dynamics caching and more efficient caching of image planes.
— Animation bookmarks: Mark, organize and navigate through specific events in time and frame playback ranges.
— Bifrost for Maya: Performance improvements, Cached Playback support and new MPM cloth constraints.
— Viewport improvements: Users can interact with and select dense geometry or a large number of smaller meshes faster in the viewport and UV editors.
— Modeling enhancements: New Remesh and Retopologize features.
— Rigging improvements: Matrix-driven workflows, nodes for precisely tracking positions on deforming geometry and a new GPU-accelerated wrap deformer.

The Arnold GPU is based on Nvidia’s OptiX framework and takes advantage of Nvidia RTX technology. Arnold 6 highlights include:

— Unified renderer: Toggle between CPU and GPU rendering.
— Lights, cameras and more: Support for OSL, OpenVDB volumes, on-demand texture loading, most LPEs, lights, shaders and all cameras.
— Reduced GPU noise: Comparable to CPU noise levels when using adaptive sampling, which has been improved to yield faster, more predictable results regardless of the renderer used.
— Optimized for Nvidia RTX hardware: Scale up rendering power when production demands it.
— New USD components: Hydra render delegate, Arnold USD procedural and USD schemas for Arnold nodes and properties are now available on GitHub.

Arnold 6

— Performance improvements: Faster creased subdivisions, an improved Physical Sky shader and dielectric microfacet multiple scattering.

Maya 2020 and Arnold 6 are available now as standalone subscriptions or with a collection of end-to-end creative tools within the Autodesk Media & Entertainment Collection. Monthly, annual and three-year single-user subscriptions of Arnold are available on the Autodesk e-store.

Arnold GPU is also available to try with a free 30-day trial of Arnold 6. Arnold GPU is available in all supported plug-ins for Autodesk Maya, Autodesk 3ds Max, SideFX Houdini, Maxon Cinema 4D and Foundry Katana.

Company 3 ups Jill Bogdanowicz to co-creative head, feature post  

Company 3 senior colorist Jill Bogdanowicz will now share the title of creative head, feature post, with senior colorist Stephen Nakamura. In this new role, she will collaborate with Nakamura to foster communication among artists, operations and management and to design and implement workflows that meet the ever-changing needs of feature post clients.

“Company 3 has been and will always be guided by artists,” says senior colorist/president Stefan Sonnenfeld. “As we continue to grow, we have been formalizing our intra-company communication to ensure that our artists communicate among themselves and with the company as a whole. I’m excited that Jill will be joining Stephen as a representative of our feature colorists. Her years of excellent work and her deep understanding of color science make her a perfect choice for this position.”

Among the kinds of issues Bogdanowicz and Nakamura will address: mentorship within the company, artist recruitment and training, and adapting to emerging workflows and client expectations.

Says Bogdanowicz, “As the company continues to expand, both in size and workload, I think it’s more important than ever to have Stephen and me in a position to provide guidance to help the features department grow efficiently while also maintaining the level of quality our clients expect. I intend to listen closely to clients and the other artists to make sure that their ideas and concerns are heard.”

Bogdanowicz has been a leading feature film colorist since the early 2000s. Recent work includes Joker, Spider-Man: Far From Home and Doctor Sleep, to name a few.

Storage for Visual Effects

By Karen Moltenbrey

When creating visual effects for a live-action film or television project, the artist digs right in. But not before the source files are received and backed up. Of course, during the process, storage again comes into play, as the artist’s work is saved and composited into the live-action file and then saved (and stored) yet again. At mid-sized Artifex Studios and the larger Jellyfish Pictures, two visual effects studios, storage might not be the sexiest part of the work they do, but it is vital to a successful outcome nonetheless.

Artifex Studios
An independent studio in Vancouver, BC, Artifex Studios is a small- to mid-sized visual effects facility producing film and television projects for networks, film studios and streaming services. Founded in 1997 by VFX supervisor Adam Stern, the studio has grown over the years from a one- to two-person operation to one staffed by 35 to 45 artists. During that time it has built up a lengthy and impressive resume, from Charmed, Descendants 3 and The Crossing to Mission to Mars, The Company You Keep and Apollo 18.

To handle its storage needs, Artifex uses the Qumulo QC24 four-node storage cluster for its main storage system, along with G-Tech and LaCie portable RAIDs and Angelbird Technologies and Samsung portable SSD drives. “We’ve been running [Qumulo] for several years now. It was a significant investment for us because we’re not a huge company, but it has been tremendously successful for us,” says Stern.

“The most important things for us when it comes to storage are speed, data security and minimal downtime. They’re pretty obvious things, but Qumulo offered us a system that eliminated one of the problems we had been having with the [previous] system bogging down as concurrent users were moving the files around quickly between compositors and 3D artists,” says Stern. “We have 40-plus people hitting this thing, pulling in 4K, 6K, 8K footage from it, rendering and [creating] 3D, and it just ticks along. That was huge for us.”

Of course, speed is of utmost importance, but so is maintaining the data’s safety. To this end, the new system self-monitors, taking its own snapshots to maintain its own health and making sure there are constantly rotating levels of backups. Having the ability to monitor everything about the system is a big plus for the studio as well.

Because data safety and security are non-negotiable, Artifex uses Google Cloud services along with Qumulo, backing up incrementally to Google Cloud every night. “So while Qumulo is doing its own snapshots incrementally, we have another hard-drive system from Synology, which is more of a prosumer NAS system, whose only job is to do a local current backup,” Stern explains. “So in-house, we have two local backups between Qumulo and Synology, and then we have a third backup going to the cloud every night that’s off-site. When a project is complete, we archive it onto two sets of local hard drives, and one leaves the premises and the other is stored here.” At this point, the material is taken off the Qumulo system, and seven days later, the last of the so-called snapshots is removed.
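
To make the nightly off-site step concrete, here is a minimal sketch of how a studio could script that kind of routine, assuming the primary storage and the backup NAS are mounted on a Linux host and Google Cloud’s gsutil tool is installed; the paths and bucket name are hypothetical, and the primary system’s own snapshots are left to that system.

```python
#!/usr/bin/env python3
"""Nightly backup sketch: one local mirror to a prosumer NAS plus one
incremental off-site push to a cloud bucket. Paths, mounts and the bucket
name are hypothetical; the primary storage system's own snapshots are
assumed to be handled by that system and are not touched here."""

import datetime
import logging
import subprocess

PROJECT_ROOT = "/mnt/primary/projects"        # assumed mount of the main storage cluster
LOCAL_MIRROR = "/mnt/nas_backup/projects"     # assumed mount of the local backup NAS
CLOUD_BUCKET = "gs://example-studio-backup"   # hypothetical Google Cloud Storage bucket

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")


def run(cmd):
    """Run one backup command and fail loudly if it returns an error."""
    logging.info("running: %s", " ".join(cmd))
    subprocess.run(cmd, check=True)


def nightly_backup():
    # 1) Local mirror: rsync keeps the NAS copy current, removing files
    #    that have been deleted from the primary project tree.
    run(["rsync", "-a", "--delete", PROJECT_ROOT + "/", LOCAL_MIRROR + "/"])

    # 2) Off-site incremental: gsutil rsync only uploads changed files,
    #    approximating the nightly incremental cloud backup.
    run(["gsutil", "-m", "rsync", "-r", PROJECT_ROOT, CLOUD_BUCKET + "/nightly"])

    logging.info("backup for %s complete", datetime.date.today().isoformat())


if __name__ == "__main__":
    nightly_backup()
```

Run overnight from cron or any scheduler, this reproduces the pattern Stern describes: two local copies (primary plus NAS) and a third, off-site copy in the cloud.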

As soon as data comes into Artifex — either via Aspera, Signiant’s Media Shuttle or hard disks — the material is immediately transferred to the Qumulo system, and then it is cataloged and placed into the studio’s ftrack database, which the studio uses for shot tracking. Then, as Stern says, the floodgates open, and all the artists, compositors, 3D team members and admin coordination team members access the material that resides on the Qumulo system.

Desktops at the studio have local storage, generally an SSD built into the machine, but as Stern points out, that is a temporary solution used by the artists while working on a specific shot, not to hold studio data.

Artifex generally works on a handful of projects simultaneously, including the Nickelodeon horror anthology Are You Afraid of the Dark? “Everything we do here requires storage, and we’re always dealing with high-resolution footage, and that project was no exception,” says Stern. For instance, the series required Artifex to simulate 10,000 CG cockroaches spilling out of every possible hole in a room — work that required a lot of high-speed caching.

“FX artists need to access temporary storage very quickly to produce those simulations. In terms of the Qumulo system, we need it to retrieve files at the speed our effects artists can simulate and cache, and make sure they are able to manage what can be thousands and thousands of files generated just within a few hours.”
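
To put rough numbers on that kind of caching load (the figures here are purely illustrative, not Artifex’s): 10,000 cache files averaging 100MB, written over two hours, is about 1TB of data, or roughly 140MB/sec of sustained writes, on top of the metadata overhead of creating that many individual files. That combination of sustained throughput and file-creation load is what simulation caching asks of a storage system.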

Similarly, for Netflix’s Wu Assassins, the studio generated multiple simulations of CG smoke and fog within SideFX Houdini and again had to generate thousands and thousands of cache files for all the particles and volume information. Just as it did with the caching for the CG cockroaches, the current system handled caching for the smoke and fog quite efficiently.

At this point, Stern says the vendor is doing some interesting things that his company has not yet taken advantage of. For instance, today one of the big pushes is working in the cloud and integrating that with infrastructures and workflows. “I know they are working on that, and we’re looking into that,” he adds. There are also some new equipment features, “bleeding-edge stuff” Artifex has not explored yet. “It’s OK to be cutting-edge, but bleeding-edge is a little scary for us,” Stern notes. “I know they are always playing with new features, but just having the important foundation of speed and security is right where we are at the moment.”

Jellyfish Pictures
When it comes to big projects with big storage needs, Jellyfish Pictures is no fish out of water. The studio works on myriad projects, from Hollywood blockbusters like Star Wars to high-end TV series like Watchmen to episodic animation like Floogals and Dennis & Gnasher: Unleashed! Recently, it has embarked on an animated feature for DreamWorks and has a dedicated art department that works on visual development for substantial VFX projects and children’s animated TV content.

To handle all this work, Jellyfish has five studios across the UK: four in London and one in Sheffield, in the north of England. What’s more, in early December, Jellyfish expanded further with a brand-new virtual studio in London seating over 150 artists — increasing its capacity to over 300 people. In line with this expansion, Jellyfish is removing all on-site infrastructure from its existing locales and moving everything to a co-location. This means that all five present locations will be wholly virtual as well, making Jellyfish the largest VFX and animation studio in the world operating this way, contends CTO Jeremy Smith.

“We are dealing with shows that have very large datasets, which, therefore, require high-performance computing. It goes without saying, then, that we need some pretty heavy-duty storage,” says Smith.

Not only must the storage solution be able to handle Jellyfish’s data needs, it must also fit into its operational model. “Even though we work across multiple sites, we don’t want our artists to feel that. We need a storage system that can bring together all locations into one centralized hub,” Smith explains. “As a studio, we do not rely on one storage hardware vendor; therefore, we need to work with a company that is hardware-agnostic in addition to being able to operate in the cloud.”

Also, Jellyfish is a TPN-assessed studio and thus has to work with vendors that are TPN-compliant — another serious, and vital, consideration when choosing its storage solution. TPN, the Trusted Partner Network, is an initiative between the Motion Picture Association of America (MPAA) and the Content Delivery and Security Association (CDSA) that provides a set of requirements and best practices around preventing leaks, breaches and hacks of pre-released, high-valued media content.

With all those factors in mind, Jellyfish uses PixStor from Pixit Media for its storage solution. PixStor is a software-defined storage solution that allows the studio to use various hardware storage from other vendors under the hood. With PixStor, data moves seamlessly through many tiers of storage — from fast flash and disk tiers to cost-effective, high-capacity object storage to the cloud. In addition, the studio uses NetApp storage within a different part of the same workflow on Dell R740 hardware and alternates between SSD and spinning disks, depending on the purpose of the data and the file size.

“We’ve future-proofed our studio with the Mellanox SN2100 switch for the heavy lifting, and for connecting our virtual workstations to the storage, we are using several servers from the Dell N3000 series,” says Smith.

As a wholly virtual studio, Jellyfish has no storage housed locally; it all sits in a co-location, which is accessed through remote workstations powered by Teradici’s PCoIP technology.

According to Smith, becoming a completely virtual studio is a new development for Jellyfish. Nevertheless, the facility has been working with Pixit Media since 2014 and launched its first virtual studio in 2017, “so the building blocks have been in place for a while,” he says.

Prior to moving all the infrastructure off-site, Jellyfish ran its storage system out of its Brixton and Soho studios locally. Its own private cloud from Brixton powered Jellyfish’s Soho and Sheffield studios. Both PixStor storage solutions in Brixton and Soho were linked with the solution’s PixCache. The switches and servers were still from Dell and Mellanox but were an older generation.

“Way back when, before we adopted this virtual world we are living in, we still worked with on-premises and inflexible storage solutions. It limited us in terms of the work we could take on and where we could operate,” says Smith. “With this new solution, we can scale up to meet our requirements.”

Now, however, using Mellanox SN2100, which has 100GbE, Jellyfish can deal with obscene amounts of data, Smith contends. “The way the industry is moving with 4K and 8K, even 16K being thrown around, we need to be ready,” he says.

Before the co-location, the different sites were connected through PixCache; now the co-location and public cloud are linked via Ngenea, which pre-caches files locally to the render node before the render starts. Furthermore, the studio is able to unlock true multi-tenancy with a single storage namespace, rapidly deploying logical TPN-accredited data separation and isolation and scaling up services as needed. “Probably two of the most important facets for us in running a successful studio: security and flexibility,” says Smith.

Artists access the storage via their Teradici Zero Clients, which, through the Dell switches, connect users to the standard Samba SMB network. Users who are working on realtime clients or in high resolution are connected to the Pixit storage through the Mellanox switch, where PixStor Native Client is used.

“Storage is a fundamental part of any VFX and animation studio’s workflow. Implementing the correct solution is critical to the seamless running of a project, as well as the security and flexibility of the business,” Smith concludes. “Any good storage system is invisible to the user. Only the people who build it will ever know the precision it takes to get it up and running — and that is the sign you’ve got the perfect solution.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Storage for Color and Post

By Karen Moltenbrey

At nearly every phase of the content creation process, storage is at the center. Here we look at two post facilities whose projects continually push boundaries in terms of data, but through it all, their storage solution remains fast and reliable. One, Light Iron, juggles an average of 20 to 40 data-intensive projects at a time and must have a robust storage solution to handle its ever-growing work. Another, Final Frame, recently took on a project whose storage requirements were literally out of this world.

Amazon’s The Marvelous Mrs. Maisel

Light Iron
Light Iron provides a wide range of services, from dailies to post on feature films, indies and episodic shows, to color/conform/beauty work on commercials and short-form projects. The facility’s clients include Netflix, Amazon Studios, Apple TV+, ABC Studios, HBO, Fox, FX, Paramount and many more. Light Iron has been committed to evolving digital filmmaking techniques over the past 10 years and understands the importance of data availability throughout the pipeline. Having a storage solution that is reliable, fast and scalable is paramount to successfully servicing data-centric projects with an ever-growing footprint.

More than 100 full-time employees located at Light Iron’s Los Angeles and New York locations regularly access the company’s shared storage solutions. Both facilities are equipped for dailies and finishing, giving clients an option between its offices based on proximity. In New York, where space is at a premium, the company also offers offline editorial suites.

The central storage solution used at both locations is a Quantum StorNext file system along with a combination of network-attached and direct-attached storage. On the archive end, both sites use LTO-7 tapes for backing up before moving the data off the spinning disk storage.

As Lance Hayes, senior post production systems engineer, explains, the facility segments its storage into three tiers. “We structured our storage environment in a three-tiered model, with redundancy, flexibility and security in mind. We have our fast disks (tier one), which are fast volumes used primarily for playbacks in the rooms. Then there are deliverable volumes (tier two), where the focus is on the density of the storage. These are usually the destination for rendered files. And then, our nearline network-attached storage (tier three) is more for the deep storage, a holding pool before output to tape,” he says.

Light Iron has been using Quantum as its de facto standard for the past several years. Founded in 2009, Light Iron has been on an aggressive growth trajectory and has evolved its storage strategy in response to client needs and technological advancement. Before installing its StorNext system, it managed with JBOD (“just a bunch of disks”) direct-attached storage on a very limited number of systems to service its staff of then-30-some employees, says Keenan Mock, senior media archivist at Light Iron. Light Iron, though, grew quickly, “and we realized we needed to invest in a full infrastructure,” he adds.

Lance Hayes

At Light Iron, work often starts with dailies, so the workflow teams interact with production to determine the cameras being used, the codecs being shot, the number of shoot days, the expected shooting ratio and so forth. Based on that information, the group determines which generation of LTO stock makes the most sense for the project (LTO-6 or LTO-7, with LTO-8 soon to be an option at the facility). “The industry standard, and our recommendation as well, is to create two LTO tapes per shoot day,” says Mock. Then, those tapes are geographically separated for safety.
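
As a back-of-the-envelope illustration of that sizing exercise, here is a small sketch. The shoot hours and camera data rate are hypothetical placeholders; the per-tape figures are the published native (uncompressed) LTO capacities.

```python
# Hypothetical shoot-day sizing sketch; only the LTO native capacities are
# real published figures, everything else is a placeholder.

LTO_NATIVE_TB = {"LTO-6": 2.5, "LTO-7": 6.0, "LTO-8": 12.0}

def tapes_per_shoot_day(hours_recorded, data_rate_mb_s, generation):
    """Estimate terabytes per shoot day and tapes needed for one copy."""
    terabytes = hours_recorded * 3600 * data_rate_mb_s / 1_000_000
    tapes = -(-terabytes // LTO_NATIVE_TB[generation])  # ceiling division
    return terabytes, int(tapes)

# Example: ~4 hours of material a day at roughly 200MB/sec (a plausible but
# purely illustrative rate for high-quality 4K acquisition codecs).
gen = "LTO-7"
tb, tapes = tapes_per_shoot_day(hours_recorded=4, data_rate_mb_s=200, generation=gen)
print(f"~{tb:.1f}TB/day -> {tapes} x {gen} per copy (x2 for the duplicate set)")
```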

In terms of working materials, the group generally restores only what is needed for each individual show from LTO tape, as opposed to keeping the entire show on spinning disk. “This allows us to use those really fast disks in a cost-effective way,” Hayes says.

Following the editorial process, Light Iron restores only the needed shots plus handles from tape directly to the StorNext SAN, so online editors can have immediate access. The material stays on the system while the conform and DI occur, followed by the creation of final deliverables, which are sent to the tier two and tier three spinning disk storage. If the project needs to be archived to tape, Mock’s department takes care of that; if it needs to be uploaded, that usually happens from the spinning disks.

Light Iron’s FilmLight Baselight systems have local storage, which is used mainly as cache volumes to ensure sustained playback in the color suite. In addition, Blackmagic DaVinci Resolve color correctors play back content directly from the SAN using tier two storage.

Keenan Mock

Light Iron continually analyzes its storage infrastructure and reviews its options in terms of the latest technologies. Currently, the company considers its existing storage solution to be highly functional, though it is reviewing options for the latest versions of flash solutions from Quantum in 2020.

Based on the facility’s storage workflow, there’s minimal danger of maxing out the storage space anytime soon.

While Light Iron is religious about creating a duplicate set of tapes for backup, “it’s a very rare occurrence [for the duplicate to be needed],” notes Mock. “But it can happen, and in that circumstance, Light Iron is prepared.”

As for the shared storage, the datasets used in post, compared to other industries, are very large, “and without shared storage and a clustered file system, we wouldn’t be able to do the jobs we are currently doing,” Hayes notes.

Final Frame
With offices in New York City and London, Final Frame is a full-featured post facility offering a range of services, including DI of every flavor, 8mm to 70mm film scanning and restoration, offline editing, VFX, sound editing (theatrical and home Dolby Atmos) and mastering. Its work spans feature films, documentaries and television. The facility’s recent work on the documentary film Apollo 11, though, tested its infrastructure like no other, including the amount of storage space it required.

Will Cox

“A long time ago, we decided that for the backbone of all our storage needs, we were going to rely on fiber. We have a total of 55 edit rooms, five projection theaters and five audio mixing rooms, and we have fiber connectivity between all of those,” says Will Cox, CEO/supervising colorist. So, for the past 20 years, ever since 1Gb fiber became available, Final Frame has relied on this setup, though every five years or so, the shop has upgraded to the next level of fiber and is currently using 16Gb fiber.

“Storage requirements have increased because image data has increased and audio data has increased with Atmos. So, we’ve needed more storage and faster storage,” Cox says.

While the core of the system is fiber, the facility uses a variety of storage arrays, the bulk of which are 16Gb 4000 Series SAN offerings from Infortrend, totaling approximately 2PB of space. In addition, the studio uses 8Gb Promise Technology VTrak arrays, also totaling about 1PB, as well as some 8Gb JetStor offerings. For SAN management, Final Frame uses Tiger Technology’s Tiger Store.

Foremost in Cox’s mind when looking for a storage solution is interoperability, since Final Frame uses Linux, Mac and Windows platforms; reliability and fault tolerance are important as well. “We run RAID-6 and RAID-60 for pretty much everything,” he adds. “We also focus on how good the remote management is. We’ve brought online so much storage, we need the storage vendors to provide good interfaces so that our engineers and IT people can manage and get realtime feedback about the performance of the arrays and any faults that are creeping in, whether it’s due to failed drives or drives that are performing less than we had anticipated.”

Final Frame has also brought on a good deal more SSD storage. “We manage projects a bit differently now than we used to, where we have more tiered storage,” Cox adds. “We still do a lot of spinning disks, but SSD is moving in, and that is changing our workflows somewhat in that we don’t have to render as many files and as many versions when we have really fast storage. As a result, there’s some cost savings on personnel at the workflow level when you have extremely fast storage.”

When working with clients who are doing offline editing, Final Frame will build an isolated SAN for them, and when it comes time to finish the project, whether it’s a picture or audio, the studio will connect its online and mixing rooms to that SAN. This setup is beneficial to security, Cox contends, as it accelerates the workflow since there’s no copying of data. However, aside from that work, everyone generally has parallel access to the storage infrastructure and can access it at any time.

More recently, in addition to other projects, Final Frame began working on Apollo 11, a film directed by Todd Douglas Miller. Miller wanted to rescan all the original negatives and all the original elements available from the Apollo 11 moon landing for a documentary film using audio and footage (16mm and 35mm) from NASA during that extraordinary feat. “He asked if we could make a movie just with the archival elements of what existed,” says Cox.

While ramping up and determining a plan of attack — Final Frame was going to scan the data at 4K resolution — NASA and NARA (National Archives and Records Administration) discovered a lost cache of archives containing 65mm and 70mm film.

“At that point, we decided that existing scanning technology wasn’t sufficient, and we’d need a film scanner to scan all this footage at 16K,” Cox adds, noting the company had to design and build an entirely new 16K film scanner and then build a pipeline that could handle all that data. “If you can imagine how tough 4K is to deal with, then think about 16K, with its insanely high data rates. And 8K is four times larger than 4K, and 16K is four times larger than 8K, so you’re talking about orders-of-magnitude increases in data.”
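
To put Cox’s ratios into per-frame numbers (using nominal DCI-style frame dimensions purely for illustration; a film scanner’s actual active area and bit depth will differ):

4K: 4096 x 2160 ≈ 8.8 million pixels per frame
8K: 8192 x 4320 ≈ 35.4 million pixels per frame (4x the 4K count)
16K: 16384 x 8640 ≈ 141.6 million pixels per frame (16x the 4K count)

Each step up doubles both dimensions and quadruples the pixels per frame, before bit depth, frame rate or file overhead are even considered.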

Adding to the complexity, the facility had no idea how much footage it would be using. Alas, Final Frame ultimately considered its storage structure and the costs needed to take it to the next level for 16K scanning and determined that amount of data was just too much to move and too much to store. “As it was, we filled up a little over a petabyte of storage just scanning the 8K material. We were looking at 4PB, quadrupling the amount of storage infrastructure needed. Then we would have had to run backups of everything, which would have increased it by another 4PB.”

Considering these factors, Final Frame changed its game plan and decided to scan at 8K. “So instead of 2PB to 2.5PB, we would have been looking at 8PB to 10PB of storage if we continued with our earlier plan, and that was really beyond what the production could tolerate,” says Cox.

Even scanning at 8K, the group had to have the data held in the central repository. “We were scanning in, doing what were essentially dailies, restoration and editorial, all from the same core set of media. Then, as editorial was still going on, we were beginning to conform and finish the film so we could make the Sundance deadline,” recalls Cox.

In terms of scans, copies and so forth, Final Frame stored about 2.5PB of data for that project. But in terms of data created and then destroyed, the amount of data was between 12PB and 15PB. To handle this load, the facility needed storage that could perform quickly, be very redundant and large. This led the company to bring on an additional 1PB of Fibre Channel SAN storage to add to the 1.5PB already in place — dedicated to just the Apollo 11 project. “We almost had to double the amount of storage infrastructure in the whole facility just to run this one project,” Cox points out. The additional storage was added in half-petabyte array increments, all connected to the SAN, all at 16Gb fiber.

While storage is important to any project, it was especially true for the Apollo 11 project due to the aggressive deadlines and excessively large storage needs. “Apollo 11 was a unique project. We were producing imagery that was being returned to the National Archives to be part of the historic record. Because of the significance of what we were scanning, we had to be very attentive to the longevity and accuracy of the media,” says Cox. “So, how it was being stored and where it was being stored were important factors on this project, more so than maybe any other project we’ve ever done.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Storage Trends for M&E

By Tom Coughlin

Media and entertainment content is growing in size due to higher resolution, higher frame rates and more bits per pixel. In addition, the amount of digital content is growing as increasing numbers of creators provide unique content for online streaming channels and as the number of cameras used in a given project increases for applications such as sports coverage and 360-degree immersive video projects.

Projections on the growth of local (direct attached), local network and cloud storage for post apps from 2018 out to 2024.

More and larger content will require increasing amounts of digital storage and higher bandwidths to support modern workflows. In addition, in order to control the costs of video workflows, these projects must be cost-effective and make the most efficient use of physical and human resources possible. As a consequence of these opportunities and constraints, M&E workflows are using all types of storage technology to balance performance versus cost.

Hard disk drives (HDD), solid state drives (SSD), optical discs and magnetic tape technologies are increasing in storage capacity and performance and decreasing in cost. This makes it easier to capture and store content, keep data available in a modern workflow and, when used in a private or public cloud data center, provide readily available content for delivery and monetization. The NVMe interface for SSDs and NVMe over Fabrics (NVMe-oF) for storage systems are enabling very high-performance storage that can handle multi-stream 4K to 8K+ video projects with high frame rates, enabling more immersive video experiences.
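
For a sense of the bandwidths involved, take an uncompressed DCI 4K stream at 10 bits per RGB channel (30 bits per pixel) and 24fps; real formats and compression vary widely, so this is only a yardstick:

4096 x 2160 pixels x 30 bits x 24fps ≈ 6.4Gb/sec, or roughly 0.8GB/sec per stream

An 8K stream at the same bit depth and frame rate is four times that, around 25Gb/sec, which is why multi-stream uncompressed work pushes facilities toward NVMe-class storage and faster network fabrics.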

Industry pros are turning to object-based digital storage to enable collaborative workflows and are using online cloud services for rendering, transcoding and other operations. This is becoming increasingly common because much content is now distributed online. Both small and large media houses are also moving toward private or public cloud archiving to help access and monetize valuable historical content.

Growth in Object Storage for various M&E applications over time.

Various artificial intelligence (AI) tools, such as machine learning (ML), are being used in M&E to extract metadata that allows more rapid search and use of media content. Increasingly, AI tools are also being used for media and storage management applications.

Let’s dig a little deeper…

Storage Device Evolution
HDDs and SSDs are currently the dominant storage technologies used in media and entertainment workflows. HDDs provide the best value per terabyte compared to SSDs, but NAND flash-based SSDs provide much greater performance, and Optane-based SSDs from Intel — and similar soon-to-be-released 3D XPoint SSDs from Micron — can provide 1,000 times the performance of NAND flash. Optical discs and magnetic tape are often used in library systems and therefore have much longer latency than HDDs between when data is requested and when it is delivered. As a consequence, these technologies are primarily used for cold storage and archive applications.

The highest capacity HDDs shipping in volume have capacities up to 16TB and are available from Western Digital, Seagate and Toshiba. However, Western Digital announced that it is sampling nine-disk, 3.5-inch form factor, helium-sealed 18TB drives using some form of energy-assisted magnetic recording and that a 20TB drive will also be available that shingles recorded tracks on top of each other, resulting in higher effective track density — and thus areal density — on the disks.

Recently Introduced Western Digital 18TB and 20TB HDDs.

Seagate has also indicated that it would ship 20TB HDDs by 2020 using energy-assisted magnetic recording. These high-capacity drives are geared for enterprise applications, particularly in large (cloud) data centers. These drives should bring the price of HDD storage down to less than $0.02 per GB ($20/TB) when they are available in volume.

Both Sony and Panasonic are promoting the use of write-once Blu-ray optical discs for archival applications. These products are used for media archiving by some users, who are often attracted by the physical longevity of the inorganic optical storage media. The companies’ storage architectures for an optical library system differ, but they have worked together on standards for the underlying optical recording media.

According to Coughlin Associates’ 2019 Digital Storage for Media Professionals Survey, hard disk drives and magnetic tape are the most popular digital storage media. The most popular magnetic tape format in the industry is the LTO format.

Solid state drives using NAND flash — and, more recently, Intel Optane — are increasingly being used in modern media workflows. In post, there is a move to use SSDs for primary storage, particularly for facilities dealing with multiple streams of the highest resolution and frame-rate content. These SSDs are available in a wide range of storage capacities and form factors; interface options are traditional SATA, SAS, or the higher-performance Nonvolatile Memory Express (NVMe).

Samsung SSD form factors

Modern NAND flash SSDs use 3D flash memory in which memory storage cells are stacked on top of each other, up to 96 layers today, with 128 or more memory cell layers becoming available in 2020. Research has shown that 500-plus layers of NAND flash cells might be possible, and the major NAND flash manufacturers will be introducing devices with ever more NAND flash layers (as well as more bits per cell) over the next few years.

In 2018, NAND flash SSDs were expensive because of the shortage of NAND flash. In 2019, NAND flash memory is widely available due to additional production capacity. As a result, SSDs have been dropping in price, with a consequent reduction in their cost per gigabyte. Lower prices have increased demand for SSDs.

Modern Storage Systems
Modern storage systems used for post are usually file-oriented (with either a NAS or SAN architecture), although object storage (sometimes in the cloud) is beginning to find some uses. Let’s look at some examples using HDDs and SATA/SAS SSDs, as well as storage systems using NVMe SSDs and network storage using NVMe over Fabrics.

Avid Nexis E2 all-flash array

The latest generation of the Avid Nexis storage platform includes HDD as well as larger SSD all-flash storage array configurations. Nexis is Avid’s software-defined storage for storage virtualization in media applications. It can be integrated into Avid and third-party workflows as well as across Avid MediaCentral and scale from 9.6TB up to 6.4PB. It allows on-demand access to a shared pool of centralized storage. The product allows the use of up to 38.4TB of NAND flash SSD storage in its E2 SSD engine to accelerate 4K through 8K mastering workflows.

The E5 nearline storage engine is another option that can be used by itself or integrated with other enterprise-class Avid Nexis engines.

Facilis Hub

At IBC in September, ATTO announced a partnership with Facilis to integrate the ATTO ThunderLink NS 3252 Thunderbolt 3-to-25GbE adapter within the Facilis Hub shared storage platform. The storage solution provides flexible, scalable, high-bandwidth connectivity for Apple’s new Mac Pro, iMac Pro and Mac mini. Facilis’ Hub shared storage platform uses ATTO Celerity 32Gb and 16Gb Fibre Channel HBAs and FastFrame 25Gb Ethernet NICs. Facilis Hub represents the evolution of the Facilis shared file system with block-level virtualization and multi-connectivity built for demanding media production workflows.

In addition, Facilis servers include ATTO 12Gb ExpressSAS HBAs. These technologies allow Facilis to create powerful solutions that fulfill a diverse set of customer connectivity needs and workflow demands.

With a new infusion of funding and the addition of many new managers, EditShare has a new next-generation file system and management console, EFS 2020. The new EFS is designed to support collaborative workflows with up to a 20% performance improvement and an easy-to-use interface that also provides administrators and technicians with useful media management tools.

The EFS 2020 also has File Auditing, which offers a realtime, purpose-built content auditing platform for the entire production workflow. File Auditing tracks all content movement on the server, including deliberately obscured changes. According to EditShare, EFS 2020 File Auditing provides a complete, user-friendly activity report with a detailed trail back to the instigator.

EditShare EFS

Promise introduced its Pegasus32 series storage systems. The series uses Intel’s latest Titan Ridge Thunderbolt 3 chip, can power hosts at up to 85W and offers up to 112TB of raw capacity in an eight-drive system. It supports Thunderbolt at up to 40Gbps or USB 3.2 at 10Gbps and includes hardware RAID-5 protection with hot-swappable 7,200rpm HDDs and dual Thunderbolt 3 ports that allow daisy-chaining of peripheral devices.

Although Serial AT Attachment (SATA) and Serial Attached SCSI (SAS) HDDs and SSDs are widely used, these older interfaces — which were based upon the needs of HDDs when they were developed — can limit the data rates and latencies that SSDs are capable of. This has led to the wide use of an interface that brings more of the internal performance of the SSD to the computers it’s connected to. This newer interface is called NVMe, and it can be extended over various fabric networks such as InfiniBand, Fibre Channel and, more recently, Ethernet.

NVMe SSDs are finding increased use as primary storage for many applications, including media post projects, since they can provide the performance that large high-data-rate projects require. NVMe SSDs also provide lower latency to content than HDDs, which is important for media pros. With the lower price of SSD storage, their total cost of ownership has declined, making them even more attractive for high-performance applications, such as post production and VFX.

At IBC 2019, Dell EMC was showing its new PowerMax storage system. This included dual-port Intel Optane SSDs as persistent storage and NVMe-oF using 32Gb Fibre Channel I/O modules, directors and 32Gb NVMe host adapters using Dell EMC PowerPath multipathing software.

Dell PowerMax 2000 storage system.

According to Dell EMC, this end-to-end NVMe and Intel Optane architecture provides customers with a faster, more efficient storage system that delivers the following performance improvements:
• Up to 15 million IOPS
• Up to 350GB/sec bandwidth
• Up to 50% better response times
• Sub-100µs read response times
The built-in machine learning engine uses predictive analytics and pattern recognition to automatically place data on the correct media type (Optane or Flash memory) based upon its I/O profile. It can analyze and forecast 40 million data sets in real time, driving 6 billion decisions per day. PowerMax works with several plugins for virtualization and container storage, as well as Ansible modules. It can also be part of a multi-cloud storage architecture with Dell EMC Cloud Storage Services.

Quantum introduced its F-Series NVMe storage system to help media professionals power their modern post workflows.

Quantum F2000 NVMe storage array

It features SSD storage capacities up to 184TB. High uptime is ensured by dual-ported SSDs, dual-node servers and redundant power supplies. The NVMe SSDs allow performance of about one million random reads per second, with latencies of under 20 microseconds. Quantum found that NVMe storage can deliver more than 10 times the read and write throughput performance with a single client compared with NFS and SMB attached clients.

The NVMe SSDs support a huge amount of parallel processing. The F-Series array uses Remote Direct Memory Access (RDMA) networking technology to provide direct access between workstations and the NVMe storage devices. The F-Series array was designed for video data. It is made to handle the performance requirements of multiple streams of 4K+, high-frame-rate data as well as other types of unstructured data.

These capabilities enable editors in several rooms to work on multiple streams of 4K and even 8K video using one storage volume. The higher performance of NVMe SSDs avoids the over-provisioning of storage often required with HDD-based storage systems.

Private and Public Cloud for M&E
Digital media workflows are increasingly using either on-premises or remote cloud storage (shared data center storage) of various types for project collaboration or for access to online services and tools, such as rendering and content delivery services. Below are a few recent developments in public and private cloud storage.

Avid’s Cloudspaces allows users to store projects and backup media in the cloud, freeing up on-site Avid Nexis workspaces. Avid’s preferred cloud-hosting platform is Microsoft Azure, which has been making major inroads in cloud storage for the M&E industry by providing valuable partnerships and services for the industry.

The Facilis Object Cloud virtualizes cloud and LTO storage into a cache volume on the server, available on the client desktops through the Facilis shared file system and providing a highly scalable object storage cache. Facilis also announced that it had partnered with Wasabi for cloud storage.

Cloudian HyperStore Xtreme

Cloudian makes private cloud storage for the M&E industry, and at IBC it announced its HyperStore Xtreme. HyperStore Xtreme is said to provide ready access to video content whenever and wherever needed and unlock its full value through AI and other analytics applications.

The Cloudian HyperStore Xtreme is built on an ultra-dense Seagate server platform. The solution enables users to store and manage over 55,000 hours of 4K video (UAVC-4K, Ultra HD format) within just 12U of rack space. The company says that this represents a 75% space savings over what it would take to achieve the same capacity with an LTO-8 tape library.

Scality’s Ring 8 is a software-defined system that handles large-scale, on-prem storage of unstructured data. It is useful for petabyte-scale storage and beyond, and it works across multiple clouds as well as core and edge environments. Extended Data Management (XDM) also allows cloud data orchestration to be integrated into the Ring. The new version adds stringent security, multi-tenancy and cloud-native application support.

Summing Up
Media and entertainment storage and bandwidth demands are driving the use of more storage and new storage products, such as NVMe SSDs and NVMe-oF. While the use of NAND flash and other SSDs is growing, so is demand for HDDs for colder storage and the use of tape or cloud storage (which can be HDD or tape in the data center) for archiving. Cloud storage is growing to support collaborative work, cloud-based service providers and content distribution through online channels. Various types of AI tools are being used to generate metadata and even to manage storage and data resources, expanding upon standard media asset management tools.


Tom Coughlin, president of Coughlin Associates, is a digital storage analyst and business and technology consultant. He has over 37 years in the data storage industry, with engineering and management positions at several companies.

Storage for UHD and 4K

By Peter Collins

Over the past few years, we have seen a huge audience uptake of UHD and 4K technologies, with the increase in resolution offering more detailed imagery and the adoption of HDR bringing bigger and brighter colors.

UHD technologies are a significant selling point and are quickly becoming the “new normal” for many commissioners. VOD providers, in particular, are behind the wheel and pushing things forward rapidly — it’s not just a creative decision, but one that is now required for delivery. Essentially, something cinematographers used to have to fight for is now being mandated by those commissioning the content.

This is all very exciting, but what does this mean for productions in general? There are wide-ranging implications and questions of logistics — timescales for data transfer and processing increase, post production infrastructure and workflows must be adapted, and archiving and retrieval times are extended (to say the least).

With these UHD and 4K productions having storage requirements into the hundreds of terabytes between various stages of the supply chain, the need to store the data in an accessible, secure and affordable manner is critical.

The majority of production, VFX, post and mastering facilities are currently still working the traditional way — from physical on-premises storage (on-prem for those who like to shave off a couple of syllables) such as NAS, local storage, LTO and SANs to distributed data stores spread across different buildings of a facility.

With UHD and 4K projects sometimes generating north of half a petabyte of data (which needs to stick around until delivery is complete and beyond), it’s not a simple problem to ensure that large chunks of that data are available and accessible for everyone involved in the project who needs it — at least not in the most time-effective way. And as sure as death and taxes, no matter how much storage you have to hand, you will miraculously start running out far sooner than you anticipated. Since this affects all stages of the supply chain, doesn’t it make sense to have some central store of data for everyone to access what they need, when they need it?

Across all areas of the industry, we are seeing the adoption of cloud storage over the traditional on-premises solution and are starting to see opportunities where a cloud-based solution might save money, time or, even better, both! There are numerous cloud “types” out there and below is my overview of the four most widely adopted.

Public: The public cloud can offer large amounts of storage for as long as it’s required (i.e., paid for) and stop charging you for it when it’s not (which is a nice change from having to buy storage with a lengthy support contract). The physical infrastructure of a public cloud is shared with other customers of the cloud provider (this is known as multi-tenancy); however, all the resources allocated to you are invisible to other customers. Your data may be spread across several different areas of the data center (or beyond) depending on where the provider’s infrastructure has the most availability.

Private: Private clouds (from a storage perspective) are useful for those needing finer grained control over their data. Private clouds are those in which companies build their own infrastructure to support the services they want to offer and have complete control over where their data physically resides.

The downside to private clouds is cost, as the business is effectively paying to be their own cloud provider and maintaining the systems over their lifetime. With this in mind, many of the bigger public cloud providers offer “virtual private clouds,” in which a chunk of their resources are dedicated solely to a single customer (single-tenancy). This of course comes at a slightly higher cost than the plain public cloud offering, but does allow more finely grained control for those consumers who need it.

Hybrid: Hybrid clouds are, as the name suggests, a mixture of the two cloud approaches outlined above (public and private). This offers the best of both worlds and can be a useful approach when flexibility is required, or when certain data-accessing processes are not practical to run from an off-site public cloud (at the time of writing, a 50fps realtime stream of uncompressed 4K raw to a grade, for example, is unlikely to happen from a vanilla public cloud agreement without some additional bandwidth discussions — and costs).

Having the flexibility to migrate data between a virtual private cloud and a local private cloud while continuing to work could help minimize the impact on existing infrastructure locally, and could also enable workflows and interchange between local and “cloud-native” applications. Certain processes that take up a lot of resources locally could be relocated to a virtual private cloud for a lower cost, freeing up local resources for more time-sensitive applications.

Community: Here’s where the cloud could shine as a prospect from a production standpoint. This cloud model is based on businesses and those with a stake in the process pooling their resources and collaborating, coming up with a system and overarching set of processes that they all operate under — in effect offering a completely customized set of cloud services for any given project.

From a storage perspective, this could mean a production company running a virtual private cloud with the cost being distributed across all stakeholders accessing that data. Original camera files, for example, may be transferred to this virtual private cloud during the shoot, with post, VFX, marketing and reversioning houses downloading and uploading their work in turn. As all data transfers are monitored and tracked, the billing from a production standpoint on a per-vendor (or departmental) basis becomes much easier — everyone just pays for what they use.
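
A minimal sketch of that chargeback idea, assuming the production can export per-vendor transfer records from its virtual private cloud: the log format, vendor names and rate below are hypothetical stand-ins for whatever the provider’s billing or audit reports actually expose.

```python
# Hypothetical per-vendor chargeback from tracked transfer records.
from collections import defaultdict

RATE_PER_GB = 0.02  # assumed blended storage/egress rate, per GB

# (vendor, direction, gigabytes) -- stand-ins for tracked transfer records
transfer_log = [
    ("post_house",   "download", 1200),
    ("vfx_vendor_a", "download", 800),
    ("vfx_vendor_a", "upload",   150),
    ("marketing",    "download", 40),
]

def per_vendor_bill(log):
    """Aggregate tracked transfers into a simple per-vendor charge."""
    totals = defaultdict(float)
    for vendor, _direction, gigabytes in log:
        totals[vendor] += gigabytes
    return {vendor: round(gb * RATE_PER_GB, 2) for vendor, gb in totals.items()}

print(per_vendor_bill(transfer_log))
# {'post_house': 24.0, 'vfx_vendor_a': 19.0, 'marketing': 0.8}
```

The arithmetic is trivial; the point is that because every transfer is tracked, splitting the bill across stakeholders becomes a reporting exercise rather than a negotiation.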

MovieLabs’ “Envisioning Production in 2030” white paper goes deeper into production-related applications of cloud technologies over the coming decade (among other sharp insights) and is well worth absorbing over a cup of coffee or two.

As production technologies progress, we are only ever going to generate more and more data. For storage professionals, those managing systems, or project managers looking to improve timeframes and reduce costs, the decision may not only be financial or center on logistics. It may also factor in how easily a solution facilitates collaboration, interchange and closer working relationships. To that question, the cloud may well be a clear best fit.

Studio Images: Goldcrest Post Production / Neil Harrison


Peter Collins is a post professional with experience working in film and television globally. He has worked at the forefront of new production technologies and consults on workflows, project management and industry best practices. He can be contacted on Twitter at @PCPostPro or via email at pcpostpro@icloud.com.

Reallusion’s Headshot plugin for realistic digi-doubles via AI

Reallusion has introduced a plugin for Character Creator 3 to help create realistic-looking digital doubles. According to the company, the Headshot plugin uses AI technology to automatically generate a digital human in minutes from one single photo, and those characters are fully rigged for voice lipsync, facial expression and full body animation.

Headshot allows game developers and virtual production teams to quickly funnel a cast of digital doubles into iClone, Unreal, Unity, Maya, ZBrush and more. The idea is to allow the digital humans to go anywhere they like and give creators a solution to rapidly develop, iterate and collaborate in realtime.

The plugin has two AI modes: Auto Mode and Pro Mode. Auto Mode is a one-click solution for creating mid-rez digital human crowds. This process allows one-click head and hair creation for realtime 3D head models. It also generates a separate 3D hair mesh with an alpha mask to soften edge lines. The 3D hair is fully compatible with Character Creator’s conformable hair format (.ccHair), so users can add the generated hair to their hair library and apply it to other CC characters.

Headshot Pro Mode offers full control of the 3D head generation process with advanced features such as Image Matching, Photo Reprojection and Custom Mask with up to 4,096-texture resolution.

The Image Matching Tool overlays an image reference plane for advanced head shape refinement and lens correction. With Photo Reprojection, users can easily fix the texture-to-mesh discrepancies resulting from face morph change.

Using high-rez source images and Headshot’s 1,000-plus morphs, users can get a scan-quality digital human face in 4K texture details. Additional textures include normal, AO, roughness, metallic, SSS and Micro Normal for more realistic digital human rendering.

The 3D Head Morph System is designed to achieve the professional and detailed look of 3D scan models. The 3D sculpting design allows users to hover over a control area and use directional mouse drags to adjust the corresponding mesh shape, from full head and face sculpting down to individual features (head contour, face, eyes, nose, mouth and ears), using more than 1,000 head morphs. The morph system is now free with a purchase of the Headshot plugin.

The Headshot plugin for Character Creator costs $199 and includes the Headshot Morph 1,000+ content pack (a $99 value). Character Creator 3 Pipeline costs $199.

Storage for Editors

By Karen Moltenbrey

Whether you are a small-, medium- or large-size facility, storage is at the heart of your workflow. Consider, for instance, the one-person shop Fin Film Company, which films and edits footage for branding and events, often on water. Then there’s Uppercut, a boutique creative/post studio where collaborative workflow is the key to pushing boundaries on commercials and other similar projects.

Let’s take a look at Uppercut’s workflow first…

Uppercut
Uppercut is a creative editorial boutique shop founded by Micah Scarpelli in 2015 and offering a range of post services. Based in New York and soon Atlanta, the studio employs five editors with their own suites along with an in-house Flame artist who has his own suite.

Taylor Schafer

In contrast to Uppercut’s size, its storage needs are quite large, with five editors working on as many as five projects at a time. Although most of it is commercial work, some of those projects can get heavy in terms of the generated media, which is stored on-site.

So, for its storage needs, the studio employs an EditShare RAID system. “Sometimes we have multiple editors working on one large campaign, and then usually an assistant is working with an editor, so we want to make sure they have access to all the media at the same time,” says Taylor Schafer, an assistant editor at Uppercut.

Additionally, Uppercut uses a Supermicro nearline server to store some of its VFX data, as the Flame artist cannot access the EditShare system on his CentOS operating system. Furthermore, the studio uses LTO-6 archive media in a number of ways. “We use EditShare’s Ark to LTO our partitions once the editors are done with them for their projects. It’s wonderfully integrated with the whole EditShare system. Ark is easy to navigate, and it’s easy to swap LTO tapes in and out, and everything is in one location,” says Schafer.

The studio employs the EditShare Ark to archive its editors’ working files, such as Premiere and Avid projects, graphics, transcodes and so forth. Uppercut also uses BRU (Backup Restore Utility) from Tolis Group to archive larger files that only live on LaCie hard drives and not on EditShare, such as a raw grade. “Then we’re LTO’ing the project and the whole partition with all the working files at the end through Ark,” Schafer explains.

The importance of having a system like this was punctuated over the summer when Uppercut underwent a renovation and had to move into temporary office space at Light Iron, New York — without the EditShare system. As a result, the team had to work off of hard drives and Light Iron’s Avid Nexis for some limited projects. “However, due to storage limits, we mainly worked off of the hard drives, and I realized how important a file storage system that has the ability to share data in real time truly is,” Schafer recalls. “It was a pain having to copy everything onto a hard drive, hand it back to the editor to make new changes, copy it again and make sure all the files were up to date, as opposed to using a storage system like ours, where everything is instantly up to date. You don’t have to worry whether something copied over correctly or not.”

She continues: “Even with Nexis, we were limited in our ability to restore old projects, which lived on EditShare.”

When a new project comes in at Uppercut, the first thing Schafer and her colleagues do is create a partition on EditShare and copy over the working template, whether it’s for Avid or Premiere, on that partition. Then they get their various working files and start the project, copying over the transcodes they receive. As the project progresses, the artists will get graphics and update the partition size as needed. “It’s so easy to change on our end,” notes Schafer. And once the project is completed, she or another assistant will make sure all the files they would possibly need, dating back to day one of the project, are on the EditShare, and that the client files are on the various hard drives and FTP links.

Reebok

“We’ll LTO the partition on EditShare through Ark onto an LTO-6 tape, and once that is complete, then generally we will take the projects or partition off the EditShare,” Schafer continues. The studio has approximately 26TB of RAID storage but, due to the large size of the projects, cannot retain everything on the EditShare long term. Nevertheless, the studio has a nearline server that hosts its masters and generics, as well as any other file the team might need to send to a client. “We don’t always need to restore. Generally the only time we try to restore is when we need to go back to the actual working files, like the Premiere or Avid project,” she adds.

Uppercut avoids keeping data locally on workstations due to the collaborative workflow.

According to Schafer, the storage setup is easy to use. Recently, Schafer finished a Reebok project she and two editors had been working on. The project initially started in Avid Media Composer, which was preferred by one of the editors. The other editor prefers Premiere but is well-versed on the Avid. After they received the transcodes and all the materials, the two editors started working in tandem using the EditShare. “It was great to use Avid on top of it, having Avid bins to open separately and not having to close out of the project and sharing through a media browser or closing out of entire projects, like you have to do with a Premiere project,” she says. “Avid is nice to work with in situations where we have multiple editors because we can all have the project open at once, as opposed to Premiere projects.”

Later, after the project was finished, the editor who prefers Premiere did a director’s cut in that software. As a result, Schafer had to re-transcode the footage, “which was more complicated because it was shot on 16mm, so it was also digitized and on one large video reel instead of many video files — on top of everything else we were doing,” she notes. She re-transcoded for Premiere and created a Premiere project from scratch, then added more storage on EditShare to make sure the files were all in place and that everything was up to date and working properly. “When we were done, the client had everything; the director had his director’s cut and everything was backed up to our nearline for easy access. Then it was LTO’d through Ark on LTO-6 tapes and taken off EditShare, as well as LTO’d on BRU for the raw and the grade. It is now done, inactive and archived.”

Without question, says Schafer, storage is important in the work she and her colleagues do. “It’s not so much about the storage itself, but the speed of the storage, how easily I’m able to access it, how collaborative it allows me to be with the other people I’m working with. Storage is great when it’s accessible and easy for pretty much anyone to use. It’s not so good when it’s slow or hard to navigate and possibly has tech issues and failures,” Schafer says. “So, when I’m looking for storage, I’m looking for something that is secure, fast and reliable, and most of all, easy to understand, no matter the person’s level of technical expertise.”

Chris Aguilar

Fin Film Company
People can count themselves fortunate when they can mix business with pleasure and integrate their beloved hobby with their work. Such is the case for solo producer/director/editor Chris Aguilar of Fin Film Company in Southern California, which he founded a decade ago. As Aguilar says, he does it all, as does Fin Film, which produces everything from conferences to music videos and commercial/branded content. But his real passion involves outdoor adventure paddle sports, from stand-up paddleboarding to pro paddleboarding.

“That’s been pretty much my niche,” says Aguilar, who got his start doing in-house production (photography, video and so forth) for a paddleboard company. Since then, he has been able to turn his passion and adventures into full-time freelance work. “When someone wants an event video done, especially one involving paddleboard races, I get the phone call and go!”

Like many videographers and editors, Aguilar got his start filming weddings. Always into surfing himself, he would shoot surfing videos of friends “and just have fun with it,” he says of augmenting that work. Eventually, this allowed him to move into areas he is more passionate about, such as surfing events and outdoor sports. Now, Aguilar finds that a lot of his time is spent filming paddleboard events around the globe.

Today, there are many one-person studios with solo producers, directors and editors. And as Aguilar points out, their storage needs might not be on the level of feature filmmakers or even independent TV cinematographers, but that doesn’t negate their need for storage. “I have some pretty wide-ranging storage needs, and it has definitely increased over the years,” he says.

In his work, Aguilar has to avoid cumbersome and heavy equipment, such as Atomos recorders, because of their weight on board the watercraft he uses to film paddleboard events. “I’m usually on a small boat and don’t have a lot of room to haul a bunch of gear around,” he says. Rather, Aguilar uses Panasonic’s AG-CX350 as well as Panasonic’s EVA1 and GH5, and on a typical two-day shoot (the event and interviews), he will fill five to six 64GB cards.

“Because most paddleboard races are long-distance, we’re usually on the water for about five to eight hours,” says Aguilar. “Although I am not rolling cameras the whole time, the footage still adds up pretty quickly.”

As for storage, Aguilar offloads his video onto SSD drives or other kinds of external media. “I call it my ‘working drive for editing and that kind of thing,’” he says. “Once I am done with the edit and other tasks, I have all those source files somewhere.” He relies on G-Technology’s 1TB G-Drive Mobile SSD in the field and for some editing, and on its G-Drive ev RaW portable drive for backups and some editing. He also uses Glyph’s Atom SSD in the field.

For years, that “somewhere” has been a cabinet that was filled with archived files. Indeed, that cabinet is currently holding, in Aguilar’s estimate, 30TB of data, if not more. “That’s just the archives. I have 10 or 11 years of archives sitting there. It’s pretty intense,” he adds. But, as soon as he gets an opportunity, those will be ported to the same cloud backup solution he is using for all his current work.

Yes, he still uses the source cards, but for a typical project involving an end-to-end shoot, Aguilar will use at least a 1TB drive to house all the source cards and all the subsequent work files. “Things have changed. Back in the day, I used hard drives – you should see the cabinet in my office with all these hard drives in it. Thank God for SSDs and other options out there. It’s changed our lives. I can get [some brands of] 1TB SSD for $99 or a little more right now. My workflow has me throwing all the source cards onto something like that that’s dedicated to all those cards, and that becomes my little archive,” explains Aguilar.

He usually uploads the content as fast as possible to keep the data secure. “That’s always the concern, losing it, and that’s where Backblaze comes in,” Aguilar says. Backblaze is a cloud backup solution that is easily deployed across desktops and laptops and managed centrally — a solution Aguilar recently began employing. He also uses Iconik Solutions’ digital management system, which eases the task of looking up video files or pulling archived files from Backblaze. The digital management system sits on top of Backblaze and creates little offline proxies of the larger content, allowing Aguilar to view the entire 10-year archive online in one interface.
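
As a rough illustration of that kind of workflow, the sketch below pushes offloaded camera-card files to a Backblaze B2 bucket through B2's S3-compatible API using boto3. The bucket name, folder paths, region endpoint and credentials are placeholders rather than details from Aguilar's actual setup.

```python
# Minimal sketch: backing up offloaded camera-card files to Backblaze B2
# via its S3-compatible API with boto3. Endpoint region, bucket name and
# keys below are placeholders, not values from the article.
import pathlib
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-002.backblazeb2.com",  # example B2 region
    aws_access_key_id="YOUR_KEY_ID",
    aws_secret_access_key="YOUR_APPLICATION_KEY",
)

working_drive = pathlib.Path("/Volumes/WorkingSSD/2019_paddle_event")

for card_file in working_drive.rglob("*.mov"):
    # Keep the on-disk folder structure as the object key so the cloud
    # archive mirrors the working drive.
    key = str(card_file.relative_to(working_drive.parent))
    s3.upload_file(str(card_file), "fin-film-archive", key)
    print(f"uploaded {key}")
```

A MAM or proxy layer such as the one described above would then index these objects so clips can be found and shared without touching the originals.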

According to Aguilar, his archived files are an important aspect of his work. Since he works so many paddleboard events, he often receives requests for clips from specific racers or races, some dating back years. Prior to using Backblaze, if someone requested footage, it was a challenge to locate it because he’d have to pull that particular hard drive and plug it into the computer, “and if I had been organized that year, I’ll know where that piece of content is because I can find it. If I wasn’t organized that year, I’d be in trouble,” he explains. “At best, though, it would be an hour and a half or more of looking around. Now I can locate and send it in 15 minutes.”

Aguilar says the Iconik digital management system allows him to pull up the content on the interface and drill down to the year of the race, click on it, download it and send it off or share it directly through his interface to the person requesting the footage.

Aguilar went live with this new Backblaze and digital management system storage workflow this year and has been fully on board with it for just the past two to three months. He is still uncovering all the available features and the power underneath the hood. “Even for a guy who’s got a technical background, I’m still finding things I didn’t know I could do,” and as such, Aguilar is still fine-tuning his workflow. “The neat thing with Iconik is that it could actually support online editing straight up, and that’s the next phase of my workflow, to accommodate that.”

Fortunately or unfortunately, Aguilar is just starting to come off his busy season, so now he can step back and explore the new system, and transfer onto it all the material on the old source cards in that cabinet of his.

“[The new solution] is more efficient and has reduced costs since I am not buying all these drives anymore. I can reuse them now. But mostly, it has given me peace of mind that I know the data is secure,” says Aguilar. “I have been lucky in my career to be present for a lot of cool moments in the sport of paddling. It’s a small community and a very close-knit group. The peace of mind knowing that this history is preserved, well, that’s something I greatly appreciate. And I know my fellow paddlers also appreciate it.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Storage Roundtable

By Randi Altman

Every year in our special Storage Edition, we poll those who use storage and those who make storage. This year is no different. The users we’ve assembled for our latest offering weigh in on how they purchase gear and how they employ storage and cloud-based solutions. Storage makers talk about what’s to come from them, how AI and ML are affecting their tools, NVMe growth and more.

Enjoy…

Periscope Post & Audio, GM, Ben Benedetti

Periscope Post & Audio is a full-service post company with facilities in Hollywood and Chicago’s Cinespace. Both facilities provide a range of sound and picture finishing services for TV, film, spots, video games and other media.

Ben Benedetti

What types of storage are you using for your workflows?
For our video department, we have a large, high-speed Quantum media array supporting three color bays, two online edit suites, a dailies operation, two VFX suites and a data I/O department. The 15 systems in the video department are connected via 16Gb Fibre Channel.

For our sound department, we are using an Avid Nexis system over Cat 6e Ethernet supporting three Atmos mix stages, two sound design suites, an ADR room and numerous sound-edit bays. All the CPUs in the facility are securely located in two isolated machine rooms (one for video on our second floor and one for audio on the first). All CPUs in the facility are tied together via an IHSE KVM system, giving us incredible flexibility to move and deliver assets however our creatives and clients need them. We aren’t interested in being the biggest. We just want to provide the best and most reliable services possible.

Cloud versus on-prem – what are the pros and cons?
We are blessed with a robust pipe into our facility in Hollywood and are actively discussing potential cloud-based storage solutions with our engineering staff for the future. We are already using some cloud-based solutions for our building’s security and CCTV systems, as well as the management of our firewall. But the concept of placing client intellectual property in the cloud sparks some interesting conversations. We always need immediate access to the raw footage and sound recordings of our client productions, so I sincerely doubt we will ever completely rely on a cloud-based solution for the storage of our clients’ original footage. We have many redundancy systems in place to avoid slowdowns in production workflows. This is so critical. Any potential interruption in connectivity that is beyond our control gives me great pause.

How often are you adding or upgrading your storage?
Obviously, we need to be as proactive as we can so that we are never caught unready to take on projects of any size. It involves continually ensuring that our archive system is optimized correctly and requires our data management team to constantly analyze available space and resources.

How do you feel about the use of ML/AI for managing assets?
Any AI or ML automated process that helps us monitor our facility is vital. Technology advancements over the past decade have allowed us to achieve amazing efficiencies. As a result, we can give the creative executives and storytellers we service the time they need to realize their visions.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
As we have facilities in both Chicago and Hollywood, our ability to take advantage of Google cloud-based services for administration has been a real godsend. It’s not glamorous, but it’s extremely important to keeping our facilities running at peak performance.

The level of coordination we have achieved in that regard has been tremendous. Those low-tiered storage systems provide simple and direct solutions to our administrative and accounting needs, but when it comes to the high-performance requirements of our facility’s color bays and audio rooms, we still rely on the high-speed on-premises storage solutions.

For simple archiving purposes, a cloud-based solution might work very well, but for active work currently in production … we are just not ready to make that leap … yet. Of course, given Moore’s Law and the exponential advancement of technology, our position could change rapidly. The important thing is to remain open and willing to embrace change as long as it makes practical sense and never puts your client’s property at risk.

Panasas, Storage Systems Engineer, RW Hawkins

RW Hawkins

Panasas offers a scalable high-performance storage solution. Its PanFS parallel file system, delivered on the ActiveStor appliance, accelerates data access for VFX feature production, Linux-based image processing, VR/AR and game development, and multi-petabyte sized active media archives.

What kind of storage are you offering, and will that be changing in the coming year?
We just announced that we are now shipping the next generation of the PanFS parallel file system on the ActiveStor Ultra turnkey appliance, which is already in early deployment with five customers.

This new system offers unlimited performance scaling in 4GB/s building blocks. It uses multi-tier intelligent data placement to maximize storage performance by placing metadata on low-latency NVMe SSDs, small files on high IOPS SSDs and large files on high-bandwidth HDDs. The system’s balanced-node architecture optimizes networking, CPU, memory and storage capacity to prevent hot spots and bottlenecks, ensuring high performance regardless of workload. This new architecture will allow us to adapt PanFS to the ever-changing variety of workloads our customers will face over the next several years.
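
As a toy illustration of the size-based placement described above (not PanFS internals), a policy of this kind might look something like the following; the cutoff value is an arbitrary assumption.

```python
# Toy illustration of intelligent data placement across tiers: metadata on
# NVMe, small files on high-IOPS SSD, large files on high-bandwidth HDD.
SMALL_FILE_LIMIT = 64 * 1024  # bytes; assumed cutoff, not a PanFS constant

def place(object_kind: str, size_bytes: int) -> str:
    """Pick a media tier for a piece of data based on its kind and size."""
    if object_kind == "metadata":
        return "nvme_ssd"            # metadata wants the lowest latency
    if size_bytes <= SMALL_FILE_LIMIT:
        return "high_iops_ssd"       # small files are IOPS-bound
    return "high_bandwidth_hdd"      # large files are bandwidth-bound

print(place("metadata", 512))         # nvme_ssd
print(place("file", 16 * 1024))       # high_iops_ssd
print(place("file", 200 * 1024**2))   # high_bandwidth_hdd
```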

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Absolutely. However, too many tiers can lead to frustration around complexity, loss of productivity and poor reliability. We take a hybrid approach, whereby each server contains multiple types of storage media. Using intelligent data placement, we put data on the most appropriate tier automatically. With this approach, we can often replace a performance tier and a tier-two active archive with one cost-effective appliance. Our standard file-based client makes it easy to gateway to an archive tier such as tape or an object store like S3.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI/ML is so widespread, it seems to be all encompassing. Media tools will benefit greatly because many of the mundane production tasks will be optimized, allowing for more creative freedom. From a storage perspective, machine learning is really pushing performance in new directions; low latency and metadata performance are becoming more important. Large amounts of unstructured data with rich metadata are the norm, and today’s file systems need to adapt to meet these requirements.

How has NVMe advanced over the past year?
Everyone is taking notice of NVMe; it is easier than ever to build a fast array and connect it to a server. However, there is much more to making a performant storage appliance than just throwing hardware at the problem. My customers are telling me they are excited about this new technology but frustrated by the lack of scalability, the immaturity of the software and the general lack of stability. The proven way to scale is to build a file system on top of these fast boxes and connect them into one large namespace. We will continue to augment our architecture with these new technologies, all the while keeping an eye on maintaining our stability and ease of management.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Today’s modern NAS can take on all the tasks that historically could only be done with SAN. The main thing holding back traditional NAS has been the client access protocol. With network-attached parallel clients, like Panasas’ DirectFlow, customers get advanced client caching, full POSIX semantics and massive parallelism over standard ethernet.

Regarding cloud, my customers tell me they want all the benefits of cloud (data center consolidation, inexpensive power and cooling, ease of scaling) without the vendor lock-in and metered data access of the “big three” cloud providers. A scalable parallel file system forms the core of a private cloud model that yields the benefits without the drawbacks. File-based access to the namespace will continue to be required for most non-web-based applications.

Goldcrest Post, New York, Technical Director, Ahmed Barbary

Goldcrest Post is an independent post facility, providing solutions for features, episodic TV, docs, and other projects. The company provides editorial offices, on-set dailies, picture finishing, sound editorial, ADR and mixing, and related services.

Ahmed Barbary

What types of storage are you using for your workflows?
Storage performance in the post stage is tremendously demanding. We are using multiple SAN systems in office locations that provide centralized storage and easy access to disk arrays, servers, and other dedicated playout applications to meet storage needs throughout all stages of the workflow.

While backup refers to duplicating the content for peace of mind, short-term retention, and recovery, archival signifies transferring the content from the primary storage location to long-term storage to be preserved for weeks, months, and even years to come. Archival storage needs to offer scalability, flexible and sustainable pricing, as well as accessibility for individual users and asset management solutions for future projects.

LTO has been a popular choice for archival storage for decades because it offers affordable, high-capacity media well suited to the low-write/high-read pattern of cold storage workflows. The increased need for instant access to archived content today, coupled with the slow rollout of LTO-8, has made tape a less favorable option.

Cloud versus on-prem – what are the pros and cons?
The fact is each option has its positives and negatives, and understanding that and determining how both cloud and on-premises software fit into your organization are vital. So, it’s best to be prepared and create a point-by-point comparison of both choices.

When looking at the pros and cons of cloud vs. on-premises solutions, everything starts with an understanding of how these two models differ. With a cloud deployment, the vendor hosts your information and offers access through a web portal. This enables more mobility and flexibility of use for cloud-based software options. When looking at an on-prem solution, you are committing to local ownership of your data, hardware, and software. Everything is run on machines in your facility with no third-party access.

How often are you adding or upgrading your storage?
We keep track of new technologies and continuously upgrade our systems, but when it comes to storage, it’s a huge expense. When deploying a new system, we do our best to future-proof and ensure that it can be expanded.

How do you feel about the use of ML/AI for managing assets?
For most M&E enterprises, the biggest potential of AI lies in automatic content recognition, which can drive several path-breaking business benefits. For instance, most content owners have thousands of video assets.

Cataloging, managing, processing and re-purposing this content typically requires extensive manual effort. Advancements in AI and ML algorithms have now made it possible to drastically cut down the time taken to perform many of these tasks. But there is still a lot of work to be done — especially as ML algorithms need to be trained, using the right kind of data and solutions, to achieve accurate results.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
Data sets have unique lifecycles. Early in the lifecycle, people access some data often, but the need for access drops drastically as the data ages. Some data stays idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation, while other data sets are actively read and modified throughout their lifetimes.

Rohde & Schwarz, Product Manager, Storage Solutions, Dirk Thometzek

Rohde & Schwarz offers broadcast and media solutions to help companies grow in media production, management and delivery in the IP and wireless age.

Dirk Thometzek

What kind of storage are you offering, and will that be changing in the coming year?
The industry is constantly changing, so we monitor market developments and key demands closely. We will be adding new features to the R&S SpycerNode in the next few months that will enable our customers to get their creative work done without focusing on complex technologies. The R&S SpycerNode will be extended with JBODs, which will allow seamless integration with our erasure coding technology, guaranteeing complete resilience and performance.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Each workflow is different; consequently, almost no two systems are alike. The real artistry is to tailor storage systems to real requirements without over-provisioning hardware or over-stressing budgets. Using different tiers can be very helpful for building effective systems, but they might introduce additional difficulties to the workflows if the system isn’t properly designed.

Rohde & Schwarz has developed R&S SpycerNode in a way that its performance is linear and predictable. Different tiers are aggregated under a single namespace, and our tools allow seamless workflows while complexity remains transparent to the users.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
Machine learning and artificial intelligence can be helpful to automate certain tasks, but they will not replace human intervention in the short term. It might not be helpful to enrich media with too much data because doing so could result in imprecise queries that return far too much content.

However, clearly defined changes in sequences or reoccurring objects — such as bugs and logos — can be used as a trigger to initiate certain automated workflows. Certainly, we will see many interesting advances in the future.

How has NVMe advanced over the past year?
NVMe has very interesting aspects. Data rates and reduced latencies are admittedly quite impressive and are garnering a lot of interest. Unfortunately, we do see a trend inside our industry to be blinded by pure performance figures and exaggerated promises without considering hardware quality, life expectancy or proper implementation. Additionally, if well-designed and proven solutions exist that are efficient enough, then it doesn’t make sense to embrace a technology just because it is available.

R&S is dedicated to bringing high-end devices to the M&E market. We think that reliability and performance build the foundation for user-friendly products. Next year, we will update the market on how NVMe can be used in the most efficient way within our products.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We definitely see a trend away from classic Fibre Channel to Ethernet infrastructures for various reasons. For many years, NAS systems have been replacing central storage systems based on SAN technology for a lot of workflows. Unfortunately, standard NAS technologies will not support all necessary workflows and applications in our industry. Public and private cloud storage systems play an important role in overall concepts, but they can’t fulfil all necessary media production requirements or simplify workflows by default. Plus, when it comes to subscription models, [sometimes there could be unexpected fees]. In fact, we do see quite a few customers returning to their previous services, including on-premises storage systems such as archives.

When it comes to the very high data rates necessary for high-end media productions, NAS will relatively quickly reach its technical limits. Only block-level access can deliver the reliable performance necessary for uncompressed productions at high frame rates.

That does not necessarily mean Fibre Channel is the only solution. The R&S SpycerNode, for example, features a unified 100Gb/s Ethernet backbone, wherein clients and the redundant storage nodes are attached to the same network. This allows the clients to access the storage over industry-leading NAS technology or native block level while enabling true flexibility using state-of-the-art technology.

MTI Film, CEO, Larry Chernoff

Hollywood’s MTI Film is a full-service post facility, providing dailies, editorial, visual effects, color correction, and assembly for film, television, and commercials.

Larry Chernoff

What types of storage are you using for your workflows?
MTI uses a mix of spinning and SSD disks. Our volumes range from 700TB to 1,000TB and are assigned to projects depending on the expected volume of camera files. The SSD volumes are substantially smaller and are used to play back ultra-large-resolution files where several users need access to the same file.

Cloud versus on-prem — what are the pros and cons?
MTI only uses on-prem storage at the moment due to the real-time, full-resolution nature of our playback requirements. There is certainly a place for cloud-based storage but, as a finishing house, it does not apply to most of our workflows.

How often are you adding or upgrading your storage?
We are constantly adding storage to our facility. Each year, for the last five, we’ve added or replaced storage annually. We now have approximately 8+ PB, with plans for more in the future.

How do you feel about the use of ML/AI for managing assets?
Sounds like fun!

What role might the different tiers of cloud storage play in the lifecycle of an asset?
For a post house like MTI, we consider cloud storage to be used only for “deep storage” since our bandwidth needs are very high. The amount of Internet connectivity we would require to replicate the workflows we currently have using on-prem storage would be prohibitively expensive for a facility such as MTI. Speed and ease of access is critical to being able to fulfill our customers’ demanding schedules.

OWC, Founder/CEO, Larry O’Connor

Larry O’Connor

OWC offers storage, connectivity, software, and expansion solutions designed to enhance, accelerate, and extend the capabilities of Mac- and PC-based technology. Their products range from the home desktop to the enterprise rack to the audio recording studio to the motion picture set and beyond.

What kind of storage are you offering, and will that be changing in the coming year?
OWC will be expanding our Jupiter line of NAS storage products in 2020 with an all-new external flash-based array. We will also be launching the OWC ThunderBay Flex 8, a three-in-one Thunderbolt 3 storage, docking and PCIe expansion solution for digital imaging, VFX, video production and video editing.

Are certain storage tiers more suitable for different asset types, workflows etc?
Yes. SSDs and NVMe are better for on-set storage and editing. Once you are finished and looking to archive, HDDs are a better solution for long-term storage.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
We see U.2 SSDs as a trend that can help storage in this space, along with solutions that allow external docking of U.2 drives across different workflow needs.

How has NVMe advanced over the past year?
We have seen NVMe technology become higher in capacity, higher in performance and substantially lower in power draw. Yet even with all the improving performance, costs are lower today than they were 12 months ago.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
I see both still having their place — I can’t speak to if one will take over the other. SANs provide other services that typically go hand in hand with M&E needs.

As for cloud, I can see some more cloud coming in, but for M&E on-site needs, it just doesn’t compete anywhere near with what the data rate demand is for editing, etc. Everything independently has its place.

EditShare, VP of Product Management, Sunil Mudholkar

EditShare offers a range of media management solutions, from ingest to archive with a focus on media and entertainment.

Sunil Mudholkar

What kind of storage are you offering and will that be changing in the coming year?
EditShare currently offers RAID and SSD, along with our nearline SATA HDD-based storage. We are on track to deliver NVMe- and cloud-based solutions in the first half of 2020. The latest major upgrade of our file system and management console, EFS2020, enables us to migrate to emerging technologies, including cloud deployment and NVMe hardware.

EFS can manage and use multiple storage pools, enabling clients to use the most cost-effective tiered storage for their production, all while keeping that single namespace.

Are certain storage tiers more suitable for different asset types, workflows etc?
Absolutely. It’s clearly financially advantageous to have varying performance tiers of storage that are in line with the workflows the business requires. This also extends to the cloud, where we are seeing public cloud-based solutions augment or replace both high-performance and long-term storage needs. Tiered storage enables clients to be at their most cost-effective by including parking storage and cloud storage for DR, while keeping SSD and NVMe storage primed for high-end production.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI and ML offer a clear advantage for storage when it comes to things like algorithms designed to automatically move content between storage tiers to optimize costs; this has been commonplace on the distribution side of the ecosystem, with CDNs, for a long time. ML and AI can also greatly reduce the opex side of asset management and metadata by automating very manual, repetitive data-entry tasks through audio and image recognition, for example.

AI can also assist by removing mundane human-centric repetitive tasks, such as logging incoming content. AI can assist with the growing issue of unstructured and unmanaged storage pools, enabling the automatic scanning and indexing of every piece of content located on a storage pool.
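
A bare-bones version of that scan-and-index idea, before any ML-derived tags are layered on top, might look like the sketch below. The pool path and catalog filename are placeholders.

```python
# Simple sketch of scanning an unmanaged storage pool and recording basic
# facts about every file in a catalog. Real systems would enrich this index
# with ML-derived tags (faces, logos, speech-to-text) after the scan.
import csv
import os

POOL = "/mnt/storage_pool"  # placeholder path

with open("content_index.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["path", "bytes", "modified"])
    for root, _dirs, files in os.walk(POOL):
        for name in files:
            full = os.path.join(root, name)
            stat = os.stat(full)
            writer.writerow([full, stat.st_size, int(stat.st_mtime)])
```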

How has NVMe advanced over the past year?
Like any other storage medium, when it’s first introduced there are limited use cases that make sense financially, and only a certain few can afford to deploy it. As the technology scales and changes in form factor, and pricing becomes more competitive and inline with other storage options, it then can become more mainstream. This is what we are starting to see with NVMe.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Yes, NAS has overtaken SAN. It’s easier technology to deal with — this is fairly well acknowledged. It’s also easier to find people/talent with experience in NAS. Cloud will start to replace more NAS workflows in 2020, as we are already seeing today. For example, our ACL media spaces project options within our management console were designed for SAN clients migrating to NAS. They liked the granular detail that SAN offered, but wanted to migrate to NAS. EditShare’s ACL enables them to work like a SAN but in a NAS environment.

Zoic Studios, CTO, Saker Klippsten

Zoic Studios is an Emmy-winning VFX company based in Culver City, California, with sister offices in Vancouver and NYC. It creates computer-generated special effects for commercials, films, television and video games.

Saker Klippsten

What types of projects are you working on?
We work on a range of projects for series, film, commercial and interactive games (VR/AR). Most of the live-action projects are mixed with CG/VFX and some full-CG animated shots. In addition, there is typically some form of particle or fluid effects simulation going on, such as clouds, water, fire, destruction or other surreal effects.

What types of storage are you using for those workflows?
Cryogen – Off-the-shelf tape/disk/chip. Access time > 1 day. Mostly tape-based and completely offline, which requires human intervention to load tapes or restore from drives.
Freezing – Tape robot library. Access time < .5 day. Tape-based and in the robot; does not require human intervention.
Cold – Spinning disk. Access time — slow (online). Disaster recovery and long-term archiving.
Warm – Spinning disk. Access time — medium (online). Data that still needs to be accessed promptly and transferred quickly (asset depot).
Hot – Chip-based. Access time — fast (online). SSD generic active production storage.
Blazing – Chip-based. Access time — uber fast (online). NVMe dedicated storage for 4K and 8K playback, databases and specific simulation workflows.
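
For readers who think in code, here is a rough sketch of how a required access time could be mapped onto tier names like those above. The cutoffs approximate the figures in the list and are not Zoic's actual policy engine.

```python
# Rough sketch: pick the cheapest tier that still meets a required access
# time, using tier names from the list above. Cutoffs (hours) are
# approximations of the listed access times, not Zoic's real policy.
TIERS = [
    ("blazing",  0.0),   # NVMe: immediate
    ("hot",      0.0),   # SSD: immediate
    ("warm",     0.1),   # spinning disk: prompt online access
    ("cold",     1.0),   # spinning disk: slow online access
    ("freezing", 12.0),  # tape robot: under half a day
    ("cryogen",  24.0),  # offline tape/drives: a day or more
]

def cheapest_tier(max_wait_hours: float) -> str:
    """Return the coldest (cheapest) tier that still meets the wait limit."""
    eligible = [name for name, wait in TIERS if wait <= max_wait_hours]
    return eligible[-1] if eligible else "blazing"

print(cheapest_tier(0.05))  # hot
print(cheapest_tier(2.0))   # cold
print(cheapest_tier(48.0))  # cryogen
```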

Cloud versus on-prem – what are the pros and cons?
The great debate! I tend not to look at it as pro vs. con, but as a question of where you are as a company. Many factors are involved, there is no one size that fits all (despite what many are led to believe), and neither cloud nor on-prem alone can solve all your workflow and business challenges.

Cinemax’s Warrior (Credit: HBO/David Bloomer)

There are workflows that are greatly suited for the cloud and others that are potentially cost-prohibitive for a number of reasons, such as the size of the data set being generated. Dynamics cache simulations are a good example; they can quickly generate tens or sometimes hundreds of TBs. If the workflow requires you to transfer this data on premises for review, it could take a very long time. Other workflows, such as 3D CG-generated data, can take better advantage of the cloud. They typically have small source file payloads that need to be uploaded and then only require final frames to be downloaded, which is much more manageable. Depending on the size of your company and the level of technical people on hand, the cloud can be a problem.

What triggers buying more storage in your shop?
Storage tends to be one of the largest and most significant purchases at many companies. End users do not have a clear concept of what happens at the other end of the wire from their workstation.

All they know is that there is never enough storage and it’s never fast enough. Not investing in the right storage can not only be detrimental to the delivery and production of a show, but also to the mental focus and health of the end users. If artists are constantly having to stop and clean up/delete, it takes them out of their creative rhythm and slows down task completion.

If the storage is not performing properly and is slow, this will not only have an impact on delivery, but the end user might be afraid they are being perceived as slow. So what goes into buying more storage? What type of impact will buying more storage have on the various workflows and pipelines? Remember, if you are a mature company, you are buying 2TB of storage for every 1TB required for DR purposes, so you have a complete up-to-the-hour backup.

Do you see ML/AI as important to your content strategy?
We have been using various layers of ML and heuristics sprinkled throughout our content workflows and pipelines. As an example, we look at the storage platforms we use to understand what’s on our storage, how and when it’s being used, what it’s being used for and how it’s being accessed. We look at the content to see what it contains and its characteristics. What are the overall costs to create that content? What insights can we learn from it for similarly created content? How can we reuse assets to be more efficient?

Dell Technologies, CTO, Media & Entertainment, Thomas Burns

Thomas Burns

Dell offers technologies across workstations, displays, servers, storage, networking and VMware, and partnerships with key media software vendors to provide media professionals the tools to deliver powerful stories, faster.

What kind of storage are you offering, and will that be changing in the coming year?
Dell Technologies offers a complete range of storage solutions from Isilon all-flash and disk-based scale-out NAS to our object storage, ECS, which is available as an appliance or a software-defined solution on commodity hardware. We have also developed and open-sourced Pravega, a new storage type for streaming data (e.g. IoT and other edge workloads), and continue to innovate in file, object and streaming solutions with software-defined and flexible consumption models.

Are certain storage tiers more suitable for different asset types, workflows etc?
Intelligent tiering is crucial to building a post and VFX pipeline. Today’s global pipelines must include software that distinguishes between hot data on the fastest tier and cold or versioned data on less performant tiers, especially in globally distributed workflows. Bringing applications to the media rather than unnecessarily moving media into a processing silo is the key to an efficient production.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
New developments in storage class memory (SCM) — including the use of carbon nanotubes to create a nonvolatile, standalone memory product with speeds rivaling DRAM without needing battery backup — have the potential to speed up media workflows and eliminate AI/ML bottlenecks. New protocols such as NVMe allow much deeper I/O queues, overcoming today’s bus bandwidth limits.

GPUDirect enables direct paths between GPUs and network storage, bypassing the CPU for lower latency access to GPU compute — desirable for both M&E and AI/ML applications. Ethernet mesh, a.k.a. Leaf/Spine topologies, allow storage networks to scale more flexibly than ever before.

How has NVMe advanced over the past year?
Advances in I/O virtualization make NVMe useful in hyper-converged infrastructure, by allowing different virtual machines (VMs) to share a single PCIe hardware interface. Taking advantage of multi-stream writes, along with vGPUs and vNICs, allows talent to operate more flexibly as creative workstations start to become virtualized.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
IP networks scale much better than any other protocol, so NAS allows on-premises workloads to be managed more efficiently than SAN. Object stores (the basic storage type for cloud services) support elastic workloads extremely well and will continue to be an integral part of public, hybrid and private cloud media workflows.

ATTO, Manager, Products Group, Peter Donnelly

ATTO network and storage connectivity products are purpose-made to support all phases of media production, from ingest to final archiving. ATTO offers an ecosystem of high-performance connectivity adapters, network interface cards and proprietary software.

Peter Donnelly

What kind of storage are you offering, and will that be changing in the coming year?
ATTO designs and manufactures storage connectivity products, and although we don’t manufacture storage, we are a critical part of the storage ecosystem. We regularly work with our customers to find the best solutions to their storage workflow and performance challenges.

ATTO designs products that use a wide variety of storage protocols. SAS, SATA, Fibre Channel, Ethernet and Thunderbolt are all part of our core technology portfolio. We’re starting to see more interest in NVMe solutions. While NVMe has already seen some solid growth as an “inside-the-box” storage solution, scalability, cost and limited management capabilities continue to limit its adoption as an external storage solution.

Data protection is still an important criterion in every data center. We are seeing a shift from traditional hardware RAID and parity RAID to software RAID and parity code implementations. Disk capacity has grown so quickly that it can take days to rebuild a RAID group with hardware controllers. Instead, we see our customers taking advantage of rapidly dropping storage prices and using faster, reliable software RAID implementations with basic HBA hardware.

How has NVMe advanced over the past year?
For inside-the-box storage needs, we have absolutely seen adoption skyrocket. It’s hard to beat the price-to-performance ratio of NVMe drives for system boot, application caching and similar use cases.

ATTO is working independently and with our ecosystem partners to bring those same benefits to shared, networked storage systems. Protocols such as NVMe-oF and FC-NVMe are enabling technologies that are starting to mature, and we see these getting further attention in the coming year.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We see customers looking for ways to more effectively share storage resources. Acquisition and ongoing support costs, as well as the ability to leverage existing technical skills, seem to be important factors pulling people toward Ethernet-based solutions.
However, there is no free lunch, and these same customers aren’t able to compromise on performance and latency concerns, which are important reasons why they used SANs in the first place. So there’s a lot of uncertainty in the market today. Since we design and market products in both the NAS and SAN spaces, we spend a lot of time talking with our customers about their priorities so that we can help them pick the solutions that best fit their needs.

Masstech, CTO, Mike Palmer

Masstech creates intelligent storage and asset lifecycle management solutions for the media and entertainment industry, focusing on broadcast and video content storage management with IT technologies.

Mike Palmer

What kind of storage are you offering, and will that be changing in the coming year?
Masstech products are used to manage a combination of any or all of the storage types and tiers in use across the industry. Masstech allows content to move without friction across and through all of these technologies, most often using automated workflows and unified interfaces that hide the complexity otherwise required to directly manage content across so many different types of storage.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
One of the benefits of having such a wide range of storage technologies to choose from is that we have the flexibility to match application requirements with the optimum performance characteristics of different storage technologies in each step of the lifecycle. Users now expect that content will automatically move to storage with the optimal combination of speed and price as it progresses through workflow.

In the past, HSM was designed to handle this task for on-prem storage. The challenge is much wider now with the addition of a plethora of storage technologies and services. Rather than moving between just two or three tiers of on-prem storage, content now often needs to flow through a hybrid environment of on-prem and cloud storage, often involving multiple cloud services, each with three or four sub-tiers. Making that happen in a seamless way, both to users and to integrated MAMs and PAMs, is what we do.
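
As a hedged sketch of that kind of lifecycle logic, an age-based rule set might look like the following. The tier names and day counts are illustrative assumptions, not Masstech's product behavior.

```python
# Hedged sketch of an age-based lifecycle rule: content migrates from
# on-prem disk through cloud tiers as it goes idle. Tier names and day
# counts are illustrative assumptions only.
from datetime import datetime, timedelta
from typing import Optional

RULES = [
    (30,  "on_prem_disk"),      # touched within 30 days: keep it local
    (180, "cloud_standard"),    # idle up to six months: standard cloud tier
    (730, "cloud_infrequent"),  # idle up to two years: infrequent-access tier
]
FALLBACK = "cloud_deep_archive"

def target_tier(last_accessed: datetime, now: Optional[datetime] = None) -> str:
    """Pick a tier based on how long the asset has been idle."""
    now = now or datetime.utcnow()
    idle_days = (now - last_accessed).days
    for max_days, tier in RULES:
        if idle_days <= max_days:
            return tier
    return FALLBACK

print(target_tier(datetime.utcnow() - timedelta(days=400)))  # cloud_infrequent
```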

What do you see are the big technology trends that can help storage for M&E?
Cloud storage pricing continues to drop, alongside advances in storage density in both spinning disk and solid state. All of these are interrelated and have the general effect of lowering costs for the end user. For those who have specific business requirements that drive on-prem storage, the availability of higher-density tape and optical disks is enabling petabytes of very efficient cold storage in less space than a single rack.

How has NVMe advanced over the past year?
In addition to the obvious application of making media available more quickly, the greatest value of NVMe within M&E may be found in enabling faster search of both structured and unstructured metadata associated with media. Yes, we need faster access to media, but in many cases we must first find the media before it can be accessed. NVMe can make that search experience, particularly for large libraries, federated data sets and media lakes, lightning quick.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
Just as AWS, Azure and Wasabi, among other large players, have replaced many instances of on-prem NAS, so have Box, Dropbox, Google Drive and iCloud replaced many (but not all) of the USB drives gathering dust in the bottom of desk drawers. As NAS is built on top of faster and faster performing technologies, it is also beginning to put additional pressure on SAN – particularly for users who are sensitive to price and the amount of administration required.

Backblaze, Director of Product Marketing, M&E, Skip Levens

Backblaze offers easy-to-use cloud backup, archive and storage services. With over 12 years of experience and more than 800 petabytes of customer data under management, Backblaze provides cloud storage to anyone looking to create, distribute and preserve their content forever.

What kind of storage are you offering and will that be changing in the coming year?
At Backblaze, we offer a single class, or tier, of storage where everything’s active and immediately available wherever you need it, and it’s protected better than it would be on spinning disk or RAID systems.

Skip Levens

Are certain storage tiers more suitable for different asset types, workflows, etc?
Absolutely. For example, animators need different storage than a team of editors all editing a 4K project at the same time. And keeping your entire content library on your shared storage could get expensive indeed.

We’ve found that users can give up all that unneeded complexity and cost that gets in the way of creating content in two steps:
– Step one is getting off of the “shared storage expansion treadmill” and buying just enough on-site shared storage that fits your team. If you’re delivering a TV show every week and need a SAN, make it just large enough for your work in process and no larger.

– Step two is to get all of your content into active cloud storage. This not only frees up space on your shared storage, but makes all of your content highly protected and highly available at the same time. Since most of your team probably use MAM to find and discover content, the storage that assets actually live on is completely transparent.

Now life gets very simple for creative support teams managing that workflow: your shared storage stays fast and lean, and you can stop paying for storage that doesn’t fit that model. This could include getting rid of LTO, big JBODs or anything with a limited warranty and a maintenance contract.

What do you see are the big technology trends that can help storage for M&E?
For shooters and on-set data wranglers, the new class of ultra-fast flash drives dramatically speeds up collecting massive files with extremely high resolution. Of course, raw content isn’t safe until it’s ingested, so even after moving shots to two sets of external drives or a RAID cart, we’re seeing cloud archive on ingest. Uploading files from a remote location, before you get all the way back to the editing suite, unlocks a lot of speed and collaboration advantages — the content is protected faster, and your ingest tools can start making proxy versions that everyone can start working on, such as grading, commenting, even rough cuts.
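
To illustrate the proxy-on-ingest idea, here is a small sketch that generates lightweight H.264 proxies with ffmpeg once files land in an ingest folder. The paths and encoding settings are assumptions, ffmpeg must be on the PATH, and this is not any particular vendor's ingest tool.

```python
# Hedged sketch of "proxy versions on ingest": make a small H.264 proxy of
# each camera original so editorial can start cutting while the full-res
# files are still being uploaded or archived.
import pathlib
import subprocess

INGEST = pathlib.Path("/mnt/ingest/day_01")   # placeholder ingest folder
PROXIES = INGEST / "proxies"
PROXIES.mkdir(exist_ok=True)

for clip in INGEST.glob("*.mov"):
    proxy = PROXIES / (clip.stem + "_proxy.mp4")
    subprocess.run([
        "ffmpeg", "-y", "-i", str(clip),
        "-vf", "scale=-2:720",           # 720p proxy, preserve aspect ratio
        "-c:v", "libx264", "-crf", "23",
        "-c:a", "aac", "-b:a", "128k",
        str(proxy),
    ], check=True)
```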

We’re also seeing cloud-delivered workflow applications. The days of buying and maintaining a server and storage in your shop to run an application may seem old-fashioned, especially when that entire experience can now be delivered from the cloud, on demand.

Iconik, for example, is a complete, personalized deployment of a project collaboration, asset review and management tool – but it lives entirely in the cloud. When you log in, your app springs to life instantly in the cloud, so you only pay for the application when you actually use it. Users just want to get their creative work done and can’t tell it isn’t a traditional asset manager.

How has NVMe advanced over the past year?
NVMe means flash storage can completely ditch legacy storage controllers like the ones on traditional SATA hard drives. When you can fit 2TB of storage on a stick that’s only 22 millimeters by 80 millimeters — not much larger than a stick of gum — and it’s 20 times faster than an external spinning hard drive while drawing only a few watts of power, that’s a game changer for data wrangling and camera cart offload right now.

And that’s on PCIe 3. The PCI Express standard is evolving faster and faster, too. PCIe 4 motherboards are starting to come online now, PCIe 5 was finalized in May, and PCIe 6 is already in development. When every generation doubles the available bandwidth that can feed that NVMe storage, the future is very, very bright for NVMe.
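
A quick back-of-the-envelope calculation shows why that doubling matters for an x4 NVMe link; the per-lane starting figure is the commonly cited approximation for PCIe 3.0, not a measured number.

```python
# Sketch of "bandwidth doubles each generation": approximate usable
# throughput of an x4 NVMe link per PCIe generation, starting from roughly
# 1 GB/s per lane on PCIe 3.0.
PCIE3_PER_LANE_GBPS = 0.985  # approx. usable GB/s per PCIe 3.0 lane
LANES = 4

for gen in (3, 4, 5, 6):
    per_lane = PCIE3_PER_LANE_GBPS * 2 ** (gen - 3)
    print(f"PCIe {gen}.0 x{LANES}: ~{per_lane * LANES:.1f} GB/s")
# PCIe 3.0 x4: ~3.9 GB/s ... PCIe 6.0 x4: ~31.5 GB/s
```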

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
For users who work in widely distributed teams, the cloud is absolutely eating NAS. When the solution driving your team’s projects and collaboration is the dashboard and focus of the team — and active cloud storage seamlessly supports all of the content underneath — it no longer needs to be on a NAS.

But for large teams that do fast-paced editing and creation, the answer to “what is the best shared storage for our team” is still usually a SAN, or tightly-coupled, high-performance NAS.

Either way, by moving content and project archives to the cloud, you can keep SAN and NAS costs in check and have a more productive workflow, and more opportunities to use all that content for new projects.

Behind the Title: Matter Films president Matt Moore

NAME: Matt Moore

COMPANY: Phoenix and Los Angeles’ Matter Films and OH Partners

CAN YOU DESCRIBE YOUR COMPANY?
Matter Films is a full-service production company that takes projects from script to screen — doing both pre-production and post in addition to producing content. We are joined by our sister company OH Partners, a full-service advertising agency.

WHAT’S YOUR JOB TITLE?
President of Matter Films and CCO of OH Partners.

WHAT DOES THAT ENTAIL?
I’m lucky to be the only person in the company who gets to serve on both sides of the fence. Knowing that, I think that working with Matter and OH gives me a unique insight into how to meet our clients’ needs best. My number one job is to push both teams to be as innovative and outside of the box as possible. A lot of people do what we do, so I work on our points of differentiation.

Gila River Hotels and Casinos – Sports Partnership

I spend a lot of time finding talent and production partners. We want the most innovative and freshest directors, cinematographers and editors from all over the world. That talent must push all of our work to be the best. We then pair that partner with the right project and the right client.

The other part of my job is figuring out where the production industry is headed. We launched Matter Films because we saw a change within the production world — many production companies weren’t able to respond quickly enough to the need for social and digital work, so we started a company able to address that need and then some.

My job is to always be selling ideas and proposing different avenues we could pursue with Matter and with OH. I instill trust in our clients by using our work as a proof point that the team we’ve assembled is the right choice to get the job done.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
People assumed when we started Matter Films that we would keep everything in-house and have no outside partners, and that’s just not the case. Matter actually gives us even more resources to find those innovators from across the globe. It allows us to do more.

The variation in budget size that we accept at Matter Films would also surprise people. We’ll take on projects with budgets anywhere from $1,000 to $1 million-plus. We’ve staffed ourselves in such a way that even small projects can be profitable.

WHAT’S YOUR FAVORITE PART OF THE JOB?
It sounds so cliché, but I would have to say the people. I’m around people that I genuinely want to see every single day. I love when we all get together for our meetings, because while we do discuss upcoming projects, we also goof off and just hang out. These are the people I go into battle with every single day. I choose to go into the battle with people that I whole-heartedly care about and enjoy being with. It makes life better.

WHAT’S YOUR LEAST FAVORITE?
What’s tough is how fast this business changes. Every day there’s a new conference or event, and just when you think an idea you’ve had is cutting edge and brand new, you realize you have to keep going and push to be more innovative. Just when you get caught up, you’re already behind. The big challenge is how you’re going to constantly step up your game.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
I’m an early morning person. I can get more done if I start before everybody else.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I was actually pre-med for two years in college with the desire to be a surgeon. When I was an undergrad, I got an abysmal grade on one of our exams and the professor pulled me aside and told me that a score that low proved that I truly did not care about learning the material. He allowed me to withdraw from the class to find something I was more passionate about, and that was life changing.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I found out in college. I genuinely just loved making a product that either entertained or educated people. I started in the news business, so every night I would go home after work and people could tell me about the news of the day because of what I’d written, edited and put on TV.

People knew about what was going on because of the stories that we told. I have a great love for telling stories and having others engage with that story. If you’re good at the job, people’s lives will be different as a result of what you create.

Barbuda Ocean Club

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We just wrapped a large shoot in Maryland for Live Casino, and a different tourism project for a luxury property in Barbuda. We’re currently developing our work with Virgin, and we also have an upcoming shoot for a technology company focused on autonomous driving and green energy. We’re all over the map with the range of work that we have in the pipeline.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
One of my favorite projects actually took place before Matter Films was officially around, but we had a lot of the same team. We did an environmentally sensitive project for Sedona, Arizona, called Sedona Secret 7. Our campaign told the millions of tourists who arrive there how to find other equally beautiful destinations in and around Sedona instead of just the ones everyone already knew.

It was one of those times when advertising wasn’t about selling something, but about saving something.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, a pair of AirPods and a laptop. The Matter Films team gave me AirPods for my birthday, so those are extra special!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
My usage on Instagram is off the charts; it’s embarrassing. While I do look at everyone’s vacation photos or what workout they did that day, I also use Instagram as a talent sourcing tool for a lot of work purposes: I follow directors, animation studios and tons of artists that I either get inspiration from or want to work with.

A good percentage of people I follow are creatives that I want to work with at some point. I also reach out to people all the time for potential collaborations.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I love outdoor adventures. Some days I’ll go on a crazy hike here in Arizona or rent a four-wheeler and explore the desert or mountains. I also love just hanging out with my kids — they’re a great age.

Redshift integrates Cinema 4D noises, nodes and more

Maxon and Redshift Rendering Technologies have released Redshift 3.0.12, which has native support for Cinema 4D noises and deeper integration with Cinema 4D, including the option to define materials using Cinema 4D’s native node-based material system.

Cinema 4D noise effects have been in demand within other 3D software packages because of their flexibility, efficiency and look. Native support in Redshift means that users of other DCC applications can now access Cinema 4D noises by using Redshift as their rendering solution. Procedural noise allows artists to easily add surface detail and randomness to otherwise perfect surfaces. Cinema 4D offers 32 different types of noise and countless variations based on settings. Native support for Cinema 4D noises means Redshift can preserve GPU memory while delivering high-quality rendered results.

Redshift 3.0.12 gives content creators deeper integration of Redshift within Cinema 4D. Redshift materials can now be defined using Cinema 4D’s nodal material framework, introduced in Release 20. Redshift materials can also use the Node Space system introduced in Release 21, which combines the native nodes of multiple render engines into a single material; Redshift is the first renderer to take advantage of the new Cinema 4D API to implement its own Node Spaces. Users can now also use any Cinema 4D view panel as a Redshift IPR (interactive preview render) window, making it easier to work within compact layouts and interact with a scene while developing materials and lighting.

Redshift 3.0.12 is immediately available from the Redshift website.

Maxon acquired Redshift in April 2019.

Creative Outpost buys Dolby-certified studios, takes on long-form

After acquiring the studio assets from now-closed Angell Sound, commercial audio house Creative Outpost is now expanding its VFX and audio offerings by entering the world of long-form audio. Already in picture post on its first Netflix series, the company is now open for long-form ADR, mix and review bookings.

“Space is at a premium in central Soho, so we’re extremely privileged to have been able to acquire four studios with large booths that can accommodate crowd sessions,” say Creative Outpost co-founders Quentin Olszewski and Danny Etherington. “Our new friends in the ADR world have been super helpful in getting the word out into the wider community, having seen the size, build quality and location of our Wardour Street studios and how they’ll meet the demands of the growing long-form SVOD market.”

With the Angell Sound assets in place, the team at Creative Outpost has completed a number of joint picture and sound projects for online and TV. Focusing two of its four studios primarily on advertising work, Creative Outpost has provided sound design and mix on campaigns including Barclays’ “Team Talk,” Virgin Mobile’s “Sounds Good,” Icee’s “Swizzle, Fizzle, Freshy, Freeze,” Green Flag’s “Who The Fudge Are Green Flag,” Santander’s “Antandec” and Coca Cola’s “Coaches.” Now, the team’s ambitions are to apply its experience from the commercial world to further include long-form broadcast and feature work. Its Dolby-approved studios were built by studio architect Roger D’Arcy.

The studios are running Avid Pro Tools Ultimate, Avid hardware controllers and Neumann U87 microphones. They are also set up for long-form/ADR work with EdiCue and EdiPrompt, Source-Connect Pro and ISDN capabilities, Sennheiser MKH 416 and DPA D:screet microphones.

“It’s an exciting opportunity to join Creative Outpost with the aim of helping them grow the audio side of the company,” says Dave Robinson, head of sound at Creative Outpost. “Along with Tom Lane — an extremely talented fellow ex-Angell engineer — we have spent the last few months putting together a decent body of work to build upon, and things are really starting to take off. As well as continuing to build our core short-form audio work, we are developing our long-form ADR and mix capabilities and have a few other exciting projects in the pipeline. It’s great to be working with a friendly, talented bunch of people, and I look forward to what lies ahead.”

 

Localization: Removing language barriers on global content

By Jennifer Walden

Foreign films aren’t just for cinephiles anymore. Streaming platforms are serving up international content to the masses. There are incredible series — like Netflix’s Spanish series Money Heist, the Danish series The Rain and the German series Dark — that would otherwise have been unknown to American audiences. The same holds true for American content reaching foreign audiences. For instance, the Starz series American Gods is available in French. Great stories are always worth sharing, and language shouldn’t be the barrier that holds back the flood of global entertainment.

Now I know there are purists who feel a film or show should be experienced in its original language, but admit it, sometimes you just don’t feel like reading subtitles. (Or, if you do, you can certainly watch those aforementioned shows with subtitles and hear the original language.) So you pop on the audio for your preferred language and settle in.

Chris Carey in the Burbank studio

Dubbing used to be a poorly lipsynced affair, with bad voiceovers that didn’t fit the characters on screen in any capacity. Not so anymore. In fact, dubbing has evolved so much that it’s earned a new moniker — localization. The increased offering of globally produced content has dramatically increased the demand for localization. And as they say, practice makes perfect… or better, anyway.

Two major localization providers — BTI Studios and Iyuno Media Group — have recently joined forces under the Iyuno brand, which is now headquartered in London. Together, they have 40 studio facilities in 30 different countries, and support 82 different languages, according to its chief revenue officer/managing director of the Americas Chris Carey.

Those are impressive numbers. But what does this mean for the localization end result?

Iyuno is able to localize audio locally. The language localization for a specific market is happening in that market. This means the language is current. The actors aren’t just fluent; they’re native speakers. “Dialects change really fast. Slang changes. Colloquialisms change. These things are changing all the time, and if you’re not in the market with the target audience you can miss a lot of things that a good geographically diverse network of performers can give you,” says Carey.

Language expertise doesn’t end with actor performance. There are also the scripts and subtitles to think about. Localization isn’t a straight translation. There’s the process of script adaptation in which words are chosen based on meaning (of course) but also on syllable count in order to match lipsync as closely as possible. It’s a feat that requires language fluency and creativity.
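To illustrate just the syllable-count piece of that puzzle, the toy sketch below compares a few candidate translations of a line and prefers the one whose syllable count is closest to the original, so the dubbed line stands a better chance of fitting the on-screen mouth movements. The syllable counter is a crude vowel-group heuristic and the sample lines are invented; in practice this judgment is made by human adaptation writers, not a script.

# Toy example: pick the candidate translation whose syllable count best matches
# the original English line.
import re

def rough_syllables(text):
    # Very rough estimate: count groups of consecutive vowels in each word.
    return sum(len(re.findall(r"[aeiouyáéíóú]+", w.lower())) or 1 for w in text.split())

def best_fit(original, candidates):
    target = rough_syllables(original)
    return min(candidates, key=lambda c: abs(rough_syllables(c) - target))

original_line = "I never wanted any of this."
candidates = [
    "Yo nunca quise nada de esto.",
    "Jamás fue mi intención que pasara nada de todo esto.",
]
print(best_fit(original_line, candidates))  # prints the shorter, closer-fitting line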

BTI France

“If you think about the Eastern languages, and the European and Eastern European languages, they use a lot of consonants and syllables to make a simple English word. So we’re rewriting the script to use a different word that means the same thing but will fit better with the actor on-screen. So when the actor says the line in Polish and it comes out of what appears to be the mouth of the American actor on-screen, the lipsync is better,” explains Carey.

Iyuno doesn’t just do translations — dubbing and subtitles — to and from English. Of the 82 languages it covers, it can translate any one of those into another. This process requires a network of global linguists and a cloud-based infrastructure that can support tons of video streaming and asset sharing — including the “dubbing script” that’s been adapted into the destination language.

The magic of localization is 49% script adaptation, 49% dialogue editing and 2% processing in Avid Pro Tools, like time shifting and time compression/expansion to finesse the sync. “You’re looking at the actors on screen and watching their lip movement and trying to adjust this different language to come out of their mouth as close as possible,” says Carey. “There isn’t an automated-fit sound tool that would apply for localization. The actor, the director and the engineer are in the studio together working on the sync, adjusting the lines and editing the takes.”

As the voice record session is happening, “sometimes the actor will suggest a better way to say a line, too, and they’ll do an ‘as recorded script,’” says Carey. “They’ll make red lines and markups to the script, and all of that workflow we have managed into our technology platform, so we can deliver back to the customer the finished dub, the mix, and the ‘as recorded script’ with all of the adaptations and modifications that we had done.”

Darkest Hour is just one of the many titles they’ve worked on.

Iyuno’s technology platform (its cloud-based collaboration infrastructure) is custom-built. It can be modified and updated as needed to improve the workflow. “That backend platform does all the script management and file asset management; we are getting the workflow very efficient. We break all the scripts down into line counts by actor, so he/she can do the entire session’s worth of lines throughout that show. Then we’ll bring in the next actor to do it,” says Carey.

Pro Tools is the de facto DAW for all the studios in the Iyuno Media Group. Having one DAW as the standard makes it easy to share sessions between facilities. When it comes to mic selection, Carey says the studios’ engineers make those choices based on what’s best for each project. He adds, “And then factor in the acoustic space, which can impart a character to the sound in a variety of different ways. We use good studios that we built with great acoustic properties and use great miking techniques to create a sound that is natural and sounds like the original production.”

Iyuno is looking to improve the localization process even further by building up a searchable database of actors’ voices. “We’re looking at a bit more sophisticated science around waveform analysis. You can do a Fourier transform on the audio to get a spectral analysis of somebody’s voice. We’re looking at how to do that to build a sound-alike library so that when we have a show, we can listen to the actor we are trying to replace and find actors in our database that have a voice match for that. Then we can pull those actors in to do a casting test,” says Carey.
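As a rough illustration of that idea only (Iyuno’s system is its own), a voice “fingerprint” can be as simple as an averaged FFT magnitude spectrum, with roster candidates ranked by cosine similarity against the actor being replaced. A production tool would use far richer features; the WAV file names below are placeholders.

# Toy voice-matching sketch: average magnitude spectrum per recording, then rank
# roster voices by cosine similarity to the target voice.
import numpy as np
from scipy.io import wavfile

def spectral_fingerprint(path, n_fft=2048):
    rate, samples = wavfile.read(path)
    samples = samples.astype(np.float64)
    if samples.ndim > 1:
        samples = samples.mean(axis=1)  # fold stereo to mono
    frames = [samples[i:i + n_fft] for i in range(0, len(samples) - n_fft, n_fft)]
    spectra = [np.abs(np.fft.rfft(frame * np.hanning(n_fft))) for frame in frames]
    fingerprint = np.mean(spectra, axis=0)
    return fingerprint / (np.linalg.norm(fingerprint) + 1e-12)

target = spectral_fingerprint("original_actor.wav")  # placeholder file names
roster = {name: spectral_fingerprint(f"{name}.wav") for name in ("actor_a", "actor_b")}
ranked = sorted(roster, key=lambda name: float(np.dot(target, roster[name])), reverse=True)
print("closest spectral match:", ranked[0])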

Subtitles
As for subtitles, Iyuno is moving toward a machine-assisted workflow. According to Carey, Iyuno is inputting data on language pairs (source and destination) into software that trains on that combination. Once it “learns” how to do those translations, the software will provide a first pass “in a pretty automated fashion, quite faster than a human would have done that. Then a human QCs it to make sure the words are right, makes some corrections, corrects intentions that weren’t literal and needs to be adjusted,” he says. “So we’re bringing a lot of advancement in with AI and machine learning to the subtitling world. We will expect that to continue to move pretty dramatically toward an all-machine-based workflow.”
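In outline, that machine-first, human-QC flow looks something like the sketch below: each subtitle event gets a machine translation and a confidence score, and anything under a threshold is routed to a linguist. The translate() function is only a stand-in for whatever engine a vendor has trained on a given language pair; nothing here reflects Iyuno’s actual platform.

# Sketch of a machine-assisted subtitle pass with human review flagging.
from dataclasses import dataclass

@dataclass
class SubtitleEvent:
    start: str            # e.g. "00:01:02,500"
    end: str
    source_text: str
    target_text: str = ""
    confidence: float = 0.0
    needs_review: bool = True

def translate(text, src, dst):
    # Placeholder: a real system calls a model trained on this language pair
    # and returns its translation plus a confidence score.
    return text, 0.5

def machine_pass(events, src, dst, review_threshold=0.85):
    for ev in events:
        ev.target_text, ev.confidence = translate(ev.source_text, src, dst)
        ev.needs_review = ev.confidence < review_threshold
    return events

events = [SubtitleEvent("00:01:02,500", "00:01:04,000", "No te preocupes.")]
for ev in machine_pass(events, src="es", dst="en"):
    if ev.needs_review:
        print(f"{ev.start} needs a human pass: {ev.target_text!r} ({ev.confidence:.2f})")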

But will machines eventually replace human actors on the performance side? Carey asks, “When were you moved by Google assistant, Alexa or Siri talking to you? I reckon we have another few turns of the technology crank before we can have a machine produce a really good emotional performance with a synthesized voice. It’s not there yet. We’re not going to have that too soon, but I think it’ll come eventually.”

Main Image: Starz’s American Gods – a localization client.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

Framestore VFX will open in Mumbai in 2020

Oscar-winning creative studio Framestore will be opening a full-service visual effects studio in Mumbai in 2020 to target India’s booming creative industry. The studio will be located in the Nesco IT Park in Goregaon, in the center of Mumbai’s technology district. The move underscores Framestore’s continued interest in India, following its major 2017 investment in Jesh Krishna Murthy’s VFX studio, Anibrain.

“Mumbai represents a rolling of wheels that were set in motion over two years ago,” says Framestore founder/CEO William Sargent. “Our investment in Anibrain has grown considerably, and we continue in our partnership with Jesh Krishna Murthy to develop and grow that business. Indeed, they will become a valued production partner to our Mumbai offering.”

Framestore plans to hire substantially in the coming months, aiming to build an initial 500-strong team that combines existing Framestore talent with the best of local Indian expertise. Mumbai will work alongside the global network, including London and Montreal, to create a cohesive virtual team delivering high-quality international work.

“Mumbai has become a center of excellence in digital filmmaking. There’s a depth of talent that can deliver to the scale of Hollywood with the color and flair of Bollywood,” Sargent continues. “It’s an incredibly vibrant city and its presence on the international scene is holding us all to a higher standard. In terms of visual effects, we will set the standard here as we did in Montreal almost eight years ago.”

 

London’s Freefolk beefs up VFX team

Soho-based visual effects studio Freefolk has seen growth in its commercials and longform work and has expanded its staff to meet that demand. As part of the uptick in work, Freefolk promoted Cheryl Payne from senior producer to head of commercial production. Additionally, Laura Rickets has joined as senior producer, and 2D artist Bradley Cocksedge has been added to the commercials VFX team.

Payne, who has been with Freefolk since the early days, has worked on some of the studio’s biggest commercials, including Warburtons for Engine, Peloton for Dark Horses and Cadburys for VCCP.

Rickets comes to Freefolk with over 18 years of production experience working at some of the biggest VFX houses in London, including Framestore, The Mill and Smoke & Mirrors, as well as agency-side for McCann. Since joining the team, Rickets has VFX-produced work on the I’m A Celebrity IDs, a set of seven technically challenging and CG-heavy spots for the new series of the show, as well as ads for the Rugby World Cup and Who Wants to Be a Millionaire?

Cocksedge is a recent graduate who joins from Framestore, where he was working as an intern on Fantastic Beasts: The Crimes of Grindelwald. While in school at the University of Hertfordshire, he interned at Freefolk and is happy to be back in a full-time position.

“We’ve had an exciting year and have worked on some really stand-out commercials, like TransPennine for Engine and the beautiful spot for The Guardian we completed with Uncommon, so we felt it was time to add to the Freefolk family,” says Fi Kilroe, Freefolk’s co-managing director/executive producer.

Main Image: (L-R) Cheryl Payne, Laura Rickets and Bradley Cocksedge

Quick Chat: The Rebel Fleet’s Michael Urban talks on-set workflows

When shooting major motion pictures and episodic television with multiple crews in multiple locations, production teams need a workflow that gives them fast access and complete control of the footage across the entire production, from the first day of the shoot to the last day of post. This is Wellington, New Zealand-based The Rebel Fleet’s reason for being.

What exactly do they do? Well, we reached out to managing director Michael Urban to find out.

Can you talk more about what you do and what types of workflows you supply?
The Rebel Fleet supplies complete workflow solutions, from on-set Qtake video assist and DIT to dailies, QC, archive and delivery to post. By managing the entire workflow, we can provide consistency and certainty around the color pipeline, monitor calibration, crew expertise and communication, and production can rely on one team to take care of that part of the workflow.

We have worked closely with Moxion many times and use its Immediates workflow, which enables automated uploads direct from video assist into its secure dailies platform. Anyone with access to the project can view rushes and metadata from set moments after the video is shot. This also enables different shooting units to automatically and securely share media. Two units shooting in different countries can see what each other has shot, including all camera and scene/take metadata. This is then available and catalogued directly into the video assist system. We have a lot of experience working alongside camera and VFX on-set as well as delivering to post, making sure we are delivering exactly what’s needed in the right formats.

You recently worked on a film that was shot in New Zealand and China, and you sent crews to China. Can you talk about that workflow a bit and name the film?
I can’t name the film yet, but I can tell you that it’s in the adventure genre and is coming out in the second half of 2020. The main pieces of software are Colorfront On-Set Dailies for processing all the media and Yoyotta for downloading and verifying media. We also use Avid for some edit prep before handing over to editorial.

How did you work with the DP and director? Can you talk about those relationships on this particular film?
On this shoot the DP and director had rushes screenings each night to go over the main unit and second unit rushes and make sure the dailies grade was exactly what they wanted. This was the last finesse before handing over dailies to editorial, so it had to be right. As rushes were being signed off, we would send them off to the background render engine, which would create four different outputs in multiple resolutions and framing. This meant that moments after the last camera mag was signed off, the media was ready for Avid prep and delivery. Our data team worked hard to automate as many processes as possible so there would be no long nights sorting reports and sheets. That work happened as we went throughout the day instead of leaving a multitude of tasks for the end of the day.

How do your workflows vary from project to project?
Every shoot is approached with a clean slate, and we work with the producers, DP and post to make sure we create a workflow that suits the logistical, budgetary and technical needs of that shoot. We have a tool kit that we rely on and use it to select the correct components required. We are always looking for ways to innovate and provide more value for the bottom line.

You mentioned using Colorfront tools. What does that offer you? And what about storage? Seems like working on location means you need a solid way to back up.
Colorfront On-Set Dailies takes care of QC, grade, sound sync and metadata. All of our shared storage is built around Quantum Xcellis, plus the Quantum QXS hybrid storage systems for online and nearline. We create the right SAN for the job depending on the amount of storage and clients required for that shoot.

Can you name projects you’ve worked on in the past as well as some recent work?
Warner Bros.’ The Meg, DreamWorks’ Ghost in the Shell, Sonar’s The Shannara Chronicles, STX Entertainment’s Adrift, Netflix’s The New Legends of Monkey and The Letter for the King and Blumhouse’s Fantasy Island.

Video: The Irishman’s focused and intimate sound mixing

Martin Scorsese’s The Irishman, starring Robert De Niro, Al Pacino and Joe Pesci, tells the story of organized crime in post-war America as seen through the eyes of World War II veteran Frank Sheeran (De Niro), a hustler and hitman who worked alongside some of the most notorious figures of the 20th century. In the film, the actors have been famously de-aged, thanks to VFX house ILM, but it wasn’t just their faces that needed to be younger.

In this video interview, Academy Award-winning re-recording sound mixer and decades-long Scorsese collaborator Tom Fleischman — who will receive the Cinema Audio Society’s Career Achievement Award in January — talks about de-aging actors’ voices as well as the challenges of keeping the film’s sound focused and intimate.

“We really had to try and preserve the quality of their voices in spite of the fact we were trying to make them sound younger. And those edits are sometimes difficult to achieve without it being apparent to the audience. We tried to do various types of pitch changing, and we used different kinds of plugins. I listened to scenes from Serpico for Al Pacino and The King of Comedy for Bob De Niro and tried to match the voice quality of what we had from The Irishman to those earlier movies.”

Fleischman worked on the film at New York’s Soundtrack.

Enjoy the video:

The Irishman editor Thelma Schoonmaker

By Iain Blair

Editor Thelma Schoonmaker is a three-time Academy Award winner who has worked alongside filmmaker Martin Scorsese for almost 50 years. Simply put, Schoonmaker has been Scorsese’s go-to editor and key collaborator over the course of some 25 films, winning Oscars for Raging Bull, The Aviator and The Departed. The 79-year-old also received a career achievement award from the American Cinema Editors (ACE).

Thelma Schoonmaker

Schoonmaker cut Scorsese’s first feature, 1967’s Who’s That Knocking at My Door, and since 1980’s Raging Bull has worked on all of his features, receiving a number of Oscar nominations along the way. There are too many to name, but some highlights include The King of Comedy, After Hours, The Color of Money, The Last Temptation of Christ, Goodfellas, Casino and Hugo.

Now Scorsese and Schoonmaker have once again turned their attention to the mob with The Irishman. Starring Robert De Niro, Al Pacino and Joe Pesci, it’s an epic saga that runs 3.5 hours and focuses on organized crime in post-war America. It’s told through the eyes of World War II veteran Frank Sheeran (De Niro). He’s a hustler and hitman who worked alongside some of the most notorious figures of the 20th century. Spanning decades, the film chronicles one of the greatest unsolved mysteries in American history, the disappearance of legendary union boss Jimmy Hoffa. It also offers a monumental journey through the hidden corridors of organized crime — its inner workings, rivalries and connections to mainstream politics.

But there’s a twist to this latest mob drama that Scorsese directed for Netflix from a screenplay by Steven Zaillian. Gone are the flashy wise guys and the glamour of Goodfellas and Casino. Instead, the film examines the mundane nature of mob killings and the sad price any survivors pay in the end.

Here, Schoonmaker — who in addition to her film editing works to promote the films and writings of her late husband, famed British director Michael Powell (The Red Shoes, Black Narcissus) — talks about cutting The Irishman, working with Scorsese and their long and storied collaboration.

The Irishman must have been very challenging to cut, just in terms of its 3.5-hour length?
Actually, it wasn’t very challenging to cut. It came together much more quickly than some of our other films because Scorsese and Steve Zaillian had created a very strong structure. I think some critics think I came up with this structure, but it was already there in the script. We didn’t have to restructure, which we do sometimes, and only dropped a few minor scenes.

Did you stay in New York cutting while he shot on location, or did you visit the set?
Almost everything in The Irishman was shot in or around New York. The production was moving all over the place, so I never got to the set. I couldn’t afford the time.

When I last interviewed Marty, he told me that editing and post are his favorite parts of filmmaking. When the two of you sit down to edit, is it like having two editors in the room rather than a director and his editor?
Marty’s favorite part of filmmaking is editing, and he directs the editing after he finishes shooting. I do an assembly based on what he tells me in dailies and what I feel, and then we do all the rest of the editing together.

Could you give us some sense of how that collaboration works?
We’ve worked together for almost 50 years, and it’s a wonderful collaboration. He taught me how to edit at first, but then gradually it has become more of a collaboration. The best thing is that we both work for what is best for the film — it never becomes an ego battle.

How long did it take to edit the film, and what were the main challenges?
We edited for a year and the footage was so incredibly rich: the only challenge was to make sure we chose the best of it and took advantage of the wonderful improvisations the actors gave us. It was a complete joy for Scorsese and me to edit this film. After we locked the film, we turned over to ILM so they could do the “youthifying” of the actors. That took about seven months.

Could you talk about finding the overall structure and considerable use of flashbacks to tell the story?
Scorsese had such a strong concept for this film — and one of his most important ideas was to not explain too much. He respects the audience’s ability to figure things out themselves without pummeling them with facts. It was a bold choice and I was worried about it, frankly, at first. But he was absolutely right. He didn’t want the film to feel like a documentary. He wanted to use brushstrokes of history just to show how they affected the characters. The way the characters were developed in the film, particularly Frank Sheeran, the De Niro character, was what was most important.

Could you talk about the pacing, and how you and Marty kept its momentum going?
Scorsese was determined that The Irishman would have a slower pace than many films today. He gave the film a deceptive simplicity. Interestingly, our first audiences had no problem with this — they became gripped by the characters and kept saying they didn’t mind the length and loved the pace. Many of them said they wanted to see the film again right away.

There are several slo-mo sequences. Could you talk about why you used them and to what effect?
The Phantom camera slow-motion wedding sequence (250fps) near the end of the film was done to give the feeling of a funeral, instead of a wedding, because the De Niro character has just been forced to do the worst thing he will ever do in his life. Scorsese wanted to hold on De Niro’s face and evoke what he is feeling and to study the Italian-American faces of the mobsters surrounding him. Instead of the joy a wedding is supposed to bring, there is a deep feeling of grief.

What was the most difficult sequence to cut and why?
The montage where De Niro repeatedly throws guns into the river after he has killed someone took some time to get right. It was very normal at first — and then we started violating the structure and jump cutting and shortening until we got the right feeling. It was fun.

There’s been a lot of talk about the digital de-aging process. How did it impact the edit?
Pablo Helman at ILM came up with the new de-aging process, and it works incredibly well. He would send shots and we would evaluate them and sometimes ask for changes — usually to be sure that we kept the amazing performances of De Niro, Pacino and Pesci intact. Sometimes we would put back in a few wrinkles if it meant we could keep the subtlety of De Niro’s acting, for example. Scorsese was adamant that he didn’t want to have younger actors play the three main parts in the beginning of the film. So he really wanted this “youthifying” process to work — and it does!

There’s a lot of graphic violence. How do you feel about that in the film?
Scorsese made the violence very quick in The Irishman and shot it in a deceptively simple way. There aren’t any complicated camera moves and flashy editing. Sometimes the violence takes place after a simple pan, when you least expect it because of the blandness of the setting. He wanted to show the banality of violence in the mob — that it is a job, and if you do it well, you get rewarded. There’s no morality involved.

Last time we talked, you were using the Lightworks editing system. Do you still use Lightworks, and if so, can you talk about the system’s advantages for you?
I use Lightworks because the editing surface is still the fastest and most efficient and most intuitive to use. Maintaining sync is different from all other NLE systems. You don’t correct sync by sync lock — if you go out of sync, Lightworks gives you a red icon with a number of frames that you are out of sync. You get to choose where you want to correct sync. Since editors place sound and picture on the timeline, adjusting sync where you want to adjust the sync is much more efficient.

You’ve been Marty’s editor since his very first film — a 50-year collaboration. What’s the secret?
I think Scorsese felt when he first met me that I would do what was right for his films — that there wouldn’t be ego battles. We work together extremely well. That’s all there is to it. There couldn’t be a better job.

Do you ever have strong disagreements about the editing?
If we do have disagreements, which is very rare, they are never strong. He is very open to experimentation. Sometimes we will screen two ways and see what the audience says. But that is very rare.

What’s next?
A movie about the Osage Nation in Oklahoma, based on the book “Killers of the Flower Moon” by David Grann.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Behind the Title: MPC’s CD Morten Vinther

This creative director/director still jumps on the Flame and also edits from time to time. “I love mixing it up and doing different things,” he says.

NAME: Morten Vinther

COMPANY: Moving Picture Company, Los Angeles

CAN YOU DESCRIBE YOUR COMPANY?
From original ideas all the way through to finished production, we are an eclectic mix of hard-working and passionate artists, technologists and creatives who push the boundaries of what’s possible for our clients. We aim to move the audience through our work.

WHAT’S YOUR JOB TITLE?
Creative Director and Director

WHAT DOES THAT ENTAIL?
I guide our clients through challenging shoots and post. I try to keep us honest in terms of making sure that our casting is right and the team is looked after and has the appropriate resources available for the tasks ahead, while ensuring that we go above and beyond on quality and experience. In addition to this, I direct projects, pitch on new business and develop methodology for visual effects.

American Horror Story

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I still occasionally jump on Flame and comp a job — right now I’m editing a commercial. I love mixing it up and doing different things.

WHAT’S YOUR FAVORITE PART OF THE JOB?
Writing treatments. The moments where everything is crystal clear in your head and great ideas and concepts are rushing onto paper like an unstoppable torrent.

WHAT’S YOUR LEAST FAVORITE?
Writing treatments. Staring at a blank page, writing something and realizing how contrived it sounds before angrily deleting everything.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
Early mornings. A good night’s sleep and freshly ground coffee creates a fertile breeding ground for pure clarity, ideas and opportunities.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would be carefully malting barley for my next small batch of artisan whisky somewhere on the Scottish west coast.

Adidas Creators

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I remember making a spoof commercial at my school when I was about 13 years old. I became obsessed with operating cameras and editing, and I began to study filmmakers like Scorsese and Kubrick. After a failed career as a shopkeeper, a documentary production company in Copenhagen took mercy on me, and I started as an assistant editor.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
American Horror Story, Apple Unlock, directed by Dougal Wilson, and Adidas Creators, directed by Stacy Wall.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
If I had to single one out, it would probably be Apple’s Unlock commercial. The spot looks amazing, and the team was incredibly creative on this one. We enjoyed a great collaboration between several of our offices, and it was a lot of fun putting it together.

Apple’s Unlock

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, laptop and PlayStation.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Some say social media rots your brains. That’s probably why I’m an Instagram addict.

CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Odesza, SBTRKT, Little Dragon, Disclosure and classic reggae.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I recently bought a motorbike, and I spin around LA and Southern California most weekends. Concentrating on how to survive the next turn is a great way for me to clear the mind.

Review: Acer ConceptD 7 laptop with Nvidia RTX 2080 GPU

By Brady Betzel

This year, Nvidia announced a new line of drivers for its latest desktop and mobile RTX GPUs. Nvidia has seen the need for more speed and power from today’s multimedia creators who aren’t necessarily going to buy a high-end workstation.

The latest line of Nvidia RTX GPUs, built on the Turing architecture, is being used in game creation (for realtime raytracing), music creation, video creation and many other multimedia creation workflows. That’s because there are plenty of people — even those using Adobe’s After Effects or Premiere Pro or Blackmagic’s DaVinci Resolve Studio — who don’t need the enterprise-level guarantees that the Nvidia Quadro GPU line offers. Staying with the RTX line gives them all the power they need … and lets them save a few dollars.

Nvidia has always had a primary focus on video game users with its GTX and now RTX drivers, so in the past, multimedia creators sort of played second fiddle. But with the new Studio drivers being released into the wild, Nvidia has changed the dynamic and is showing us multimedia creators that they are serious about supporting our workflows.

In fact, one of the most coveted aspects of the Quadro line of GPUs was the ability to work in 10-bit color (30-bit total, or 10 bits per channel), and with the Nvidia Studio drivers, the GeForce and Titan lines now have 10-bit color processing enabled.
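The arithmetic behind that spec is worth spelling out: moving from 8 to 10 bits per channel quadruples the gradations per channel and takes the total palette from roughly 16.8 million to roughly 1.07 billion colors, which is what keeps subtle gradients from banding.

# Levels per channel and total colors for 8-bit vs. 10-bit (30-bit) pipelines.
for bits in (8, 10):
    levels = 2 ** bits        # gradations per channel
    colors = levels ** 3      # R x G x B combinations
    print(f"{bits}-bit: {levels} levels per channel, {colors:,} total colors")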

The Acer ConceptD 7 is one of the many RTX Studio-blessed laptops out there, and that’s the one I’m going to focus on with this review.

The ConceptD is not necessarily a “mobile workstation” — the term workstation denotes official blessing from the manufacturer, like ISV (Independent Software Vendor) certifications, 24/7/365 uptime promises and enterprise-level components inside the machines.

The RTX Studio drivers were built with apps like Resolve and Premiere Pro in mind. That’s not to say they don’t perform well for video games (my kids loved playing Roblox on it!). According to the Acer website, the name ConceptD derives from “Concept,” an idea that is still forming, and “D,” reflecting dynamism, discovery, design or whatever else a creator wants or needs from his or her laptop.

I ran a bunch of tests on the ConceptD 7, including benchmarks and real-world testing in Premiere, After Effects, Red Cine X Pro and Resolve 16 Studio, which I will get to in a bit. First, let’s take a look under the hood of the Acer ConceptD 7.

– OS: Windows 10 Home (64-bit)
– CPU: Intel Core i7-9750H (six-core), 2.6GHz (4.5GHz Turbo)
– GPU: Nvidia GeForce RTX 2080 with 8GB of dedicated memory
– Display: 15.6-inch 4K (UHD) 16:9 IPS LED LCD. (Oddly enough, the display on the laptop seems to be only 8-bit color. With such a groundbreaking spec for the GeForce RTX GPUs running 10-bit color, I would think Nvidia would have used a 10-bit display, but maybe they shaved a few dollars off by making it 8-bit running off of the Intel GPU.)
– Memory and storage: 32GB DDR4-2666 memory; 1TB SSD
– Ports: HDMI, three USB 3.1 Gen 1 Type-A ports, one USB 3.1 Gen 2 Type-C port, a headphone jack, a microphone jack, a USB Type-C port supporting up to 10Gb/s, DisplayPort over USB-C and Thunderbolt 3
– Size and weight: 0.7” x 14.1” x 10”; 4.63 lbs.
– Warranty: one-year limited

The Acer ConceptD 7 with the RTX 2080 and 32GB of RAM retails for $2,999.99. The laptop is lightweight but feels sturdy. It is white. At first the white metal shroud is off-putting and feels like you might scratch or mark it up quickly, but I found the material resilient. The keyboard feels natural — meaning I didn’t fumble over the keys as I do on some laptops these days. The keys are high enough to have a nice tactile experience, and the keyboard has a natural amber glow that doesn’t feel distracting. The IPS UHD screen is bright and crisp. In fact, it was actually a little too bright for my tastes, so I bumped the brightness down about half.

Initial Impressions
When I turned the ConceptD on for the first time, Windows booted up quickly thanks to its NVMe SSD boot drive. On the desktop I noticed a PDF to read. It had the actual serial number of my particular ConceptD 7, along with the calibration results from the Konica Minolta CA-310 color analyzer. It made me feel like this laptop might be more focused toward multimedia creators even if the calibration results are from the Acer factory.

Nonetheless, the ConceptD covers 100% of the Adobe RGB gamut, and it carries Pantone-validated color fidelity certification and color accuracy of Delta E <2. I did notice that the screen had an orange tinge to it, which couldn’t be right, so I fumbled my way to an app called Acer ConceptD Palette, which gives some tuning options for the display’s color profile. I changed from Adobe RGB to Native, and Native was the winner: whites were much crisper and cleaner than with the Adobe RGB profile. Not sure what that is about, but it was my answer.

Next step: music. I need music to listen to while I work, so I installed the Spotify Music app and began playing music. (For those interested, while writing this review, I was playing Wolves At The Gate, Tool, Thrice, Architects and Underoath.) When playing music through Spotify, I noticed the webcam light turned on, and I was immediately a little frightened. I couldn’t figure out what was causing the webcam to turn on, so I jumped into the device manager and cut off all access to the webcam by disabling it. After some more digging to put my mind at ease, I noticed the pre-installed Waves Maxx audio EQ suite had a “camera tracking” option for audio positioning, which requires a webcam to work. I think that was the culprit, so I turned the option off just in case.

Overall, I think Nvidia might need to vet the laptop’s software install a little more thoroughly with things like this popping up, since a lot of creators are careful about who has access to what on their systems. Privacy is a big deal. Maybe a “light” OS software install could be an option in the future to keep it clean. After about 30 minutes of cleanup, I had the ConceptD 7 ready to install Resolve Studio 16, Premiere, After Effects, Red Cine X Pro and the Nvidia GeForce RTX Studio drivers. The Studio drivers were not preinstalled on the ConceptD, which was a little disappointing. I really wish they had already been on the system so I didn’t have to hunt for them, but I found them here.

Real Testing
As a professional online editor and colorist, I use Resolve Studio, Premiere and many more multimedia apps heavily, constantly and sometimes simultaneously. Having the Nvidia RTX 2080 GPU with Max-Q design essentially means you get the same power as the desktop version of the 2080 but with some thermal throttling to keep the operating temperature down. This means you could see a performance hit. But from my testing, I was ecstatic at how fast this system worked and how well it handled really heavy 4K, 6K and even 8K media.

Red Cine X Pro
In Red Cine X Pro, I was able to play back 4K, 6K and 8K raw Red R3D files in real time. That was mind-blowing, to be honest. The only downside is that software manufacturers like Blackmagic and Adobe haven’t incorporated the latest Red SDK into their software to allow this realtime playback of high-resolution raw Red media. So once they do, you will be playing back 8K raw Red footage flawlessly! Hopefully, multiple streams at once.

Premiere and After Effects
Inside Adobe Premiere Pro and Adobe After Effects, I ran my favorite benchmarks from Puget Systems, whose standard desktop system for benchmarking is a desktop class Intel i9 with 128GB of RAM and Nvidia GeForce RTX 2080. So while this laptop is close, it is just that — a laptop with more thermal throttling issues, one quarter of the memory and a lower-end CPU. However, the ConceptD scores were not completely disheartening.

Puget’s standard overall score for the Intel i9 system in After Effects is a 990, while the ConceptD 7 scored a 718. In Premiere, the ConceptD 7 scored quite low, at an overall score of 469 (out of 1,000). I credit this to Premiere’s focus on single cores for processing and not much reliance on GPUs and/or hyperthreading. In my practical testing of Premiere using Media Encoder for exports and encoding, I created two one-minute UHD sequences: Sequence 1 (Export Test 1) for basic color correction only and Sequence 2 (Export Test 2) with a few effects, including 110% zoom, sharpening 100% inside of Lumetri and a Gaussian blur of 20. I disabled any caching or optimized media and checked off “Maximum Render Quality” inside of Media Encoder.

Here are my results:

– H.264 – UHD, no audio, 10Mb/s, max render quality (standard Media Encoder H.264 setting)
Export Test 1: 3:37
Export Test 2: 3:50

– H.265 – UHD, no audio, 7Mb/s, max render quality (standard Adobe Media Encoder H.265 setting)
Export Test 1: 2:43
Export Test 2: 2:48

– DPX testing: 10-bit video levels, UHD
Export 1: 3:45
Export 2: 3:53

In terms of timeline playback performance inside Premiere Pro, all clips played back at half quality. 4K also played back at full quality; 6K 3:1 played at half quality but dropped 58 frames at full quality, while 8K dropped 48 frames at full quality. I tried to use Blackmagic’s new Blackmagic Raw plugin to test encoding and playback in Premiere, but it wouldn’t work for me.

While editing still felt good in Premiere Pro, you still might want to use an offline/online workflow or transcode to a mezzanine format of ProRes and/or DNxHR to get a more fluid experience — at least until Adobe incorporates Red’s new SDK framework.

Resolve Studio 16
In Resolve Studio 16, the Acer ConceptD 7 with Nvidia GeForce RTX 2080 really shined when playing back Blackmagic raw (BRAW) files. It showed off how powerful a laptop can be with the right technology behind it. Similar to my Premiere testing, I created two one-minute-long UHD sequences with raw Red R3D media (which can be found here) and separate one-minute-long UHD sequences with BRAW clips (which can be found here). Boy, oh boy, the BRAW took full advantage of the CUDA cores in the 2080! Here are the tests:

When playing back the raw R3D Red files inside of Resolve 16 Studio, I was able to play back 4K, 29.97fps footage in real time at full resolution, premium debayer quality; 6K 23.98fps in real time at quarter resolution, good debayer quality; and 8K, 23.98 fps in real time at one-eighth resolution, good debayer quality. Each of these clips had only basic color correction.

When exporting, my results varied, but the speed really kicked in with resizes and effects, which jumpstarted the RTX 2080 GPU (the fans even began to take off). Make sure to check out the BRAW H.264 Nvidia export below:

– Export Test 1: Basic color corrections
– Export Test 2: one minute of 4K, 2x 6K (RedCode 3:1), 8K (Redcode 7:1) Red media in UHD sequence without audio — 110% zoom; spatial NR Faster, Small, 25; Resolve OFX Gaussian blur (default). No cached or optimized media. Force resize and debayer to highest quality.

– Custom export H.264 as a QuickTime movie full-quality resize and debayer (essentially the YouTube UHD preset, but with forcing resize and debayer to highest quality): Native H.264 encoding — about 36 minutes.
Export Test 1: 4:11
Export Test 2: 5:15

– Same export but changing “Native” to “Nvidia”
Export Test 1: 4:41
Export Test 2: 5:08

– Same export but changing “Nvidia” to “Intel Quick Sync”
Export Test 1: 4:45
Export Test 2: 5:04

– Same export but H.265
Nvidia
Export Test 1: 3:03
Export Test 2: 4:47

 

– Intel QuickSync
Export Test 1: 3:15
Export Test 2: 4:43

– DPX testing:
Export 1: 3:13
Export 2: 4:46

– BMD raw – one minute of 4608×2592 BMD raw media in UHD sequence without audio

– Export Test 1: Basic color corrections
– Export Test 2: 110% zoom; spatial NR Faster, Small, 25; Resolve OFX Gaussian blur (default). No cached or optimized media. Force resize and debayer to highest quality.

– Custom export H.264 as a QuickTime movie full-quality resize and debayer (essentially the YouTube UHD preset but with forcing resize and debayer to highest quality). Native H.264 encoding — about 36 minutes

Export Test 1: 2:53
Export Test 2: 1:05

– Same Export but changing “Native” to “Nvidia”
Export Test 1: :24 (Holy moly!)
Export Test 2: 1:01

– Same Export but changing “Nvidia” to “Intel Quick Sync”
Export Test 1: :32
Export Test 2: :58

– Same but H.265
Nvidia
Export Test 1: :28
Export Test 2: 1:02

– Intel QuickSync
Export Test 1: 1:07
Export Test 2: 1:13

– DPX testing:
Export 1: :34
Export 2: 1:19

I was blown away by the BRAW playback and export timings. The smoothness felt like I was working with low-resolution files, but I was playing the highest-resolution raw files. It was, for lack of a better term, truly liberating — that feeling of not being held back by technology. In fact, I really wanted to see how good the BRAW playback was, so I piled on serial nodes of OFX Gaussian blur at the default settings until I got dropped frames on playback, and it took 15 nodes until I started to drop below realtime playback. Even then, on the 16th node, I was getting 22-23.98fps, but it wouldn’t lock onto realtime playback.

Extra Testing
With the Windows version of the Blackmagic RAW Speed Test recently released, I obviously ran that. At 4K, it said I should get 93fps on the CPU and 366fps with the GPU using CUDA cores. The RTX 2080 has a whopping 2,944 CUDA cores compared to the previous-generation GTX 1080’s 2,560 CUDA cores. In the same test, 6K BRAW would play at 36fps on the CPU and 148fps via the CUDA cores; 8K at 25fps on the CPU and 95fps via CUDA cores.
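Expressed as a GPU-over-CPU speedup, those numbers work out to roughly a 4x advantage for the CUDA path at every resolution; the figures below are simply the frame rates the Speed Test reported on this ConceptD 7 configuration.

# Blackmagic RAW Speed Test results reported above, as (CPU fps, CUDA fps).
results = {"4K": (93, 366), "6K": (36, 148), "8K": (25, 95)}
for res, (cpu, cuda) in results.items():
    print(f"{res} BRAW: {cpu}fps CPU vs. {cuda}fps CUDA, a {cuda / cpu:.1f}x GPU advantage")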

According to the BRAW Speed Test, using 12:1 BRAW, you should get 1527fps on HD footage. Whoa! I ran one of the old standards, CineBench R20, which only runs CPU tests these days, and scored 2281 (eighth down on the list) for multiple cores and 445 for CPU single core, which is actually pretty good (around second on the list).

Finally, I went a little beyond the realm of testing for post production and ran the Unigine Superposition 4K optimized and 8K optimized benchmarks, which are reflective of video game playback. What is crazy is that, between the time I ran it the first time and the second time, there were Nvidia Studio Driver updates, and the performance actually decreased slightly. The first time, the 4K optimized score was 6112 with a minimum fps of 37.17, average fps of 45.72, and a max fps of 57.32, with a GPU utilization of 99%.

With the latest Nvidia Studio drivers, the 4K optimized Superposition benchmark went down to 5993 with a minimum fps of 36.69, average fps of 44.83 and a max fps of 55.07, with a GPU utilization of 99%. So it wasn’t a huge drop off, but still, the update decreased the overall frames per second. The first time, the 8K optimized score was 2473 with a minimum fps of 15.36, average fps of 18.50 and a max fps of 21.77, with a GPU utilization of 100%. With the latest Nvidia Studio Drivers, the 8K optimized Superposition benchmark went up to 2479 with a minimum fps of 15.44, average fps of 18.55 and a max fps of 21.76, with a GPU utilization of 100%. Overall, the results are similar, but the update did affect how many frames per second were achieved.

Summing Up
In the end, the Acer ConceptD 7 is a powerful editing and multimedia creation machine. If Acer asked what I would improve, I would ask for longer battery life, a choice of colors (not just white) and a clean software installation option with the Nvidia Studio drivers ready to go. The battery seemed to last around four hours when using the laptop lightly and way less if I was hammering the Nvidia RTX 2080 in Resolve. Recharging was also slow; it took a few hours to fully charge (definitely not the fast charge I’ve seen on some laptops).

For software cleanliness, I would be willing to pay an extra fee, maybe $50, for a clean installation of Windows, Nvidia Studio drivers and any apps I select, like Resolve and Adobe Creative Cloud. That being said, the power of the Nvidia Studio drivers combined with the RTX line of Nvidia GPUs is very promising. When working in multimedia creation apps like Premiere Pro and Resolve Studio 16, I rarely noticed the ConceptD 7 getting in my way.

I can see how freeing a system like this would be for people who work on the go but might want to plug in to a USB-C monitor at home to finish their UHD work. If you are thinking of converting from a MacBook Pro lifestyle to Windows-based, the Nvidia Studio line of laptops is where I would look first.

While this Acer ConceptD 7 is $2,999.99, there are other price options depending on whether you want a Quadro or an RTX 2060. Overall, for under $3,000, this laptop has really powerful components that will speed up your workflow and help get the technology out of your way to be creative in 3D modeling, editing and color correcting.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Deluxe NY adds color to Mister Rogers biopic

A Beautiful Day in the Neighborhood stars Tom Hanks as children’s television icon Fred Rogers in a story about kindness triumphing over cynicism. Inspired by the article “Can You Say…Hero?” by journalist Tom Junod, the film is directed by Marielle Heller. Cinematographer Jody Lee Lipes worked on the color finish with Deluxe New York’s Sam Daley.

Together, Heller and Lipes worked to replicate the feature’s late-1990s film aesthetic through in-camera techniques. After testing various film and digital camera options, production opted to shoot a majority of the footage with ARRI Alexa cameras in Super 16 mode. To more accurately represent the look of Mister Rogers’ Neighborhood, Lipes’ team scoured the globe for working versions of the same Ikegami video cameras that were used to tape the show. In a similar quest for authenticity, Daley brushed up on the look of Mister Rogers’ Neighborhood by watching old episodes and even visiting a Pittsburgh museum that housed the show’s original set. He also researched film styles typical of the time period to help inform the overall look of the feature.

“Incorporating Ikegami video footage into the pipeline was the most challenging aspect of the color on this film, and we did considerable testing to make sure that the quality of the video recordings would hold up in a theatrical environment,” Daley explained. “Jody and I have been working together for more than 10 years; we’re aesthetically in-sync and we both like to take what some might consider risks visually, and this film is no different.”

Through the color finishing process, Daley helped unify and polish the final footage, which included PAL and NTSC video in addition to the Alexa-acquired digital material. He paid careful attention to integrating the different video standards and frame rates while also shaping two distinct looks to reflect the narrative. For contrast with the optimistic Rogers and his colorful world, Daley incorporated a cool, moody feel around the pessimistic Junod, named “Lloyd Vogel” in the film and played by Matthew Rhys.

Alaina Zanotti rejoins Cartel as executive producer

Santa Monica-based editorial and post studio Cartel has named Alaina Zanotti as executive producer to help with business development and to oversee creative operations along with partner and executive producer Lauren Bleiweiss. Additionally, Cartel has bolstered its roster with the signing of comedic editor Kevin Zimmerman.

Kevin Zimmerman

With more than 15 years of experience, Zanotti joins Cartel after working for clients that include BBDO, Wieden+Kennedy, Deutsch, Google, Paramount and Disney. Zanotti most recently served as senior executive producer at Method Studios, where she oversaw business development for global VFX and post. Prior to that stint, she joined Cartel in 2016 to assist the newly established post and editorial house’s growth. Previously, Zanotti spent more than a decade driving operations and raising brand visibility for Method and Company 3.

Editor Zimmerman joins Cartel following a tenure as a freelance editor, during which his comedic timing and entrepreneurial spirit earned him commercial work for Avocados From Mexico and Planters that aired during 2019’s Super Bowl.

Throughout his two-decade career in editorial, Zimmerman has held positions at Spot Welders, NO6, Whitehouse Post and FilmCore, with recent work for Sprite, Kia, hotels.com, Microsoft and Miller Lite, and a PSA for Girls Who Code. Zimmerman has previously worked with Cartel partners Adam Robinson and Leo Scott.

HitFilm Pro supports After Effects plugins from Video Copilot, Red Giant

FXhome’s HitFilm 14 is the newest version of the editing, VFX and compositing platform for content creators, filmmakers, editors and visual effects artists. Among the upgrades is support for After Effects plugins from Red Giant and Andrew Kramer’s Video Copilot directly within HitFilm.

From Red Giant, HitFilm 14 supports:
– Trapcode Particular – for creating organic 3D particle effects and complex motion graphics elements

From Video Copilot, HitFilm 14 supports:
– Element 3D – for importing and animating realistic 3D models
– Optical Flares – for creating customizable, realistic lens flares
– Saber – for creating high-energy beams, neon lights and other similar effects
– Orb – for creating 3D spheres and planets
– Heat Distortion – for simulating realistic heat waves and mirage effects

Other new features in HitFilm 14 are:
– Video Textures for 3D Models: Allows creators to use a video layer as a texture on their 3D model to add animated bullet holes, cracked glass or changing textures.
– Chromatic Aberration Effect: Added as an effect in HitFilm Pro, this feature lets creators replicate the red, green and blue fringes that appear around edges when light is refracted through a lens. It can be used for subtle or extreme creative effects via parameters for distance (the offset between the RGB channels), strength (opacity of the effect), radius (blur radius of the RGB channels) and “Use Lens,” which warps the effect around a point of choice (see the sketch after this list).
– Improvements to the export process: In HitFilm 14, the Export Queue is now an Export Panel and is much easier to use. Exporting can also now be done from the timeline and from comps. These “in-context” exports can apply to the content between the set in and out points or to the entire timeline using the current default preset (which can be changed from the menu).
– Additional Text Controls: Customizing text in HitFilm 14 has been further simplified, with Text panel options for All Caps, Small Caps, Subscript and Superscript. Users can also change the character spacing, horizontal or vertical scale, and baseline shift (for that Stranger Things-style titling).
– Usability and Workflow Enhancements: In addition to the new and improved export process, FXhome has also implemented new changes to the interface to further simplify the entire post production process, including a new “composite button” in the media panel, double-click and keyboard shortcuts.
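
To give a rough sense of what the chromatic aberration parameters are doing, here is a minimal Python/NumPy sketch of the basic idea (my own illustration, not FXhome’s implementation). It models only the “distance” and “strength” parameters by shifting the red and blue channels apart and blending the result back over the original; the real effect also adds a per-channel blur (“radius”) and warps the offset around a chosen lens center.

```python
# Illustrative sketch of RGB-fringe chromatic aberration (not HitFilm's code).
import numpy as np

def chromatic_aberration(rgb, distance=4, strength=1.0):
    """rgb: float image array of shape (H, W, 3), values 0-1.
    distance: pixel offset applied to the red and blue channels.
    strength: opacity of the fringed result over the original."""
    fringed = rgb.copy()
    # Push red one way and blue the other; green stays in place.
    fringed[..., 0] = np.roll(rgb[..., 0], shift=(distance, distance), axis=(0, 1))
    fringed[..., 2] = np.roll(rgb[..., 2], shift=(-distance, -distance), axis=(0, 1))
    # Blend like an opacity control.
    return (1.0 - strength) * rgb + strength * fringed
```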

HitFilm 14 is available now for $349 for a three-seat professional license, which includes 12 months of free upgrades and 12 months of free technical support. From November 30 to December 2, FXhome is offering HitFilm 14 at a discount from the FXhome Store as part of its Black Friday promotion.

Chris Hellman joins Harbor as ECD of editorial

Harbor has added award-winning editor Chris Hellman as executive creative director of editorial. Hellman brings 35 years of experience collaborating with producers, art directors, writers and directors on commercials. He will be based at Harbor in New York but will also be available at its LA and London locations.

During his long and distinguished career, Hellman has garnered multiple Cannes Lions, Addy Awards, Clios, The One Club awards, London International Awards, CA Annuals and AICP Awards. He served as senior editor at Crew Cuts for 16 years, was owner/partner and senior editor at Homestead Editorial and then became senior editor at Cutting Room Films. Hellman then took up the role of creative director of post production with the FCB Health network of agencies. His work has been seen in movie theaters, at concerts and during the Super Bowl, as well as in short films and spoof commercials on Saturday Night Live.

“Creating great commercial advertising is about collaboration,” says Hellman. “Harbor is evolving to take that collaboration to a new level, offering clients an approach where the editor is brought into the creative process early on, bringing a new paradigm and a singular creative force.”

Hellman’s clients have included AT&T, Verizon, IBM, Intel, ESPN, NFL, MLB, NBA, Nike, Adidas, New Balance, 3M, Starbucks, Coke, Pepsi, Lipton, Tropicana, Audi, BMW, Volvo, Ford, Jaguar, GMC, Chrysler, Porsche, Pfizer, Merck, Novartis, AstraZeneca, Bayer, Johnson & Johnson, General Mills, Unilever, Lancome, Estee Lauder, Macy’s, TJ Maxx, Tommy Hilfiger, Victoria’s Secret, Lands’ End and The Jon Stewart Show, among many others.

Director Robert Eggers talks about his psychological thriller The Lighthouse

By Iain Blair

Writer/director Robert Eggers burst onto the scene when his feature film debut, The Witch, won the Directing Award in the US Dramatic category at the 2015 Sundance Film Festival. He followed up that success by co-writing and directing another supernatural, hallucinatory horror film, The Lighthouse, which is set in the maritime world of the late 19th century.

L-R: Director Robert Eggers and cinematographer Jarin Blaschke on set.

The story begins when two lighthouse keepers (Willem Dafoe and Robert Pattinson) arrive on a remote island off the coast of New England for their month-long stay. But that stay gets extended as they’re trapped and isolated due to a seemingly never-ending storm. Soon, the two men engage in an escalating battle of wills, as tensions boil over and mysterious forces (which may or may not be real) loom all around them.

The Lighthouse has the power of an ancient myth. To tell this tale, which was shot in black and white, Eggers called on many of those who helped him create The Witch, including cinematographer Jarin Blaschke, production designer Craig Lathrop, composer Mark Korven and editor Louise Ford.

I recently talked to Eggers, who got his professional start directing and designing experimental and classical theater in New York City, about making the film, his love of horror and the post workflow.

Why does horror have such an enduring appeal?
My best argument is that there’s darkness in humanity, and we need to explore that. And horror is great at doing that, from the Gothic to a bad slasher movie. While I may prefer authors who explore the complexities in humanity, others may prefer schlocky films with jump scares that make you spill your popcorn, which still give them that dose of darkness. Those films may not be seriously probing the darkness, but they can relate to it.

This film seems more psychological than simple horror.
We’re talking about horror, but I’m not even sure that this is a horror film. I don’t mind the label, even though most wannabe auteurs are like, “I don’t like labels!” It started with an idea my brother Max had for a ghost story set in a lighthouse, which is not what this movie became. But I loved the idea, which was based on a true story. It immediately evoked a black and white movie on 35mm negative with a boxy aspect ratio of 1.19:1, like the old movies, and a fusty, dusty, rusty, musty atmosphere — the pipe smoke and all the facial hair — so I just needed a story that went along with all of that. (Laughs) We were also thinking a lot about influences and writers from the time — like Poe, Melville and Stevenson — and soaking up the jargon of the day. There were also influences like Prometheus and Proteus and God knows what else.

Casting the two leads was obviously crucial. What did Willem and Robert bring to their roles?
Absolute passion and commitment to the project and their roles. Who else but Willem can speak like a North Atlantic pirate stereotype and make it totally believable? Robert has this incredible intensity, and together they play so well against each other and are so well suited to this world. And they both have two of the best faces ever in cinema.

What were the main technical challenges in pulling it all together, and is it true you actually built the lighthouse?
We did. We built everything, including the 70-foot tower — a full-scale working lighthouse, along with its house and outbuildings — on Cape Forchu in Nova Scotia, which is this very dramatic outcropping of volcanic rock. Production designer Craig Lathrop and his team did an amazing job, and the reason we did that was because it gave us far more control than if we’d used a real lighthouse.

We scouted a lot but just couldn’t find one that suited us, and the few that did were far too remote to access. We needed road access and a place with the right weather, so in the end it was better to build it all. We also shot some of the interiors there as well, but most of them were built on soundstages and warehouses in Halifax since we knew it’d be very hard to shoot interiors and move the camera inside the lighthouse tower itself.

Your go-to DP, Jarin Blaschke, shot it. Talk about how you collaborated on the look and why you used black and white.
I love the look of black and white, because it’s both dreamlike and also more realistic than color in a way. It really suited both the story and the way we shot it, with the harsh landscape and a lot of close-ups of Willem and Robert. Jarin shot the film on the Panavision Millennium XL2, and we also used vintage Baltar lenses from the 1930s, which gave the film a great look, as they make the sea, water and sky all glow and shimmer more. He also used a custom cyan filter by Schneider Filters that gave us that really old-fashioned look. Then by using black and white, it kept the overall look very bleak at all times.

How tough was the shoot?
It was pretty tough, and all the rain and pounding wind you see onscreen is pretty much real. Even on the few sunny days we had, the wind was just relentless. The shoot was about 32 days, and we were out in the elements in March and April of last year, so it was freezing cold and very tough for the actors. It was very physically demanding.

Where did you post?
We did it all in New York at Harbor Post, with some additional ADR work at Goldcrest in London with Robert.

Do you like the post process?
I love post, and after the very challenging shoot, it was such a relief to just get in a warm, dry, dark room and start cutting and pulling it all together.

Talk about editing with Louise Ford, who also cut The Witch. How did that work?
She was with us on the shoot at a bed and breakfast, so I could check in with her at the end of the day. But it was so tough shooting that I usually waited until the weekends to get together and go over stuff. Then when we did the stage work at Halifax, she had an edit room set up there, and that was much easier.

What were the big editing challenges?
The DP and I developed such a specific and detailed cinema language without a ton of coverage and with little room for error that we painted ourselves into a corner. So that became the big challenge… when something didn’t work. It was also about getting the running time down but keeping the right pace since the performances dictate the pace of the edit. You can’t just shorten stuff arbitrarily. But we didn’t leave a lot of stuff on the cutting room floor. The assembly was just over two hours and the final film isn’t much shorter.

All the sound effects play a big role. Talk about the importance of sound and working on them with sound designer Damian Volpe, whose credits include Can You Ever Forgive Me?, Leave No Trace, Mudbound, Drive, Winter’s Bone and Margin Call.
It’s hugely important in this film, and Louise and I did a lot of work in the picture edit to create temps for Damian to inspire him. And he was so relentless in building up the sound design, and even creating weird sounds to go with the actual light, and to go with the score by Mark Korven, who did The Witch, and all the brass and unusual instrumentation he used on this. So the result is both experimental and also quite traditional, I think.

There are quite a few VFX shots. Who did them, and what was involved?
We had MELS and Oblique in Quebec and Brainstorm Digital in New York also did some. The big one was that the movie’s set on an island but we shot on a peninsula, which also had a lighthouse further north, which unfortunately didn’t look at all correct, so we framed it out a lot but we had to erase it for some of the time. And our period-correct sea ship broke down and had to be towed around by other ships, so there was a lot of clean up. Also with all the safety cables we had to use for cliff shots with the actors.

Where did you do the DI, and how important is it to you?
We did it at Harbor with colorist Joe Gawler, and it was hugely important although it was fairly simple because there’s very little latitude on the Double-X film stock we used. We did a lot of fine detail work to finesse it, but it was a lot quicker than if it’d been in color.

Did the film turn out the way you hoped?
No, they always change and surprise you, but I’m very proud of what we did.

What’s next?
I’m prepping another period piece, but it’s not a horror film. That’s all I can say.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Object Matrix and Arvato partner for managing digital archives

Object Matrix and Arvato Systems have partnered to help companies instantly access, manage, browse and edit clips from their digital archives.

By pairing Arvato’s production asset management platform, VPMS EditMate, with Object Matrix’s media-focused object storage solution, MatrixStore, the companies report that organizations can significantly reduce the time needed to manage media workflows while making content easily discoverable. The integration makes it easy to unlock assets held in archive, enable creative collaboration and monetize archived assets.

MatrixStore is a media-focused private and hybrid cloud storage platform that provides instant access to all media assets. Built on object-based storage technology, it delivers digital content governance through an integrated, automated storage platform that supports multiple media workflows while remaining secure and scalable.

VPMS EditMate is a toolkit built for managing and editing projects in a streamlined, intuitive and efficient manner, all from within Adobe Premiere Pro. From project creation and collecting media, to the export and storage of edited material, users benefit from a series of features designed to simplify the spectrum of tasks involved in a modern and collaborative editing environment.

Behind the Title: Logan & Sons director Tom Schlagkamp

This director also loves editing, sound design and working with VFX long before and after the shoot.

Name: Tom Schlagkamp

Company: Logan & Sons, the live-action division of bicoastal content creation studio Logan, which is based in NYC and LA.

Job Title: Director

What’s your favorite part of the job?
I can honestly say I love every detail of the job, even the initial pitch, as it’s the first contact with a new story, a new project and a new challenge. I put a lot of heart into every aspect of a film — the better you’ve prepared in pre-production, the more creative you can be during the shoot; it brings you more time and oversight during shooting and more power to react if anything changes.

Tom Schlagkamp’s short film Dysconnected.

For my European, South African and Asian projects, I’m also very happy to be deeply involved in editing, sound design and post production, as I love working with the material. I usually shoot a lot of footage, so there are more possibilities to work with in editing.

What’s your least favorite?
Not winning a job; that’s why I try to avoid that… (laughs).

If you didn’t have this job, what would you be doing instead?
Well, plan A would be a rock star — specifically, a guitarist in a thrash metal band. Plan B would be the exact opposite: working at my family’s winery — Schlagkamp-Desoye in Germany’s beautiful Mosel Valley. My brother runs this company now, which is in its 11th generation. Our family has grown wine since 1602. The winery also includes a wine museum.

How early on did you know this would be your path?
In Germany, you don’t necessarily jump from high school to college right away, so I took a short time to learn all the basics of filmmaking with as much practical experience as I could get. That included directing music videos and short films while I worked for Germany’s biggest TV station, RTL. There I learned to edit and produced campaigns for shows, and in particular movie trailers and campaigns for the TV premieres of blockbuster movies. That was a lot of work and fun at the same time.

What was it about directing that attracted you?
The whole idea of creating something completely new. I loved (and still do) the films of the “New Hollywood” and the Nouvelle Vague — they challenged the regular way of storytelling and created something outstanding that changed filmmaking forever. This fascinated me, and I knew I had to learn the rules first in order to be able to question them, so I started studying at Germany’s film academy, the Filmakademie Baden-Württemberg.

What is it about directing that keeps you interested?
It’s about always moving forward. There are so many more ways you can tell a story and so many stories that have not yet been told, so I love working on as many projects as possible.

Dysconnected

Do you get involved with post at all?
Yes, I love to be part of that whenever the circumstances allow it. As mentioned before, I love editing and sound design as well, but also planning and working with VFX long before and after the shoot is fascinating to me.

Can you name some recent projects you have worked on?
As I answer these questions, I’m sitting at the airport in Berlin, traveling to Johannesburg, South Africa. I’m excited about shooting a series of commercials in the African savanna. I shot many commercials this year, but was also happy that my short film Dysconnected, which I shot in Los Angeles last year, premiered at LA Shorts International Film Festival this summer.

What project are you most proud of?
I loved shooting the Rock ’n’ Roll Manifesto for Visions magazine, because it was the perfect combination of my job as a director and my before-mentioned “alternative Plan A,” making my living as a musician. Also, everybody involved in the project was so into it and it’s been the best shooting experience. And winning awards with it in the end was an added bonus.

Rock ‘n’ Roll Manifesto

Name three pieces of technology you can’t live without.
1. Noise cancelling headphones. When I travel, I love listening to music and podcasts, and with these headphones you can dive into that world perfectly.
2. My mobile phone, which I hardly use for phone calls anymore but everything else.
3. My laptop, which is part of every project from the beginning until the end.

What do you do to de-stress from it all?
Cycling, hiking and rock concerts. There is nothing like the silence of being in pure nature and the loudness of heavy guitars and drums at a metal show (laughs).

Alkemy X adds Albert Mason as head of production

Albert Mason has joined VFX house Alkemy X as head of production. He comes to Alkemy X with over two decades of experience in visual effects and post production. He has worked on projects directed by such industry icons as Peter Jackson on the Lord of the Rings trilogy, Tim Burton on Alice in Wonderland and Robert Zemeckis on The Polar Express. In his new role at Alkemy X, he will use his experience in feature films to target the growing episodic space.

A large part of Alkemy X’s work has been for episodic visual effects, with credits that include Amazon Prime’s Emmy-winning original series, The Marvelous Mrs. Maisel, USA’s Mr. Robot, AMC’s Fear the Walking Dead, Netflix’s Maniac, NBC’s Blindspot and Starz’s Power.

Mason began his career at MTV’s on-air promos department, sharpening his production skills on top series promo campaigns and as a part of its newly launched MTV Animation Department. He took an opportunity to transition into VFX, stepping into a production role for Weta Digital and spending three years working globally on the Lord of the Rings trilogy. He then joined Sony Pictures Imageworks, where he contributed to features including Spider-Man 3 and Ghost Rider. He has also produced work for such top industry shops as Logan, Rising Sun Pictures and Greymatter VFX.

“[Albert’s] expertise in constructing advanced pipelines that embrace emerging technologies will be invaluable to our team as we continue to bolster our slate of VFX work,” says Alkemy X president/CEO Justin Wineburgh.

Creating With Cloud: A VFX producer’s perspective

By Chris Del Conte

The ‘90s was an explosive era for visual effects, with films like Jurassic Park, Independence Day, Titanic and The Matrix shattering box office records and inspiring a generation of artists and filmmakers, myself included. I got my start in VFX working on seaQuest DSV, an Amblin/NBC sci-fi series that was groundbreaking for its time, but looking at the VFX of modern films like Gemini Man, The Lion King and Ad Astra, it’s clear just how far the industry has come. A lot of that progress has been enabled by new technology and techniques, from the leap to fully digital filmmaking and the emergence of advanced viewing formats like 3D, Ultra HD and HDR to the rebirth of VR and now the rise of cloud-based workflows.

In my nearly 25 years in VFX, I’ve worn a lot of hats, including VFX producer, head of production and business development manager. Each role involved overseeing many aspects of a production and, collectively, they’ve all shaped my perspective when it comes to how the cloud is transforming the entire creative process. Thanks to my role at AWS Thinkbox, I have a front-row seat to see why studios are looking at the cloud for content creation, how they are using the cloud, and how the cloud affects their work and client relationships.

Chris Del Conte on the set of the IMAX film Magnificent Desolation.

Why Cloud?
We’re in a climate of high content demand and massive industry flux. Studios are incentivized to find ways to take on more work, and that requires more resources — not just artists, but storage, workstations and render capacity. This need to scale often motivates studios to consider the cloud for production, or to strengthen their use of cloud in their pipelines if it’s already in play. Cloud-enabled studios are much more agile than traditional shops. When opportunities arise, they can act quickly, spinning resources up and down at a moment’s notice. I realize that for some, the concept of the cloud is still a bit nebulous, which is why finding the right cloud partner is key. Every facility is different, and part of the benefit of cloud is resource customization. When studios use predominantly physical resources, they have to make decisions about storage and render capacity, electrical and cooling infrastructure, and staff accommodations up front (and pay for them). Using the cloud allows studios to adjust easily to better accommodate whatever the current situation requires.

Artistic Impact
Advanced technology is great, but artists are by far a studio’s biggest asset; automated tools are helpful but won’t deliver those “wow moments” alone. Artists bring the creativity and talent to the table, then, in a perfect world, technology helps them realize their full potential. When artists are free of pipeline or workflow distractions, they can focus on creating. The positive effects spill over into nearly every aspect of production, which is especially true when cloud-based rendering is used. By scaling render resources via the cloud, artists aren’t limited by the capacity of their local machines. Since they don’t have to wait as long for shots to render, artists can iterate more fluidly. This boosts morale because the final results are closer to what artists envisioned, and it can improve work-life balance since artists don’t have to stick around late at night waiting for renders to finish. With faster render results, VFX supervisors also have more runway to make last-minute tweaks. Ultimately, cloud-based rendering enables a higher caliber of work and more satisfied artists.

Budget Considerations
There are compelling arguments for shifting capital expenditures to operational expenditures with the cloud. New studios get the most value out of this model since they don’t have legacy infrastructure to accommodate. Cloud-based solutions level the playing field in this respect; it’s easier for small studios and freelancers to get started because there’s no significant up-front hardware investment. This is an area where we’ve seen rapid cloud adoption. Considering how fast technology changes, it seems ill-advised to limit a new studio’s capabilities to today’s hardware when the cloud provides constant access to the latest compute resources.

When a studio has been in business for decades and might have multiple locations with varying needs, its infrastructure is typically well established. Some studios may opt to wait until their existing hardware has fully depreciated before shifting resources to the cloud, while others dive in right away, with an eye on the bigger picture. Rendering is generally a budgetary item on project bids, but with local hardware, studios are working to recoup a sunk cost. Using the cloud, render compute can be part of a bid and becomes a negotiable item. Clients can determine the delivery timeline based on render budget, and the elasticity of cloud resources allows VFX studios to pick up more work. (Even the most meticulously planned productions can run into 911 issues ahead of delivery, and cloud-enabled studios have bandwidth to be the hero when clients are in dire straits.)

Looking Ahead
When I started in VFX, giant rooms filled with racks and racks of servers and hardware were the norm, and VFX studios were largely judged by the size of their infrastructure. I’ve heard from an industry colleague about how their VFX studio’s server room was so impressive that they used to give clients tours of the space, seemingly a visual reminder of the studio’s vast compute capabilities. Today, there wouldn’t be nearly as much to view. Modern technology is more powerful and compact but still requires space, and that space has to be properly equipped with the necessary electricity and cooling. With cloud, studios don’t need switchers and physical storage to be competitive off the bat, and they experience fewer infrastructure headaches, like losing freon in the AC.

The cloud also opens up the available artist talent pool. Studios can dedicate the majority of physical space to artists as opposed to machines and even hire artists in remote locations on a per-project or long-term basis. Facilities of all sizes are beginning to recognize that becoming cloud-enabled brings a significant competitive edge, allowing them to harness the power to render almost any client request. VFX producers will also start to view facility cloud-enablement as a risk management tool that allows control of any creative changes or artistic embellishments up until delivery, with the rendering output no longer a blocker or a limited resource.

Bottom line: Cloud transforms nearly every aspect of content creation into a near-infinite resource, whether storage capacity, render power or artistic talent.


Chris Del Conte is senior EC2 business development manager at AWS Thinkbox.

Motorola’s next-gen Razr gets a campaign for today

Many of us have fond memories of our Razr flip phone. At the time, it was the latest and greatest. Then new technology came along, and the smartphone era was born. Now Motorola is asking, “Why can’t you have both?”

Available as of November 13, the new Razr fits in a palm or pocket when shut and flips open to reveal an immersive, full-length touch screen. There is a smaller display, the Quick View, when the phone is closed and the larger Flex View when it’s open — and the two displays are made to work together. Whatever you see on Quick View moves to the larger Flex View when you flip the phone open.

In order to help tell this story, Motorola called on creative shop Los York to help relaunch the Razr. Los York created the new smartphone campaign to tap into the Razr’s original DNA and launch it for today’s user.

Los York developed a 360 campaign that included films, social, digital, TV, print and billboards, with visuals in stores and on devices (wallpapers, ringtones, startup screens). Los York treated the Razr as a luxury item and a piece of art, letting the device reveal itself unencumbered by taglines and copy. The campaign showcases the Razr as a futuristic, high-end “fashion accessory” that speaks to new industry conversations, such as whether advancing tech is steering us toward a utopian or dystopian future.

The campaign features a mix of live action and CG. Los York shot on a Panavision DXL with Primo 70 lenses. CG was created using Maxon Cinema 4D with Redshift and composited in Adobe After Effects. The piece was edited in-house on Adobe Premiere.

We reached out to Los York CEO and founder Seth Epstein to find out more:

How much of this is live action versus CG?
The majority is CG, but, originally, the piece was intended to be entirely CG. Early in the creative process, we defined the world in which the new Razr existed and who would belong there. As we worked on the project, we kept feeling the pull to bring our characters to life in live action and blend the two worlds. The proper live action was envisioned after the fact, which is somewhat unusual.

What were some of the most challenging aspects of this piece?
The most challenging part was that the project happened over a period of nine months. The product release was wisely pushed back, and we continued to evolve the work over that time, which is a blessing and a curse.

How did it feel taking on a product with a lot of history and then rebranding it for the modern day?
We felt the key was to relaunch an iconic product like the Razr with an eye to the future. The trap of launching anything iconic is falling back on retro throwback references, which can come across as too obvious. We dove into the original product and campaigns to extract the brand DNA of 2004 using archetype exercises. We tapped into the attitude and voice of the Razr at that time — and used that attitude as a starting point. We also wanted to look forward, to stand three years in the future and imagine what the tone and campaign would be then. All of this is to say that we wanted the new Razr to extract the power of the past but also speak to audiences in a totally fresh way.

Check out the campaign here.

IDC goes bicoastal, adds Hollywood post facility 


New York’s International Digital Centre (IDC) has opened a new 6,800-square-foot digital post facility in Hollywood, with Rosanna Marino serving as COO. She will manage the day-to-day operations of the West Coast post house. IDC LA will focus on serving the entertainment, content creation, distribution and streaming industries.

Rosanna Marino

Marino will manage sales, marketing, engineering and the day-to-day operations for the Hollywood location, while IDC founder/CEO Marcy Gilbert will lead the company’s overall activities and New York headquarters.

IDC LA will provide finishing, color grading and editorial in Dolby Vision 4K HDR and UHD, as well as global QC. The facility features 11 bays and a DI theater, which includes Dolby 7.1 and Atmos audio mixing, dubbing and audio description. It also provides subtitle and closed-caption timed-text creation and localization, ABS scripting and translations in over 40 languages.

To complete the end-to-end chain, they provide IMF and DCP creation, supplemental and all media fulfillment processing, including audio and timed text conforms for distribution. IDC is an existing Netflix Partner Program member — NP3 in New York and NPFP for the Americas and Canada.

IDC LA occupies the top two floors and rooftop deck of a vintage 1930s brick building on Santa Monica Boulevard.

Julian Clarke on editing Terminator: Dark Fate

By Oliver Peters

Linda Hamilton’s Sarah Connor and Arnold Schwarzenegger’s T-800 are back to save humanity from a dystopian future in this latest installment of the Terminator franchise. James Cameron is also back and brings with him writing and producing credits, which is fitting — Terminator: Dark Fate is, in essence, Cameron’s sequel to Terminator 2: Judgment Day.

Julian Clarke

Tim Miller (Deadpool) is at the helm to direct the tale. It’s roughly two decades after the time of T2, and a new Rev-9 machine has been sent from an alternate future to kill Dani Ramos (Natalia Reyes), an unsuspecting auto plant worker in Mexico. But the new future’s resistance has sent back Grace (Mackenzie Davis), an enhanced super-soldier, to combat the Rev-9 and save her. They cross paths with Connor, and the story sets off on a mad dash to the finale at Hoover Dam.

Miller brought back much of his Deadpool team, including his VFX shop Blur, DP Ken Seng and editor Julian Clarke. This is also the second pairing of Miller and Clarke with Adobe. Both Deadpool and Terminator: Dark Fate were edited using Premiere Pro. In fact, Adobe was also happy to tie in with the film’s promotion through its own #CreateYourFate trailer remix challenge. Participants could re-edit their own trailer using supplied content from the film.

I recently spoke with Clarke about the challenges and fun of cutting this latest iteration of such an iconic film franchise.

Terminator: Dark Fate picks up two decades after Terminator 2, leaving out the timelines of the subsequent sequels. Was that always the plan, or did it evolve out of the process of making the film?
That had to do with the screenplay. You were written into a corner by the various sequels. We really wanted to bring Linda Hamilton’s character back. With Jim involved, we wanted to get back to first principles and have it based on Cameron’s mythology alone. To get back to the Linda/Arnold character arcs, and then add some new stuff to that.

Many fans were attracted to the franchise by Cameron’s two original Terminator films. Was there a conscious effort at integrating that nostalgia?
I come from a place of deep fandom for Terminator 2. As a teenager I had VHS copies of Aliens and Terminator 2 and watched them on repeat after school! Those films are deeply embedded in my psyche, and both of them have aged well — they still hold up. I watched the sequels, and they just didn’t feel like a Terminator film to me. So the goal was definitely to make it of the DNA of those first two movies. There’s going to be a chase. It’s going to be more grounded. It’s going to get back into the Sarah Connor character and have more heart.

This film tends to have elements of humor unlike most other action films. That must have posed a challenge to set the right tone without getting campy.
The humor thing is interesting. Terminator 2 has a lot of humor throughout. We have a little bit of humor in the first half and then more once Arnold shows up, but that’s really the way it had to be. The Dani Ramos character — who’s your entry point into the movie — is devastated when her whole family is killed. To have a lot of jokes happening would be terrible. It’s not the same in Terminator 2 because John Connor’s stepparents get very little screen time, and they don’t seem that nice. You feel bad for them, but it’s OK that you get into this funny stuff right off the bat. On this one we had to ease into the humor so you could [experience] the gravity of the situation at the start of the movie.

Did you have to do much to alter that balance during the edit?
There were one or two jokes that we nipped out, but it wasn’t like that whole first act was chock full of jokes. The tone of the first act is more like Terminator, which is more of a thriller or horror movie. Then it becomes more like T2 as the action gets bigger and the jokes come in. So the first half is like a bigger Terminator and the second half more like T2.

Deadpool, which Tim Miller also directed, used a very nonlinear story structure, balancing action, comedic moments and drama. Terminator was always designed with a linear, straightforward storyline. Right?
A movie hands you certain editing tools. Deadpool was designed to be nonlinear, with characters in different places, so there are a whole bunch of options for you. Terminator: Dark Fate is more like a road movie. The destination of certain paths along the road is predetermined. You can’t be in Texas before Mexico. So the structural options you had were where to check in with the Rev-9, as well as the inter-scene structure. Once you are in the detention center, who are you cutting to? Sarah? Dani? However, where that is placed in the movie is pretty much set. All you can do is pace it up, pace it down, adjust how to get there. There aren’t a lot of mobile pieces that can be swapped around.

When we had talked after Deadpool, you discussed how you liked the assistants to build string-outs — what some call a Kem roll. In these, similar action from every take is assembled back to back into a single sequence, in order. Did you use that same organizational method on Terminator: Dark Fate?
Sometimes we were so swamped with material that there wasn’t time to create string-outs. I still like to have those. It’s a nice way to quickly see all the pieces that cover a moment. If you are trying to find the one take or action that’s 5% better than another, then it’s good to see them all in a row, rather than trying to keep it all in your head for a five-minute take. There was a lot of footage that we shot in the action scenes, but we didn’t do 11 or 12 takes for a dialogue scene. I didn’t feel like I needed some tool to quickly navigate through the dialogue takes. We would string out the ones that were more complicated.

Depending on the directing style, a series of takes may have increasingly calibrated performances with successive takes. With other directors, each take might be a lot different than the one before and after it. What is your approach to evaluating which is the best take to use?
It’s interesting when you use the earlier takes versus the later takes and what you get from them. The later takes are usually the ones that are most directed. The actors are warmed up and most closely nail what the director has in mind. So they are strong in that regard, but sometimes they can become more self-conscious. So sometimes the first take is more thrown away and may have less power but feels more real — more off the cuff. Sometimes a delivered dialogue line feels less written, and you’ll buy it more. Other times you’ll want that more dramatic quality of the later takes. My instinct is to first use the later takes, but as you start to revise a scene, you often go back to pieces of the earlier takes to ground it a little more.

How long did the production and post take?
It took a little over 100 days of shooting with a lot of units. I work on a lot of mid-budget films, so this seemed like a really long shoot. It was a little relentless for everyone — even squeezing it into those 100 days. Shooting action with a lot of VFX is slow due to the reset time needed between takes. The ending of the movie is 30 minutes of action in a row. That’s a big job shooting all of that stuff. When they have a couple of units cranking through the dialogue scenes plus shooting action sequences — that’s when I have to work hard to keep up. Once you hit the roadblocks of shooting just those little action pieces, you get a little time to catch up.

We had the usual director’s cut period and finished by the end of this September. The original plan was to finish by the beginning of September, but we needed the time for VFX. So everything piled up with the DI and the mix in order to still hit the release date. September got a little crazy. It seems like a long time — a total of 13 or 14 months — but it still was an absolute sprint to get the movie in shape and get the VFX into the film in time. This might be normal for some of these films, but compared to the other VFX movies I’ve done, it was definitely turning things up a notch!

I imagine that there was a fair amount of previz required to lay out the action for the large VFX and CG scenes. Did you have that to work with as placeholder shots? How did you handle adjusting the cut as the interim and final shots were delivered?
Tim is big into previz with his background in VFX and animation and owning his own VFX company. We had very detailed animatics going into production. Depending on a lot of factors, you still abandon a lot of things. For example, the freeway chases are quite a bit different because when you go there and do it with real cars, they do different things. Or only part of the cars look like they are going fast enough. Those scenes became quite different than the previz.

Others are almost 100% CG, so you can drop in the previz as placeholders. Although, even in those cases, sometimes the finished shot doesn’t feel real enough. In the “cartoon” world of previz, you can do wild camera moves and say, “Wow, that seems cool!” But when you start doing it at photoreal quality, then you go, “This seems really fake.” So we tried to get ahead of that stuff and find what to do with the camera to ground it. Kind of mess it up so it’s not too dynamic and perfect.

How involved were you with shaping the music? Did you use previous Terminator films’ scores as a temp track to cut with?
I was very involved with the music production. I definitely used a lot of temp music. Some of it was ripped from old Terminator movies, but there’s only so much Terminator 2 music you can put in. Those scores used a lot of synthesizers that date the sound. I did use “Desert Suite” from Terminator 2, when Sarah is in the hotel room. I loved having a very direct homage to a Sarah Connor moment while she’s talking about John. Then I begged our composer, Tom Holkenborg (from Junkie XL), to consider doing a version of it for our movie. So it is essentially the same chord progression.

That was an interesting musical and general question about how much do you lean into the homage thing. It’s powerful when you do it, but if you do it too much, it starts to feel artificial or pandering. So I tried to hit the sweet spot so you knew you were watching a Terminator movie, but not so much that it felt like Terminator karaoke. How many times can you go da-dum-dum-da-da-dum? You have to pick your moments for those Terminator motifs. It’s diminishing returns if you do it too much.

Another inspirational moment for me was another part in Terminator 2. There’s a disturbing industrial sound for the T-1000. It sounds more like a foghorn or something in a factory rather than music, and it created this unnerving quality to the T-1000 scenes, when he’s just scoping things out. So we came up with a modern-day electronic equivalent for the Rev-9 character, and that was very potent.

Was James Cameron involved much in the post production?
He’s quite busy with his Avatar movies. Some of the time he was in New Zealand, some of the time he was in Los Angeles. Depending on where he was and where we were in the process, we would hit milestones, like screenings or the first cut. We would send him versions and download a bunch of his thoughts.

Editing is very much a part of his wheelhouse. Unlike many other directors, he really thinks about this shot, then that shot, then the next shot. His mind really works that way. Sometimes he would give us pretty specific, dialed-in notes on things. Sometimes it would just be bigger suggestions, like, “Maybe the action cutting pattern could be more like this …” So we’d get his thoughts — and, of course, he’s Jim Cameron, and he knows the business and the Terminator franchise — so I listened pretty carefully to that input.

This is the second film that you’ve cut with Premiere Pro. Deadpool was first, and there were challenges using it on such a complex project. What was the experience like this time around?
Whenever you set out to use a new workflow, there are going to be growing pains — not to say Premiere is new, because it’s been around a long time and has millions of users, but it’s unusual to use it on large VFX movies, for specific reasons.

L-R: Matthew Carson and Julian Clarke

On Deadpool, that led to certain challenges, and that’s just what happens when you try to do something new. For instance, we had to split the movie into separate projects for each reel instead of working in one large project. Even so, the size of our project files made it tough. They were so full of media that they would take five minutes to open. Nevertheless, we made it work, and there are lots of benefits to using Adobe over other applications.

In comparison, the interface to Avid Media Composer looks like it was designed 20 years ago, but they have multi-user collaboration nailed, and I love the trim tool. Yet, some things are old and creaky. Adobe’s not that at all. It’s nice and elegant in terms of the actual editing process. We got through it and sat down with Adobe to point out things that needed work, and they worked on them. When we started up Terminator, they had a whole new build for us. Project files now opened in 15 seconds. They are about halfway there in terms of multi-user editing. Now everyone can go into a big, shared project, and you can move bins back and forth. Although, only one user at a time has write access to the master project.

This is not simple software they are writing. Adobe is putting a lot of work into making it a more fitting tool for this type of movie. Even though this film was exponentially larger than Deadpool, from the Adobe side it was a smoother process. Props to them for doing that! The cool part about pioneering this stuff is the amount of work that Adobe is on board to do. They’ll have people work on stuff that is helpful to us, so we get to participate a little in how Adobe’s software gets made.

With two large Premiere Pro projects under your belt, what sort of new features would you like to see Adobe add to the application to make it even better for feature film editors?
They’ve built out the software from being a single-user application to being multi-user software, but the inherent software at the base level is still single-user. Sometimes your render files get unlinked when you go back and forth between multiple users. There’s probably stuff where they have to dig deep into the code to make those minor annoyances go away. Other items I’d like to see — let’s not use third-party software to send change lists to the mix stage.

I know Premiere Pro integrates beautifully with After Effects, but for me, After Effects is this precise tool for executing shots. I don’t want a fine tool for compositing — I want to work in broad strokes and then have someone come back and clean it up. I would love to have a tracking tool to composite two shots together for a seamless, split screen of two combined takes — features like that.

The After Effects integration and the color correction are awesome features for a single user to execute the film, but I don’t have the time to be the guy to execute the film at that high level. I just have to keep going. I want to be able to do a fast and dirty version so I know it’s not a terrible idea, and then turn to someone else and say, “OK, make that good.” After Effects is cool, but it’s more for VFX editors or single users who are trying to make a film on their own.

After all of these action films, are you ready to do a different type of film, like a period drama?
Funny you should say that. After Deadpool I worked on The Handmaid’s Tale pilot, and it was exactly that. I was working on this beautifully acted, elegant project with tons of women characters and almost everything was done in-camera. It was a lot of parlor room drama and power dynamics. And that was wonderful to work on after all of this VFX/action stuff. Periodically it’s nice to flex a different creative muscle.

It’s not that I only work on science-fiction/VFX projects — which I love — but, in part, people start associating you with a certain genre, and then that becomes an easy thing to pursue and get work for.

Much like acting, if you want to be known for doing a lot of different things, you have to actively pursue it. It’s easy to go where momentum will take you. If you want to be the editor who can cut any genre, you have to make it a mission to pursue those projects that will keep your resume looking diverse. For a brief moment after Deadpool, I might have been able to pivot to a comedy career (laughs). That was a real hybrid, so it was challenging to thread the needle of the different tones of the film and make it feel like one piece.

Any final thoughts on the challenges of editing Terminator: Dark Fate?
The biggest challenge of the film was that, in a way, the film was an ensemble with the Dani character, the Grace character, the Sarah character and Arnold’s character — the T-800. All of these characters are protagonists who have their individual arcs. Feeling that you were adequately servicing those arcs without grinding the movie to a halt or not touching base with a character often enough — finding out how to dial that in was the major challenge of the movie, plus the scale of the VFX and finessing all the action scenes. I learned a lot.


Oliver Peters is an experienced film and commercial editor/colorist. In addition, he regularly interviews editors for trade publications. He may be contacted through his website at oliverpeters.com

Carbon New York grows with three industry vets

Carbon in New York has grown with two senior hires — executive producer Nick Haynes and head of CG Frank Grecco — and the relocation of existing ECD Liam Chapple, who joins from the Chicago office.

Chapple joined Carbon in 2016, moving from Mainframe in London to open Carbon’s Chicago facility.  He brought in clients such as Porsche, Lululemon, Jeep, McDonald’s, and Facebook. “I’ve always looked to the studios, designers and directors in New York as the high bar, and now I welcome the opportunity to pitch against them. There is an amazing pool of talent in New York, and the city’s energy is a magnet for artists and creatives of all ilk. I can’t wait to dive into this and look forward to expanding upon our amazing team of artists and really making an impression in such a competitive and creative market.”

Chapple recently wrapped direction and VFX on films for Teflon and American Express (Ogilvy) and multiple live-action projects for Lululemon. The most recent shoot, conceived and directed by Chapple, was a series of eight live-action films focusing on Lululemon’s brand ambassadors and its new flagship store in Chicago.

Haynes joins Carbon from his former role as EP of MPC, bringing over 20 years of experience earned at The Mill, MPC and Absolute. Haynes recently wrapped the launch film for the Google Pixel phone and the Chromebook, as well as an epic Middle Earth: Shadow of War Monolith Games trailer combining photo-real CGI elements with live-action shot on the frozen Black Sea in Ukraine.  “We want to be there at the inception of the creative and help steer it — ideally, lead it — and be there the whole way through the process, from concept and shoot to delivery. Over the years, whether working for the world’s most creative agencies or directly with prestigious clients like Google, Guinness and IBM, I aim to be as close to the project as possible from the outset, allowing my team to add genuine value that will garner the best result for everyone involved.”

Grecco joins Carbon from Method Studios, where he most recently led projects for Google, Target, Microsoft, Netflix and Marvel’s Deadpool 2.  With a wide range of experience from Emmy-nominated television title sequences to feature films and Super Bowl commercials, Grecco looks forward to helping Carbon continue to push its visuals beyond the high bar that has already been set.

In addition to New York and Chicago, Carbon has a studio in Los Angeles.

Main Image: (L-R) Frank Grecco, Liam Chapple, Nick Haynes

Review: Nugen Audio’s VisLM2 loudness meter plugin

By Ron DiCesare

In 2010, President Obama signed the CALM Act (Commercial Advertisement Loudness Mitigation), which regulates the audio levels of TV commercials. At the time, many “laypeople” complained to me about how commercials were often so much louder than the TV programs. Over the past 10 years, I have seen the rise of audio meter plugins built to meet the requirements of the CALM Act, which has reduced this complaint dramatically.

A lot has changed since the 2010 FCC mandate of -24LKFS +/-2dB. LKFS was the scale name at the time, but we will get into this more later. Today, we have countless viewing options, such as cable networks, a large variety of streaming services, the internet and movie theaters utilizing 7.1 or Dolby Atmos. Add to that new metering standards such as True Peak, and you have the likelihood of confusing and possibly even conflicting audio standards.

Nugen Audio has updated its VisLM to address today's complex world of audio levels and audio metering. The VisLM2 is a Mac and Windows plugin compatible with Avid Pro Tools and any DAW that uses RTAS, AU, AAX, VST or VST3. It can also be installed as a standalone application for Windows and macOS. With its many presets, its Loudness History Mode and countless parameters to view and customize, the VisLM2 helps an audio mixer see when a program is in or out of audio level spec.

VisLM2

The Basics
The first thing I needed to see was how it handled the 2010 audio standard of -24LKFS, now known as LUFS. LKFS (Loudness K-weighted relative to Full Scale) was the term used in the United States. LUFS (Loudness Units relative to Full Scale) was the term used in Europe. The difference is in name only, and the audio level measurement is identical. Now all audio metering plugins use LUFS, including the VisLM2.

I work mostly on TV commercials, so it was pretty easy for me to fire up the VisLM2 and get my LUFS reading right away. Accessing the US audio standard dictated by the CALM Act is simple if you know the preset name for it: ITU-R BS.1770-4. I know, not a name that rolls off the tongue, but it is the current spec. The VisLM2 has four presets of ITU-R BS.1770 — revision 01, 02, 03 and the current revision 04. Accessing the presets is easy once you realize that they are not in the preset section of the plugin, as one might think. Presets are located in the options section of the meter.

While this was my first time using anything from Nugen Audio, I was immediately able to run my 30-second TV commercial and get my LUFS reading. The preset gave me a few important default readings to view while mixing. There are three numeric displays that show Short-Term, Loudness Range and Integrated, which is how the average loudness is determined for most audio level specs. There are two meters that show Momentary and Short-Term levels, which are helpful when trying to pinpoint any section that could be putting your mix out of audio spec. The difference is that Momentary is used for short bursts, such as an impact or gunshot, while Short-Term reflects the last three-second “window” of your mix. Knowing the difference between the two readings is important. Whether you work on short- or long-format mixes, knowing how to interpret both Momentary and Short-Term readings is very helpful in determining where trouble spots might be.
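For readers who like to see the math, here is a minimal sketch of what those two measurement windows do with the same audio. It is not Nugen's implementation — a real BS.1770 meter adds K-weighting filters, channel weights and overlapping blocks, all skipped here — and the function name and test signal are invented for illustration. The point is simply that a short burst dominates a 400ms Momentary window while being diluted across a three-second Short-Term window.

import numpy as np

# Minimal sketch (not Nugen's code): unweighted loudness of the most recent
# window of a mono signal. Real BS.1770 metering adds K-weighting and gating.
def windowed_loudness(samples, sample_rate, window_seconds):
    n = int(sample_rate * window_seconds)
    tail = samples[-n:]
    mean_square = np.mean(np.square(tail)) + 1e-12   # avoid log(0) on silence
    return -0.691 + 10.0 * np.log10(mean_square)     # BS.1770-style offset

sr = 48000
t = np.arange(sr * 5) / sr
mix = 0.1 * np.sin(2 * np.pi * 1000 * t)   # five seconds of steady tone
mix[-sr // 5:] *= 8.0                      # a loud 200ms burst at the end

print("Momentary  (0.4s):", round(windowed_loudness(mix, sr, 0.4), 1), "LUFS")
print("Short-Term (3.0s):", round(windowed_loudness(mix, sr, 3.0), 1), "LUFS")

Run it and the Momentary value jumps well above the Short-Term value, which is exactly the behavior the two meters are there to expose.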

Have We Outgrown LUFS?
Most, if not all, deliverables now specify a True Peak reading. True Peak has slowly but firmly crept its way into audio spec and it can be confusing. For US TV broadcast, True Peak spec can range as high as -2dBTP and as low as -6dBTP, but I have seen it spec out even lower at -8dBTP for some of my clients. That means a TV network can reject or “bounce back” any TV programming or commercial that exceeds its LUFS spec, its True Peak spec or both.

VisLM2

In most cases, LUFS and True Peak readings work well together. I find that -24LUFS Integrated gives a mixer plenty of headroom for staying below the True Peak maximum. However, a few factors can work against you. The higher the LUFS Integrated spec (say, for an internet project) and/or the lower the True Peak spec (say, for a major TV network), the more difficult you might find it to manage both readings. For anyone like me — who often has a client watching over my shoulder telling me to make the booms and impacts louder — you always want to make sure you are not going to have a problem keeping your mix within spec for both measurements. This is where the VisLM2 can help you work within both True Peak and LUFS standards simultaneously.

To do that using the VisLM2, let’s first understand the difference between True Peak and LUFS. Integrated LUFS is an average reading over the duration of the program material. Whether the program material is 15 seconds or two hours long, hitting -24LUFS Integrated, for example, is always the average reading over time. That means a 10-second loud segment in a two-hour program could be much louder than a 10-second loud segment in a 15-second commercial. That same loud 10 seconds can practically be averaged out of existence during a two-hour period with LUFS Integrated. Flawed logic? Possibly. Is that why TV networks are requiring True Peak? Well, maybe yes, maybe no.

True Peak is forever. Once the highest True Peak is detected, it remains the final True Peak reading for the entire length of the program material. That means a loud segment in the last five minutes of a two-hour program will dictate the True Peak reading of the entire mix. Let’s say you have a two-hour show with dialogue only. In the final minute of the show, a single loud gunshot is heard. That one-second gunshot sets the True Peak level for the entire two hours of the program. Flawed logic? I can see how it could be. Spotify’s recommended levels are -14LUFS and -2dBTP, which gives you a much smaller range for dynamics than, say, network TV.
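To put rough numbers on that gunshot example, here is a minimal sketch comparing an energy-averaged Integrated reading with a max-hold peak. It is deliberately simplified — no K-weighting, no gating, and a plain sample peak rather than the 4x-oversampled True Peak a real meter reports — and the per-second values are assumptions chosen only to illustrate the point: the same burst barely moves a two-hour average but dominates a 15-second spot, while the peak reading is identical for both.

import numpy as np

# Illustrative only: per-second mean-square values for a quiet dialogue bed
# and one loud gunshot second (assumed numbers, not measurements).
quiet_ms, loud_ms = 0.002, 0.5
peak_sample = 0.99                               # the gunshot's highest sample

def integrated_loudness(mean_squares):
    # BS.1770-style: average the energy (mean square), then convert to LUFS
    return -0.691 + 10.0 * np.log10(np.mean(mean_squares))

show = [quiet_ms] * (2 * 3600 - 1) + [loud_ms]   # two-hour program
spot = [quiet_ms] * 14 + [loud_ms]               # 15-second commercial
peak_db = 20.0 * np.log10(peak_sample)           # same peak for both mixes

print("Two-hour show :", round(integrated_loudness(show), 1), "LUFS integrated")
print("15-second spot:", round(integrated_loudness(spot), 1), "LUFS integrated")
print("Peak (both)   :", round(peak_db, 2), "dBFS")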

VisLM2

Here’s where the VisLM2 really excels. For those new to Nugen Audio, the clear standout for me is the large, detailed history graph display known as Loudness History Mode. It is a realtime, scrolling display of your mix levels, and what it shows is up to you. There are multiple tabs to choose from, such as Integrated, True Peak, Short-Term, Momentary, Variance, Flags and Alerts, to name a few. Selecting any of these tabs shows or hides the corresponding line along the timeline of the history graph as the audio plays.

When any of the VisLM2’s presets is selected, a whole host of parameters comes along with it. All are customizable, but I like to start with the defaults. My thinking is that the default values were chosen for a reason, and I always want to know what that reason is before I start customizing anything.

For example, the target for the ITU-R BS.1770-4 preset is -24LUFS Integrated and -2dBTP. By default, both will show on the history graph. The history graph will also show default over and under audio levels based on the alerts you have selected, in the form of min and max LUFS. But, much to my surprise, the default alert max was not what I expected. It wasn’t -24LUFS, which seemed to be the logical choice to me. It was 4dB higher at -20LUFS, which is 2dB above the +/-2dB tolerance. That’s because these min and max alert values are not for Integrated or average loudness, as I had originally thought. These values are for Short-Term loudness. The history graph lines, with their corresponding min and max alerts, are a visual cue to let the mixer know if he or she is in the right ballpark. Now, this is not a hard and fast rule. Simply put, if your Short-Term value stays somewhere between -20 and -28LUFS throughout most of a project, then you have a good chance of meeting your target of -24LUFS for the overall Integrated measurement. That is why the value range is often set up as a “green” zone on the loudness display.

VisLM2

The folks at Nugen point out that it isn’t practically possible to set up an alert or “red zone” for integrated loudness because this value is measured over the entire program. For that, you have to simply view the main reading of your Integrated loudness. Even so, I will know if I am getting there or not by viewing my history graph while working. Compare that to the impractical approach of running the entire mix before having any idea of where you are going to net out. The VisLM2 max and min alerts help keep you working within audio spec right from the start.

Another nice feature about the large history graph window is the Macro tab. Selecting the Macro feature will give you the ability to move back and forth anywhere along the duration of your mix displayed in the Loudness History Mode. That way you can check for problem spots long after they have happened. Easily accessing any part of the audio level display within the history graph is essential. Say you have a trouble spot somewhere within a 30-minute program; select the Macro feature and scroll through the history graph to spot any overages. If an overage turns out to be at, say, eight minutes in, then cue up your DAW to that same eight-minute mark to address changes in your mix.

Another helpful feature designed for this same purpose is the use of flags. Flags can be added anywhere in your history graph while the audio is running. Again, this can be helpful for spotting, or flagging, any problem spots. For example, you can flag a loud action scene in an otherwise quiet dialogue-driven program that you know will be tricky to balance properly. Once flagged, you will have the ability to quickly cue up your history graph to work with that section. Both the Macro and Flag functions are aided by tape-machine-like controls for cueing up the Loudness History Mode display to any problem spots you might want to view.

Presets, Presets, Presets
The VisLM2 comes with 34 presets for selecting the loudness spec you are working to. Here is where I need to rely on the knowledge of Nugen Audio to get me going in the right direction. I do not know all of the specs for all of the networks, formats and countries, and I would venture a guess that very few audio mixers do either. So I was not surprised to see many presets that I was not familiar with. Common presets, in addition to ITU-R BS.1770, are six versions of EBU R128 for European broadcast and two Netflix presets (stereo and 5.1), which we will dive into later on. The manual does its best to describe some of the presets, but it falls short. The descriptions lack any kind of real-world language, offering only techno-garble. I have no idea what AGCOM 219/9/CSP LU is and, after reading the manual, I still don’t! I hope a better source of what’s what regarding each preset will become available sometime soon.

MasterCheck

But why no preset for an Internet audio level spec? Could mixing for AGCOM 219/9/CSP LU be even more popular than mixing for the Internet? Unlikely. So let’s follow Nugen’s logic here. I have always been in the -18LUFS range for Internet-only mixes. However, ask 10 different mixers and you will likely get 10 different answers. That is likely why there is not an Internet preset included with the VisLM2, as I had hoped there would be. Even so, Nugen offers its MasterCheck plugin for platforms such as Spotify and YouTube. MasterCheck is something I have been hoping for, and it would be the perfect companion to the VisLM2.

The folks at Nugen have pointed out a very important difference between broadcast TV and many Internet platforms: Most of the streaming services (YouTube, Spotify, Tidal, Apple Music, etc.) will perform their own loudness normalization after the audio is submitted. They do not expect audio engineers to mix to their standards. In contrast, Netflix and most TV networks will expect mixers to submit audio that already meets their loudness standards. VisLM2 is aimed more toward engineers who are mixing for platforms in the second category.

Streaming Services… the Wild West?
Streaming services are the new frontier, at least to me. I would call them the Wild West compared to broadcast TV. With so many streaming services popping up, particularly “off-brand” services, I have to ask whether we have gone back in time to the loudness wars of the late 2000s. Many streaming services do have an audio level spec, but I don’t know of any consensus among them the way there is with network TV.

That aside, one of the most popular streaming services is Netflix, so let’s look at the VisLM2’s Netflix preset in detail. Netflix is slightly different from broadcast TV because its spec is based on dialogue. In addition to -2dBTP, Netflix has an LUFS spec of -27 +/-2dB Integrated Dialogue. That means the dialogue level is averaged out over time, rather than using all program material, like music and sound effects. Remember my gunshot example? Netflix’s spec is more forgiving of that mixing scenario. This can lead to more dynamic or more cinematic mixes, which I can see as a nice advantage when mixing.

Netflix currently supports Dolby Atmos on selected titles, but word on the street is that Netflix deliverables will require Atmos for all titles. I have not confirmed this, but I can only hope it will be backward-compatible for non-Atmos mixes. I was lucky enough to speak directly with Tomlinson Holman of THX fame (Tomlinson Holman eXperiment) about his 10.2 format, which included height channels long before Atmos was available. In the case of 10.2, Holman said it was possible to deliver a single mono channel audio mix in 10.2 by simply leaving all other channels empty. I can only hope the same holds for Netflix’s Atmos deliverables, so you can simply add or subtract the number of channels needed when outputting your final mix. Regardless, we can surely look to Nugen Audio to keep its Netflix preset in the VisLM2 updated should this become a reality.

True Peak within VisLM2

VisLM Updates
For anyone familiar with the original version of the VisLM, there are three updates that are worth looking at. First is the ability to resize and select what shows in the display. That helps with keeping the window active on your screen as you are working. It can be a small window so it doesn’t interfere with your other operations. Or you can choose to show only one value, such as Integrated, to keep things really small. On the flip side, you can expand the display to fill the screen when you really need to get the microscope out. This is very helpful with the history graph for spotting any trouble spots. The detail displayed in the Loudness History Mode is by far the most helpful thing I have experienced using the VisLM2.

Next is the ability to display both LUFS and True Peak meters simultaneously. Before, it was one or the other and now it is both. Simply select the + icon between the two meters. With the importance of True Peak, having that value visible at all times is extremely valuable.

Third is the ability to “punch in,” as I call it, to update your Integrated reading while you are working. Let’s say you have your overall Integrated reading, and you see one section that is making you go over. You can adjust your levels on your DAW as you normally would and then simply “punch in” that one section to calculate the new Integrated reading. Imagine how much time you save by not having to run a one-hour show every time you want to update your Integrated reading. In fact, this “punch in” feature is actually the VisLM2 constantly updating itself. This is just another example of how the VisLM2 helps keep you working within audio spec right from the start.
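The math behind that kind of running update is worth a quick sketch. Because an Integrated reading is an energy average over many measurement blocks, re-measuring only the blocks you changed and re-averaging yields the new overall number — there is no need to replay the untouched hour. This is only the general idea, not a description of Nugen's internals; the block values below are invented, and gating and K-weighting are again omitted.

import numpy as np

def integrated(blocks):
    # energy-average the per-block mean squares, then convert to LUFS
    return -0.691 + 10.0 * np.log10(np.mean(blocks))

blocks = np.full(3600, 0.002)      # one hour of roughly -27.7 LUFS material
blocks[1200:1320] = 0.05           # a two-minute section mixed far too hot

print("Before the fix:", round(integrated(blocks), 1), "LUFS")

blocks[1200:1320] = 0.005          # re-measure only the section you re-mixed
print("After the fix :", round(integrated(blocks), 1), "LUFS")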

Multi-Channel Audio Mixing
The one area I can’t test the VisLM2 on is multi-channel audio, such as 5.1 and Dolby Atmos. I work mostly on TV commercials, Internet programming, jazz records and the occasional indie film. So my world is all good old-fashioned stereo. Even so, the VisLM2 can measure 5.1, 7.1, and 7.1.2, which is the channel count for Dolby Atmos bed tracks. For anyone who works in multi-channel audio, the VisLM2 will measure and display audio levels just as I have described it working in stereo.

Summing Up
With the changing landscape of TV networks, streaming services and music-only platforms, the resulting deliverables have opened the floodgates of audio specs like never before. Long gone are the days of -24LUFS being the one and only number you need to know.

To help manage today’s complicated and varied deliverables, along with the audio specs that go with them, Nugen Audio’s VisLM2 absolutely delivers.


Ron DiCesare is a NYC-based freelance audio mixer and sound designer. His work can be heard on national TV campaigns, Vice and the Viceland TV network. He is also featured in the doc “Sing You A Brand New Song” talking about the making of Coleman Mellett’s record album, “Life Goes On.”

Final Cut ups Zoe Schack to editor

Final Cut in LA has promoted Zoe Schack to editor after three years at the studio as an assistant editor. While at Final Cut, Schack has been mentored by Final Cut editors Crispin Struthers, Joe Guest, Jeff Buchanan and Rick Russell.

Schack has edited branded content and commercials for Audi, Infiniti, Doritos and Dollar Shave Club as well as music videos for Swae Lee and Whitney Woerz. She has also worked with a number of high-profile directors, including Dougal Wilson, Ava DuVernay, Michel Gondry, Craig Gillespie and Steve Ayson.

Originally from a small town north of New York, Schack studied film at Rhode Island School of Design and NYU’s Tisch School of the Arts. Her love for documentaries led her to intern with renowned filmmaker Albert Maysles and to produce the Bicycle Film Festival in Portland, Oregon. She edited several short documentaries and a pilot series that were featured in many film festivals.

“It’s been amazing watching Zoe’s growth the last few years,” says Final Cut executive producer Suzy Ramirez. “She’s so meticulous, always doing a deep dive into the footage. Clients love working with her because she makes the process fun. She’s grown here at Final Cut so much already under the guidance of our editors, and her craft keeps evolving. I’m excited to see what’s ahead.”

Behind the Title: Compadre’s Jessica Garcia-Scharer

NAME: Jessica Garcia-Scharer

COMPANY: Culver City’s Compadre

CAN YOU DESCRIBE YOUR COMPANY?
We are a creative marketing agency. We make strategically informed branding and creative — and then help to get it out to the world in memorable ways. And we use strategy, design, planning and technology to do it.

WHAT’S YOUR JOB TITLE?
Head of Production

WHAT DOES THAT ENTAIL?
Head of production means different things at different companies. I’m the three-ring binder with the special zip pack that helps to hold everything together in an organized manner. Everything from hearing and understanding client needs, creating proposals, managing budget projections/actuals/contracts, getting in the right talent for the job, all the way to making sure that everyone in-house is happy, balanced and supported.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
Probably the proposals and planning charts. I’m also “Snack Mom!”

WHAT’S YOUR FAVORITE PART OF THE JOB?
Snack Mom. Ha! My favorite part of the job is being part of a team and bringing something to the table that is useful. I like when my team feels like everything is being handled.

WHAT’S YOUR LEAST FAVORITE?
If and when there isn’t enough quiet time to get into the paperwork zone.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
At work: When I get in early and no one is in yet. I get the most work done during that time. Also lunch. I try to make it a point now to get out to lunch and take co-workers with me. It’s nice to be able to break up the day and be regular people for an hour.

Non-work-related: When the sun is just coming up and it’s still a little brisk outside, but the air is fresh and the birds are starting to wake up and chirp. Also, when the sun is starting to descend and it’s still a little warm as the cool ocean breeze starts to come in. The birds are starting to wind down after a hard day of being a bird, and families are coming together to make dinner and talk about their days (well… on the weekend anyway). I am obviously very lucky, and I know that. There are many that don’t get to experience that, and I think of them during that time as well.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
It depends on if I were independently wealthy or not, and where I had been previously. Before going to college, I wanted to be a VFX make-up artist, a marine biologist working with dolphins or a park ranger in Yosemite.

If I were independently wealthy, I would complete a painting collection and put up an art show, start a female/those-who-identify-as-female agency, open up a vegan restaurant and be a hardcore animal activist.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I wish people thought about their careers as more than one path. I have many paths, and I don’t think I’m done just yet. You never know where life will take you from one day to the next, so it’s important to live for today.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
CNN 2020 Election promo package, ESPN 40th Anniversary and another that is pretty neat and a big puzzle to figure out, but I can’t tell you just yet…

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I technically work on everything, so they’re all my babies, and I’m proud of all of them for different reasons. Most, if not all, of the projects that we work on start out with a complex puzzle to solve. I work with the team to figure it out and present the solution to the client. That is where I thrive, and those documents are what I’m most proud of as far as my own personal accomplishments and physical contributions to the company.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Water filtration systems, giant greenhouses and air conditioning will be vital because of global warming.

For work, it would be really hard to function without my mobile phone, laptop and headphones.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Mainly Instagram and Facebook. Facebook is where I learn about events/concerts/protests coming up, keep tabs on people’s birthdays, weddings, babies and share my thoughts on factory farming. Instagram is mindless eye candy for the most part, but I do love how close I feel to certain communities there.

DO YOU LISTEN TO MUSIC WHILE YOU WORK? CARE TO SHARE YOUR FAVORITE MUSIC TO WORK TO?
Usually binaural beats (for focus and clarity) and new age relaxation; but if I’m organizing and cleaning up, then The Cure, Bowie, Duran Duran, Radiohead and Bel Canto.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
As I mentioned before, it’s important to take a lunch break and bond with co-workers and old friends. Taking a step away and remembering that I am a human being living a life that needs to be enjoyed is key to a happy work-life balance. We aren’t saving lives here; we are making fun things for fun people, so as long as you have the systems and resources in place, the stress is the excitement of making things that exceed expectations.

But if I do let things get to me, the best de-stressor is getting home and into my PJs and snuggling up with my family and animals… drowning myself in the escape of love. Oh, and dark chocolate (vegan, of course).

Report: Apple intros 16-inch MacBook Pro, previews new Mac Pro, display

By Pat Birk

At a New York City press event, Apple announced that it will begin shipping a new 16-inch MacBook Pro this week. This new offering will feature an updated 16-inch Retina display with a pixel density of 226ppi; 9th-generation Intel processors featuring up to 8 cores and up to 64GB of DDR4 memory; vastly expanded SSDs ranging from 512GB to a whopping 8TB; upgraded discrete AMD Radeon Pro 5000M series graphics; completely redesigned speakers and internal microphones; and an overhauled keyboard dubbed, of course, the “Magic Keyboard.”

The MacBook Pro’s new Magic Keyboard.

These MacBooks also feature a new cooling system, with wider vents and a 35 percent larger heatsink, along with a 100-watt-hour battery (which the company stressed is the maximum capacity allowed by the Federal Aviation Administration), contributing to an additional hour of battery life while web browsing or playing back video.

I had the opportunity to do a brief hands-on demo, and for the first time since Apple introduced the Touch Bar to the MacBook Pro, I have found myself wanting a new Mac. The keyboard felt great, offering far more give and a far less plastic-y click than the divisive Butterfly keyboard. The Mac team has reintroduced a physical escape key, along with an inverted T-style cluster of arrow keys, both features that will be helpful for coders. Apple also previewed its upcoming Mac Pro tower and Pro Display XDR.

Sound Offerings
As an audio guy, I was naturally drawn to the workstation’s sound offerings and was happy when the company dedicated a good portion of the presentation to touting its enhanced speaker and microphone arrays. The six-speaker system features dual-opposed woofer drivers, which offer enhanced bass while canceling out problematic distortion-causing frequencies. When compared side by side with high-end offerings from other manufacturers, the MacBook offered a far more complete sonic experience than the competition, and I believe Apple is right in saying that they’ve achieved an extra half octave of bass range with this revision.

The all-new MacBook Pro features a 16-inch Retina display.

It’s really impressive for a laptop, but I honestly don’t see it replacing a good pair of headphones or a half-decent Bluetooth speaker for most users. I can see it being useful in the occasional pitch meeting, or for showing an idea or video to a friend when there’s no other option, but I feel it’s more of a nice touch than a major selling point.

The three-microphone array was impressive as well, and I can see it offering legitimate functionality for working creatives. When A/B’d with competing internal microphones, there was really no comparison. The MacBook’s mics deliver crisp, clean recordings with very little hiss and no noticeable digital artifacting, both of which were clearly present in competing PCs. I could realistically see this working for a small podcast or for on-the-go musicians recording demos. We live in a world where Steve Lacy recorded and produced a beat for Kendrick Lamar on an iPhone. When Apple claims that the signal-to-noise ratio rivals or even surpasses that of USB mics like the Blue Yeti, they may very well be right. However, in an A/B comparison, I found the Blue to have more body and room ambience, while the MacBook sounded a bit thin and sterile.

Demos
The rest of the demo featured creative professionals — coders, animators, colorists and composers — pushing spec’d-out Mac Pros and MacBook Pros to their limits. A coder demonstrated testing a program in realtime on eight emulations of iOS and iPadOS at once.

A video editor demonstrated the new Mac Pro (not the MacBook) running a project with six 8K video sources playing at once through an animation layer, with no rendering at all. We were also treated to a brief Blackmagic Da Vinci Resolve demo on a Pro Display XDR. A VFX artist demonstrated making realtime lighting changes to an animation comprised of eight million polygons on the Mac Pro, again with no need for rendering.

The Mac Pro and Pro Display XDR — which Apple calls the world’s best pro display — will be available in December.

Composers showed us a Logic Pro X session running a track produced for Lizzo by Oak Felder. The song had over 200 tracks, replete with plugins and instruments — Felder was able to accomplish this on a MacBook Pro. Also on the MacBook, they had a session loaded running multiple instances of MIDI instruments using sample libraries from Cinesamples, Spitfire Audio and Orchestral Tools. The result could easily have fooled me into believing it had been recorded with a live orchestra, and the fact that all of these massive, processor-intensive sample libraries could play at the same time without making the machine break a sweat had me floored.

Summing Up
Apple has delivered a very solid upgrade in the new 16-inch MacBook Pro, especially as a replacement for the earlier iterations of the Touch Bar MacBook Pros. They have begun taking orders, with prices starting at $2,399 for the 2.6GHz 6-core model, and $2,799 for the 2.3GHz 8-core model.
As for the new Mac Pro and Pro Display XDR, they’re coming in December, but company representatives remained tight-lipped on an exact release date.


Pat Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City.

Todd Phillips talks directing Warner Bros.’ Joker

By Iain Blair

Filmmaker Todd Phillips began his career in comedy, most notably with the blockbuster franchise The Hangover, which racked up $1.4 billion at the box office globally. He then leveraged that clout and left his comedy comfort zone to make the genre-defying War Dogs.

Todd Phillips directing Joaquin Phoenix

Joker puts comedy even further in his rearview mirror. This bleak, intense, disturbing and chilling tragedy has earned an astounding $1 billion worldwide since its release, making it the seventh-highest-grossing film of 2019 and the highest-grossing R-rated film of all time. Not surprisingly, Joker is also generating a lot of Oscar and awards buzz.

Directed, co-written and produced by Phillips, Joker is the filmmaker’s original vision of the infamous DC villain — an origin story infused with the character’s more traditional mythologies. Phillips’ exploration of Arthur Fleck, who is portrayed — and fully inhabited — by three-time Oscar-nominee Joaquin Phoenix, is of a man struggling to find his way in Gotham’s fractured society. Longing for any light to shine on him, he tries his hand as a stand-up comic but finds the joke always seems to be on him. Caught in a cyclical existence between apathy, cruelty and, ultimately, betrayal, Arthur makes one bad decision after another that brings about a chain reaction of escalating events in this powerful, allegorical character study.

Phoenix is joined by Oscar-winner Robert De Niro, who plays TV host Murray Franklin, and a cast that includes Zazie Beetz, Frances Conroy, Brett Cullen, Marc Maron, Josh Pais and Leigh Gill.

Behind the scenes, Phillips was joined by a couple of frequent collaborators in DP Lawrence Sher, ASC, and editor Jeff Groth. Also on the journey were Oscar-nominated co-writer Scott Silver, production designer Mark Friedberg and Oscar-winning costume designer Mark Bridges. Hildur Guðnadóttir provided the music.

Joker was produced by Phillips and actor/director Bradley Cooper, under their Joint Effort banner, and Emma Tillinger Koskoff.

I recently talked to Phillips, whose credits include Borat (for which he earned an Oscar nod for Best Adapted Screenplay), Due Date, Road Trip and Old School, about making the film, his love of editing and post.

You co-wrote this very complex, timely portrait of a man and a city. Was that the appeal for you?
Absolutely, 100 percent. While it takes place in the late ‘70s and early ‘80s, and we wrote it in 2016, it was very much about making a movie that deals with issues happening right now. Movies are often mirrors of society, and I feel this is exactly that.

Do you think that’s why so many people have been offended by it?
I do. It’s really resonated with audiences. I know it’s also been somewhat divisive, and a lot of people were saying, “You can’t make a movie about a guy like this — it’s irresponsible.” But do we want to pretend that these people don’t exist? When you hold up a mirror to society, people don’t always like what they see.

Especially when we don’t look so good.
(Laughs) Exactly.

This is a million miles away from the usual comic-book character and cartoon violence. What sort of film did you set out to make?
We set out to make a tragedy, which isn’t your usual Hollywood approach these days, for sure.

It’s hard to picture any other actor pulling this off. What did Joaquin bring to the role?
When Scott and I wrote it, we had him in mind. I had a picture of him as my screensaver on my laptop — and he still is. And then when I pitched this, it was with him in mind. But I didn’t really know him personally, even though we created the character “in his voice.” Everything we wrote, I imagined him saying. So he was really in the DNA of the whole film as we wrote it, and he brought the vulnerability and intensity needed.

You’d assume that he’d jump at this role, but I heard it wasn’t so simple getting him.
You’re right. Getting him was a bit of a thing because it wasn’t something he was looking to do — to be in a movie set in the comic book world. But we spent a lot of time talking about it, what it would be, what it means and what it says about society today and the lack of empathy and compassion that we have now. He really connected with those themes.

Now, looking back, it seems like an obvious thing for him to do, but it’s hard for actors because the business has changed so much and there’s so many of these superhero movies and comic book films now. Doing them is a big thing for an actor, because then you’re in “that group,” and not every actor wants to be in that group because it follows you, so to speak. A lot of actors have done really well in superhero movies and have done other things too, but it’s a big step and commitment for an actor. And he’d never really been in this kind of film before.

What were the main technical challenges in pulling it all together?
I really wanted to shoot on location all around New York City, and that was a big challenge because it’s far harder than it sounds. But it was so important to the vibe and feel of the movie. So many superhero movies use lots of CGI, but I needed that gritty reality of the actual streets. And I think that’s why it’s so unsettling to people because it does feel so real. Luckily, we had Emma Tillinger Koskoff, who’s one of the great New York producers. She was key in getting locations.

Did you do a lot of previz?
I don’t usually do that much. We did it once for War Dogs and it worked well, but it’s a really slow and annoying process to some extent. As crazy as it sounds, we tried it once on the big Murray Franklin scene with De Niro at the end, which is not a scene you’d normally previz — it’s just two guys sitting on a couch. But it was a 12-page scene with so many camera angles, so we began to previz it and then just abandoned it half-way through. The DP and I were like, “This isn’t worth it. We’ll just do it like we always do and just figure it out as we go.” But previz is an amazing tool. It just needed more time and money than we had, and definitely more patience than I have.

Where did you post?
We started off at my house, where Jeff and I had an Avid setup. We also had a satellite office at 9000 Sunset, where all the assistants were. VFX and our VFX supervisor Edwin Rivera were also based out of there along with our music editor, and that’s where most of it was done. Our supervising sound editor was Alan Robert Murray, a two-time Oscar-winner for his work on American Sniper and Letters From Iwo Jima, and we did the Atmos sound mix on the lot at Warners with Tom Ozanich and Dean Zupancic.

Talk about editing with Jeff Groth. What were the big editing challenges?
There are a lot of delusions in Arthur’s head, so it was a big challenge to know when to hide them and when to reveal them. The scene order in the final film is pretty different from the scripted order, and that’s all about deciding when to reveal information. When you write the script, every scene seems important, and everything has to happen in this order, but when you edit, it’s like, “What were we thinking? This could move here, we can cut this, and so on.”

Todd Phillips on set with Robert De Niro

That’s what’s so fun about editing and why I love it and post so much. I see my editor as a co-writer. I think every director loves editing the most, because let’s face it — directors are all control freaks, and you have the most control in post and the editing room. So for me at least, I direct movies and go through all the stress of production and shooting just to get to the editing room. It’s all stuff I just have to deal with so I can then sit down and actually make the movie. So it’s the final draft of the script and I very much see it as a writing exercise.

Post is your last shot at getting the script right, and the most fun part of making a movie is the first 10 to 12 weeks of editing. The worst part is the final stretch of post, all that detail work and watching the movie 400 times. You get sick of it, and it’s so hard to be objective. This ended up taking 20 weeks before we had the first cut. Usually you get 10 for the director’s cut, but I asked Warners for more time and they were like, “OK.”

Visual effects play a big role in the film. How many were there?
More than you’d think, but they’re not flashy. I told Edwin early on, if you do your job right, no one will guess there are any VFX shots at all. He had a great team, and we used various VFX houses, including Scanline, Shade and Branch.

There’s a lot of blood, and I’m guessing that was all enhanced a lot?
In fact, there was no real blood — not a drop — used on set, and that amazes people when I tell them. That’s one of the great things about VFX now — you can do all the blood work in post. For instance, traditionally, when you film a guy being shot on the subway, you have all the blood spatters and for take two, you have to clean all that up and repaint the walls and reset, and it takes 45 minutes. This way, with VFX, you don’t have to deal with any of that. You just do a take, do it again until it’s right, and add all the blood in post. That’s so liberating.

What was the most difficult VFX shot to do?
I’d say the scene with Randall at his apartment. All that blood tracking on the walls and on Arthur’s face and hands is pretty amazing, and we spent the most time getting that right.

Where did you do the DI, and how important is it to you?
At Company 3 with my regular colorist Jill Bogdanowicz, and it’s vital for the look. I only began doing DIs on the first Hangover, and the great thing about it is you can go in and surgically fix anything. And if you have a great DP like Larry Sher, who’s shot the last six movies for me, you don’t get lost in the maze of possibilities, and I trust him more than I trust myself sometimes.

We shot it digitally, though the original plan was to shoot 65mm large format and, when that fell through, to shoot 35mm. Then Larry and I did a lot of tests and decided we’d shoot digital and make it look like film. And thanks to the way he lit it and all the work he and Jill did, it has this weird photochemical feel and look. It’s not quite film, but it’s definitely not digital. It’s somewhere in the middle, its own thing.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

DP Chat: Good Boys cinematographer Jonathan Furmanski

By Randi Altman

Cinematographer Jonathan Furmanski is no stranger to comedy. His resume is long and includes such projects as the TV series Search Party and Inside Amy Schumer, as well as Judd Apatow’s documentary, The Zen Diaries of Garry Shandling.

Jonathan Furmanski

So when it came time to collaborate with director Gene Stupnitsky on the Seth Rogen and Evan Goldberg-produced Good Boys feature, he was more than ready.

Good Boys follows three 12-year-old boys (Jacob Tremblay, Brady Noon and Keith L. Williams) as they discover girls and how to get in and out of trouble. Inspired by earlier coming-of-age films, such as Stand By Me, Furmanski aimed for the look of the film to have “one foot in 2019 and the other in 1986.”

We reached out to Furmanski to find out about Good Boys, his workflows, inspiration and more.

Tell us about Good Boys. How early did you get involved in this film, and how did you work with the director Gene Stupnitsky?
Good Boys was a great experience. I interviewed with Gene and the producers several months before prep started. I flew up to Vancouver about a month before we started shooting, so I had some time to sit with everyone to discuss the plan and style.

Everyone gave me a lot of room to figure out the look of the film, and there was universal agreement that we didn’t want Good Boys to look like a typical pre-teen comedy. Each conversation about the photography started with the idea that, despite the story being about three 12-year-old boys in a small suburban town, the film should feel bigger and more open. We wanted to show the thrill and fear of adolescence, discovery and adventure. That said, Gene was very clear not to undermine the comedy or constrain the actors.

How would you describe the look of the film?
My hope was that Good Boys would feel like it had one foot in 2019 and the other in 1986. We got a lot of inspiration from movies like Stand By Me, The Goonies and ET. I didn’t want the film to be slick-looking; I wanted it to be sharp and vibrant and with a wider point of view. At the same time, it needed some grit and texture — despite all the sillier or crazier moments, I very much wanted the audience to be lulled into a suspension of disbelief. So, hopefully, we achieved that.

How did you work with the director and colorist to achieve the intended look?
We were very lucky to have Natasha Leonnet at Efilm do the final color on the film. She locked into the look and vibe. We were immediately on the same page about everything.

I obsess over all the details, and she was able to address my notes — or talk me off the ledge — while bringing her own vision and sensibility, which was right in line with what Gene and I envisioned. Just like on the shoot, I was given a lot of room to create the look.

You shot in Vancouver. How long was the shoot?
We shot for about 35 days in and around Vancouver.

How did you go about choosing the right camera and lenses for this project? Can you talk about camera tests?
I spent a bit of time thinking about the best camera and lens combo. Initially, I was considering a full-frame format, but as we discussed the film and our references, we realized shooting anamorphic would bring a little more “bigness.”

Also, we knew we’d have a lot of shots of all three boys improvising or goofing around, so the wider aspect ratio would help keep them all in a nice frame. But I also didn’t want to be fighting the imperfections a lot of anamorphic lenses have. That “personality” can be great and really fun to shoot with, but for Good Boys, we needed to have greater control over the frame. So I tested every anamorphic series I could get my hands on — looking at distortion, flaring, horizontal and vertical sharpness, etc. — with a few camera systems. I settled on the ARRI Master anamorphic lenses and Alexa SXT and Mini cameras.

Ultimately, why was this the right combination of camera and lenses?
Well, I’ve shot almost every scripted and documentary project in the last five years on some model of Alexa or Amira, so I’m very familiar with the sensor and how it handles itself no matter what the situation. And I knew we’d shoot ARRIRAW, so we would record an awesome amount of information. I’m so impressed with what ARRIRAW can handle; sometimes it sees too much. But really, there’s so much to think about while shooting that, no matter how much you like the image in front of you, it’s reassuring to know you have heaps of information to work with.

As for lenses, I wanted a package that gave me all the scope and dimensionality of anamorphic without the typical limitations. Don’t get me wrong; some of the classic anamorphic series with all their flaws can be beautiful and exciting, but they weren’t the right choice for this film. I wanted to select how much (or how little) we had in focus, and I didn’t want to lose sharpness off the center of the frame or have to stop way down because we needed three boys’ faces in focus. So the Master anamorphics ended up being the perfect choice: a big look, great contrast and color rendition, lovely depth and separation, and clean and sharp across the frame.

Can you talk about the lighting and how the camera worked while grabbing shots when you could?
One of the challenges of working with three 12-year-olds as your lead actors is keeping things loose enough so they don’t feel fenced in, which would sap all the energy out of their performances. We worked hard to give each scene and location a strong look, but sometimes we lit a little more broadly or lensed a little wider so the boys had room to play.

We kept as much lighting out of the room or rigged overhead as we could so the locations wouldn’t get claustrophobic or overheated. And the operators were able to adjust their frames easily as things changed or if an actor was doing something unexpected.

Any challenging scenes that you are particularly proud of or that you found most challenging?
Without question, the most challenging sequence was the boys running across the highway. It was the biggest single scene I’ve shot, and it had multiple units shooting over five days — it was really tough from a coordination and matching perspective. Obviously, the scene had to be funny and exciting, but I also wanted it to feel huge. The boys are literally and figuratively crossing the biggest barrier of their lives! We got a little lucky that there was a thin layer of haze most of the time that took the edge off the direct sun and made matching a bit easier.

The key was sitting with our AD, Dan Miller, and coming up with the most advantageous shooting order, but not hopping around so much that we’d lose continuity or waste tons of time resetting everything. And almost every shot had VFX, so our key grip, Marc Nolet, drilled small washers into the tarmac for every camera position, and we took copious notes so we could go back if necessary, or so second unit could come in and replicate something. It was a lot of work, but the final sequence is really fun and surprising.

Now for some more general questions. How did you become interested in cinematography?
I went to film school with the idea of being a writer/director, but I discovered very quickly that I wasn’t really into that. I was drawn immediately to cameras, lenses and film stocks, and I devoured all the information I could find about cinematography. My friends started asking me to shoot their student projects, and it took off from there. I’m lucky that I still get to work with some of those college friends.

How do you stay on top of advancing technology?
I don’t find it too difficult to stay on top of the latest and greatest camera or light or other widget. The basic idea is always the same, no matter how new the implementation, and when something truly groundbreaking comes along, you hear about it quickly.

Of course, many of my friends are in the camera or lighting departments, so we talk about this stuff all the time, and there are great online resources for checking out gear or swapping ideas. Probably the best place to learn is at events like Cine Gear, where you can see the latest stuff and hang out with your friends.

What inspires you artistically?
It’s easy to find inspiration almost anywhere: museums, books, online, just walking around. I also get great inspiration from my fellow cinematographers (past and present) and the work they do. The DP community is very open and supportive, maybe surprisingly so.

What new technology has changed the way you work?
The two innovations that have impacted my work most are digital cinema cameras and LED lighting. Both have afforded me a more lightweight and efficient way of working without sacrificing quality or options.

Jonathan Furmanski

What are some of your best practices or rules you try to follow on each job?
I credit my documentary work for teaching me to keep an open ear and an open mind. When you listen, you can prepare, anticipate or hear a key piece of information that could impact your approach. This, of course, leads to improvisation because maybe your idea doesn’t work or a better idea is presenting itself. Don’t be rigid. I also try to stand next to the camera as much as possible — that’s where all the action is.

Explain your ideal collaboration with the director when setting the look of a project.
It’s exactly that… a collaboration. I don’t want to be off by myself, and I don’t want to just pass information from one person to another. The best director/DP relationships are an extended, evolving conversation where you’re pushing a clear vision together but still challenging each other.

What’s your go-to gear — things you can’t live without?
I think the ARRI Amira is the best camera ever made, although I’m a bit of a chameleon when it comes to cameras and lenses — I don’t think I’ve used the same lens package twice on all my narrative projects. The two things I must have are my own wireless monitor and a good polarizing filter; I want complete control over the image, and I don’t like standing still.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

postPerspective’s ‘SMPTE 2019 Live’ interview coverage

postPerspective was the official production team for SMPTE during its most recent conference in downtown Los Angeles this year. Taking place once again at the Bonaventure Hotel, the conference featured events and sessions all week. (You can watch those interviews here.)

These sessions ranged from “Machine Learning & AI in Content Creation” to “UHD, HDR, 4K, High Frame Rate” to “Mission Critical: Project Artemis, Imaging from the Moon and Deep Space Imaging.” The latter featured two NASA employees and a live talk with astronauts on the International Space Station. It was very cool.

postPerspective’s coverage was also cool and included many sit-down interviews with those presenting at the show (including former astronaut and One More Orbit director Terry Virts as well as Todd Douglas Miller, the director of the Apollo 11 doc), SMPTE executives and long-standing members of the organization.

In addition to the sessions, manufacturers had the opportunity to show their tools on the exhibit floor, where one of our crews roamed with camera and mic in hand reporting on the newest tech.

Whether you missed the conference or experienced it firsthand, these exclusive interviews will provide a ton of information about SMPTE, standards, and the future of our industry, as well as just incredibly smart people talking about the merger of technology and creativity.

Enjoy our coverage!

Abu Dhabi’s twofour54 is now Dolby Vision certified

Abu Dhabi’s twofour54 has become Dolby Vision certified in an effort to meet the demand for color grading and mastering Dolby Vision HDR content. twofour54 is the first certified Dolby Vision facility in the UAE, providing work in both Arabic and English.

“The way we consume content has been transformed by connectivity and digitalization, with consumers able to choose not only what they watch but where, when and how,” says Katrina Anderson, director of commercial services at twofour54. “This means it is essential that content creators have access to technology such as Dolby Vision in order to ensure their content reaches as wide an audience as possible around the world.”

With Netflix, Amazon Prime and others now competing with existing broadcasters, there is a big demand around the world for high-quality production facilities. According to twofour54, Netflix’s expenditure on content creation soared from $4.6 billion in 2015 to $12 billion last year, while other platforms — such as Amazon Prime, Apple TV and YouTube — are also seeking to create more unique content. Consequently, the global demand for production facilities such as those offered by twofour54 is outstripping supply.

“We have seen increased interest in Dolby Vision for home entertainment due to the growing popularity of digital streaming services in the Middle East, and we are now able to support studios and content creators with leading-edge tools deployed at twofour54’s world-class post facility,” explains Pankaj Kedia, managing director of emerging markets for Dolby Laboratories. “Dolby Vision is the preferred HDR mastering workflow for leading studios and a growing number of content creators, and this latest offering demonstrates twofour54’s commitment to making Abu Dhabi a preferred location for film and TV production.”

Why is this important? For color grading of movies and episodic content, Dolby has created a workflow that generates shot-by-shot dynamic metadata that allows filmmakers to see how their content will look on consumer devices. The colorist can then add “trims” to adjust how the mapping looks and to deliver a better-looking SDR version for content providers serving early Ultra HD (UHD) televisions that are capable only of SDR reproduction.
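As a rough mental model — not Dolby's actual metadata format or algorithm; the field names, knee curve and trim behavior below are invented for illustration — each shot carries its own brightness analysis, and the colorist's trim nudges how that shot's HDR master is mapped into the SDR range:

from dataclasses import dataclass

@dataclass
class ShotMetadata:
    max_nits: float     # measured peak luminance of this shot's HDR grade
    trim_gain: float    # colorist's trim on top of the automatic mapping

def map_to_sdr(hdr_nits: float, shot: ShotMetadata, sdr_peak: float = 100.0) -> float:
    """Compress one pixel's HDR luminance into the SDR range for this shot."""
    normalized = hdr_nits / shot.max_nits           # 0..1 within the shot
    tone_mapped = normalized / (normalized + 0.18)  # simple knee, purely illustrative
    return min(sdr_peak, tone_mapped * sdr_peak * shot.trim_gain)

bright_shot = ShotMetadata(max_nits=1000.0, trim_gain=1.1)  # colorist lifts it slightly
print(round(map_to_sdr(800.0, bright_shot), 1), "nits in the SDR derivation")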

The colorists at twofour54 use both Blackmagic DaVinci Resolve and FilmLight Baselight systems.

Main Image: Engineer Noura Al Ali

A post engineer’s thoughts on Adobe MAX, new offerings

By Mike McCarthy

Last week, I had the opportunity to attend Adobe’s MAX conference at the LA Convention Center. Adobe showed me, and 15,000 of my closest friends, the newest updates to pretty much all of its Creative Cloud applications, as well as a number of interesting upcoming developments. From a post production perspective, the most significant pieces of news are the release of Premiere Pro 14 and After Effects 17 (a.k.a. the 2020 releases of those Creative Cloud apps).

The main show ran from Monday to Wednesday, with a number of pre-show seminars and activities the preceding weekend. My experience started off with a screening of the new Terminator: Dark Fate film at LA Live, followed by a Q&A with the director and post team. The new Terminator was edited in Premiere Pro, sharing the project assets between a large team of editors and assistants, with extensive use of After Effects, Adobe’s newly acquired Substance tools and various other tools in the Creative Cloud.

The post team extolled the improvements in shared project support and project opening times since their last Premiere endeavor on the first Deadpool movie. Visual effects editor Jon Carr shared how they used the integration between Premiere and After Effects to facilitate rapid generation of temporary “postvis” effects. This helped the editors tell the story while they were waiting on the VFX teams to finish generating the final CGI characters and renders.

MAX
The conference itself kicked off with a keynote presentation of all of Adobe’s new developments and releases. The 150-minute presentation covered all aspects of the company’s extensive line of applications. “Creativity for All” is the primary message Adobe is going for, and they focused on the tension between creativity and time. So they are trying to improve their products in ways that give their users more time to be creative.

The three prongs of that approach for this iteration of updates were:
– Faster, more powerful, more reliable — fixing time-wasting bugs, improving hardware use.
– Create anywhere, anytime, with anyone — adding functionality via the iPad, and shared Libraries for collaboration.
– Explore new frontiers — specifically in 3D, with Adobe’s Dimension, Substance and Aero.

Education is also an important focus for Adobe, with 15 million copies of CC in use in education around the world. They are also creating a platform for CC users to stream their working process to viewers who want to learn from them, directly from within the applications. That will probably integrate with the new expanded Creative Cloud app released last month. They also have released integration for Office apps to access assets in CC libraries.

The first application updates they showed off were in Photoshop. They have made the new locked-aspect-ratio scaling a toggle-able behavior, improved the warp tool and improved ways to navigate deep layer stacks by seeing which layers affect particular parts of an image. But the biggest improvement is AI-based object selection, which makes detailed masks based on simple box selections or rough lassos. Illustrator now has GPU acceleration, improving performance on larger documents, and a path-simplifying tool to reduce the number of anchor points.

They released Photoshop for the iPad and announced that Illustrator will be following that path as well. Fresco is headed in the other direction and is now available on Windows. That is currently limited to Microsoft Surface products, but I look forward to being able to try it out on my ZBook x2 at some point. Adobe XD has new features and apparently is the best way to move complex Illustrator files into After Effects, which I learned at one of the sessions later.

Premiere
Premiere Pro 14 has a number of new features, the most significant being AI-driven automatic reframe, which lets you automatically convert your edited project into other aspect ratios for various deliverables. While 16×9 is obviously a standard size, certain web platforms are optimized for square or tall videos. The feature can also be used to reframe content from 2.35 to 16×9 or 4×3, which are frequent delivery requirements for the feature films I work on. My favorite aspect of this new functionality is that the user has complete control over the results.

Unlike other automated features such as warp stabilizer, which only lets you apply the results or not, Auto Reframe simply generates motion-effect keyframes that can be further edited and customized by the user once the initial AI pass is complete. It also has a nesting feature for retaining existing framing choices, which results in the creation of a new single-layer source sequence. I can envision this being useful for a number of other workflow processes, such as preparing for external color grading or texturing passes.
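For those curious about what those keyframes actually represent, here is a minimal Python sketch of the crop math behind an aspect-ratio reframe. It is purely illustrative — my own back-of-the-envelope version, not Adobe's actual Auto Reframe algorithm — and the function name and pan parameter are hypothetical:

```python
# Illustrative only: a simple centered crop with a user-adjustable pan,
# not Adobe's Auto Reframe algorithm.

def reframe_crop(src_w, src_h, target_aspect, pan=0.0):
    """Return (x, y, w, h) of a crop window in source pixels.

    pan runs from -1.0 (full left/top) to +1.0 (full right/bottom),
    analogous to a motion keyframe the editor could adjust afterwards.
    """
    src_aspect = src_w / src_h
    if target_aspect < src_aspect:
        # Target is narrower than the source: full height, cropped width
        # (e.g. 16x9 -> 9x16 vertical deliverable).
        h = src_h
        w = int(src_h * target_aspect)
        max_offset = (src_w - w) / 2
        x, y = int(max_offset + pan * max_offset), 0
    else:
        # Target is wider than the source: full width, cropped height
        # (e.g. 2.35:1 -> 16x9).
        w = src_w
        h = int(src_w / target_aspect)
        max_offset = (src_h - h) / 2
        x, y = 0, int(max_offset + pan * max_offset)
    return x, y, w, h

# A 3840x2160 (16x9) source reframed to a centered vertical 9x16 crop:
print(reframe_crop(3840, 2160, 9 / 16))   # -> (1312, 0, 1215, 2160)
```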

They also added better support for multi-channel audio workflows and effects, improved playback performance for many popular video formats, better HDR export options and a variety of changes to make the motion graphics tools more flexible and efficient for users who use them extensively. They also increased the range of values available for clip playback speed and volume, and added support for new camera formats and derivations.

The brains behind After Effects have focused on improving playback and performance for this release and have made some significant improvements in that regard. The other big feature that may actually make a difference is content-aware fill for video. This was sneak-previewed at MAX last year and first implemented in the NAB 2019 release of After Effects, but it has been refined and improved in this version and is now reportedly twice as fast.

They also greatly improved support for OpenEXR frame sequences, especially those with multiple render-pass channels. The channels can be labeled, and After Effects creates a video contact sheet for viewing all of the layers in thumbnail form. EXR playback performance is supposed to be greatly improved as well.
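As a rough illustration of what those labeled render passes look like under the hood, here is a short Python sketch that lists the channels in a multi-layer EXR frame using the OpenEXR bindings. The file name is hypothetical, and this is only meant to show the channel structure, not how After Effects reads the file:

```python
# Illustrative only: list the labeled render-pass channels in an EXR frame.
import OpenEXR

exr = OpenEXR.InputFile("shot_010_beauty.0101.exr")   # hypothetical frame
for name in exr.header()["channels"]:
    # Typical labels look like "diffuse.R", "specular.G", "depth.Z", etc.
    print(name)
```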

Character Animator is now at 3.0, and they have added keyframing of all editable values, triggerable reposition "cameras" and triggerable audio effects, among other new features. And Adobe Rush now supports publishing directly to TikTok.

Content Authenticity Initiative
Outside of individual applications, Adobe has launched the Content Authenticity Initiative in partnership with the NY Times and Twitter. It aims to fight fake news and restore consumer confidence in media, with three main goals: trust, attribution and authenticity. The idea is to show end users who created an image, whether it was edited or altered and, if so, in what ways. Seemingly at odds with that, Adobe also released a new mobile app that edits images upon capture, using AI-powered "lenses" for highly stylized looks, even providing a live view.

This opening keynote was followed by a selection of over 200 different labs and sessions over the next three days. I attended a couple of sessions focused on After Effects, as that is a program I know I don't use to its full capacity. (Does anyone, really?)

Partners
A variety of other partner companies were showing off their products in the community pavilion. HP was pushing 3D printing and digital manufacturing tools that integrate with Photoshop and Illustrator. Dell has a new 27-inch color-accurate monitor with a built-in colorimeter, presumably to compete with HP's top-end DreamColor displays. Asus also has some new HDR monitors that are Dolby Vision compatible. One is designed to be portable and is as thin and lightweight as a laptop screen. I have always wondered why that wasn't a standard approach for desktop displays.

Keynotes
Tuesday opened with a keynote presentation from a number of artists of different types, speaking or being interviewed. Jason Levine's talk with M. Night Shyamalan was my favorite part, even though thrillers aren't really my cup of tea. Later, I was able to sit down and talk with Patrick Palmer, Adobe's Premiere Pro product manager, about where Premiere is headed and the challenges of developing HDR creation tools when there is no unified set of standards for final delivery. But I am looking forward to being able to view my work in HDR while I am editing at some point in the future.

One of the highlights of MAX is the 90-minute Sneaks session on Tuesday night, where comedian John Mulaney "helped" a number of Adobe researchers demonstrate new media technologies they are working on. These will eventually improve audio quality, automate animation, analyze photographic authenticity and handle many other tasks once they are refined into final products at some point in the future.

This was only my second time attending MAX, and with Premiere Rush being released last year, video production was a key part of that show. This year, without that factor, it was much more apparent to me that I was an engineer attending an event catering to designers. That's not a bad thing, but I mention it because it is good to know what you are stepping into when deciding whether to invest in attending a particular event.

Adobe focuses MAX on artists and creatives as opposed to engineers and developers, who have other events that are more focused on their interests and needs. I suppose that is understandable since it is not branded Creative Cloud for nothing. But it is always good to connect with the people who develop the tools I use, and the others who use them with me, which is a big part of what Adobe MAX is all about.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Review: iZotope’s Ozone 9 isn’t just for mastering

By Pat Birk

iZotope is back with its latest release, Ozone 9, and with it the company hopes to provide a comprehensive package of tools to streamline the audio engineer's workflow. The company has been on my radar for a number of years now; I have used its RX suite extensively for cleanup and restoration of production audio.

I have always been impressed by RX’s ability to improve poor location sound but was unfamiliar with the company’s more music-focused products. But, in addition to being an engineer, I am also a musician, so I was excited to try out the Ozone suite for myself.

Ozone is first and foremost a mastering suite. It features a series of EQ, compression, saturation and limiting modules meant to be used in a mastering chain — putting the final touches on your audio before it hits streaming platforms or physical media.

Since Ozone is primarily (though by no means solely) aimed at mastering engineers, the plugin features a host of options for manipulating a finished stereo mix, with all elements in place and no stems to adjust. Full disclosure: my mastering experience prior to this review comprised loading up an instance of Waves Abbey Road TG Mastering Chain, applying a preset and playing with the settings from there. However, that didn't stop me from loading a recent mix into Ozone and taking a crack at mastering it.

The Master Assistant feature helps create a starting point on your master. Note the colored lines beneath the waveform, which accurately depict the song’s structure.

Ozone deeply integrates machine learning, and I immediately found that the program lives up to the hype surrounding that technology. I loaded my song into the standalone app and analyzed it with the Master Assistant feature. I was asked to choose between the Vintage and Modern settings, select either a manual EQ setting or load a mastered song for reference, and then tell Ozone whether the track was being mastered for streaming or CD.

Within about 15 seconds of making these selections and playing the track, Ozone had chained together a selection of EQs, compressors and limiters that added punch, clarity and, of course, loudness. I was really impressed with the ballpark iZotope's AI had gotten my track into. Another really nice touch was that Ozone had analyzed the track and assigned a series of colored lines beneath the waveform to represent each section of the song. It was dead on and really streamlined the process of checking each section and making adjustments.

Vintage
As a musician who came up listening to the great recordings of the ‘60s and ‘70s, I often find myself wanting to add some analog credibility to my largely in-the-box productions. iZotope delivers in a big way here, incorporating four vintage modules to add as much tube and transistor warmth as you desire. The Vintage EQ module is based on the classic Pultec, emulating its distinctive curves and representing them graphically. My ears knew that a little goes a long way with Pultec-type EQs, but the graphic EQ really helped me understand what was going on more deeply.

The Vintage Compressor emulates analog feedback compressors such as the Urei 1176 and Teletronix LA2A and is specifically designed to minimize pumping effects that can appear when compression is overdone. I had to push the compressor pretty hard before I heard anything like that and found that it did a really nice job of subtly attenuating transients.

Vintage tape adds analog warmth, and this reviewer found it pulls the sound together.

The Vintage Limiter is based on the prized Fairchild 670 limiter and it does what a limiter is meant to do: raise the level of the mix and decrease dynamic range, all while adding a distinctive analog warmth to the signal. I’ve never gotten my hands on a Fairchild, but I know that this emulation sounds good, regardless of how true it is to the original.

The Master Assistant feature arranged all of these modules in a nicely gain-staged chain for me, and after some light tweaking, I was well within the neighborhood of what I was hoping for in a master. But I wanted to add a little more warmth, a little more “glue.” That’s where the Vintage Tape module came in. iZotope has based its tape emulation on the Studer A810. The company says that the plugin features all of the benefits of tape — added warmth, saturation and glue — without any of the wow, flutter and crosstalk that occurs on actual tape machines.

Adjustable tape speeds have a noticeable effect on frequency response, with 7.5 ips being darker and 30 ips being brighter. More tonal adjustments can be made via the bias and low and high emphasis controls, and saturation is controlled via the input drive control. The plugin departs from the realm of physical tape emulation with the added Harmonics control, which adds even harmonics to the signal, providing further warmth.
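To give a rough sense of what "adding even harmonics" means in practice, here is a minimal numpy sketch of a simple asymmetric waveshaper. It is purely illustrative and is not iZotope's actual Harmonics algorithm:

```python
# Illustrative only: a crude even-harmonic "warmth" shaper (x + k*x^2,
# with the DC offset removed), not iZotope's Harmonics control.
import numpy as np

def add_even_harmonics(x, amount=0.2):
    """x: float signal in [-1, 1]; amount: strength of 2nd-harmonic content."""
    shaped = x + amount * np.square(x)   # squaring a sine produces its 2nd harmonic
    shaped -= np.mean(shaped)            # remove the DC component that x^2 introduces
    return np.clip(shaped, -1.0, 1.0)

# A 440 Hz test tone gains energy at 880 Hz after shaping:
sr = 48000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
warmed = add_even_harmonics(tone, amount=0.3)
```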

I appreciated the warmth and presence Vintage Tape added to the signal, but I did find myself missing some of the “imperfection” options included on other tape emulation plug-ins, such as the Waves J37 tape machine. Slight wow and flutter can add character to a recording and can be especially interesting if the tape emulator has a send-and-return section for setting up delays. But Ozone is a mastering suite, so I can see why these kinds of features weren’t included.

The vintage EQ purports to offer Pultec-style cuts and boosts.

Modern Sounds
Each of the vintage modules has a modern counterpart in the form of the Dynamics, Dynamic EQ, EQ and Exciter plugins. Each of these plugins is simple to operate, with a sleek, modern UI. Each plugin is also multiband with the EQs featuring up to eight bands, the Dynamic EQ featuring six and the Exciter and Dynamics modules featuring four bands each. This opens up a wide range of possibilities for precisely manipulating audio.

I was particularly intrigued by the Exciter's ability to divide the frequency spectrum into four quadrants and apply a different type of analog harmonic excitement to each. Tube, transistor and tape saturation are all available, and the Exciter truly represents a modern method of using classic analog sound signatures.

The Modern modules will also be of interest to sound designers and other audio post pros. Dynamic EQ allows you to set a threshold and ratio at which a selected band will begin to affect audio. While this is, of course, useful for managing problems such as sibilance and other harsh frequencies in a musical context, problematic frequencies are just as prevalent in dialogue recording, if not more so. Used judiciously, Dynamic EQ has the potential to save a lot of time in a dialogue edit. Dynamic EQ or the multiband compression section of Ozone’s Dynamics module have the potential to rescue production audio.
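For readers who want to picture the mechanics, here is a minimal Python sketch of the gain law behind a downward dynamic EQ band. It is a simplified illustration of the threshold-and-ratio idea, not iZotope's implementation, and it leaves out filter design, envelope detection and attack/release behavior:

```python
# Illustrative only: how much a dynamic EQ band is cut once its measured
# level exceeds the threshold. Real implementations add filters and
# attack/release smoothing around this core gain law.

def dynamic_eq_gain_db(band_level_db, threshold_db=-20.0, ratio=3.0):
    """Return the cut (in dB, <= 0) to apply to the band at this level."""
    over_db = band_level_db - threshold_db
    if over_db <= 0.0:
        return 0.0                       # below threshold: band left untouched
    # A 3:1 ratio lets 1 dB through for every 3 dB over the threshold,
    # so the band is cut by the remaining two-thirds of the overshoot.
    return -over_db * (1.0 - 1.0 / ratio)

print(dynamic_eq_gain_db(-12.0))   # 8 dB over threshold at 3:1 -> about -5.33 dB
```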

Exciter allows for precise amounts of harmonic distortion to be added across four bands.

For instance, if an actor delivers a fantastic performance but creates a loud transient by hitting a prop, the Dynamic EQ can easily tame that transient without greatly affecting the actor's voice and without creating artifacts. And while the EQ modules in Ozone feature a wide selection of filter categories and precisely adjustable Qs, which will no doubt be useful throughout the design process, it is important to note that they are limited to 6 dB boosts and 12 dB cuts in gain. The plugin is still primarily aimed at the subtleties of mastering.

Dialogue Editors, Listen Up
Ozone's machine learning does provide two more fantastic features for dialogue editors: Match EQ and Master Rebalance. Match EQ intelligently builds an EQ profile of a given audio selection and can apply it to another piece of audio. This can greatly aid in matching a lavalier mic to a boom track or incorporating ADR into a take. I also tested it by referencing George Harrison's "What Is Life?" and applying it to a mix of my song. I was shocked by how close the plugin got my mix to sounding like George's.
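Conceptually, a match EQ measures the long-term average spectrum of a reference and of the target, then derives a per-band correction curve that nudges one toward the other. Here is a minimal Python/SciPy sketch of that idea — illustrative only, with none of the smoothing, weighting or filter design a product like Ozone actually uses:

```python
# Illustrative only: derive a rough "match EQ" curve from two audio signals.
import numpy as np
from scipy.signal import welch

def match_eq_curve(target, reference, sr, nperseg=2048):
    """Return (freqs, gain_db): the per-band correction that pushes the
    target's long-term spectrum toward the reference's."""
    f, target_psd = welch(target, fs=sr, nperseg=nperseg)
    _, ref_psd = welch(reference, fs=sr, nperseg=nperseg)
    eps = 1e-12
    # PSD is power, so 10*log10 of the ratio gives the level difference in dB.
    gain_db = 10.0 * np.log10((ref_psd + eps) / (target_psd + eps))
    return f, np.clip(gain_db, -12.0, 12.0)   # keep corrections within a sane range
```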

Ozone’s standard equalizer

Master Rebalance, meanwhile, is meant for a mastering engineer to be able to bring up or lower the vocals, bass, or drums in a song with only a stereo mix to work from. I tested it on music and was very impressed by how effectively it raised and cut each category without affecting the parts around it. But this will also have utility for dialogue editors — the module is so effective at recognizing the human voice that it can bring up dialogue within production tracks, further separating it from whatever noise is happening around it.

Match EQ yields impressive results and could be time-saving for music engineers crossing over into the audio post world — like those who do not own RX 7 Advanced, which features a similar module.

The Imager module also has potential for post. Its Stereoize feature can add an impressive amount of width to any track and has a multiband mode, meaning you have the option to, for example, keep the low frequencies tight and centered while spreading the mids and highs more widely across the stereo field. And while it is not a substitute for true stereo recording, the Stereoize feature can add depth to mono ambience and world-tone recordings, making them usable in the right context.
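To illustrate the general idea of getting width from a mono source while keeping the low end centered, here is a minimal Python sketch: the lows stay identical in both channels, while the band above the crossover is decorrelated with a short delay in one channel. This is not how Stereoize actually works — just one simple, hypothetical way to get a related effect:

```python
# Illustrative only: mono lows stay centered, highs are decorrelated with a
# short Haas-style delay. Not iZotope's Stereoize algorithm.
import numpy as np
from scipy.signal import butter, sosfilt

def mono_to_wide(mono, sr, crossover_hz=200.0, delay_ms=12.0):
    """Return an (N, 2) stereo array built from a mono signal."""
    sos_lo = butter(4, crossover_hz, btype="low", fs=sr, output="sos")
    sos_hi = butter(4, crossover_hz, btype="high", fs=sr, output="sos")
    lows, highs = sosfilt(sos_lo, mono), sosfilt(sos_hi, mono)
    delay = max(1, int(sr * delay_ms / 1000.0))
    highs_delayed = np.concatenate([np.zeros(delay), highs[:-delay]])
    left = lows + highs              # undelayed highs on the left
    right = lows + highs_delayed     # delayed highs on the right
    return np.stack([left, right], axis=-1)
```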

Master Rebalance features a simple interface.

The collection of plugins is available at three price points — Elements, Standard and Advanced — which allows engineers to get started with Ozone at any budget. Elements is a stripped-down package of Ozone's bare essentials; Standard introduces the standalone app and a sizeable step up in feature set; and Advanced is replete with every mastering innovation iZotope has developed to date, including new toys like Low-End Focus and Master Rebalance. A complete list of each tier's features can be found on iZotope's website.

Summing Up
Ozone 9 integrates an immense amount of technology and research into a sleek, user-friendly package. For music recording and mastering engineers, this suite is a no-brainer. For other audio post engineers, the plugin provides enough perks to be interesting and useful, from editing to design to final mix. Ozone 9 Elements, Standard and Advanced editions are available now from iZotope.


Pat Birk is a musician and sound engineer at Silver Sound, a boutique sound house based in New York City.

Blog: Making post deliverables simple and secure

By Morgan Swift

Post producers don’t have it easy. With an ever-increasing number of platforms for distribution and target languages to cater to, getting one’s content to the global market can be challenging to say the least. To top it all, given the current competitive landscape, producers are always under pressure to reduce costs and meet tight deadlines.

After two decades in the creative services business, we've all seen it before — post coordinators and supervisors getting burnt out working late nights, often juggling multiple projects and being pushed to the breaking point. You can see it in their eyes. What adds to the stress is dealing with multiple vendors to get various kinds of post finishing work done — from color grading to master QC to localization.

Morgan Swift

Localization is not the least of these challenges. Different platforms specify different deliverables, including access services like closed captions (CC) and audio description (AD), along with as-broadcast scripts (ABS) and combined continuity spotting lists (CCSL). Each of these deliverables requires specialized teams and tools to execute. Needless to say, they also have a significant impact on the budget — usually at least tens of thousands of dollars (much more for a major release).

It is therefore critical to plan post deliverables well in advance to ensure that you are in complete control of turnaround time (TAT), expected spend and potential cost-saving opportunities. Let's look at a few ways of streamlining the process of creating access services deliverables. To do this, we need to understand the various factors at play.

First of all, we need to consider the amount of effort involved in creating these deliverables. There is typically a lot of overlap, as deliverables like as-broadcast scripts and combined continuity spotting lists are often required for creating closed captions and audio description. This means that it is cheaper to combine the creation of all these deliverables instead of getting them done separately.

The second factor to think about is security. Given that pre-release content is extremely vulnerable to piracy, the days of getting an extra DVD with visible timecode for closed captions should be over. Even the days of sending a non-studio-approved link just to create the deliverables should be over.
Why? Because today, there exist tailor-made solutions that have been designed to facilitate secure localization operations. They enable easy creation of a folder that can be used to send and receive files securely, even by external vendors. One such solution is Clear Media ERP, which was built from the ground up by Prime Focus Technologies to address these challenges.

There is no additional cost to send and receive videos or post deliverable files if you already have a system like this set up for a show. You can keep your pre-release content completely safe, leveraging the software’s advanced security features which include multi-factor authentication, Okta integration, bulk watermarking, burnt-in watermarks for downloads, secure script and document distribution and more.

With the right tech stack, you get one beautifully organized and secure location to store all of your access services deliverables, which means your team can finally sit back and focus on what matters the most — creating incredible content.


Morgan Swift is director of account management at Prime Focus Technologies in Los Angeles.

SMPTE 2019 Live: Gala Award Winners

postPerspective was invited by SMPTE to host the exclusive coverage of their 2019 Awards Gala. (Watch here!)

The annual event was hosted by Kasha Patel (a digital storyteller at NASA Earth Observatory by day and a science comedian by night), and presenters included Steve Wozniak. Among this year's honorees were Netflix's Anne Aaron, Gary J. Sullivan, Michelle Munson and Sky's Cristina Gomila Torres. Honorary Membership was bestowed on Roderick Snell (Snell & Wilcox) and Paul Kellar (Quantel).

If you missed this year’s SMPTE Awards Gala, or even if you were there, check out our backstage interviews with some of our industry’s luminaries. We hope you enjoy watching these interviews as much as we enjoyed shooting them.

Oh, and a big shout out to the team from AlphaDogs who shot and edited all of our 2019 SMPTE Live coverage!

Behind the Title: Sarofsky EP Steven Anderson

This EP's responsibilities run the gamut "from managing our production staff to treating clients to an amazing dinner."

Company: Chicago’s Sarofsky

Can you describe your company?
We like to describe ourselves as a design-driven production company. I like to think of us as that but so much more. We can be a one-stop shop for everything from concept through finish, or we can partner with a variety of other companies and just be one piece of the puzzle. It’s like ordering from a Chinese menu — you get to pick what items you want.

What’s your job title, and what does the job entail?
I’m executive producer, and that means different things at different companies and industries. Here at Sarofsky, I am responsible for things that run the gamut from managing our production staff to treating clients to an amazing dinner.

Sarofsky

What would surprise people the most about what falls under that title?
I also run payroll, and I am damn good at it.

How has the VFX industry changed in the time you’ve been working?
It used to be that when you told someone, “This is going to take some time to execute,” that’s what it meant. But now, everyone wants everything two hours ago. On the flip side, the technology we now have access to has streamlined the production process and provided us with some terrific new tools.

Why do you like being on set for shoots? What are the benefits?
I always like being on set whenever I can because decisions are being made that are going to affect the rest of the production paradigm. It’s also a good opportunity to bond with clients and, sometimes, get some kick-ass homemade guacamole.

Did a particular film inspire you along this path in entertainment?
I have been around this business for quite a while, and one of the reasons I got into it was my love of film and filmmaking. I can’t say that one particular film inspired me to do this, but I remember being a young kid and my dad taking me to see The Towering Inferno in the movie theater. I was blown away.

What’s your favorite part of the job?
Choosing a spectacular bottle of wine for a favorite client and watching their face when they taste it. My least favorite has to be chasing down clients for past due invoices. It gets old very quickly.

What is your most productive time of the day?
It’s 6:30am with my first cup of coffee sitting at my kitchen counter before the day comes at me. I get a lot of good thinking and writing done in those early morning hours.

Original Bomb Pop via agency VMLY&R

If you didn’t have this job, what would you be doing instead?
I would own a combo bookstore/wine shop where people could come and enjoy two of my favorite things.

Why did you choose this profession?
I would say this profession chose me. I studied to be an actor and made my living at it for several years, but due to some family issues, I ended up taking a break for a few years. When I came back, I went for a job interview at FCB and the rest is history. I made the move from agency producing to post executive producer five years ago and have not looked back since.

Can you briefly explain one or more ways Sarofsky is addressing the issue of workplace diversity in its business?
We are a smallish women-owned business, and I am a gay man; diversity is part of our DNA. We always look out for the best talent but also try to ensure we are providing opportunities for people who may not have access to them. For example, one of our amazing summer interns came to us through a program called Kaleidoscope 4 Kids, and we all benefited from the experience.

Name some recent projects you have worked on, which are you most proud of, and why?
My first week here as EP, we went to LA for the friends-and-family screening of Guardians of the Galaxy, and I thought, what an amazing company I work for! Marvel Studios is a terrific production partner, and I would say there is something special about so many of our clients because they keep coming back. I do have a soft spot for our main title for Animal Kingdom just because I am a big Ellen Barkin fan.

Original Bomb Pop via agency VMLY&R

Name three pieces of technology you can’t live without.
I’d be remiss if I didn’t say my MacBook and iPhone, but I also wouldn’t want to live without my cooking thermometer, as I’ve learned how to make sourdough bread this year, and it’s essential.

What social media channels do you follow?
I am a big fan of Instagram; it’s just visual eye candy and provides a nice break during the day. I don’t really partake in much else unless you count NPR. They occupy most of my day.

Do you listen to music while you work? Care to share your favorite music to work to?
I go in waves. Sometimes I do but then I won’t listen to anything for weeks. But I recently enjoyed listening to “Ladies and Gentleman: The Best of George Michael.” It was great to listen to an entire album, a rare treat.

What do you do to de-stress from it all?
I get up early and either walk or do some type of exercise to set the tone for the day. It’s also so important to unplug; my partner and I love to travel, so we do that as often as we can. All that and a 2006 Chateau Margaux usually washes away the day in two delicious sips.

James Norris joins Nomad in London as editor, partner

Nomad in London has added James Norris as editor and partner. A self-taught, natural editor, Norris started out running for the likes of Working Title, Partizan and Tomboy Films. He then moved to Whitehouse Post as an assistant, where he refined his craft and rose through the ranks to become an editor.

Over the past 15 years, he has worked across commercials, music videos, features and television. Norris edited Ikea's Fly Robot Fly spot and Asda's Get Possessed piece and has recently cut a new project for Nike. In television and film, he also cut an episode of the BAFTA-nominated drama Our World War and the feature film We Are Monster.

Says Norris, "I was attracted to Nomad for their vision for the future and their dedication to the craft of editing. They have a wonderful history but are also so forward-thinking and want to create new, exciting things. The New York and LA offices have seen incredible success over the last few years, and now there's Tokyo and London too. On top of this, Nomad feels like home already. They're really lovely people — it really does feel like a family."

Norris will be cutting on Avid Media Composer at Nomad.


Harbor crafts color and sound for The Lighthouse

By Jennifer Walden

Director Robert Eggers’ The Lighthouse tells the tale of two lighthouse keepers, Thomas Wake (Willem Dafoe) and Ephraim Winslow (Robert Pattinson), who lose their minds while isolated on a small rocky island, battered by storms, plagued by seagulls and haunted by supernatural forces/delusion-inducing conditions. It’s an A24 film that hit theaters in late October.

Much like his first feature-length film The Witch (winner of the 2015 Sundance Film Festival Directing Award for a dramatic film and the 2017 Independent Spirit Award for Best First Feature), The Lighthouse is a tense and haunting slow descent into madness.

But “unlike most films where the crazy ramps up, reaching a fever pitch and then subsiding or resolving, in The Lighthouse the crazy ramps up to a fever pitch and then stays there for the next hour,” explains Emmy-winning supervising sound editor/re-recording mixer Damian Volpe. “It’s like you’re stuck with them, they’re stuck with each other and we’re all stuck on this rock in the middle of the ocean with no escape.”

Volpe, who’s worked with director Eggers on two short films — The Tell-Tale Heart and Brothers — thought he had a good idea of just how intense the film and post sound process would be going into The Lighthouse, but it ended up exceeding his expectations. “It was definitely the most difficult job I’ve done in over two decades of working in post sound for sure. It was really intense and amazing,” he says.

Eggers chose Harbor's New York City location for both sound and final color. This was colorist Joe Gawler's first time working with Eggers, but it couldn't have been a more fitting film. The Lighthouse was shot on 35mm black & white (Double-X 5222) film with a 1.19:1 aspect ratio, and, as it happens, Gawler is well versed in the world of black & white. He has remastered a tremendous number of classic titles for The Criterion Collection, such as Breathless, Seven Samurai and several Fellini films, including 8½. "To take that experience from my Criterion title work and apply that to giving authenticity to a contemporary film that feels really old, I think it was really helpful," Gawler says.

Joe Gawler

The advantage of shooting on film versus shooting digitally is that film negatives can be rescanned as technology advances, making it possible to take a film from the ‘60s and remaster it into 4K resolution. “When you shoot something digitally, you’re stuck in the state-of-the-moment technology. If you were shooting digitally 10 years ago and want to create a new deliverable of your film and reimagine it with today’s display technologies, you are compromised in some ways. You’re having to up-res that material. But if you take a 35mm film negative shot 100 years ago, the resolution is still inside that negative. You can rescan it with a new scanner and it’s going to look amazing,” explains Gawler.

While most of The Lighthouse was shot on black & white film (with Baltar lenses designed in the 1930s for that extra dose of authenticity), there were a few stock footage shots of the ocean with big storm waves and some digitally rendered elements, such as the smoke, that had to be color corrected and processed to match the rich, grainy quality of the film. “Those stock footage shots we had to beat up to make them feel more aged. We added a whole bunch of grain into those and the digital elements so they felt seamless with the rest of the film,” says Gawler.

The digitally rendered elements were separate VFX pieces composited into the black & white film image using Blackmagic’s DaVinci Resolve. “Conforming the movie in Resolve gave us the flexibility to have multiple layers and allowed us to punch through one layer to see more or less of another layer,” says Gawler. For example, to get just that right amount of smoke, “we layered the VFX smoke element on top of the smokestack in the film and reduced the opacity of the VFX layer until we found the level that Rob and DP Jarin Blaschke were happy with.”

In terms of color, Gawler notes The Lighthouse was all about exposure and contrast. The spectrum of gray rarely goes to true white and the blacks are as inky as they can be. “Jarin didn’t want to maintain texture in the blackest areas, so we really crushed those blacks down. We took a look at the scopes and made sure we were bottoming out so that the blacks were pure black.”

From production to post, Eggers' goal was to create a film that felt like it could have been pulled from a 1930s film archive. "It feels authentically antique, and that goes for the performances, the production design and all the period-specific elements — the lights they used and the camera, and all the great care we took in our digital finish of the film to make it feel as photochemical as possible," says Gawler.

The Sound
This holds true for post sound, too. So much so that Eggers and Volpe kicked around the idea of making the soundtrack mono. “When I heard the first piece of score from composer Mark Korven, the whole mono idea went out the door,” explains Volpe. “His score was so wide and so rich in terms of tonality that we never would’ve been able to make this difficult dialogue work if we had to shove it all down one speaker’s mouth.”

The dialogue was difficult on many levels. First, Volpe describes the language as “old-timey, maritime” delivered in two different accents — Dafoe has an Irish-tinged seasoned sailor accent and Pattinson has an up-east Maine accent. Additionally, the production location made it difficult to record the dialogue, with wind, rain and dripping water sullying the tracks. Re-recording mixer Rob Fernandez, who handled the dialogue and music, notes that when it’s raining the lighthouse is leaking. You see the water in the shots because they shot it that way. “So the water sound is married to the dialogue. We wanted to have control over the water so the dialogue had to be looped. Rob wanted to save as much of the amazing on-set performances as possible, so we tried to go to ADR for specific syllables and words,” says Fernandez.

Rob Fernandez

That wasn’t easy to do, especially toward the end of the film during Dafoe’s monologue. “That was very challenging because at one point all of the water and surrounding sounds disappear. It’s just his voice,” says Fernandez. “We had to do a very slow transition into that so the audience doesn’t notice. It’s really focusing you in on what he is saying. Then you’re snapped out of it and back into reality with full surround.”

Another challenging dialogue moment was a scene in which Pattinson is leaning on Dafoe’s lap, and their mics are picking up each other’s lines. Plus, there’s water dripping. Again, Eggers wanted to use as much production as possible so Fernandez tried a combination of dialogue tools to help achieve a seamless match between production and ADR. “I used a lot of Synchro Arts’ Revoice Pro to help with pitch matching and rhythm matching. I also used every tool iZotope offers that I had at my disposal. For EQ, I like FabFilter. Then I used reverb to make the locations work together,” he says.

Volpe reveals, “Production sound mixer Alexander Rosborough did a wonderful job, but the extraneous noises required us to replace at least 60% of the dialogue. We spent several months on ADR. Luckily, we had two extremely talented and willing actors. We had an extremely talented mixer, Rob Fernandez. My dialogue editor William Sweeney was amazing too. Between the directing, the acting, the editing and the mixing they managed to get it done. I don’t think you can ever tell that so much of the dialogue has been replaced.”

The third main character in the film is the lighthouse itself, which lives and breathes with a heartbeat and lungs. The mechanism of the Fresnel lens at the top of the lighthouse has a deep, bassy gear-like heartbeat and rasping lungs that Volpe created from wrought iron bars drawn together. Then he added reverb to make the metal sound breathier. In the bowels of the lighthouse there is a steam engine that drives the gears to turn the light. Ephraim (Pattinson) is always looking up toward Thomas (Dafoe), who is in the mysterious room at the top of the lighthouse. “A lot of the scenes revolve around clockwork, which is just another rhythmic element. So Ephraim starts to hear that and also the sound of the light that composer Korven created, this singing glass sound. It goes over and over and drives him insane,” Volpe explains.

Damian Volpe

Mermaids make a brief appearance in the film. To create their vocals, Volpe and his wife did a recording session in which they made strange sea creature call-and-response sounds to each other. “I took those recordings and beat them up in Pro Tools until I got what I wanted. It was quite a challenge and I had to throw everything I had at it. This was more of a hammer-and-saw job than a fancy plug-in job,” Volpe says.

He captured other recordings too, like the sound of footsteps on the stairs inside a lighthouse on Cape Cod, marine steam engines at an industrial steam museum in northern Connecticut and more at Mystic Seaport — seagulls and waves. "We recorded so much. We dug a grave. We found an 80-year-old lobster pot that we smashed about. I recorded the inside of conch shells to get drones. Eighty percent of the sound in the film is material that I and Filipe Messeder (assistant and Foley editor) recorded, or that I recorded with my wife," says Volpe.

But one of the trickiest sounds to create was a foghorn that Eggers originally liked from a lighthouse in Wales. Volpe tracked down the keeper there but the foghorn was no longer operational. He then managed to locate a functioning steam-powered diaphone foghorn in Shetland, Scotland. He contacted the lighthouse keeper Brian Hecker and arranged for a local documentarian to capture it. “The sound of the Sumburgh Lighthouse is a major element in the film. I did a fair amount of additional work on the recordings to make them sound more like the original one Rob [Eggers] liked, because the Sumburgh foghorn had a much deeper, bassier, whale-like quality.”

The final voice in The Lighthouse’s soundtrack is composer Korven’s score. Since Volpe wanted to blur the line between sound design and score, he created sounds that would complement Korven’s. Volpe says, “Mark Korven has these really great sounds that he generated with a ball on a cymbal. It created this weird, moaning whale sound. Then I created these metal creaky whale sounds and those two things sing to each other.”

In terms of the mix, nearly all the dialogue plays from the center channel, helping it stick to the characters within the small frame of this antiquated aspect ratio. The Foley, too, comes from the center and isn’t panned. “I’ve had some people ask me (bizarrely) why I decided to do the sound in mono. There might be a psychological factor at work where you’re looking at this little black & white square and somehow the sound glues itself to that square and gives you this idea that it’s vintage or that it’s been processed or is narrower than it actually is.

“As a matter of fact, this mix is the farthest thing from mono. The sound design, effects, atmospheres and music are all very wide — more so than I would do in a regular film as I tend to be a bit conservative with panning. But on this film, we really went for it. It was certainly an experimental film, and we embraced that,” says Volpe.

The idea of having the sonic equivalent of this 1930s film style persisted. Since mono wasn't feasible, other avenues were explored. Volpe suggested recording the production dialogue onto a Nagra to "get some of that analog goodness, but it just turned out to be one thing too many for them in the midst of all the chaos of shooting on Cape Forchu in Nova Scotia," says Volpe. "We did try tape emulator software, but that didn't yield interesting results. We played around with the idea of laying it off to a 24-track or shooting in optical. But in the end, those all seemed like they'd be expensive and we'd have no control whatsoever. We might not even like what we got. We were struggling to come up with a solution."

Then a suggestion from Harbor’s Joel Scheuneman (who’s experienced in the world of music recording/producing) saved the day. He recommended the outboard Rupert Neve Designs 542 Tape Emulator.

The Mix
The film was final mixed in 5.1 surround on a Euphonix S5 console. Each channel was sent through an RND 542 module and then into the speakers. The units’ magnetic heads added saturation, grain and a bit of distortion to the tracks. “That is how we mixed the film. We had all of these imperfections in the track that we had to account for while we were mixing,” explains Fernandez.

“You couldn’t really ride it or automate it in any way; you had to find the setting that seemed good and then just let it rip. That meant in some places it wasn’t hitting as hard as we’d like and in other places it was hitting harder than we wanted. But it’s all part of Rob Eggers’s style of filmmaking — leaving room for discovery in the process,” adds Volpe.

“There’s a bit of chaos factor because you don’t know what you’re going to get. Rob is great about being specific but also embracing the unknown or the unexpected,” he concludes.


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.