
Category Archives: post production

Blackmagic releases Resolve 16.2, beefs up audio post tools

Blackmagic has updated its color, edit, VFX and audio post tool to Resolve 16.2. This new version features major Fairlight updates for audio post as well as many improvements for color correction, editing and more.

This version has major updates for editing in the Fairlight audio timeline when using a mouse and keyboard. The new edit selection mode unlocks functionality previously available only via the audio editor on the full Fairlight console, so editing is much faster than before. The edit selection mode also makes adding fades and cuts and even moving clips only a mouse click away. New scalable waveforms let users zoom in without adjusting the volume. Bouncing lets customers render a clip with custom sound effects directly from the Fairlight timeline.

Adding multiple clips is also easier, as users can now add them to the timeline vertically, not just horizontally, making it simpler to add multiple tracks of audio at once. Multichannel tracks can now be converted into linked groups directly in the timeline, so users no longer have to change clips manually and reimport. There’s added support for frame boundary editing, which improves file export compatibility for film and broadcast deliveries and adds precision, so users can trim to frame boundaries without zooming all the way in on the timeline. The new version supports modifier keys so that clips can be duplicated directly in the timeline using the keyboard and mouse, and clips can be copied across multiple timelines with ease.
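
Under the hood, frame boundary editing is simple arithmetic: an edit point measured in audio samples is snapped to the nearest video frame line. Here is a minimal sketch of the idea, assuming a 48kHz/24fps timeline (an illustration only, not Resolve’s actual code):

```python
# Illustrative frame-boundary snapping; not Resolve's actual implementation.
SAMPLE_RATE = 48_000   # audio samples per second (assumed)
FPS = 24               # timeline frame rate (assumed)

def snap_to_frame(sample_pos: int) -> int:
    """Snap an audio sample position to the nearest video frame boundary."""
    samples_per_frame = SAMPLE_RATE / FPS          # 2,000 samples per frame here
    frame = round(sample_pos / samples_per_frame)  # nearest whole frame
    return int(frame * samples_per_frame)

print(snap_to_frame(96_900))  # -> 96000, i.e. snapped back to frame 48
```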

Resolve 16.2 also includes support for the Blackmagic Fairlight Sound Library with new support for metadata-based searches, so customers don’t need to know the filename to find a sound effect. Search results also display both the file name and description, so finding the perfect sound effect is faster and easier than before.

MPEG-H 3D immersive surround sound audio bussing and monitoring workflows are now supported. Additionally, improved pan and balance behavior includes the ability to constrain panning.

Fairlight audio editing also has index improvements. The edit index is now available in the Fairlight page and works as it does in the other pages, displaying a list of all media used; users simply click on a clip to navigate directly to its location in the timeline. The track index now supports drag selections for mute, solo, record enable and lock as well as visibility controls so editors can quickly swipe through a stack of tracks without having to click on each one individually. Audio tracks can also be rearranged by clicking and dragging a single track or a group of tracks in the track index.

This new release also includes improvements in AAF import and export. AAF support has been refined so that AAF sequences can be imported directly to the timeline in use. Additionally, if the project features a different time scale, the AAF data can also be imported with an offset value to match. AAF files that contain multiple channels will also be recognized as linked groups automatically. The AAF export has been updated and now supports industry-standard broadcast wave files. Audio cross-fades and fade handles are now added to the AAF files exported from Fairlight and will be recognized in other applications.

For traditional Fairlight users, this new update makes major improvements in importing legacy Fairlight projects, including much faster opening of projects with over 1,000 media files.

Audio mixing is also improved. A new EQ curve preset for clip EQ in the inspector allows removal of troublesome frequencies. New FairlightFX filters include a new meter plug-in that adds a floating meter for any track or bus, so users can keep an eye on levels even if the monitoring panel or mixer are closed. There’s also a new LFE filter designed to smoothly roll off the higher frequencies when mixing low-frequency effects in surround.

Working with immersive sound workflows using the Fairlight audio editor has been updated and now includes dedicated controls for panning up and down. Additionally, clip EQ can now be altered in the inspector on the editor panel. Copy and paste functions have been updated, and now all attributes — including EQ, automation and clip gain — are copied. Sound engineers can set up their preferred workflow, including creating and applying their own presets for clip EQ. Plug-in parameters can also be customized or added so that users have fast access to their preferred tool set.

Clip levels can now be changed relatively, allowing users to adjust the overall gain while respecting existing adjustments. Clip levels can also be reset to unity, easily removing any level adjustments that might have previously been made. Fades can also be deleted directly from the Fairlight editor, making the process faster than before. Sound engineers can also now save their preferred track view so that they get the view they want without having to create it each time. More functions previously only available via the keyboard are now accessible using the panel, including layered editing. This also means that automation curves can now be selected via the keyboard or audio panel.

Continuing the extensive improvements to Fairlight audio, there have also been major updates to the audio editor transport control. Track navigation is now improved and even works when nothing is selected. Users can navigate directly to the timecode entry window above the timeline from the audio editor panel, and there is added support for high-frame-rate timecodes. Timecode entry now supports values relative to the current CTI location, so the playhead can be moved relative to its current position rather than to an absolute timecode.

Support has also been added for using the colon key in place of typing 00. Master spill on console faders now lets users spill out all the tracks to a bus fader for quick adjustments in the mix. There’s also more precision with rotary controls on the panel and when using a mouse with a modifier key. Users can also change the layout and select either icon or text-only labels on the Fairlight editor. Legacy Fairlight users can now use the traditional — and perhaps more familiar — Fairlight layout. Moving around the timeline is even quicker with added support for “media left” and “media right” selection keys to jump the playhead forward and back.
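
Relative timecode entry of this kind reduces to parsing an offset and adding it to the current playhead position. The sketch below shows the basic arithmetic; the helper names and parsing rules are assumptions for illustration, not Fairlight’s actual parser:

```python
# Hypothetical timecode helpers; not Fairlight's actual parser.
def tc_to_frames(tc: str, fps: int = 24) -> int:
    """Convert 'HH:MM:SS:FF' to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(total: int, fps: int = 24) -> str:
    """Convert an absolute frame count back to 'HH:MM:SS:FF'."""
    f = total % fps
    s = total // fps
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f:02d}"

# Relative entry: move the playhead two seconds past the current CTI.
cti = tc_to_frames("01:00:10:00")
print(frames_to_tc(cti + tc_to_frames("00:00:02:00")))  # 01:00:12:00
```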

This update also improves editing in Resolve. Loading and switching timelines on the edit page is now faster, with improved performance when working with a large number of audio tracks. Compound clips can now be made from in and out points so that editors can be more selective about which media they want to see directly in the edit page. There is also support for previewing timeline audio when performing live overwrites of video-only edits. When trimming, the duration now reflects the clip duration as users actively trim, so they can set a specific clip length. There is also a new dialog for changing transition duration.

The media pool now includes metadata support for audio files with up to 24 embedded channels. Users can also duplicate clips and timelines into the same bin using copy and paste commands. The primary DaVinci Resolve screen can now run as a window when dual-screen mode is enabled. Smart filters now let users sort media based on metadata fields, including keywords and people tags, so users can find the clips they need faster.

Quick Chat: Editing Leap Day short for Stella Artois

By Randi Altman

To celebrate February 29, otherwise known as Leap Day, beer-maker Stella Artois released a short film featuring real people who discover their time together is valuable in ways they didn’t expect. The short was conceived by VaynerMedia, directed by Division7’s Kris Belman and cut by Union partner/editor Sloane Klevin. Union also supplied Flame work on the piece.

The film begins with the words, “There is a crisis sweeping the nation” set on a black screen. Then we see different women standing on the street talking about how easy it is to cancel plans. “You’re just one text away,” says one. “When it’s really cold outside and I don’t want to go out, I use my dog excuse,” says another. That’s when the viewer is told, through text on the screen, that Stella Artois has set out to right this wrong “by showing them the value of their time together.”

The scene changes from the street to a restaurant where friends are reunited for a meal and a goblet of Stella after not seeing each other for a while. When the check comes, the confused diners ask about their bills, and an employee explains that the menu lists prices in minutes, that Leap Day is a gift of 24 hours and that people should take advantage of it by “uncancelling plans.”

Prior to February 29, Stella encouraged people to #UnCancel plans and catch up with friends over a beer… paid for by the brand. Using the Stella Leap Day Fund — a $366,000 bank of beer reserved exclusively for those who spend time together (there are 366 days in a Leap Year) — people were able to claim as much as a 24-pack when sharing the film using #UnCancelPromo and tagging someone they would like to catch up with.

Editor Sloane Klevin

For the film short, the diners were captured with hidden cameras. Union editor Klevin, who used an Avid Media Composer 2018.12.03 with EditShare storage, was tasked with finding a story in their candid conversations. We reached out to her to find out more about the project and her process.

How early did you get involved in this project, and what kind of input did you have?
I knew I was probably getting the job about a week before they shot. I had no creative input into the shoot; that really only happens when I’m editing a feature.

What was your process like?
This was an incredibly fast turnaround. They shot on a Wednesday night, and it was finished and online the following Wednesday morning at 12am.

I thought about truncating my usual process in order to make the schedule, but when I saw their shooting breakdown for how they planned to shoot it all in one evening, I knew there wouldn’t be a ton of footage. Knowing this, I could treat the project the way I approach most unscripted longform branded content.

My assistant, Ryan Stacom, transcoded and loaded the footage into the Avid overnight, then grouped the four hidden cameras with the sound from the hidden microphones — and, brilliantly, production had time-of-day timecode on everything. The only thing that was tricky was when two tables were being filmed at once. Those takes had to be separated.
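
Time-of-day timecode is what makes that grouping nearly automatic: any camera or microphone clips whose timecode ranges overlap belong in the same multicam group, which is also why two tables filmed at once had to be separated by hand. A toy sketch of the matching logic (illustrative only, not Media Composer’s implementation):

```python
# Toy sketch of grouping clips by overlapping time-of-day timecode ranges.
def overlaps(a: tuple, b: tuple) -> bool:
    """True if two (start, end) frame ranges share any time."""
    return a[0] < b[1] and b[0] < a[1]

clips = {"camA": (1000, 5000), "camB": (1200, 5200), "mic1": (900, 5100)}
ref = clips["camA"]
group = [name for name, rng in clips.items() if overlaps(ref, rng)]
print(group)  # ['camA', 'camB', 'mic1'] -> candidates for one multicam group
```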

The Simon Says transcription software was used to transcribe the short pre- and post-interviews we had, and Ryan put markers from the transcripts on those clips so I could jump straight to a keyword or line I was searching for during the edit process. I watched all the verité footage myself and put markers on anything I thought was usable in the spot, typing into the markers what was said.

How did you choose the footage you needed?
Sometimes the people had conversations that were neither here nor there, because they had no idea they were being filmed, so I skipped that stuff. Also, I didn’t know if the transcription software would be accurate with so much background noise from the restaurant on the hidden table microphones, so marking the footage myself seemed the best option. I used yellow markers for lines I really liked and red for stuff I thought we might want to be able to find and audition, but those weren’t necessarily my selects. That way I could open the markers tool and read through my yellow selects at a glance.

Once I’d seen everything, I did a music search of Asche & Spencer’s incredibly intuitive, searchable music library website, downloaded my favorite tracks and started editing.  Because of the fast turnaround, the agency was nice enough to send an outline for how they hoped the material might be edited. I explored their road map, which was super helpful, but went with my gut on how to deviate. They gave me two days to edit, which meant I could post for the director first and get his thoughts.

Then I spent the weekend playing with the agency and trying other options. The client saw the cut and gave notes on both days I was with the agency, then we spent Monday and Tuesday color correcting (thanks to Mike Howell at Color Collective), reworking the music track, mixing (with Chris Afzal at Wave Studios), conforming and subtitling.

That was a crazy fast turnaround.
Considering how fast the turnaround was, it went incredibly smoothly. I attribute that to the manageable amount of footage, fantastic casting that got us really great reactions from all the people they filmed, and the amount of communication my producer at Union and the agency producer had in advance.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

 


Western Digital intros WD Gold NVMe SSDs  

Western Digital has introduced its new enterprise-class WD Gold NVMe SSDs designed to help small- and medium-sized companies transition to NVMe storage. The SSDs offer power loss protection and high performance with low latency.

The WD Gold NVMe SSDs will be available in four capacities — 0.96TB, 1.92TB, 3.84TB and 7.68TB — in early Q2 of this year. The WD Gold NVMe SSD is designed to be, according to the company, “the primary storage in servers delivering significantly improved application responsiveness, higher throughput and greater scale than existing SATA devices for enterprise applications.”

These new NVMe SSDs complement the recently launched WD Gold HDDs by providing a high-performance storage tier for applications and data sets that require low latency or high throughput.

The WD Gold NVMe SSDs are designed using Western Digital’s silicon-to-system technology, from its 3D TLC NAND SSD media to its purpose-built firmware and own integrated controller. The drives give users peace of mind knowing they’re protected against power loss and that data paths are safe. Secure boot and secure erase provide users with additional data-management protections, and the devices come with an extended five-year limited warranty.


Krista Liney directs Ghost-inspired promo for ABC’s The Bachelor

Remember the Ghost-inspired promo for ABC’s The Bachelor, which first aired during the 92nd Academy Awards telecast? ABC Entertainment Marketing developed the concept and wrote the script, which features current Bachelor lead Peter Weber in a send-up of the iconic pottery scene in Ghost between Demi Moore and Patrick Swayze. It even includes the Righteous Brothers song, Unchained Melody, which played over that scene in the film.

ABC Entertainment Marketing tapped Canyon Road Films to produce and Krista Liney to direct. Liney captured Peter taking off his shirt, sitting down at the pottery wheel and “getting messy” — a metaphor for how messy his journey to love has been. As he starts to mold the clay, he is joined by one set of hands, then another and another. As the clay collapses, Whoopi Goldberg appears to say, “Peter, you in danger, boy” – a take-off of the line she delivers to Moore’s character in the film.

This marks Liney’s first shoot as a newly signed director coming on board at Canyon Road Films, a Los Angeles-based creative production company that specializes in television promos and entertainment content.

Liney has a perspective from the side of the client and the production house, having previously served as a marketing executive on the network side. “With promos, I aim to create pieces that will cut through the clutter and command attention,” she explains. “For me, it’s all about how I can best build the anticipation and excitement within the viewer.”

The piece was shot on an ARRI Alexa Mini with Primes and Optimo lenses. ABC finished the spot in-house.

Other credits include EP Lara Wickes and DP Eric Schmidt.


Sonnet intros USB to 5GbE adapter for Mac, Windows and Linux

Sonnet Technologies has introduced the Solo5G USB 3 to 5Gb Ethernet (5GbE) adapter. Featuring NBASE-T (multigigabit) Ethernet technology, the Sonnet Solo5G adapter adds 5GbE and 2.5GbE network connectivity to an array of computers, allowing for superfast data transfers over the existing Ethernet network cabling infrastructure found in most buildings today.

Measuring 1.5 inches wide by 3.25 inches deep by 0.7 inches tall, the Solo5G is a compact, fanless 5GbE adapter for Mac, Windows and Linux computers. Equipped with an RJ45 port, the adapter supports 5GbE and 2.5GbE (5GBASE-T and 2.5GBASE-T, respectively) connectivity via common Cat 5e (or better) copper cabling at distances of up to 100 meters. The adapter’s USB port connects to a USB-A, USB-C or Thunderbolt 3 port on the computer and is bus-powered for convenient, energy-efficient and portable operation.

Cat 5e and Cat 6 copper cables — representing close to 100% of the installed cable infrastructure in enterprises worldwide — were designed to carry data at up to only 1Gb per second. NBASE-T Ethernet was developed to boost the speed capability well beyond that limit. Sonnet’s Solo5G takes advantage of that technology.

When used with a multigigabit Ethernet switch or a 10Gb Ethernet switch with NBASE-T support — including models from Buffalo, Cisco, Netgear, QNAP, TrendNet and others — the Sonnet adapter delivers 250% to 425% of the speed of Gigabit Ethernet without a wiring upgrade. When connecting to a multigigabit Ethernet-compatible switch is not possible, the Solo5G also supports 1Gb/s and 100Mb/s link speeds.
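
Those percentages are easy to sanity-check: a 2.5GbE link is 250% of Gigabit line rate, and the 425% top end reflects real-world 5GbE throughput rather than the 500% theoretical ceiling. A rough, idealized comparison that ignores protocol overhead:

```python
# Idealized transfer-time comparison; real throughput is lower than line rate.
FILE_GB = 100  # assumed payload: 100 GB of media

for name, gbps in [("1GbE", 1.0), ("2.5GbE", 2.5), ("5GbE", 5.0)]:
    seconds = FILE_GB * 8 / gbps  # gigabytes -> gigabits, divided by line rate
    print(f"{name}: {seconds:6.1f} s ({gbps / 1.0:.0%} of GigE line rate)")
```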

Sonnet’s Solo5G includes 0.5-meter USB-C to USB-C and USB-C to USB-A cables for connecting the adapter to the computer, saving users the expense of buying a separate cable for either connector type.

The Solo5G USB-C to 5 Gigabit Ethernet adapter is available now for $79.99.


The Den editorial boutique launches in Los Angeles

Christjan Jordan, editor of award-winning work for clients including Amazon, GEICO and Hulu, has partnered with industry veteran Mary Ellen Duggan to launch The Den, an independent boutique editorial house in Los Angeles.

Over the course of his career, Jordan has worked with Arcade Edit, Cosmo Street and Rock Paper Scissors, among others. He has edited such spots as Alexa Loses Her Voice for Amazon, Longest Goal Celebration Ever for GEICO, #notspecialneeds for World Down Syndrome Day out of Publicis NY and Super Bowl 2020 ads Tom Brady’s Big Announcement for Hulu and Famous Visitors for Walmart. Jordan’s work has been recognized by the Cannes Lions, AICE, AICP, Clio, D&AD, One Show and Sports Emmy awards.

“With Mary Ellen, agency producers are guided by an industry veteran who knows exactly what agencies and clients are looking for,” says Jordan. “And for me, I love fostering young editors. It’s an interesting time in our industry and there is a lot of fresh creative talent.”

In her career, Duggan has headed production departments at both KPB and Cliff Freeman on the East Coast and, most recently, Big Family Table in Los Angeles. In addition, she has freelanced all over the country.

“The stars aligned for Christjan and I to work together,” says Duggan. “We had known each other for years and had recently worked on a Hulu campaign together. We had a similar vision for what we thought the editorial experience should be: a high-end boutique editorial house that is nimble, has a roster of diverse talent and a real family vibe.”

Veteran producer Rachel Seitel has joined as partner and head of business development. The Den will be represented by Diane Patrone at The Family on the East Coast and by Ezra Burke and Shane Harris on the West Coast.

The Den’s founding roster also features editor Andrew Ratzlaff and junior editor Hannelore Gomes. The staff works on Avid Media Composer and Adobe Premiere.


LVLY adds veteran editor Bill Cramer

Bill Cramer, an editor known for his comedy and dialogue work, among other genres, has joined the editorial roster at LVLY, a content creation and creative studio based in New York City.

Cramer joins from Northern Lights. Prior to that he had spent many years at Crew Cuts, where he launched his career and built a strong reputation for his work on many ads and campaigns. Clients included ESPN, GMC, LG, Nickelodeon, Hasbro, MLB, Wendy’s and American Express. Check out his reel.

Cramer reports that he wasn’t looking to make a move but that LVLY’s VP/MD, Wendy Brovetto, inspired him. “Wendy and I knew of each other for years, and I’d been following LVLY since they did their top-to-bottom rebranding. I knew that they’re doing everything from live-action production to podcasting, VFX, design, VR and experiential, and I recognized that joining them would give me more opportunities to flex as an editor. Being at LVLY gives me the chance to take on any project, whether that’s a 30-second commercial, music video or long-form branded content piece; they’re set up to tackle any post production needs, no matter the scale.”

“Bill’s a great comedy/dialogue editor, and that’s something our clients have been looking for,” says Brovetto. “Once I saw the range of his work, it was an easy decision to invite him to join the LVLY team. In addition to being a great editor, he’s a funny guy, and who doesn’t need more humor in their day?”

Cramer, who works on both Avid Media Composer and Adobe Premiere, joins an editorial roster that includes Olivier Wicki, J.P. Damboragian, Geordie Anderson, Noelle Webb, Joe Siegel and Aaron & Bryan.


Senior colorist Tony D’Amore joins Picture Shop

Burbank’s Picture Shop has beefed up its staff with senior colorist Tony D’Amore, who will also serve as a director of creative workflow. In that role, he will oversee a team focusing on color prep and workflow efficiency.

Originally from rural Illinois, D’Amore made the leap to the West Coast to pursue an education, studying film and television at UCLA. He started his career in color in the early ’90s, gaining valuable experience in the world of post, and he has been working closely with color and post workflows ever since.

While D’Amore has experience working on Autodesk Lustre and FilmLight Baselight, he primarily grades in Blackmagic DaVinci Resolve. D’Amore has contributed color to several Emmy Award-winning shows nominated in the category of “Outstanding Cinematography.”

D’Amore has developed new and efficient workflows for Dolby Vision HDR and HDR10, coloring hundreds of hours of episodic programming for networks including CBS, ABC and Fox, as well as cable and streaming platforms such as HBO, Starz, Netflix, Hulu and Amazon.

D’Amore’s most notable project to date is having colored a Marvel series simultaneously for IMAX and ABC delivery. His list of color credits includes Barry (HBO), Looking for Alaska (Hulu), Legion (FX), Carnival Row (Amazon), Power (Starz), Fargo (FX), Elementary (CBS), Hanna (Amazon) and a variety of Marvel series, including Jessica Jones, Daredevil, The Defenders, Luke Cage and Iron Fist, all of which are available on streaming platforms.


Behind the Title: Dell Blue lead editor Jason Uson

This veteran editor started his career at LA’s Rock Paper Scissors, where he spent four years learning the craft from editors such as Bee Ottinger and Angus Wall. After freelancing at Lost Planet, Spot Welders and Nomad, he held staff positions at Cosmo Street, Harpo Films and Beast Editorial before opening his own post boutique, Foundation Editorial, in Austin.

NAME: Jason Uson

COMPANY: Austin, Texas-based Dell Blue

Can you describe what Dell Blue does?
Dell Blue is the in-house agency for Dell Technologies.

What’s your job title?
Senior Lead Creative Editor

What does that entail?
Outside of the projects that I am editing personally, there are multiple campaigns happening simultaneously at all times. I oversee all of them and have my eyes on every edit, fostering and mentoring our junior editors and producers to help them grow in their careers.

I’ve helped establish and maintain the process regarding our workflow and post pipeline. I also work closely with our entire team of creatives, producers, project managers and vendors from the beginning of each project and follow it through from production to post. This enables us to execute the best possible workflow and outcome for every project.

To add another layer to my role, I am also directing spots for Dell when the project is right.

Alienware

That’s a lot! What else would surprise people about what falls under that title?
The number of hours that go into making sure the job gets done and is the best it can be. Editing is a process that takes time. Creating something of value that means something is an art no matter how big or small the job might be. You have to have pride in every aspect of the process. It shows when you don’t.

What’s your favorite part of the job?
I have two favorites. The first is the people. I know that sounds cliché, but it’s true. The team here at Dell is truly something special. We are family. We work together. Play together. Happy Hour together. Respect, support and genuinely care for one another. But, ultimately, we care about the work. We are all aligned to create the best work possible. I am grateful to be surrounded by such a talented and amazing group of humans.

The second, which is equally important to me, is the process of organizing my project, watching all the footage and pulling selects. I make sure I have what I need and check it off my list. Music, sound effects, VO track, graphics and anything else I need to get started. Then I create my first timeline. A blank, empty timeline. Then I take a deep breath and say to myself, “Here we go.” That’s my favorite.

Do you have a least favorite?
My least favorite part is wrapping a project. I spend so much time with my clients and creatives and we really bond while working on a project together. We end on such a high note of excitement and pride in what we’ve done and then, just like that, it’s over. I realize that sounds a bit dramatic. Not to worry, though, because lucky for me, we all come back together in a few months to work on something new and the excitement starts all over again.

What is your most productive time of day?
This also requires a two-part answer. The first is early morning. This is my time to get things done, uninterrupted. I go upstairs and make a fresh cup of coffee. I open my deck doors. I check and send emails, and get my personal stuff done. This clears out all of my distractions for the day before I jump into my edit bay.

The second part is late at night. I get to replay all of the creative decisions from the day and explore other options. Sometimes, I get lucky and find something I didn’t see before.

If you didn’t have this job, what would you be doing instead?
That’s easy. I’d be a chef. I love to cook and experiment with ingredients. And I love to explore and create an amazing dining experience.

I see similarities between editors and chefs. Both aim to create something impactful that elicits an emotional response from the “elements” they are given. For chefs, the ingredients, spices and techniques are creatively brought together to bring a dish to life.

For editors, the “elements” that I am given, in combination with the use of my style, techniques, sound design, graphics and music etc. all give life to a spot.

How early did you know this would be your path?
I had originally moved to Los Angeles with dreams of becoming an actor. Yes, it’s groundbreaking, I know. During that time, I met editor Dana Glauberman (The Mandalorian, Juno, Up in the Air, Thank You for Smoking, Creed II, Ghostbusters: Afterlife). I had lunch with her at the studios one day in Burbank and went on a tour of the backlot. I got to see all the edit bays, film stages, soundstages and machine rooms. To me, this was magic. A total game-changer in an instant.

While I was waiting on that one big role, I got my foot in the door as a PA at editing house Rock Paper Scissors. One night after work, we all went for drinks at a local bar, and seemingly every commercial on TV was one that (editors) Angus Wall and Adam Pertofsky had worked on within the last month. I was blown away. Something clicked.

This entire creative world behind the scenes was captivating to me. I made the decision at that moment to lean in and go for it. I asked the assistant editor the following morning if he would teach me — and I haven’t looked back. So, Dana, Angus and Adam… thank you!

Can you name some of your recent projects?
I edited the latest global campaign for Alienware called Everything Counts, which was directed by Tony Kaye. More recently, I worked on the campaign for Dell’s latest and greatest business PC laptop that launches in March 2020, which was directed by Mac Premo.

Dell business PC

Side note: I highly recommend Googling Mac Premo. His work is amazing.

What project are you most proud of?
There are two projects that stand out for me. The first one is the very first spot I ever cut — a Budweiser ad for director Sam Ketay and the Art Institute of Pasadena. During the edit, I thought, “Wow, I think I can do this.” It went on to win a Clio.

The second is the latest global campaign for Alienware, which I mentioned above. Director Tony Kaye is a genius. Tony and I sat in my edit bay for a week exploring and experimenting. His process is unlike any other director I have worked with. This project was extremely challenging on many levels. I honestly started looking at footage in a very different way. I evolved. I learned. And I strive to continue to grow every day.

Name three pieces of technology you can’t live without.
Wow, good question. I guess I’ll be that guy and say my phone. It really is a necessity.

Spotify, for sure. I am always listening to music in my car and trying to match artists with projects that are not even in existence yet.

My Bose noise cancelling headphones.

What social media channels do you follow?
I use Facebook and LinkedIn — mainly to stay up to date on what others are doing and to post my own updates every now and then.

I’m on Instagram quite a bit. Outside of the obvious industry-related accounts I follow, here are a few of my random favorites:

@nuts_about_birds
If you love birds as much as I do, this is a good one to follow.

@sergiosanchezart
This guy is incredible. I have been following his work for a long time. If you are looking for a new tattoo, look no further.

@andrewhagarofficial
I was lucky enough to meet Andrew through my friend @chrisprofera and immediately dove into his music. Amazing. Not to mention his dad is Sammy Hagar. Enough said.

@kaleynelson
She’s a talented photographer based in LA. Her concert stills are impressive.

@zuzubee
I love graffiti art and Zuzu is one of the best. Based in Austin, she has created several murals for me. You can see her work all over the city, as well as installations during SXSW and Austin City Limits, on Bud Light cans, and across the US.

Do you listen to music at work? What types?
I do listen to music when I work but only when I’m going through footage and pulling selects. Classical piano is my go-to. It opens my mind and helps me focus and dive into my footage.

Don’t get me wrong, I love music. But if I am jamming to my favorite, Sammy Hagar, I can’t drive…I mean dive… into my footage. So classical piano for me.

How do you de-stress from it all?
This is an understatement, but there are a few things that help me out. Sometimes during the day, I will take a walk around the block. Get a little vitamin D and fresh air. I look around at things other than my screen. This is something (editors) Tom Muldoon and John Murray at Nomad used to do every day. I always wondered why. Now I know. I come back refreshed and with my mind clear and ready for the next challenge.

I also “like” to hit the gym immediately after I leave my edit bay. Headphones on (Sammy Hagar, obviously), stretch it out and jump on the treadmill for 30 minutes.

All that is good and necessary for obvious reasons, but getting back to cooking… I love being in the kitchen. It’s therapy for me. Whether I am chopping and creating in the kitchen or out on the grill, I love it. And my wife appreciates my cooking. Well, I think she does at least.

Photo Credits: Dell PC and Jason Uson images – Chris Profera

Amazon’s The Expanse Season 4 gets HDR finish

The fourth season of the sci-fi series The Expanse, now streaming via Amazon Prime Video, was finished in HDR for the first time. Deluxe Toronto handled end-to-end post services, including online editorial, sound remixing and color grading. The series was shot on ARRI Alexa Minis.

In preparation for production, cinematographer Jeremy Benning, CSC, shot anamorphic test footage at a quarry that would serve as the filming stand-in for the season’s new alien planet, Ilus. Deluxe Toronto senior colorist Joanne Rourke then worked with Benning, VFX supervisor Bret Culp, showrunner Naren Shankar and series regular Breck Eisner to develop looks that would convey the location’s uninviting and forlorn nature, keeping the overall look desaturated and removing color from the vegetation. Further distinguishing Ilus from other environments, production chose to display scenes on or above Ilus in a 2.39 aspect ratio, while those featuring Earth and Mars remained in a 16:9 format.
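
The 2.39-inside-16:9 presentation comes down to simple letterbox math: the active picture height is the frame width divided by the aspect ratio, and the remainder becomes black bars. A quick check, assuming a UHD 3840x2160 delivery (the delivery resolution is my assumption):

```python
# Letterbox math for 2.39:1 scenes delivered inside a 16:9 frame.
frame_w, frame_h = 3840, 2160            # assumed UHD delivery
active_h = round(frame_w / 2.39)         # picture height at 2.39:1
bar = (frame_h - active_h) // 2          # black bar above and below
print(active_h, bar)                     # 1607 active lines, 276-line bars
```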

“Moving into HDR for Season 4 of our show was something Naren and I have wanted to do for a couple of years,” says Benning. “We did test HDR grading a couple seasons ago with Joanne at Deluxe, but it was not mandated by the broadcaster at the time, so we didn’t move forward. But Naren and I were very excited by those tests and hoped that one day we would go HDR. With Amazon as our new home [after airing on Syfy], HDR was part of their delivery spec, so those tests we had done previously had prepared us for how to think in HDR.

“Watching Season 4 come to life with such new depth, range and the dimension that HDR provides was like seeing our world with new eyes,” continues Benning. “It became even more immersive. I am very much looking forward to doing Season 5, which we are shooting now, in HDR with Joanne.”

Rourke, who has worked on every season of The Expanse, explains, “Jeremy likes to set scene looks on set so everyone becomes married to the look throughout editorial. He is fastidious about sending stills each week, and the intended directive of each scene is clear long before it reaches my suite. This was our first foray into HDR with this show, which was exciting, as it is well suited for the format. Getting that extra bit of detail in the highlights made such a huge visual impact overall. It allowed us to see the comm units, monitors, and plumes on spaceships as intended by the VFX department and accentuate the hologram games.”

After making adjustments and ensuring initial footage was even, Rourke then refined the image by lifting faces and story points and incorporating VFX. This was done with input from producer Lewin Webb; Benning; cinematographer Ray Dumas, CSC; Culp; or VFX supervisor Robert Crowther.

To manage the show’s high volume of VFX shots, Rourke relied on Deluxe Toronto senior online editor Motassem Younes and assistant editor James Yazbeck to keep everything in meticulous order. (For that they used the Grass Valley Rio online editing and finishing system.) The pair’s work was also essential to Deluxe Toronto re-recording mixers Steve Foster and Kirk Lynds, who have both worked on The Expanse since Season 2. Once ready, scenes were sent in HDR via Streambox to Shankar for review at Alcon Entertainment in Los Angeles.

“Much of the science behind The Expanse is quite accurate thanks to Naren, and that attention to detail makes the show a lot of fun to work on and more engaging for fans,” notes Foster. “Ilus is a bit like the wild west, so the technology of its settlers is partially reflected in communication transmissions. Their comms have a dirty quality, whereas the ship comms are cleaner-sounding and more closely emulate NASA transmissions.”

Adds Lynds, “One of my big challenges for this season was figuring out how to make Ilus seem habitable and sonically interesting without familiar sounds like rustling trees or bird and insect noises. There are also a lot of amazing VFX moments, and we wanted to make sure the sound, visuals and score always came together in a way that was balanced and hit the right emotions story-wise.”

Foster and Lynds worked side by side on the season’s 5.1 surround mix, with Foster focusing on dialogue and music and Lynds on sound effects and design elements. When each had completed his respective passes using Avid Pro Tools workstations, they came together for the final mix, spending time on fine strokes, ensuring the dialogue was clear, and making adjustments as VFX shots were dropped in. Final mix playbacks were streamed to Deluxe’s Hollywood facility, where Shankar could hear adjustments completed in real time.

In addition to color finishing Season 4 in HDR, Rourke also remastered the three previous seasons of The Expanse in HDR, using her work on Season 4 as a guide and finishing with Blackmagic DaVinci Resolve 15. Throughout the process, she was mindful to pull out additional detail in highlights without altering the original grade.

“I felt a great responsibility to be faithful to the show for the creators and its fans,” concludes Rourke. “I was excited to revisit the episodes and could appreciate the wonderful performances and visuals all over again.”

DP Chat: Watchmen cinematographer Greg Middleton

By Randi Altman

HBO’s Watchmen takes us to new dimensions in this recent interpretation of the popular graphic novel. In this iteration, we spend a lot of our time in Tulsa, Oklahoma, getting to know Regina King’s policewoman Angela Abar, her unconventional family and a shadowy organization steeped in racism called the Seventh Kavalry. We also get a look back — beautiful in black and white — at Abar’s tragic family back story. It was created and written for TV by Lost veteran Damon Lindelof.

Greg Middleton

Greg Middleton, ASC, CSC, who also worked on Game of Thrones and The Killing, was the series cinematographer. We reached out to him to find out about his process, workflow and where he gets inspiration.

When were you brought on to Watchmen, and what type of looks did the showrunner want from the show?
I joined Watchmen after the pilot for Episode 2. A lot of my early prep was devoted to discussions with the showrunner and producing directors on how to develop the look from the pilot going forward. This included some pilot reshoots due to changes in casting and the designing and building of new sets, like the police precinct.

Nicole Kassell (director of Episodes 1, 2 and 8), series production designer Kristian Milstead and I spent a lot of time breaking down the possibilities of how we could define the various worlds through color and style.

How was the look described to you? What references were you given?
We based the evolution of the look of the show on the scripts, the needs of the structure within the various worlds and on the graphic novel, which we commonly referred to as “the Old Testament.”

As you mention, it’s based on a graphic novel. Did the look give a nod to that? If so, how? Was that part of the discussion?
We attempted to break down the elements of the graphic novel that might translate well and those that would not. It’s an interesting bit of detective work because a lot of the visual cues in the comic are actually a commentary on the style of comics at the time it was published in 1985.

Those cues, if taken literally, would not necessarily work for us, as their context would not be clear. Things like color were very referential to other comics of the time. For example, they used only secondary colors instead of primaries, as was the norm. The graphic novel is also a film noir in many ways, so we got some of our ideas based on that.

What did translate well were compositional elements — tricks of transition like match cuts and the details of story in props, costumes and sets within each frame. We used some split diopters and swing shift lenses to give us some deep focus effects for large foreground objects. In the graphic novel, of course, everything is in focus, so those types of compositions are common!

This must have been fun because of the variety of looks the series has — the black-and-white flashbacks, the stylized version of Tulsa, the look of the mansion in Wales (Europa), Vietnam in modern day. Can you talk about each of the different looks?
Yes, there were so many looks! When we began prep on the series with the second episode, we were also simultaneously beginning to film the scenes in Wales for the “blond man” scenes. We knew that that storyline would have its own particular feel because of the location and its very separateness from the rest of the world.

A more classic traditional proscenium-like framing and style seemed very appropriate. Part of that intent was designed to both confuse and to make very apparent to the audience that we were definitely in another world. Cinematographer Chris Seager, BSC, was filming those scenes as I was still doing tests for the other looks and the rest of our show in Atlanta.

We discussed lenses, camera format, etc. The three major looks we had to design that we knew would go together initially were our “Watchmen” world, the “American hero story” show within the show, and the various flashbacks to 1921 Tulsa and World War I. I was determined to make sure that the main world of the show did not feel overly processed and colorized photographically. We shot many tests and developed a LUT that was mostly film-like. The other important aspects to creating a look are, of course, art direction and production design, and I had a great partner in Kristian Milstead, the production designer who joined the show after the pilot.

This was a new series. Do you enjoy developing the look of a show versus coming on board after the look was established?
I enjoy helping to figure out how to tell the story. For series, helping develop the look photographically in the visual strategy is a big part of that. Even if some of those are established, you still do similar decision-making for shooting individual scenes. However, I much prefer being engaged from the beginning.

So even when you weren’t in Wales, you were providing direction?
As I mentioned earlier, Chris Seager and I spoke and emailed regarding lenses and those choices. It was still early for us in Atlanta, but there were preliminary decisions to be made on how the “blond man” (our code name for Jeremy Irons) world would be compared to our Watchmen world. What I did was consult with my director, Nicole Kassell, on her storyboards for her sequences in Wales.

Were there any scenes or looks that stood out as more challenging than others? Can you describe?
Episode 106 was a huge challenge. We have a lot of long takes involving complex camera moves and dimmer cues as a camera would circle or travel between rooms. Also, we developed the black-and-white look to feel like older black-and-white film.
One scene in June’s apartment involved using the camera on a small Scorpio 10-foot crane and a mini Libra head to accomplish a slow move around the room. Then we had to push between her two actors toward the wall as an in-camera cue of a projected image of the black-and-white movie Trust in the Law reveals itself with a manual iris.

This kind of shot ends up being a dance with at least six people, not including the cast. The entire “nostalgia” part of the episode was done this way. And none of this would’ve been possible without an incredible cast able to hit these incredibly long takes and choreograph themselves with the camera. Jovan Adepo and Danielle Deadwyler were incredible throughout the episode.

I assume you did camera tests. Why did you choose the ARRI Alexa? Why was it right for this? What about lenses, etc.?
I have been working with the Alexa for many years now, so I was aware of what I could do with the camera. I tested a couple of others, but in the end the Alexa Mini was the right choice for us. I also needed a camera that was small so I could go on and off of a gimbal or fit into small places.

How did you work with the colorist? Who was that on this show? Were you in the suite with them?
Todd Bochner was our final colorist at Sim in LA. I shot several camera tests and worked with him in the suite to help develop viewing LUTs for the various worlds of the show. We did the same procedure for the black and white. In the end, we mimicked some techniques similar to black-and-white film (like red filters), except for us, it was adjusting the channels accordingly.
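
The red-filter technique Wilson mentions maps directly onto channel weights: on black-and-white film, a red filter lightens skin and darkens skies, and the digital equivalent is biasing the monochrome mix heavily toward the red channel. A minimal sketch with illustrative weights (not the show’s actual values):

```python
import numpy as np

def bw_with_red_filter(rgb: np.ndarray, weights=(0.7, 0.2, 0.1)) -> np.ndarray:
    """Channel-weighted monochrome mix; a heavy red weight mimics shooting
    black-and-white film through a red filter."""
    r, g, b = weights
    return rgb[..., 0] * r + rgb[..., 1] * g + rgb[..., 2] * b

frame = np.random.rand(1080, 1920, 3)  # stand-in for a graded color frame
mono = bw_with_red_filter(frame)
print(mono.shape)                      # (1080, 1920)
```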

Do you know what they used on the color?
Yes, it was Blackmagic DaVinci Resolve 16.

How did you get interested in cinematography?
I was always making films as a kid, then in school and then in university. In film school, at some point, splitting apart the various jobs, I seemed to have some aptitude for cinematography, so after school I decided to try making it my focus. I came to it more out of a love of storytelling and filmmaking and less out of a love of photography.

Greg Middleton

What inspires you? Other films?
Films that move me emotionally.

What’s next for you?
A short break! I’ve been very fortunate to have been working a lot lately. A film I shot just before Watchmen called American Woman, directed by Semi Chellas, should be coming out this year.

And what haven’t I asked that’s important?
I think the question all filmmakers should ask themselves is, “Why am I telling this story, and what is unique about the way in which I’m telling it?”


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Sohonet intros ClearView Pivot for 4K remote post

Sohonet is now offering ClearView Pivot, a solution for realtime remote editing, color grading, live screening and finishing reviews at full cinema quality. The new solution will provide connectivity and collaboration services for productions around the world.

ClearView Pivot offers 4K HDR with 12-bit color depth and 4:4:4 chroma sampling for full-color-quality video streaming with ultra-low latency over Sohonet’s private media network, which avoids the extreme compression required to cope with the contention and latency of public internet connections.
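
The appeal of JPEG 2000 over a private network becomes obvious once you total up the uncompressed data rate of such a stream. A rough estimate, assuming DCI 4K at 24fps (the frame size and rate are assumptions, not Sohonet’s published spec):

```python
# Back-of-the-envelope uncompressed bitrate for a 4K 12-bit 4:4:4 stream.
width, height, fps = 4096, 2160, 24  # assumed DCI 4K at 24fps
bits_per_pixel = 12 * 3              # 12 bits per channel, no chroma subsampling

bps = width * height * bits_per_pixel * fps
print(f"{bps / 1e9:.1f} Gb/s uncompressed")  # ~7.6 Gb/s before compression
```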

“Studios around the world need a realtime 4K collaboration tool that can process video at lossless color fidelity using the industry-standard JPEG 2000 codec between two locations across a network like ours. Avoiding the headache of the current ‘equipment only’ approach is the only scalable solution,” explains Sohonet CEO Chuck Parker.

Sohonet says its integrated solution is approved by ISE (Independent Security Evaluators) — the industry’s gold standard for security. The solution provides an encrypted stream between endpoints and an auditable usage trail for every session. The Soho Media Network (SMN) connection offers ultra-low latency (measured in milliseconds), and the company says that unlike equipment-only solutions that require the user to navigate firewall and security issues and perform a “solution check” before each session, ClearView Pivot works immediately. As a point-to-multipoint solution, the user can also pivot easily from one endpoint to the next to collaborate with multiple people at the click of a button or even stream to multiple destinations at the same time.

Sohonet has been working closely with productions on lots and on locations over the past few years in the ongoing development of ClearView Pivot. In those real-world settings, ClearView Pivot has been put through its paces with trials across multiple departments, and the color technologies have been fully inspected and approved by experts across the industry.

MPI restores The Wizard of Oz in 4K HDR

By Barry Goch

The classic Victor Fleming-directed film The Wizard of Oz, which was released by MGM in 1939 and won two of its six Academy Award nominations, has been beautifully restored by Burbank’s Warner Bros. Motion Picture Imaging (MPI).

Bob Bailey

To share its workflow on the film, MPI invited a group of journalists to learn about the 4K UHD HDR restoration of this classic film. The tour guide for our high-tech restoration journey was MPI’s VP of operations and sales Bob Bailey, who walked us through the entire restoration process — from the original camera negative to final color.

The Wizard of Oz, which starred Judy Garland, was shot on a Technicolor three-strip camera system. According to Bailey, it ran three black and white negatives simultaneously. “That is why it is known as three-strip Technicolor. The magazine on top of the camera was triple the width of a normal black and white camera because it contained each roll of negative to capture your red, green and blue records,” explained Bailey.

“When shooting in Technicolor, you weren’t just getting the camera. You would rent a package that included the camera, a camera crew with three assistants, the film, the processing and a Technicolor color consultant.”

George Feltenstein, SVP of theatrical catalog marketing for Warner Bros. Home Entertainment, spoke about why the film was chosen for restoration. “The Wizard of Oz is among the crown jewels that we hold,” he said. “We wanted to embrace the new 4K HDR technology, but nobody’s ever released a film that old using this technology. HDR, or high dynamic range, has a color range that is wider than anything that’s come before it. There are colors [in The Wizard of Oz] that were never reproducible before, so what better a film to represent that color?”

Feltenstein went on to explain that this is the oldest film to get released in the 4K format. He hopes that this is just the beginning and that many of the films in Warner Bros.’ classic library will also be released on 4K HDR and worked on at MPI under Bailey’s direction.

The Process
MPI scanned each of the three-strip Technicolor nitrate film negatives at 8K 16-bit, composited them together and then applied a new color grade. The film was rescanned with the Lasergraphics Director 10K scanner. “We have just under 15 petabytes of storage here,” said Bailey. “That’s working storage, because we’re working on 8K movies since [some places in the world] are now broadcasting 8K.”
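
Conceptually, compositing the three records is straightforward: each black-and-white negative carries one color record, and the aligned scans become the red, green and blue channels of a single RGB frame. A toy numpy sketch of that stacking (MPI’s actual pipeline also handles registration, cleanup and grading):

```python
import numpy as np

def composite_three_strip(red_rec, green_rec, blue_rec):
    """Stack three monochrome scans (H x W, 16-bit) into one RGB frame."""
    assert red_rec.shape == green_rec.shape == blue_rec.shape
    return np.stack([red_rec, green_rec, blue_rec], axis=-1)

# Synthetic stand-ins for the three scanned records.
h, w = 2160, 4096
recs = [np.random.randint(0, 2**16, (h, w), dtype=np.uint16) for _ in range(3)]
frame = composite_three_strip(*recs)
print(frame.shape, frame.dtype)  # (2160, 4096, 3) uint16
```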

Steven Anastasi

Our first stop was to look at the Lasergraphics Director. We then moved on to MPI’s climate-controlled vault, where we were introduced to Steven Anastasi, VP of technical operations at Warner Bros. Anastasi explained that the original negative vault has climate-controlled conditions with 25% humidity at 35 degrees Fahrenheit, which is the combination required for keeping these precious assets safe for future generations. He said there are 2 million assets in the building, including picture and sound.

It was amazing to see film reels for 2001: A Space Odyssey sitting on a shelf right in front of me. In addition to the feature reels, MPI also stores millions of negatives captured throughout the years by Warner productions. “We also have a very large library,” reported Anastasi. “So the original negatives from the set, a lot of unit photography, head shots in some cases and so forth. There are 10 million of these.”

Finally, we were led into the color bay to view the film. Janet Wilson, senior digital colorist at MPI, has overseen every remaster of The Wizard of Oz for the past 20 years. Wilson used a FilmLight Baselight X system for the color grade. The grading suite housed multiple screens: a Dolby Pulsar for the Dolby Vision pass, a Sony X300 and a Panasonic EZ1000 OLED 4K HDR.

“We have every 4K monitor manufactured, and we run the film through all of them,” said Bailey. “We painstakingly go through the process from a post perspective to make sure that our consumers get the best quality product that’s available out in the marketplace.”

“We want the consumer experience on all monitors to be something that’s taken into account,” added Feltenstein. “So we’ve changed our workflow by having a consumer or prosumer monitor in these color correction suites so the colorist has an idea of what people are going to see at home, and that’s helped us make a better product.”

Our first view of the feature was a side-by-side comparison of the black and white scanned negative and the sepia color corrected footage. The first part of the film, which takes place in Kansas, was shot in black and white, and then a sepia look was applied to it. The reveal scene, when Dorothy passes through the door going into Oz, was originally shot in color. For this new release, the team generated a matte so Wilson could add this sepia area to the inside of the house as Dorothy transitioned into Oz.

“So this is an example of some of the stuff that we could do in this version of the restoration,” explained Wilson. “With this version, you can see that the part of the image where she’s supposed to be in the monochrome house is not actually black and white. It was really a color image. So the trick was always to get the interior of the house to look sepia and the exterior to look like all of the colors that it’s supposed to. Our visual effects team here at MPI — Mike Moser and Richie Hiltzik — was able to draw a matte for me so that I could color inside of the house independently of the exterior and make them look right, which was always a really tricky thing to do.”
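
Matte-based regional grading like this is, at bottom, a per-pixel blend: the matte selects the house interior, a sepia transform is applied there, and the original color shows through everywhere else. A simplified sketch using a common illustrative sepia matrix (not MPI’s actual grade):

```python
import numpy as np

# A common illustrative sepia matrix; not the grade MPI actually used.
SEPIA = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]])

def grade_through_matte(rgb: np.ndarray, matte: np.ndarray) -> np.ndarray:
    """Apply the sepia transform only where matte == 1; soft mattes blend."""
    sepia = np.clip(rgb @ SEPIA.T, 0.0, 1.0)
    m = matte[..., None]
    return rgb * (1 - m) + sepia * m

frame = np.random.rand(1080, 1920, 3)   # stand-in for the scanned frame
matte = np.zeros((1080, 1920))
matte[:, :960] = 1.0                    # pretend the left half is "interior"
print(grade_through_matte(frame, matte).shape)  # (1080, 1920, 3)
```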

Wilson referred back to the Technicolor three-strip, explaining that because you’ve got three different pieces of film — the different records — they’re receiving the light in different ways. “So sometimes one will be a little brighter than the other. One will be a little darker than the other, which means that the Technicolor is not a consistent color. It goes a little red, and then it goes a little green, and then it goes a little blue, and then it goes a little red again. So if you stop on any given frame, it’s going to look a little different than the frames around it, which is one of the tricky parts of color correcting Technicolor. When that’s being projected by a film projector, it’s less noticeable than when you’re looking at it on a video monitor, so it takes a lot of little individual corrections to smooth those kinds of things out.”

Wilson reported seeing new things with the 8K scan and 4K display. “The amount of detail that went into this film really shows up.” She said that one of the most remarkable things about the restoration was the amazing detail visible on the characters. For the first time in many generations, maybe ever, you can actually see the detail of the freckles on Dorothy’s face.

In terms of leveraging the expanded dynamic range of HDR, I asked Wilson if she tried to map the HDR into a sweet spot, so that it’s both spectacular and not overpowering at the same time.

“I ended up isolating the very brightest parts of the picture,” she replied. “In this case, it’s mostly the sparkles on their shoes and curving those off so I could run those in, because this movie is not supposed to have modern-day animation levels of brightness. It’s supposed to be much more contained. I wanted to take advantage of brightness and the ability to show the contrast we get from this format, because you can really see the darker parts of the picture. You can really see detail within the Wicked Witch’s dress. I don’t want it to look like it’s not the same film. I want it to replicate that experience of the way this film should look if it was projected on a good print on a good projector.”
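
The curving-off Wilson describes is essentially a soft knee: values below a threshold pass through untouched, while highlights above it are compressed toward a ceiling, so the sparkles stay bright without spiking to modern animation levels. A schematic example (the nit values are illustrative):

```python
import numpy as np

def highlight_rolloff(nits, knee=400.0, ceiling=1000.0):
    """Pass values below the knee through linearly; compress values above it
    asymptotically toward the ceiling (an illustrative soft clip)."""
    nits = np.asarray(nits, dtype=float)
    span = ceiling - knee
    over = np.maximum(nits - knee, 0.0)
    return np.where(nits <= knee, nits, knee + span * (1 - np.exp(-over / span)))

print(highlight_rolloff([100, 400, 2000, 4000]))  # sparkles land below 1000 nits
```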

Dorothy’s ruby slippers also presented a challenge to Wilson. “They are so red and so bright. They’re so light-reflective, but there were times when they were just a little too distracting. So I had to isolate and track the slippers and bring their level down a little bit so that they weren’t the first and only thing you saw in the image.”

If you are wondering if audio was part of this most recent restoration, the answer is no, but it had been remastered for a previous version. “As early as 1929, MGM began recording its film music using multiple microphones. Those microphone angles allowed the mixer to get the most balanced monophonic mix, and they were preserved,” explained Feltenstein. “Twenty years ago, we created a 5.1 surround mix that was organically made from the original elements that were created in 1939. It is full-frequency, lossless audio, and a beautiful restoration job was made to create that track so you can improve upon what I consider to be close to perfection without anything that would be disingenuous to the production.”

In all, it was an amazing experience to go behind the scenes and see how the wizards of MPI created a new version of this masterpiece for today and preserved it for future generations.

This restored version of The Wizard of Oz is a must-see visual extravaganza, and there is no better way to see it than in UHD, HDR, Dolby Vision or HDR10+. What I saw in person took my breath away, and I hope every movie fan out there can have the opportunity to see this classic film in its never-before-seen glory.

The 4K version of The Wizard of Oz is currently available via an Ultra HD Blu-ray Combo Pack and digital.


Barry Goch is a finishing artist at LA’s The Foundation as well as a post production instructor at UCLA Extension. You can follow him on Twitter at @Gochya

Behind the Title: Harbor sound editor/mixer Tony Volante

“As re-recording mixer, I take all the final edited elements and blend them together to create the final soundscape.”

Name: Tony Volante

Company: Harbor

Can you describe what Harbor does?
Harbor was founded in 2012 to serve the feature film, episodic and advertising industries. Harbor brings together production and post production under one roof — what we like to call “a unified process allowing for total creative control.”

Since then, Harbor has grown into a global company with locations in New York, Los Angeles and London. Harbor hones every detail throughout the moving-image-making process: live-action, dailies, creative and offline editorial, design, animation, visual effects, CG, sound and picture finishing.

What’s your job title?
Supervising Sound Editor/Re-Recording Mixer

What does that entail?
I supervise the sound editorial crew for motion pictures and TV series along with being the re-recording mixer on many of my projects. I put together the appropriate crew and schedule along with helping to finalize a budget through the bidding process. As re-recording mixer, I take all the final edited elements and blend them together to create the final soundscape.

What would surprise people the most about what falls under that title?
How almost all the sound that someone hears in a movie has been replaced by a sound editor.

What’s your favorite part of the job?
Creatively collaborating with co-workers and hearing it all come together in the final mix.

What is your most productive time of day?
Whenever I can turn off my emails and concentrate on mixing.

If you didn’t have this job, what would you be doing instead?
Fishing!

When did you know this would be your path?
I played drums in a rock band and got interested in sound at around 18 years old. I was always interested in the “sound” of an album along with the musicality. I found myself buying records based on who had produced and engineered them.

Can you name some recent projects?
Fosse/Verdon (FX) and Boys State, which just won the Grand Jury Prize at Sundance.

How has the industry changed since you began working?
Technology has improved workflows immensely and has helped us with the creative process. It has also opened up the door to accelerating schedules to the point of sacrificing artistic expression and detail.

Name three pieces of technology you can’t live without
Avid Pro Tools, my iPhone and my car’s navigation system.

How do you de-stress from it all?
I stand in the middle of a flowing stream, fishing with my fly rod. If I catch something, that’s a bonus!

Alkemy X adds all-female design collective Mighty Oak

Alkemy X has added animation and design collective Mighty Oak to its roster for US commercial representation. Mighty Oak has used its expertise in handmade animation techniques and design combined with live action for brands and networks, including General Electric, Netflix, Luna Bar, HBO, Samsung, NBC, Airbnb, Condé Nast, Adult Swim and The New York Times.

Led by CEO/EP Jess Peterson, head of creative talent Emily Collins and CD Michaela Olsen, the collective has garnered over 3 billion online views. Mighty Oak’s first original short film, Under Covers, premiered at the 2019 Sundance Film Festival. Helmed by Olsen, the quirky stop-motion short uses handmade puppets and forced-perspective sets to offer a glimpse into the unsuspecting lives and secrets that rest below the surface of a small town.

“I was immediately struck by the extreme care that Mighty Oak takes on each and every frame of their work,” notes Alkemy X EP Eve Ehrich. “Their handmade style and fresh approach really make for dynamic, memorable animation, regardless of the concept.”

Mighty Oak’s Peterson adds, “We are passionate about collaborating with our clients from the earliest stages, working together to craft original character designs and creating work that is memorable and fun.”

Post house DigitalFilm Tree names Nancy Jundi COO

DigitalFilm Tree (DFT) has named Nancy Jundi as chief operating officer. She brings a wealth of experience to her new role, after more than 20 years working with entertainment and technology companies.

Jundi has been an outside consultant to DFT since 2014 and will now be based in DFT’s Los Angeles headquarters, where she joins founder and CEO Ramy Katrib in pioneering new offerings for DFT.

Jundi began her career in investment banking and asset protection before segueing into the entertainment industry. Her experience includes leading sales and marketing at Runway during its acquisition by The Post Group (TPG). She then joined that team as director of marketing and communications to unify the end-to-end post facilities into TPG’s singular brand narrative. She later co-founded Mode HQ (acquired by Pacific Post) before transitioning into technology and SaaS companies.

Since 2012, Jundi has served as a consultant to companies in industries as varied as financial technology, healthcare, eCommerce, and entertainment, with brands such as LAbite, Traffic Zoom and GoBoon. Most recently, she served as interim senior management for CareerArc.

“Nancy is simply one of the smartest and most creative thinkers I know,” says Katrib. “She is that rare interdisciplinary thinker, creative and technological, who can exist in both arenas with clarity and tenacity.”

Marriage Story director Noah Baumbach

By Iain Blair

Writer/director Noah Baumbach first made a name for himself with The Squid and the Whale, his 2005 semi-autobiographical, bittersweet story about his childhood and his parents’ divorce. It launched his career, scoring him an Oscar nomination for Best Original Screenplay.

Noah Baumbach

His latest film, Marriage Story, is also about the disintegration of a marriage — and the ugly mechanics of divorce. Detailed and emotionally complex, the film stars Scarlett Johansson and Adam Driver as the doomed couple.

In all, Marriage Story scooped up six Oscar nominations — Best Picture, Best Actress, Best Actor, Best Supporting Actress, Best Original Screenplay and Best Original Score. Laura Dern walked away with a statue for her supporting role.

The film co-stars Dern, Alan Alda and Ray Liotta. The behind-the-scenes team includes director of photography Robbie Ryan, editor Jennifer Lame and composer Randy Newman.

Just a few days before the Oscars, Baumbach — whose credits also include The Meyerowitz Stories, Frances Ha and Margot at the Wedding — talked to me about making the film and his workflow.

What sort of film did you set out to make?
It’s obviously about a marriage and divorce, but I never really think about a project in specific terms, like a genre or a tone. In the past, I may have started a project thinking it was a comedy but then it morphs into something else. With this, I just tried to tell the story as I initially conceived it, and then as I discovered it along the way. While I didn’t think about tone in any general sense, I became aware as I worked on it that it had all these different tones and genre elements. It had this flexibility, and I just stayed open to all those and followed them.

I heard that you were discussing this with Adam Driver and Scarlett Johansson as you wrote the script. Is that true?
Yes, but it wasn’t daily. I’d reached out to both of them before I began writing it, and luckily they were both enthusiastic and wanted to do it, so I had them as an inspiration and guide as I wrote. Periodically, we’d get together and discuss it and I’d show them some pages to keep them in the loop. They were very generous with conversations about their own lives, their characters. My hope was that when I gave them the finished script it would feel both new and familiar.

What did they bring to the roles?
They were so prepared and helped push for the truth in every scene. Their involvement from the very start did influence how I wrote their roles. Nicole has that long monologue and I don’t know if I’d have written it without Scarlett’s input and knowing it was her. Adam singing “Being Alive” came out of some conversations with him. They’re very specific elements that come from knowing them as people.

You reunited with Irish DP Robbie Ryan, who shot The Meyerowitz Stories. Talk about how you collaborated on the look and why you shot on film?
I grew up with film and feel it’s just the right medium for me. We shot The Meyerowitz Stories on Super 16, and we shot this on 35mm, and we had to deal with all these office spaces and white rooms, so we knew there’d be all these variations on white. So there was a lot of discussion about shades and the palette, along with the production and costume designers, and also how we were going to shoot these confined spaces, because it was what the story required.

You shot on location in New York and LA. How tough was the shoot?
It was challenging, but mainly because of the sheer length of many of the scenes. There’s a lot of choreography in them, and some are quite emotional, so everyone had to really be up for the day, every day. There was no taking it easy one day. Every day felt important for the movie.

Where did you do the post?
All in New York. I have an office in the Village where I cut my last two films, and we edited there again. We mixed on the Warner stage, where I’ve mixed most of my movies. We recorded the music and orchestra in LA.

Do you like the post process?
I really love it. It’s the most fun and the most civilized part of the whole process. You go to work and work on the film all day, have dinner and go home. Writing is always a big challenge, as you’re making it up as you go along, and it can be quite agonizing. Shooting can be fun, but it’s also very stressful trying to get everything you need. I love working with the actors and crew, but you need a high level of energy and endurance to get through it. So then post is where you can finally relax, and while problems and challenges always arise, you can take time to solve them. I love editing, the whole rhythm of it, the logic of it.


Talk about editing with Jennifer Lame. How did that work?
We work so well together, and our process really starts in the script stage. I’ll give her an early draft to get her feedback and, basically, we start editing the script. We’ll go through it and take out anything we know we’re not going to use. Then during the shoot she’ll sometimes come to the set, and we’ll also talk twice a day. We’ll discuss the day’s work before I start, and then at lunch we’ll go over the previous day’s dailies. So by the time we sit down to edit, we’re really in sync about the whole movie. I don’t work off an assembly, so she’ll put together stuff for herself to let me know a scene is working the way we designed it. If there’s a problem, she’ll let me know what we need.

What were the big editing challenges?
Besides the general challenges of getting a scene right, I think for some of the longer ones it was all about finding the right rhythm and pacing. And it was particularly true of this film that the pace of something early on could really affect something later. Then you have to fix the earlier bit first, and sometimes it’s the scene right before. For instance, the scene where Charlie and Nicole have a big argument that turns into a very emotional fight is really informed by the courtroom scene right before it. So we couldn’t get it right until we’d got the courtroom scene right.

A lot of directors do test screenings. Do you?
No, I have people I show it to and get feedback, but I’ve never felt the need for testing.

VFX play a role. What was involved?
The Artery did them. For instance, when Adam cuts his arm we used VFX in addition to the practical effects, and then there’s always cleanup.

Talk about the importance of sound to you as a filmmaker, as it often gets overlooked in this kind of film.
I’m glad you said that because that’s so true, and this doesn’t have obvious sound effects. But the sound design is quite intricate, and Chris Scarabosio (working out of Skywalker Sound), who did Star Wars, did the sound design and mix; he was terrific.

A lot of it was taking the real-world environments in New York and LA and building on that, and maybe taking some sounds out and playing around with all the elements. We spent a lot of time on it, as both the sound and image should be unnoticed in this. If you start thinking, “That’s a cool shot or sound effect,” it takes you out of the movie. Both have to be emotionally correct at all times.

Where did you do the DI and how important is it to you?
We did it at New York’s Harbor Post with colorist Marcy Robinson, who’s done several of my films. It’s very important, but we didn’t do anything too extreme, as there’s not a lot of leeway for changing the look that much. I’m very happy with the look and the way it all turned out.

Congratulations on all the Oscar noms. How important is that for a film like this?
It’s a great honor. We’re all still the kids who grew up watching movies and the Oscars, so it’s a very cool thing. I’m thrilled.

What’s next?
I don’t know. I just started writing, but nothing specific yet.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Bill Baggelaar promoted at Sony Pictures, Sony Innovation Studios

Post industry veteran Bill Baggelaar has been promoted to executive VP and CTO, technology development at Sony Pictures and executive VP and general manager of Sony Innovation Studios. Prior to joining Sony Pictures almost nine years ago, he spent 13 years at Warner Bros. as VP of technology/motion picture imaging and head of technology/feature animation. His new role will start in earnest on April 1.

“I am excited for this new challenge that combines roles as both CTO of Sony Pictures and GM of Sony Innovation Studios,” says Baggelaar. “The CTO’s office works both inside the studio and with the industry to develop key standards and technologies that can be adopted across the various lines of business. Sony Innovation Studios is developing groundbreaking tools, methods and techniques for realtime volumetric virtual production — or, as we like to say, ‘the future of movie magic’ — with a level of fidelity and quality that is best in class. With the technicians, engineers and artisans at Sony Innovation Studios combined with our studio technology team, we will be able to bring new experiences and technologies to all areas of production and delivery.”

Baggelaar’s promotion is part of a larger announcement by Sony, which involves a new team established within Sony Pictures — the Entertainment Innovation & Technology Group, Sony Pictures Entertainment, which encompasses the following departments: Sony Innovation Studios (SIS), Technology Development, IP Acceleration and Branded Integration.

The group is headed by Yasuhiro Ito, executive VP, Entertainment Innovation & Technology Group. Don Eklund will be leaving his post as EVP/CTO of technology development at the end of March. Eklund has had a long history with SPE and has been in his current role since 2017, establishing the foundation of the studio’s technology development activities.

“This new role combines my years of experience in production, post and VFX; my work with the broader industry and organizations; and my work with Sony companies around the world over the past eight and a half years — along with my more recent endeavors into virtual production — to create a truly unique opportunity for technical innovation that only Sony can provide,” concludes Baggelaar, who will report directly to Ito.

Kevin Lau heads up advertising, immersive at Digital Domain

Visual effects studio Digital Domain has brought on Kevin Lau as executive creative director of advertising, games and new media. In this newly created position, Lau will oversee all short-form projects and act as a creative partner for agencies and brands.

Lau brings over 18 years of ad-based visual effects and commercial production experience, working on campaigns for brands such as Target, Visa and Sprint.

Most recently, he was the executive creative director and founding partner at Timber, an LA-based studio focused on ads (GMC, Winter Olympics) and music videos (Kendrick Lamar’s Humble). Prior to that, he held creative director positions at Mirada, Brand New School and Superfad. Throughout his career, his work has been honored with multiple awards including Clios, AICP Awards, MTV VMAs and a Cannes Gold Lion for Sprint’s “Now Network” campaign via Goodby.

Lau, who joins Digital Domain EPs Nicole Fina and John Canning as they continue to build the studio’s short-form business, will help unify the vision for the advertising, games and new media/experiential groups, promoting a consistent voice across campaigns.

Lau joins the team as the new media group prepares to unveil its biggest project to date: Time’s The March, a virtual reality recreation of the 1963 March on Washington for Jobs and Freedom. Digital Domain’s experience with digital humans will play a major role in the future of both groups as they continue to build on the photoreal cinematics and in-game characters previously created for Activision, Electronic Arts and Ubisoft.

Quantum to acquire Western Digital’s ActiveScale business  

Quantum has entered into an agreement with Western Digital Technologies, a subsidiary of Western Digital Corp., to acquire its ActiveScale object storage business. The addition of the ActiveScale product line and engineers brings object storage software and erasure coding technology to Quantum’s portfolio and helps the company to expand in the object storage market.

The acquisition will extend the company’s role in storing and managing video and unstructured data using a software-defined approach. The transaction is expected to close by March 31, 2020. Financial terms of the transaction were not disclosed.

What are the benefits of object storage software?
• Scalability: Allows users to store, manage and analyze billions of objects and exabytes of capacity.
• Durability: ActiveScale object storage offers up to 19 nines of data durability using patented erasure coding protection technologies.
• Easy to manage at scale: Because object storage has a flat namespace (compared to a hierarchical file system structure), managing billions of objects and hundreds of petabytes of capacity is easier than using traditional network-attached storage. This, according to Quantum, reduces operational expenses.
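
To make the flat-namespace idea concrete, here is a minimal sketch of storing and listing assets through an S3-compatible API, which object stores in this class typically expose. The endpoint, bucket name and credentials below are placeholders for illustration, not ActiveScale specifics.

```python
# Minimal sketch: a flat object namespace via an S3-compatible API.
# boto3 is the standard AWS SDK for Python; the endpoint is hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# There is no directory tree: "finished/ep101/master.mxf" is just a key.
with open("master.mxf", "rb") as f:
    s3.put_object(Bucket="archive", Key="finished/ep101/master.mxf", Body=f)

# Prefix listing mimics folders without any hierarchical file system underneath.
resp = s3.list_objects_v2(Bucket="archive", Prefix="finished/ep101/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Because every key lives in one flat namespace, listing billions of objects is a metadata query rather than a filesystem walk, which is where much of the operational saving comes from.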

Quantum has been offering object storage and selling and supporting the ActiveScale product line for over five years. Object storage can be used as an active-archive tier of storage: StorNext file storage handles high-performance ingest and processing of data, object storage acts as a lower-cost online content repository, and tape acts as the lowest-cost cold storage tier.

For M&E, object storage is used as a long-term content repository for video content, in movie and TV production, in sports video, and even for large corporate video departments. Those working in movie and TV production require very high-performance ingest, editing, processing and rendering of their video files, which is typically done with a file system like StorNext. Once content is finished, it is preserved in an object store, with StorNext data management handling the data movement between the file and object tiers.
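
The tiering just described boils down to simple policy decisions about where each asset should live. Here is a hypothetical sketch of that active-archive logic; the tier names and thresholds are illustrative, not Quantum’s actual StorNext policy engine.

```python
# Hypothetical active-archive policy: hot assets stay on the file tier,
# finished assets move to the object tier, and long-idle assets drop to tape.
import time

DAY = 86400  # seconds in a day

def choose_tier(asset):
    """Return the storage tier an asset should live on."""
    idle = time.time() - asset["last_accessed"]
    if asset["status"] == "in_production":
        return "file"    # e.g. a StorNext file system, for editing performance
    if idle < 90 * DAY:
        return "object"  # online archive: lower cost, still instantly accessible
    return "tape"        # cold tier: cheapest per terabyte

assets = [
    {"name": "ep101_master", "status": "finished",
     "last_accessed": time.time() - 10 * DAY},
    {"name": "ep102_cut", "status": "in_production",
     "last_accessed": time.time()},
]

for a in assets:
    print(a["name"], "->", choose_tier(a))  # ep101_master -> object, ep102_cut -> file
```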

“Object storage software is an obvious fit with our strategy, our go-to-market focus and within our technology portfolio,” says Jamie Lerner, president/CEO of Quantum. “We are committed to the product, and to making ActiveScale customers successful, and we look forward to engaging with them to solve their most pressing business challenges around storing and managing unstructured data. With the addition of the engineers and scientists that developed the erasure-coded object store software, we can deliver on a robust technical roadmap, including new solutions like an object store built on a combination of disk and tape.”

Colorist Chat: Light Iron supervising colorist Ian Vertovec

“As colorists, we are not just responsible for enhancing each individual shot based on the vision of the filmmakers, but also for helping to visually construct an emotional arc over time.”

NAME: Ian Vertovec

TITLE: Supervising Colorist

COMPANY: Light Iron

CAN YOU DESCRIBE YOUR ROLE IN THE COMPANY?
A Hollywood-based collaborator for motion picture finishing, with a studio in New York City as well.

GLOW

AS A COLORIST, WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
As colorists, we are not just responsible for enhancing each individual shot based on the vision of the filmmakers, but also for helping to visually construct an emotional arc over time. For example, a warm scene feels warmer coming out of a cool scene as opposed to another warm scene. We have the ability and responsibility to nudge the audience emotionally over the course of the film. Using color in this way makes color grading a bit like a cross between photography and editing.

ARE YOU SOMETIMES ASKED TO DO MORE THAN JUST COLOR ON PROJECTS?
Once in a while, I’ll be asked to change the color of an object, like change a red dress to blue or a white car to black. While we do have remarkable tools at our disposal, this isn’t quite the correct way to think about what we can do. Instead of being able to change the color of objects, it’s more like we can change the color of the light shining on objects. So instead of being able to turn a red dress to blue, I can change the light on the dress (and only the dress) to be blue. So while the dress will appear blue, it will not look exactly how a naturally blue dress would look under white light.
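
A toy arithmetic example makes Vertovec’s distinction clear. If a rendered pixel is roughly reflectance multiplied by illumination, then tinting the light on a red dress can never reproduce a genuinely blue dress under white light. The numbers below are illustrative reflectance values, not real grading math from any particular system.

```python
# Worked toy example: grading shifts the light, not the object's reflectance.
red_dress = (0.8, 0.1, 0.1)    # reflects mostly red
blue_dress = (0.1, 0.1, 0.8)   # reflects mostly blue
white_light = (1.0, 1.0, 1.0)
blue_light = (0.2, 0.2, 1.0)   # a "blue" grade applied over the dress

def appearance(reflectance, light):
    # A pixel's rendered color is approximately reflectance * illumination.
    return tuple(round(r * l, 2) for r, l in zip(reflectance, light))

print(appearance(red_dress, blue_light))    # (0.16, 0.02, 0.1) - dark, desaturated
print(appearance(blue_dress, white_light))  # (0.1, 0.1, 0.8)   - a true blue dress
# The graded red dress reads as blue-ish but never matches a real blue dress.
```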

WHAT’S YOUR FAVORITE PART OF THE JOB?
There is a moment with new directors, after watching the first finished scene, when they realize they have made a gorgeous-looking movie. It’s their first real movie, which they never fully saw until that moment — on the big screen, crystal clear and polished — and it finally looks how they envisioned it. They are genuinely proud of what they’ve done, as well as appreciative of what you brought out in their work. It’s an authentic filmmaking moment.

WHAT’S YOUR LEAST FAVORITE?
Working on multiple jobs at a time and long days can be very, very draining. It’s important to take regular breaks to rest your eyes.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Something with photography, VFX or design, maybe.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I was doing image manipulation in high school and college before I even knew what color grading was.

Just Mercy

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Just Mercy, Murder Mystery, GLOW, What We Do in the Shadows and Too Old to Die Young.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
Sometimes your perspective and a filmmaker’s perspective for a color grade can be quite divergent. There can be a temptation to take the easy way and either defer or overrule. I find tremendous value in actually working out those differences and seeing where and why you are having a difference of opinion.

It can be a little scary, as nobody wants to be perceived as confrontational, but if you can civilly explain where and why you see a different approach, the result will almost always be better than what either of you thought possible in the first place. It also allows you to work more closely and understand each other’s creative instincts more accurately. Those are the moments I am most proud of — when we worked through an awkward discord and built something better.

WHERE DO YOU FIND INSPIRATION?
I have a fairly extensive library of Pinterest boards — mostly paintings — but it’s real life and being in the moment that I find more interesting. The color of a green leaf at night under a sodium vapor light, or how sunlight gets twisted by a plastic water bottle — that is what I find so cool. Why ruin that with an Insta post?

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
FilmLight Baselight’s Base Grade, FilmLight Baselight’s Texture Equalizer and my Red Hydrogen.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
Instagram mostly.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
After working all day on a film, I often don’t feel like watching another movie when I get home because I’ll just be thinking about the color. I usually unwind with a video game, book or podcast. The great thing about a book or a video game is that it demands your full attention. You can’t simultaneously browse social media or the news or think about work. You have to be 100% in the moment, and it really resets your brain.

Nomad Editorial hires eclectic editor Dan Maloney

Nomad Editing Company has added editor Dan Maloney to its team. Maloney is best known for his work cutting wry, eclectic comedy spots in addition to more emotional content. While his main tool is Avid Media Composer, he is also well-versed in Adobe Premiere.

“I love that I get to work in so many different styles and genres. It keeps it all interesting,” he says.

Prior to joining Nomad, Maloney cut at studios such as Whitehouse Post, Cut+Run, Spot Welders and Deluxe’s Beast. Throughout his career, Maloney has used his eye for composition on a wide range of films, documentaries, branded content and commercials, including the Tide Interview spot that debuted at Super Bowl XLII.

“My editing style revolves mostly around performance and capturing that key moment,” he says. “Whether I’m doing a comedic or dramatic piece, I try to find that instance where an actor feels ‘locked in’ and expand the narrative out from there.”

According to Nomad editor/partner Jim Ulbrich, “Editing is all about timing and pace. It’s a craft and you can see Dan’s craftsmanship in every frame of his work. Each beat is carefully constructed to perfection across multiple mediums and genres. He’s not simply a comedy editor, visual storyteller, or doc specialist. He’s a skilled craftsman.”

Director James Mangold on Oscar-nominated Ford v Ferrari

By Iain Blair

Filmmaker James Mangold has been screenwriting, producing and directing for years. He has made films about country legends (Walk the Line), cowboys (3:10 to Yuma), superheroes (Logan) and cops (Cop Land), and has tackled mental illness (Girl, Interrupted) as well.

Now he’s turned his attention to race car drivers and endurance racing with his movie Ford v Ferrari, which has earned Mangold an Oscar nomination for Best Picture. The film also received nods for its editing, sound editing and sound mixing.

James Mangold (beard) on set.

The high-octane drama was inspired by a true-life friendship that forever changed racing history. In 1959, Carroll Shelby (Matt Damon) is on top of the world after winning the most difficult race in all of motorsports, the 24 Hours of Le Mans. But his greatest triumph is followed quickly by a crushing blow — the fearless Texan is told by doctors that a grave heart condition will prevent him from ever racing again.

Endlessly resourceful, Shelby reinvents himself as a car designer and salesman working out of a warehouse space in Venice Beach with a team of engineers and mechanics that includes hot-tempered test driver Ken Miles (Christian Bale). A champion British race car driver and a devoted family man, Miles is brilliant behind the wheel, but he’s also blunt, arrogant and unwilling to compromise.

After Shelby’s vehicles make a strong showing at Le Mans against Italy’s venerable Enzo Ferrari, Ford Motor Company recruits the firebrand visionary to design the ultimate race car, a machine that can beat even Ferrari on the unforgiving French track. Determined to succeed against overwhelming odds, Shelby, Miles and their ragtag crew battle corporate interference, the laws of physics and their own personal demons to develop a revolutionary vehicle that will outshine every competitor. The film culminates in the historic showdown between the US and Italy at the grueling 1966 24 Hours of Le Mans.

Mangold’s below-the-line talent, many of whom have collaborated with the director before, includes Academy Award-nominated director of photography Phedon Papamichael; film editors Michael McCusker, ACE, and Andrew Buckland; visual effects supervisor Olivier Dumont; and composers Marco Beltrami and Buck Sanders.

L-R: Writer Iain Blair and Director James Mangold

I spoke with Mangold — whose other films include Logan, The Wolverine and Knight and Day — about making the film and his workflow.

You obviously love exploring very different subject matter in every film you make.
Yes, and I do every movie like a sci-fi film — meaning inventing a new world that has its own rules, customs, language, laws of physics and so on, and you need to set it up so the audience understands and they get it all. It’s like being a world-builder, and I feel every film should have that, as you’re entering this new world, whether it’s Walk the Line or The French Connection. And the rules and behavior are different from our own universe, and that’s what makes the story and characters interesting to me.

What sort of film did you set out to make?
Well, given all that, I wanted to make an exciting racing movie about that whole world, but it’s also that it was a moment when racing was free of all things that now turn me off about it. The cars were more beautiful then, and free of all the branding. Today, the cars are littered with all the advertising and trademarks — and it’s all nauseating to me. I don’t even feel like I’m watching a sport anymore.

When this story took place, it was also a time when all the new technology was just exploding. Racing hasn’t changed that much over the past 20 years. It’s just refining and tweaking to get that tiny edge, but back in the ‘60s they were still inventing the modern race car, and discovering aerodynamics and alternate building materials and methods. It was a brand-new world, so there was this great sense of discovery and charm along with all that.

What were the main technical challenges in pulling it all together?
Trying to do what I felt all the other racing movies hadn’t really done — taking the driving out of the CG world and putting it back in the real world, so you could feel the raw power and the romanticism of racing. A lot of that’s down to the particulates in the air, the vibrations of the camera, the way light moves around the drivers — and the reality of behavior when you’re dealing with incredibly powerful machines. So right from the start, I decided we had to build all the race cars; that was a huge challenge right there.

How early on did you start integrating post and all the VFX?
Day one. I wanted to use real cars and shoot the Le Mans and other races in camera rather than using CGI. But this is a period piece, so we did use a lot of CGI for set extensions and all the crowds. We couldn’t afford 50,000 extras, so just the first six rows or so were people in the stands; the rest were digital.

Did you do a lot of previz?
A lot, especially for Le Mans, as it was such a big, three-act sequence with so many moving parts. We used far less for Daytona. We did a few storyboards, and then my second unit director, Darrin Prescott — who has choreographed car chases and races in such movies as Drive, Deadpool 2, Baby Driver and The Bourne Ultimatum — and I planned it out using matchbox cars.

I didn’t want that “previzy” feeling. Even when I do a lot of previz, whether it’s a Marvel movie or like this, I always tell my previz team “Don’t put the camera anywhere it can’t go.” One of the things that often happens when you have the ability to make your movie like a cartoon in a laboratory — which is what previz is — is that you start doing a lot of gimmicky shots and flying the camera through keyholes and floating like a drone, because it invites you to do all that crazy shit. It’s all very show-offy as a director — “Look at me!” — and a turnoff to me. It takes me out of the story, and it’s also not built off the subjective experience of your characters.

This marks your fifth collaboration with DP Phedon Papamichael, and I noticed there are none of the big swooping camera moves or the beauty-shot approach you see in all the car commercials.
Yes, we wanted it to look beautiful, but in a real way. There’s so much technology available now, like gyroscopic setups and arms that let you chase the cars in high-speed vehicles down tracks. You can do so much, so why do you need to do more? I’m conservative that way. My goal isn’t to brand myself through my storytelling tricks.

How tough was the shoot?
It was one of the most fun shoots I’ve ever had, with my regular crew and a great cast. But it was also very grueling, as we were outside a lot, often in 115-degree heat in the desert on blacktop. And locations were big challenges. The original Le Mans course doesn’t exist anymore like it used to be, so we used several locations in Georgia to double for it. We shot the races wide-angle anamorphic with a team of a dozen professional drivers, and with anamorphic you can shoot the cars right up into the lens — just inches away from camera, while they’d be doing 150 mph or 160 mph.

Where did you post?
All on the Fox lot at my offices. We scored at Capitol Records and mixed the score in Malibu at my composer’s home studio. I really love the post, and for me it’s all part of the same process — the same cutting and pasting I do when I’m writing, and even when I’m directing. You’re manipulating all these elements and watching it take form — and particularly in this film, where all the sound design and music and dialogue are all playing off one another and are so key. Take the races. By themselves, they look like nothing. It’s just a car whipping by. The power of it all only happens with the editing.

You had two editors — Michael McCusker and Andrew Buckland. How did that work?
Mike’s been with me for 20 years, so he’s kind of the lead. Mike and Drew take and trade scenes, and they’re good friends so they work closely together. I move back and forth between them, which also gives them each some space. It’s very collaborative. We all want it to look beautiful and elegant and well-designed, but no one’s a slave to any pre-existing ideas about structure or pace. (Check out postPerspective‘s interview with the editing duo here.)

What were the big editing challenges?
It’s a car racing movie with drama, so we had to hit you with adrenalin and then hold you with what’s a fairly procedural and process-oriented film about these guys scaling the corporate wall to get this car built and on the track. Most of that’s dramatic scenes. The flashiest editing is the races, which was a huge, year-long effort. Mike was cutting the previz before we shot a foot, and initially we just had car footage, without the actors, so that was a challenge. It all transformed once we added the actors.

Can you talk about working on the visual effects with Method’s VFX supervisor Olivier Dumont?
He did an incredible job, and no one realizes just how many effects shots there are. They’re really invisible, and that’s what I love — the film feels 100% analog, but of course it isn’t. It’s impossible to build giant race tracks as they were in the ‘60s. But having real foregrounds really helped. We had very few scenes where actors were wandering around in a green void, like on so many movies now. So you’re always anchored in the real world, and then all the set extensions were in softer focus or backlit.

This film really lends itself to sound.
Absolutely, as every car has its own signature sound, and as we cut rapidly from interiors to exteriors, from cars to pits and so on. The perspective aural shifts are exciting, but we also tried to keep it simple and not lose the dramatic identity of the story. We even removed sounds in the mix if they weren’t important, so we could focus on what was important.

Where did you do the DI, and how important is it to you?
At Efilm with Skip Kimball (working on Blackmagic DaVinci Resolve), and it was huge on this film. Dealing with the 24-hour race, the changing light, and the rain and night scenes, and having to match five different locations was a nightmare. So we worked on all that, and on the overall look, from early on in the edit.

What’s next?
Don’t know. I’ve got two projects I’m working on. We’ll see.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Talking with Franki Ashiruka of Nairobi’s Africa Post Office

By Randi Altman

After two decades of editing award-winning film and television projects for media companies throughout Kenya and around the world, Franki Ashiruka opened Africa Post Office, a standalone post house in Nairobi, Kenya. The studio provides color grading, animation, visual effects, motion graphics, compositing and more. In addition, they maintain a database of the Kenyan post production community that allows them to ramp up with the right artists when the need arises.

Here she talks about the company, its workflow and being a pioneer in Nairobi’s production industry.

When did you open Africa Post Office, and what was your background prior to starting this studio?
Africa Post Office (APO) opened its doors in February 2017. Prior to starting APO, I was a freelance editor with plenty of experience working with well-established media houses such as Channel 4 (UK), Fox International Channels (UK), 3D Global Leadership (Nigeria), PBS (USA), Touchdown (New Zealand), Greenstone Pictures (New Zealand) and Shadow Films (South Africa).

In terms of Kenya-based projects, I’ve worked with a number of production houses including Quite Bright Films, Fat Rain Films, Film Crew in Africa, Mojo Productions, Multichoice, Zuku, Content House and Ginger Ink Films.

I imagine female-run, independent studios in Africa are rare?
On the contrary, Kenya has reached a point where more and more women are emerging as leaders of their own companies. I actually think there are more women-led film production companies than male-led ones. The real challenge was that before APO, there was nothing quite like it in Nairobi. Historically, video production here was very vertical — if you shot something, you’d need to also manage post within whatever production house you were working in. There were no standalone post houses until us. That said, with my experience, even though it was hugely daunting, I never thought twice about starting APO. It is what I have always wanted to do, and if being the first company of our kind didn’t intimidate me, being female was never going to be a hindrance.

L-R: Franki Ashiruka, Kevin Kyalo, Carole Kinyua and Evans Wenani

What is the production and post industry like in Nairobi? 
When APO first opened, the workload was commercial-heavy, but in the last two years that has steadily declined. We’re seeing this gap filled by documentary films, corporate work and television series. Feature films are also slowly gaining traction and becoming the focus of many up-and-coming filmmakers.

What services do you provide, and what types of projects do you work on?
APO has a proven track record of successful delivery on hundreds of film and video projects for a diverse range of clients and collaborators, including major corporate entities, NGOs, advertising and PR agencies, and television stations. We also have plenty of experience mastering according to international delivery standards. We’re proud to house a complete end-to-end post ecosystem of offline and online editing suites.

Most importantly, we maintain a very thorough database of the post production community in Kenya.
This is of great benefit to our clients who come to us for a range of services including color grading, animation, visual effects, motion graphics and compositing. We are always excited to collaborate with the right people and get additional perspectives on the job at hand. One of our most notable collaborators is Ikweta Arts (Avatar, Black Panther, Game of Thrones, Hacksaw Ridge), owned and run by Yvonne Muinde. They specialize in providing VFX services with a focus on quality matte painting/digital environments, art direction, concept and post visual development art. We also collaborate with Keyframe (L’Oréal, BMW and Mitsubishi Malaysia) for motion graphics and animations.

Can you name some recent projects and the work you provided?
We are incredibly fortunate to be able to select projects that align with our beliefs and passions.

Our work on the short film Poacher (directed by Tom Whitworth) won us three global Best Editing Awards from the Short to the Point Online Film Festival (Romania, 2018), Feel the Reel International Film Festival (Glasgow, 2018) and Five Continents International Film Festival (Venezuela, 2019).

Other notable work includes three feature documentaries for the Big Story segment on China Global Television Network, directed by Juan Reina (director of the Netflix Original film Diving Into the Unknown); Lion’s Den (Quite Bright Films), an adaptation of ABC’s Shark Tank; and The Great Kenyan Bake Off (Showstopper Media), adapted from the BBC series The Great British Bake Off. We also worked on Disconnect, a feature film produced by Kenya’s Tosh Gitonga (Nairobi Half Life), a director who is passionate about taking Africa’s budding film industry to the next level. We have also worked on a host of television commercials for clients extending across East Africa, including Kenya, Rwanda, South Sudan and Uganda.

What APO is most proud of though, is our clients’ ambitions and determination to contribute toward the growth of the African film industry. This truly resonates with APO’s mantra.

You recently added a MAM and some other gear. Can you talk about the need to upgrade?
Bringing on the EditShare EFS 200 nodes has significantly improved the collaborative possibilities of APO. We reached a point where we were quickly growing, and the old approach just wasn’t going to cut it.

Prior to centralizing our content, projects lived on individual hard disks. This meant that if I was editing and needed my assistant to find me a scene or a clip, or I needed VFX on something, I would have to export individual clips to different workstations. This created workflow redundancies and increased potential for versioning issues, which is something we couldn’t afford to be weighed down with.

The remote capabilities of the EditShare system were very appealing as well. Our color grading collaborator, Nic Apostoli of Comfort and Fame, is based in Cape Town, South Africa. From there, he can access the footage on the server and grade it while the client reviews with us in Nairobi. Flow media asset management also helps in this regard. We’re able to effectively organize and index clips, graphics, versions, etc. into clearly marked folders so there is no confusion about what media should be used. Collaboration among the team members is now seamless regardless of their physical location or tools used, which include the Adobe Creative Suite, Foundry Nuke, Autodesk Maya and Maxon Cinema 4D.

Any advice for others looking to break out on their own and start a post house?
Know what you want to do, and just do it! Thanks Nike …


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

The 71st NATAS Technology & Engineering Emmy Award winners

The National Academy of Television Arts & Sciences (NATAS) has announced the recipients of the 71st Annual Technology & Engineering Emmy Awards. The event will take place in partnership with the National Association of Broadcasters, during the NAB Show on Sunday, April 19 in Las Vegas.

The Technology & Engineering Emmy Awards are awarded to a living individual, a company or a scientific or technical organization for developments and/or standardization involved in engineering technologies that either represent so extensive an improvement on existing methods or are so innovative in nature that they materially have affected television.

A committee of engineers working in television considers technical developments in the industry and determines which, if any, merit an award.

“The Technology & Engineering Emmy Award was the first Emmy Award issued in 1949, and it laid the groundwork for all the other Emmys to come,” says Adam Sharp, CEO/president of NATAS. “We are especially excited to be honoring Yvette Kanouff with our Lifetime Achievement Award in Technology & Engineering.”

Kanouff has held CTO and president roles at various companies in the cable and media industry. Over the years, she has spearheaded transformational technologies, such as video on demand, cloud DVR, digital and on-demand advertising, streaming security and privacy.

And now the Awards recipients:

Pioneering System for Live Performance-Based Animation Using Facial Recognition
– Adobe

HTML5 Development and Deployment of a Full TV Experience on Any Device
– Apple
– Google
– LG
– Microsoft
– Mozilla
– Opera
– Samsung

Pioneering Public Cloud-Based Linear Media Supply Chains
– AWS
– Discovery
– Evertz
– Fox Neo (Walt Disney Television)
– SDVI

Pioneering Development of Large Scale, Cloud Served, Broadcast Quality,
Linear Channel Transmission to Consumers
– Sling TV
– Sony PlayStation Vue
– Zattoo

Early Development of HSM Systems That Created a Pivotal Improvement in Broadcast Workflows
– Dell (Isilon)
– IBM
– Masstech
– Quantum

Pioneering Development and Deployment of Hybrid Fiber Coax Network Architecture
– Cable Labs

Pioneering Development of the CCD Image Sensor
– Bell Labs
– Michael Tompsett

VoCIP (Video over Bonded Cellular Internet)
– Aviwest
– Dejero
– LiveU
– TVU Networks

Ultra-High Sensitivity HDTV Camera
– Canon
– Flovel

Development of Synchronized Multi-Channel Uncompressed Audio Transport Over IP Networks
– ALC NetworX
– Audinate
– Audio Engineering Society
– Kevin Gross
– QSC
– Telos Alliance
– Wheatstone


Review: HP’s ZBook G6 mobile workstation

By Brady Betzel

In a year that’s seen AMD reveal an affordable 64-core processor with its Threadripper 3, it appears as though we are picking up steam toward next-level computing.

Apple finally released its much-anticipated Mac Pro (which comes with a hefty price tag for the 1.5TB upgrade), and custom-build workstation companies — like Boxx and Puget Systems — can customize good-looking systems to fit any need you can imagine. Additionally, over the past few months, I have seen mobile workstations leveling the playing field with their desktop counterparts.

HP is well-known in the M&E community for its powerhouse workstations. Since I started my career, I have worked on either a Mac Pro or an HP. Both have their strong points. However, for workstation users who must be able to travel with their systems, there have always been some technical abilities you had to give up in exchange for a smaller footprint. That is, until now.

The newly released HP ZBook 15 G6 is the rising tide that will float all the boats in the mobile workstation market. I know I’ve said it before, but the classification of “workstation” is technically much more than a term companies throw around. Systems with workstation-level classification (at least from HP) are meant to be powered on and run at high levels 24 hours a day, seven days a week, 365 days a year.

They are built with high-quality, enterprise-level components, such as ECC (error correcting code) memory. ECC memory will self-correct errors that it sees, preventing things like blue screens of death and other screen freezes. ECC memory comes at a cost, and that is why these workstations are priced a little higher than a standard computer system. In addition, the warranties are a little more inclusive — the HP ZBook 15 G6 comes with a standard three-year/on-site service warranty.
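
To illustrate what “self-correct” means, here is a toy sketch of single-error correction using a Hamming(7,4) code. Real ECC DIMMs use wider SECDED codes over 64-bit words; this shows only the principle, not any vendor’s implementation.

```python
# Toy illustration of how ECC memory can correct a single flipped bit.
def encode(d):  # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):  # c: 7-bit codeword, possibly with one bit flipped
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 means no error detected
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

word = [1, 0, 1, 1]
code = encode(word)
code[4] ^= 1                  # simulate a stray bit flip in memory
recovered = decode(code)
assert recovered == word      # the error is corrected silently
print("recovered:", recovered)
```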

Beyond the “workstation” classification, the ZBook 15 G6 is amazingly powerful, brutally strong and incredibly colorful and bright. But what really matters is under the hood. I was sent an HP ZBook 15 G6 that retails for $4,096 and contains the following specs:
– Intel Xeon E-2286M (eight cores/16 threads — 2.4GHz base/5GHz Turbo)
– Nvidia Quadro RTX 3000 (6GB VRAM)
– 15.6-inch UHD HP DreamColor display, anti-glare, WLED backlit, 600 nits, 100% DCI-P3
– 64GB DDR4 2667MHz
– 1TB PCIe Gen 3 x4 NVMe SSD TLC
– FHD webcam 1080p plus IR camera
– HP collaboration keyboard with dual point stick
– Fingerprint sensor
– Smart Card reader
– Intel Wi-Fi 6 AX 200, 802.11ac 2×2 +BT 4.2 combo adapter (vPro)
– HP long-life battery four-cell 90 Wh
– Three-year limited warranty

The ZBook 15 G6 is a high-end mobile workstation with a price that reflects it. However, as I said earlier, true workstations are built to withstand constant use and, in this case, abuse. The ZBook 15 G6 has been designed to pass up to 21 extensive MIL-STD 810G tests, which is essentially worst-case-scenario testing: drop testing from around four feet, sand and dust testing, radiation testing (the sun beating down on the laptop for an extended period) and much more.

The exterior of the G6 is made of aluminum and built to withstand abuse. The latest G6 is a little bulky/boxy, in my opinion, but I can see why it would hold up to some bumps and bruises, all while working at blazingly fast speeds, so bulk isn’t a huge issue for me. Because of that bulk, you can imagine that this isn’t the lightest laptop either. It weighs in at 5.79 pounds for the lowest end and measures 1 inch by 14.8 inches by 10.4 inches.

On the bottom of the workstation is an easy-to-access panel for performing repairs and upgrades yourself. I really like the bottom compartment. I opened it and noticed I could throw in an additional NVMe drive and an SSD if needed. You can also access memory here. I love this because not only can you perform easy repairs yourself, but you can perform upgrades or part replacements without voiding your warranty on the original equipment. I’m glad to see that HP kept this in mind.

The keyboard is smaller than a full-size version but has a number keypad, which I love using when typing in timecodes. It is such a time-saver for me. (I credit entering in repair order numbers when I fixed computers at Best Buy as a teenager.) On the top of the keyboard are some handy shortcuts if you do web conferences or calls on your computer, including answering and ending calls. The Bang & Olufsen speakers are some of the best laptop speakers I’ve heard. While they aren’t quite monitor-quality, they do have some nice sound on the low end that I was able to fine-tune in the Bang & Olufsen audio control app.

Software Tests
All right, enough of the technical specs. Let’s get on to what people really want to know — how the HP ZBook 15 G6 performs while using apps like Blackmagic’s DaVinci Resolve and Adobe Premiere Pro. I used sample Red and Blackmagic Raw footage that I use a lot in testing. You can grab the Red footage here and the BRaw footage here. (Keep in mind you will need to download the BRaw software to edit with BRaw inside of Adobe products, which you can find here.)

Performance monitor while exporting in Resolve with VFX.

For testing in Resolve and Premiere, I strung out one minute of 4K, 6K and 8K Red media in one sequence and the 4608x2592 4K and 6K BRaw media in another. In the middle of my testing, Resolve got a major Red SDK update that allows for better realtime playback of Red raw files if you have an Nvidia CUDA-based GPU.

First up is Resolve 16.1.1 and then Resolve 16.1.2. Both sequences are set to UHD (3840×2160) resolution. One sequence of each codec contains just color correction, while another of each codec contains effects and color correction. The Premiere sequence with color and effects contains basic Lumetri color correction, noise reduction (50) and a Gaussian blur with settings of 0.4. In Resolve, the only difference in the color and effects sequence is that the noise reduction is spatial and set to Enhanced, Medium and 25/25.

In Resolve, the 4K Red media would play in realtime, while the 6K (RedCode 3:1) would jump down to about 14fps to 15fps, and the 8K (RedCode 7:1) would play at 10fps at full resolution with just color correction. With effects, the 4K media would play at 20fps, the 6K at 3fps and the 8K at 10fps. The Blackmagic Raw video would play in realtime with just color correction and at around 3fps to 4fps with effects.

This is where I talk about just how loud the fans in the ZBook 15 G6 can get. When running exports and benchmarks, the fans are noticeable and a little distracting. Obviously, these are processor- and GPU-intensive tests, but still, the fans were noticeable. However, the bottom of the mobile workstation did not get terribly hot, unlike the MacBook Pros I’ve tested before. So my lap was not on fire.

In my export testing, I used those same sequences as before, exporting from Adobe Premiere Pro 2020. I exported UHD files using Adobe Media Encoder in different containers and codecs: H.264 (MOV), H.265 (MOV), ProResHQ, DPX, DCP and MXF OP1a (XDCAM). The MXF OP1a export was 1920x1080p.
Here are my results:

Red (4K, 6K, 8K)
– Color only: H.264 – 5:27; H.265 – 4:45; ProResHQ – 4:29; DPX – 3:37; DCP – 10:38; MXF OP1a – 2:31
– Color, noise reduction (50), Gaussian blur 0.4: H.264 – 4:56; H.265 – 4:56; ProResHQ – 4:36; DPX – 4:02; DCP – 8:20; MXF OP1a – 2:41

Blackmagic Raw
– Color only: H.264 – 2:05; H.265 – 2:19; ProResHQ – 2:04; DPX – 3:33; DCP – 4:05; MXF OP1a – 1:38
– Color, noise reduction (50), Gaussian blur 0.4: H.264 – 1:59; H.265 – 2:22; ProResHQ – 2:07; DPX – 3:49; DCP – 3:45; MXF OP1a – 1:51
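
Since the strung-out test sequences run about a minute, these timings translate directly into slower-than-realtime factors. Here is a quick hypothetical helper for reading the tables above, assuming a 60-second source:

```python
# Convert the "m:ss" export times above into slower-than-realtime factors.
def realtime_factor(mmss, source_seconds=60):
    minutes, seconds = mmss.split(":")
    elapsed = int(minutes) * 60 + int(seconds)
    return elapsed / source_seconds  # 1.0 would be a realtime export

red_color_only = {"H.264": "5:27", "ProResHQ": "4:29", "MXF OP1a": "2:31"}
for codec, t in red_color_only.items():
    print(f"{codec}: {realtime_factor(t):.2f}x slower than realtime")
# H.264: 5.45x, i.e., one minute of mixed Red media took about five and a
# half minutes to export with color correction only.
```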

What is surprising is that when adding effects like noise reduction and a Gaussian blur in Premiere, the export times stayed similar. While using the ZBook 15 G6, I noticed my export times improved when I upgraded driver versions, so I re-did my tests with the latest Nvidia drivers to make sure I was consistent. The drivers also solved an issue in which Resolve wasn’t reading BRaw properly, so remember to always research drivers.

The Nvidia Quadro RTX 3000 really pulled its weight when editing and exporting in both Premiere and Resolve. In fact, in previous versions of Premiere, I noticed that the GPU was not really being used as well as it should have been. With the Premiere Pro 2020 upgrade it seems like Adobe really upped its GPU usage game — at some points I saw 100% GPU usage.

In Resolve, I performed similar tests, but instead of ProResHQ I exported a DNxHR QuickTime file, and instead of a DCP I exported an IMF package. For the most part, these are stock exports from the Deliver page of Resolve, except that I set levels to Video and forced debayer and sizing to highest quality. Here are my results from Resolve versions 16.1.1 and 16.1.2 (16.1.2 results are in parentheses).

Red (4K, 6K, 8K)
– Color only: H.264 – 2:17 (2:31); H.265 – 2:23 (2:37); DNxHR – 2:59 (3:06); IMF – 6:37 (6:40); DPX – 2:48 (2:45); MXF OP1a – 2:45 (2:33)
– Color, noise reduction (Spatial, Enhanced, Medium, 25/25), Gaussian blur 0.4: H.264 – 5:00 (5:15); H.265 – 5:18 (5:21); DNxHR – 5:25 (5:02); IMF – 5:28 (5:11); DPX – 5:23 (5:02); MXF OP1a – 5:20 (4:54)

Blackmagic Raw
– Color only: H.264 – 0:26 (0:25); H.265 – 0:31 (0:30); DNxHR – 0:50 (0:50); IMF – 3:51 (3:36); DPX – 0:46 (0:46); MXF OP1a – 0:23 (0:22)
– Color, noise reduction (Spatial, Enhanced, Medium, 25/25), Gaussian blur 0.4: H.264 – 7:51 (7:53); H.265 – 7:45 (8:01); DNxHR – 7:53 (8:00); IMF – 8:13 (7:56); DPX – 7:54 (8:18); MXF OP1a – 7:58 (7:57)

Interesting to note: Exporting Red footage with color correction only was significantly faster from Resolve, but for Red footage with effects applied, export times were similar between Resolve and Premiere. With the CUDA Red SDK update to Resolve in 16.1.2, I thought I would see a large improvement, but I didn’t. I saw an approximate 10% increase in playback but no improvement in export times.

Puget

Puget Systems has some great benchmarking tools, so I reached out to Matt Bach, Puget Systems’ senior labs technician, about my findings. He suggested that the mobile Xeon could still be the bottleneck for Resolve; in his testing, he saw a larger increase in speed with AMD Threadripper 3- and Intel i9-based systems. Regardless, I am kind of going deep on realtime playback of 8K Red Raw media on a mobile workstation — what a time we are in. Nonetheless, Blackmagic Raw footage was insanely fast when exporting out of Resolve, while export time for the Blackmagic Raw footage with effects was higher than I expected. Resolve made consistent use of both the GPU and CPU, much like the new version of Premiere 2020, which is a trend that’s nice to see.

In addition to Premiere and Resolve testing, I ran some common benchmarks that provide a good 30,000-foot view of the HP ZBook 15 G6 when comparing it to other systems. I decided to use the Puget Systems benchmarking tools. Unfortunately, at the time of this review, the tools were only working properly with Premiere and After Effects 2019, so I ran the After Effects benchmark using the 2019 version. The ZBook 15 G6 received an overall score of 802, render score of 79, preview score of 75.2 and tracking score of 86.4. These are solid numbers that beat out some desktop systems I have tested.

Corona

To test some 3D applications, I ran Cinebench R20, which gave a CPU score of 3243, a CPU (single core) score of 470 and an M/P ratio of 6.90x. I recently began running the Gooseberry benchmark scene in Blender to get a better sense of 3D rendering performance, and it took 29:56 to render. Using the Corona benchmark, it took 2:33 to render 16 passes at 3,216,368 rays/s. Using Octane Bench, the ZBook 15 G6 received a score of 139.79. In the Vray benchmark for CPU, it received 9,833 ksamples, and in the Vray GPU testing, 228 mpaths. I’m not going to lie; I really don’t know a lot about what these benchmarks are trying to tell me, but they might help you decide whether this is the mobile workstation for your work.

Cinebench

One benchmark I thought was interesting between driver updates for the Nvidia Quadro RTX 3000 was the Neat Bench from Neat Video — the noise reduction plugin for video. It measures whether your system should use the CPU, GPU or a combination thereof to run Neat Video. Initially, the best combination result was to use the CPU only (seven cores) at 11.5fps.

After updating to the latest Nvidia drivers, the best combination result was to use the CPU (seven cores) and GPU (Quadro RTX 3000) together at 24.2fps, more than double the speed, just from a driver update. Moral of the story: Always make sure you’re running the right drivers!

Summing Up
Overall, the HP ZBook 15 G6 is a powerful mobile workstation that will work well across the board. From 3D to color correction apps, the Xeon processor in combination with the Quadro RTX 3000 will get you running 4K video without a problem. With the HP DreamColor anti-glare display offering up to 600 nits of brightness, covering 100% of the DCI-P3 color space and including an HDR option, you can rely on the attached display for color accuracy if you don’t have your output monitor attached. And with features like two USB Type-C ports (Thunderbolt 3 plus DP 1.4 plus USB 3.1 Gen 2), you can connect external monitors for a larger view of your work.

The HP Fast Charge will get you out of a dead-battery fiasco with the ability to go from 0% to 50% charge in 45 minutes. All of this for around $4,000 seems to be a pretty low price to pay, especially because it includes a three-year on-site warranty and because the device is certified through HP’s independent software vendor verifications to work seamlessly with many apps that pros use.

If you are looking for a mobile workstation upgrade, are moving from desktop to mobile or want an alternative to a MacBook Pro, you should price a system out online.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

The Mill opens boutique studio in Berlin

Technicolor’s The Mill has officially launched in Berlin. This new boutique studio is located in the heart of Berlin, situated in the creative hub of Mitte, near many of Germany’s agencies, production companies and brands.

The Mill has been working with German clients for years. Recent projects include Mercedes’ Bertha Benz spot with director Sebastian Strasser; Netto’s The Easter Surprise, directed in-house by The Mill; and BMW’s The 8 with director Daniel Wolfe. The new studio will bring The Mill’s full range of creative services from color to experiential and interactive, as well as visual effects and design.

The Mill Berlin crew

Creative director Greg Spencer will lead the creative team. He is a multi-award-winning creative, having won several VES, Cannes Lions and British Arrow awards. His recent projects include Carlsberg’s The Lake, PlayStation’s This Could Be You and Eve Cuddly Toy. Spencer also played a role in some of Mill Film’s major titles: he was the 2D supervisor for Les Misérables and also worked on the Lord of the Rings trilogy. His resume also includes campaigns for brands such as Nike and Samsung.

Executive producer Justin Stiebel moves from The Mill London, where he had been since early 2014, to manage client relationships and new business. Since joining the company, Stiebel has produced spots such as Audi’s Next Level and Mini’s The Faith of a Few campaign. He has also collaborated with directors such as Sebastian Strasser, Markus Walter and Daniel Wolfe while working on brands like Mercedes, Audi and BMW.

Sean Costelloe is managing director of The Mill London and The Mill Berlin.

Main Image Caption: (L-R) Justin Stiebel and Greg Spencer

Quantum F1000: a lower-cost NVMe storage option

Quantum is now offering the F1000, a lower-priced addition to the Quantum F-Series family of NVMe storage appliances. Using the software-defined architecture introduced with the F2000, the F1000 offers “ultra-fast streaming” performance and response times at a lower entry price. The F-Series can be used to accelerate the capture, edit and finishing of high-definition content and to accelerate VFX and CGI render speeds up to 100 times for developing augmented and virtual reality.

The Quantum F-Series was designed to handle content such as HD video used for movie, TV and sports production, advertising content or image-based workloads that require high-speed processing. Pros are using F-Series NVMe systems as part of Quantum’s StorNext scale-out file storage cluster and leveraging the StorNext data management capabilities to move data between NVMe storage pools and other storage pools. Users can take advantage of the performance boost NVMe provides for workloads that require it, while continuing to use lower-cost storage for data where performance is less critical.

Quantum F-Series NVMe appliances accelerate pro workloads and also help customers move from Fibre Channel networks to less expensive IP-based networks. User feedback has shown that pros need a lower cost of entry into NVMe technology, which is what led Quantum to develop the F1000. According to Quantum, the F1000 offers performance that is five to 10 times faster than an equivalent SAS SSD storage array at a similar price.

The F1000 is available in two capacity points: 39TB and 77TB. It offers the same connectivity options as the F2000 — 32Gb Fibre Channel or iSER/RDMA using 100Gb Ethernet — and is designed to be deployed as part of a StorNext scale-out file storage cluster.

DP Chat: The Grudge’s Zachary Galler

By Randi Altman

Being on set is like coming home for New York-based cinematographer Zachary Galler, who as a child would tag along with his father while he directed television and film projects. The younger Galler started in the industry as a lighting technician and quickly worked his way up to shooting various features and series.

His first feature as a cinematographer, The Sleepwalker, premiered in 2014 and was later distributed by IFC. His second feature, She’s Lost Control, was awarded the C.I.C.A.E. Award at the Berlin International Film Festival later that year. His television credits include all eight episodes of Discovery’s scripted series Manhunt: Unabomber, Hulu’s The Act and USA’s Briarpatch (coming in February). He recently completed the Nicolas Pesce-directed thriller The Grudge, which stars John Cho and Betty Gilpin and is in theaters now.

Tell us about The Grudge. How early did you get involved in planning, and what direction were you given by the director about the look he wanted?
Nick and I worked together on a movie he directed called Piercing. That was our first collaboration, but we discovered that we had very similar ideas and working styles, and we formed a special relationship. Shortly after that project, we started talking about The Grudge, and about a year later we were shooting. We talked a lot about how this movie should feel, and how we could achieve something new and different from anything either of us had done before. We used a lot of look-books and movie references to communicate, so when it came time to shoot, we had the visual language down fluently, and that allowed us to keep each other consistent in execution.

How would you describe the look?
Nick really liked the bleach-bypass look from David Fincher’s Se7en, and I thought about a mix of that and (photographer) Bill Henson. We also knew that we had to differentiate between the different storyline threads in the movie, so we had lots to figure out. One of the threads is darker and looks very yellow, while another is warmer and more classic. Another is slightly more desaturated and darker. We did keep the same bleach-bypass look throughout, but adjusted our color temperature, contrast and saturation accordingly. For a horror movie like this, I really wanted to be able to control where the shadow detail turned into black, because some of our scare scenes relied on that, so we made sure to light accordingly and were able to fine-tune most of that in-camera.

How did you work with the director and colorist to achieve that look?
We worked with FotoKem colorist Kostas Theodosiou (who used Blackmagic Resolve). I was shooting a TV show during the main color pass, so I only got to check in to set looks and approve final color, but Nick and Kostas did a beautiful job. Kostas is a master of contrast control and very tastefully helped us ride that line of where there should be detail and where there shouldn’t. He was definitely an important part of the collaboration and helped make the movie better.

Where was it shot and how long was the shoot?
We shot the movie in 35 days in Winnipeg, Canada.

How did you go about choosing the right camera and lenses for this project and why these tools?
Nick decided early on that he wanted to shoot this film anamorphic. Panavision has been an important partner for me on most of my projects, and I knew that I loved their glass. We got a range of different lenses from Panavision Toronto to help us differentiate our storylines — we shot one on T Series, one on Primo anamorphics and one on G Series anamorphics. The Alexa Mini was the camera of choice because of its low light sensitivity and more natural feel.

Now more general questions…

How did you become interested in cinematography?
My father was a director, so I would visit him on set a lot when I was growing up. I didn’t know quite what I wanted to do when I was young, but I knew that it involved being on set. After dropping out of film school, I got a job working in a lighting rental warehouse and started driving trucks and delivering lights to sets in New York. I had always loved taking pictures as a kid, and as I worked more and learned more, I realized that what I wanted to do was be a DP. I was very lucky in that I found some great collaborators early in my career who both pushed me and allowed me to fail. This is the greatest job in the world.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
Artistically, I am inspired by painters, photographers and other DPs. There are so many people doing such amazing work right now. As far as technology is concerned, I’m a bit slow with adopting, as I need to hold something in my hands or see what it does before I adopt it. I have been very lucky to get to work with some great crews, and often a camera assistant, gaffer or key grip will bring something new to the table. I love that type of collaboration.


DP Zachary Galler (right) and director Nicolas Pesce on the set of Screen Gems’ The Grudge.

What new technology has changed the way you work?
For some reason, I was resistant to using LUTs for a long time. The Grudge was actually the first time I relied on something that wasn’t close to just plain Rec 709. I always figured that if I could get the 709 feeling good when I got into color I’d be in great shape. Now, I realize how helpful they can be, and that you can push much further. I also think that the Astera LED tubes are amazing. They allow you to do so much so fast and put light in places that would be very hard to do with other traditional lighting units.

What are some of your best practices or rules you try to follow on each job?
I try to be pretty laid back on set, and I can only do that because I’m very picky about who I hire in prep. I try and let people run their departments as much as possible and give them as much information as possible — it’s like cooking, where you try and get the best ingredients and don’t do much to them. I’ve been very lucky to have worked with some great crews over the years.

What’s your go-to gear — things you can’t live without?
I really try and keep an open mind about gear. I don’t feel romantically attached to anything, so that I can make the right choices for each project.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Directing Olly’s ‘Happy Inside Out’ campaign

How do you express how vitamins make you feel? Well, production company 1stAveMachine partnered with independent creative agency Yard NYC to develop the stylized “Happy Inside Out” campaign for Olly multivitamin gummies to show just that.

Beauty

The directing duo of Erika Zorzi and Matteo Sangalli, known as Mathery, highlighted the brand’s products and benefits by using rich textures, colors and lighting. They shot on an ARRI Alexa Mini. “Our vision was to tell a cohesive narrative, where each story of the supplements spoke the same visual language,” Mathery explains. “We created worlds where everything is possible and sometimes took each product’s concept to the extreme and other times added some romance to it.”

Each spot imagines various benefits of taking Olly products. The side-scrolling Energy, which features a green palette, shows a woman jumping and doing flips through life’s everyday challenges, from her home to work, doing laundry and going to the movies. Beauty, with its pink color palette, features another woman “feeling beautiful” while turning the heads of a parliament of owls. Meanwhile, Stress, with its purple/blue palette, features a woman tied up in a giant ball of yarn; as she unspools herself, the things that were tying her up spin away. In the purple-shaded Sleep, a lady lies in bed pulling off layer after layer of sleep masks until she just happily sleeps.

Sleep

The spots were shot with minimal VFX, other than a few greenscreen moments, and the team found itself making decisions on the fly, constantly managing logistics for stunt choreography, animal performances and wardrobe. Jogger Studios provided the VFX using Autodesk Flame for conform, cleanup and composite work. Adobe After Effects was used for all of the end tag animation. Cut+Run edited the campaign.

According to Mathery, “The acrobatic moves and obstacle pieces in the Energy spot were rehearsed on the same day of the shoot. We had to be mindful because the action was physically demanding on the talent. With the Beauty spot, we didn’t have time to prepare with the owls. We had no idea if they would move their heads on command or try to escape and fly around the whole time. For the Stress spot, we experimented with various costume designs and materials until we reached a look that humorously captured the concept.”

The campaign marks Mathery’s second collaboration with Yard NYC and Olly, who brought the directing team into the fold very early on, during the initial stages of the project. This familiarity gave everyone plenty of time to let the ideas breathe.

Recreating the Vatican and Sistine Chapel for Netflix’s The Two Popes

The Two Popes, directed by Fernando Meirelles, stars Anthony Hopkins as Pope Benedict XVI and Jonathan Pryce as current pontiff Pope Francis in a story about one of the most dramatic transitions of power in the Catholic Church’s history. The film follows a frustrated Cardinal Bergoglio (the future Pope Francis) who in 2012 requests permission from Pope Benedict to retire because of his issues with the direction of the church. Instead, facing scandal and self-doubt, the introspective Benedict summons his harshest critic and future successor to Rome to reveal a secret that would shake the foundations of the Catholic Church.

London’s Union was approached in May 2017 and supervised visual effects on location in Argentina and Italy over several months. A large proportion of the film takes place within the walls of Vatican City. The Vatican was not involved in the production and the team had very limited or no access to some of the key locations.

Under the direction of production designer Mark Tildesley, the production replicated parts of the Vatican at Rome’s Cinecitta Studios, including a life-size, open-ceiling Sistine Chapel that took two months to build.

The team LIDAR-scanned everything available and set about amassing as much reference material as possible — photographing from a permitted distance, scanning the set builds and buying every photographic book they could lay their hands on.

From this material, the team set about building 3D models — created in Autodesk Maya — of St. Peter’s Square, the Basilica and the Sistine Chapel. The environments team was tasked with texturing all of these well-known locations using digital matte painting techniques, including recreating Michelangelo’s masterpiece on the ceiling of the Sistine Chapel.

The story centers on two key changes of pope, in 2005 and 2013. Those events attracted huge attention, filling St. Peter’s Square with people eager to discover the identity of the new pope and celebrate his ascension. News crews from around the world also camped out to provide coverage for the world’s more than one billion Catholics.

To recreate these scenes, the crew shot at a school in Rome (Ponte Mammolo) that has the same pattern on its floor. A cast of 300 extras was shot in blocks in different positions at different times of day, with costume tweaks including the addition of umbrellas to build a library that would provide enough flexibility during post to recreate these moments at different times of day and in different weather conditions.

Union also called on Clear Angle Studios to individually scan 50 extras to provide additional options for the VFX team. This was an ambitious crowd project: the team couldn’t shoot in the actual location, and the end result had to stand up at 4K in very close proximity to the camera. Union designed a Houdini-based system to deal with the number of assets and clothing in such a way that the studio could easily art-direct the extras as individuals, allow the director to choreograph them and deliver a believable result.

Union conducted several motion capture shoots in-house to provide specific animation cycles that married with the occasions being recreated. This provided even more authentic-looking crowds for the post team.

Union worked on a total of 288 VFX shots, including greenscreens, set extensions, window reflections, muzzle flashes, fog and rain and a storm that included a lightning strike on the Basilica.

In addition, the team did a significant amount of de-aging work to accommodate the film’s eight-year main narrative timeline as well as a long period in Pope Francis’ younger years.

A Beautiful Day in the Neighborhood director Marielle Heller

By Iain Blair

If you are of a certain age, the red cardigan, the cozy living room and the comfy sneakers can only mean one thing — Mister Rogers! Sony Pictures’ new film, A Beautiful Day in the Neighborhood, is a story of kindness triumphing over cynicism. It stars Tom Hanks and is based on the real-life friendship between Fred Rogers and journalist Tom Junod.

Marielle Heller

In the film, jaded writer Lloyd Vogel (Matthew Rhys), whose character is loosely based on Junod, is assigned a profile of Rogers. Over the course of his assignment, he overcomes his skepticism, learning about empathy, kindness and decency from America’s most beloved neighbor.

A Beautiful Day in the Neighborhood is helmed by Marielle Heller, who most recently directed the film Can You Ever Forgive Me? and whose feature directorial debut was 2015’s The Diary of a Teenage Girl. Heller has also directed episodes of Amazon’s Transparent and Hulu’s Casual.

Behind the scenes, Heller collaborated with DP Jody Lee Lipes, production designer Jade Healy, editor Anne McCabe, ACE, and composer Nate Heller.

I recently spoke with Heller about making the film, which is generating a lot of Oscar buzz, and her workflow.

What sort of film did you set out to make?
I didn’t want to make a traditional biopic, and part of what I loved about the script was it had this larger framing device — that it’s a big episode of Mister Rogers for adults. That was very clever, but it’s also trying to show who he was deep down and what it was like to be around him, rather than just rattling off facts and checking boxes. I wanted to show Fred in action and his philosophy. He believed in authenticity and truth and listening and forgiveness, and we wanted to embody all that in the filmmaking.

It couldn’t be more timely.
Exactly, and it’s weird since it’s taken eight years to get it made.

Is it true Tom Hanks had turned this down several times before, but you got him in a headlock and persuaded him to do it?
(Laughs) The headlock part is definitely true. He had turned it down several times, but there was no director attached. He’s the type of actor who can’t imagine what a project will be until he knows who’s helming it and what their vision is.

We first met at his grandkid’s birthday party. We became friends, and when I came on board as director, the producers told me, “Tom Hanks was always our dream for playing Mister Rogers, but he’s not interested.” I said, “Well, I could just call him and send him the script,” and then I told Tom I wasn’t interested in doing an imitation or a sketch version, and that I wanted to get to his essence right and the tone right. It would be a tightrope to walk, but if we could pull it off, I felt it would be very moving. A week later he was like, “Okay, I’ll do it.” And everyone was like, “How did you get him to finally agree?” I think they were amazed.

What did he bring to the role?
Maybe people think he just breezed into this — he’s a nice guy, Fred’s a nice guy, so it’s easy. But the truth is, Tom’s an incredibly technically gifted actor and one of the hardest-working ones I’ve ever worked with. He does a huge amount of research, and he came in completely prepared, and he loves to be directed, loves to collaborate and loves to do another take if you need it. He just loves the work.

Any surprises working with him?
I just heard that he’s actually related to Fred, and that’s another weird thing. But he truly had to transform for the role because he’s not like Fred. He had to slow everything down to a much slower pace than is normal for him and find Fred’s deliberate way of listening and his stillness and so on. It was pretty amazing considering how much coffee Tom drinks every day.

What did Matthew Rhys bring to his role?
It’s easy to forget that he’s actually the protagonist and the proxy for all the cynicism and neuroticism that many of us feel and carry around. This is what makes it so hard to buy into a Mister Rogers world and philosophy. But Matthew’s an incredibly complex, emotional person, and you always know how much he’s thinking. He’s always three steps ahead of you, he’s very smart, and he’s not afraid of his own anger and exploring it on screen. I put him through the ringer, as he had to go through this major emotional journey as Lloyd.

How important was the miniature model, which is a key part of the film?
It was a huge undertaking, but also the most fun we had on the movie. I grew up building miniatures and little cities out of clay, so figuring it all out — What’s the bigger concept behind it? How do we make it integrate seamlessly into the story? — fascinated me. We spent months figuring out all the logistics of moving between Fred’s set and home life in Pittsburgh and Lloyd’s gritty, New York environment.

While we shot in Pittsburgh, we had a team of people spend 12 weeks building the detailed models that included the Pittsburgh and Manhattan skylines, the New Jersey suburbs, and Fred’s miniature model neighborhood. I’d visit them once a week to check on progress. Our rule of thumb was we couldn’t do anything that Fred and his team couldn’t do on the “Neighborhood,” and we expanded a bit beyond Fred’s miniatures, but not outside of the realm of possibility. We had very specific shots and scenes all planned out, and we got to film with the miniatures for a whole week, which was a delight. They really help bridge the gap between the two worlds — Mister Rogers’ and Lloyd’s worlds.

I heard you shot with the same cameras the original show used. Can you talk about how you collaborated with DP Jody Lee Lipes, to get the right look?
We tracked down original Ikegami HK-323 cameras, which were used to film the show, and shipped them in from England and brought them to the set in Pittsburgh. That was huge in shooting the show and making it even more authentic. We tried doing it digitally, but it didn’t feel right, and it was Jody who insisted we get the original cameras — and he was so right.

Where did you post?
We did it in New York — the editing at Light Iron, the sound at Harbor and the color at Deluxe.

Do you like the post process?
I do, as it feels like writing. There’s always a bit of a comedown from production for me, which is so fast-paced. You really slow down for post; it feels a bit like screeching to a halt for me, but the plus is you get back to the deep critical thinking needed to rewrite in the edit, and to retell the story with the sound and the DI and so on.

I feel very strongly that the last 10% of post is the most important part of the whole process. It’s so tempting to just give up near the end. You’re tired, you’ve lost all objectivity, but it’s critical you keep going.

Talk about editing with Anne McCabe. What were the big editing challenges?
She wasn’t on the set. We sent dailies to her in New York, and she began assembling while we shot. We have a very close working relationship, so she’d be on the phone immediately if there were any concerns. I think finding the right tone was the biggest challenge, and making it emotionally truthful so that you can engage with it. How are you getting information and when? It’s also playing with audiences’ expectations. You have to get used to seeing Tom Hanks as Mister Rogers, so we decided it had to start really boldly and drop you in the deep end — here you go, get used to it! Editing is everything.

There are quite a few VFX. How did that work?
Obviously, there’s the really big VFX sequence when Lloyd goes into his “fever dreams” and imagines himself shrunk down on the set of the neighborhood and inside the castle. We planned that right from the start and did greenscreen — my first time ever — which I loved. And even the practical miniature sets all needed VFX to integrate them into the story. We also had seasonal stuff, period-correct stuff, cleanup and so on. Phosphene in New York did all the VFX.

Talk about the importance of sound and music.
My composer’s also my brother, and he starts very early on so the music’s always an integral part of post and not just something added at the end. He’s writing while we shoot, and we also had a lot of live music we had to pre-record so we could film it on the day. There’s a lot of singing too, and I wanted it to sound live and not overly produced. So when Tom’s singing live, I wanted to keep that human quality, with all the little mouth sounds and any mistakes. I left all that in purposely. We never used a temp score since I don’t like editing to temp music, and we worked closely with the sound guys at Harbor in integrating all of the music, the singing, the whole sound design.

How important is the DI to you?
Hugely important, and we finessed a lot with colorist Sam Daley. When you’re doing a period piece, color is so crucial; it has to feel authentic to that world. Jody and Sam have worked together for a long time, and they worked very hard on the LUT before we began. Every department was aware of the color palette and how we wanted it to look and feel.

What’s next?
I just started a new company called Defiant By Nature, where I’ll be developing and producing TV projects by other people. As for movies, I’m taking a little break.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Behind the title: Cutters editor Steve Bell

“I’ve always done a fair amount of animation design, music rearranging and other things that aren’t strictly editing, but most editors are expected to play a role in aspects of the post process that aren’t strictly editing.”

Name: Steve Bell

What’s your job title?
Editor

Company: Cutters Editorial

Can you describe your company?
Cutters is part of a global group of companies offering offline editing, audio engineering, VFX and picture finishing, production and design – all of which fall under Cutters Studios. Here in New York, we do traditional broadcast TV advertising and online content, as well as longer format work and social media content for brands, directors and various organizations that hire us to develop a concept, shoot and direct.

Cutters New York

What’s your job title?
Editor

What’s your favorite part of the job?
There’s a stage to pretty much every project where I feel I’ve gotten a good enough grasp of the material that I can connect the storytelling dots and see it come to life. I like problem solving and love the feeling you get when you know you’ve “figured it out.”

Depending on the scale of the project, it can start a few hours in, a few days in or a few weeks in, but once it hits you can’t stop until you see the piece finished. It’s like reading a good page-turner; you can’t put it down. That’s the part of the creative process I love and what I like most about my job.

What’s your least favorite?
It’s those times when it becomes clear that I’ve/we’ve probably looked at something too many times to actually make it better. That certainly doesn’t happen on many jobs, but when it does, it’s probably because too many voices have had a say; too many cooks in the kitchen, as they say.

What is your most productive time of the day?
Early in the morning. I’m most clearheaded at the very beginning of the day, and then sometimes toward the very end of a long day. But those times also happen to be when I’m most likely to be alone with what I’m working on and free from other distractions.

If you didn’t have this job, what would you be doing instead? 
Baseball player? Astronaut? Joking. But let’s face it, we all fantasize about fulfilling the childhood dreams that are completely different from what we do. To be truthful, I’m sure I’d be doing some kind of writing, because it was my desire to be a writer, particularly of film, that indirectly led me to be an editor.

Why did you choose this profession? How early on did you know this would be your path?
Well, the simple answer is probably that I had opportunities to edit professionally at a relatively young age, which forced me to get better at editing way before I had a chance to get better at writing. If I keep editing, I may never know if I can write!

Stella Artois

Can you name some recent projects you have worked on?
The Dwyane Wade Budweiser retirement film, Stella Artois holiday spots, a few films for the Schott/Hamilton watch collaboration. We did some fun work for Rihanna’s Savage X Fenty release. Early in the year I did a bunch of lovely spots for Hallmark Hall of Fame programming.

Do you put on a different hat when cutting for a specific genre?
For sure. There are overlapping tasks, but I do believe it takes a different set of skills to do good dramatic storytelling than it takes to do straight comedy, or doc or beauty. Good “Storytelling” (with a capital ‘S’) is helpful in all of it — I’d probably say crucial. But it comes down to the important element that’s used to create the story: emotion, humor, rhythm, etc. And then you need to know when it needs to be raw versus formal, broad versus subtle and so forth. Different hats are needed to get that exactly right.

What is the project that you are most proud of and why?
I’m still proud of the NHL’s No Words spot I worked on with Cliff Skeete and Bruce Jacobson. We’ve become close friends as we’ve collaborated on a lot of work since then for the NHL and others. I love how effective that spot is, and I’m proud that it continues to be referenced in certain circles.

NHL No Words

In a very different vein, I think I’m equally proud of the work I’ve done for the UN General Assembly meetings, especially the film that accompanied Kathy Jetnil-Kijiner’s spoken word performance of her poem “Dear Matafele Peinem” during the opening ceremonies of the UN’s first Climate Change conference. That’s an issue that’s very important to me and I’m grateful for the chance to do something that had an impact on those who saw it.

What do you use to edit?
I’m a Media Composer editor, and it probably goes back to the days when I did freelance work for Avid and had to learn it inside out. The interface at least is second nature to me. Also, the media sharing and networking capabilities of Avid make it indispensable. That said, I appreciate that Premiere has some clear advantages in other ways. If I had to start over I’m not sure I wouldn’t start with Premiere.

What is your favorite plugin?
I use a lot of Boris FX plugins for stabilization, color correction and so forth. I used to use After Effects often, and Boris FX offers a way of achieving some of what I once did exclusively in After Effects.

Are you often asked to do more than edit? If so, what else are you asked to do?
I’ve always done a fair amount of animation design, music rearranging and other things that aren’t strictly editing, but most editors are expected to play a role in aspects of the post process that aren’t strictly “film editing.”

Many of my clients know that I have strong opinions about those things, so I do get asked to participate in music and animation quite often. I’m also sometimes asked to help with the write-ups of what we’ve done in the edit because I like talking about the process and clarifying what I’ve done. If you can explain what you’ve done you’re probably that much more confident about the reasons you did it. It can be a good way to call “bullshit” on yourself.

This is a high stress job with deadlines and client expectations. What do you do to de-stress from it all?
Yeah, right?! It can be stressful, especially when you’re occasionally lucky enough to be busy with multiple projects all at once. I take decompressing very seriously. When I can, I spend a lot of time outdoors — hiking, biking, you name it — not just for the cardio and exercise, which is important enough, but also because it’s important to give your eyes a chance to look off into the distance. There are tremendous physical and psychological benefits to looking to the horizon.

Ford v Ferrari’s co-editors discuss the cut

By Oliver Peters

After a failed attempt to acquire European carmaker Ferrari, an outraged Henry Ford II sets out to trounce Enzo Ferrari on his own playing field — automobile endurance racing. That is the plot of 20th Century Fox’s Ford v Ferrari, directed by James Mangold. When Ford’s in-house effort falls short, he turns to independent car designer Carroll Shelby (Matt Damon). Shelby’s outspoken lead test driver, Ken Miles (Christian Bale), complicates the situation by making an enemy out of Ford senior VP Leo Beebe.

Michael McCusker

Nevertheless, Shelby and his team are able to build one of the greatest race cars ever — the GT40 MkII — setting up a showdown between the two auto legends at the 1966 24 Hours of Le Mans.

The challenge of bringing this clash of personalities to the screen was taken on by director James Mangold (Logan, Wolverine, 3:10 to Yuma) and his team of long-time collaborators.

I recently spoke with film editors Michael McCusker, ACE, (Walk the Line, 3:10 to Yuma, Logan) and Andrew Buckland (The Girl On the Train) — both of whom were recently nominated for an Oscar and ACE Eddie Award for their work on the film — about what it took to bring Ford v Ferrari together.

The post team for this film has worked with James Mangold on quite a few films. Tell me a bit about the relationship.
Michael McCusker: I cut my very first movie, Walk the Line, for Jim 15 years ago and have since cut his last six movies. I was the first assistant editor on Kate & Leopold, which was shot in New York in 2001. That’s where I met Andrew, who was hired as one of the local New York film assistants. We became fast friends. Andrew moved to LA in 2009, and I hired him to assist me on Knight & Day.

Andrew Buckland

I always want to keep myself available for Jim — he chooses good material, attracts great talent and is a filmmaker who works across multiple genres. Since I’ve worked with him, I’ve cut a musical movie, a western, a rom-com, an action movie, a straight-up superhero movie, a dystopian superhero movie and now a racing film.

As a film editor, it must be great not to get typecast for any particular cutting style.
McCusker: Exactly. I worked for David Brenner for years as his first assistant. He was able to cross genres, and that’s what I wanted to do. I knew even then that the most important decisions I would make would be choosing projects. I couldn’t have foreseen that Jim was going to work across all these genres — I simply knew that we worked well together and that the end product was good.

In preparing for Ford v Ferrari, did you study any other recent racing films, like Ron Howard’s Rush?
McCusker: I saw that movie, and liked it. Jim was aware of it, too, but I think he wanted to do something a little more organic. We watched a lot of older racing films, like Steve McQueen’s Le Mans and John Frankenheimer’s Grand Prix.

Jim’s original intention was to play the racing in long takes and bring the audience along for the ride. As he was developing the script, and we were in preproduction, it became clear that there was more drama for him to portray during the racing sequences than he anticipated. So the races took on more of an energized pace.

Energized in what way? Do you mean in how you cut it or in a change of production technique, like more stunt cameras and angles?
McCusker: I was fortunate to get involved about two-and-a-half months prior to the start of production. We were developing the Le Mans race in previs. This required a lot of editing and discussions about shot design and figuring out what the intercutting was going to be during that sequence, which is like the fourth act of the movie.

You’re dealing with Mollie and Peter [Miles’ wife and son] at home watching the race, the pit drama, what’s going on with Shelby and his crew, with Ford and Leo Beebe and also, of course, what’s going on in the car with Ken. It’s a three-act movie unto itself, so Jim was trying to figure out how it was all going to work before he had to shoot it. That’s where I came in. The frenetic pace of Le Mans was more a part of the writing process — and part of the writing process was the previs. The trick was how to make sure we weren’t just following cars around a track. That’s where redundancy can tend to beleaguer an audience in racing movies.

What was the timeline for production and post?
McCusker: I started at the end of May 2018. Production began at the beginning of August and went all the way through to the end of November. We started post in earnest at the beginning of November of last year, took some time off for the holidays, and then showed the film to the studios around February or March.

When did you realize you were going to need help?
McCusker: The challenge was that there was going to be a lot of racing footage, which meant there was going to be a lot of footage overall. I knew I was going to need a strong co-editor, so Andrew was the natural choice. He had been cutting on his own and cutting with me over the years. We share a common approach to editing and have a similar aesthetic.

There was a point when things got really intense and we needed another pair of hands, so I brought in Dirk Westervelt to help out for a couple of months. That kept our noses above water, but the process was really enjoyable. We were never in a crisis mode. We got a great response from preview audiences and, of course, that calms everybody down. At that point it was just about quality control and making sure we weren’t resting on our laurels.

How long was your initial cut, and what was your process for trimming the film down to the present run time?
McCusker: We’re at 2:30:00 right now and I think the first cut was 3:10 or 3:12. The Le Mans section was longer. The front end of the movie had more scenes in it. We ended up lifting some scenes and rearranging others. Plus, the basic trimming of scenes brought the length down.

But nothing was the result of a panic, like, “Oh my God, we’ve got to get to 2:30!” There were no demands by the studio or any pressures we placed upon ourselves to hit a particular running time. I like to say that there’s real time and there’s cinematic time. You can watch Once Upon a Time in America, which is 3:45, and feels like it’s an hour. Or you can watch an 89-minute movie and feel like it’s drudgery. We just wanted to make sure we weren’t overstaying our welcome.

How extensively did you rearrange scenes during the edit? Or did the structure of the film stay pretty much as scripted?
McCusker: To a great degree it stayed as scripted. We had some scenes in the beginning that we felt were a little bit tangential and weren’t serving the narrative directly, and those were cut.

The real endeavor of this movie starts the moment that these two guys [Shelby and Miles] decide to tackle the challenge of developing this car. There’s a scene where Miles sees the car for the first time at LAX. We understood that we had to get to that point in a very efficient way, but also set up all the other characters — their motives and their desires.

It’s an interesting movie, because it starts off with a lot of characters. But then it develops into a movie about two guys and their friendship. So it goes from an ensemble piece to being about Ken and Carroll, while at the same time the scope of the movie is opening up and becoming larger as the racing is going on. For us, the trickiest part was the front end — to make sure we spent enough time with each character so that we understood them, but not so much time that audience would go, “Enough already! Get on with it!”

Did that help inform your cutting style for this film?
McCusker: I don’t think so. Where it helped was knowing the sound of the broadcasters and race announcers. I liked Chris Economaki and Jim McKay — guys who were broadcasting the races when I was a kid. I was intrigued about how they gave us the narrative of the race. It came in handy while we were making this movie, because we were able to get our hands on some of Jim McKay’s actual coverage of Le Mans and used it in the movie. That brings so much authenticity.

Let’s talk sound. I would imagine the sound design was integral to your rough cuts. How did you tackle that?
Andrew Buckland: We were fortunate to have the sound team on very early during preproduction. We were cutting in a 5.1 environment, so we wanted to create sound design early. The engine sounds might not have been the exact sounds that would end up in the final, but they were adequate to allow you to experience the scenes as intended. Because we needed to get Jim’s response early, some of the races were cut with the production sound — from the live mics during filming. This allowed Jim and us to quickly see how the scenes would flow.

Other scenes were cut strictly MOS because the sound design would have been way too complicated for the initial cut of the scene. Once the scene was cut visually, we’d hand over the scene to sound supervisor Don Sylvester, who was able to provide us with a set of 5.1 stems. That was great, because we could recut and repurpose those stems for other races.

McCusker: We had developed a strategy with Don to split the sound design into four or five stems to give us enough discrete channels to recut these sequences. The stems were a palette of interior perspectives, exterior perspectives, crowds, car-bys, and so on. By employing this strategy, we didn’t need to continually turn over the cut to sound for patch-up work.

Then, as Don went out and recorded the real cars and was developing the actual sounds for what was going to be used in the mix, he’d generate new stems and we would put them into the Media Composer. This was extremely informative to Jim, because he could experience our Avid temp mix in 5.1 and give notes, which ultimately informed the final sound design and the mix.

What about temp music? Did you also weave that into your rough cuts?
McCusker: Ted Caplan, our music editor, has also worked with Jim for 15 years. He’s a bit of a renaissance man — a screenwriter, a novelist, a one-time musician and a sound designer in his own right. When he sits down to work with music, he’s coming at it from a story point-of-view. He has a very instinctual knowledge of where music should start, and it happens to dovetail into the aesthetic that Jim, Andrew, and I are working toward. None of us like music to lead scenes in a way that anticipates what the scene is going to be about before you experience it.

For this movie, it was challenging to develop what the musical tone of the movie would be. Ted was developing the temp track along with us from a very early stage. We found over time that not one particular musical style was going to work. This is a very complex score. It includes a kind of surf-rock sound with Carroll Shelby in LA, an almost jaunty, lounge jazz sound for Detroit and the Ford executives, and then the hard-driving rhythmic sound for the racing.

The final score was composed by Marco Beltrami and Buck Sanders.

I presume you were housed in multiple cutting rooms at a central facility.
McCusker: We cut at 20th Century Fox, where Jim has a large office space. We cut Logan and Wolverine there before this movie. It has several cutting spaces and I was situated between Andrew and Don. Ted was next to Don and John Berri, our additional editor. Assistants were right around the corner. It makes for a very efficient working environment.

Since the team was cutting with Avid Media Composer, did any of its features stand out to you for this film?
Both: FluidMorph! (laughing)

McCusker: FluidMorph, speed-ramping — we often had to manipulate the shot speeds to communicate the speed of the cars. A lot of these cars were kit cars that could drive safely at a certain speed for photography, but not at race speed. So we had to manipulate the speed a lot to get the sense of action that these cars have.
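
To make the underlying arithmetic concrete, here is a tiny illustrative sketch; the speeds are hypothetical examples, not the production's actual numbers or tools:

```python
# Illustrative only: the retime factor needed to make a car photographed at a
# safe speed read as a faster race speed on screen. Numbers are hypothetical.

def retime_factor(photographed_mph: float, apparent_mph: float) -> float:
    """Playback speed multiplier; values above 1.0 mean speeding the shot up."""
    return apparent_mph / photographed_mph

factor = retime_factor(60, 180)  # kit car driven at 60 mph, meant to read as 180 mph
print(f"play back at {factor:.1f}x (a {factor * 100:.0f}% speed effect in the NLE)")
# -> play back at 3.0x (a 300% speed effect in the NLE)
```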

What about Avid’s ScriptSync? I know a lot of narrative editors love it.
McCusker: I used ScriptSync once a few years ago and I never cut a scene faster. I was so excited. Then I watched it, and it was terrible. To me there’s so much more to editing than hitting the next line of dialogue. I’m more interested in the lines between the lines — subtext. I do understand the value of it in certain applications. For instance, I think it’s great on straight comedy. It’s helpful to get around and find things when you are shooting tons of coverage for a particular joke. But for me, it’s not something I lean on. I mark up my own dailies and find stuff that way.

Tell me a bit more about your organizational process. Do you start with a Kem roll or stringouts of selected takes?
McCusker: I don’t watch dailies, at least in a traditional sense. I don’t start in the morning, watch the dailies and then cut. And I don’t ask my assistants to organize any of my dailies in bins. I come in and grab the scene that I have in front on me. I’ll look at the last take of every set-up quickly and then I spend an enormous amount of time — particularly on complex scenes — creating a bin structure that I can work with.

Sometimes it’s the beats in a scene, sometimes I organize by shot size, sometimes by character — it depends on what’s driving the scene. I learn my footage by organizing it. I remember shot sizes. I remember what was shot from set-up to set-up. I have a strong visual memory of where things are in a bin. So, if I ask an assistant to do that, then I’m not going to remember it. If there are a lot of resets or restarts in a take, I’ll have the assistant mark those up. But, I’ll go through and mark up beats or pivotal points in a scene, or particularly beautiful moments, and then I’ll start cutting.

Buckland: I’ve adopted a lot of Mike’s methodology, mainly because I assisted Mike on a few films. But it actually works for me, as well. I have a similar aesthetic to Mike.

Was this shot digitally?
McCusker: It was primarily shot with ARRI Alexa 65 LFs, plus some other small-format cameras. A lot of it was shot with old anamorphic lenses on the Alexa that allowed them to give it a bit of a vintage feeling. It’s interesting that as you watch it, you see the effect of the old lenses. There’s a fall-off on the edges, which is kind of cool. There were a couple of places where the subject matter was framed into the curve of the lens, which affects the focus. But we stuck with it, because it feels “of the time.”

Since the film takes place in the 1960s and has a lot of racing sequences, I assume there a lot of VFX?
McCusker: The whole movie is a period film and we would temp certain things in the Avid for the rough cuts. John Berri was wrangling visual effects. He’s a master in the Avid and also Adobe After Effects. He has some clever ways of filling in backgrounds or greenscreens with temp elements to give the director an idea of what’s going to go there. We try to do as much temp work in the Avid as we are capable of doing, but there’s so much 3D visual effects work in this movie that we weren’t able to do that all of the time.

The racing is real. The cars are real. The visual effects work was for a lot of the backgrounds. The movie was shot almost entirely in Los Angeles, with some second unit footage shot in Georgia. The modern-day Le Mans track isn’t at all representative of what Le Mans was in 1966, so there was no way to shoot that. Everything had to be doubled and then augmented with visual effects. In addition to Georgia, where they shot most of the actual racing for Le Mans, they went to France to get some shots of the actual town of Le Mans. Of those, I think only about four shots are left. (laughs)

Any final thoughts about how this film turned out?
McCusker: I’m psyched that people seem to like the film. Our concern was that we had a lot of story to tell. Would we wear audiences out? We continually have people tell us, “That was two and a half hours? We had no idea.” That’s humbling for us and a great feeling. It’s a movie about these really great characters with great scope and great racing. You can put all the big visual effects in a film that you want to, but it’s really about people.

Buckland: I agree. It’s more of a character movie with racing. Also, because I am not a racing fan per se, the character drama really pulled me into the film while working on it.


Oliver Peters is an experienced film and commercial editor/colorist. In addition, he regularly interviews editors for trade publications. He may be contacted through his website at oliverpeters.com.

Company 3 ups Jill Bogdanowicz to co-creative head, feature post  

Company 3 senior colorist Jill Bogdanowicz will now share the title of creative head, feature post, with senior colorist Stephen Nakamura. In this new role, she will collaborate with Nakamura to foster communication among artists, operations and management and to design and implement workflows that meet the ever-changing needs of feature post clients.

“Company 3 has been and will always be guided by artists,” says senior colorist/president Stefan Sonnenfeld. “As we continue to grow, we have been formalizing our intra-company communication to ensure that our artists communicate among themselves and with the company as a whole. I’m excited that Jill will be joining Stephen as a representative of our feature colorists. Her years of excellent work and her deep understanding of color science make her a perfect choice for this position.”

Among the kinds of issues Bogdanowicz and Nakamura will address: mentorship within the company, artist recruitment and training, and adapting to emerging workflows and client expectations.

Says Bogdanowicz, “As the company continues to expand, both in size and workload, I think it’s more important than ever to have Stephen and me in a position to provide guidance to help the features department grow efficiently while also maintaining the level of quality our clients expect. I intend to listen closely to clients and the other artists to make sure that their ideas and concerns are heard.”

Bogdanowicz has been a leading feature film colorist since the early 2000s. Recent work includes Joker, Spider-Man: Far From Home and Doctor Sleep, to name a few.

Storage for Color and Post

By Karen Moltenbrey

At nearly every phase of the content creation process, storage is at the center. Here we look at two post facilities whose projects continually push boundaries in terms of data, but through it all, their storage solution remains fast and reliable. One, Light Iron, juggles an average of 20 to 40 data-intensive projects at a time and must have a robust storage solution to handle its ever-growing work. Another, Final Frame, recently took on a project whose storage requirements were literally out of this world.

Amazon’s The Marvelous Mrs. Maisel

Light Iron
Light Iron provides a wide range of services, from dailies to post on feature films, indies and episodic shows, to color/conform/beauty work on commercials and short-form projects. The facility’s clients include Netflix, Amazon Studios, Apple TV+, ABC Studios, HBO, Fox, FX, Paramount and many more. Light Iron has been committed to evolving digital filmmaking techniques over the past 10 years and understands the importance of data availability throughout the pipeline. Having a storage solution that is reliable, fast and scalable is paramount to successfully servicing data-centric projects with an ever-growing footprint.

More than 100 full-time employees located at Light Iron’s Los Angeles and New York locations regularly access the company’s shared storage solutions. Both facilities are equipped for dailies and finishing, giving clients an option between its offices based on proximity. In New York, where space is at a premium, the company also offers offline editorial suites.

The central storage solution used at both locations is a Quantum StorNext file system along with a combination of network-attached and direct-attached storage. On the archive end, both sites use LTO-7 tapes for backing up before moving the data off the spinning disk storage.

As Lance Hayes, senior post production systems engineer, explains, the facility segments its storage into three tiers. “We structured our storage environment in a three-tiered model, with redundancy, flexibility and security in mind. We have our fast disks (tier one), which are fast volumes used primarily for playbacks in the rooms. Then there are deliverable volumes (tier two), where the focus is on the density of the storage. These are usually the destination for rendered files. And then, our nearline network-attached storage (tier three) is more for the deep storage, a holding pool before output to tape.”
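That kind of tier routing is easy to picture in code. The sketch below is purely illustrative, not Light Iron's actual tooling; the tier labels and file roles are assumptions drawn from Hayes' description.

```python
# Illustrative only: route a file to one of the three tiers Hayes
# describes, based on its role in the pipeline. Names are assumptions.

TIERS = {
    "playback": "tier1_fast_disk",      # fast volumes for room playback
    "deliverable": "tier2_dense_disk",  # rendered files; density over speed
    "nearline": "tier3_nas",            # deep storage, holding pool pre-LTO
}

def route(file_role: str) -> str:
    """Return the tier for a file role, defaulting to nearline."""
    return TIERS.get(file_role, TIERS["nearline"])

print(route("playback"))  # tier1_fast_disk
print(route("render"))    # unknown roles fall through to tier3_nas
```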

Light Iron has been using Quantum as its de facto standard for the past several years. Founded in 2009, Light Iron has been on an aggressive growth trajectory and has evolved its storage strategy in response to client needs and technological advancement. Before installing its StorNext system, it managed with JBOD (“just a bunch of disks”) direct-attached storage on a very limited number of systems to service its staff of then-30-some employees, says Keenan Mock, senior media archivist at Light Iron. Light Iron, though, grew quickly, “and we realized we needed to invest in a full infrastructure,” he adds.

Lance Hayes

At Light Iron, work often starts with dailies, so the workflow teams interact with production to determine the cameras being used, the codecs being shot, the number of shoot days, the expected shooting ratio and so forth. Based on that information, the group determines which generation of LTO stock makes the most sense for the project (LTO-6 or LTO-7, with LTO-8 soon to be an option at the facility). “The industry standard, and our recommendation as well, is to create two LTO tapes per shoot day,” says Mock. Then, those tapes are geographically separated for safety.
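The tape math behind that planning is straightforward. Here is a back-of-envelope sketch; the per-day data volume is a made-up example, the native capacities are approximate, and the doubling reflects the two-tapes-per-shoot-day practice Mock describes.

```python
import math

# Approximate native (uncompressed) capacities in TB.
LTO_CAPACITY_TB = {"LTO-6": 2.5, "LTO-7": 6.0, "LTO-8": 12.0}

def tapes_needed(tb_per_day: float, shoot_days: int, gen: str) -> int:
    """Tape stock for a shoot, doubled for the duplicate off-site set."""
    per_day = max(1, math.ceil(tb_per_day / LTO_CAPACITY_TB[gen]))
    return per_day * shoot_days * 2

# Hypothetical show: 4TB of camera files per day over a 30-day shoot.
print(tapes_needed(4.0, 30, "LTO-7"))  # 60 tapes
```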

In terms of working materials, the group generally restores only what is needed for each individual show from LTO tape, as opposed to keeping the entire show on spinning disc. “This allows us to use those really fast discs in a cost-effective way,” Hayes says.

Following the editorial process, Light Iron restores only the needed shots plus handles from tape directly to the StorNext SAN, so online editors can have immediate access. The material stays on the system while the conform and DI occur, followed by the creation of final deliverables, which are sent to the tier two and tier three spinning disk storage. If the project needs to be archived to tape, Mock’s department takes care of that; if it needs to be uploaded, that usually happens from the spinning discs.
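Conceptually, a "shots plus handles" restore is just a padded range calculation. The sketch below is a minimal illustration with an invented clip-list format and an assumed 24-frame handle; it is not Light Iron's actual restore tooling.

```python
HANDLE = 24  # assumed: one second of handles at 24fps

def restore_ranges(edits, handle=HANDLE):
    """Pad each (clip, in, out) frame range by the handle length."""
    for clip, first, last in edits:
        yield clip, max(0, first - handle), last + handle

# Invented EDL-like list of shots actually used in the cut.
edl = [("A001_C003", 1008, 1290), ("A002_C011", 240, 352)]
for clip, first, last in restore_ranges(edl):
    print(f"restore {clip} frames {first}-{last}")
```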

Light Iron’s FilmLight Baselight systems have local storage, which is used mainly as cache volumes to ensure sustained playback in the color suite. In addition, Blackmagic Resolve color correctors play back content directly to the SAN using tier two storage.

Keenan Mock

Light Iron continually analyzes its storage infrastructure and reviews its options in terms of the latest technologies. Currently, the company considers its existing storage solution to be highly functional, though it is reviewing options for the latest versions of flash solutions from Quantum in 2020.

Based on the facility’s storage workflow, there’s minimal danger of maxing out the storage space anytime soon.

While Light Iron is religious about creating a duplicate set of tapes for backup, “it’s a very rare occurrence [for the duplicate to be needed],” notes Mock. “But it can happen, and in that circumstance, Light Iron is prepared.”

As for the shared storage, the datasets used in post, compared to other industries, are very large, “and without shared storage and a clustered file system, we wouldn’t be able to do the jobs we are currently doing,” Hayes notes.

Final Frame
With offices in New York City and London, Final Frame is a full-featured post facility offering a range of services, including DI of every flavor, 8mm to 70mm film scanning and restoration, offline editing, VFX, sound editing (theatrical and home Dolby Atmos) and mastering. Its work spans feature films, documentaries and television. The facility’s recent work on the documentary film Apollo 11, though, tested its infrastructure like no other, including the amount of storage space it required.

Will Cox

“A long time ago, we decided that for the backbone of all our storage needs, we were going to rely on fiber. We have a total of 55 edit rooms, five projection theaters and five audio mixing rooms, and we have fiber connectivity between all of those,” says Will Cox, CEO/supervising colorist. So, for the past 20 years, ever since 1Gb fiber became available, Final Frame has relied on this setup, though every five years or so, the shop has upgraded to the next level of fiber and is currently using 16Gb fiber.

“Storage requirements have increased because image data has increased and audio data has increased with Atmos. So, we’ve needed more storage and faster storage,” Cox says.

While the core of the system is fiber, the facility uses a variety of storage arrays, the bulk of which are 16Gb 4000 Series SAN offerings from Infortrend, totaling approximately 2PB of space. In addition, the studio uses 8Gb Promise Technology VTrak arrays, also totaling about 1PB. Also installed at the facility are some 8Gb JetStor offerings. For SAN management, Final Frame uses Tiger Technology’s Tiger Store.

Foremost in Cox’s mind when looking for a storage solution is interoperability, since Final Frame uses Linux, Mac and Windows platforms; reliability and fault tolerance are important as well. “We run RAID-6 and RAID-60 for pretty much everything,” he adds. “We also focus on how good the remote management is. We’ve brought online so much storage, we need the storage vendors to provide good interfaces so that our engineers and IT people can manage and get realtime feedback about the performance of the arrays and any faults that are creeping in, whether it’s due to failed drives or drives that are performing less than we had anticipated.”

Final Frame has also brought on a good deal more SSD storage. “We manage projects a bit differently now than we used to, where we have more tiered storage,” Cox adds. “We still do a lot of spinning discs, but SSD is moving in, and that is changing our workflows somewhat in that we don’t have to render as many files and as many versions when we have really fast storage. As a result, there’s some cost-savings on personnel at the workflow level when you have extremely fast storage.”

When working with clients who are doing offline editing, Final Frame will build an isolated SAN for them, and when it comes time to finish the project, whether it’s a picture or audio, the studio will connect its online and mixing rooms to that SAN. This setup is beneficial to security, Cox contends, as it accelerates the workflow since there’s no copying of data. However, aside from that work, everyone generally has parallel access to the storage infrastructure and can access it at any time.

More recently, in addition to other projects, Final Frame began working on Apollo 11, a documentary directed by Todd Douglas Miller. Miller wanted to rescan all the original negatives and all the other original elements available from the Apollo 11 moon landing, building the film entirely from NASA’s audio and footage (16mm and 35mm) of that extraordinary feat. “He asked if we could make a movie just with the archival elements of what existed,” says Cox.

While ramping up and determining a plan of attack — Final Frame was going to scan the data at 4K resolution — NASA and NARA (National Archives and Records Administration) discovered a lost cache of archives containing 65mm and 70mm film.

“At that point, we decided that existing scanning technology wasn’t sufficient, and we’d need a film scanner to scan all this footage at 16K,” Cox adds, noting the company had to design and build an entirely new 16K film scanner and then build a pipeline that could handle all that data. “If you can imagine how tough 4K is to deal with, then think about 16K, with its insanely high data rates. And 8K is four times larger than 4K, and 16K is four times larger than 8K, so you’re talking about orders-of-magnitude increases in data.”
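The arithmetic behind those data rates is worth making concrete. The sketch below assumes an arbitrary 6 bytes per pixel (roughly 16-bit RGB) and 24fps playback, chosen only to show how each doubling of resolution quadruples the data.

```python
BYTES_PER_PIXEL = 6  # assumed ~16-bit RGB; real scans vary

for name, w, h in [("4K", 4096, 2160), ("8K", 8192, 4320), ("16K", 16384, 8640)]:
    frame_mb = w * h * BYTES_PER_PIXEL / 1e6
    rate_gbs = frame_mb * 24 / 1e3  # sustained rate at 24fps
    print(f"{name}: {frame_mb:6,.0f} MB/frame, {rate_gbs:5,.1f} GB/s at 24fps")

# 4K is roughly 53 MB/frame; 8K quadruples that, and 16K quadruples it again.
```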

Adding to the complexity, the facility had no idea how much footage it would be using. Ultimately, Final Frame considered its storage structure and the cost of taking it to the next level for 16K scanning and determined that the amount of data was simply too much to move and too much to store. “As it was, we filled up a little over a petabyte of storage just scanning the 8K material. We were looking at 4PB, quadrupling the amount of storage infrastructure needed. Then we would have had to run backups of everything, which would have increased it by another 4PB.”

Considering these factors, Final Frame changed its game plan and decided to scan at 8K. “So instead of 2PB to 2.5PB, we would have been looking at 8PB to 10PB of storage if we continued with our earlier plan, and that was really beyond what the production could tolerate,” says Cox.

Even scanning at 8K, the group had to have the data held in the central repository. “We were scanning in, doing what were essentially dailies, restoration and editorial, all from the same core set of media. Then, as editorial was still going on, we were beginning to conform and finish the film so we could make the Sundance deadline,” recalls Cox.

In terms of scans, copies and so forth, Final Frame stored about 2.5PB of data for that project. But in terms of data created and then destroyed, the amount was between 12PB and 15PB. To handle this load, the facility needed storage that was fast, highly redundant and large. This led the company to bring on an additional 1PB of Fibre Channel SAN storage, dedicated to just the Apollo 11 project, to add to the 1.5PB already in place. “We almost had to double the amount of storage infrastructure in the whole facility just to run this one project,” Cox points out. The additional storage was added in half-petabyte array increments, all connected to the SAN, all at 16Gb fiber.

While storage is important to any project, it was especially true for the Apollo 11 project due to the aggressive deadlines and excessively large storage needs. “Apollo 11 was a unique project. We were producing imagery that was being returned to the National Archives to be part of the historic record. Because of the significance of what we were scanning, we had to be very attentive to the longevity and accuracy of the media,” says Cox. “So, how it was being stored and where it was being stored were important factors on this project, more so than maybe any other project we’ve ever done.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Storage for UHD and 4K

By Peter Collins

Over the past few years, we have seen a huge audience uptake of UHD and 4K technologies. The increase in resolution offers more detailed imagery, and the adoption of HDR brings bigger and brighter colors.

UHD technologies are a significant selling point and are quickly becoming the “new normal” for many commissioners. VOD providers, in particular, are behind the wheel and pushing things forward rapidly — it’s not just a creative decision, but one that is now required for delivery. Essentially, something cinematographers used to have to fight for is now being mandated by those commissioning the content.

This is all very exciting, but what does this mean for productions in general? There are wide-ranging implications and questions of logistics — timescales for data transfer and processing increase, post production infrastructure and workflows must be adapted, and archiving and retrieval times are extended (to say the least).

With these UHD and 4K productions having storage requirements into the hundreds of terabytes between various stages of the supply chain, the need to store the data in an accessible, secure and affordable manner is critical.

The majority of production, VFX, post and mastering facilities are currently still working the traditional way — from physical on-premises storage (on-prem for those who like to shave off a couple of syllables) such as NAS, local storage, LTO and SANs to distributed data stores spread across different buildings of a facility.

With UHD and 4K projects sometimes generating north of half a petabyte of data (which needs to stick around until delivery is complete and beyond), it’s not a simple problem to ensure that large chunks of that data are available and accessible for everyone involved in the project who needs it — at least not in the most time-effective way. And as sure as death and taxes, no matter how much storage you have to hand, you will miraculously start running out far sooner than you anticipated. Since this affects all stages of the supply chain, doesn’t it make sense to have some central store of data for everyone to access what they need, when they need it?

Across all areas of the industry, we are seeing the adoption of cloud storage over the traditional on-premises solution and are starting to see opportunities where a cloud-based solution might save money, time or, even better, both! There are numerous cloud “types” out there and below is my overview of the four most widely adopted.

Public: The public cloud can offer large amounts of storage for as long as it’s required (i.e., paid for) and stop charging you for it when it’s not (which is a nice change from having to buy storage with a lengthy support contract). The physical infrastructure of a public cloud is shared with other customers of the cloud provider (this is known as multi-tenancy); however, all the resources allocated to you are invisible to other customers. Your data may be spread across several different areas of the data center (or beyond) depending on where the provider’s infrastructure has the most availability.

Private: Private clouds (from a storage perspective) are useful for those needing finer grained control over their data. Private clouds are those in which companies build their own infrastructure to support the services they want to offer and have complete control over where their data physically resides.

The downside to private clouds is cost, as the business is effectively paying to be their own cloud provider and maintaining the systems over their lifetime. With this in mind, many of the bigger public cloud providers offer “virtual private clouds,” in which a chunk of their resources are dedicated solely to a single customer (single-tenancy). This of course comes at a slightly higher cost than the plain public cloud offering, but does allow more finely grained control for those consumers who need it.

Hybrid: Hybrid clouds are, as the name suggests, a mixture of the two cloud approaches outlined above (public and private). This offers the best of both worlds and can be a useful approach when flexibility is required, or when certain data accessing processes are not practical to run from an off-site public cloud (at time of writing, a 50fps realtime stream of uncompressed 4K raw to a grade, for example, is unlikely to happen from a vanilla public cloud agreement without some additional bandwidth discussions — and costs).

Having the flexibility to migrate data between a virtual private cloud and a local private cloud while continuing to work could help minimize the impact on existing local infrastructure, and it could also enable workflows and interchange between local and “cloud-native” applications. Certain processes that take up a lot of resources locally could be relocated to a virtual private cloud at a lower cost, freeing up local resources for more time-sensitive applications.

Community: Here’s where the cloud could shine as a prospect from a production standpoint. This cloud model is based on businesses and those with a stake in the process pooling their resources and collaborating, coming up with a system and overarching set of processes that they all operate under — in effect offering a completely customized set of cloud services for any given project.

From a storage perspective, this could mean a production company running a virtual private cloud with the cost being distributed across all stakeholders accessing that data. Original camera files, for example, may be transferred to this virtual private cloud during the shoot, with post, VFX, marketing and reversioning houses downloading and uploading their work in turn. As all data transfers are monitored and tracked, the billing from a production standpoint on a per-vendor (or departmental) basis becomes much easier — everyone just pays for what they use.
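Given such a transfer log, the chargeback itself is a simple aggregation. The following is a minimal sketch with an invented log format and a hypothetical flat per-gigabyte rate; real providers meter transfers with far more nuance.

```python
from collections import defaultdict

RATE_PER_GB = 0.02  # hypothetical flat transfer rate

# Invented transfer log: (vendor, direction, gigabytes moved).
transfers = [
    ("post_house", "download", 800),
    ("vfx_vendor", "download", 1200),
    ("vfx_vendor", "upload", 300),
    ("marketing", "download", 50),
]

bill = defaultdict(float)
for vendor, _direction, gb in transfers:
    bill[vendor] += gb * RATE_PER_GB

for vendor, amount in sorted(bill.items()):
    print(f"{vendor}: {amount:.2f}")  # each stakeholder pays for its usage
```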

MovieLabs’ “Envisioning Production in 2030” white paper goes deeper into production-related applications of cloud technologies over the coming decade (among other sharp insights) and is well worth absorbing over a cup of coffee or two.

As production technologies progress, we are only ever going to generate more and more data. For storage professionals, those managing systems or project managers looking to improve timeframes and reduce costs, the deciding factors may not be only financial or logistical. They may also include how easily a solution facilitates collaboration and interchange and fosters closer working relationships. On that question, the cloud may well be the clear best fit.

Studio Images: Goldcrest Post Production / Neil Harrison


Peter Collins is a post professional with experience working in film and television globally. He has worked at the forefront of new production technologies and consults on workflows, project management and industry best practices. He can be contacted on Twitter at @PCPostPro or by email at pcpostpro@icloud.com.

Storage for Editors

By Karen Moltenbrey

Whether you are a small-, medium- or large-size facility, storage is at the heart of your workflow. Consider, for instance, the one-person shop Fin Film Company, which films and edits footage for branding and events, often on water. Then there’s Uppercut, a boutique creative/post studio where collaborative workflow is the key to pushing boundaries on commercials and other similar projects.

Let’s take a look at Uppercut’s workflow first…

Uppercut
Uppercut is a boutique creative editorial shop founded by Micah Scarpelli in 2015 that offers a range of post services. Based in New York and soon Atlanta, the studio employs five editors, each with their own suite, along with an in-house Flame artist who has his own suite.

Taylor Schafer

In contrast to Uppercut’s size, its storage needs are quite large, with five editors working on as many as five projects at a time. Although most of it is commercial work, some of those projects can get heavy in terms of the generated media, which is stored on-site.

So, for its storage needs, the studio employs an EditShare RAID system. “Sometimes we have multiple editors working on one large campaign, and then usually an assistant is working with an editor, so we want to make sure they have access to all the media at the same time,” says Taylor Schafer, an assistant editor at Uppercut.

Additionally, Uppercut uses a Supermicro nearline server to store some of its VFX data, as the Flame artist cannot access the EditShare system on his CentOS operating system. Furthermore, the studio uses LTO-6 archive media in a number of ways. “We use EditShare’s Ark to LTO our partitions once the editors are done with them for their projects. It’s wonderfully integrated with the whole EditShare system. Ark is easy to navigate, and it’s easy to swap LTO tapes in and out, and everything is in one location,” says Schafer.

The studio employs the EditShare Ark to archive its editors’ working files, such as Premiere and Avid projects, graphics, transcodes and so forth. Uppercut also uses BRU (Backup Restore Utility) from Tolis Group to archive larger files that only live on LaCie hard drives and not on EditShare, such as a raw grade. “Then we’re LTO’ing the project and the whole partition with all the working files at the end through Ark,” Schafer explains.

The importance of having a system like this was punctuated over the summer when Uppercut underwent a renovation and had to move into temporary office space at Light Iron, New York — without the EditShare system. As a result, the team had to work off of hard drives and Light Iron’s Avid Nexis for some limited projects. “However, due to storage limits, we mainly worked off of the hard drives, and I realized how important a file storage system that has the ability to share data in real time truly is,” Schafer recalls. “It was a pain having to copy everything onto a hard drive, hand it back to the editor to make new changes, copy it again and make sure all the files were up to date, as opposed to using a storage system like ours, where everything is instantly up to date. You don’t have to worry whether something copied over correctly or not.”

She continues: “Even with Nexis, we were limited in our ability to restore old projects, which lived on EditShare.”

When a new project comes in at Uppercut, the first thing Schafer and her colleagues do is create a partition on EditShare and copy over the working template, whether it’s for Avid or Premiere, on that partition. Then they get their various working files and start the project, copying over the transcodes they receive. As the project progresses, the artists will get graphics and update the partition size as needed. “It’s so easy to change on our end,” notes Schafer. And once the project is completed, she or another assistant will make sure all the files they would possibly need, dating back to day one of the project, are on the EditShare, and that the client files are on the various hard drives and FTP links.
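That template-driven setup can be approximated with a short script. The sketch below is hypothetical, using plain directories and invented paths; actual partition creation and sizing would happen in EditShare's own management tools.

```python
import shutil
from pathlib import Path

TEMPLATES = Path("/templates")   # hypothetical template root
PARTITIONS = Path("/editshare")  # hypothetical mount point

def new_project(name: str, nle: str = "avid") -> Path:
    """Scaffold a project from the working template for the chosen NLE."""
    dest = PARTITIONS / name
    shutil.copytree(TEMPLATES / nle, dest)       # copy the working template
    (dest / "transcodes").mkdir(exist_ok=True)   # dailies land here
    (dest / "graphics").mkdir(exist_ok=True)     # gfx arrive as the cut evolves
    return dest

print(new_project("reebok_spring"))
```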

Reebok

“We’ll LTO the partition on EditShare through Ark onto an LTO-6 tape, and once that is complete, then generally we will take the projects or partition off the EditShare,” Schafer continues. The studio has approximately 26TB of RAID storage but, due to the large size of the projects, cannot retain everything on the EditShare long term. Nevertheless, the studio has a nearline server that hosts its masters and generics, as well as any other file the team might need to send to a client. “We don’t always need to restore. Generally the only time we try to restore is when we need to go back to the actual working files, like the Premiere or Avid project,” she adds.

Uppercut avoids keeping data locally on workstations due to the collaborative workflow.

According to Schafer, the storage setup is easy to use. Recently, Schafer finished a Reebok project she and two editors had been working on. The project initially started in Avid Media Composer, which was preferred by one of the editors. The other editor prefers Premiere but is well-versed on the Avid. After they received the transcodes and all the materials, the two editors started working in tandem using the EditShare. “It was great to use Avid on top of it, having Avid bins to open separately and not having to close out of the project and sharing through a media browser or closing out of entire projects, like you have to do with a Premiere project,” she says. “Avid is nice to work with in situations where we have multiple editors because we can all have the project open at once, as opposed to Premiere projects.”

Later, after the project was finished, the editor who prefers Premiere did a director’s cut in that software. As a result, Schafer had to re-transcode the footage, “which was more complicated because it was shot on 16mm, so it was also digitized and on one large video reel instead of many video files — on top of everything else we were doing,” she notes. She re-transcoded for Premiere and created a Premiere project from scratch, then added more storage on EditShare to make sure the files were all in place and that everything was up to date and working properly. “When we were done, the client had everything; the director had his director’s cut and everything was backed up to our nearline for easy access. Then it was LTO’d through Ark on LTO-6 tapes and taken off EditShare, as well as LTO’d on BRU for the raw and the grade. It is now done, inactive and archived.”

Without question, says Schafer, storage is important in the work she and her colleagues do. “It’s not so much about the storage itself, but the speed of the storage, how easily I’m able to access it, how collaborative it allows me to be with the other people I’m working with. Storage is great when it’s accessible and easy for pretty much anyone to use. It’s not so good when it’s slow or hard to navigate and possibly has tech issues and failures,” Schafer says. “So, when I’m looking for storage, I’m looking for something that is secure, fast and reliable, and most of all, easy to understand, no matter the person’s level of technical expertise.”

Chris Aguilar

Fin Film Company
People can count themselves fortunate when they can mix business with pleasure and integrate their beloved hobby with their work. Such is the case for solo producer/director/editor Chris Aguilar of Fin Film Company in Southern California, which he founded a decade ago. As Aguilar says, he does it all, as does Fin Film, which produces everything from conferences to music videos and commercial/branded content. But his real passion involves outdoor adventure paddle sports, from stand-up paddleboarding to pro paddleboarding.

“That’s been pretty much my niche,” says Aguilar, who got his start doing in-house production (photography, video and so forth) for a paddleboard company. Since then, he has been able to turn his passion and adventures into full-time freelance work. “When someone wants an event video done, especially one involving paddleboard races, I get the phone call and go!”

Like many videographers and editors, Aguilar got his start filming weddings. Always into surfing himself, he would shoot surfing videos of friends “and just have fun with it,” he says of augmenting that work. Eventually, this allowed him to move into areas he is more passionate about, such as surfing events and outdoor sports. Now, Aguilar finds that a lot of his time is spent filming paddleboard events around the globe.

Today, there are many one-person studios with solo producers, directors and editors. And as Aguilar points out, their storage needs might not be on the level of feature filmmakers or even independent TV cinematographers, but that doesn’t negate their need for storage. “I have some pretty wide-ranging storage needs, and it has definitely increased over the years,” he says.

In his work, Aguilar has to avoid cumbersome and heavy equipment, such as Atomos recorders, because of their weight on board the watercraft he uses to film paddleboard events. “I’m usually on a small boat and don’t have a lot of room to haul a bunch of gear around,” he says. Rather, Aguilar uses Panasonic’s AG-CX350 as well as Panasonic’s EVA1 and GH5, and on a typical two-day shoot (the event and interviews), he will fill five to six 64GB cards.

“Because most paddleboard races are long-distance, we’re usually on the water for about five to eight hours,” says Aguilar. “Although I am not rolling cameras the whole time, the weight still adds up pretty quickly.”

As for storage, Aguilar offloads his video onto SSDs or other kinds of external media. “I call it my working drive, for editing and that kind of thing,” he says. “Once I am done with the edit and other tasks, I have all those source files somewhere.” He relies on G-Technology’s 1TB G-Drive Mobile SSD in the field and for some editing, that company’s G-Drive ev RaW portable drive for backups and some editing, and Glyph’s Atom SSD in the field.

For years, that “somewhere” has been a cabinet that was filled with archived files. Indeed, that cabinet is currently holding, in Aguilar’s estimate, 30TB of data, if not more. “That’s just the archives. I have 10 or 11 years of archives sitting there. It’s pretty intense,” he adds. But, as soon as he gets an opportunity, those will be ported to the same cloud backup solution he is using for all his current work.

Yes, he still uses the source cards, but for a typical project involving an end-to-end shoot, Aguilar will use at least a 1TB drive to house all the source cards and all the subsequent work files. “Things have changed. Back in the day, I used hard drives – you should see the cabinet in my office with all these hard drives in it. Thank God for SSDs and other options out there. It’s changed our lives. I can get [some brands of] 1TB SSD for $99 or a little more right now. My workflow has me throwing all the source cards onto something like that that’s dedicated to all those cards, and that becomes my little archive,” explains Aguilar.

He usually uploads the content as fast as possible to keep the data secure. “That’s always the concern, losing it, and that’s where Backblaze comes in,” Aguilar says. Backblaze is a cloud backup solution that is easily deployed across desktops and laptops and managed centrally — a solution Aguilar recently began employing. He also uses Iconik Solutions’ digital management system, which eases the task of looking up video files or pulling archived files from Backblaze. The digital management system sits on top of Backblaze and creates little offline proxies of the larger content, allowing Aguilar to view the entire 10-year archive online in one interface.
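The proxy idea is what makes a 10-year archive browsable online. As a rough illustration only (Iconik generates its own proxies), the sketch below shows the general shape of that step using ffmpeg, with assumed paths and settings.

```python
import subprocess
from pathlib import Path

def make_proxy(src: Path, proxy_dir: Path) -> Path:
    """Render a small H.264 browse proxy of a camera original with ffmpeg."""
    proxy_dir.mkdir(exist_ok=True)
    proxy = proxy_dir / (src.stem + "_proxy.mp4")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-vf", "scale=-2:540",            # downscale to 540 lines for browsing
        "-c:v", "libx264", "-crf", "28",  # small, browse-quality video
        "-c:a", "aac", "-b:a", "96k",
        str(proxy),
    ], check=True)
    return proxy  # the full-size original then goes to the backup tool

print(make_proxy(Path("race_cam1.mov"), Path("proxies")))
```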

According to Aguilar, his archived files are an important aspect of his work. Since he works so many paddleboard events, he often receives requests for clips from specific racers or races, some dating back years. Prior to using Backblaze, if someone requested footage, it was a challenge to locate it because he’d have to pull that particular hard drive and plug it into the computer, “and if I had been organized that year, I’ll know where that piece of content is because I can find it. If I wasn’t organized that year, I’d be in trouble,” he explains. “At best, though, it would be an hour and a half or more of looking around. Now I can locate and send it in 15 minutes.”

Aguilar says the Iconik digital management system allows him to pull up the content on the interface and drill down to the year of the race, click on it, download it and send it off or share it directly through his interface to the person requesting the footage.

Aguilar went live with this new Backblaze and digital management system storage workflow this year and has been fully on board with it for just the past two to three months. He is still uncovering all the available features and the power underneath the hood. “Even for a guy who’s got a technical background, I’m still finding things I didn’t know I could do,” and as such, Aguilar is still fine-tuning his workflow. “The neat thing with Iconik is that it could actually support online editing straight up, and that’s the next phase of my workflow, to accommodate that.”

Fortunately or unfortunately, at this time Aguilar is just starting to come off his busy season, so now he can step back, explore the new system and transfer onto it all the material on the old source cards in that cabinet of his.

“[The new solution] is more efficient and has reduced costs since I am not buying all these drives anymore. I can reuse them now. But mostly, it has given me peace of mind that I know the data is secure,” says Aguilar. “I have been lucky in my career to be present for a lot of cool moments in the sport of paddling. It’s a small community and a very close-knit group. The peace of mind knowing that this history is preserved, well, that’s something I greatly appreciate. And I know my fellow paddlers also appreciate it.”


Karen Moltenbrey is a veteran writer, covering visual effects and post production.

Storage Roundtable

By Randi Altman

Every year in our special Storage Edition, we poll those who use storage and those who make storage. This year is no different. The users we’ve assembled for our latest offering weigh in on how they purchase gear and how they employ storage and cloud-based solutions. Storage makers talk about what’s to come from them, how AI and ML are affecting their tools, NVMe growth and more.

Enjoy…

Periscope Post & Audio, GM, Ben Benedetti

Periscope Post & Audio is a full-service post company with facilities in Hollywood and Chicago’s Cinespace. Both facilities provide a range of sound and picture finishing services for TV, film, spots, video games and other media.

Ben Benedetti

What types of storage are you using for your workflows?
For our video department, we have a large, high-speed Quantum media array supporting three color bays, two online edit suites, a dailies operation, two VFX suites and a data I/O department. The 15 systems in the video department are connected via 16Gb fiber.

For our sound department, we are using an Avid Nexis System via 6e Ethernet supporting three Atmos mix stages, two sound design suites, an ADR room and numerous sound-edit bays. All the CPUs in the facility are securely located in two isolated machine rooms (one for video on our second floor and one for audio on the first). All CPUs in the facility are tied via an IHSE KVM system, giving us incredible flexibility to move and deliver assets however our creatives and clients need them. We aren’t interested in being the biggest. We just want to provide the best and most reliable services possible.

Cloud versus on-prem – what are the pros and cons?
We are blessed with a robust pipe into our facility in Hollywood and are actively discussing potential cloud-based storage solutions with our engineering staff. We are already using some cloud-based solutions for our building’s security and CCTV systems as well as the management of our firewall. But the concept of placing client intellectual property in the cloud sparks some interesting conversations. We always need immediate access to the raw footage and sound recordings of our client productions, so I sincerely doubt we will ever completely rely on a cloud-based solution for the storage of our clients’ original footage. We have many redundancy systems in place to avoid slowdowns in production workflows. This is so critical. Any potential interruption in connectivity that is beyond our control gives me great pause.

How often are you adding or upgrading your storage?
Obviously, we need to be as proactive as we can so that we are never caught unready to take on projects of any size. It involves continually ensuring that our archive system is optimized correctly and requires our data management team to constantly analyze available space and resources.

How do you feel about the use of ML/AI for managing assets?
Any AI or ML automated process that helps us monitor our facility is vital. Technology advancements over the past decade have allowed us to achieve amazing efficiencies. As a result, we can give the creative executives and storytellers we service the time they need to realize their visions.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
As we have facilities in both Chicago and Hollywood, our ability to take advantage of Google cloud-based services for administration has been a real godsend. It’s not glamorous, but it’s extremely important to keeping our facilities running at peak performance.

The level of coordination we have achieved in that regard has been tremendous. Those low-tiered storage systems provide simple and direct solutions to our administrative and accounting needs, but when it comes to the high-performance requirements of our facility’s color bays and audio rooms, we still rely on the high-speed on-premises storage solutions.

For simple archiving purposes, a cloud-based solution might work very well, but for active work currently in production … we are just not ready to make that leap … yet. Of course, given Moore’s Law and the exponential advancement of technology, our position could change rapidly. The important thing is to remain open and willing to embrace change as long as it makes practical sense and never puts your client’s property at risk.

Panasas, Storage Systems Engineer, RW Hawkins

RW Hawkins

Panasas offers a scalable high-performance storage solution. Its PanFS parallel file system, delivered on the ActiveStor appliance, accelerates data access for VFX feature production, Linux-based image processing, VR/AR and game development, and multi-petabyte active media archives.

What kind of storage are you offering, and will that be changing in the coming year?
We just announced that we are now shipping the next generation of the PanFS parallel file system on the ActiveStor Ultra turnkey appliance, which is already in early deployment with five customers.

This new system offers unlimited performance scaling in 4GB/s building blocks. It uses multi-tier intelligent data placement to maximize storage performance by placing metadata on low-latency NVMe SSDs, small files on high IOPS SSDs and large files on high-bandwidth HDDs. The system’s balanced-node architecture optimizes networking, CPU, memory and storage capacity to prevent hot spots and bottlenecks, ensuring high performance regardless of workload. This new architecture will allow us to adapt PanFS to the ever-changing variety of workloads our customers will face over the next several years.
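A toy model makes that placement policy concrete. In the sketch below, the 64KB small-file threshold is an assumption for illustration, not Panasas' actual cutoff, and the tier names are generic.

```python
SMALL_FILE_LIMIT = 64 * 1024  # bytes; assumed threshold for illustration

def place(kind: str, size_bytes: int) -> str:
    """Pick a media tier per the placement policy described above."""
    if kind == "metadata":
        return "low_latency_nvme"  # metadata wants low latency
    if size_bytes <= SMALL_FILE_LIMIT:
        return "high_iops_ssd"     # small files want IOPS
    return "high_bandwidth_hdd"    # large files want streaming bandwidth

print(place("metadata", 512))     # low_latency_nvme
print(place("data", 16 * 1024))   # high_iops_ssd
print(place("data", 4 * 10**9))   # high_bandwidth_hdd
```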

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Absolutely. However, too many tiers can lead to frustration around complexity, loss of productivity and poor reliability. We take a hybrid approach, whereby each server has multiple types of storage media internal to one server. Using intelligent data placement, we put data on the most appropriate tier automatically. Using this approach, we can often replace a performance tier and a tier two active archive with one cost-effective appliance. Our standard file-based client makes it easy to gateway to an archive tier such as tape or an object store like S3.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI/ML is so widespread, it seems to be all encompassing. Media tools will benefit greatly because many of the mundane production tasks will be optimized, allowing for more creative freedom. From a storage perspective, machine learning is really pushing performance in new directions; low latency and metadata performance are becoming more important. Large amounts of unstructured data with rich metadata are the norm, and today’s file systems need to adapt to meet these requirements.

How has NVMe advanced over the past year?
Everyone is taking notice of NVMe; it is easier than ever to build a fast array and connect it to a server. However, there is much more to making a performant storage appliance than just throwing hardware at the problem. My customers are telling me they are excited about this new technology but frustrated by the lack of scalability, the immaturity of the software and the general lack of stability. The proven way to scale is to build a file system on top of these fast boxes and connect them into one large namespace. We will continue to augment our architecture with these new technologies, all the while keeping an eye on maintaining our stability and ease of management.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Today’s modern NAS can take on all the tasks that historically could only be done with SAN. The main thing holding back traditional NAS has been the client access protocol. With network-attached parallel clients, like Panasas’ DirectFlow, customers get advanced client caching, full POSIX semantics and massive parallelism over standard ethernet.

Regarding cloud, my customers tell me they want all the benefits of cloud (data center consolidation, inexpensive power and cooling, ease of scaling) without the vendor lock-in and metered data access of the “big three” cloud providers. A scalable parallel file system forms the core of a private cloud model that yields the benefits without the drawbacks. File-based access to the namespace will continue to be required for most non-web-based applications.

Goldcrest Post, New York, Technical Director, Ahmed Barbary

Goldcrest Post is an independent post facility, providing solutions for features, episodic TV, docs, and other projects. The company provides editorial offices, on-set dailies, picture finishing, sound editorial, ADR and mixing, and related services.

Ahmed Barbary

What types of storage are you using for your workflows?
Storage performance in the post stage is tremendously demanding. We are using multiple SAN systems in office locations that provide centralized storage and easy access to disk arrays, servers, and other dedicated playout applications to meet storage needs throughout all stages of the workflow.

While backup refers to duplicating the content for peace of mind, short-term retention, and recovery, archival signifies transferring the content from the primary storage location to long-term storage to be preserved for weeks, months, and even years to come. Archival storage needs to offer scalability, flexible and sustainable pricing, as well as accessibility for individual users and asset management solutions for future projects.

LTO has been a popular choice for archival storage for decades because it offers affordable, high-capacity media suited to the low-write/high-read workloads that are optimal for cold storage. The increased need for instant access to archived content today, coupled with the slow roll-out of LTO-8, has made tape a less favorable option.

Cloud versus on-prem – what are the pros and cons?
The fact is each option has its positives and negatives, and understanding that and determining how both cloud and on-premises software fit into your organization are vital. So, it’s best to be prepared and create a point-by-point comparison of both choices.

When looking at the pros and cons of cloud vs. on-premises solutions, everything starts with an understanding of how these two models differ. With a cloud deployment, the vendor hosts your information and offers access through a web portal. This enables more mobility and flexibility of use for cloud-based software options. When looking at an on-prem solution, you are committing to local ownership of your data, hardware, and software. Everything is run on machines in your facility with no third-party access.

How often are you adding or upgrading your storage?
We keep track of new technologies and continuously upgrade our systems, but when it comes to storage, it’s a huge expense. When deploying a new system, we do our best to future-proof and ensure that it can be expanded.

How do you feel about the use of ML/AI for managing assets?
For most M&E enterprises, the biggest potential of AI lies in automatic content recognition, which can drive several path-breaking business benefits. For instance, most content owners have thousands of video assets.

Cataloging, managing, processing and re-purposing this content typically requires extensive manual effort. Advancements in AI and ML algorithms have now made it possible to drastically cut down the time taken to perform many of these tasks. But there is still a lot of work to be done — especially as ML algorithms need to be trained, using the right kind of data and solutions, to achieve accurate results.

What role might the different tiers of cloud storage play in the lifecycle of an asset?
Data sets have unique lifecycles. Early in the lifecycle, people access some data often, but the need for access drops drastically as the data ages. Some data stays idle in the cloud and is rarely accessed once stored. Some data expires days or months after creation, while other data sets are actively read and modified throughout their lifetimes.

Rohde & Schwarz, Product Manager, Storage Solutions, Dirk Thometzek

Rohde & Schwarz offers broadcast and media solutions to help companies grow in media production, management and delivery in the IP and wireless age.

Dirk Thometzek

What kind of storage are you offering, and will that be changing in the coming year?
The industry is constantly changing, so we monitor market developments and key demands closely. We will be adding new features to the R&S SpycerNode in the next few months that will enable our customers to get their creative work done without focusing on complex technologies. The R&S SpycerNode will be extended with JBODs, which will allow seamless integration with our erasure coding technology, guaranteeing complete resilience and performance.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
Each workflow is different, so, consequently, almost no two systems are alike. The real artistry is to tailor storage systems according to real requirements without over-provisioning hardware or over-stressing budgets. Using different tiers can be very helpful in building effective systems, but they might introduce additional difficulties to the workflows if the system isn’t properly designed.

Rohde & Schwarz has developed R&S SpycerNode in a way that its performance is linear and predictable. Different tiers are aggregated under a single namespace, and our tools allow seamless workflows while complexity remains transparent to the users.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
Machine learning and artificial intelligence can be helpful to automate certain tasks, but they will not replace human intervention in the short term. It might not be helpful to enrich media with too much data because doing so could result in imprecise queries that return far too much content.

However, clearly defined changes in sequences or reoccurring objects — such as bugs and logos — can be used as a trigger to initiate certain automated workflows. Certainly, we will see many interesting advances in the future.

How has NVMe advanced over the past year?
NVMe has very interesting aspects. Data rates and reduced latencies are admittedly quite impressive and are garnering a lot of interest. Unfortunately, we do see a trend inside our industry to be blinded by pure performance figures and exaggerated promises without considering hardware quality, life expectancy or proper implementation. Additionally, if well-designed and proven solutions exist that are efficient enough, then it doesn’t make sense to embrace a technology just because it is available.

R&S is dedicated to bringing high-end devices to the M&E market. We think that reliability and performance build the foundation for user-friendly products. Next year, we will update the market on how NVMe can be used in the most efficient way within our products.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We definitely see a trend away from classic Fibre Channel to Ethernet infrastructures for various reasons. For many years, NAS systems have been replacing central storage systems based on SAN technology for a lot of workflows. Unfortunately, standard NAS technologies will not support all necessary workflows and applications in our industry. Public and private cloud storage systems play an important role in overall concepts, but they can’t fulfill all necessary media production requirements or ease up workflows by default. Plus, when it comes to subscription models, [sometimes there could be unexpected fees]. In fact, we do see quite a few customers returning to their previous services, including on-premises storage systems such as archives.

When it comes to the very high data rates necessary for high-end media productions, NAS will relatively quickly reach its technical limits. Only block-level access can deliver the reliable performance necessary for uncompressed productions at high frame rates.

That does not necessarily mean Fibre Channel is the only solution. The R&S SpycerNode, for example, features a unified 100Gb/s Ethernet backbone, wherein clients and the redundant storage nodes are attached to the same network. This allows the clients to access the storage over industry-leading NAS technology or native block level while enabling true flexibility using state-of-the-art technology.

MTI Film, CEO, Larry Chernoff

Hollywood’s MTI Film is a full-service post facility, providing dailies, editorial, visual effects, color correction, and assembly for film, television, and commercials.

Larry Chernoff

What types of storage are you using for your workflows?
MTI uses a mix of spinning and SSD disks. Our volumes range from 700TB to 1000TB and are assigned to projects depending on the volume of expected camera files. The SSD volumes are substantially smaller and are used to play back ultra-large-resolution files when several users are accessing the same file.

Cloud versus on-prem — what are the pros and cons?
MTI only uses on-prem storage at the moment due to the real-time, full-resolution nature of our playback requirements. There is certainly a place for cloud-based storage but, as a finishing house, it does not apply to most of our workflows.

How often are you adding or upgrading your storage?
We are constantly adding storage to our facility and have added or replaced storage each year for the last five. We now have more than 8PB, with plans for more in the future.

How do you feel about the use of ML/AI for managing assets?
Sounds like fun!

What role might the different tiers of cloud storage play in the lifecycle of an asset?
For a post house like MTI, we consider cloud storage to be used only for “deep storage” since our bandwidth needs are very high. The amount of Internet connectivity we would require to replicate the workflows we currently have using on-prem storage would be prohibitively expensive for a facility such as MTI. Speed and ease of access is critical to being able to fulfill our customers’ demanding schedules.

OWC, Founder/CEO, Larry O’Connor

Larry O’Connor

OWC offers storage, connectivity, software, and expansion solutions designed to enhance, accelerate, and extend the capabilities of Mac- and PC-based technology. Their products range from the home desktop to the enterprise rack to the audio recording studio to the motion picture set and beyond.

What kind of storage are you offering, and will that be changing in the coming year?
OWC will be expanding our Jupiter line of NAS storage products in 2020 with an all-new external flash-based array. We will also be launching the OWC ThunderBay Flex 8, a three-in-one Thunderbolt 3 storage, docking and PCIe expansion solution for digital imaging, VFX, video production and video editing.

Are certain storage tiers more suitable for different asset types, workflows etc?
Yes. SSDs and NVMe are better for on-set storage and editing. Once you are finished and looking to archive, HDDs are a better solution for long-term storage.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
We see U.2 SSDs as a trend that can help storage in this space, along with solutions that allow external docking of U.2 drives across different workflow needs.

How has NVMe advanced over the past year?
We have seen NVMe technology become higher in capacity, higher in performance and substantially lower in power draw. Yet even with all the improving performance, costs are lower today versus 12 months ago.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
I see both still having their place — I can’t speak to if one will take over the other. SANs provide other services that typically go hand in hand with M&E needs.

As for cloud, I can see some more cloud coming in, but for M&E on-site needs, it just doesn’t compete anywhere near with what the data rate demand is for editing, etc. Everything independently has its place.

EditShare, VP of Product Management, Sunil Mudholkar

EditShare offers a range of media management solutions, from ingest to archive with a focus on media and entertainment.

Sunil Mudholkar

What kind of storage are you offering and will that be changing in the coming year?
EditShare currently offers RAID and SSD, along with our nearline SATA HDD-based storage. We are on track to deliver NVMe- and cloud-based solutions in the first half of 2020. The latest major upgrade of our file system and management console, EFS2020, enables us to migrate to emerging technologies, including cloud deployment and NVMe hardware.

EFS can manage and use multiple storage pools, enabling clients to use the most cost-effective tiered storage for their production, all while keeping that single namespace.

Are certain storage tiers more suitable for different asset types, workflows etc?
Absolutely. It’s clearly financially advantageous to have varying performance tiers of storage that are in line with the workflows the business requires. This also extends to the cloud, where we are seeing public cloud-based solutions augment or replace both high-performance and long-term storage needs. Tiered storage enables clients to be at their most cost-effective by including parking storage and cloud storage for DR, while keeping SSD and NVMe storage ready and primed for their high-end production.

What do you see are the big technology trends that can help storage for M&E? ML? AI?
AI and ML have somewhat of an advantage for storage when it comes to things like algorithms that are designed to automatically move content between storage tiers to optimize costs. This has been commonplace in the distribution side of the ecosystem for a long time with CDNs. ML and AI have a great ability to impact the Opex side of asset management and metadata by helping to automate very manual, repetitive data entry tasks through audio and image recognition, as an example.

AI can also assist by removing mundane human-centric repetitive tasks, such as logging incoming content. AI can assist with the growing issue of unstructured and unmanaged storage pools, enabling the automatic scanning and indexing of every piece of content located on a storage pool.

How has NVMe advanced over the past year?
Like any other storage medium, when it’s first introduced there are limited use cases that make sense financially, and only a certain few can afford to deploy it. As the technology scales and changes in form factor, and pricing becomes more competitive and in line with other storage options, it can become more mainstream. This is what we are starting to see with NVMe.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
Yes, NAS has overtaken SAN. It’s easier technology to deal with — this is fairly well acknowledged. It’s also easier to find people/talent with experience in NAS. Cloud will start to replace more NAS workflows in 2020, as we are already seeing today. For example, our ACL media spaces project options within our management console were designed for SAN clients migrating to NAS. They liked the granular detail that SAN offered, but wanted to migrate to NAS. EditShare’s ACL enables them to work like a SAN but in a NAS environment.

Zoic Studios, CTO, Saker Klippsten

Zoic Studios is an Emmy-winning VFX company based in Culver City, California, with sister offices in Vancouver and NYC. It creates computer-generated special effects for commercials, films, television and video games.

Saker Klippsten

What types of projects are you working on?
We work on a range of projects for series, film, commercial and interactive games (VR/AR). Most of the live-action projects are mixed with CG/VFX and some full-CG animated shots. In addition, there is typically some form of particle or fluid effects simulation going on, such as clouds, water, fire, destruction or other surreal effects.

What types of storage are you using for those workflows?
Cryogen – Off-the-shelf tape/disk/chip. Access time: more than one day. Mostly tape-based and completely offline, which requires human intervention to load tapes or restore from drives.
Freezing – Tape robot library. Access time: less than half a day. Tape-based and in the robot; this does not require intervention.
Cold – Spinning disk. Access time: slow (online). Disaster recovery and long-term archiving.
Warm – Spinning disk. Access time: medium (online). Data that still needs to be accessed promptly and transferred quickly (asset depot).
Hot – Chip-based. Access time: fast (online). SSD generic active production storage.
Blazing – Chip-based. Access time: uber fast (online). NVMe dedicated storage for 4K and 8K playback, databases and specific simulation workflows. (A rough sketch of how such access times might drive tier selection appears below.)

Cloud versus on-prem – what are the pros and cons?
The great debate! I tend not to look at it as pro vs. con, but as a question of where you are as a company. Many factors are involved; despite what many are led to believe, there is no one size that fits all, and neither cloud nor on-prem alone can solve all your workflow and business challenges.

Cinemax’s Warrior (Credit: HBO/David Bloomer)

There are workflows that are greatly suited for the cloud and others that are potentially cost-prohibitive for a number of reasons, such as the size of the data set being generated. Dynamics cache simulations are a good example; they can quickly generate tens or sometimes hundreds of TBs. If the workflow requires you to transfer this data on premises for review, it could take a very long time. Other workflows, such as 3D CG-generated data, can take better advantage of the cloud. They typically have small source file payloads that need to be uploaded and then only require final frames to be downloaded, which is much more manageable. Depending on the size of your company and the level of technical people on hand, the cloud can also be a problem.

What triggers buying more storage in your shop?
Storage tends to be one of the largest and most significant purchases at many companies. End users do not have a clear concept of what happens at the other end of the wire from their workstation.

All they know is that there is never enough storage and it’s never fast enough. Not investing in the right storage can not only be detrimental to the delivery and production of a show, but also to the mental focus and health of the end users. If artists are constantly having to stop and clean up/delete, it takes them out of their creative rhythm and slows down task completion.

If the storage is not performing properly and is slow, this will not only have an impact on delivery, but the end user might be afraid they are being perceived as being slow. So what goes into buying more storage? What type of impact will buying more storage have on the various workflows and pipelines? Remember, if you are a mature company you are buying 2TB of storage for every 1TB required for DR purposes, so you have a complete up-to-the-hour backup.

Do you see ML/AI as important to your content strategy?
We have been using various layers of ML and heuristics sprinkled throughout our content workflows and pipelines. As an example, we look at the storage platforms we use to understand what’s on our storage, how and when it’s being used, what it’s being used for and how it’s being accessed. We look at the content to see what it contains and its characteristics. What are the overall costs to create that content? What insights can we learn from it for similarly created content? How can we reuse assets to be more efficient?

Dell Technologies, CTO, Media & Entertainment, Thomas Burns

Thomas Burns

Dell offers technologies across workstations, displays, servers, storage, networking and VMware, and partnerships with key media software vendors to provide media professionals the tools to deliver powerful stories, faster.

What kind of storage are you offering, and will that be changing in the coming year?
Dell Technologies offers a complete range of storage solutions from Isilon all-flash and disk-based scale-out NAS to our object storage, ECS, which is available as an appliance or a software-defined solution on commodity hardware. We have also developed and open-sourced Pravega, a new storage type for streaming data (e.g. IoT and other edge workloads), and continue to innovate in file, object and streaming solutions with software-defined and flexible consumption models.

Are certain storage tiers more suitable for different asset types, workflows etc?
Intelligent tiering is crucial to building a post and VFX pipeline. Today’s global pipelines must include software that distinguishes between hot data on the fastest tier and cold or versioned data on less performant tiers, especially in globally distributed workflows. Bringing applications to the media rather than unnecessarily moving media into a processing silo is the key to an efficient production.

What do you see as the big technology trends that can help storage for M&E? ML? AI?
New developments in storage class memory (SCM) — including the use of carbon nanotubes to create a nonvolatile, standalone memory product with speeds rivaling DRAM without needing battery backup — have the potential to speed up media workflows and eliminate AI/ML bottlenecks. New protocols such as NVMe allow much deeper I/O queues, overcoming today’s bus bandwidth limits.

GPUDirect enables direct paths between GPUs and network storage, bypassing the CPU for lower latency access to GPU compute — desirable for both M&E and AI/ML applications. Ethernet mesh, a.k.a. Leaf/Spine topologies, allow storage networks to scale more flexibly than ever before.

How has NVMe advanced over the past year?
Advances in I/O virtualization make NVMe useful in hyper-converged infrastructure, by allowing different virtual machines (VMs) to share a single PCIe hardware interface. Taking advantage of multi-stream writes, along with vGPUs and vNICs, allows talent to operate more flexibly as creative workstations start to become virtualized.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
IP networks scale much better than any other protocol, so NAS allows on-premises workloads to be managed more efficiently than SAN. Object stores (the basic storage type for cloud services) support elastic workloads extremely well and will continue to be an integral part of public, hybrid and private cloud media workflows.

ATTO, Manager, Products Group, Peter Donnelly

ATTO network and storage connectivity products are purpose-made to support all phases of media production, from ingest to final archiving. ATTO offers an ecosystem of high-performance connectivity adapters, network interface cards and proprietary software.

Peter Donnelly

What kind of storage are you offering, and will that be changing in the coming year?
ATTO designs and manufactures storage connectivity products, and although we don’t manufacture storage, we are a critical part of the storage ecosystem. We regularly work with our customers to find the best solutions to their storage workflow and performance challenges.

ATTO designs products that use a wide variety of storage protocols. SAS, SATA, Fibre Channel, Ethernet and Thunderbolt are all part of our core technology portfolio. We’re starting to see more interest in NVMe solutions. While NVMe has already seen some solid growth as an “inside-the-box” storage solution, scalability, cost and limited management capabilities continue to limit its adoption as an external storage solution.

Data protection is still an important criterion in every data center. We are seeing a shift from traditional hardware RAID and parity RAID to software RAID and parity code implementations. Disk capacity has grown so quickly that it can take days to rebuild a RAID group with hardware controllers. Instead, we see our customers taking advantage of rapidly dropping storage prices and using faster, reliable software RAID implementations with basic HBA hardware.

How has NVMe advanced over the past year?
For inside-the-box storage needs, we have absolutely seen adoption skyrocket. It’s hard to beat the price-to-performance ratio of NVMe drives for system boot, application caching and similar use cases.

ATTO is working independently and with our ecosystem partners to bring those same benefits to shared, networked storage systems. Protocols such as NVMe-oF and FC-NVMe are enabling technologies that are starting to mature, and we see these getting further attention in the coming year.

Do you see NAS overtaking SAN for larger work groups? How about cloud taking on some of what NAS used to do?
We see customers looking for ways to more effectively share storage resources. Acquisition and ongoing support costs, as well as the ability to leverage existing technical skills, seem to be important factors pulling people toward Ethernet-based solutions.
However, there is no free lunch, and these same customers aren’t able to compromise on performance and latency concerns, which are important reasons why they used SANs in the first place. So there’s a lot of uncertainty in the market today. Since we design and market products in both the NAS and SAN spaces, we spend a lot of time talking with our customers about their priorities so that we can help them pick the solutions that best fit their needs.

Masstech, CTO, Mike Palmer

Masstech creates intelligent storage and asset lifecycle management solutions for the media and entertainment industry, focusing on broadcast and video content storage management with IT technologies.

Mike Palmer

What kind of storage are you offering, and will that be changing in the coming year?
Masstech products are used to manage any combination of storage types, from on-prem tiers to the cloud. Masstech allows content to move without friction across and through all of these technologies, most often using automated workflows and unified interfaces that hide the complexity otherwise required to directly manage content across so many different types of storage.

Are certain storage tiers more suitable for different asset types, workflows, etc.?
One of the benefits of having such a wide range of storage technologies to choose from is that we have the flexibility to match application requirements with the optimum performance characteristics of different storage technologies in each step of the lifecycle. Users now expect that content will automatically move to storage with the optimal combination of speed and price as it progresses through workflow.

In the past, HSM was designed to handle this task for on-prem storage. The challenge is much wider now with the addition of a plethora of storage technologies and services. Rather than moving between just two or three tiers of on-prem storage, content now often needs to flow through a hybrid environment of on-prem and cloud storage, often involving multiple cloud services, each with three or four sub-tiers. Making that happen in a seamless way, both to users and to integrated MAMs and PAMs, is what we do.

What do you see as the big technology trends that can help storage for M&E?
Cloud storage pricing continues to drop, alongside advances in storage density in both spinning disk and solid state. All of these are interrelated and have the general effect of lowering costs for the end user. For those with specific business requirements that drive on-prem storage, the availability of higher-density tape and optical disks is enabling petabytes of very efficient cold storage in less space than a single rack.

How has NVMe advanced over the past year?
In addition to the obvious application of making media available more quickly, the greatest value of NVMe within M&E may be found in enabling faster search of both structured and unstructured metadata associated with media. Yes, we need faster access to media, but in many cases we must first find the media before it can be accessed. NVMe can make that search experience, particularly for large libraries, federated data sets and media lakes, lightning quick.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
Just as AWS, Azure and Wasabi, among other large players, have replaced many instances of on-prem NAS, so have Box, Dropbox, Google Drive and iCloud replaced many (but not all) of the USB drives gathering dust in the bottom of desk drawers. As NAS is built on top of faster and faster performing technologies, it is also beginning to put additional pressure on SAN – particularly for users who are sensitive to price and the amount of administration required.

Backblaze, Director of Product Marketing, M&E, Skip Levens

Backblaze offers easy-to-use cloud backup, archive and storage services. With over 12 years of experience and more than 800 petabytes of customer data under management, Backblaze offers cloud storage to anyone looking to create, distribute and preserve their content forever.

What kind of storage are you offering and will that be changing in the coming year?
At Backblaze, we offer a single class, or tier, of storage where everything’s active and immediately available wherever you need it, and it’s protected better than it would be on spinning disk or RAID systems.

Skip Levens

Are certain storage tiers more suitable for different asset types, workflows, etc?
Absolutely. For example, animators need different storage than a team of editors all editing a 4K project at the same time. And keeping your entire content library on your shared storage could get expensive indeed.

We’ve found that users can give up all that unneeded complexity and cost that gets in the way of creating content in two steps:
– Step one is getting off the “shared storage expansion treadmill” and buying just enough on-site shared storage to fit your team. If you’re delivering a TV show every week and need a SAN, make it just large enough for your work in process and no larger.

– Step two is to get all of your content into active cloud storage. This not only frees up space on your shared storage, but makes all of your content highly protected and highly available at the same time. Since most of your team probably uses a MAM to find and discover content, the storage that assets actually live on is completely transparent.

Now life gets very simple for creative support teams managing that workflow: your shared storage stays fast and lean, and you can stop paying for storage that doesn’t fit that model. This could include getting rid of LTO, big JBODs or anything with a limited warranty and a maintenance contract.

What do you see as the big technology trends that can help storage for M&E?
For shooters and on-set data wranglers, the new class of ultra-fast flash drives dramatically speeds up collecting massive files with extremely high resolution. Of course, raw content isn’t safe until it’s ingested, so even after moving shots to two sets of external drives or a RAID cart, we’re seeing cloud archive on ingest. Uploading files from a remote location, before you get all the way back to the editing suite, unlocks a lot of speed and collaboration advantages — the content is protected faster, and your ingest tools can start making proxy versions that everyone can start working on, such as grading, commenting, even rough cuts.

We’re also seeing cloud-delivered workflow applications. The days of buying and maintaining a server and storage in your shop to run an application may seem old-fashioned, especially when that entire experience can now be delivered from the cloud, on demand.

Iconik, for example, is a complete, personalized deployment of a project collaboration, asset review and management tool – but it lives entirely in the cloud. When you log in, your app springs to life instantly in the cloud, so you only pay for the application when you actually use it. Users just want to get their creative work done and can’t tell it isn’t a traditional asset manager.

How has NVMe advanced over the past year?
NVMe means flash storage can completely ditch legacy storage controllers like the ones on traditional SATA hard drives. When you can fit 2TB of storage on a stick that’s only 22 millimeters by 80 millimeters — not much larger than a stick of gum — and it’s 20 times faster than an external spinning hard drive while drawing very little power, that’s a game changer for data wrangling and camera cart offload right now.

And that’s on PCIe 3. The PCI Express standard is evolving faster and faster, too. PCIe 4 motherboards are starting to come online now, PCIe 5 was finalized in May, and PCIe 6 is already in development. When every generation doubles the bandwidth available to feed that NVMe storage, the future is very, very bright for NVMe.

Do you see NAS overtaking SAN for larger workgroups? How about cloud taking on some of what NAS used to do?
For users who work in widely distributed teams, the cloud is absolutely eating NAS. When the solution driving your team’s projects and collaboration is the dashboard and focus of the team — and active cloud storage seamlessly supports all of the content underneath — it no longer needs to be on a NAS.

But for large teams that do fast-paced editing and creation, the answer to “what is the best shared storage for our team” is still usually a SAN, or tightly-coupled, high-performance NAS.

Either way, by moving content and project archives to the cloud, you can keep SAN and NAS costs in check and have a more productive workflow, and more opportunities to use all that content for new projects.

Behind the Title: Matter Films president Matt Moore

Part of his job is finding talent and production partners. “We want the most innovative and freshest directors, cinematographers and editors from all over the world.”

NAME: Matt Moore

COMPANY: Phoenix and Los Angeles’ Matter Films
and OH Partners

CAN YOU DESCRIBE YOUR COMPANY?
Matter Films is a full-service production company that takes projects from script to screen — doing both pre-production and post in addition to producing content. We are joined by our sister company OH Partners, a full-service advertising agency.

WHAT’S YOUR JOB TITLE?
President of Matter Films and CCO of OH Partners.

WHAT DOES THAT ENTAIL?
I’m lucky to be the only person in the company who gets to serve on both sides of the fence. Knowing that, I think that working with Matter and OH gives me a unique insight into how to meet our clients’ needs best. My number one job is to push both teams to be as innovative and outside of the box as possible. A lot of people do what we do, so I work on our points of differentiation.

Gila River Hotels and Casinos – Sports Partnership

I spend a lot of time finding talent and production partners. We want the most innovative and freshest directors, cinematographers and editors from all over the world. That talent must push all of our work to be the best. We then pair that partner with the right project and the right client.

The other part of my job is figuring out where the production industry is headed. We launched Matter Films because we saw a change within the production world — many production companies weren’t able to respond quickly enough to the need for social and digital work, so we started a company able to address that need and then some.

My job is to always be selling ideas and proposing different avenues we could pursue with Matter and with OH. I instill trust in our clients by using our work as a proof point that the team we’ve assembled is the right choice to get the job done.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
People assumed when we started Matter Films that we would keep everything in-house and have no outside partners, and that’s just not the case. Matter actually gives us even more resources to find those innovators from across the globe. It allows us to do more.

The variation in budget size that we accept at Matter Films would also surprise people. We’ll take on projects with budgets anywhere from $1,000 to $1 million-plus. We’ve staffed ourselves in such a way that even small projects can be profitable.

WHAT’S YOUR FAVORITE PART OF THE JOB?
It sounds so cliché, but I would have to say the people. I’m around people that I genuinely want to see every single day. I love when we all get together for our meetings, because while we do discuss upcoming projects, we also goof off and just hang out. These are the people I go into battle with every single day. I choose to go into battle with people that I wholeheartedly care about and enjoy being with. It makes life better.

WHAT’S YOUR LEAST FAVORITE?
What’s tough is how fast this business changes. Every day there’s a new conference or event, and just when you think an idea you’ve had is cutting edge and brand new, you realize you have to keep going and push to be more innovative. Just when you get caught up, you’re already behind. The big challenge is how you’re going to constantly step up your game.

WHAT IS YOUR MOST PRODUCTIVE TIME OF THE DAY?
I’m an early morning person. I can get more done if I start before everybody else.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I was actually pre-med for two years in college with the desire to be a surgeon. When I was an undergrad, I got an abysmal grade on one of our exams and the professor pulled me aside and told me that a score that low proved that I truly did not care about learning the material. He allowed me to withdraw from the class to find something I was more passionate about, and that was life changing.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I found out in college. I genuinely just loved making a product that either entertained or educated people. I started in the news business, so every night I would go home after work and people could tell me about the news of the day because of what I’d written, edited and put on TV.

People knew about what was going on because of the stories that we told. I have a great love for telling stories and having others engage with that story. If you’re good at the job, people’s lives will be different as a result of what you create.

Barbuda Ocean Club

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We just wrapped a large shoot in Maryland for Live Casino, and a different tourism project for a luxury property in Barbuda. We’re currently developing our work with Virgin, and we have a shoot for a technology company focused on developing autonomous driving and green energy upcoming as well. We’re all over the map with the range of work that we have in the pipeline.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
One of my favorite projects actually took place before Matter Films was officially around, but we had a lot of the same team. We did an environmentally sensitive project for Sedona, Arizona, called Sedona Secret 7. Our campaign told the millions of tourists who arrive there how to find other equally beautiful destinations in and around Sedona instead of just the ones everyone already knew.

It was one of those times when advertising wasn’t about selling something, but about saving something.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My phone, a pair of AirPods and a laptop. The Matter Films team gave me AirPods for my birthday, so those are extra special!

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
My usage on Instagram is off the charts; it’s embarrassing. While I do look at everyone’s vacation photos or what workout they did that day, I also use Instagram as a talent sourcing tool for a lot of work purposes: I follow directors, animation studios and tons of artists that I either get inspiration from or want to work with.

A good percentage of people I follow are creatives that I want to work with at some point. I also reach out to people all the time for potential collaborations.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I love outdoor adventures. Some days I’ll go on a crazy hike here in Arizona or rent a four-wheeler and explore the desert or mountains. I also love just hanging out with my kids — they’re a great age.

Alaina Zanotti rejoins Cartel as executive producer

Santa Monica-based editorial and post studio Cartel has named Alaina Zanotti as executive producer to help with business development and to oversee creative operations along with partner and executive producer Lauren Bleiweiss. Additionally, Cartel has bolstered its roster with the signing of comedic editor Kevin Zimmerman.

Kevin Zimmerman

With more than 15 years of experience, Zanotti joins Cartel after working for clients that include BBDO, Wieden+Kennedy, Deutsch, Google, Paramount and Disney. Zanotti most recently served as senior executive producer at Method Studios, where she oversaw business development for global VFX and post. Prior to that stint, she joined Cartel in 2016 to assist the newly established post and editorial house’s growth. Previously, Zanotti spent more than a decade driving operations and raising brand visibility for Method and Company 3.

Editor Zimmerman joins Cartel following a tenure as a freelance editor, during which his comedic timing and entrepreneurial spirit earned him commercial work for Avocados From Mexico and Planters that aired during 2019’s Super Bowl.

Throughout his two-decade career in editorial, Zimmerman has held positions at Spot Welders, NO6, Whitehouse Post and FilmCore, with recent work for Sprite, Kia, hotels.com, Microsoft and Miller Lite, and a PSA for Girls Who Code. Zimmerman has previously worked with Cartel partners Adam Robinson and Leo Scott.

Object Matrix and Arvato partner for managing digital archives

Object Matrix and Arvato Systems have partnered to help companies instantly access, manage, browse and edit clips from their digital archives.

Using Arvato’s production asset management platform, VPMS EditMate, along with Object Matrix’s media-focused object storage solution, MatrixStore, the companies report that organizations can significantly reduce the time needed to manage media workflows while making content easily discoverable. The integration makes it easy to unlock assets held in archive, enable creative collaboration and monetize archived assets.

MatrixStore is a media-focused private and hybrid cloud storage platform that provides instant access to all media assets. Built upon object-based storage technology, MatrixStore provides digital content governance through an integrated and automated storage platform supporting multiple media-based workflows while providing a secure and scalable solution.

VPMS EditMate is a toolkit built for managing and editing projects in a streamlined, intuitive and efficient manner, all from within Adobe Premiere Pro. From project creation and collecting media, to the export and storage of edited material, users benefit from a series of features designed to simplify the spectrum of tasks involved in a modern and collaborative editing environment.

Alkemy X adds Albert Mason as head of production

Albert Mason has joined VFX house Alkemy X as head of production. He comes to Alkemy X with over two decades of experience in visual effects and post production. He has worked on projects directed by such industry icons as Peter Jackson on the Lord of the Rings trilogy, Tim Burton on Alice in Wonderland and Robert Zemeckis on The Polar Express. In his new role at Alkemy X, he will use his experience in feature films to target the growing episodic space.

A large part of Alkemy X’s work has been for episodic visual effects, with credits that include Amazon Prime’s Emmy-winning original series, The Marvelous Mrs. Maisel, USA’s Mr. Robot, AMC’s Fear the Walking Dead, Netflix’s Maniac, NBC’s Blindspot and Starz’s Power.

Mason began his career at MTV’s on-air promos department, sharpening his production skills on top series promo campaigns and as a part of its newly launched MTV Animation Department. He took an opportunity to transition into VFX, stepping into a production role for Weta Digital and spending three years working globally on the Lord of the Rings trilogy. He then joined Sony Pictures Imageworks, where he contributed to features including Spider-Man 3 and Ghost Rider. He has also produced work for such top industry shops as Logan, Rising Sun Pictures and Greymatter VFX.

“[Albert’s] expertise in constructing advanced pipelines that embrace emerging technologies will be invaluable to our team as we continue to bolster our slate of VFX work,” says Alkemy X president/CEO Justin Wineburgh.

2019 HPA Award winners announced

The industry came together on November 21 in Los Angeles to celebrate its own at the 14th annual HPA Awards. Awards were given to individuals and teams working in 12 creative craft categories, recognizing outstanding contributions to color grading, sound, editing and visual effects for commercials, television and feature film.

Rob Legato receiving Lifetime Achievement Award from presenter Mike Kanfer. (Photo by Ryan Miller/Capture Imaging)

As was previously announced, renowned visual effects supervisor and creative Robert Legato, ASC, was honored with this year’s HPA Lifetime Achievement Award; Peter Jackson’s They Shall Not Grow Old was presented with the HPA Judges Award for Creativity and Innovation; acclaimed journalist Peter Caranicas was the recipient of the very first HPA Legacy Award; and special awards were presented for Engineering Excellence.

The winners of the 2019 HPA Awards are:

Outstanding Color Grading – Theatrical Feature

WINNER: “Spider-Man: Into the Spider-Verse”
Natasha Leonnet // Efilm

“First Man”
Natasha Leonnet // Efilm

“Roma”
Steven J. Scott // Technicolor

Natasha Leonnet (Photo by Ryan Miller/Capture Imaging)

“Green Book”
Walter Volpatto // FotoKem

“The Nutcracker and the Four Realms”
Tom Poole // Company 3

“Us”
Michael Hatzer // Technicolor


Outstanding Color Grading – Episodic or Non-theatrical Feature

WINNER: “Game of Thrones – Winterfell”
Joe Finley // Sim, Los Angeles

“The Handmaid’s Tale – Liars”
Bill Ferwerda // Deluxe Toronto

“The Marvelous Mrs. Maisel – Vote for Kennedy, Vote for Kennedy”
Steven Bodner // Light Iron

“I Am the Night – Pilot”
Stefan Sonnenfeld // Company 3

“Gotham – Legend of the Dark Knight: The Trial of Jim Gordon”
Paul Westerbeck // Picture Shop

“The Man in the High Castle – Jahr Null”
Roy Vasich // Technicolor


Outstanding Color Grading – Commercial  

WINNER: Hennessy X.O. – “The Seven Worlds”
Stephen Nakamura // Company 3

Zara – “Woman Campaign Spring Summer 2019”
Tim Masick // Company 3

Tiffany & Co. – “Believe in Dreams: A Tiffany Holiday”
James Tillett // Moving Picture Company

Palms Casino – “Unstatus Quo”
Ricky Gausis // Moving Picture Company

Audi – “Cashew”
Tom Poole // Company 3


Outstanding Editing – Theatrical Feature

Once Upon a Time… in Hollywood

WINNER: “Once Upon a Time… in Hollywood”
Fred Raskin, ACE

“Green Book”
Patrick J. Don Vito, ACE

“Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese”
David Tedeschi, Damian Rodriguez

“The Other Side of the Wind”
Orson Welles, Bob Murawski, ACE

“A Star Is Born”
Jay Cassidy, ACE


Outstanding Editing – Episodic or Non-theatrical Feature (30 Minutes and Under)

VEEP

WINNER: “Veep – Pledge”
Roger Nygard, ACE

“Russian Doll – The Way Out”
Todd Downing

“Homecoming – Redwood”
Rosanne Tan, ACE

“Withorwithout”
Jake Shaver, Shannon Albrink // Therapy Studios

“Russian Doll – Ariadne”
Laura Weinberg


Outstanding Editing – Episodic or Non-theatrical Feature (Over 30 Minutes)

WINNER: “Stranger Things – Chapter Eight: The Battle of Starcourt”
Dean Zimmerman, ACE, Katheryn Naranjo

“Chernobyl – Vichnaya Pamyat”
Simon Smith, Jinx Godfrey // Sister Pictures

“Game of Thrones – The Iron Throne”
Katie Weiland, ACE

“Game of Thrones – The Long Night”
Tim Porter, ACE

“The Bodyguard – Episode One”
Steve Singleton


Outstanding Sound – Theatrical Feature

WINNER: “Godzilla: King of the Monsters”
Tim LeBlanc, Tom Ozanich, MPSE // Warner Bros.
Erik Aadahl, MPSE, Nancy Nugent, MPSE, Jason W. Jennings // E Squared

“Shazam!”
Michael Keller, Kevin O’Connell // Warner Bros.
Bill R. Dean, MPSE, Erick Ocampo, Kelly Oxford, MPSE // Technicolor

“Smallfoot”
Michael Babcock, David E. Fluhr, CAS, Jeff Sawyer, Chris Diebold, Harrison Meyle // Warner Bros.

“Roma”
Skip Lievsay, Sergio Diaz, Craig Henighan, Carlos Honc, Ruy Garcia, MPSE, Caleb Townsend

“Aquaman”
Tim LeBlanc // Warner Bros.
Peter Brown, Joe Dzuban, Stephen P. Robinson, MPSE, Eliot Connors, MPSE // Formosa Group


Outstanding Sound – Episodic or Non-theatrical Feature

WINNER: “The Haunting of Hill House – Two Storms”
Trevor Gates, MPSE, Jason Dotts, Jonathan Wales, Paul Knox, Walter Spencer // Formosa Group

“Chernobyl – 1:23:45”
Stefan Henrix, Stuart Hilliker, Joe Beal, Michael Maroussas, Harry Barnes // Boom Post

“Deadwood: The Movie”
John W. Cook II, Bill Freesh, Mandell Winter, MPSE, Daniel Colman, MPSE, Ben Cook, MPSE, Micha Liberman // NBC Universal

“Game of Thrones – The Bells”
Tim Kimmel, MPSE, Onnalee Blank, CAS, Mathew Waters, CAS, Paula Fairfield, David Klotz

“Homecoming – Protocol”
John W. Cook II, Bill Freesh, Kevin Buchholz, Jeff A. Pitts, Ben Zales, Polly McKinnon // NBC Universal


Outstanding Sound – Commercial 

WINNER: John Lewis & Partners – “Bohemian Rhapsody”
Mark Hills, Anthony Moore // Factory

Audi – “Life”
Doobie White // Therapy Studios

Leonard Cheshire Disability – “Together Unstoppable”
Mark Hills // Factory

New York Times – “The Truth Is Worth It: Fearlessness”
Aaron Reynolds // Wave Studios NY

John Lewis & Partners – “The Boy and the Piano”
Anthony Moore // Factory


Outstanding Visual Effects – Theatrical Feature

WINNER: “The Lion King”
Robert Legato
Andrew R. Jones
Adam Valdez, Elliot Newman, Audrey Ferrara // MPC Film
Tom Peitzman // T&C Productions

“Avengers: Endgame”
Matt Aitken, Marvyn Young, Sidney Kombo-Kintombo, Sean Walker, David Conley // Weta Digital

“Spider-Man: Far From Home”
Alexis Wajsbrot, Sylvain Degrotte, Nathan McConnel, Stephen Kennedy, Jonathan Opgenhaffen // Framestore

“Alita: Battle Angel”
Eric Saindon, Michael Cozens, Dejan Momcilovic, Mark Haenga, Kevin Sherwood // Weta Digital

“Pokémon Detective Pikachu”
Jonathan Fawkner, Carlos Monzon, Gavin Mckenzie, Fabio Zangla, Dale Newton // Framestore


Outstanding Visual Effects – Episodic (Under 13 Episodes) or Non-theatrical Feature

Game of Thrones

WINNER: “Game of Thrones – The Bells”
Steve Kullback, Joe Bauer, Ted Rae
Mohsen Mousavi // Scanline
Thomas Schelesny // Image Engine

“Game of Thrones – The Long Night”
Martin Hill, Nicky Muir, Mike Perry, Mark Richardson, Darren Christie // Weta Digital

“The Umbrella Academy – The White Violin”
Everett Burrell, Misato Shinohara, Chris White, Jeff Campbell, Sebastien Bergeron

“The Man in the High Castle – Jahr Null”
Lawson Deming, Cory Jamieson, Casi Blume, Nick Chamberlain, William Parker, Saber Jlassi, Chris Parks // Barnstorm VFX

“Chernobyl – 1:23:45”
Lindsay McFarlane
Max Dennison, Clare Cheetham, Steven Godfrey, Luke Letkey // DNEG


Outstanding Visual Effects – Episodic (Over 13 Episodes)

Team from The Orville – Outstanding VFX, Episodic, Over 13 Episodes (Photo by Ryan Miller/Capture Imaging)

WINNER: “The Orville – Identity: Part II”
Tommy Tran, Kevin Lingenfelser, Joseph Vincent Pike // FuseFX
Brandon Fayette, Brooke Noska // Twentieth Century FOX TV

“Hawaii Five-0 – Ke iho mai nei ko luna”
Thomas Connors, Anthony Davis, Chad Schott, Gary Lopez, Adam Avitabile // Picture Shop

“9-1-1 – 7.1”
Jon Massey, Tony Pirzadeh, Brigitte Bourque, Gavin Whelan, Kwon Choi // FuseFX

“Star Trek: Discovery – Such Sweet Sorrow Part 2”
Jason Zimmerman, Ante Dekovic, Aleksandra Kochoska, Charles Collyer, Alexander Wood // CBS Television Studios

“The Flash – King Shark vs. Gorilla Grodd”
Armen V. Kevorkian, Joshua Spivack, Andranik Taranyan, Shirak Agresta, Jason Shulman // Encore VFX

The 2019 HPA Engineering Excellence Awards were presented to:

Adobe – Content-Aware Fill for Video in Adobe After Effects

Epic Games — Unreal Engine 4

Pixelworks — TrueCut Motion

Portrait Displays and LG Electronics — CalMan LUT based Auto-Calibration Integration with LG OLED TVs

Honorable Mentions were awarded to Ambidio for Ambidio Looking Glass; Grass Valley for Creative Grading; and Netflix for Photon.

IDC goes bicoastal, adds Hollywood post facility 


New York’s International Digital Centre (IDC) has opened a new 6,800-square-foot digital post facility in Hollywood, with Rosanna Marino serving as COO. She will manage the day-to-day operations of the West Coast post house. IDC LA will focus on serving the entertainment, content creation, distribution and streaming industries.

Rosanna Marino

Marino will manage sales, marketing, engineering and the day-to-day operations for the Hollywood location, while IDC founder/CEO Marcy Gilbert will lead the company’s overall activities and New York headquarters.

IDC will provide finishing, color grading and editorial in Dolby Vision, 4K HDR and UHD, as well as global QC. IDC LA features 11 bays and a DI theater, which includes Dolby Atmos 7.1 audio mixing, dubbing and audio description. The facility also provides subtitle and closed-caption timed-text creation and localization, ABS scripting and translations in over 40 languages.

To complete the end-to-end chain, they provide IMF and DCP creation, supplemental and all media fulfillment processing, including audio and timed text conforms for distribution. IDC is an existing Netflix Partner Program member — NP3 in New York and NPFP for the Americas and Canada.

IDC LA occupies the top two floors and rooftop deck of a vintage 1930s brick building on Santa Monica Boulevard.

Review: Nugen Audio’s VisLM2 loudness meter plugin

By Ron DiCesare

In 2010, President Obama signed the CALM Act (Commercial Advertisement Loudness Mitigation), regulating the audio levels of TV commercials. At the time, many “laypeople” complained to me about commercials being so much louder than the TV programs. Over the past 10 years, I have seen the rise of audio meter plugins built to meet the requirements of the CALM Act, which has reduced this complaint dramatically.

A lot has changed since the 2010 FCC mandate of -24LKFS +/-2dB. LKFS was the scale name at the time, but we will get into this more later. Today, we have countless viewing options, such as cable networks, a large variety of streaming services, the internet and movie theaters utilizing 7.1 or Dolby Atmos. Add to that new metering standards such as True Peak, and you have the likelihood of confusing and possibly even conflicting audio standards.

Nugen Audio has updated its VisLM to address today’s complex world of audio levels and audio metering. The VisLM2 is a Mac and Windows plugin compatible with Avid Pro Tools and any DAW that uses RTAS, AU, AAX, VST or VST3. It can also be installed as a standalone application for Windows and macOS. By using its many presets, Loudness History Mode and countless parameters to view and customize, the VisLM2 can help an audio mixer monitor a mix and see when a program is in and out of audio level spec.

VisLM2

The Basics
The first thing I needed to see was how it handled the 2010 audio standard of -24LKFS, now known as LUFS. LKFS (Loudness K-weighted relative to Full Scale) was the term used in the United States. LUFS (Loudness Units relative to Full Scale) was the term used in Europe. The difference is in name only, and the audio level measurement is identical. Now all audio metering plugins use LUFS, including the VisLM2.

I work mostly on TV commercials, so it was pretty easy for me to fire up the VisLM2 and get my LUFS reading right away. Accessing the US audio standard dictated by the CALM Act is simple if you know the preset name for it: ITU-R BS.1770-4. I know, not a name that rolls off the tongue, but it is the current spec. The VisLM2 has four presets of ITU-R BS.1770 — revision 01, 02, 03 and the current revision 04. Accessing the presets is easy, once you realize that they are not in the preset section of the plugin as one might think. Presets are located in the options section of the meter.

While this was my first time using anything from Nugen Audio, I was immediately able to run my 30-second TV commercial and get my LUFS reading. The preset gave me a few important default readings to view while mixing. There are three numeric displays that show Short-Term, Loudness Range and Integrated, which is how the average loudness is determined for most audio level specs. There are two meters that show Momentary and Short-Term levels, which are helpful when trying to pinpoint any section that could be putting your mix out of audio spec. The difference is that Momentary is used for short bursts, such as an impact or gun shot, while Short-Term is used for the last three-second “window” of your mix. Knowing the difference between the two readings is important. Whether you work on short- or long-format mixes, knowing how to interpret both Momentary and Short-Term readings is very helpful in determining where trouble spots might be.

Have We Outgrown LUFS?
Most, if not all, deliverables now specify a True Peak reading. True Peak has slowly but firmly crept its way into audio spec and it can be confusing. For US TV broadcast, True Peak spec can range as high as -2dBTP and as low as -6dBTP, but I have seen it spec out even lower at -8dBTP for some of my clients. That means a TV network can reject or “bounce back” any TV programming or commercial that exceeds its LUFS spec, its True Peak spec or both.

VisLM2

In most cases, LUFS and True Peak readings work well together. I find that -24LUFS Integrated gives a mixer plenty of headroom for staying below the True Peak maximum. However, a few factors can work against you. The higher the LUFS Integrated spec (say, for an internet project) and/or the lower the True Peak spec (say, for a major TV network), the more difficult you might find it to manage both readings. For anyone like me — who often has a client watching over my shoulder telling me to make the booms and impacts louder — you always want to make sure you are not going to have a problem keeping your mix within spec for both measurements. This is where the VisLM2 can help you work within both True Peak and LUFS standards simultaneously.

To do that using the VisLM2, let’s first understand the difference between True Peak and LUFS. Integrated LUFS is an average reading over the duration of the program material. Whether the program material is 15 seconds or two hours long, hitting -24LUFS Integrated, for example, is always the average reading over time. That means a 10-second loud segment in a two-hour program could be much louder than a 10-second loud segment in a 15-second commercial. That same loud 10 seconds can practically be averaged out of existence during a two-hour period with LUFS Integrated. Flawed logic? Possibly. Is that why TV networks are requiring True Peak? Well, maybe yes, maybe no.
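
The arithmetic behind that intuition is easy to check. Loudness values average in the energy domain, not the dB domain, so a back-of-the-envelope calculation (this rough sketch ignores BS.1770’s gating) shows how little a loud 10 seconds moves a two-hour average:

```python
import math

def combine_lufs(segments):
    """Duration-weighted energy average of (seconds, LUFS) segments.
    Ignores BS.1770 gating, so treat it as an illustration only."""
    total = sum(d for d, _ in segments)
    energy = sum(d * 10 ** (l / 10) for d, l in segments) / total
    return 10 * math.log10(energy)

# 10 loud seconds at -14 LUFS inside two hours of -24 LUFS dialogue:
print(combine_lufs([(7190, -24.0), (10, -14.0)]))  # about -23.9 LUFS
# The same 10 loud seconds inside a 15-second spot:
print(combine_lufs([(5, -24.0), (10, -14.0)]))     # about -15.5 LUFS
```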

True Peak is forever. Once the highest True Peak is detected, it will remain as the final True Peak reading for the entire length of the program material. That means the loud segment at the last five minutes of a two-hour program will dictate the True Peak reading of the entire mix. Let’s say you have a two-hour show with dialogue only. In the final minute of the show, a single loud gunshot is heard. That one-second gunshot will determine the other one hour, 59 minutes, and 59 seconds of the program’s True Peak audio level. Flawed logic? I can see it could be. Spotify’s recommended levels are -14LUFS and -2dBTP. That gives you a much smaller range for dynamics compared to others such as network TV.
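
For the technically curious, True Peak differs from a plain sample peak because the meter oversamples the signal first, exposing inter-sample peaks that sit between the stored samples. A rough sketch, with scipy’s generic resampler standing in for the specific interpolation filter that BS.1770-4 defines:

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_dbtp(samples: np.ndarray, oversample: int = 4) -> float:
    """Upsample mono float samples, then take the absolute peak: peaks
    hiding between the original samples show up in the oversampled signal."""
    up = resample_poly(samples, oversample, 1)
    return 20 * np.log10(np.max(np.abs(up)))
```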

VisLM2

Here’s where the VisLM2 really excels. For those new to Nugen Audio, the clear standout for me is the detailed and large history graph display known as Loudness History Mode. It is a realtime updating and moving display of the mix levels. What it shows is up to you. There are multiple tabs to choose from, such as Integrated, True Peak, Short-Term, Momentary, Variance, Flags and Alerts, to name a few. Selecting any of these tabs will show, or not show, the corresponding line along the timeline of the history graph as the audio plays.

When any of the VisLM2’s presets are selected, there are a whole host of parameters that come along with it. All are customizable, but I like to start with the defaults. My thinking is that the default values were chosen for a reason, and I always want to know what that reason is before I start customizing anything.

For example, the target for the preset of ITU-R BS.1770-4 is -24LUFS Integrated and -2dBTP. By default, both will show on the history graph. The history graph will also show default over and under audio levels based on the alerts you have selected, in the form of min and max LUFS. But, much to my surprise, the default alert max was not what I expected. It wasn’t -24LUFS, which seemed to be the logical choice to me. It was 4dB higher at -20LUFS, which is 2dB above the +/-2dB tolerance. That’s because these min and max alert values are not for Integrated or average loudness, as I had originally thought. These values are for Short-Term loudness. The history graph lines, with their corresponding min and max alerts, are a visual cue to let the mixer know if he or she is in the right ballpark. Now this is not a hard and fast rule. Simply put, if your short-term value stays somewhere between -20 and -28LUFS throughout most of an entire project, then you have a good chance of meeting your target of -24LUFS for the overall integrated measurement. That is why the value range is often set up as a “green” zone on the loudness display.
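
As a rough illustration of that green-zone logic, you can scan a mix in three-second windows and flag anything outside -28 to -20 LUFS. This sketch reuses the data, rate and meter from the earlier pyloudnorm example and treats each slice’s gated measurement as a stand-in for a true ungated short-term value:

```python
def short_term_flags(data, rate, meter, lo=-28.0, hi=-20.0, hop=1.0):
    """Yield (seconds, LUFS) for 3 s windows outside the green zone."""
    win, step = int(3.0 * rate), int(hop * rate)
    for start in range(0, max(len(data) - win, 1), step):
        st = meter.integrated_loudness(data[start:start + win])
        if not lo <= st <= hi:
            yield start / rate, st

for t, st in short_term_flags(data, rate, meter):
    print(f"{t:7.1f}s  {st:6.1f} LUFS is outside the -28/-20 zone")
```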

VisLM2

The folks at Nugen point out that it isn’t practically possible to set up an alert or “red zone” for integrated loudness because this value is measured over the entire program. For that, you have to simply view the main reading of your Integrated loudness. Even so, I will know if I am getting there or not by viewing my history graph while working. Compare that to the impractical approach of running the entire mix before having any idea of where you are going to net out. The VisLM2 max and min alerts help keep you working within audio spec right from the start.

Another nice feature about the large history graph window is the Macro tab. Selecting the Macro feature will give you the ability to move back and forth anywhere along the duration of your mix displayed in the Loudness History Mode. That way you can check for problem spots long after they have happened. Easily accessing any part of the audio level display within the history graph is essential. Say you have a trouble spot somewhere within a 30-minute program; select the Macro feature and scroll through the history graph to spot any overages. If an overage turns out to be at, say, eight minutes in, then cue up your DAW to that same eight-minute mark to address changes in your mix.

Another helpful feature designed for this same purpose is the use of flags. Flags can be added anywhere in your history graph while the audio is running. Again, this can be helpful for spotting, or flagging, any problem spots. For example, you can flag a loud action scene in an otherwise quiet dialogue-driven program that you know will be tricky to balance properly. Once flagged, you will have the ability to quickly cue up your history graph to work with that section. Both the Macro and Flag functions are aided by tape-machine-like controls for cueing up the Loudness History Mode display to any problem spots you might want to view.

Presets, Presets, Presets
The VisLM2 comes with 34 presets for selecting what loudness spec you are working with. Here is where I need to rely on the knowledge of Nugen Audio to get me going in the right direction. I do not know all of the specs for all of the networks, formats and countries, and I would venture a guess that very few audio mixers do either. So I was not surprised to see many presets that I was not familiar with. Common presets, in addition to ITU-R BS.1770, are six versions of EBU R128 for European broadcast and two Netflix presets (stereo and 5.1), which we will dive into later on. The manual does its best to describe some of the presets, but it falls short. The descriptions lack any kind of real-world language, only techno-garble. I have no idea what AGCOM 219/9/CSP LU is and, after reading the manual, I still don’t! I hope a better source of what’s what regarding each preset will become available sometime soon.

MasterCheck

But why no preset for Internet audio level spec? Could mixing for AGCOM 219/9/CSP LU be even more popular than mixing for the Internet? Unlikely. So let’s follow Nugen’s logic here. I have always been in the -18LUFS range for Internet-only mixes. However, ask 10 different mixers and you will likely get 10 different answers. That is why there is no Internet preset included with the VisLM2, as I had hoped there would be. Even so, Nugen offers its MasterCheck plugin for other platforms such as Spotify and YouTube. MasterCheck is something I have been hoping for, and it would be the perfect companion to the VisLM2.

The folks at Nugen have pointed out a very important difference between broadcast TV and many Internet platforms: Most of the streaming services (YouTube, Spotify, Tidal, Apple Music, etc.) will perform their own loudness normalization after the audio is submitted. They do not expect audio engineers to mix to their standards. In contrast, Netflix and most TV networks will expect mixers to submit audio that already meets their loudness standards. VisLM2 is aimed more toward engineers who are mixing for platforms in the second category.
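
Mechanically, that platform-side normalization is just a measured gain offset. pyloudnorm exposes the same operation; here is a sketch against an assumed -14 LUFS target (the file names and target value are illustrative, not any platform’s confirmed spec):

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("master.wav")           # hypothetical submitted mix
meter = pyln.Meter(rate)
measured = meter.integrated_loudness(data)

TARGET = -14.0                               # illustrative streaming target, LUFS
# Applies the linear gain equivalent of (TARGET - measured) dB:
normalized = pyln.normalize.loudness(data, measured, TARGET)
sf.write("master_normalized.wav", normalized, rate)
```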

Streaming Services… the Wild West?
Streaming services are the new frontier, at least to me. Compared to broadcast TV, I would call it the Wild West. With so many streaming services popping up, particularly “off-brand” services, I would ask if we have gone back in time to the loudness wars of the late 2000s. Many streaming services do have an audio level spec, but I don’t know of any consensus among them like there is with network TV.

That aside, one of the most popular streaming services is Netflix, so let’s look at the VisLM2’s Netflix preset in detail. Netflix is slightly different from broadcast TV because its spec is based on dialogue. In addition to -2dBTP, Netflix has an LUFS spec of -27 +/- 2dB Integrated Dialogue. That means the dialogue level is averaged out over time, rather than using all program material like music and sound effects. Remember my gunshot example? Netflix’s spec is more forgiving of that mixing scenario. This can lead to more dynamic or more cinematic mixes, which I can see as a nice advantage when mixing.

Netflix currently supports Dolby Atmos on selected titles, but word on the street is that Netflix deliverables will be requiring Atmos for all titles. I have not confirmed this, but I can only hope it will be backward-compatible for non-Atmos mixes. I was lucky enough to speak directly with Tomlinson Holman of THX fame (Tomlinson Holman eXperiment) about his 10.2 format that included height long before Atmos was available. In the case of 10.2, Holman said it was possible to deliver a single mono channel audio mix in 10.2 by simply leaving all other channels empty. I can only hope this is the same for Netflix’s Atmos deliverables so you can simply add or subtract the amount of channels needed when you are outputting your final mix. Regardless, we can surely look to Nugen Audio to keep us updated with its Netflix preset in the VisLM2 should this become a reality.

True Peak within VisLM2

VisLM Updates
For anyone familiar with the original version of the VisLM, there are three updates that are worth looking at. First is the ability to resize and select what shows in the display. That helps with keeping the window active on your screen as you are working. It can be a small window so it doesn’t interfere with your other operations. Or you can choose to show only one value, such as Integrated, to keep things really small. On the flip side, you can expand the display to fill the screen when you really need to get the microscope out. This is very helpful with the history graph for spotting any trouble spots. The detail displayed in the Loudness History Mode is by far the most helpful thing I have experienced using the VisLM2.

Next is the ability to display both LUFS and True Peak meters simultaneously. Before, it was one or the other and now it is both. Simply select the + icon between the two meters. With the importance of True Peak, having that value visible at all times is extremely valuable.

Third is the ability to “punch in,” as I call it, to update your Integrated reading while you are working. Let’s say you have your overall Integrated reading, and you see one section that is making you go over. You can adjust your levels on your DAW as you normally would and then simply “punch in” that one section to calculate the new Integrated reading. Imagine how much time you save by not having to run a one-hour show every time you want to update your Integrated reading. In fact, this “punch in” feature is actually the VisLM2 constantly updating itself. This is just another example of how the VisLM2 helps keep you working within audio spec right from the start.

Multi-Channel Audio Mixing
The one area I can’t test the VisLM2 on is multi-channel audio, such as 5.1 and Dolby Atmos. I work mostly on TV commercials, Internet programming, jazz records and the occasional indie film. So my world is all good old-fashioned stereo. Even so, the VisLM2 can measure 5.1, 7.1, and 7.1.2, which is the channel count for Dolby Atmos bed tracks. For anyone who works in multi-channel audio, the VisLM2 will measure and display audio levels just as I have described it working in stereo.

Summing Up
With the changing landscape of TV networks, streaming services and music-only platforms, the resulting deliverables have opened the floodgates of audio specs like never before. Long gone are the days of -24LUFS being the one and only number you need to know.

To help manage today’s complicated and varied amount of deliverables along with the audio spec to go with it, Nugen Audio’s VisLM2 absolutely delivers.


Ron DiCesare is a NYC-based freelance audio mixer and sound designer. His work can be heard on national TV campaigns, Vice and the Viceland TV network. He is also featured in the doc “Sing You A Brand New Song” talking about the making of Coleman Mellett’s record album, “Life Goes On.”

Report: Apple intros 16-inch MacBook Pro, previews new Mac Pro, display

By Pat Birk

At a New York City press event, Apple announced that it will begin shipping a new 16-inch MacBook Pro this week. This new offering features an updated 16-inch Retina display with a pixel density of 226ppi; 9th-generation Intel processors with up to 8 cores and up to 64GB of DDR4 memory; vastly expanded SSDs ranging from 512GB to a whopping 8TB; upgraded discrete AMD Radeon Pro 5000M series graphics; completely redesigned speakers and internal microphones; and an overhauled keyboard dubbed, of course, the “Magic Keyboard.”

The MacBook Pro’s new Magic Keyboard.

These MacBooks also feature a new cooling system, with wider vents and a 35 percent larger heatsink, along with a 100-watt-hour battery (which the company stressed is the maximum capacity allowed by the Federal Aviation Administration), contributing to an additional hour of battery life while web browsing or playing back video.

I had the opportunity to do a brief hands-on demo, and for the first time since Apple introduced the Touch Bar to the MacBook Pro, I found myself wanting a new Mac. The keyboard felt great, offering far more give and a far less plasticky click than the divisive Butterfly keyboard. The Mac team has reintroduced a physical Escape key, along with an inverted-T cluster of arrow keys, both features that will be helpful for coders. Apple also previewed its upcoming Mac Pro tower and Pro Display XDR.

Sound Offerings
As an audio guy, I was naturally drawn to the workstation’s sound offerings and was happy when the company dedicated a good portion of the presentation to touting its enhanced speaker and microphone arrays. The six-speaker system features dual-opposed woofer drivers, which offer enhanced bass while canceling out problematic distortion-causing frequencies. When compared side by side with high-end offerings from other manufacturers, the MacBook offered a far more complete sonic experience than the competition, and I believe Apple is right in saying that they’ve achieved an extra half octave of bass range with this revision.

The all-new MacBook Pro features a 16-inch Retina display.

It’s really impressive for a laptop, but I honestly don’t see it replacing a good pair of headphones or a half-decent Bluetooth speaker for most users. I can see it being useful in the occasional pitch meeting, or for showing an idea or video to a friend with no other option, but I feel it’s more of a nice touch than a major selling point.

The three-microphone array was impressive as well, and I can see it offering legitimate functionality for working creatives. When A/B’d with competing internal microphones, there was really no comparison. The MacBook’s mics deliver crisp, clean recordings with very little hiss and no noticeable digital artifacting, both of which were clearly present in competing PCs. I could realistically see this working for a small podcast, or for on-the-go musicians recording demos. We live in a world where Steve Lacy recorded and produced a beat for Kendrick Lamar on an iPhone. When Apple claims that the signal-to-noise ratio rivals or even surpasses that of digital mics like the Blue Yeti, they may very well be right. However, in an A/B comparison, I found the Blue to have more body and room ambience, while the MacBook sounded a bit thin and sterile.

Demos
The rest of the demo featured creative professionals — coders, animators, colorists and composers — pushing specced-out Mac Pros and MacBook Pros to their limits. A coder demonstrated testing a program in realtime on eight simulated iOS and iPadOS devices at once.

A video editor demonstrated the new Mac Pro (not the MacBook) running a project with six 8K video sources playing at once through an animation layer, with no rendering at all. We were also treated to a brief Blackmagic DaVinci Resolve demo on a Pro Display XDR. A VFX artist demonstrated making realtime lighting changes to an animation comprising eight million polygons on the Mac Pro, again with no need for rendering.

The Mac Pro and Pro Display XDR will be available in December.

Composers showed us a Logic Pro X session running a track produced for Lizzo by Oak Felder. The song had over 200 tracks, replete with plugins and instruments — Felder was able to accomplish this on a MacBook Pro. Also on the MacBook, they had a session loaded running multiple instances of MIDI instruments using sample libraries from Cinesamples, Spitfire Audio and Orchestral Tools. The result could easily have fooled me into believing it had been recorded with a live orchestra, and the fact that all of these massive, processor-intensive sample libraries could operate at the same time without making the MacBook Pro break a sweat had me floored.

Summing Up
Apple has delivered a very solid upgrade in the new 16-inch MacBook Pro, especially as a replacement for earlier iterations of the Touch Bar MacBook Pros. The company has begun taking orders, with prices starting at $2,399 for the 2.6GHz six-core model and $2,799 for the 2.3GHz eight-core model.
As for the new Mac Pro and Pro Display XDR, they’re coming in December, but company representatives remained tight-lipped on an exact date.


Pat Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City.

postPerspective’s ‘SMPTE 2019 Live’ interview coverage

postPerspective was the official production team for SMPTE during its most recent conference in downtown Los Angeles this year. Taking place once again at the Bonaventure Hotel, the conference featured events and sessions all week. (You can watch those interviews here.)

These sessions ranged from “Machine Learning & AI in Content Creation” to “UHD, HDR, 4K, High Frame Rate” to “Mission Critical: Project Artemis, Imaging from the Moon and Deep Space Imaging.” The latter featured two NASA employees and a live talk with astronauts on the International Space Station. It was very cool.

postPerspective’s coverage was also cool and included many sit-down interviews with those presenting at the show (including former astronaut and One More Orbit director Terry Virts as well as Todd Douglas Miller, the director of the Apollo 11 doc), SMPTE executives and long-standing members of the organization.

In addition to the sessions, manufacturers had the opportunity to show their tools on the exhibit floor, where one of our crews roamed with camera and mic in hand reporting on the newest tech.

Whether you missed the conference or experienced it firsthand, these exclusive interviews offer a ton of information about SMPTE, standards and the future of our industry, as well as incredibly smart people talking about the intersection of technology and creativity.

Enjoy our coverage!

Blog: Making post deliverables simple and secure

By Morgan Swift

Post producers don’t have it easy. With an ever-increasing number of distribution platforms and target languages to cater to, getting one’s content to the global market can be challenging, to say the least. To top it all off, given the current competitive landscape, producers are always under pressure to reduce costs and meet tight deadlines.

Having been in the creative services business for two decades, I’ve seen it all before — post coordinators and supervisors getting burnt out working late nights, often juggling multiple projects and being pushed to the breaking point. You can see it in their eyes. What adds to the stress is dealing with multiple vendors to get various kinds of post finishing work done — from color grading to master QC to localization.

Morgan Swift

Localization is not the least of these challenges. Different platforms specify different deliverables, including access services like closed captions (CC) and audio description (AD), along with as-broadcast scripts (ABS) and combined continuity spotting lists (CCSL). Each of these deliverables requires specialized teams and tools to execute. Needless to say, they also have a significant impact on the budget — usually at least tens of thousands of dollars (much more for a major release).
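
To make one of these deliverables concrete: a closed-caption file is, at its core, timed text. A single cue in the widely used SubRip (.srt) format, for example, looks like the snippet below (the timing and dialogue here are invented purely for illustration):

    1
    00:00:12,400 --> 00:00:15,100
    [door slams]
    We need to talk.

An as-broadcast script or CCSL carries much of this same dialogue and timing information in a different package, which is exactly why so much of the work can be shared across these deliverables.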

It is therefore critical to plan post deliverables well in advance so that you stay in complete control of turnaround time (TAT), expected spend and potential cost-saving opportunities. Let’s look at a few ways to streamline the creation of access services deliverables. To do this, we need to understand the various factors at play.

First of all, we need to consider the amount of effort involved in creating these deliverables. There is typically a lot of overlap: deliverables like as-broadcast scripts and combined continuity spotting lists are often required for creating closed captions and audio description, and the dialogue transcription produced for an ABS can double as the starting point for a caption file. This means it is cheaper to combine the creation of all these deliverables than to commission them separately.

The second factor to think about is security. Given that pre-release content is extremely vulnerable to piracy, the days of handing over an extra DVD with visible timecode for closed captioning should be over. So should the days of sending a non-studio-approved link just to create the deliverables.
Why? Because tailor-made solutions now exist that were designed to facilitate secure localization operations. They enable easy creation of a folder that can be used to send and receive files securely, even by external vendors. One such solution is Clear Media ERP, which was built from the ground up by Prime Focus Technologies to address these challenges.

There is no additional cost to send and receive videos or post deliverable files if you already have a system like this set up for a show. You can keep your pre-release content completely safe by leveraging the software’s advanced security features, which include multi-factor authentication, Okta integration, bulk watermarking, burnt-in watermarks for downloads, secure script and document distribution and more.
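
To make the “burnt-in watermark” idea concrete, here is a minimal sketch, in Python, of how a per-recipient visible watermark might be rendered into a review copy using the free ffmpeg tool. This is purely illustrative; it is not how Clear Media ERP works under the hood, and the filenames and label below are hypothetical:

    # Minimal sketch: burn a visible, per-recipient watermark into a
    # review copy with ffmpeg's drawtext filter. Illustrative only;
    # not Clear Media ERP's implementation. Filenames and label are
    # hypothetical, and ffmpeg is assumed to be installed.
    import subprocess

    def burn_in_watermark(src: str, dst: str, label: str) -> None:
        # Overlay semi-transparent text on every frame; copy audio as-is.
        drawtext = (
            f"drawtext=text='{label}':x=20:y=20:fontsize=36:fontcolor=white@0.4"
        )
        subprocess.run(
            ["ffmpeg", "-y", "-i", src, "-vf", drawtext, "-c:a", "copy", dst],
            check=True,
        )

    burn_in_watermark("master.mov", "review_jdoe.mp4", "FOR REVIEW - jdoe")

A production-grade system layers invisible forensic watermarks, expiring links and per-user audit trails on top of something like this, but even a simple visible burn-in tied to each recipient deters casual leaks of pre-release content.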

With the right tech stack, you get one beautifully organized and secure location to store all of your access services deliverables, which means your team can finally sit back and focus on what matters most — creating incredible content.


Morgan Swift is director of account management at Prime Focus Technologies in Los Angeles.

SMPTE 2019 Live: Gala Award Winners

postPerspective was invited by SMPTE to host the exclusive coverage of their 2019 Awards Gala. (Watch here!)

The annual event was hosted by Kasha Patel (a digital storyteller at NASA Earth Observatory by day and a science comedian by night!), and presenters included Steve Wozniak. Among this year’s honorees were Netflix’s Anne Aaron, Gary J. Sullivan, Michelle Munson and Sky’s Cristina Gomila Torres. Honorary Membership was bestowed on Roderick Snell (Snell & Wilcox) and Paul Kellar (Quantel).

If you missed this year’s SMPTE Awards Gala, or even if you were there, check out our backstage interviews with some of our industry’s luminaries. We hope you enjoy watching these interviews as much as we enjoyed shooting them.

Oh, and a big shout-out to the team from AlphaDogs, who shot and edited all of our 2019 SMPTE Live coverage!