
The Kominsky Method‘s post brain trust: Ross Cavanaugh and Ethan Henderson

By Iain Blair

As Bette Davis famously said, “Old age ain’t no place for sissies!” But Netflix’s The Kominsky Method proves that in the hands of veteran sitcom creator Chuck Lorre — The Big Bang Theory, Two and a Half Men and many others — there’s plenty of laughs to be mined from old age… and disease, loneliness and incontinence.

The show stars Michael Douglas as divorced, has-been actor and respected acting coach Sandy Kominsky and Alan Arkin as his longtime agent Norman Newlander. The story follows these bickering best friends as they tackle life’s inevitable curveballs while navigating their later years in Los Angeles, a city that values youth and beauty above all. Both comedic and emotional, The Kominsky Method won Douglas a Golden Globe.

The single-camera show is written by Al Higgins, David Javerbaum and Lorre, who also directed the first episode. Lorre, Higgins and Douglas executive produce the series, which is produced by Chuck Lorre Productions in association with Warner Bros. Television.

I recently spoke with post producer Ross Cavanaugh and post coordinator Ethan Henderson about posting the show.

You are in the middle of Season 2?
Ross Cavanaugh: Yes, and we’re moving along quite quickly. We’re already about three-quarters of the way through the season shooting-wise, out of the eight-show arc.

Where do you shoot, and what’s the schedule like?
Cavanaugh: We shoot mainly on the lot at Warner Bros. and then at various locations around LA. We start prepping each show one week before we start shooting, and then we get dailies the day after the first shooting day.

Our dailies lab is Picture Shop, which is right up the street in Burbank and very convenient for us. So getting footage from the set to them is quick, and they’re very fast at turning the dailies around. We usually get them by midnight the same day we drop them off, and then our editors start cutting fairly quickly after that.

Where do you do all the post?
Cavanaugh: Mainly at Picture Shop, who are very experienced in TV post work. They do all the post finishing and some of the VFX stuff — usually the smaller things, like beauty fixes and cleanup. They also do all the final color correction since DP Anette Haellmigk really wanted to work with colorist George Manno. They’ve been really great.

Ethan Henderson: We’re back and forth between the lot and Picture Shop. Once we get more heavily involved in post, I spend a lot of time there while we’re onlining the show, coloring and doing the VFX drop-ins, and then through the final deliverables process, since everything for Netflix comes out of there.

What are the big challenges of post production on this show, and how closely do you work with Chuck Lorre?
Cavanaugh: As with any TV show, you’re always on a very tight deadline, and there are a lot of moving parts to deal with very quickly. While our prolific showrunner Chuck Lorre is busy with all the projects he has going — especially with all the writing — he always makes time for us. He’s very passionate about the cut and is extremely on top of things.

I’d say the challenges on this show are actually fairly minimal. Basically, we ran a pretty tight ship on the first season, and now I’d say it’s a well-oiled machine. We haven’t had any big problems or surprises in post, which can happen.

Let’s talk about editing. You have two editors, Matthew Barbato and Gina Sansom. I assume that’s because of the time factor. How does that work?
Cavanaugh: Actually, Matthew moved to Veep, and Steven Lang took over for him this season. Each editor has their own assistant editor — Steven has Jack Cunningham and Gina has Barb Steele. They cut separately and work on an odds and evens schedule, each doing every other episode. We all get together to watch screenings of the Director’s Cut, usually in the editorial bay.

What are the big editing challenges?
Cavanaugh: We have a pretty big cast, and there are a ton of jokes and a lot going on all the time. Beyond Michael Douglas and Alan Arkin, the whole cast is so experienced and gives such great performances that there’s a lot of material for the editors to cut from. To be honest, the scripts are all so tight that I think one of the challenges is knowing when to cut a joke to serve the pacing of an episode.

This isn’t a VFX-driven show, but there are some visual effects shots. Can you explain?
Cavanaugh: We do a lot of driving scenes and use 24frame.com, who have this really good wraparound HD projection technology, so we pretty much shoot all our car scenes on the stage.

Henderson: Occasionally we’ll pick up some exterior or establishing shots on a freeway using doubles in the cars. All the plates are picked ahead of time. Once in a while, for the sake of continuity, we’ll have to swap in a different section of the background plate because too many cars went by and it didn’t match up in the edit.

That’s one of the things that comes up every so often. The other big thing is that both of the leads wear glasses, so reflections of crew and equipment can become an issue; we have to deal with all that and clean it up.

Cavanaugh: We don’t use many big VFX shots, and we can’t reveal much about what happens in the new season, but sometimes there’s stuff like the scene in season one where one of the characters threw some firecrackers at Michael Douglas’ feet. We obviously weren’t going to throw real ones at Michael Douglas, although I think he’d have sucked it up if we’d done it that way! We were shooting in a residential neighborhood at night and couldn’t set off real ones because they’re very loud, so we ended up doing it all with VFX. FuseFX handled the heavier VFX work.

Henderson: There was a big shot in the pilot where we did a lot of set extensions in a restaurant where Sandy Kominsky (Douglas) and Nancy Travis’ character are having coffee. It was this big sweeping pan down over the city.

Can you talk about the importance of sound and music?
Cavanaugh: They both play a key role, and we have a great team that includes music editor Joe Deveau, supervising sound editor Lou Thomas and sound mixers Yuri Reese and Bill Smith. The sound recording quality we get on set is always great, so that means we only need very minimal ADR. The whole sound mix is done here on the lot at Warners.

Our composer, Jeff Cardoni, worked with Chuck on Young Sheldon, and he’s really on top of getting all the new cues for the show. We basically have two versions of our main title sequence music cues — one is very bombastic and in-your-face, and the other is a bit more subtle — and it’s funny how it broke down in the first season. The guy who cut the pilot and the odd episodes went with the more bombastic version, while the second editor on the even episodes preferred the softer cues, so I’ll be curious to see how all that breaks down in the new season.

How important is all the coloring on this?
Cavanaugh: Very important. After we do all the online, we ship it over to George at Picture Shop and spend about a day and a half on it. The DP either comes in or gets a file, and she gives her notes. Then we’ll play it for Chuck. We’re in the HDR world with Dolby Vision, and it makes it look so beautiful — but then we have to do the standard pass on it as well.

I know you can’t reveal too much about the new season, but what can fans expect?
Henderson: They’re getting a continuation of these two characters’ journey together — growing old and everything that comes with that. I think it feels like a very natural extension of the first season.

Cavanaugh: In terms of the post process, I feel like we’re a Swiss watch now. We’re ticking along very smoothly. Sometimes post can be a nightmare and full of problems, so it’s great to have it all under control.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, the Chicago Tribune, the Los Angeles Times and the Boston Globe.

Colorfront at NAB showing 8K HDR, product updates

Colorfront, which makes on-set dailies and transcoding systems, has rolled out new 8K HDR capabilities and updates across its product lines. The company has also deepened its technology partnership with AJA and entered into a new collaboration with Pomfort to bring more efficient color and HDR management on-set.

Colorfront Transkoder is a post workflow tool for handling UHD, HDR camera, color and editorial/deliverables formats, with recent customers such as Sky, Pixelogic, The Picture Shop and Hulu. With a new HDR GUI, Colorfront’s Transkoder 2019 performs the realtime decompression/de-Bayer/playback of Red and Panavision DXL2 8K R3D material displayed on a Samsung 82-inch Q900R QLED 8K Smart TV in HDR and in full 8K resolution (7680×4320). The de-Bayering process is optimized through Nvidia GeForce RTX graphics cards with Turing GPU architecture (also available on Colorfront On-Set Dailies 2019), with 8K video output (up to 60p) using AJA Kona 5 video cards.

“8K TV sets are becoming bigger, as well as more affordable, and people are genuinely awestruck when they see 8K camera footage presented on an 8K HDR display,” said Aron Jaszberenyi, managing director, Colorfront. “We are actively working with several companies around the world originating 8K HDR content. Transkoder’s new 8K capabilities — across on-set, post and mastering — demonstrate that 8K HDR is perfectly accessible to an even wider range of content creators.”

Powered by a re-engineered version of Colorfront Engine and featuring the HDR GUI and 8K HDR workflow, Transkoder 2019 supports camera/editorial formats including Apple ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE (High Density Encoding).

Transkoder 2019’s mastering toolset has been further expanded to support Dolby Vision 4.0 as well as Dolby Atmos for the home with IMF and Immersive Audio Bitstream capabilities. The new Subtitle Engine 2.0 supports CineCanvas and IMSC 1.1 rendering for preservation of content, timing, layout and styling. Transkoder can now also package multiple subtitle language tracks into the timeline of an IMP. Further features support fast and efficient audio QC, including solo/mute of individual tracks on the timeline, and a new render strategy for IMF packages enabling independent audio and video rendering.

Colorfront also showed the latest versions of its On-Set Dailies and Express Dailies products for motion pictures and episodic TV production. On-Set Dailies and Express Dailies both now support ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE. As with Transkoder 2019, the new version of On-Set Dailies supports realtime 8K HDR workflows, enabling a set-to-post pipeline from HDR playback through QC and rendering of HDR deliverables.

In addition, AJA Video Systems has released v3.0 firmware for its FS-HDR realtime HDR/WCG converter and frame synchronizer. The update introduces enhanced coloring tools together with several other improvements for broadcast, on-set, post and pro AV HDR production developed by Colorfront.

A new, integrated Colorfront Engine Film Mode offers an ACES-based grading and look creation toolset with ASC Color Decision List (CDL) controls, built-in LOOK selection including film emulation looks, and variable Output Mastering Nit Levels for PQ, HLG Extended and P3 colorspace clamp.
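
The CDL piece of that toolset follows math standardized by the ASC: per channel, out = (in × slope + offset) ^ power, followed by a saturation adjustment against Rec. 709 luma. A minimal numpy sketch of that transform (an illustration of the published formula, not the FS-HDR firmware):

```python
import numpy as np

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """Apply an ASC CDL to an (..., 3) float RGB array in the 0-1 range."""
    # Slope/offset/power step, clamped before the power per the ASC spec.
    out = np.clip(rgb * slope + offset, 0.0, None) ** power
    # Saturation step, computed against Rec. 709 luma weights.
    luma = out @ np.array([0.2126, 0.7152, 0.0722])
    return np.clip(luma[..., None] + saturation * (out - luma[..., None]), 0.0, 1.0)
```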

Since launching in 2018, FS-HDR has been used on a wide range of TV and live outside broadcast productions, as well as motion pictures including Paramount Pictures’ Top Gun: Maverick, shot by Claudio Miranda, ASC.

Colorfront licensed its image analysis software to AJA for the AJA HDR Image Analyzer in 2018. A new version of the AJA HDR Image Analyzer is set for release during Q3 2019.

Finally, Colorfront and Pomfort have teamed up to integrate their respective HDR-capable on-set systems. This collaboration, harnessing Colorfront Engine, will include live CDL reading in ACES pipelines between Colorfront On-Set/Express Dailies and Pomfort LiveGrade Pro, giving motion picture productions better control of HDR images while simplifying their on-set color workflows and dailies processes.

Color Chat: Light Iron’s Sean Dunckley

Sean Dunckley joined Light Iron’s New York studio in 2013, where he has worked on episodic television and feature films. He finds inspiration in many places, but most recently in the photography of Stephen Shore and Greg Stimac. Let’s find out more…

NAME: Sean Dunckley

COMPANY: LA- and NYC-based Light Iron

CAN YOU DESCRIBE YOUR COMPANY?
Light Iron is a Panavision company that offers end-to-end creative and technical post solutions. I color things there.

AS A COLORIST, WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I like to get involved early in the process. Some of the most rewarding projects are those where I get to work with the cinematographer from pre-production all the way through to the final DCP.

Ongoing advances in technology have really put the spotlight on the holistic workflow. As part of the Panavision ecosystem, we can offer solutions from start to finish, and that further strengthens the collaboration in the DI suite. We can help a production with camera and lens choices, oversee dailies and then bring all that knowledge into the final grade.

Recently, I had a client who was worried about the speed of his anamorphics at night. The cinematographer was much more comfortable shooting the faster spherical lenses, but the film and story called for the anamorphic look. In pre-production, I was able to show him how we can add some attributes of anamorphic lenses in post. That project ended up shooting a mix of anamorphic and spherical, delivering on both the practical and artistic needs.
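
Dunckley doesn’t say which anamorphic attributes he showed, but two of the most recognizable, horizontally streaked flares and oval bokeh, can be roughed in digitally. A crude OpenCV sketch of the idea (hypothetical filenames and values, not Light Iron’s actual recipe):

```python
import cv2
import numpy as np

frame = cv2.imread("spherical_plate.png").astype(np.float32) / 255.0  # hypothetical source

# Isolate bright highlights, the parts of the image that read as bokeh/flare.
luma = frame.mean(axis=2)
highlights = frame * (luma > 0.85)[..., None]

# Blur the highlights far more horizontally than vertically to mimic the
# oval bokeh and streaked flares of anamorphic glass.
streak = cv2.GaussianBlur(highlights, ksize=(0, 0), sigmaX=31, sigmaY=5)

# Tint the streaks slightly cool, a nod to the classic blue anamorphic flare.
streak *= np.array([1.2, 0.9, 0.7])  # BGR: boost blue, pull red

# Screen the streaks back over the plate.
out = 1.0 - (1.0 - frame) * (1.0 - np.clip(streak, 0.0, 1.0))
cv2.imwrite("anamorphic_look.png", (out * 255).astype(np.uint8))
```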

Hulu’s Fyre Fraud doc.

WHAT SYSTEM DO YOU WORK ON?
FilmLight’s Baselight. Its color management tools are excellent, it has strong paint capabilities, and the Blackboard 2 panel is very user-friendly.

ARE YOU SOMETIMES ASKED TO DO MORE THAN JUST COLOR ON PROJECTS?
Now that DI systems have expanded their tools, I can integrate last-minute fixes during the DI sessions without having to stop and export a shot to another application. Baselight’s paint tools are very strong and have allowed me to easily solve many client issues in the room. Many times, this has saved valuable time against strict deadlines.

WHAT’S YOUR FAVORITE PART OF THE JOB?
That’s easy. It is the first day of a new project. It feels like an artistic release when I am working with filmmakers to create style frames. I like to begin the process by discussing the goals of color with the film’s creative team.

I try to get their take on how color can best serve the story. After we talk, we play for a little while. I demonstrate the looks that have been inspired by their words and then form a color palette for the project. During this time, it is just as important to learn what the client doesn’t like as what they do.

WHAT’S YOUR LEAST FAVORITE?
I think the hours can be tough at times. The deadlines we face often battle with the perfectionist in me.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
Architecture is a field I would have loved to explore. It’s very similar, as it is equal parts technical and creative.

WHY DID YOU CHOOSE THIS PROFESSION?
I had always been interested in post. I used to cut skateboard videos with friends in high school. In film school, I pursued more of an editing route. After graduation, I got a job at a post house and quickly realized I wanted to deviate and dive into color.

Late Night with Emma Thompson. Photo by Emily Aragones

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
Recent film titles I worked on include Late Night and Brittany Runs a Marathon, both of which got picked up at Sundance by Amazon.

Other recent projects include Amazon Studios’ Life Itself and the Fyre Fraud documentary on Hulu. Currently, I am working on multiple episodic series for different OTT studios.

The separation that used to exist between feature films, documentaries and episodics has diminished. Many of my clients are bouncing between all types of projects and aren’t confined to a single medium.

It’s a unique time to be able to color a variety of productions. Being innovative and flexible is the name of the game here at Light Iron, and we’ve always been encouraged to follow the client and not the format.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
It’s impossible to pick a single project. They are all my children!

WHERE DO YOU FIND INSPIRATION?
I go through phases but right now it’s mostly banal photography. Stephen Shore and Greg Stimac are two of my favorite artists. Finding beauty in the mundane has a lot to do with the shape of light, which is very inspiring to me as a colorist.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
I need my iPhone, Baselight and, of course, my golf course range finder.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
I follow Instagram for visuals, and I keep up with Twitter for my sports news and scores.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I have young children, so they make sure I leave those stresses back at the office, or at least until they go to bed. I also try to sneak in some golf whenever I can.

Review: Sonarworks Reference 4 Studio Edition for audio calibration

By David Hurd

What is a flat monitoring system, and how does it benefit those mixing audio? Well, this is something I’ll be addressing in this review of Sonarworks Reference 4 Studio Edition, but first some background…

Having a flat audio system simply means that whatever signal goes into the speakers comes out sonically pure, exactly as it was meant to. On a graph, it would look like a straight line from 20 cycles on the left to 20,000 cycles on the right.

Peaks or valleys in that line would indicate unwanted boosts or cuts at certain frequencies, and there is a reason you want your monitoring system flat. If your speakers have peaks from the hundred-cycle mark on down, you get boominess. At 250 to 350 cycles you get mud. At around a thousand cycles you get a honkiness, as if you were holding your nose when you talked, and too much high end sounds brittle. You get the idea.

Before and after calibration

If your system is not flat, your monitors are lying to your ears and you can’t trust what you are hearing while you mix.

The problem arises when you try to play your audio on another system and hear the opposite of what you mixed. It works like this: If your speakers have too much bass then you cut some of the bass out of your mix to make it sound good to your ears. But remember, your monitors are lying, so when you play your mix on another system, the bass is missing.

To avoid this problem, professional recording studios calibrate their studio monitors so that they can mix in a flat-sounding environment. They know that what they hear is what they will get in their mixes, so they can happily mix with confidence.

Every room affects what you hear coming out of your speakers. The problem is that the studio monitors that were close to being flat at the factory are not flat once they get put into your room and start bouncing sound off of your desk and walls.

Sonarworks
This is where Sonarworks’ calibration mic and software come in. They give you a way to sonically flatten out your room by taking a speaker measurement. This gives you a response chart based upon the acoustics of your room. You apply this correction using the plugin in your favorite DAW, like Avid Pro Tools. You can also use the system-wide app to correct sound from any source on your computer.
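
Sonarworks doesn’t publish its correction algorithm, but the underlying principle, measure the room’s deviation from flat and apply the inverse as a filter, can be sketched in a few lines of Python. The measurement points below are invented for illustration; a real profile comes from the calibration mic and test sweeps:

```python
import numpy as np
from scipy.signal import firwin2, fftconvolve

fs = 48000  # sample rate in Hz

# Hypothetical measured room response: frequency (Hz) -> deviation in dB.
freqs_hz = np.array([0, 60, 100, 300, 1000, 4000, 12000, fs / 2])
meas_db = np.array([0, 6, 4, 3, 0, -2, -4, -4])

# The correction is simply the inverse of the measured deviation.
corr_gain = 10 ** (-meas_db / 20)

# Build a linear-phase FIR filter that matches the inverse curve.
fir = firwin2(2047, freqs_hz, corr_gain, fs=fs)

# Running the monitor feed through it flattens the chain
# (here applied to a dummy signal standing in for the mix bus).
audio = np.random.randn(fs)
flattened = fftconvolve(audio, fir, mode="same")
```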

So let’s imagine that you have installed the Sonarworks software, calibrated your speakers and mixed a music project. Since more than 30,000 facilities use Sonarworks, you can send out your finished mix without the Sonarworks plugin baked in; the receiving room has different acoustics and will use its own calibration setting. Now, the mastering lab you use will be hearing your mix on their Sonarworks acoustically flat system… just as you mixed it.

I use a pair of Genelec studio monitors for both audio projects and audio-for-video work. They were expensive, but I have been using them for over 15 years with great results. If you don’t have studio monitors and just choose to mix on headphones, Sonarworks has you covered.

The software will calibrate your headphones.

There is an online product demo at sonarworks.com that lets you select which headphones you use. You can switch between bypass and the Sonarworks effect. Since they have already done the calibration process for your headphones, you can get a good idea of the advantages of mixing on a flat system. The headphone option is great for those who mix on a laptop or small home studio. It’s less money as well. I used my Sennheiser HD300 Pro series headphones.

I installed Sonarworks on my “Review” system, which is what I use to review audio and video production products. I then tested Sonarworks on both Pro Tools 12 music projects and video editing work, like sound design using a sound FX library and audio from my Blackmagic Ursa 4.6K camera footage. I was impressed by the difference the Sonarworks software made. It opened up my mixes and made it easy to find any problems.

The Sonarworks Reference 4 Studio Edition takes your projects to a whole new level, and finally lets you hear your work in a sonically pure and flat listening environment.

My Review System
The Sonarworks Reference 4 Studio Edition was tested on my six-core Mac Pro trash can running macOS High Sierra with 64GB of RAM (plus 12GB on the D700 video cards); a Blackmagic UltraStudio 4K box; four G-Tech G-Speed 8TB RAID boxes with HighPoint RAID controllers; Lexar SD and CFast card readers; video output viewed on a Boland 32-inch broadcast monitor; a Mackie mixer; a Komplete Kontrol S25 keyboard; and a Focusrite Clarett 4Pre.

Software includes Apple FCPX, Blackmagic Resolve 15 and Pro Tools 12. Cameras used for testing are a Blackmagic 4K Production camera and the Ursa Mini 4.6K Pro, both powered by Blueshape batteries.


David Hurd is a production and post veteran who owns David Hurd Productions in Tampa. You can reach him at david@dhpvideo.com.

AWS at NAB with a variety of partners, cloud workflows

During NAB 2019, Amazon Web Services (AWS) showcased advances for content creation, media supply chains and content distribution that improve agility and enhance quality across video workflows. Demonstrations included enhanced live and on-demand video workflows, such as next-gen transcoding, studio in the cloud, content protection, low latency and personalization. The company also highlighted cloud-based machine learning capabilities for content redaction, highlight creation, video clipping, live subtitling and metadata extraction.

AWS was joined by 12 technology partners in showing solutions that help users create, protect, distribute and monetize streaming video content. More than 60 Amazon Partner members across the show floor demonstrated media solutions built on AWS and interoperable with AWS services to deliver scalable video workflows.

Here are some workflows highlighted:
• Studio in the cloud – Users can deploy a creative studio in the cloud for visual effects, animation and editing workloads. They can scale rendering, virtual workstations and data storage globally with AWS Thinkbox Deadline, Amazon Elastic Compute Cloud (EC2) instances and AWS Cloud storage options such as Amazon Simple Storage Service (Amazon S3), Amazon FSx and more.
• Next-generation transcoding – AWS Elemental MediaConvert spotlighted advanced features for file-based video processing. Support for IMF inputs and CMAF output simplifies video delivery, and integrated Quality-Defined Variable Bitrate (QVBR) rate control enables high-quality video while lowering bitrates, storage and bandwidth requirements (see the job sketch after this list).
• Cloud DVR services – AWS Elemental MediaPackage enables an end-to-end cloud DVR workflow that lets content providers deliver DVR-like experiences, such as catch-up and start-over functionality for viewing on mobile and other over-the-top (OTT) devices.
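
The QVBR rate control mentioned above is configured per output inside the MediaConvert job settings. A minimal boto3 sketch of where those settings live (the IAM role, bucket paths and quality level are hypothetical, and error handling is omitted):

```python
import boto3

# MediaConvert uses a per-account endpoint, discovered at runtime.
mc = boto3.client("mediaconvert", region_name="us-east-1")
endpoint = mc.describe_endpoints()["Endpoints"][0]["Url"]
mc = boto3.client("mediaconvert", region_name="us-east-1", endpoint_url=endpoint)

job = mc.create_job(
    Role="arn:aws:iam::123456789012:role/MediaConvertRole",  # hypothetical IAM role
    Settings={
        "Inputs": [{"FileInput": "s3://my-bucket/masters/spot.mov"}],  # hypothetical
        "OutputGroups": [{
            "OutputGroupSettings": {
                "Type": "FILE_GROUP_SETTINGS",
                "FileGroupSettings": {"Destination": "s3://my-bucket/outputs/"},
            },
            "Outputs": [{
                "ContainerSettings": {"Container": "MP4"},
                "VideoDescription": {
                    "CodecSettings": {
                        "Codec": "H_264",
                        "H264Settings": {
                            # QVBR: target a quality level and cap the peak bitrate.
                            "RateControlMode": "QVBR",
                            "QvbrSettings": {"QvbrQualityLevel": 7},
                            "MaxBitrate": 8_000_000,
                        },
                    },
                },
            }],
        }],
    },
)
print("Submitted job:", job["Job"]["Id"])
```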

AWS also highlighted intelligent workflows and automated capabilities:
• Media-to-cloud migration – Media asset management tools integrate with AWS Elemental MediaConvert, Amazon S3 and Amazon CloudFront to accelerate migration of large-scale video archives into the cloud. Built-in metadata tools improve search and management for massive media archives.
• Smart language workflows – AWS Elemental Media Services and Amazon Machine Learning work together to automate realtime transcription, caption creation and multi-language subtitling and dubbing, as well as creation of video clips based on caption text (a transcription sketch follows this list).
• Deep media archive – The new Amazon S3 Glacier Deep Archive storage class is a low-cost cloud storage offering that enables customers to eliminate digital tape from their media infrastructures. It is ideally suited to cold media archives and to second copy and disaster recovery needs.
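
As one concrete example of the transcription piece above, Amazon Transcribe can be started against an S3 asset in a few lines of boto3. A minimal sketch (job name and bucket are hypothetical; converting the word-timed JSON into captions is a separate step):

```python
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

# Kick off an asynchronous transcription of a mezzanine file in S3.
transcribe.start_transcription_job(
    TranscriptionJobName="promo-spot-v3",                # hypothetical name
    Media={"MediaFileUri": "s3://my-bucket/promo.mp4"},  # hypothetical asset
    MediaFormat="mp4",
    LanguageCode="en-US",
)

# Poll for completion; the result is a JSON transcript with word timings,
# which downstream tooling can turn into captions or subtitles.
job = transcribe.get_transcription_job(TranscriptionJobName="promo-spot-v3")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```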

Behind the Title: PS260 editor Ned Borgman

This editor’s path began early. “I was the kid who would talk during the TV show and then pay attention to the commercials,” he says.

Name: Ned Borgman

Company: PS260

Can you describe your company?
PS260 is a post house built for ideas, creative solutions and going beyond the boards. We have studios in New York; Venice, California; and Boston. I am based in New York.

What’s your job title?
Film editor, problem solver, cleaner of messes.

What does that entail?
My job is to make everything look great. Every project takes an entire team of super-talented people who bring their expertise to bear to tell a story. They create all of the puzzle pieces that end up in the dailies, and I put them together in such a way that they can all shine their best.

Facebook small business campaign

What would surprise people the most about what falls under that title?
I think it would be the sheer amount of stuff that can become an editor’s responsibility. So many details go into crafting a successful edit, and an editor needs to be well-versed in all of it. Color grading, visual effects, design, animation, music, sound design, the list goes on. The point isn’t to be a master of all of those things (that’s why we work with other amazing people when it comes to finishing), but to know the needs of each of those parts and how to make sure every detail gets properly addressed.

What’s your favorite part of the job?
It’s the middle part. When we’re all in the middle of the edit, up to our necks in footage and options and ideas. Out of all of that exploration the best bits start to stand out. The sound design element from that cut and the music track from that other version and a take we tried last night. It all starts to make sense, and from there it’s about making sure the best bits can work well together.

What’s your least favorite?
Knowing there are always some great cuts that will only ever exist inside a Premiere Pro bin. Not every performance or music track or joke can make it into the final cut and out into the world, and that’s OK. Maybe those cuts are airing in some other parallel universe.

What is your most productive time of the day?
Whenever the office is empty. So either early in the morning or late at night.

If you didn’t have this job, what would you be doing instead?
Probably something with photography. I’m too attached to visual storytelling, and I’m a horrible illustrator.

Why did you choose this profession? How early on did you know this would be your path? 
I’ve always been enamored with commercials. I was the kid who would talk during the TV show and then pay attention to the commercials. I remember making my first in-camera edit in third grade when I was messing around with the classroom camcorder set up on a tripod. I had recorded myself in front of the camera and then recorded a bit of the empty classroom. Playing it back, it looked like I had vanished into thin air. It blew my eight-year-old mind.

Burger King

Can you name some recent projects you have worked on?
Let’s see, Burger King’s flame-broiled campaign with MullenLowe was great. It has a giant explosion, which is always nice. Facebook’s small business campaign with 72andSunny was a lot of fun with an amazing team of people. And some work for the Google Home Hub launch with Google Creative Labs was fun because launching stuff is exciting.

Do you put on a different hat when cutting for a specific genre? 
Not exactly. Every genre has its specific needs, but I think the fundamentals remain the same. I need to pay attention to rhythm, to performances, to music, to sound design, to VO — all of that stuff. It’s about staying in tune with how all of these ingredients interact with each other to create a reaction from the audience, no matter the reaction you’re striving for.

What is the project that you are most proud of?
I grew up obsessed with practical effects in movies, so I’d have to say Burger King’s “Gasoline Shuffle.” It has a massive explosion that was shot in-camera, and it looks incredible. I wish I’d been on set that day.

What do you use to edit?
Adobe Premiere Pro all the way. I like to think that one day I’ll be back on Avid Media Composer though.

What is your favorite plugin?
I don’t have one. Just give me that basic install.

Are you often asked to do more than edit? If so, what else are you asked to do?
Sure. I’ll often record the scratch VO when there’s one needed. My voice is…serviceable. What that means is that as soon as the real VO talent gets placed in the cut, everyone’s thrilled with how much better everything sounds. That’s cool by me.

Name three pieces of technology you can’t live without.
My iPhone, my Shure in-ear headphones, and an extra long charging cable.

This is a high stress job with deadlines and client expectations. What do you do to de-stress from it all?
Change some diapers. My wife and I just had our first kid last August, and she’s incredible. A game of peek-a-boo can really change your perspective.

Quantum offers new F-Series NVMe storage platform

During the NAB show, Quantum introduced its new F-Series NVMe storage arrays designed for performance, availability and reliability. Using non-volatile memory express (NVMe) Flash drives for ultra-fast reads and writes, the series supports massive parallel processing and is intended for studio editing, rendering and other performance-intensive workloads using large unstructured datasets.

Incorporating the latest Remote Direct Memory Access (RDMA) networking technology, the F-Series provides direct access between workstations and the NVMe storage devices, resulting in predictable and fast network performance. By combining these hardware features with the new Quantum Cloud Storage Platform and the StorNext file system, the F-Series offers end-to-end storage capabilities for post houses, broadcasters and others working in rich media environments, such as visual effects rendering.

The first product in the F-Series is the Quantum F2000, a 2U dual-node server with two hot-swappable compute canisters and up to 24 dual-ported NVMe drives. Each compute canister can access all 24 NVMe drives and includes processing power, memory and connectivity specifically designed for high performance and availability.

The F-Series is based on the Quantum Cloud Storage Platform, a software-defined block storage stack tuned specifically for video and video-like data. The platform eliminates data services unrelated to video while enhancing data protection, offering networking flexibility and providing block interfaces.

According to Quantum, the F-Series is as much as five times faster than traditional Flash storage/networking, delivering extremely low latency and hundreds of thousands of IOPs per chassis. The series allows users to reduce infrastructure costs by moving from Fibre Channel to Ethernet IP-based infrastructures. Additionally, users leveraging a large number of HDDs or SSDs to meet their performance requirements can gain back racks of data center space.

The F-Series is the first product line based on the Quantum Cloud Storage Platform.

HP shows off new HP Z6 and Z8 G4 workstations at NAB

HP was at NAB demoing their new HP Z6 and Z8 G4 workstations, which feature Intel Xeon Scalable processors and Intel Optane DC persistent memory technology to eliminate the barrier between memory and storage for compute-intensive workflows, including machine learning, multimedia and VFX. The new workstations offer accelerated performance with a processor architecture that allows users to work faster and more efficiently.

Intel Optane DC allows users to improve system performance by moving large datasets closer to the CPU, where they can be accessed, processed and analyzed in realtime and in a more affordable way. Because the memory is persistent, there is no data loss after a power cycle or application closure. Once applications are written to take advantage of this new technology, users will benefit from accelerated workflows and little or no downtime.

Targeting 8K video editing in realtime and for rendering workflows, the HP Z6 G4 workstation is equipped with two next-generation Intel Xeon processors providing up to 48 total processor cores in one system, Nvidia and AMD graphics and 384GB of memory. Users can install professional-grade storage hardware without using standard PCIe slots, offering the ability to upgrade over time.

Powered by up to 56 processing cores and up to 3TB of high-speed memory, the HP Z8 G4 workstation can run complex 3D simulations, supporting VFX workflows and handling advanced machine learning algorithms. It is certified for some of the most-used software apps, including Autodesk Flame and DaVinci Resolve.

HP’s Remote Graphics Software (RGS), included with all HP Z workstations, enables remote workstation access from any Windows, Linux or Mac device.

Avid is collaborating with HP to test RGS with Media Composer|Cloud VM.

The HP Z6 G4 workstation with new Intel Xeon processors is available now for the base price of $2,372. The HP Z8 G4 workstation starts at $2,981.

AI and deep learning at NAB 2019

By Tim Nagle

If you’ve been there, you know. Attending NAB can be both exciting and a chore. The vast show floor spreads across three massive halls and several hotels, and it will challenge even the most comfortable shoes. With an engineering background and my daily position as a Flame artist, I am definitely a gear-head, but I feel I can hardly claim that title at these events.

Here are some of my takeaways from the show this year…

Tim Nagle

8K
Having listened to the rumor mill, this year’s event promised to be exciting. And for me, it did not disappoint. First impressions: 8K infrastructure is clearly the goal of the manufacturers. Massive data rates and more Ks are becoming the norm. Everybody seemed to have an 8K workflow announcement. As a Flame artist, I’m not exactly looking forward to working on 8K plates. Sure, it is a glorious number of pixels, but the challenges are very real. While this may be the hot topic of the show, the fact that it is on the horizon further solidifies the need for the industry at large to have a solid 4K infrastructure. Hey, maybe we can even stop delivering SD content soon? All kidding aside, the systems and infrastructure elements being designed are quite impressive. Seeing storage solutions that can read and write at these astronomical speeds is just jaw dropping.

Young Attendees
Attendance remained relatively stable this year, but what I did notice was a lot of young faces making their way around the halls. It seemed like high school and university students were able to take advantage of interfacing with manufacturers, as well as some great educational sessions. This is exciting, as I really enjoy watching young creatives get the opportunity to express themselves in their work and make the rest of us think a little differently.

Blackmagic Resolve 16

AI/Deep Learning
Speaking of the future, AI and deep learning algorithms are being implemented into many parts of our industry, and this is definitely something to watch for. The possibilities to increase productivity are real, but these technologies are still relatively new and need time to mature. Some of the post apps taking advantage of these algorithms come from Blackmagic, Autodesk and Adobe.

At the show, Blackmagic announced their Neural Engine AI processing, which is integrated into DaVinci Resolve 16 for facial recognition, speed warp estimation and object removal, to name just a few. These features will add to the productivity of this software, further claiming its place among the usual suspects for more than just color correction.

Flame 2020

The Autodesk Flame team has implemented deep learning into their app as well, and it promises really impressive uses for retouching and relighting, as well as creating depth maps of scenes. Autodesk demoed a shot of a woman on the beach, with no real key light possibility and very flat, diffused lighting in general. With a few nodes, they were able to relight her face to create a sense of depth and lighting direction. This same technique can be used for skin retouching as well, which is very useful in my everyday work.
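
Autodesk hasn’t detailed how Flame does this, but the core idea, deriving surface orientation from an estimated depth map and shading against a chosen light direction, is simple to illustrate. A crude Lambertian relighting sketch in numpy (an approximation of the concept, not Flame’s algorithm):

```python
import numpy as np

def relight(image, depth, light_dir=(0.5, -0.5, 0.7)):
    """Crudely relight an (H, W, 3) float image from an (H, W) depth estimate."""
    # Surface normals from the depth gradients.
    dz_dy, dz_dx = np.gradient(depth)
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # Lambertian shading against a unit light vector.
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    shade = np.clip(normals @ light, 0.0, 1.0)
    # Mix the synthetic key over the flat plate to fake a lighting direction.
    return np.clip(image * (0.5 + 0.9 * shade[..., None]), 0.0, 1.0)
```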

Adobe has also been working on their implementation of AI with the integration of Sensei. In After Effects, the content-aware algorithms will help to re-texture surfaces, remove objects and edge blend when there isn’t a lot of texture to pull from. Watching a demo artist move through a few shots, removing cars and people from plates with relative ease and decent results, was impressive.

These demos have all made their way online, and I encourage everyone to watch. Seeing where we are headed is quite exciting. We are on our way to these tools being very accurate and useful in everyday situations, but they are all very much a work in progress. Good news, we still have jobs. The robots haven’t replaced us yet.


Tim Nagle is a Flame artist at Dallas-based Lucky Post.

NAB 2019: First impressions

By Mike McCarthy

There is always a slew of new product announcements during the week of NAB, and this year was no different. As a Premiere editor, the developments from Adobe are usually the ones most relevant to my work and life. Similar to last year, Adobe got their software updates released a week before NAB, instead of announcing them at the show for eventual release months later.

The biggest new feature in the Adobe Creative Cloud apps is After Effects’ new “Content Aware Fill” for video. This will use AI to generate image data to automatically replace a masked area of video, based on surrounding pixels and surrounding frames. This functionality has been available in Photoshop for a while, but the challenge of bringing that to video is not just processing lots of frames but keeping the replaced area looking consistent across the changing frames so it doesn’t stand out over time.
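
Adobe hasn’t published the algorithm, but the single-frame version of the problem is easy to demonstrate, and doing so shows exactly why video is harder. A naive per-frame sketch using OpenCV’s inpainting (hypothetical filenames); because every frame is filled independently, the patch shimmers over time, which is the temporal-consistency problem described above:

```python
import cv2

cap = cv2.VideoCapture("shot.mp4")                    # hypothetical clip
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)   # white = area to remove

filled_frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Fill the masked region from surrounding pixels in THIS frame only;
    # nothing ties this frame's fill to the next one.
    filled = cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA)
    filled_frames.append(filled)
cap.release()
```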

The other key part to this process is mask tracking, since masking the desired area is the first step in that process. Certain advances have been made here, but based on tech demos I saw at Adobe Max, more is still to come, and that is what will truly unlock the power of AI that they are trying to tap here. To be honest, I have been a bit skeptical of how much AI will impact film production workflows, since AI-powered editing has been terrible, but AI-powered VFX work seems much more promising.
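
Mask tracking itself is commonly bootstrapped with dense optical flow: estimate per-pixel motion between frames, then carry the mask along with it. A rough sketch of that generic technique (not Adobe’s implementation):

```python
import cv2
import numpy as np

def propagate_mask(prev_gray, next_gray, prev_mask):
    """Carry a binary mask forward one frame using dense optical flow."""
    # Estimate flow from the NEW frame back to the OLD one so each new
    # pixel can look up where it came from (backward warping).
    flow = cv2.calcOpticalFlowFarneback(
        next_gray, prev_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_mask, map_x, map_y, cv2.INTER_NEAREST)
```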

Adobe’s other apps got new features as well, with Premiere Pro adding Free-Form bins for visually sorting through assets in the project panel. This affects me less, as I do more polishing than initial assembly when I’m using Premiere. They also improved playback performance for Red files, acceleration with multiple GPUs and certain 10-bit codecs. Character Animator got a better puppet rigging system, and Audition got AI-powered auto-ducking tools for automated track mixing.

Blackmagic
Elsewhere, Blackmagic announced a new version of Resolve, as expected. Blackmagic RAW is supported on a number of new products, but I am not holding my breath to use it in Adobe apps anytime soon, similar to ProRes RAW. (I am just happy to have regular ProRes output available on my PC now.) They also announced a new 8K Hyperdeck product that records quad 12G SDI to HEVC files. While I don’t think that 8K will replace 4K television or cinema delivery anytime soon, there are legitimate markets that need 8K resolution assets. Surround video and VR would be one, as would live background screening instead of greenscreening for composite shots. There is no image replacement in post, as everything is captured in-camera, and your foreground objects are accurately “lit” by the screens. I expect my next major feature will be produced with that method, but the resolution wasn’t there for the director to use that technology for the one I am working on now (enter 8K…).

AJA
AJA was showing off the new Ki Pro Go, which records up to four separate HD inputs to H.264 on USB drives. I assume this is intended for dedicated ISO recording of every channel of a live-switched event or any other multicam shoot. Each channel can record up to 1080p60 with 10-bit color to H.264 files in MP4 or MOV at up to 25Mb/s.

HP
HP had one of their existing Z8 workstations on display, demonstrating the possibilities that will be available once Intel releases their upcoming DIMM-based Optane persistent memory technology to the market. I have loosely followed the Optane story for quite a while, but had not envisioned this impacting my workflow at all in the near future due to software limitations. But HP claims that there will be options to treat Optane just like system memory (increasing capacity at the expense of speed) or as SSD drive space (with DIMM slots having much lower latency to the CPU than any other option). So I will be looking forward to testing it out once it becomes available.

Dell
Dell was showing off their relatively new 49-inch double-wide curved display. The 4919DW has a resolution of 5120×1440, making it equivalent to two 27-inch QHD displays side by side. I find that 32:9 aspect ratio to be a bit much for my tastes, with 21:9 being my preference, but I am sure there are many users who will want the extra width.

Digital Anarchy
I also had a chat with the people at Digital Anarchy about their Premiere Pro-integrated Transcriptive audio transcription engine. Having spent the last three months editing a movie that is split between English and Mandarin dialogue, needing to be fully subtitled in both directions, I can see the value in their tool-set. It harnesses the power of AI-powered transcription engines online and integrates the results back into your Premiere sequence, creating an accurate script as you edit the processed clips. In my case, I would still have to handle the translations separately once I had the Mandarin text, but this would allow our non-Mandarin speaking team members to edit the Mandarin assets in the movie. And it will be even more useful when it comes to creating explicit closed captioning and subtitles, which we have been doing manually on our current project. I may post further info on that product once I have had a chance to test it out myself.
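
Separate from Transcriptive’s own output options, the last step mentioned here, turning word-timed transcript data into explicit subtitles, is mechanical. A small self-contained illustration that groups hypothetical word timings into SubRip (.srt) cues:

```python
# Hypothetical engine output: (word, start_sec, end_sec).
words = [
    ("Welcome", 1.20, 1.55), ("to", 1.55, 1.70), ("the", 1.70, 1.85),
    ("screening.", 1.85, 2.40), ("Please", 3.10, 3.45), ("sit", 3.45, 3.70),
]

def ts(sec):
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = int(round(sec * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

# Start a new subtitle cue whenever there is a gap of more than half a second.
cues, current = [], []
for word, start, end in words:
    if current and start - current[-1][2] > 0.5:
        cues.append(current)
        current = []
    current.append((word, start, end))
if current:
    cues.append(current)

with open("subtitles.srt", "w") as f:
    for i, group in enumerate(cues, 1):
        text = " ".join(w for w, _, _ in group)
        f.write(f"{i}\n{ts(group[0][1])} --> {ts(group[-1][2])}\n{text}\n\n")
```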

Summing Up
There were three halls of other products to look through and check out, but overall, I was a bit underwhelmed at the lack of true innovation I found at the show this year.

Full disclosure, I was only able to attend for the first two days of the exhibition, so I may have overlooked something significant. But based on what I did see, there isn’t much else that I am excited to try out or that I expect to have much of a serious impact on how I do my various jobs.

It feels like most of the new things we are seeing are commoditized versions of products that were truly innovative when first released but have only been incrementally fleshed out since.

There seems to be much less pioneering of truly new technology and more repackaging of existing technologies into other products. I used to come to NAB to see all the flashy new technologies and products, but now it feels like the main thing I am doing there is a series of annual face-to-face meetings, and that’s not necessarily a bad thing.

Until next year…


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.