
Category Archives: Cameras

Killing Eve EP Sally Woodward Gentle talks Season 3

By Iain Blair

Killing Eve is more than just one of the most addictive spy thrillers on TV. It’s also a dark comedy, a workplace drama and a globe-trotting actioner that tells the story of two women engaged in an epic game of cat-and-mouse — Eve (Sandra Oh), head of a secret MI6 unit, and Villanelle (Jodie Comer), a beautiful (and strangely likeable) psychopathic assassin she’s been tasked to track down.

Sally Woodward Gentle

The award-winning show continues the story of the two women when it returns for its third season on April 26 on both BBC America and AMC. Season 3 sees Eve back in action after having survived being shot by Villanelle in the Season 2 finale. Eve is now in Rome and her current MI6 status is in flux after being manipulated by Carolyn (Fiona Shaw).

Continuing the show’s tradition of passing the baton to a new female writing voice, for Season 3 Suzanne Heathcote serves as lead writer and executive producer, joining executive producers Sally Woodward Gentle, Phoebe Waller-Bridge, Lee Morris, Gina Mingacci, Damon Thomas, Jeff Melvoin and Sandra Oh. Killing Eve is produced by Sid Gentle Films and is distributed by Endeavor Content.

I recently spoke with EP Sally Woodward Gentle — the BAFTA-winning and Emmy- and Golden Globe-nominated EP of the dramas Any Human Heart and The Durrells in Corfu — about making the show (based on the books of Luke Jennings), the Emmys and her love of post.

What can you tell us about Season 3 without giving too much away?
It’s a much more emotional season. We move Eve and Villanelle’s relationship on, and we get to see more of what Villanelle is really about. At the same time, Eve is really tested. And we bring in lots of new characters, which is very exciting, and Carolyn and Konstantin (Villanelle’s handler, played by Kim Bodnia) have got huge roles to play.

The appeal of two women leads seems obvious now, but were there doubts at first having them play traditional male roles when you first optioned the Luke Jennings novellas?
Not really. In fact, it didn’t even cross my mind. I just felt that people would really enjoy having a female assassin, and that it would be great to have another woman chase her. And I didn’t feel that the idea was wildly original. It just felt right, but I knew there were other female assassin shows out there, and I didn’t want people to go, “Oh, there’s La Femme Nikita.” I did feel it was time to do something bolder with it.

Is that how you decided to involve Phoebe Waller-Bridge?
Exactly. I’d read Fleabag and we’d had a meeting, and I just loved her attitude. Back then, she’d only done Fleabag and written some very clever comedy. I loved the idea of putting Luke’s novellas together with her attitude, joie de vivre and love of TV and what it could do. It didn’t feel like, “Wow, this will be earth-shatteringly different!” It just felt like something really interesting to do. Just do it and see what comes out.

You’ve executive produced all three seasons. What are the main challenges of this show?
To keep it feeling really fresh each year, and to not repeat stuff you’ve done. To examine new, different areas of emotional relationships, and to put people under different types of stress. The other big challenge is we have to turn it around from start to finish in just one year. We have to write all the scripts, shoot them, post them and get them out there in that time. It’s really hard work, both physically and mentally, but a lot of our team’s been here since the start. They love it, and that really helps, and everyone wants to push it a bit harder every season, so we embrace all new ideas.

Is it true that when Sandra Oh was first approached, she didn’t quite believe she was being cast as Eve?
Yeah, she hadn’t pictured herself in the role, but she’s brilliant.

What do Sandra and Jodie bring to the mix?
They bring so much. We were still finishing scripts for Season 1 as we shot, so you can’t help but feed back their input into the scripts, and the characters really have so much of the actors’ DNA. They just know them so well and how they’d respond.

You always use great locations. Where did you shoot Season 3?
In Spain, Romania, the UK. We get around!

Where do you post?
All at Molinare in London. We do everything from the edit to the grade, and we do all the sound at Hackenbacker, which is part of Molinare.

Do you like the post process?
I love it. We have a great post supervisor, Kate Stannard, who’s been on the show since the start. The great thing about post is that you get to rewrite all the raw material and be really creative with it.

Talk about editing. You have several editors, I assume because of the time factor. How does that work?
We have a great team of editors, including Dan Crinnion, who’s been on the show since the start, and an Italian editor, Simone Nesti, who does the assembly and who’s also been with us since the start. As soon as a director has finished shooting, we get right in there with the editor who does that block. We shoot in blocks of two episodes, and each block has its own editor and assistants. It’s not a huge team considering the amount of work.

The show is a real genre mash-up – thriller, comedy, action, emotional drama. How do you handle all the shifting tones?
That’s the big editing challenge, and the thing we were most concerned about in Season 1 — that Eve’s and Villanelle’s two stories were too disparate to be knitted together properly. But once you start to get a feel for what the show is, and what works and what doesn’t, it flows more easily. For me, if it gets too broad and it doesn’t feel truthful, that’s not good. But then it’s also a big piece of entertainment, so you can be really wild with it. We’re not saving lives, and there’s no massive message. We just want to be truthful about human behavior and be very entertaining.

There’s obviously a lot of attention also paid to the sound and the music.
Thanks for noticing. We have a great production sound team. Nigel Heath is our rerecording mixer, and our aim is to not have the dialogue too clean and out front. We like to keep a lot of texture in the background and make it feel quite immersive. Then in terms of the music, composer David Holmes is quite bold in his choices. That’s very tricky as we try hard not to be too genre and obvious with the cues, so they don’t just reinforce the visuals and what you should feel. So at a very dark moment the score might be quite celebratory and glorious. We’re constantly trying to flip it.

What about the DI?
It’s incredibly important, and our colorist, Gareth Spensley, has worked on it since Season 1, so he knows the show really well, and he works very closely with our DP Julian Court, who’s done most of the episodes since the start. Sometimes we have to shoot out of sequence, at different times of year, so you have to match all that. We try to find locations that feel fresh and exciting, and then we try hard not to over-stylize the look, keeping all the skin tones as natural and real as possible and then enhancing the beauty of the rest of it. At the very start of the show, we thought of pushing toward a more ‘noir’ look, but it just didn’t feel right, so we leant into the pleasure of the visuals instead.

How important are the Emmys to a show like this?
Hugely important, for both the show and the actors. I think it’s given us far more visibility.

I heard you already got picked up for Season 4. How far along are you with it?
We’re already writing and nearly have the whole season arc worked out, and we’ll start shooting it at the end of September. Of course, it all depends on what happens with the Covid-19 crisis, but that’s the plan.

Will you do more seasons?
I can see us going on as long as we keep refreshing it and move their relationship along. Then we have all these new characters we’ve created who’ll be there in Season 4 and beyond, so there’s plenty to explore.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Blackmagic intros ATEM Mini Pro, updates camera and Hyperdeck software

Blackmagic has added to its ATEM Mini family and has updated the software for the Pocket Cinema Camera 4K and 6K and the HyperDeck Studio Mini.

ATEM Mini Pro

The new ATEM Mini Pro is a low-cost switcher ($595) that makes it easy to create multi-camera productions (up to four cameras) for live streaming. The new version has extra features for recording, streaming and monitoring, including:
– A new hardware streaming engine that allows direct streaming via Ethernet connection to YouTube Live, Facebook and Twitch
– A multi-view option that allows all inputs to be monitored on a single monitor along with live status of recording, streaming and the audio mixer

– Support for recording streams directly to USB flash disks in H.264 and recording to multiple disks for continuous recording

The ATEM Mini Pro is now shipping.

There’s also a new Camera 6.9 software update for the Blackmagic Pocket Cinema Camera 4K and 6K models. The update allows the Pocket Cinema Camera 4K and 6K to work as both a digital film camera and a studio camera, adding studio camera features such as remote control, tally and the DaVinci Resolve color corrector.

Camera 6.9 software

Version 6.9 also makes it possible to connect to any ATEM Mini switcher and get control of the camera parameters, lens and tally light. ATEM Mini can control up to four Pocket Cinema Camera 4K or 6K cameras via the HDMI video connection and ensure all cameras are perfectly matched for a professional live studio workflow. The Camera 6.9 update is now available to download for free.

Finally, the new HyperDeck v7.1 software update adds multiple new features to HyperDeck Studio Mini broadcast recorders:
– New H.264 codec that supports true interlaced HD formats
– Support for AAC audio codec, making it possible to upload files directly to YouTube
– Much faster Ethernet transfers of 110 MB/s and support for longer duration in a single file of at least three hours

HyperDeck v7.1 is now available to download for free.


Emma DP Christopher Blauvelt talks looks and workflow

Focus Features’ Emma, the latest screen adaptation of the Jane Austen novel, was directed by Autumn de Wilde and shot by cinematographer Christopher Blauvelt. For de Wilde, a photographer and music video director, Emma is her feature film directorial debut. In addition to her still work on CD covers for The White Stripes, Fiona Apple and Beck, she has directed music videos for indie bands, including Spoon, The Decemberists and Rilo Kiley.

DP Christopher Blauvelt (right)

Blauvelt and de Wilde met in 2000 when she directed Elliott Smith’s Son of Sam music video. He then shot some 16mm footage that was projected behind Elliott on his tour, and the collaborators became friends. When it came time to start on her directorial debut, de Wilde reached out, knowing that he could help her bring her vision to the screen.

Emma was shot with the ARRI Alexa LF using ARRI Signature Prime lenses. Blauvelt had done some tests with it a year or so ago for a film he shot with Gus Van Sant, called Don’t Worry, He Won’t Get Far on Foot, so he was familiar with the camera. “We looked at many different cameras to find our aesthetic,” Blauvelt explains. “I remember making a note about the softness and way it treated skin. It was also something I would think about for scope and scale, for which Emma provided the environments in the form of castles and the English countryside. Of course, we didn’t just test cameras — Autumn had given me many references to use as our guide in finding the unique look of Emma, so we tested many different lenses, cameras, filters, lights and lookup tables.”

Principal photography began in March 2019. Blauvelt hadn’t worked on a feature film in the UK before but was fortunate enough to team up with Mission, a UK-based DIT and digital services company, which assisted in setting up a workflow and a color pipeline that ensured that the director’s and DP’s vision was communicated to everyone. Mission’s Jacob Robinson was the DIT.

DITs have become a more and more important part of the camera crew and often build close working relationships with DPs. Designing the look of a production is a collaborative process that often begins in preproduction. “I really enjoy my working relationships with DITs; they are the people I rely on to inform me on the rules we’ve put in place on any particular shoot,” says Blauvelt. “Usually during prep, we will do an enormous amount of testing and come up with the recipe that we decide on for the shoot.”

The final DI colorist is often part of the mix too. For Emma, Goldcrest’s Rob Pizzey was the DI colorist, so he was also involved in creating the color pipeline and LUTs with Blauvelt and DIT Robinson. As Blauvelt explains, “It’s also really great having the chance to create custom LUTs with our final grade colorist. We work hard to have a formula that works while on location and all the way to the final grade.”

There are several different ways for a DP to work with LUTs. In Blauvelt’s case, during testing the team created several different base LUTs, including day/exterior, day/interior, night/exterior, night/interior, day/exterior in clouds and sun and other variations they might encounter during the shoot. “These LUTs are all adjustable as well,” he continues, “and will be manipulated live on set to achieve the desired look. We also have images to serve as our spirit throughout the shoot to remind us of the original intent as well.”
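
For readers curious what “adjustable” means in practice: on-set look tweaks like these are commonly exchanged as ASC CDL values (slope, offset and power per channel, plus saturation). Below is a minimal Python sketch of the standard CDL math with invented values; it is an illustration of the general technique, not Emma’s actual pipeline.

```python
import numpy as np

# ASC CDL: out = (in * slope + offset) ** power, then a saturation blend.
# All grade values below are invented for illustration only.
def apply_cdl(rgb, slope, offset, power, saturation):
    rgb = np.clip(rgb * slope + offset, 0.0, None) ** power
    # Rec. 709 luma weights, as used by the CDL saturation operation
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    return luma[..., None] + saturation * (rgb - luma[..., None])

pixels = np.random.rand(4, 4, 3)            # stand-in for a float image
graded = apply_cdl(pixels,
                   slope=np.array([1.05, 1.00, 0.95]),
                   offset=np.array([0.01, 0.00, -0.01]),
                   power=np.array([0.95, 1.00, 1.00]),
                   saturation=0.9)
```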

The digital lab process on Emma was straightforward. Every day, DIT Robinson would send the capture drives from set along with a Blackmagic DaVinci Resolve project with CDLs applied. Mission’s Niall Todd and Neil Gray were tasked with creating synced H264s and DNxHD 115 files for Avid. The data was then backed up to dual LTOs and a G-Tech 10TB G-Drive.
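
As a rough illustration of what such a dailies pass can look like (the clip name and LUT file are hypothetical, and this is not Mission’s actual tooling), ffmpeg can bake a look into both Avid-ready DNxHD 115 media and a lightweight review file:

```python
import subprocess

SRC = "A001C003_200115.mov"   # hypothetical camera clip
LUT = "emma_day_ext.cube"     # hypothetical show LUT

# DNxHD 115 (1080p23.976, 8-bit 4:2:2) in an MXF wrapper for Avid
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-vf", f"lut3d={LUT},scale=1920:1080,fps=24000/1001",
    "-c:v", "dnxhd", "-b:v", "115M", "-pix_fmt", "yuv422p",
    "-c:a", "pcm_s16le", "A001C003_avid.mxf",
], check=True)

# Small H.264 for review and sync checks
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-vf", f"lut3d={LUT},scale=1280:720",
    "-c:v", "libx264", "-crf", "23", "-c:a", "aac",
    "A001C003_review.mp4",
], check=True)
```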

Mission’s on-set to near-set process made it simple for Blauvelt’s vision to be conveyed to everyone. “Mark Purvis has created a collective and creative environment with Mission that I had not experienced before.”

The DI was likewise straightforward, with Goldcrest’s Pizzey using DaVinci Resolve to complete the final grade. Emma, which was edited by Nick Emerson, is now streaming.


Matt Shaw on cutting Conan Without Borders: Ghana and Greenland

By Randi Altman

While Conan O’Brien was airing his traditional one-hour late night talk show on TBS, he and his crew would often go on the road to places like Cuba, South Korea and Armenia for Conan Without Borders — a series of one-hour specials. He would focus on regular folks, not celebrities, and would embed himself into the local culture… and there was often some very mediocre dancing, courtesy of Conan. The shows were funny, entertaining and educational, and he enjoyed doing them.

Conan and Matt on the road.

In 2019, Conan and his crew, Team Coco, switched the nightly show from one hour to a new 30-minute format. The format change allowed them to produce three to four hour-long Conan Without Borders specials per year. Two of the places the show visited last year were Ghana and Greenland. As you might imagine, they shoot a lot of footage, which all must be logged and edited, often while on the road.

Matt Shaw is one of the editors on Conan, and he went on the road with the show when it traveled to Greenland. Shaw’s past credits include Deon Cole’s Black Box and The Pete Holmes Show (both from Conan O’Brien’s Conaco production company) and The Late Late Show with James Corden (including Carpool Karaoke). One of his first gigs for Team Coco was editing Conan Without Borders: Made in Mexico. That led to a full-time editing gig on Conan on TBS and many fun adventures.

We reached out to Shaw to find out more about editing these specials and what challenges he faced along the way.

You recently edited Conan Without Borders — the Greenland and Ghana specials. Can you talk about preparing for a job like that? What kind of turnaround did you have?
Our Ghana special was shot back in June 2019, with the original plan to air in August, but it was pushed back to November 7 because of how fast the Greenland show came up.

In terms of prep for a show like Ghana, we mainly just know the shooting specs and will handle the rest once the crew actually returns. For the most part, that’s the norm. Ideally, we’ll have a working dark week (no nightly Conan show), and the three editors — me, Rob Ashe and Chris Heller — will take the time to offload, sync and begin our first cuts of everything. We’ll have been in contact with the writers on the shoot to get an idea of what pieces were shot and their general notes from the day.

With Greenland, we had to mobilize and adjust everything to accommodate a drastically different shoot/delivery schedule. The Friday before leaving, while we were prepping the Ghana show to screen for an audience, we heard there might be something coming up that would push Ghana back. On Monday, we heard the plan was to go to Greenland on Wednesday evening, after the nightly show, and turn around Greenland in place of Ghana’s audience screening. We had to adjust the nightly show schedule to still have a new episode ready for Thursday while we were in Greenland.

How did you end up on the Greenland trip?
Knowing we’d only have six days from returning from Greenland to finishing the show for broadcast, our lead editor, Rob Ashe, suggested we send an editor to work on location. We originally looked into sending footage via Aspera from a local TV studio in Nuuk, Greenland, but we just wouldn’t have been able to turn it around fast enough. We decided about two days before the trip began that I’d go and do what I could to offload, back up, sync and do first cuts on everything.

How much footage did you have per episode, and what did they shoot on?
Ghana had close to 17 hours of material shot over five days on Sony Z450s at 4K XAVC, 29.97. Greenland was closer to 12 hours shot over three days on Panasonic HPX 250s, P2 media recording at 1080 60i.

We also used iPhone/iPad/GoPro footage picked up by the rest of the crew as needed for both shows. I also had a DJI Osmo pocket camera to play with when I had a chance, and we used some of that footage during the montage of icebergs.

So you were editing segments while they were still shooting?
In Greenland, I was cutting daily in the hotel. Midday, I’d get a drop of cards, offload, sync/group and the first cuts on everything. We had a simple offline edit workflow set up where I’d upload my cuts to Frame.io and email my project files to the team — Rob and Chris — in Burbank. They would then download and sync the Frame.io file to a top video layer in the timeline and continue cutting down, with any additional notes from the writers.

Generally, I’d have everything from Day One uploaded by the start of Day Two, and so on. It seemed to work out pretty well to set us up for success when we returned. I was also getting requests to cut a few highlights from our remotes to put on Team Coco’s Instagram account.

On our return day, we flew to Ilulissat for an iceberg expedition. We had about two hours on the ground before having to return to the airport and fly to Kangerlussuaq, where our chartered plane was waiting to take us back to California. On the flight back, I worked for another four hours or so to sort through the remaining segments and prep everything so we could hit the ground running the following morning. During the flight home, we screened some drone footage from the iceberg trip for Conan, and it really got everyone excited.

What are the challenges of working on the road and with such tight turnarounds?
The night we left for Greenland was preceded by a nightly show in Burbank. After the show ended, we hopped on a plane to fly eight hours to Kangerlussuaq for customs, then another to Nuuk. The minute we landed, we were filming for about three hours before checking into the hotel. I grabbed the morning’s camera cards, went to my room and began cutting. By the time I went to bed, I had cuts done of almost everything from the first day. I’m a terrible sleeper on planes, so the marathon start was pretty insane.

Beyond the lack of sleep, our offload speeds were slower because we were using different cameras than usual, for the sake of traveling lighter, since the plane we flew in had specific weight restrictions. We actually had to hire local crew for audio and B and C camera because there wasn’t enough room for everyone on the plane to start.

In general, I think the overall trip went as smoothly as it could have. It would be interesting to see how it would play out for a longer shoot schedule.

What editing system did you use? What was your setup like? What kind of storage were you using?
On the road I had my MacBook Pro (2018 model), and we rented an identical backup machine in case mine died. For storage, we had four 1TB G-Tech USB-C drives and a 4TB G-RAID to back everything up. I had a USB-3.0 P2 card reader as well and multiple backup readers. A Bluetooth mouse and keyboard rounded out the kit, so I could travel with everything in a backpack.

We had to charter a plane in order to fly directly to Greenland. With such a tight turnaround between filming and delivering the actual show, this was the only way to actually make the special happen. Commercial flights fly only a few days per week out of neighboring countries, and once you’re in Greenland, you either have to fly or take a boat from city to city.

Matt Shaw editing on plane.

On the plane, there was a conference table in the back, so I set up there with one laptop and the G-RAID to continue working. The biggest trouble on the plane was making sure everything stayed secure on the table while taking off and making turns. There were a few close calls when everything started to slide away, and I had to reach to make sure nothing was disconnected.

How involved in the editing is Conan? What kind of feedback did you get?
In general, if Conan has specific notes, the writers will hear them during or right after a shoot is finished. Or we’ll test-screen something after a nightly show taping and indirectly get notes from the writers then.

There will be special circumstances, like our cold opens for Comic-Con, when Conan will come to edit and screen a close-to-final cut. And there just might be a run of jokes that isn’t as strong, but he lets us work with the writers to make what we all think is the best version by committee.

Can you point to some of the more challenging segments from Greenland or Ghana?
The entire show is difficult with the delivery-time constraints while handling the nightly show. We’ll sometimes be editing the versions for screening up to 10 minutes before they have to screen for an audience, as well as doing all the finishing (audio mix, color as needed, subtitling and deliverables).

For any given special, we’re each cutting our respective remotes during the day while working on any new comedy pieces for that day’s show, then one or two of us will split the work on the nightly show, while the other keeps working with the travel show writers. In the middle of it all, we’ll cut together a mini tease or an unfinished piece to play into that night’s show to promote the specials, so the main challenge is juggling 30 things at a time.

For me, I got to edit this 1980s-style action movie trailer based on an awesome poster Conan had painted by a Ghanaian artist. We had puppets built, a lot of greenscreen and a body double to composite Conan’s head onto for fight scenes. Story-wise, we didn’t have much of a structure to start, but we had to piece something together in the edit and hope it did the ridiculous poster justice.

The Thursday before our show screened for an audience was the first time Mike Sweeney (head writer for the travel shows) had a chance to look at any greenscreen footage, and he knew we were test-screening it the following Monday or Tuesday. It started to take shape when one of our graphics/VFX artists, Angus Lyne, sent back some composites. In the end, it came together great and killed with the audience and our staff, who had already seen anything and everything.

Our other pieces seem to have a linear story, and we try to build the best highlights from any given remote. With something like this trailer, we have to switch our thought process to really build something from scratch. In the case of Greenland and Ghana, I think we put together two really great shows.

How challenging is editing comedy versus drama? Or editing these segments versus other parts of Conan’s world?
In a lot of the comedy we cut, the joke is king. There are always instances when we have blatant continuity errors, jump cuts, etc., but we don’t have to kill ourselves trying to make it work in the moment if it means hurting the joke. Our “man on the street” segments are great examples of this. Obviously, we want something to be as polished and coherent as possible, but there are cases when it just isn’t best, in our opinion, and that’s okay.

That being said, when we do our spoofs of whatever ad or try to recreate a specific style, we’re going to do everything to make that happen. We recently shot a bit with Nicholas Braun from Succession where he’s trying to get a job from Conan during his hiatus. It was a mix of improv and scripted material, and we had to match the look of that show. It turned out funny and very much in the vein of Succession.

What about for the Ghana show?
For Ghana, we had a few segments that were extremely serious and emotional. For example, Conan and Sam Richardson visited Osu Castle, a major slave trade port. This segment demands care and needs to breathe so the weight of it can really be expressed, versus earlier in the show, when Conan was buying a Ghana shirt from a street vendor, and we hard-cut to him wearing a shirt 10 sizes too small.

And Greenland?
Greenland is a place really affected by climate change. My personal favorite segment I’ve cut on these travel specials is one about the impact the melting ice caps could have on the world. Then there is a montage of the icebergs we saw, followed by Conan attempting to stake a “Sold” sign on an iceberg, signifying he had bought property in Greenland for the US. Originally, the montage had a few jokes in it, but we quickly realized it’s so beautiful we shouldn’t cheapen it. We just let it be beautiful.

Comedy or drama, it’s really about being aware of what you have in front of you and what the end goal is.

What haven’t I asked that’s important?
For me, it’s important to acknowledge how talented our post team is to be able to work simultaneously on a giant special while delivering four shows a week. Being on location for Greenland also gave me a taste of the chaos the whole production team and Team Coco goes through, and I think everyone should be proud of what we’re capable of producing.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 


Review: GoPro’s Hero 8 and GoPro Max 360 cameras

By Brady Betzel

Every year, GoPro outdoes itself and gives users more reasons to upgrade their old GoPro action camera or invest in the GoPro ecosystem for the first time. Late last year, GoPro introduced the Hero 8 (reviewed here in its Black Edition) and the GoPro Max — a.k.a. the reimagined Fusion 360 camera.

If you aren’t sure whether you want to buy one or both of the new GoPro cameras, take a look at the GoPro TradeUp program, where you can send in any camera with an original retail value of $99.99 or more and receive $100 off the Hero 8 or $50 off the Hero 7. That should at least take the sting off the $399.99-and-up price.

GoPro Hero 8 Black Edition
Up first, I’ll take a look at the GoPro Hero 8 Black Edition. If you own the Hero 7 Black Edition, you will be familiar with many of the Hero 8’s features, but there are some major improvements that will make you think hard about upgrading. The biggest update, in my opinion, is the increase in the max bit rate to 100 Mb/s. With that increase comes better-quality video (more data = more information = more details). GoPro also lets you compress videos with the HEVC (H.265) codec, a much more efficient successor to the aging H.264. You can get into the weeds on H.264 vs. H.265 over at Frame.io’s blog, where there’s some really great (and nerdy) info.

Anyway, with the bit rate increased, any video from the GoPro Hero 8 Black Edition has the potential to look better than the Hero 7’s. I say “potential” because if the bit rate doesn’t need to be that high, the GoPro won’t force it — it’s a variable bit rate codec that only uses data when it needs to.
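
To put those numbers in perspective, here is the worst-case storage arithmetic (my math, not GoPro’s published figures, and assuming the commonly cited 78 Mb/s ceiling for the Hero 7 Black):

```python
# Worst-case storage at each camera's bit-rate cap. Variable bit rate
# means real-world files usually come in under these numbers.
for name, mbps in [("Hero 8 Black", 100), ("Hero 7 Black", 78)]:
    mb_per_min = mbps / 8 * 60            # megabits/s -> megabytes per minute
    gb_per_hour = mb_per_min * 60 / 1000  # -> gigabytes per hour
    print(f"{name}: {mb_per_min:.0f} MB/min, ~{gb_per_hour:.1f} GB/hour")

# Hero 8 Black: 750 MB/min, ~45.0 GB/hour
# Hero 7 Black: 585 MB/min, ~35.1 GB/hour
```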

Beyond the increased bit rate, there are some other great features. The menu system has been improved even further over the Hero 7’s, making it easier to start shooting without wading through advanced settings. There are presets to set up your GoPro quickly and easily, depending on what you are filming: Standard (1080, 60, Wide), Activity (2.7K, 60, SuperView) and Cinematic (4K, 30, Linear). Live streaming has been upped from 720p to 1080p, but we are still stuck streaming from the GoPro through your phone natively to YouTube and Facebook. You can stream to other sites like Twitch by setting up an RTMP URL — but Instagram is still off the list. This is unfortunate because I think live Instagram streams/stories would be its biggest feature.

Hypersmooth 2.0 is an improved version of Hypersmooth 1.0 (still on the Hero 7). You can stabilize even more by enabling the “Boost” function, which adds more stabilization at the cost of a larger crop on your video. Hypersmooth is incredible, gimbal-replacing stabilization that GoPro has engineered with the help of automatic horizon leveling, GPS and other telemetry data.

Coming soon are external attachments called “Mods”: a Media Mod adding an HDMI output, a 3.5mm microphone jack and two cold shoe mounts; a Display Mod adding a flip-up screen so you can see yourself from the front (think vlogging); and a Light Mod, which adds an LED to the camera for lighting. The Light Mod ($49.99) and Display Mod ($79.99) require the Media Mod ($79.99) to be purchased as well. Keep an eye on the GoPro store for info on pre-orders and purchases.

GoPro cameras have always taken great pictures and video… when the lighting is perfect. Even with the Hero 8, low light is GoPro’s Achilles’ heel — the video starts to become muddy and hard to view. The Hero 8 shares the same camera sensor as the Hero 7 and even the same GP1 processor, but the Hero 8 squeezes more tech out of them with features like TimeWarp 2.0 and Hypersmooth 2.0. So you will get similar images and videos out of both cameras at a base level, but with the added technology in the Hero 8 Black Edition, if you have the extra $100, you should get the latest version. Overall, the Hero 8 Black Edition is physically thinner (albeit a little taller), the battery compartment is no longer on the bottom, and the Hero 8 now has a built-in mount! No more need for extra cages if you don’t need them!

GoPro Max
The last time I used a GoPro 360 camera, it was the Fusion: a hard-to-use, bulky 360 camera that required two separate memory cards. It was not my favorite camera to test; in fact, it felt like it needed another round of beta testing. Fast forward to today and the recently released GoPro Max, which is what the Fusion should have been. While I think the Max is a marked improvement over the Fusion, it is not necessarily for everyone. If you specifically need to make 360 content, or you want a new GoPro but also want to keep your options open for filming in 360 degrees, then the Max is for you.

The GoPro Max costs $499.99, so it’s not much more than a Hero 8 Black, and it can shoot in 360 degrees. One of its best features is the ability to shoot like a traditional GoPro (i.e., with one lens). Unfortunately, you are limited to 1080p/1440p at 60/30/24fps — and, maybe worst of all, there’s no slow-mo. You don’t always need to shoot in 360 with the Max, but you don’t quite get the full lineup of frame rates offered by the traditional Hero 8 Black Edition.

In addition, the bit rate of the Max maxes out (pun intended) at 78Mb/s, not the 100Mb/s of the Hero 8. But if you want to shoot in 360, the GoPro Max is pretty simple to get running. It will even capture 360 audio with its six-mic array and shoot 24fps or 30fps at 5.6K resolution for stitched spherical video; best of all, editing the spherical video is much easier in the latest GoPro phone app update. Unfortunately, editing on a Windows computer is not as easy as on the phone.

I am using a computer with Windows 10, an Intel i7 6-core CPU, 32GB of memory and an Nvidia RTX 2080 GPU — so not a slow laptop. You can find all of the Windows software and plugins for 360 video on GoPro’s website. It seems like GoPro is trying to bury its desktop software, because these Windows apps weren’t easy to find. Nonetheless, you will want to download the GoPro Max Exporter and the GoPro FX Reframe plugin for Adobe apps if you plan on editing your footage. Before I go any further: if you want a video tutorial, I suggest Abe Kislevitz’s tutorial on the GoPro Max workflow in Adobe Premiere; he is really good at explaining it. Abe works at GoPro in media production and always has great videos, so check out his website while you’re at it.

Moving on: in macOS, iOS and Android, you can work with the GoPro Max’s native 360 file format via GoPro’s own apps. The 360 format uses a Google-created mapping known as EAC, or Equi-Angular Cubemap. Get all the nerdy bits here.

Unfortunately, in Windows you need to convert the 360 videos into the “old school” equirectangular format we’re used to. To do this, you run your 360 videos through the GoPro Max Exporter, which can use the power of both the CPU and GPU to create new files. This extra step stinks; I can’t sugarcoat it. But with better compression and mapping structures come growing pains… I guess. GoPro is supposedly building a Windows-based reframer and exporter like the macOS version, but after trawling some GoPro forums, it seems like they have a lot of work to do and/or have concluded that most users are going to use their phones or a macOS-based system.

Either way, the process to reframe your 360 video and export at 1920x1080 or 1080x1920 goes: convert your 360 files to a more usable Cineform, H.265 or H.264 QuickTime > import into Adobe Premiere > create a sequence matching your largest output size (i.e., 1920x1080) > apply the GoPro FX Reframe plugin and set your output size > keyframe rotation, edit, reframe, etc. > export.

It’s not awful as long as you know what you are getting yourself into. But take that with a grain of salt; I am a professional video editor who works in video 10-12 hours a day, at least five days a week, so I am very comfortable with this kind of stuff. It would take some more work if I weren’t an editor by trade. If I were to try to explain this to my wife or kids, they might roll their eyes at me and just ask me to do it… understandably. If you want to upload to a site like YouTube without editing, you will still need to convert the 360 videos to something more manageable for YouTube, like H.264 or H.265.
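
If you’d rather script a reframe than keyframe it in Premiere, ffmpeg’s v360 filter can do the equirectangular-to-flat step from the command line. Here is a sketch with made-up file names and angles; note that it only gives you a single fixed camera direction, not animated pans:

```python
import subprocess

# Reframe an equirectangular clip (already converted by the GoPro Max
# Exporter) to a flat 1920x1080 view. yaw/pitch choose the viewing
# direction, d_fov the apparent lens width; all values are illustrative.
subprocess.run([
    "ffmpeg", "-i", "small_world_equirect.mov",
    "-vf", "v360=input=equirect:output=flat:yaw=35:pitch=-10:d_fov=100:w=1920:h=1080",
    "-c:v", "libx264", "-crf", "20", "-c:a", "copy",
    "small_world_reframed.mp4",
], check=True)
```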

When working in Premiere with the GoPro FX Reframe plugin, I actually found the effect controls overlay intuitive for panning and tilting. The hard part when keyframing pans and tilts in Premiere is adjusting the bezier curve keyframe graph. It’s not exactly intuitive, which makes it a challenge to get smooth starts and ends on your moves, even with Easy Ease In/Out set. With a little work you can get it done, but I really hope GoPro gets its Windows-based GoPro Max 360 editor up and running so I don’t have to use Adobe.

After I keyframed a couple of pans and tilts on footage I took while riding It’s a Small World at Disneyland with my three sons, I exported a quick H.264. About 50 seconds of footage took four minutes to export — not exactly speedy. But playback of my footage in a 1920x1080 timeline in Premiere at ¼ resolution with the GoPro plugin applied was actually pretty smooth.

You can check out the video I edited with the GoPro FX Reframe plugin in Premiere on my YouTube channel: https://youtu.be/frhJ3T8fzmE. I then wanted to take my video of the Tea Cup ride at Disneyland straight from the camera and upload it to YouTube. Unfortunately, you still need to convert the video in the GoPro Max Exporter to an H.264 or the like, but this time it only took 1:45 to export a two-minute 4K video. You can check out the 4K 360 Tea Cup ride on YouTube.

Final Thoughts
Are the GoPro Hero 8 Black Edition and GoPro Max 360 camera worth an upgrade or a new purchase? I definitely think the Hero 8 is. GoPro always makes great cameras: they’re waterproof down to 33 feet, take great pictures when there is enough available light, create amazing hyperlapses and timewarps in-camera with little work, and let you fine-tune your shutter speed or shoot flat-color footage with the ProTune settings — and all of it fits in your pocket.

I used to take a DSLR to Disneyland, but with the Hypersmooth 2.0 and the improved HDR images that come out of the Hero 8, this is the only camera you will need.

The GoPro Max is a different story. I like what the GoPro Max does technically; it films 360 video easily and can double as a 1440p Hero camera. But I found myself fumbling a little with the Max because of its larger footprint when compared to the Hero 8 (it’s definitely smaller than the old Fusion 360 camera though). And when editing on your phone or doing a straight upload, the videos are relatively easy to process, as long as you don’t mind the export time. Unfortunately, using Premiere to edit these videos is a little rough; it’s better than with the Fusion but it’s not simple.

Maybe I’m just getting older and don’t want to give in to VR/360 video yet, or maybe it’s just not as smooth a process as I had hoped for. But if you want to work in 360 and don’t mind the lack of slow motion, I wouldn’t tell you not to buy the GoPro Max. It’s a great camera with durable lenses and a durable exterior case.

Check out the videos on YouTube and see what amazing people like Abe Kislevitz are doing; they may just show you what you need to see. And check out www.gopro.com for more info including when those new Hero 8 Mods will be available.




Roger Deakins and 1917 win Theatrical prize at ASC Awards

The Theatrical Award for best cinematography in a motion picture went to Roger Deakins, ASC, BSC, for 1917 at the 34th American Society of Cinematographers Outstanding Achievement Awards.

Jarin Blaschke took the Spotlight Award for The Lighthouse and Fejmi Daut and Samir Ljuma won the inaugural Documentary Award for Honeyland. In the TV categories, winners included Colin Watkinson, ASC, BSC, for The Handmaid’s Tale; John Conroy, ISC, for The Terror: Infamy; and C. Kim Miles, CSC, MySC, for Project Blue Book.

TCM’s Ben Mankiewicz hosted the awards gala, which was held at the Ray Dolby Ballroom at Hollywood & Highland.

Below is the complete list of winners and nominees:

Theatrical Release Category – presented by Diane Lane

Roger Deakins, ASC, BSC – “1917” – WINNER

Phedon Papamichael, ASC, GSC – “Ford v Ferrari”

Rodrigo Prieto, ASC, AMC – “The Irishman”

Robert Richardson, ASC – “Once Upon a Time in Hollywood”

Lawrence Sher, ASC – “Joker”


Spotlight Award Category – presented by Bartosz Bielenia


Jarin Blaschke – “The Lighthouse” – WINNER

Natasha Braier, ASC, ADF – “Honey Boy”

Jasper Wolf, NSC – “Monos”


Documentary Category – presented by Todd Phillips

Fejmi Daut and Samir Ljuma – “Honeyland” – WINNER

Nicholas de Pencier – “Anthropocene: The Human Epoch”

Evangelia Kranioti – “Obscuro Barroco”


Episode of a Series for Non-Commercial Television – presented by Emily Deschanel

David Luther – “Das Boot,” Gegen die Zeit (episode 6)

M. David Mullen, ASC – “The Marvelous Mrs. Maisel,” Simone


Chris Seager, BSC – “Carnival Row,” Grieve No More

Brendan Steacy, CSC – “Titans,” Dick Grayson

Colin Watkinson, ASC, BSC – “The Handmaid’s Tale,” Night – WINNER


Episode of a Series for Commercial Television – presented by Jane Lynch

Dana Gonzales, ASC – “Legion,” Chapter 20

C. Kim Miles, CSC, MySC – “Project Blue Book,” The Flatwoods Monster – WINNER

Polly Morgan, ASC, BSC – “Legion,” Chapter 23

Peter Robertson, ISC – “Vikings,” Hell

David Stockton, ASC – “Gotham,” Ace Chemicals


Motion Picture, Miniseries, or Pilot Made for Television – presented by Michael McKean


John Conroy, ISC – “The Terror: Infamy,” A Sparrow in a Swallow’s Nest – WINNER

P.J. Dillon, ISC – “The Rook,” Chapter 1

Chris Manley, ASC – “Doom Patrol,” pilot

Martin Ruhe, ASC – “Catch-22,” Episode 5

Craig Wrobleski, CSC – “The Twilight Zone,” Blurryman

With the exception of Deakins, all of the awards were handed out to first-time winners. Deakins collected the top honor last year for “Blade Runner 2049” and previously for “Skyfall,” “The Man Who Wasn’t There,” and “The Shawshank Redemption.”

Honorary awards were also presented, including:

Frederick Elmes (left) talking about his award.

The ASC Board of Governors Award was given to Werner Herzog by Paul Holdengräber, interviewer/curator/writer and executive director of the Onassis Foundation. The award recognizes Herzog’s significant and indelible contributions to cinema. It is the only ASC Award not given to a cinematographer and is reserved for filmmakers who have been champions of the visual art form.

  • The ASC Lifetime Achievement Award was presented to Frederick Elmes, ASC, by writer-director Lisa Cholodenko. The duo collaborated on the Emmy-winning “Olive Kitteridge.”
  • The ASC Career Achievement in Television Award was handed out to Donald A. Morgan, ASC, by actor Tim Allen. The two work together on the award-winning “Last Man Standing,” and previously collaborated on “Home Improvement.”
  • The International Award was bestowed upon Bruno Delbonnel, ASC, AFC, by writer-director Joel Coen. The duo has joined forces on several films, including the Oscar-nominated “Inside Llewyn Davis” and “The Ballad of Buster Scruggs.”
  • This year’s President’s Award went to Don McCuaig, ASC. It was given to him by longtime friend and actor-stuntman Mickey Gilbert.
  • The ASC Bud Stone Award of Distinction was given to Kim Snyder, president and CEO of Panavision. This award is presented to an ASC Associate Member who has demonstrated extraordinary service to the society and/or has made a significant contribution to the motion-picture industry.

 Main Image: Roger Deakins and his wife James Deakins


Review: FilmConvert Nitrate for film stock emulation

By Brady Betzel

If you’ve been around any sort of color grading forums or conferences, you’ve definitely heard some version of this: Film is so much better than digital. While I don’t completely disagree with the sentiment, let’s be real. We are in a digital age, and the efficiency and cost associated with digital recording is, in most cases, far superior to film.

Personally, I love the way film looks; it has an essence that is very difficult to duplicate — from the highlight roll-offs to the organic grain — but it is very costly. That look is hard to imitate digitally, which is why so many companies try and often fail.

Sony A7iii footage

One company that has had grassroots success with digital film stock emulation is FilmConvert. The original plugin, known as FilmConvert Pro, works with Adobe’s Premiere and After Effects, Avid Media Composer and as an OFX plugin for apps like Blackmagic’s DaVinci Resolve.

Recently, FilmConvert expanded its lineup with the introduction of Nitrate, a film emulation plugin that can take Log-based video and transform it into fully color-corrected media with natural grain similar to that of commonly loved film stocks. Currently, Nitrate works with Premiere and After Effects, with an OFX version for Resolve. A plugin for FCPX is coming in March.

The original FilmConvert Pro plugin works great, but it adjusts your image through an sRGB pipeline. That means FilmConvert Pro applies any color effects after your “base” grade is locked in while living in an sRGB world. While you can download camera-specific “packs” that apply the film emulation — custom-made for your sensor and color space — you are still locked into an sRGB pipeline with little wiggle room. That can mean blowing out your highlights and muddying your shadows, with little ability to recover any detail.

SonyA7iii footage

I imagine FilmConvert Pro was introduced at a time when a lot of users shot with cameras like the Canon 5D or other sRGB cameras that weren’t shooting in a Log color space. Think of using a LUT and trying to adjust the highlights and shadows after the LUT; typically, you will have a hard time getting any detail back, losing dynamic range even if your footage was shot Log. But if you color before a LUT (think Log footage), you can typically recover a lot of information, as long as your shot was exposed properly. That blown-out sky might be recoverable if it was shot in a Log color space. This is what FilmConvert is solving with its latest offering, Nitrate.
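
A toy numeric example makes the difference concrete. Two distinct highlight values survive a two-stop pull-down if you grade before display clipping; grade after the clip and they have already been crushed to the same value (invented numbers, not any camera’s actual transfer curve):

```python
import numpy as np

# Two distinct linear highlight values, both above display white (1.0).
scene = np.array([3.0, 4.0])

# Grade BEFORE display clipping: pull exposure down 2 stops first.
before = np.clip(scene * 0.25, 0, 1)                # [0.75, 1.0] - detail survives

# Grade AFTER an sRGB-style clip: both values were already flattened to 1.0,
# so the same 2-stop pull just produces a uniformly darker gray.
after = np.clip(np.clip(scene, 0, 1) * 0.25, 0, 1)  # [0.25, 0.25] - detail gone

print(before, after)
```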

How It Works
FilmConvert’s Nitrate works in a Cineon-Log processing pipeline for its emulation, as well as a full Log image processing pipeline. This means your highlights and shadows are not being heavily compressed into an sRGB color space, which allows you to fine-tune your shadows and highlights without losing as much detail. Simply, it means that the plugin will work more naturally with your footage.

Among other updates, FilmConvert has overhauled its GUI to be more natural and fluid. The color wheels have been redesigned, a new color tint slider has been added to quickly remove any green or magenta color cast, a new Color Curve control has been added, and there is now a Grain Response curve.

Grain Response

The Grain Response curve takes adding grain to your footage up a notch. Not only can you select between 8mm and 35mm grain sizes (with many more in between), but you can also adjust how that grain is applied from shadows to highlights. If you want your highlights to carry more grain, just pull the Grain Response curve up at the highlight end. In the same window, you can adjust the grain’s size, softness, strength and saturation via sliders.
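
Conceptually, a grain-response curve is just grain strength expressed as a function of luminance. Here is a minimal numpy sketch of that idea; FilmConvert’s actual grain comes from 6K film scans rather than white noise, so this shows only the shape of the control, not their implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

def add_grain(luma, curve_x, curve_y, amount=0.05):
    """Add luminance-dependent grain. (curve_x, curve_y) pairs describe
    the response curve: grain strength at given luma levels."""
    strength = np.interp(luma, curve_x, curve_y)  # evaluate the curve per pixel
    noise = rng.normal(0.0, 1.0, luma.shape)      # stand-in for scanned grain
    return np.clip(luma + amount * strength * noise, 0.0, 1.0)

frame = np.linspace(0.0, 1.0, 256).reshape(16, 16)  # a gradient test "frame"
# Bias grain into the highlights while keeping the shadows relatively clean:
out = add_grain(frame, curve_x=[0.0, 0.5, 1.0], curve_y=[0.2, 0.5, 1.0])
```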

Of the 19 film emulation options to choose from, there are many unique and great-looking presets, from “KD 5207 Vis3” to “Plrd 600,” covering multiple brands and film stocks. For instance, the “KD 5207 Vis3” emulates Kodak’s Vision3 250D/5207 stock, which Kodak’s website describes in more detail:

“Vision3 250D Film offers outstanding performance in the extremes of exposure — including increased highlight latitude, so you can move faster on the set and pull more detail out of the highlights in post. You’ll also see reduced grain in shadows, so you can push the boundaries of underexposure and still get outstanding results.”

One of my favorite emulations in Nitrate — “Fj Velvia 100” or Fujichrome Velvia 100 — is described on FilmConvert’s website:

“FJ Velvia 100 is based on the Fujichrome Velvia 100 photographic film stock. Velvia is a daylight-balanced color reversal film that provides brighter ultra-high-saturation color reproduction. The Velvia is especially suited to scenery and nature photography as well as other subjects that require precisely modulated vibrant color reproduction.”

Accurate Grain

FilmConvert’s website offers a full list of the 19 film stocks, as well as examples and detailed descriptions of each film stock.

Working With FilmConvert Nitrate
I used Nitrate strictly in Premiere Pro because the OFX version (specifically for Resolve) wasn’t available at the time of this review.

Nitrate works pretty well inside of Premiere and, surprisingly, plays back fluidly — probably thanks to its GPU acceleration. Even with Sony a7 III UHD footage, Premiere was able to keep up with Lumetri Color layered underneath the FilmConvert Nitrate plugin. To be transparent, I tested Nitrate on a laptop with an Intel i7 CPU and an Nvidia RTX 2080 GPU, so that definitely helps.

At first, I struggled to see where I would fit FilmConvert’s Nitrate plugin into my normal workflow so I could color correct my own footage and add a grade later. However, when I started cycling through the different film emulations, I quickly saw that they were adding a lot of life to the images and videos. Whether it was the grain that comes from the updated 6K grain scans in Nitrate or the ability to identify which camera and color profile you used when filming via the downloadable camera packs, FilmConvert’s Nitrate takes well-colored footage and elevates it to finished film levels.

It’s pretty remarkable; I came in thinking FilmConvert was essentially a preset LUT plugin and wasn’t ready for it to be great. To my surprise, it was great and it will add the extra edge of professional feeling to your footage quickly and easily.

Test 1
In my first test, I threw some clips I had shot on a Sony a7 III in UHD (SLog3/SGamut3) into a timeline, applied the FilmConvert Nitrate plugin and realized I needed to download the Sony camera pack. This pack was about 1GB; others, like the Canon 5D Mark II pack, came in at just over 300MB. Not the end of the world, but if you have multiple cameras, you are going to need quite a few packs, and the download sizes add up.

Canon 5D

I tried using just the Nitrate plugin to do color correction and film emulation from start to finish, but I found the tools a little cumbersome and not really my style. I am not the biggest fan of the Lumetri color correction tools, but I used them to get a base grade and applied Nitrate over that. I tend to keep looks on their own layer, so coloring under Nitrate felt more natural to me.

A quick way to cycle through a bunch of looks is to apply Nitrate to an adjustment layer and hit the up or down arrows. As I was flicking through the different looks, I noticed that FilmConvert does a great job processing the film emulations for the specified camera. All of the emulations looked good with or without a color balance done ahead of time.

It’s like adding a LUT and then a grade all in one spot. I was impressed by how quickly this worked and how good they all looked. When I was done, I rendered my one-minute sequence out of Adobe Media Encoder, which took 45 seconds to encode a ProRes HQ and 57 seconds for an H.264 at 10Mb/s. For reference, the uncolored version of this sequence took 1:17 for the ProRes HQ and :56 for the H.264 at 10Mb/s. Interesting, because the Nvidia RTX 2080 GPU definitely kicked in more when the FilmConvert Nitrate effect was added. That’s a definite plus.

Test 2
I also shot some clips using the Blackmagic Pocket Cinema Camera (BMPCC) and the Canon 5D Mark II. With the BMPCC, I recorded CinemaDNG files in the Film color space, which is essentially Log. With the 5D, the videos were recorded as H.264 QuickTime movie files (unless you shoot with the Magic Lantern hack, which allows you to record in a raw format). I brought in the BMPCC CinemaDNG files via the Media Browser, imported the 5D MOVs and applied the FilmConvert Nitrate plugin to the clips. Keep in mind, you will need to download and install those camera packs if you haven’t already.

Pocket Cinema Camera

For the BMPCC clips, I identified the camera and model as appropriate and chose “Film” under profile. It seemed to turn my CinemaDNG files a bit too orange, which could have been my white balance settings and/or the CinemaDNG processing done by Premiere. I could swing the orange hue out using the temperature control, but it seemed odd to have to knock it down to -40 or -50 for each clip. Maybe it was a fluke; with some experimentation, I got it right.

With the Canon 5D Mark II footage, I chose the corresponding manufacturer and model as well as the “Standard” profile. This worked as it should. But I also noticed some other options like Prolost, Marvel, VisionTech, Technicolor, Flaat and Vision Color — these are essentially color profiles people have made for the 5D Mark II. You can find them with a quick Google search.

Summing Up
In the end, FilmConvert’s Nitrate will elevate your footage. The grain looks smooth and natural, the film-emulation colors add a modern take on nostalgic color corrections (without looking too cheesy), and most cameras are supported via downloadable packs. If you don’t have a large budget for a color grading session, you should be throwing $139 at FilmConvert for its Nitrate plugin.

Nitrate in Premiere

When testing Nitrate on a few different cameras, I noticed that it even made color matching between cameras a little bit more consistent. Even if you have a budget for color grading, I would still suggest buying Nitrate; it can be a great starting block to send to your colorist for inspiration.

Check out FilmConvert’s website and definitely follow them on Instagram, where they are very active and show a lot of before-and-afters from their users — another great source of inspiration.

Main Image: Two-year-old Oliver Betzel shot with a Canon 5D with KD P400 Ptra emulsion applied.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


DP Chat: The Grudge’s Zachary Galler

By Randi Altman

Being on set is like coming home for New York-based cinematographer Zachary Galler, who as a child would tag along with his father while he directed television and film projects. The younger Galler started in the industry as a lighting technician and quickly worked his way up to shooting various features and series.

His first feature as a cinematographer, The Sleepwalker, premiered in 2014 and was later distributed by IFC. His second feature, She’s Lost Control, was awarded the C.I.C.A.E. Award at the Berlin International Film Festival later that year. His television credits include all eight episodes of Discovery’s scripted series Manhunt: Unabomber, Hulu’s The Act and USA’s Briarpatch (coming in February). He recently completed the Nicolas Pesce-directed thriller The Grudge, which stars John Cho and Betty Gilpin and is in theaters now.

Tell us about The Grudge. How early did you get involved in planning, and what direction were you given by the director about the look he wanted?
Nick and I worked together on a movie he directed called Piercing. That was our first collaboration, but we discovered that we had very similar ideas and working styles, and we formed a special relationship. Shortly after that project, we started talking about The Grudge, and about a year later we were shooting. We talked a lot about how this movie should feel, and how we could achieve something new and different from anything either of us had done before. We used a lot of look-books and movie references to communicate, so when it came time to shoot, we had the visual language down fluently, and that allowed us to keep each other consistent in execution.

How would you describe the look?
Nick really liked the bleach-bypass look from David Fincher’s Se7en, and I thought about a mix of that and the photographer Bill Henson. We also knew that we had to differentiate between the different storyline threads in the movie, so we had lots to figure out. One of the threads is darker and looks very yellow, while another is warmer and more classic. Another is slightly more desaturated and darker. We kept the same bleach-bypass look throughout but adjusted our color temperature, contrast and saturation accordingly. For a horror movie like this, I really wanted to control where the shadow detail turned into black, because some of our scare scenes relied on that, so we made sure to light accordingly and were able to fine-tune most of that in-camera.

How did you work with the director and colorist to achieve that look?
We worked with FotoKem colorist Kostas Theodosiou (who used Blackmagic Resolve). I was shooting a TV show during the main color pass, so I only got to check in to set looks and approve final color, but Nick and Kostas did a beautiful job. Kostas is a master of contrast control and very tastefully helped us ride that line between where there should be detail and where there shouldn’t. He was definitely an important part of the collaboration and helped make the movie better.

Where was it shot and how long was the shoot?
We shot the movie in 35 days in Winnipeg, Canada.

How did you go about choosing the right camera and lenses for this project and why these tools?
Nick decided early on that he wanted to shoot this film anamorphic. Panavision has been an important partner for me on most of my projects, and I knew that I loved their glass. We got a range of different lenses from Panavision Toronto to help us differentiate our storylines — we shot one on T Series, one on Primo anamorphics and one on G Series anamorphics. The Alexa Mini was the camera of choice because of its low light sensitivity and more natural feel.

Now more general questions…

How did you become interested in cinematography?
My father was a director, so I would visit him on set a lot when I was growing up. I didn’t know quite what I wanted to do when I was young but I knew that it was being on set. After dropping out of film school, I got a job working in a lighting rental warehouse and started driving trucks and delivering lights to sets in New York. I had always loved taking pictures as a kid and as I worked more and learned more, I realized that what I wanted to do was be a DP. I was very lucky in that I found some great collaborators early on in my career that both pushed me and allowed me to fail. This is the greatest job in the world.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
Artistically, I am inspired by painters, photographers and other DPs. There are so many people doing such amazing work right now. As far as technology is concerned, I’m a bit slow with adopting, as I need to hold something in my hands or see what it does before I adopt it. I have been very lucky to get to work with some great crews, and often a camera assistant, gaffer or key grip will bring something new to the table. I love that type of collaboration.

 

DP Zachary Galler (right) and director Nicolas Pesce on the set of Screen Gems’ The Grudge.

What new technology has changed the way you work?
For some reason, I was resistant to using LUTs for a long time. The Grudge was actually the first time I relied on something that wasn’t close to just plain Rec 709. I always figured that if I could get the 709 feeling good when I got into color I’d be in great shape. Now, I realize how helpful they can be, and that you can push much further. I also think that the Astera LED tubes are amazing. They allow you to do so much so fast and put light in places that would be very hard to do with other traditional lighting units.

What are some of your best practices or rules you try to follow on each job?
I try to be pretty laid back on set, and I can only do that because I’m very picky about who I hire in prep. I try and let people run their departments as much as possible and give them as much information as possible — it’s like cooking, where you try and get the best ingredients and don’t do much to them. I’ve been very lucky to have worked with some great crews over the years.

What’s your go-to gear — things you can’t live without?
I really try and keep an open mind about gear. I don’t feel romantically attached to anything, so that I can make the right choices for each project.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 


DP Chat: The Morning Show cinematographer Michael Grady

By Randi Altman

There have never been more options to stream content than right now. In addition to veterans Netflix, Amazon and Hulu, Disney+ and Apple TV+ have recently joined the fray.

In fact, not only did Apple TV+ just launch last month, its The Morning Show — about what goes on behind the scenes on, well, a morning show — has earned three Golden Globe nominations. The show stars Jennifer Aniston, Reese Witherspoon, Steve Carell and Billy Crudup.

L-R: Mimi Leder and Michael Grady on set.

Veteran cinematographer Michael Grady (On the Basis of Sex, The Leftovers, Ozark) was called on by frequent collaborator and executive producer Mimi Leder to shoot the show. We reached out to Grady to find out more about the show and how he works.

How early did you get involved on The Morning Show, and what direction were you given about the shoot?
I have worked with Mimi Leder often over the last 15 years. We have done multiple projects, so we have a great shorthand. We were finishing a movie called On the Basis of Sex when she first mentioned The Morning Show. We spoke about the project even before she was certain that she would take it on as the executive producer and lead director. Ultimately, it is awesome to work with Mimi because she really creates an amazingly collaborative and open work environment. She really allows each person to bring something specific to a project while always staying at the wheel and gently guiding everyone toward a common goal. It’s a very different process from many directors. She knows how to maximize the talents of those around her while staying in control.

Mimi directed episodes 1 and 2. They are essentially the pilot and the setup of the show. After her two episodes, a very seasoned and talented group of directors did episodes 3 through 9. On episode 4, I worked with Lynn Shelton, who is truly amazing at directing actors. She is one of the loveliest, most deeply collaborative directors I have ever worked with.

For episode 6, I had the pleasure of working with Tucker Gates. Tucker is a brilliant veteran director I immediately felt at ease with. I adored working with him. We really saw eye to eye on the common ground of filmmaking. He is experienced enough to really understand all aspects of filmmaking, and he respects each person on the crew and what they are trying to achieve. I thought he directed the most technically challenging episode created this season.

Next, I worked with another decorated director, Michelle MacLaren. Michelle has made some great TV in the past, and she did it again on her episode of The Morning Show. We had worked together on The Leftovers, and I loved the work we did together on that show and again on this one. She is a visionary director. Extremely driven.

How would you describe the look of show?
Well, Mimi and I looked at a lot of films as reference before we began. We settled on a clean, elegant, classical feel. The look of Michael Clayton, shot by Robert Elswit, was a key reference for the show. We used a motif of reflections: glass, mirrors, water, steel, hard surfaces and, of course, moving images on monitors. Both natural and man-made reflections of our cast were signposts for framing the look of the show. It seemed an appropriately perfect motif for telling the story of how America’s morning news programs function and the underbelly that we attempt to investigate.

How did you work with the producers and the colorist to achieve the intended look?
Siggy Ferstl at Company 3 is our colorist. Siggy and I have collaborated on well over 10 movies over the last 15 years. I think he is, without question, the best colorist in the movie business.

The show has a rich, reserved elegance about it. On a project like this, Siggy needs very little guidance or direction from me. He easily understands the narrative and what the look and feel should be on a show. We talk, and it evolves. He is amazing at identifying what you have created in the shoot and then expanding upon those concepts in an attempt to enhance and solidify what you were attempting in image acquisition.

Where was it shot, and how long was the shoot?
We shot in LA on the Sony lot, all over LA and then also in New York City and Las Vegas. We shot for five months. The shoot ran November through the middle of May. I began prep a month or so before.

How did you go about choosing the right camera and lenses for this project? Can you talk about camera tests?
We opted for the Panavision Millennium DXL2 with Primo 70 lenses — Apple’s specs required a 4K minimum. We tested a few systems and ended up choosing the Panavision for its awesome versatility. One camera does it all — Steadi, hand-held, studio, etc. At the time, there were few options for large format. There are many more now. We tested quite a few lenses with Jenn, Reese and Billy and ultimately chose the Primo 70s. We loved the clean but smooth look of these lenses. We shot them clean with zero filtration. I really liked the performance of the lenses for this show.

Can you talk about lighting?
Obviously, lighting is everything. Depth, contrast, color and the overall richness of the image are all achieved through lighting and art direction. Planning and previsualization are the key elements. We tried to take great care in each image, but this Panavision camera is groundbreaking in terms of shooting raw on the streets at night. The native 1600 ASA is insane. The images are so fast and clean, but our priority on this show was how the camera functioned within the realm of skin tones, texture, etc. We loved the natural look and feel of this camera and lens combo. The DXL really is an awesome addition to the large-format camera choices out there right now.

Any challenging scenes that you are particularly proud of or found most challenging?
We had an episode that took place in Las Vegas, and we shot for one long night. All mostly just grab and go. Very guerilla-style. Most of the sequences on Las Vegas Boulevard are all natural … no artificial light. Along the same lines as above, the camera performed on the streets of Las Vegas beautifully. The images were very clean, and the range and latitude of the DXL were amazing. I am shocked at how today’s cameras perform at low-light levels.

You’ve also shot feature films. Can you talk about differences in your process?
I don’t really see huge differences any more between features and high-end TV like this show. It used to be that features were given more time, and then more was expected. Well, you may still get more time, but everyone expects feature-quality, cinema-like images in dramatic television.

We have three huge international movie stars; I treat them no differently than if their images were to go up on the big screen. One-hour drama shows are the single most difficult and demanding projects to work on these days. They have all of the same demands as feature films, but they are created in a much tighter window. The expectations of quality seem very much the same today. Further, I think that the long grind of episodic TV also makes it tougher. It’s a long marathon, not a sprint.

Now for some more general questions …
How did you become interested in cinematography?
I was always an art student. I studied painting and drawing mostly. Later, I studied philosophy and business in college. I took private lessons from local artists growing up and also spent a lot of time playing sports (football). But I always loved movies. I found this to be the perfect job for me. It combines all of those elements. To me, telling stories with pictures requires the many skills that I learned from both sports and art.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
I used to be inspired by photos and art and other cinematographers’ work on films and shows. Of course, those things still inspire me, but people inspire me so much more now. The inspiration of reality seems far more interesting to me than abstractions today. The real emotions of people are what actually inspire movies to attempt to be art. Emotions are what movies are truly about. How can an emotion inspire an image, and how can an image inspire an emotion? That’s the deal.

Technology and I don’t really get along very well. To me, it’s always only about storytelling. Technology has never been all that interesting to me. I stay somewhat aware of the new toys, but I’m not very obsessed. My crew keeps me informed also.

That being said, how has technology changed the way you work?
The film-to-video transition was obviously the biggest technological change in my career. Today, I think that the insanely sensitive cameras and their high native ASA ratings are the biggest technological advances. These fast cameras allow us to work in such low-light levels that it just seems a lot easier than it was 20 years ago.

The incredible advances in LED lights have also really altered the work process. They are now so powerful in such a compact footprint that it is increasingly easier to get a decent image today. It’s all smaller and not so cumbersome.

What are some of your best practices or rules you try to follow on each job?
Over the years, I have gone from obsessed to maniacal to relaxed to obsessed again. Today, I am really trying to be more respectful of all the artists working on the project and to just not let it all get to me. The pressure of it affects people differently. I really try to stay more even and calm. It really is all about the crew. DPs just direct traffic. Point people in the right way and direct them. Don’t give them line readings, but direct them. You just must be confident in why you are directing them a certain way. If you don’t believe, neither will they. In the end, the best practice a DP can ever have is to always get the best crew possible. People make movies. You need good people. You are only as good as your crew. It really is that simple.

Explain your ideal collaboration with the director when setting the look of a project.
Mimi Leder is my best example of a collaborative director. We have a shared taste, and she really understands the value of a talented crew working together on a clearly defined common goal. The best directors communicate well and share enough information and ideas with the crew so that the crew can go and execute those ideas, and ultimately, expand upon them.

If a director clearly understands the story that they are telling, then they can eloquently communicate the concepts that will bring that story to life. The artists around them can expand a director’s ideas and solidify and embed them into the images. We all bring multiple levels of detail to the story. Hopefully, we are all inserting the same thematic ideas in our storytelling decisions. Mimi really allows her department heads to explore the story and bring their aesthetic sensibilities to the project. In the end, real collaboration is synonymous with good directing.

What’s your go-to gear? Things you can’t live without?
Lots of iced lattes and cold brew. I really don’t have a constant accessory, short of coffee.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Quick Chat: The Rebel Fleet’s Michael Urban talks on-set workflows

When shooting major motion pictures and episodic television with multiple crews in multiple locations, production teams need a workflow that gives them fast access and complete control of the footage across the entire production, from the first day of the shoot to the last day of post. This is Wellington, New Zealand-based The Rebel Fleet’s reason for being.

What exactly do they do? Well, we reached out to managing director Michael Urban to find out.

Can you talk more about what you do and what types of workflows you supply?
The Rebel Fleet supplies complete workflow solutions, from on-set Qtake video assist and DIT to dailies, QC, archive and delivery to post. By managing the entire workflow, we can provide consistency and certainty around the color pipeline, monitor calibration, crew expertise and communication, and production can rely on one team to take care of that part of the workflow.

We have worked closely with Moxion many times and use its Immediates workflow, which enables automated uploads direct from video assist into its secure dailies platform. Anyone with access to the project can view rushes and metadata from set moments after the video is shot. This also enables different shooting units to automatically and securely share media. Two units shooting in different countries can see what each other has shot, including all camera and scene/take metadata. This is then available and catalogued directly into the video assist system. We have a lot of experience working alongside camera and VFX on-set as well as delivering to post, making sure we are delivering exactly what’s needed in the right formats.

You recently worked on a film that was shot in New Zealand and China, and you sent crews to China. Can you talk about that workflow a bit and name the film?
I can’t name the film yet, but I can tell you that it’s in the adventure genre and is coming out in the second half of 2020. The main pieces of software are Colorfront On-Set Dailies for processing all the media and Yoyotta for downloading and verifying media. We also use Avid for some edit prep before handing over to editorial.

How did you work with the DP and director? Can you talk about those relationships on this particular film?
On this shoot the DP and director had rushes screenings each night to go over the main unit and second unit rushes and make sure the dailies grade was exactly what they wanted. This was the last finesse before handing over dailies to editorial, so it had to be right. As rushes were being signed off, we would send them off to the background render engine, which would create four different outputs in multiple resolutions and framing. This meant that moments after the last camera mag was signed off, the media was ready for Avid prep and delivery. Our data team worked hard to automate as many processes as possible so there would be no long nights sorting reports and sheets. That work happened as we went throughout the day instead of leaving a multitude of tasks for the end of the day.
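The Rebel Fleet’s render engine is proprietary, but the underlying pattern (a watch folder that fans each signed-off clip out to several deliverables) is straightforward to sketch. Below is a minimal, hypothetical illustration using ffmpeg; the folder names and output recipes are our assumptions, not The Rebel Fleet’s actual pipeline.

```python
import subprocess
from pathlib import Path

# Hypothetical output recipes (label, ffmpeg video args). The real pipeline's
# formats and naming are not public; these are illustrative stand-ins.
RECIPES = [
    ("avid_dnxhr", ["-vf", "scale=-2:1080", "-c:v", "dnxhd", "-profile:v", "dnxhr_lb"]),
    ("review_h264", ["-vf", "scale=-2:720", "-c:v", "libx264", "-crf", "23"]),
]

def render_deliverables(clip: Path, out_root: Path) -> None:
    """Fan one signed-off clip out to every configured deliverable."""
    for label, video_args in RECIPES:
        out = out_root / label / (clip.stem + ".mov")
        out.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run(["ffmpeg", "-y", "-i", str(clip), *video_args, str(out)],
                       check=True)

if __name__ == "__main__":
    # "approved_rushes" is a hypothetical watch folder of signed-off clips.
    for clip in sorted(Path("approved_rushes").glob("*.mov")):
        render_deliverables(clip, Path("deliverables"))
```

The point of automating this step, as Urban describes, is that deliverables accumulate throughout the day instead of piling up after the last mag is signed off.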

How do your workflows vary from project to project?
Every shoot is approached with a clean slate, and we work with the producers, DP and post to make sure we create a workflow that suits the logistical, budgetary and technical needs of that shoot. We have a tool kit that we rely on and use it to select the correct components required. We are always looking for ways to innovate and provide more value for the bottom line.

You mentioned using Colorfront tools. What do they offer you? And what about storage? It seems like working on location means you need a solid way to back up.
Colorfront On-Set Dailies takes care of QC, grade, sound sync and metadata. All of our shared storage is built around Quantum Xcellis, plus the Quantum QXS hybrid storage systems for online and nearline. We create the right SAN for the job depending on the amount of storage and clients required for that shoot.

Can you name projects you’ve worked on in the past as well as some recent work?
Warner Bros.’ The Meg, DreamWorks’ Ghost in the Shell, Sonar’s The Shannara Chronicles, STX Entertainment’s Adrift, Netflix’s The New Legends of Monkey and The Letter for the King and Blumhouse’s Fantasy Island.

GoPro intros Hero8 Black and Max cameras, plus accessories

GoPro has added two new cameras to its lineup — Hero8 Black and GoPro Max — as well as modular accessories called Mods.

The Hero8 Black ($399) features HyperSmooth 2.0 video stabilization and offers improved pitch-axis stabilization. It also now supports all frame rates and resolutions. TimeWarp 2.0 auto adjusts to the operator’s speed and can be slowed to realtime with a tap. The revamped SuperPhoto feature offers ghost-free HDR action photos, and new LiveBurst captures 1.5 seconds of 12MP (4K 4:3) footage before and after the shutter. Hero8 Black also has a new wind-optimized front-facing mic and high-fidelity audio improvements.

The camera has four new digital lenses — ranging from GoPro’s patented SuperView to zero-distortion linear — and customizable capture presets for quick access to settings for any activity. It’s all housed in a frameless design with folding mounting fingers.

Hero8 Black is available for preorder now, with shipments beginning October 15.

Accessories
Hero8 Black can be turned into a vlogging or production camera with Mods, GoPro’s new modular accessory ecosystem. The Media Mod, Display Mod and Light Mod equip Hero8 Black with professional-grade audio, a front-facing display and enhanced lighting. The Mods enable on-demand expansion of Hero8 Black’s capabilities without losing the compact ruggedness of the Hero camera design.

The Media Mod ($79.99) features shotgun-mic directional audio and has two cold shoe mounts for additional accessories along with Type-C, HDMI and 3.5mm external mic adapter ports.

The Display Mod ($79.99) is a folding front- or rear-facing 1.9-inch display that attaches to the top of the Media Mod. It’s the perfect size for both framing up vlogging shots and folding down and out of the way when not in use.

The Light Mod ($49.99) is waterproof to 33 feet (10 meters), wearable and gear-mountable. The Light Mod is ready to brighten any scene, whether mounted to the Media Mod or attached to a GoPro mount. It’s rechargeable and comes complete with a diffuser to soften lighting when filming with Hero8 Black. Mods will be available for preorder in December.

GoPro Max
GoPro Max ($499) is a dual-lens GoPro camera. Waterproof to 16 feet (five meters), Max can be used as a single-lens, max-stabilized Hero camera, a dual-lens 360 camera or a vlogging camera — all in one. Max HyperSmooth, with its unbreakable stabilization and in-camera horizon leveling, eliminates the need for a gimbal. Max TimeWarp bends time and space with expanded control and performance over traditional TimeWarp. And Max SuperView delivers GoPro’s widest, most immersive field of view yet.

When creating 360 edits, Max users now have Reframe, the GoPro app’s new keyframe-based editing experience. Now it’s easy to quickly “reframe” 360 footage into a traditional video with super-smooth pans and transitions. Reframe matches the power of desktop 360 editing solutions, but with the convenience and usability of the GoPro app.
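GoPro hasn’t published how Reframe interpolates between keyframes, but the core idea of keyframe-based reframing is simple: store the virtual camera’s orientation at a few points in time and ease between them. Here’s a toy sketch of that general technique; all values and function names are hypothetical, not GoPro’s implementation.

```python
# Minimal sketch of keyframe-based 360 reframing (an illustration of the
# general technique; GoPro's actual interpolation is not public).

KEYFRAMES = [  # (time_sec, yaw_deg, pitch_deg) -- hypothetical values
    (0.0,   0.0,  0.0),
    (2.0,  90.0, 10.0),
    (5.0, 180.0,  0.0),
]

def smoothstep(t: float) -> float:
    """Ease-in/ease-out curve, which is what makes pans feel smooth."""
    return t * t * (3.0 - 2.0 * t)

def camera_angles(time_sec: float):
    """Interpolate yaw/pitch for the virtual camera at a given time."""
    for (t0, y0, p0), (t1, y1, p1) in zip(KEYFRAMES, KEYFRAMES[1:]):
        if t0 <= time_sec <= t1:
            f = smoothstep((time_sec - t0) / (t1 - t0))
            return y0 + f * (y1 - y0), p0 + f * (p1 - p0)
    return KEYFRAMES[-1][1:]

print(camera_angles(1.0))  # (45.0, 5.0): halfway through the first pan
```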

For vlogging, Max has four digital lenses for the ideal “look,” a front-facing touch screen for easy framing and six mics that enable shotgun-mic audio performance.

Max can be preordered now, with shipments beginning in late October.

Wildlife DP Steve Lumpkin on the road and looking for speed

For more than a decade, Steve Lumpkin has been traveling to the Republic of Botswana to capture and celebrate the country’s diverse and protected wildlife population. As a cinematographer and still photographer, Under Prairies Skies Photography‘s Lumpkin will spend a total of 65 days this year filming in the bush for his current project, Endless Treasures of Botswana.

Steve Lumpkin

It’s a labor of love that comes through in his stunning photographs, whether they depict a proud and healthy lioness washed with early-morning sunlight, an indolent leopard draped over a tree branch or a herd of elephants traversing a brilliant green meadow. The big cats hold a special place in Lumpkin’s heart, and documenting Botswana’s largest pride of lions is central to the project’s mission.

“Our team stands witness to the greatest conservation of the natural world on the planet. Botswana has the will and the courage to protect all things wild,” he explains. “I wanted to fund a not-for-profit effort to create both still images and films that would showcase The Republic of Botswana’s success in protecting these vulnerable species. In return, the government granted me a two-year filming permit to bring back emotional, true tales from the bush.”

Lumpkin recently graduated to shooting 4K video in the bush in Apple ProRes RAW, using a Sony FS5 camera and an Atomos Inferno recorder. He brings the raw footage back to his US studio for post, working in Apple Final Cut Pro on an iMac 5K and employing a variety of tools, including Color Grading Central and Neat Video.

Leopard

Until recently, Lumpkin was hitting a performance snag when transferring files from his QNAP TBS 882T NAS storage system to his iMac Pro. “I was only getting read speeds of about 100MB/sec over Thunderbolt, so editing 4K footage was painful,” he says. “At the time, I was transitioning to ProRes RAW, and I knew I needed a big performance kick.”

On the recommendation of Bob Zelin, video engineering consultant and owner of Rescue 1, Lumpkin installed Sonnet’s Solo10G Thunderbolt 3 adapter. The Solo10G uses the 10GbE standard to connect computers via Ethernet cables to high-speed infrastructure and storage systems. “Instantly, I jumped to a transfer rate of more than 880MB per second, a nearly tenfold throughput increase,” he says. “The system just screams now – the Solo10G has accelerated every piece of my workflow, from ingest to 4K editing to rendering and output.”
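As a rough sanity check on those numbers (our arithmetic, not Sonnet’s spec sheet): network links are rated in bits per second, while file transfers report bytes per second, so a healthy 10GbE link has plenty of headroom for speeds like these.

```python
# Rough sanity check: what throughput should a 10GbE link deliver?
# (Illustrative figures only; protocol overhead varies by setup.)

link_gbps = 10     # 10GbE line rate, gigabits per second
overhead = 0.10    # assume ~10% Ethernet/TCP/SMB overhead

usable_mb_per_sec = link_gbps * 1000 / 8 * (1 - overhead)
print(f"Usable throughput: ~{usable_mb_per_sec:.0f} MB/s")  # ~1125 MB/s

# Lumpkin's reported 880 MB/s sits comfortably inside that budget, while
# his old ~100 MB/s transfers used only a fraction of what the
# Thunderbolt-attached NAS hardware could move.
```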

“So many colleagues I know are struggling with this exact problem — they need to work with huge files and they’ve got these big storage arrays, but their Thunderbolt 2 or 3 connections alone just aren’t cutting it.”

With Lumpkin, everything comes down to the wildlife. He appreciates any tools that help streamline his ability to tell the story of the country and its tremendous success in protecting threatened species. “The work we’re doing on behalf of Botswana is really what it’s all about — in 10 or 15 years, that country might be the only place on the planet where some of these animals still exist.

“Botswana has the largest herd of elephants in Africa and the largest group of wild dogs, of which there are only about 6,000 left,” says Lumpkin. “Products like Sonnet’s Solo10G, Final Cut, the Sony FS5 camera and Atomos Inferno, among others, help our team celebrate Botswana’s recognition as the conservation leader of Africa.”

Sony intros 4K camera with 6K full-frame sensor, auto-focus

At IBC 2019, Sony announced the PXW-FX9, its first XDCAM camera featuring an advanced 6K full-frame sensor and Fast Hybrid Auto Focus (AF) system. The new camera offers content creators greater creative freedom and flexibility to capture stunning images that truly resonate with audiences.

Building on the success of the PXW-FS7 and PXW-FS7M2, the FX9 combines high mobility with an advanced AF system, enhanced bokeh and slow-motion capabilities thanks to its newly developed sensor. The FX9 also inherits its color science and dual base ISO from the Venice digital motion picture camera, creating the ultimate tool for documentaries, music videos, drama productions and event shooting.

The FX9 was designed in close collaboration with the creative community. It offers the versatility, portability and performance expected of an FS7 series “run and gun” style camera, while also offering high dynamic range and full-frame shooting features.

“With the new FX9, we are striking a balance between agility and creative performance. We’ve combined the cinematic appeal of full-frame with advanced professional filmmaking capabilities in a package that’s extremely portable and backed by the versatility of Sony E-mount,” says Sony’s Neal Manowitz.

The new Exmor R sensor offers wide dynamic range with high sensitivity, low noise and over 15 stops of latitude that can be recorded internally in 4K 4:2:2 10-bit. Oversampling of the full-frame 6K sensor’s readout allows pros to create high-quality 4K footage with bokeh effects through shallow depth of field, while wide-angle shooting opens new possibilities for content creators to express their creativity.
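A quick back-of-the-envelope calculation shows why oversampling helps; the rounded 6,000-photosite width below is our illustration, not Sony’s exact sensor spec.

```python
# Oversampling a nominal 6K-wide readout into a UHD (3840-wide) deliverable.
# 6,000 is a rounded width for illustration, not Sony's exact spec.
sensor_w, output_w = 6000, 3840
ratio = sensor_w / output_w
print(f"{ratio:.2f}x per axis -> ~{ratio**2:.1f} photosites per output pixel")
# 1.56x per axis -> ~2.4 photosites averaged into each 4K pixel, which is
# where the detail and noise advantage of oversampling comes from.
```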

A dual base ISO of 800 and 4000 enables the image sensor to best capture scenes from broad daylight to the middle of the night. With S-Cinetone color science, the new sensor can create soft facial tones. The camera can also capture up to 5x slow motion, shooting full HD at 120fps for 24p playback.

The shallow depth of field available with a full-frame image sensor requires precise focus control, and the enhanced Fast Hybrid AF system, with customizable transition speeds and sensitivity settings, combines phase detection AF for fast, accurate subject tracking with contrast AF for exceptional focus accuracy. The dedicated 561-point phase-detection AF sensor covers approximately 94% in width and 96% in height of the imaging area, allowing consistently accurate, responsive tracking — even with fast-moving subjects while maintaining shallow depth of field.

Inspired by the high mobility run-and-gun style approach from the FS7 series of cameras, the FX9 offers content creators shooting flexibility thanks to a continuously variable Electronic Variable ND filter. This enables instant exposure level changes depending on the filming environment, such as moving from an inside space to outdoors or while filming in changing natural light conditions.

Additionally, the FX9’s image stabilization metadata can be imported to Sony’s Catalyst Browse/Prepare software to create stable visuals even in handheld mode. Sony is also working to encourage third-party nonlinear editing tools to adopt this functionality.

The FX9 will be available toward the end of 2019.

Red adds Helium and Gemini sensor options to Ranger cameras

Red has added its Helium 8K S35 and Gemini 5K S35 sensors to the Red Ranger camera ecosystem. These two new options offer an alternative for creators who prefer an integrated, all-in-one system to the more modular Red DSMC2 camera.

The Ranger Helium 8K S35 and Ranger Gemini 5K S35 are available now via Red’s global network of resellers, through participating rental houses and directly through Red. They join the Red Ranger Monstro 8K VV sensor, which remains a rental house-only product.

All three sensor variants of the Red Ranger camera system include the same benefits of the compact, standardized camera body, weighing around 7.5 pounds (depending on battery). The system can also handle heavy-duty power sources to satisfy power-hungry configurations and boasts a large fan for quiet, more efficient temperature management.

The Red Ranger camera system includes three SDI outputs (two mirrored and one independent), allowing two different looks to be output simultaneously; wide-input voltage (11.5V to 32V); 24V and 12V power outs (two of each); one 12V P-Tap; integrated 5-pin XLR stereo audio input (line/mic/+48V selectable); as well as genlock, timecode, USB and control connections. Both V-Lock and Gold Mount battery options are supported.

As with all current Red cameras, the Ranger can simultaneously record Redcode RAW plus Apple ProRes or Avid DNxHD or DNxHR at up to 300 MB/s write speeds. It also features Red’s end-to-end color management and post workflow with the enhanced image processing pipeline (IPP2).
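For a sense of what that 300 MB/s ceiling means in practice, here’s a quick capacity calculation; the 960GB media size is our hypothetical example for illustration, not a Red spec.

```python
# How long does media last at the Ranger's peak 300 MB/s write rate?
# The 960 GB media size below is a hypothetical example, not a spec.
write_mb_per_sec = 300
media_gb = 960

minutes = media_gb * 1000 / write_mb_per_sec / 60
print(f"~{minutes:.0f} minutes of continuous recording")  # ~53 minutes
```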

Ranger Helium and Ranger Gemini ship complete with:

  • New production top handle
  • Shimmed PL mount
  • New LCD/EVF Adaptor D with improved cable routing when used on the left side of the camera
  • New 24V AC power adaptor with 3-pin 24V XLR power cable, which can also be used with 24V block batteries
  • Lens mount shim pack
  • Compatible Hex and Torx tools

Additionally, Red plans to introduce Canon EF Mount versions of both Ranger Helium and Ranger Gemini later this year.

Pricing for the two new variants is $29,950 for Ranger Helium and $24,950 for Ranger Gemini.

Scratch 9.1 now supports AJA Kona 5, Red 8K workflows

Assimilate’s Scratch 9.1 now supports AJA Kona 5 audio and video I/O cards, enabling users to output 8K 60p video via 12G-SDI. Scratch 9.1 also now supports AJA’s Io 4K Plus I/O box with Thunderbolt 3 connectivity, and it works with AJA’s T-Tap, Io 4K, Kona 1 and Kona 4 as well.

Scratch support for Kona 5 allows for a smooth dailies and finishing workflow for Red 8K footage. Scratch handles the decoding and deBayering of 8K Red RAW in realtime at full resolution and can now natively output 8K over SDI through Kona 5, facilitating a full end-to-end 8K workflow.

Available immediately, Scratch 9.1 starts at $89 a month or $695 annually. AJA Kona 5 and Io 4K Plus are available now through AJA’s reseller network for $2,995 and $2,495, respectively.

DP and director talk about shooting indie Concrete Kids

Director Lije Sarki’s Concrete Kids tells the story of two nine-year-old boys from Venice, California, who set off on a mission to cross Los Angeles on skateboards at night to reach the Staples Center by morning for a contest. Now streaming on Amazon Prime, the low-budget feature from The Orchard was shot by cinematographer Daron Keet with VariCam LTs in available light, mainly at night.

There were many challenges in shooting Concrete Kids, including working with child actors, who could only shoot for three hours per night. “Seventeen nights at three hours per night, that really only equals like four days of shooting,” explains Sarki. “The only way we could shoot and be that mobile and fast was if we used ambient light with Daron holding the camera and using the occasional sticks for wide shots. I also didn’t want to use kids that didn’t skate because I don’t like to cheat authenticity. I didn’t want to cheat the skateboarding and wanted it to feel real. I really wanted to make everything small — just a camera and someone recording sound.”

“When Lije said he didn’t want to have a crew,” says Keet, “I was a little [surprised] because I’m used to having a full crew, and I like using traditional film tools. I also don’t like making movies as documentaries. But I always like to push myself and have different challenges. One reason I didn’t hesitate in doing the film is that Lije is very organized. The more work you do up front, the easier the shoot will be.”

Keet shot the film with the VariCam LT. For Keet, the look was going to be determined by the actual environment, not influenced by his lighting. “We were working at such low light levels,” explains Keet. “We shot in alleys that, to your eye, looked black. I would just aim the VariCam down this alley, and then you would see something different from what your eye was seeing. It was amazing. We even had experiences where a traffic light would change from green to red and shift the illumination from a green ambiance to a red one. For me, it was an incredible challenge and a different way of working, where I’m not just looking for available light but relying on the camera to find those opportunities.

DP Daron Keet (in wheelchair) on set

“It’s really nice to have a tool where you can tell the story you want with the tools you have,” he continues. “A lot of people don’t like night shooting, but I actually love it because as a DP you have more control at night because everything is turned off and you can place lights where you want. Concrete Kids was a much different challenge because I was shooting at night, but I didn’t have much control. I had to be able to see things in a different way.”

With the VariCam LT, Keet captured 10-bit 422 4K (4096×2160) AVC-Intra files in V-Log while monitoring his footage with the Panasonic V-709 LUT. Since over 90% of the movie was shot at night in available light, Keet captured at native 5000 ISO. For certain shots he even used the LT’s internal ND for night sequences where he wanted more shallow depth of field.

Although he mainly stuck to available streetlights, Keet occasionally used a magnetized one-foot tube light that he kept in his back pocket. “There was one scene that was very well illuminated in the background, but the foreground was dark, so I wanted to balance it out,” explains Keet. “I stuck the light onto a stop sign, and it balanced the light perfectly. With digital, I’m pretty good at knowing what’s going to overexpose. It’s more about camera angles and always trying to have things backlit, because you’re always getting enough light to bounce around.”

For lenses, Keet employed a vintage set of Super Baltar primes, which the production received from Steve Gelb at LensWorksRentals.com. Keet loved how the lenses spread the light throughout the frame. “A lot of the newer lenses will hold the flares,” explains Keet. “The Super Baltars spread the flares and make the image look really creamy, and they give you more of a rounded bokeh. Anamorphic, for example, would give you an oblong shape. With the Baltars, the iris is smooth and rounded. If you see an out-of-focus street lamp, the outer edge on newer glass might be sharp, but with older glass it will be much softer. A creamy look is also very forgiving on faces.”

Keet shot wide open most of the time and relied on his skill from working as a focus puller years prior. Sarki also had a wireless director’s monitor so he could check focus for Keet as well.

The film was edited by Pete Lazarus using Adobe Premiere Pro. Studio Unknown in Baltimore did the sound mix remotely. The film was color graded by Asa Fox at The Mill in LA pro bono. Fox gave Sarki a few different options for the look and Keet and Sarki would make adjustments so the film would feel consistent. Because they didn’t have a lot of time for the color grade, Keet relied on a trick he learned to keep things moving. He and Sarki would find their favorite moment that encapsulates a certain scene and work on that color before moving on to the next scene. “When you do that, you can work pretty quickly and then just walk away, leaving the colorist to do his job since we didn’t want to waste any time.”

“So many people helped make this project work because of their contributions without financial benefit,” says Sarki. “I’m super happy with the end result.”

Main Image: Director Lije Sarki

 

DP Chat: Autumn Durald Arkapaw on The Sun Is Also a Star

By Randi Altman

Autumn Durald Arkapaw always enjoyed photography and making films with friends in high school, so it was inevitable that her path would lead to cinematography.

“After a genre course in college where we watched Raging Bull and Broadway Danny Rose, I was hooked. From that day on, I wanted to find out who was responsible for photographing a film. After I found out it was an actual job, I set out to become a DP. I immediately started learning what the job entailed and also started applying to film schools with my photography portfolio.”

The Sun Is Also a Star

Her credits are vast and include James Franco’s Palo Alto, the indie film One & Two, and music videos for the Jonas Brothers and Arcade Fire. Most recently, she worked on Emma Forrest’s feature film Untogether, Max Minghella’s feature debut Teen Spirit and director Ry Russo-Young’s The Sun Is Also a Star, which follows two young people who hit it off immediately and spend one magical day enjoying each other and the chaos that is New York City.

We recently reached out to Durald Arkapaw to find out more about these films, her workflow and more.

You’ve been busy with three films out this year — Untogether, Teen Spirit and The Sun Is Also a Star. What attracts you to a project?
I’m particular when it comes to choosing a narrative project. I have mostly worked with friends in the past and continue to do so. When making feature films, I throw myself into it. So it’s usually the relationship with the director and their vision that draws me first to a project.

Tell us about The Sun Is Also a Star. How would you describe the overall look of the film?
Director Ry Russo-Young and I wanted the film to feel grounded and not like the usual overlit/precious versions of these films we’ve all encountered. We wanted it to have texture and darks and lights, and the visuals to have a soulfulness. It was important that the world we created felt like an authentic and emotional environment.

Autumn Durald Arkapaw

How early did you get involved in the production? What were some of the discussions about conveying the story arc visually?
Ry and I met early on before she left for prep in New York. She shared with me her passion for wanting to make something new in this genre. That was always the basis for me when I thought about the story unfolding over one day and the arc of these characters. It was important for us to have the light show their progression through the city, but also have it highlight their love.

How did you go about choosing the right camera and lenses to achieve the look?
Ry was into anamorphic before I signed on, so it was already alluring to me once she sent me her look book and visual inspirations. I tend to shoot mostly in the Panavision anamorphic format, so my love goes deep for this medium. As for our camera, the ARRI Alexa Mini was our first choice since it renders a filmic texture, which is very important to me.

Any challenging scene or scenes that you are particularly proud of?
One of my favorite scenes/shots in the film is when Daniel (Charles Melton) sees Natasha (Yara Shahidi) for the first time in Grand Central Station. We had a Scorpio 23-foot telescopic crane on the ground floor. It is a beautiful shot that pulls out, booms down from Daniel’s medium shot in the glass staircase windows, swings around the opposite direction and pushes in while also zooming in on a 12:1 into an extreme closeup of Natasha’s face. We only did two takes and we nailed it on the first one. My focus puller, Ethan Borsuk, is an ace, and so is my camera operator, Andrew Fletcher. We all celebrated that one.

The Sun Is Also a Star

Were you involved in the final color grading? What’s important to you about the collaboration between DP and colorist?
Yes, we did final color at Company 3 in New York. Drew Geary was our DI colorist. I do a lot of color on set, and I like to use the on-set LUT for the final as well. So, it’s important that my colorist and I share the same taste. I also like to work fast. Drew was amazing. He was fantastic to work with and added a lot to the overall look and feel.

What inspires you artistically?
Talented, inspiring, hardworking people. Because filmmaking is a team effort, and those around me inspire me to make better art.

How do you stay on top of advancing technology that serves your vision?
Every opportunity I get to shoot is an opportunity to try something new and tell a story differently. Working with directors that like to push the envelope is always a plus. Since I work a lot in commercials, that always affords me the occasion to try new technology and have fun with it.

Has any recent or new technology changed the way you work, looking back over the past few years?
I recently wrapped a film where we shot a few scenes with the iPhone, something I would never have considered in the past, but the technology has come a long way. Granted, the film is about a YouTube star, but I was happily surprised at how decent some of the stuff turned out.

What are some of your best practices or rules you try to follow on each job?
Always work fast and always make it look the best you can while, most importantly, telling the story.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Perpetual Grace’s DPs, colorist weigh in on show’s gritty look

You don’t have to get very far into watching the Epix series Perpetual Grace LTD to realize just how ominous this show feels. It begins with the opening shots, and by the time you’ve spent a few minutes with the dark, mysterious characters who populate this world — and gathered hints of the many schemes within schemes that perpetuate the story — the show’s tone is clear. With its black-and-white flashbacks and the occasional, gritty flash-forwards, Perpetual Grace gets pretty dark, and the action goes in directions you won’t see coming.

This bizarre show revolves around James (Westworld’s Jimmi Simpson), who gets caught up in what initially seems like a simple con that quickly gets out of control. Sir Ben Kingsley, Jacki Weaver, Chris Conrad and Luis Guzmán also star as an assortment of strange and volatile characters.

The series comes from the minds of executive producer Steve Conrad, who also served in that role on Amazon’s quirky drama Patriot, and Bruce Terris, who was both a writer and a first AD on that show.

These showrunners developed the look with other Patriot veterans: cinematographers James Whitaker and Nicole Hirsch Whitaker, who incorporated colorist Sean Coleman’s input before commencing principal photography.

Coleman left his grading suite at Company 3 in Santa Monica to spend several days at the series’ New Mexico location. While there he worked with the DPs to build customized LUTs for them to use during production. This meant that everyone on set could get a strong sense of how lighting, costumes, sets and locations would read with the show’s signature looks applied.

The Whitakers on set

“I’ve never been able to work with the final colorist this way,” says Whitaker, who also alternated directing duties with Conrad. “It was great having him there on set where we could talk about the subtleties of color. What should the sky look like? What should blood look like? Faces? Clothes?” Using Resolve, Coleman made two LUTs: the main one for the color portions and a different one specifically for the black-and-white parts.

The main look of the show is inspired by film noir and western movie tropes, all with a tip of the hat to Roger Deakins’ outstanding work on The Assassination of Jesse James by the Coward Robert Ford. “For me,” says Whitaker, “it’s about strong contrast, deep blacks and desert colors … the moodier the better. I don’t love very blue skies, but we wanted to keep some tonality there.”

“It’s a real sweaty, gritty, warm, nicotine-stained kind of thing,” Coleman elaborates.

“When we showed up in New Mexico,” Whitaker recalls, “all these colors did exist at various times of the day, and we just leaned into them. When you have landscapes with big, blue skies, strong greens and browns, you can lean into that and make it overly saturated. We leaned the other way, holding the brown earth tones but pulling out some of the color, which is always better for skin tones.”

The LUTs, Whitaker notes, offer a lot more flexibility than the DPs would have if they used optical filters. Beyond the nondestructive aspect of a LUT, it also allows for a lot more complexity. “If you think about a ‘sepia’ or ‘tobacco’ filter or something like that, you think of an overall wash that goes across the entire frame, and I get immediately bored by that. It’s tricky to do something that feels like it’s from a film a long time ago without dating the project you’re working on now; you want a lot of flexibility to get [the imagery] where you want it to go.”

The series was shot from November through February, often in brutally cold environments. Almost the entire series (the present-day scenes and black-and-white flashbacks) was shot on ARRI Alexa cameras in a 2.0:1 aspect ratio. A frequent Whitaker/Hirsch Whitaker collaborator, DIT Ryan Kunkleman, applied and controlled the LUTs so the set monitors reflected their effect on the look.

The flash-forwards, which usually occur in very quick spurts, were shot on a 16mm Bolex camera using Kodak’s 7203 (50D) and 7207 (250D) color negative film, which was pushed two stops in processing to enhance the grain that Coleman would lean into in post.

Final color was done at Company 3’s Santa Monica facility, working primarily alongside the Whitakers. “We enhanced the noir look with the strong, detailed blacks,” says Coleman. Even though a lot of the show exudes the dry desert heat, it was actually shot over a particularly cold winter in New Mexico. “Things were sometimes kind of cold-looking, so sometimes we’d twist things a bit. We also added some digital ‘grain’ to sort of muck it up a little.”

For the black and white, Coleman took the color material in Resolve and isolated just the blue channel in order to manipulate it independent of the red and green, “to make it more inky,” he says. “Normally, you might just drain the color out, but you can really go further than that if you want a strong black-and-white look. When you adjust the individual channel, you affect the image in a way that’s similar to the effect of shooting black-and-white film through a yellow filter. It helps us make darker skies and richer blacks.”
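Resolve handles this interactively, but the underlying move (weighting the color channels before discarding chroma) is easy to sketch. Here’s a minimal numpy illustration of the general technique, with hypothetical weights rather than Coleman’s actual settings.

```python
import numpy as np

def to_black_and_white(rgb, weights=(0.5, 0.4, 0.1)):
    """Channel-weighted monochrome conversion.

    rgb: float array of shape (H, W, 3), values in 0..1.
    weights: how much R, G and B contribute to the final gray.
    Down-weighting blue darkens skies, much like shooting
    black-and-white film through a yellow filter.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()   # normalize so overall exposure is preserved
    gray = rgb @ w    # per-pixel weighted sum -> shape (H, W)
    return np.clip(gray, 0.0, 1.0)

# A blue-sky pixel lands darker than its Rec. 709 luminance would suggest.
sky = np.array([[[0.2, 0.4, 0.9]]])
print(to_black_and_white(sky))  # ~0.35 vs. a luminance of ~0.39
```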

Sean Coleman

“We’ve booked a whole lot of hours together, and that provides a level of comfort,” says Hirsch Whitaker about her and Whitaker’s work with Coleman. “He does some wonderful painting [in Resolve] that helps make a character pop in the frame or direct the viewer’s eye to a specific part of the frame. He really enjoys the collaborative element of color grading.”

Whitaker seconds that emotion: “As a cinematographer, I look at color grading a bit like working on set. It’s not a one-person job. It takes a lot of people to make these images.”


Glassbox’s virtual camera toolset for Unreal, Unity, Maya

Virtual production software company Glassbox Technologies has released its virtual camera plugin DragonFly out of private beta for public use. DragonFly offers professional virtual cinematography tools to filmmakers and content creators, allowing users to view character performances and scenes within computer-generated virtual environments in realtime, through the camera’s viewfinder, an external monitor or an iPad.

Available for Unreal Engine, Unity 3D and Autodesk Maya, DragonFly delivers an inclusive virtual cinematography workflow that allows filmmakers and content creators to make and test creative decisions faster and earlier in the process, whittling down production cost on projects of all scopes and sizes.

This off-the-shelf toolkit allows users to create everything from previz through postviz without the need for large teams of operators, costly hardware or proprietary tools. It is platform-agnostic and fits seamlessly into any workflow out of the box. Users can visualize and explore a CG virtual environment, then record, bookmark, create snapshots and replicate real camera movement as seamlessly as conducting a live-action shoot.

“Virtual production poses great potential for creators, but there were no off-the-shelf filming solutions available that worked out of the box,” notes co-founder/CPO Mariana Acuña. “In response, we made DragonFly: a virtual window that allows users to visualize complex sets, environments and performances through a viewfinder. Without the need for a big stage or mocap crew, it brings greater flexibility to the production and post pipeline for films, animation, immersive content, games and realtime VFX.”

The product was developed in collaboration with top Hollywood visualization and production studios, including The Third Floor, to ensure best-in-class results.

“Prior to DragonFly, each studio created its own bespoke virtual production workflow, which is costly and time-consuming per project. DragonFly makes realtime virtual production usable for all creators,” says Evelyn Cover, global R&D manager for The Third Floor. “We’re excited to collaborate with the Glassbox team to develop and test DragonFly in all kinds of production scenarios, from previz to post, with astounding success.”

Glassbox’s second in-beta virtual production software solution, BeeHive — the multi-platform, multi-user collaborative virtual scene syncing, editing and review solution — is slated to launch later this summer.

DragonFly is now available for purchase or can be downloaded as a free 15-day trial from the Glassbox website. Pricing includes a permanent license option at $750 (including $250 for the first year of support and updates) and a rental option at $420 a year.

Remembering ARRI’s Franz Wieser

By Randi Altman

Franz Wieser passed away last week, and the world is worse for it. I’ve known Franz for over 20 years, going back to when he was still based in ARRI’s Blauvelt, New York, office and I was editor of Post Magazine.

We would meet in the city from time to time for an event or a meal. In fact, he introduced me to a hidden gem of a restaurant just off Washington Square Park that has become one of my favorites. It reminds me of him — warm, friendly and welcoming.

I always laugh when I remember him telling me about when his car broke down here in New York. Even though he had his hazard lights on and it was clear his car wasn’t cooperating, people kept driving by and giving him the finger. He was bemused but incredulous, which made it even funnier.

Then he moved to LA and I saw him less… a quick hello at trade shows a couple of times a year. When I think of Franz, I remember his smile first and how soft spoken and kind he was.

He touched many over the years and their stories are similar to mine.

“I have known Franz for nearly two decades, but it was during the earliest days of ARRI’s digital era that we truly connected,” shares Gary Adcock, an early ARRI digital adopter, writer and industry consultant. “We got together after one of the director of photography conferences I chaired at NAB to talk about ARRI’s early D20 and D21 digital cameras. Franz was just a great person, always a kind word, always wanting to know how your family and friends were. It will be that kindness that I will miss the most.”

“This is such sad news,” says Andy Shipsides, CTO at Burbank’s AbelCine. “Franz was a dear friend and will be greatly missed. He was an amazing person and brought fun and levity to his work every day. I had lunch with him several months ago, and I feel lucky to have shared that time with him. Franz was truly a delightful person. He took me out when I first moved to LA to welcome me to the city, which I will always remember. He always had a smile on his face, and his positive energy was contagious. He will be very much missed, a big loss for our industry.”

ARRI sent out the following about Franz.

It is with great sadness that we share the news of the passing of Franz Wieser, VP of marketing at ARRI Inc.

Franz Wieser grew up in Rosenheim in Bavaria, Germany. He was originally hired by ARRI CT in nearby Stephanskirchen, where ARRI’s lighting factory is situated. Franz started at ARRI as an intern under Volker Bahnemann, a member of the supervisory board of the ARRI Group, at what was then called Arriflex Corporation in Blauvelt, NY, and spent some time doing market research in New York and California.

In July 1994, Franz accepted a position as marketing manager at Arriflex under Volker Bahnemann and relocated to New York. Franz had a distinguished 25-year career in marketing for Arriflex and ARRI Inc., leading to his position of VP of marketing, based in the ARRI Burbank office. His contributions spanned the marketing of ARRI film and digital camera systems and analog and digital lighting fixtures. He also built lasting relationships with the American Society of Cinematographers (ASC) and many others in the film and television industry. His ability to connect with people, his friendliness and reliability, along with his deep understanding of the film industry, were outstanding. He was a highly valued member of the global marketing network and a wonderful person and colleague.

Glenn Kennel, president and CEO of ARRI Inc., says, “Franz will be remembered by his colleagues and many friends in the industry as a friend and mentor, willing to listen and help. He always had a smile on his face and a gracious approach.”

We are very saddened by his early loss and will remember him well. Our deepest sympathy goes out to his wife and his parents. 

Hobo Films’ Howard Bowler on new series The System

By Randi Altman

Howard Bowler, the founder of New York City-based audio post house Hobo Audio, has launched Hobo Films, a long-form original content development company.

Howard Bowler’s many faces

Bowler is also the founder and president of Green Point Creative, a marijuana-advocacy branding agency focused on the war on drugs and changing drug laws. And it is this topic that inspired Hobo Films’ first project, a dramatic series called The System. It features actress Lolita Foster from Netflix’s Orange Is The New Black.

Bowler has his hand in many things these days, and with those paths colliding, what better time to reach out to find out more?

After years working in audio post, what led you to want to start an original long-form production arm?
I’ve always wanted to do original scripted content and have been collecting story ideas for years. As our audio post business has grown, it’s provided us a platform to develop this related, exciting and creative business.

You are president/founder of Green Point Creative. Can you tell us more about that initiative?
Green Point Creative is an advocacy platform that was born out of personal experience. After an arrest followed by release (not my own), I researched the history of marijuana prohibition. What I found was shocking. Hobo VP Chris Stangroom and I started producing PSAs through Green Point to share what we had learned. We brought in Jon Mackey to aid in this mission, and he’s since moved up the ranks of Hobo into production management. The deeper we explored this topic, the more we realized there was a much larger story to tell, and one that couldn’t be told through PSAs alone.

You wrote the script for The System. Can you tell our readers what the show is about?
The show’s storyline follows the experiences of a white father raising his biracial son, set against the backdrop of the war on drugs. The tone of the series is a cross between Marvel Comics and Schindler’s List. It looks at what happens to these kids in the face of a nefarious system that has them in its grip, how they get out and how they fight back.

What about the shoot? How involved were you on set? What cameras were used? Who was your DP?
I was very involved the whole time, working with director Michael Cruz. We had to change lines of the script on set if we felt they weren’t working, so everyone had to be flexible. Our DP was David Brick, an incredible talent, driven and dedicated. He shot on the Red camera, and the footage is stunning.

Can you talk about working with the director?
I met Michael Cruz when we worked together at Grey, a global advertising agency headquartered in NYC. I told him back then that he was born to direct original content. At the time he didn’t believe me, but he does now.

L-R: DP David Brick and director Mike Cruz on set

Mike’s directing style is subtle but powerful; he knows how to frame a shot and get the performance. He also knows how to build a formidable crew. You’ve got to have a dedicated team in place to pull these things off.

What about the edit and the post? Where was that done? What gear was used?
Hobo is a natural fit for this type of creative project and is handling all the audio post as well as the music score that is being composed by Hobo staffer and musician Oscar Convers.

Mike Cruz tapped the resources of his company, Drum Agency, to handle the first phase of editing, and they pulled together the rough cuts. For the final edit, we connected with Oliver Parker. Ollie was just coming off two seasons of London Kills, a police thriller that has been released to great reviews. Oliver’s extraordinary editing elevated the story in ways I hadn’t predicted. All editing was done on Avid Media Composer.

The color grade was done by Juan Salvo at TheColourSpace using Blackmagic Resolve. [Editor’s Note: We reached out to Salvo to find out more. “We got the original 8K Red files from editorial and conformed that on our end. The look was really all about realism. There’s a little bit of stylized lighting in some scenes, and some mixed-temperature lights as well. Mostly, the look was about finding a balance between some of the more stylistic elements and the very naturalistic, almost cinéma vérité tone of the series.

“I think ultimately we tried to make it true-to-life with a little bit of oomph. A lot of it was about respecting and leaning into the lighting that DP Dave Brick developed on the shoot. So during the dialogue scenes, we tend to have more diffuse light that feels really naturalistic and just lets the performances take center stage, and in some of the more visual scenes we have some great set-piece lighting — police lights and flashlights — that really drive the style of those shots.”]

Where can people see The System?
Click here to view the first five minutes of the pilot and learn more about the series.

Any other shows in the works?
Yes, we have several properties in development, and to help move these projects forward, we’ve brought on Tiffany Jackman to lead these efforts. She’s a gifted producer who spent 10 years honing her craft at agencies, as well as working on films. With her aboard, we can now create an ecosystem that connects all the stories.

All Is True director Kenneth Branagh

By Iain Blair

Five-time Oscar nominee Ken Branagh might be the biggest Shakespeare fan in the business. In fact, it’s probably fair to say that the actor/director/producer/screenwriter largely owes his fame and fortune to the Bard. For the past 30 years he’s directed (and often starred in) dozens of theatrical productions, as well as feature film adaptations of Shakespeare’s works, starting with 1989’s Henry V. That film won him two Oscar nominations: Best Actor and Best Director. He followed it with Much Ado About Nothing, Othello, Hamlet (which won him a Best Adapted Screenplay Oscar nod), Love’s Labour’s Lost and As You Like It.

Ken Branagh and Iain Blair

So it was probably only a matter of time before the Irish star jumped at the chance to play Shakespeare himself in the new film All Is True, a fictionalized look at the final years of the playwright. Set in 1613, Shakespeare is acknowledged as the greatest writer of the age, but disaster strikes when his renowned Globe Theatre burns to the ground. Devastated, Shakespeare returns to Stratford, where he must face a troubled past and a neglected family — wife Anne (Judi Dench) and two daughters, Susanna (Lydia Wilson) and Judith (Kathryn Wilder). The large ensemble cast also includes Ian McKellen as the Earl of Southampton.

I sat down with Branagh — whose credits include directing such non-Shakespeare movies as Thor, Cinderella and Murder on the Orient Express and acting in Dunkirk and Harry Potter and the Chamber of Secrets — to talk about making the film and his workflow.

You’ve played many of Shakespeare’s characters in film or on stage. Was it a dream come true to finally play the man himself, or was it intimidating?
It was a dream come true, as I feel like he’s been a guide and mentor since I discovered him at school. And, rather like a dog, he’s given me unconditional love ever since. So I was happy to return some. It’s easy to forget that he was just a guy. He was amazing and a genius, but first and foremost he was a human being.

What kind of film did you hope to make?
A chamber piece, a character piece that took him out of his normal environment. I didn’t want it to be the predictable romp inside a theater, full of backstage bitching and all that sort of theatricality. I wanted to take him away from that and put him back in the place he was from, and I also wanted to load the front part of the movie with silence instead of tons of dialogue.

How close do you feel it gets to the reality of his final years?
I think it’s very truthful about Stratford. It was a very litigious society, and some of the scenes — like the one where John Lane stands up in church and makes very public accusations — all happened. His son Hamnet’s death was unexplained, and Shakespeare did seem to be very insecure in some areas. He wanted money and success, and he lived in a very volatile world. If he was supposed to be this returning hero coming back to the big house and a warm welcome from his family, whom he hadn’t seen much of over the past two decades, it didn’t quite happen that way. No, he was this absentee dad and husband, and the town had an ambivalent relationship with him; it wasn’t a peaceful retirement at all.

The film is visually gorgeous, and all the candlelit scenes reminded me of Barry Lyndon.
I’m so glad you said that, as DP Zac Nicholson and I were partly inspired by that film and that look, and we used only candlelight and no additional lights for those scenes. Painters were our inspiration too: Vermeer for the day scenes and Rembrandt for the night scenes.

Clint Eastwood told me, “Don’t ever direct and star in a movie unless you’re a sucker for punishment — it’s just too hard.” So how hard was it?
(Laughs) He’s right. It is very hard, and a lot of work, but it’s also a big privilege. But I had a lot of great help — the crew and people like Judi and Ian. They had great suggestions, and you listen to every tidbit they have to offer. I don’t know how Clint does it, but I do a lot of listening and stealing. The directing and acting are so interlinked for me, and I love directing as I get to watch Ian and Judi work, and they’re such hard workers. Judi literally gets to the set before anyone else, and she’s pacing up and down and getting ready to defend Anne Hathaway. She has this huge empathy for her characters, which you feel so much, and here she was giving voice to a woman who could not read or write.

Where did you post?
We were based at Longcross Studios, where we did Murder on the Orient Express and the upcoming Artemis Fowl. We did most of it there, and then we ended up at The Post Republic, which has facilities in London and Berlin, to do the final finishing. Then we did all the final mixing at Twickenham with the great re-recording mixer Andy Nelson and his team. It was my second picture with Andy Nelson as the re-recording mixer. I am completely present throughout and completely involved in the final mix.

Do you like the post process?
I love it. It’s the place where I understood, right from my first film, that in terms of performance it could make a good one bad, a good one great or a bad one much better. The power of change in post is just amazing to me, as is realizing that anything is possible if you have the imagination. So the way you juxtapose the images you’ve collected — and the way a scene from the third act might actually work better in the first act — is so huge in post. That fluidity was a revelation to me, and you can have these tremendous eureka moments in post that can be beautiful and so inspiring.

Can you talk about working with editor Una Ni Dhongaile, who cut The Crown and won a BAFTA for Three Girls?
She’s terrific. She wasn’t on the set but we talked a lot during the shoot. I like her because she really has an opinion. She’s definitely not a “yes” person, but she’s also very sensitive. She also gets very involved with the characters and protects you as a director. She won’t let you cut too soon or too deep, and she encourages you to take a moment to think about stuff. She’s one of those editors who has this special kind of intuition about what the film needs, in addition to all her technical skills and intellectual understanding of what’s going on.

What were the big editing challenges?
We did a lot of very long takes and then used the very best of them, and despite the painterly style, we didn’t want the film to feel too static. We didn’t want to cut falsely or artificially just to affect the pace, but to let it flow naturally so every minute was earned. We also didn’t want to be afraid of holding a particular shot for a long time. We definitely needed pauses and rests, and Shakespeare is musical in his poetry and the way he juxtaposes fast and slow moments. So all those decisions were critical and needed mulling as well as executing.

Talk about the importance of sound and music, as it’s a very quiet film.
It’s absolutely critical in a world like this, where light and sound play huge roles and are so utterly different from our own modern understanding of them. The aural space you can offer an audience was a big chance to adventure back in time, when the world was far more sparsely populated, especially in a little place like Stratford, where silence played a big role as well. You’re offering a hint of the outside world, and the aural landscape is really the bedrock for all the introspection and thoughtfulness this movie deals with.

Patrick Doyle’s music has this gossamer approach — that was the word we used. It was like a breath, so that the whole sound experience invited the audience into the meditative world of Shakespeare. We wanted them to feel the seasons pass, the wind in the trees, and how much more was going on than just the man thinking about his past. It was the experience of returning home and being with this family again, so you’d hear a creak of a chair and it would interrupt his thoughts. So we worked hard on every little detail like that.

Where did you do the grading and coloring?
Post Republic in their North London facility, and again, I’m involved every step of the way.

Did making this film change your views about Shakespeare the man?
Yes, and it was an evolving thing. I’ve always been drawn to his flawed humanity, so it seemed real to be placing this man in normal situations and have him be right out of his comfort zone at the start of the film. So you have this acclaimed, feted and busy playwright, actor, producer and stage manager suddenly back on the dark side of the moon, which Stratford was back then. It was a small town, a three-day trip from London, and it must have been a shock. It was candlelight and recrimination. But I think he was a man without pomp. His colleagues most often described him as modest and gentle, so I felt a vulnerability that surprised me. I think that’s authentic to the man.

What’s next for you?
Disney’s Artemis Fowl, the fantasy-adventure based on the books, which will be released on May 29, and then I start directing Death on the Nile for Fox, which starts shooting late summer.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Showrunner: Eric Newman of Netflix’s Narcos: Mexico

By Iain Blair

Much like the drugs that form the dark heart of Narcos: Mexico, the hit Netflix crime drama is full of danger, chills and thrills — and is highly addictive. It explores the origins of the modern, ultra-violent drug war by going back to its roots, beginning at a time when the Mexican trafficking world was a loose and disorganized confederation of independent growers and dealers. But that all changed with the rise of the Guadalajara Cartel in the 1980s as Félix Gallardo (Diego Luna) — the real-life former Sinaloan police-officer-turned-drug lord — takes the helm, unifying traffickers in order to build an empire.

L-R: Director José Padilha and producer Chris Brancato bookend Eric Newman on the set of Narcos, Season 1.

The show also follows DEA agent Kiki Camarena (Michael Peña), who moves his wife and young son from California to Guadalajara to take on a new post. He quickly learns that his assignment will be more challenging than he ever could have imagined. As Kiki garners intelligence on Félix and becomes more entangled in his mission, a tragic chain of events unfolds, affecting the drug trade and the war against it for years to come.

Narcos showrunner, writer and executive producer Eric Newman is a film and television veteran whose resume includes the Academy Award-nominated Children of Men, as well as Dawn of the Dead, The Last Exorcism and Bright. After more than 20 years in the movie industry, Newman transitioned into television as an executive producer on Hemlock Grove for Netflix. It was his curiosity about the international drug trade that led him to develop and executive produce his passion project Narcos, and Newman assumed showrunning responsibilities at the end of its first season. Narcos: Mexico started out as the fourth season of Narcos before Netflix decided to make it a stand-alone series.

I recently spoke with Newman about making the show, his involvement in post and another war that’s grabbed a lot of headlines — the one between streaming platforms and traditional cinema.

Do you like being a showrunner?
Yeah! There are aspects of it I really love. I began toward the end of the first season and there was this brief period where I tried not to be the showrunner, even though it was my show. I wasn’t really a writer — I wasn’t in the WGA — so I had a lot of collaborators, but I still felt alone in the driver’s seat. It’s a huge amount of work, from the writing to the shoot and then post, and it never really ends. It’s exhausting but incredibly rewarding.

What are the big challenges of running this show?
If I’d known more about TV at the time, I might have been far more frightened than I was (laughs). The big one is dealing with all the people and personalities involved. We have anywhere between 200 and 400 people working on the show at any given time, so it’s tricky. But I love working with actors, I think I’m a good listener, and any major problems are usually human-oriented. And then there’s all the logistics and moving parts. We began the series shooting in Colombia and then moved the whole thing to Mexico, so that was a big challenge. But the cast and crew are so great, we’re like a big family at this point, and it runs pretty smoothly now.

How far along are you with the second season of Narcos: Mexico?
We’re well into it, and while it’s called Season Two, the reality for us is that it’s the fifth season of a long, hard slog.

This show obviously deals with a lot of locations. How difficult is it when you shoot in Mexico?
It can be hard and grueling. We’re shooting entirely in Mexico — nothing in the States. We shot in Colombia for three years and we went to Panama once, and now we’re all over Mexico — from Mexico City to Jalisco, Puerto Vallarta, Guadalajara, Durango and so on.

It’s also very dangerous subject matter, and one of your location scouts was murdered. Do you worry about your safety?
That was a terrible incident, and I’m not sure whoever shot him even knew he was a location scout on our show. The reality is that a number of incredibly brave journalists, who had nowhere near the protection we have, had already shared these stories — and many were killed for it. So in many ways we’re late to the party.

Of course, you have to be careful anywhere you go, but that’s true of every city. You can find trouble in LA or New York if you are in the wrong place. I don’t worry about the traffickers we depict, as they’re mainly all dead now or in jail, and they seem OK with the way they’re depicted… that it’s pretty truthful. I worry a little bit more about the police and politicians.

Where do you post and do you like the post process?
I absolutely love post, and I think it’s a deeply underrated and under-appreciated aspect of the show. We’ve pulled off far more miracles in post than in any of the writing and shooting. We do all the post at Lantana in LA with the same great team that we’ve had from the start, including post producer Tim King and associate post producer Tanner King.

When we began the series in Colombia, we were told that Netflix didn’t feel comfortable having the footage down there for editing because of piracy issues, and that worked for me. I like coming back to edit and then going back down to Mexico to shoot. We shoot two episodes at a time and cut two at a time. I’m in the middle of doing fixes on Episode 2 and we’re about to lock Episode 3.

Talk about editing. You have several editors, I assume because of the time factor. How does that work?
We have four full-time editors — Iain Erskine, Garret Donnelly, Monty DeGraff and Jon Otazua — who each take separate episodes, plus we have one editor dedicated to the archival package, which is a big part of the show. We’ve also promoted two assistant editors, which I’m very proud of. That’s a nice part of being on a show that’s run for five years; you can watch people grow and move them up the ladder.

You have a huge cast and a lot of moving pieces in each episode. What are the big editing challenges?
We have a fair amount of coverage to sort through, and it’s always about telling the story and the pacing — finding the right rhythm for each scene.

This show has a great score and great sound design. Where do you mix, and can you talk about the importance of sound and music?
We do all the mixing at Technicolor, and we have a great team that includes supervising sound editor Randle Akerson and supervising ADR editor Thomas Whiting. (The team also includes sound effects editors Dino R. DiMuro and Troy Prehmus, dialogue editor David Padilla, music editor Chris Tergesen, re-recording mixers Pete Elia and Kevin Roache and ADR mixer Judah Getz.)

It’s all so crucial. All you have to do is look at a rough edit without any sound or music, and it’s just so depressing. I come from a family of composers, so I really appreciate this part of post, and composer Gustavo Santaolalla has done a fantastic job; the music’s changed a bit since we moved to Mexico. I’m fairly involved with all of it. I get a final playback and maybe I’ll have a few notes, but generally the team has got it right.

In 2017, you formed Screen Arcade with producer Bryan Unkeless, a production company based at Netflix with deals for features and television. I heard you have a new movie you’re producing for Netflix, PWR, with Jamie Foxx and Joseph Gordon-Levitt?
It’s all shot, and we’re just heading into the director’s cut. We’re posting in New York and have our editorial offices there. Netflix is so great to partner with. They care as much about the quality of image and sound as any studio I’ve ever worked with — and I’ve worked with everyone. In terms of the whole process and deliverables, there’s no difference.

It’s interesting because there’s been a lot of pushback against Netflix and other streaming platforms from the studios, purists and directors like Steven Spielberg. Where do you see the war for cinema’s future going?
I think it’ll be driven entirely by audience viewing habits, as it should be. Some of my all-time favorite movies — The Bridge on the River Kwai, Taxi Driver, Sunset Boulevard, Barry Lyndon — weren’t movies I first saw in a movie house.

Cinema exhibition is a business. They want Black Panther and Star Wars, so it’s a commerce argument, not a creative one. With all due respect to Spielberg, no one can dictate viewing habits, and maybe for now they can deny Netflix and streaming platforms Academy Awards, but not forever.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

DP Chat: The Man in the High Castle’s Gonzalo Amat

By Randi Altman

Amazon’s The Man in the High Castle is based on the 1962 Philip K. Dick novel, which asks the question: “What would it look like if the Germans and Japanese had won World War II?” It takes a look at the Nazi and Japanese occupation of portions of the United States and the world. But it’s a Philip K. Dick story, so you know there is more to it than that… like an alternate reality.

The series will premiere its fourth and final season this fall on the streaming service. We recently reached out to cinematographer Gonzalo Amat, who was kind enough to talk to us about workflow and more.

How did you become interested in cinematography?
Since I was very young, I have had a strong interest in photography, and I have been shooting stills for as long as I can remember. Then, when I was maybe 10 or 12 years old, I discovered that movies also had a photographic aspect. I didn’t think about pursuing it until I was already in college studying communications, and that is when I decided to make it my career.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology?
Artistically, I get inspiration from a lot of sources, such as photography, film, literature, painting or any visual medium. I try to curate what I consume, though. I believe that everything we feed our brain somehow shows up in the work we do, so I am very careful about consuming films, books and photography that feed the story I will be working on. I think any creation can be inspiration, anything from a film masterpiece to a picture drawn by a kid: music, performance art, historical photographs or testimonies, too.

About staying on top: I read trade magazines and stay educated through seminars and courses, but at some point, it’s also about using those tools. So I try to test the tools instead of reading about them. Almost any rental place or equipment company will let you try newer tools. If I’m shooting, we try to schedule a test for a particular piece of equipment we want to use, during a light day.

What new technology has changed the way you work?
The main new technology would be the migration of most projects to digital. That has changed the way we work on set and collaborate with the directors, since everyone can now see, on monitors, something closely resembling the final look of the project.

A lot of people think this is a bad thing, but for me it actually allows clearer communication about the concrete aspects of a sometimes very personal vision. Terms like dark, bright or colorful are very subjective, so having a reference is a good starting point for the conversation.

Also, digital technology has helped us use more available light on interiors and less light on exterior nights. Still, it hasn’t reached the latitude of film, where you could just let the windows burn. It’s trickier for exterior day shots, where I think you end up needing more control. I would also say that the evolution of visual effects as a more invisible tool has helped us achieve a lot more from a storytelling perspective and has affected the way we shoot scenes in general.

What are some of your best practices, or rules you try to follow on each job?
Each project is different, so I try to learn what that particular project needs. But there are some time-tested rules that I try to implement. The main one is to always go for the story; every answer is always in the script. Another main rule is communication: being open about asking questions, even if they seem silly. It’s always good to ask.

Another rule is listening to ideas. The people who end up being part of my team are very experienced and often have solutions to the problems that come up. If you are open to ideas, more ideas will come, and people will do their jobs with more intention and commitment. Gratitude, respect, collaboration, communication and a consciousness about safety are all important parts of my process.

Gonzalo Amat on set

Explain your ideal collaboration with the director when setting the look of a project.
Every director is different, so I look at each new project as an opportunity to learn. As a DP, you have to learn and adapt, since through your career you will be asked for different levels of involvement. Because of my interest in storytelling, I personally prefer a bit more of a hands-off approach from directors: talking more about story and concepts, collaborating on how we set up the shots to cover a scene, and doing the same with lighting, discussing moods and concepts that get polished once we are on set. Some directors will be very specific, and that is a challenge because you have to deliver what is inside their heads and hopefully make it better. I still enjoy this challenge, because it also makes you work toward someone else’s vision.

Ideally, developing the look of a project comes from reading the script together and watching movies and references together. This is when you can say “dark like this” or “moody like this” because visual concepts are very subjective, and so is color. From then on, it’s all about breaking up the script and the visual tone and arc of the story, and subsequently all the equipment and tools for executing the ideas. Lots of meetings as well as walking the locations with just the director and DP are very useful.

How would you describe the overarching look of the show?
Basically, the main visual concept of this project is rooted in film noir, and our main references were The Conformist and Blade Runner. As we went along, we added more character-based visual ideas inspired by projects like In the Mood for Love and, for framing, The Insider.

The main idea is to visually portray the worlds of the characters through framing and lighting. Sometimes, we play it the way the script tells us; sometimes we counterpoint visually what it says, so we can make the audience respond in an emotional way. I see cinematography as the visual music that makes people respond emotionally to different moods. Sometimes it’s more subtle and sometimes more obvious. We prefer to not be very intrusive, even though it’s not a “realist” project.

How early did you get involved in the production?
I start four or five weeks before the season. Even if I’m not doing the first episode, I will still be there to prepare new sets and do some tests for new equipment or characters. Preparation is key in a project like this, because once we start with the production the time is very limited.

Did you start out on the pilot? Did the look change from season to season at all?
James Hawkinson shot the pilot, and I came in when the series got picked up. He set up the main visual concepts, and when it went to series, I folded some of the requirements from the studio and the notes from Ridley Scott into the style we see now.

The look has been evolving from season to season, as we feel we can be bolder with the visual language of the show. If you look at the pilot all the way to the end of Season 3, or Season 4, which is filming, you can definitely see a change, even though it still feels like the same project — the language has been polished and distilled. I think we have reached the sweet spot.

Does the look change at all when the timelines shift?
Yes, all of the timelines require a different look and approach with lighting and camera use. Also, the art design and wardrobe changes, so we combine all those subtle changes to give each world, place and timeline a different feel. We have lots of conceptual meetings, and we develop the look and feel of each timeline and place. Once these concepts are established, the team gets to work constructing the sets and needed visual elements, and then we go from there.

This is a period piece. How did that affect the look, if at all?
We have tried to give it a specific and unique look that still feels tied to the time period. So, yes, the fact that this happens in our own version of the ’60s has determined the look, feeling and language of the series. We base our aesthetics on what the real world was in 1945, which is the point where our story diverges to form this alternate world.

The 1960s of the story are not the real 1960s, because there is no USA and no free Europe, so most of the music and wardrobe doesn’t look like the 1960s we know. There are many Nazi and Japanese elements in the visuals that distinguish us from a regular 1960s look, but it still feels period.

How did you go about choosing the right camera and lenses for this project?
Because we had a studio mandate to finish in 4K, the Red One with Zeiss Master Prime lenses was chosen for the pilot, so when I came on we inherited that tech. We stuck with all of this for the first season, but after a few months of shooting we adapted the lens list, the filters and the lighting. For Season 2, we pushed to change to an ARRI Alexa camera, so we ended up adjusting all the equipment around this new camera and its characteristics — such as needing less light, so we ended up with less lighting equipment.

We also added classic Mitchell Diffusion Filters and some zooms. Lighting and grip equipment have been evolving toward less and less equipment since we light less and less. It’s a constant evolution. We also looked at some different lens options in the season breaks, but we haven’t added them because we don’t want to change our budget too much from season to season, and we use them as required.

Any challenging scenes that you are particularly proud of in Season 3?
I think the most challenging scene was the one in the Nebenwelt tunnel set. We had to have numerous meetings about what this tunnel was as a concept and then, based on the concept, find a way to execute it in a visual way. We wanted to make sure that the look of the scene matched the concepts of quantum physics within the story.

I wanted to achieve lighting that felt almost like plasma. We decided to put a mirror at the end of the tunnel with circle lighting right above it. We then created the effect of the space travel with a blast of light, using lightning-strike effects in an elaborate setup that collectively drew more than a million watts. It was a complex setup, but fortunately we had a lot of very talented people come together to execute it.

What’s your go-to gear (camera, lens, mount/accessories) — things you can’t live without?
On this project, I’d say it’s the 40mm lens. I don’t think this project would have the same vibe without this lens. Then, of course, I love the Technocrane, but we don’t use it every day, for budgetary and logistical reasons.

For other projects, I would say the ARRI Alexa camera and the 40mm and handheld accessories. You can do a whole movie with just those two; I have done it, and it’s liberating. But if I had an unlimited budget, I would love to use a Technocrane every day with a stabilized remote head.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Collaboration company Pix acquires Codex

Pix has reached an agreement to acquire London-based Codex, in a move that will enable both companies to deliver a range of new products and services, from streamlined camera capture to post production finishing.

The Pix System is a collaboration tool that provides industry pros with secure access to production content on mobile devices, laptops or TVs from offices, homes or while traveling. The company won an Oscar for its technology in 2019.

Codex products include recorders and media processing systems that transfer digital files and images from the camera to post, and tools for color dynamics, dailies creation, archiving, review and digital asset management.

“Our clients have relied on Pix to protect their material and ideas throughout all phases of production. In Codex, we found a group that similarly values relationships with attention to critical details,” explains Pix founder/CEO Eric Dachs. “Codex will retain its distinct brand and culture, and there is a great deal we can do together for the benefit of our clients and the industry.”

Over the years, Pix and Codex have seen wide industry adoption, building a proven record of delivering value to their clients. Introduced in 2003, Pix soon became a trusted and widely used secure communication and content management provider. The Pix System enables creative continuity and reduces project risk by ensuring that ideas are accurately shared, stored and preserved throughout the entire production process.

“Pix and Codex are complementary, trusted brands used by leading creatives, filmmakers and studios around the world,” says Codex managing director Marc Dando. “The integration of both services into one simplified workflow will deliver the industry a fast, secure, global collaborative ecosystem.”

With the acquisition of Codex, Pix will expand its servicing reach across the globe. Pix founder Dachs will remain as CEO, and Dando will take on the role of chief design officer at Pix, with a focus on existing and new products.

NAB 2019: postPerspective Impact Award winners

postPerspective has announced the winners of our Impact Awards from NAB 2019. Seeking to recognize debut products with real-world applications, the postPerspective Impact Awards are voted on by an anonymous judging body made up of respected industry artists and pros (to whom we are very grateful). It’s working pros who are going to be using these new tools — so we let them make the call.

It was fun watching the user ballots come in and discovering which products most impressed our panel of post and production pros. There are no entrance fees for our awards. All that is needed is the ability to impress our voters with products that have the potential to make their workdays easier and their turnarounds faster.

We are grateful for our panel of judges, which grew even larger this year. NAB is exhausting for all, so their willingness to share their product picks and takeaways from the show isn’t taken for granted. These men and women truly care about our industry and sharing information that helps their fellow pros succeed.

To be successful, you can’t operate in a vacuum. We have found that companies that listen to their users, and make changes/additions accordingly, are the ones that get the respect and business of working pros. They aren’t providing tools they think are needed; they are actively asking for feedback. So, congratulations to our winners, and keep listening to what your users are telling you — good or bad — because it makes a difference.

The Impact Award winners from NAB 2019 are:

• Adobe for Creative Cloud and After Effects
• Arraiy for DeepTrack with The Future Group’s Pixotope
• ARRI for the Alexa Mini LF
• Avid for Media Composer
• Blackmagic Design for DaVinci Resolve 16
• Frame.io
• HP for the Z6/Z8 workstations
• OpenDrives for Apex, Summit, Ridgeview and Atlas

(All winning products reflect the latest version of the product, as shown at NAB.)

Our judges also provided quotes on specific projects and trends that they expect will have an impact on their workflows.

Said one, “I was struck by the predicted impact of 5G. Verizon is planning to have 5G in 30 cities by the end of the year. The improved performance could reach 20x current speeds. This will enable greater leverage of cloud technology.

“Also, AI/ML is said to be the single most transformative technology of our lifetime. Its impact will be felt across the board, from personal assistants and medical technology to the elimination of repetitive tasks. We already employ AI technology in our post production workflow, which has saved tens of thousands of dollars in the last six months alone.”

Another echoed those thoughts on AI and the cloud as well: “AI is growing up faster than anyone can reasonably productize. It will likely be able to do more than first thought. Post in the cloud may actually start to take hold this year.”

We hope that postPerspective’s Impact Awards give those who weren’t at the show, or who were unable to see it all, a starting point for their research into new gear that might be right for their workflows. Another way to catch up? Watch our extensive video coverage of NAB.

NAB 2019: An engineer’s perspective

By John Ferder

Last week I attended my 22nd NAB, and I’ve got the Ross lapel pin to prove it! This was a unique NAB for me. I attended my first 20 NABs with my former employer, and most of those had me setting up the booth visits for the entire contingent of my co-workers and making sure that the vendors knew we were at each booth and were ready to go. Thursday was my “free day” to wander and look at the equipment, cables, connectors and test gear I was interested in.

This year, I’m part of a new project, so I went with a shopping list and a rough schedule with the vendors we needed to see. While I didn’t get everywhere I wanted to go, the three days were very full and very rewarding.

Beck Video IP panel

Sessions and Panels
I also got the opportunity to attend the technical sessions on Saturday and Sunday. I spent my time at the BEITC in the North Hall and the SMPTE Future of Cinema Conference in the South Hall. Beck TV gave an interesting presentation on constructing the IP-based facilities of the future. While SMPTE ST 2110 has been completed and issued, there are still implementation issues, as NMOS is still being developed. Today’s systems are, and for the time being will remain, hybrid facilities. The decision to be made is whether a facility will be built on an IP routing switcher core with gateways to SDI, or on an SDI routing switcher core with gateways to IP.

Although more expensive, building around an IP core would be more efficient and future-proof. Fiber infrastructure design, test equipment and finding engineers who are proficient in both IP and broadcast (the “Purple Squirrels”) are large challenges as well.

A lot of attention was also paid to cloud production and distribution, both in the BEITC and the FoCC. One such presentation, at the FoCC, was on VFX in the cloud with an eye toward the development of 5G. Nathaniel Bonini of BeBop Technology reported that BeBop has a new virtual studio partnership with Avid, and that the cloud allows tasks to be performed in a “massively parallel” way. He expects that 5G mobile technology will facilitate virtualization of the network.

VFX in the Cloud panel

Ralf Schaefer, of the Fraunhofer Heinrich-Hertz Institute, expressed his belief that all devices will be attached to the cloud via 5G, resulting in no cables and no mobile storage media. 5G for AR/VR distribution will render the scene in the network and transmit it directly to the viewer. Denise Muyco of StratusCore provided a link to a virtual workplace: https://bit.ly/2RW2Vxz. She felt that 5G would assist in the speed of the collaboration process between artist and client, making it nearly “friction-free.” While there are always security concerns, 5G would also help the prosumer creators to provide more content.

Chris Healer of The Molecule stated that 5G should help to compress VFX and production workflows, enable cloud computing to work better and perhaps provide realtime feedback for more perfect scene shots, showing line composites of VR renders to production crews in remote locations.

The Floor
I was very impressed with a number of manufacturers this year. Ross Video demonstrated new capabilities of Inception and OverDrive. Ross also showed its new Furio SkyDolly three-wheel rail camera system. In addition, 12G single-link capability was announced for Acuity, Ultrix and other products.

ARRI AMIRA (Photo by Cotch Diaz)

ARRI showed a cinematic multicam system built around the AMIRA camera with a DTS FCA fiber camera adapter back and a base station controllable by a Sony RCP1500 or Skaarhoj RCP. The Sony panel will make broadcast-centric people comfortable, but I was very impressed with the versatility of the Skaarhoj RCP. The system is available with EF, PL or B4 mount lenses.

During the show, I learned from one of the manufacturers that one of my favorite OLED evaluation monitors is going to be discontinued. This was bad news for the new project I’ve embarked on. Then we came across the Plura booth in the North Hall. Plura was showing a new OLED monitor, the PRM-224-3G. It is a 24.5-inch diagonal OLED featuring two 3G/HD/SD-SDI and three analog inputs, built-in waveform monitors and vectorscopes, LKFS audio measurement, PQ and HLG support, 10-bit color depth, 608/708 closed caption monitoring and more, at a very attractive price.

Sony showed the new HDC-3100/3500 3xCMOS HD cameras with global shutter. These have an upgrade program to UHD/HDR with an optional processor board and signal format software, and a 12G-SDI extension kit as well. There is an optional single-mode fiber connector kit to extend the maximum distance between camera and CCU to 10 kilometers. The CCUs work with the established 1000/1500 series of remote control panels and master setup units.

Sony’s HDC-3100/3500 3xCMOS HD camera

Canon showed its new line of 4K UHD lenses. One of my favorite lenses has been the HJ14ex4.3B HD wide-angle portable lens, which I have installed in many of the studios I’ve worked in. Canon showed the CJ14ex4.3B at NAB, and I was even more impressed with it. The 96.3-degree horizontal angle of view is stunning, and the minimization of chromatic aberration is carried over, and perhaps improved, from the HJ version. It features correction data that supports the BT.2020 wide color gamut. It works with the existing zoom and focus demand controllers for earlier lenses, so it’s easily integrated into existing facilities.

Foot Traffic
The official total of registered attendees was 91,460, down from 92,912 in 2018. The Evertz booth was actually easy to walk through at 10 a.m. on Monday, which I found surprising given the breadth of new and interesting products and technologies Evertz had to show this year. The South Hall had the big crowds, but Wednesday seemed emptier than usual, almost like a Thursday.

The NAB announced that next year’s exhibition will begin on Sunday and end on Wednesday. That change might boost overall attendance, but I wonder how adversely it will affect the attendance at the conference sessions themselves.

I still enjoy attending NAB every year, seeing the new technologies and meeting with colleagues and former co-workers and clients. I hope that next year’s NAB will be even better than this year’s.

Main Image: Barbie Leung.


John Ferder is the principal engineer at John Ferder Engineer, currently Secretary/Treasurer of SMPTE, an SMPTE Fellow, and a member of IEEE. Contact him at john@johnferderengineer.com.

NAB 2019: A cinematographer’s perspective

By Barbie Leung

As an emerging cinematographer, I always wanted to attend an NAB show, and this year I had my chance. I found that no amount of research can prepare you for the sheer size of the show floor, not to mention the backrooms, panels and after-hours parties. As a camera operator as well as a cinematographer who is invested in the post production and exhibition end of the spectrum, I found it absolutely impossible to see everything I wanted to or catch up with all the colleagues and vendors I wanted to. This show is a massive and draining ride.

Panasonic EVA1

There was a lot of buzz in the ether about 5G technology. The consensus seems to be that fast, accurate 5G will be the tipping point for implementing a lot of the tech that’s been talked about for years but hasn’t quite taken off yet, including the feasibility of autonomous vehicles and 8K streaming stateside.

It’s hard to deny the arrival of 8K technology while staring at the detail and textures on an 80-inch Sharp 8K professional display. Every roof tile, every wave in the ocean is rendered in rich, stunning detail.

In response to the resolution race, on the image capture end of things, ARRI had already announced and started taking orders for the Alexa Mini LF — its long-awaited entry into the large format game — in the week before NAB.

Predictably, at NAB we saw many lens manufacturers highlighting full-frame coverage. Canon introduced its Sumire Prime lenses, while Fujinon announced the Premista 28-100mm T2.9 full-format zoom.

Sumire Prime lenses

Camera folks, including many ASC members, are embracing large format capture for sure, but some insist the appeal lies not so much in the increased resolution, but rather in the depth and overall image quality.

Meanwhile, back in 35mm sensor land, Panasonic continues its energetic push of the EVA1 camera. Aside from presentations at its booth emphasizing “cinematic” images from this compact 5.7K camera, Panasonic has done a subtle but not-too-subtle job of disseminating the EVA1 throughout the trade show floor. If you’re at the Atomos booth, you’ll find director/cinematographers like Elle Schneider presenting work shot on the EVA1 with Atomos recorders, the camera balanced on a Ronin-S, and if you stop by Tiffen you’ll find an EVA1 being flown next to the Alexa Mini.

I found a ton of motion control at the show, from Shotover’s new compact B1 gyro-stabilized camera system to the affable folks at Arizona-based Defy, who showed off their Dactylcam Pro, an addictively smooth-to-operate cable-suspension rig. The Bolt high-speed Cinebot showed off robotic arms complete with a spinning hologram.

Garrett Brown at the Tiffen booth.

All this new gimbal technology is an ever-evolving game changer. Steadicam inventor Garrett Brown was on hand at the Tiffen booth to show the new M2 sled, which has motors elegantly built into the base. He enthusiastically heralded that camera operators can go faster and more “dangerously” than ever. There was so much motion control that it vied for attention alongside all the talk of 5G, 8K and LED lighting.

Some veterans of the show have said that this year’s show felt “less exciting” than shows of the past eight to 10 years. There were fewer big product launch announcements, perhaps because in past years companies were sometimes unable to fulfill the rush of post-NAB orders for new products for 12 or even 18 months. Vendors have become more conservative about what to hype and more careful about what to promise.

For a new attendee like me, there was more than enough new tech to explore. Above all else, NAB is really about the people you meet. The tech will be new next year, but the relationships you start and build at NAB are meant to last a career.

Main Image: ARRI’s Alexa Mini LF.


Barbie Leung is a New York-based cinematographer and camera operator working in independent film and branded content. Her work has played Sundance, the Tribeca Film Festival and Outfest. You can follow her on Instagram at @barbieleungdp.

Colorfront at NAB with 8K HDR, product updates

Colorfront, which makes on-set dailies and transcoding systems, has rolled out new 8K HDR capabilities and updates across its product lines. The company has also deepened its technology partnership with AJA and entered into a new collaboration with Pomfort to bring more efficient color and HDR management on-set.

Colorfront Transkoder is a post workflow tool for handling UHD and HDR camera, color and editorial/deliverables formats, with recent customers such as Sky, Pixelogic, The Picture Shop and Hulu. With a new HDR GUI, Colorfront’s Transkoder 2019 performs realtime decompression/de-Bayer/playback of Red and Panavision DXL2 8K R3D material, displayed on a Samsung 82-inch Q900R QLED 8K Smart TV in HDR and full 8K resolution (7680 x 4320). The de-Bayering process is optimized through Nvidia GeForce RTX graphics cards with Turing GPU architecture (also available with Colorfront On-Set Dailies 2019), with 8K video output (up to 60p) using AJA Kona 5 video cards.

“8K TV sets are becoming bigger, as well as more affordable, and people are genuinely awestruck when they see 8K camera footage presented on an 8K HDR display,” said Aron Jaszberenyi, managing director, Colorfront. “We are actively working with several companies around the world originating 8K HDR content. Transkoder’s new 8K capabilities — across on-set, post and mastering — demonstrate that 8K HDR is perfectly accessible to an even wider range of content creators.”

Powered by a re-engineered version of Colorfront Engine and featuring the HDR GUI and 8K HDR workflow, Transkoder 2019 supports camera/editorial formats including Apple ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE (High Density Encoding).

Transkoder 2019’s mastering toolset has been further expanded to support Dolby Vision 4.0 as well as Dolby Atmos for the home with IMF and Immersive Audio Bitstream capabilities. The new Subtitle Engine 2.0 supports CineCanvas and IMSC 1.1 rendering for preservation of content, timing, layout and styling. Transkoder can now also package multiple subtitle language tracks into the timeline of an IMP. Further features support fast and efficient audio QC, including solo/mute of individual tracks on the timeline, and a new render strategy for IMF packages enabling independent audio and video rendering.

Colorfront also showed the latest versions of its On-Set Dailies and Express Dailies products for motion pictures and episodic TV production. On-Set Dailies and Express Dailies both now support ProRes RAW, Blackmagic RAW, ARRI Alexa LF/Alexa Mini LF and Codex HDE. As with Transkoder 2019, the new version of On-Set Dailies supports realtime 8K HDR workflows, enabling a set-to-post pipeline from HDR playback through QC and rendering of HDR deliverables.

In addition, AJA Video Systems has released v3.0 firmware for its FS-HDR realtime HDR/WCG converter and frame synchronizer. The update introduces enhanced coloring tools together with several other improvements for broadcast, on-set, post and pro AV HDR production developed by Colorfront.

A new, integrated Colorfront Engine Film Mode offers an ACES-based grading and look creation toolset with ASC Color Decision List (CDL) controls, built-in LOOK selection including film emulation looks, and variable Output Mastering Nit Levels for PQ, HLG Extended and P3 colorspace clamp.
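
The CDL piece of that toolset rests on published math: an ASC CDL is just per-channel slope, offset and power values plus a single saturation value. Here is a minimal sketch of that standard transform in Python (the generic ASC CDL formula with Rec.709 luma weights, not Colorfront’s or AJA’s implementation; the example grade values are made up):

    import numpy as np

    # Rec.709 luma weights used by the ASC CDL saturation step
    LUMA = np.array([0.2126, 0.7152, 0.0722])

    def apply_cdl(rgb, slope, offset, power, saturation=1.0):
        """Apply slope/offset/power per channel, then saturation,
        to a float image of shape (..., 3) in the 0-1 range."""
        rgb = np.asarray(rgb, dtype=np.float64)
        # out = clamp(in * slope + offset) ^ power, per channel
        out = np.clip(rgb * slope + offset, 0.0, 1.0) ** power
        # Saturation: blend each pixel toward its luma value
        luma = (out * LUMA).sum(axis=-1, keepdims=True)
        return np.clip(luma + saturation * (out - luma), 0.0, 1.0)

    # Hypothetical grade: warm the image slightly and desaturate a touch
    graded = apply_cdl([[0.18, 0.18, 0.18]],
                       slope=[1.10, 1.00, 0.95],
                       offset=[0.01, 0.00, -0.01],
                       power=[1.00, 1.00, 1.05],
                       saturation=0.9)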

Since launching in 2018, FS-HDR has been used on a wide range of TV and live outside broadcast productions, as well as motion pictures including Paramount Pictures’ Top Gun: Maverick, shot by Claudio Miranda, ASC.

Colorfront licensed its HDR Image Analyzer software to AJA for AJA’s HDR Image Analyzer in 2018. A new version of AJA HDR Image Analyzer is set for release during Q3 2019.

Finally, Colorfront and Pomfort have teamed up to integrate their respective HDR-capable on-set systems. This collaboration, harnessing Colorfront Engine, will include live CDL reading in ACES pipelines between Colorfront On-Set/Express Dailies and Pomfort LiveGrade Pro, giving motion picture productions better control of HDR images while simplifying their on-set color workflows and dailies processes.

NAB 2019: First impressions

By Mike McCarthy

There are always a slew of new product announcements during NAB week, and this year was no different. As a Premiere editor, the developments from Adobe are usually the ones most relevant to my work and life. Similar to last year, Adobe released its software updates a week before NAB, instead of announcing them at the show for release months later.

The biggest new feature in the Adobe Creative Cloud apps is After Effects’ new “Content Aware Fill” for video. This will use AI to generate image data to automatically replace a masked area of video, based on surrounding pixels and surrounding frames. This functionality has been available in Photoshop for a while, but the challenge of bringing that to video is not just processing lots of frames but keeping the replaced area looking consistent across the changing frames so it doesn’t stand out over time.

The other key part is mask tracking, since masking the desired area is the first step in the process. Certain advances have been made here, but based on tech demos I saw at Adobe Max, more is still to come, and that is what will truly unlock the power of the AI they are trying to tap here. To be honest, I have been a bit skeptical of how much AI will impact film production workflows, since AI-powered editing has been terrible so far, but AI-powered VFX work seems much more promising.
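
Adobe has not published the feature’s internals, but the per-frame half of the problem is easy to illustrate. The naive sketch below applies OpenCV’s still-image inpainting to each frame independently; it is emphatically not Adobe’s method, the file names and static mask are hypothetical, and the result shows exactly the frame-to-frame inconsistency described above:

    import cv2

    # Naive per-frame fill: each frame is inpainted independently, so
    # the filled region flickers over time (the very problem a video
    # fill tool has to solve with temporal consistency).
    cap = cv2.VideoCapture("input.mp4")                  # hypothetical clip
    mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # white = area to fill
    # Assumes the mask matches the frame size and the area is static.

    filled = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Telea inpainting estimates masked pixels from surrounding ones
        filled.append(cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA))
    cap.release()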

Adobe’s other apps got new features as well, with Premiere Pro adding Free-Form bins for visually sorting through assets in the project panel. This affects me less, as I do more polishing than initial assembly when I’m using Premiere. They also improved playback performance for Red files, acceleration with multiple GPUs and certain 10-bit codecs. Character Animator got a better puppet rigging system, and Audition got AI-powered auto-ducking tools for automated track mixing.

Blackmagic
Elsewhere, Blackmagic announced a new version of Resolve, as expected. Blackmagic RAW is supported on a number of new products, but I am not holding my breath to use it in Adobe apps anytime soon, similar to ProRes RAW. (I am just happy to have regular ProRes output available on my PC now.) They also announced a new 8K HyperDeck product that records quad 12G-SDI to HEVC files. While I don’t think that 8K will replace 4K television or cinema delivery anytime soon, there are legitimate markets that need 8K resolution assets. Surround video and VR would be one, as would live background screening instead of greenscreening for composite shots. There’s no image replacement in post, as it’s captured in-camera, and your foreground objects are accurately “lit” by the screens. I expect my next major feature will be produced with that method, but the resolution wasn’t there for the director to use that technology for the one I am working on now (enter 8K…).

AJA
AJA was showing off the new Ki Pro Go, which records up to four separate HD inputs to H.264 on USB drives. I assume this is intended for dedicated ISO recording of every channel of a live-switched event or any other multicam shoot. Each channel can record up to 1080p60 at 10-bit color to H.264 files in MP4 or MOV at bitrates up to 25Mb/s.
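
Those specs make the storage budget easy to sanity-check. A quick sketch, assuming the quoted 25Mb/s ceiling on every channel and ignoring audio and container overhead:

    # Rough storage budget for four-channel ISO recording at 25Mb/s each
    BITRATE_MBPS = 25    # megabits per second, per channel (quoted maximum)
    CHANNELS = 4

    # 25 Mb/s -> 3.125 MB/s -> 11,250 MB/hour -> ~11.25 GB/hour per channel
    gb_per_channel_hour = BITRATE_MBPS / 8 * 3600 / 1000
    print(f"{gb_per_channel_hour:.2f} GB per channel per hour")       # 11.25
    print(f"{gb_per_channel_hour * CHANNELS:.1f} GB total per hour")  # 45.0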

HP
HP had one of their existing Z8 workstations on display, demonstrating the possibilities that will be available once Intel releases their upcoming DIMM-based Optane persistent memory technology to the market. I have loosely followed the Optane story for quite a while, but had not envisioned this impacting my workflow at all in the near future due to software limitations. But HP claims that there will be options to treat Optane just like system memory (increasing capacity at the expense of speed) or as SSD drive space (with DIMM slots having much lower latency to the CPU than any other option). So I will be looking forward to testing it out once it becomes available.

Dell
Dell was showing off its relatively new 49-inch double-wide curved display. The 4919DW has a resolution of 5120×1440, making it equivalent to two 27-inch QHD displays side by side. I find the 32:9 aspect ratio a bit much for my taste, with 21:9 being my preference, but I am sure there are many users who will want the extra width.

Digital Anarchy
I also had a chat with the people at Digital Anarchy about their Premiere Pro-integrated Transcriptive audio transcription engine. Having spent the last three months editing a movie that is split between English and Mandarin dialogue, and that needs to be fully subtitled in both directions, I can see the value in their toolset. It harnesses the power of AI transcription engines online and integrates the results back into your Premiere sequence, creating an accurate script as you edit the processed clips. In my case, I would still have to handle the translations separately once I had the Mandarin text, but this would allow our non-Mandarin-speaking team members to edit the Mandarin assets in the movie. And it will be even more useful when it comes to creating closed captions and subtitles, which we have been doing manually on our current project. I may post further info on that product once I have had a chance to test it out myself.
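
As a rough illustration of the last step such tools automate, here is how timed transcript segments from any speech-to-text engine could be written out as an SRT subtitle file. This is a generic sketch with hypothetical segment data, not Digital Anarchy's actual API.

    # Turn (start_seconds, end_seconds, text) transcript segments into SRT.
    def to_timecode(seconds):
        h, rem = divmod(int(seconds * 1000), 3600000)
        m, rem = divmod(rem, 60000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    segments = [(0.0, 2.4, "Hello and welcome."),      # hypothetical output
                (2.4, 5.1, "Let's look at the edit.")]  # from any STT engine

    with open("dialogue.srt", "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(segments, 1):
            f.write(f"{i}\n{to_timecode(start)} --> {to_timecode(end)}\n{text}\n\n")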

Summing Up
There were three halls of other products to look through and check out, but overall, I was a bit underwhelmed at the lack of true innovation I found at the show this year.

Full disclosure, I was only able to attend for the first two days of the exhibition, so I may have overlooked something significant. But based on what I did see, there isn’t much else that I am excited to try out or that I expect to have much of a serious impact on how I do my various jobs.

It feels like most of the new things we are seeing are commoditized versions of products that were truly innovative when first released and have since only been incrementally fleshed out.

There seems to be much less pioneering of truly new technology and more repackaging of existing technologies into other products. I used to come to NAB to see all the flashy new technologies and products, but now it feels like the main thing I am doing there is a series of annual face-to-face meetings, and that’s not necessarily a bad thing.

Until next year…


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Sony’s NAB updates — a cinematographer’s perspective

By Daniel Rodriguez

With its NAB offerings, Sony once again showed that they have a firm presence in nearly every stage of production, be it motion picture, broadcast media or short form. The company continues to keep up to date with the current demands while simultaneously preparing for the inevitable wave of change that seems to come faster and faster each year. While the introduction of new hardware was kept to a short list this year, many improvements to existing hardware and software were released to ensure Sony products — both new and existing — still have a firm presence in the future.

The ability to easily access, manipulate, share and stream media has always been a priority for Sony. This year at NAB, Sony continued to demonstrate its IP Live, SR Live, XDCAM Air and Media Backbone Hive platforms, which give users the opportunity to manage media all over the globe. IP Live enables remote production: the core processing hardware stays at a central facility, and users can access it from anywhere. This extends to 4K and HDR/SDR streaming as well, which is where SR Live comes into play. SR Live allows a native 4K HDR signal to be processed into full HD and regular SDR signals, and a core improvement is the ability to adjust the conversion curves during a live broadcast to address any issues that arise when converting HDR signals to SDR.

For other media, including XDCAM-based cameras, XDCAM Air allows for the wireless transfer and streaming of most media through QoS services, and turns almost any easily accessible camera with wireless capabilities into a streaming tool.

Media Backbone Hive allows users to access their media anywhere they want. Rather than just being an elaborate cloud service, Media Backbone Hive allows internal Adobe Cloud-based editing, accepts nearly every file type, allows a user to embed metadata and makes searching simple with keywords and phrases that are spoken in the media itself.

For the broadcast market, Sony introduced the HDC-5500, a 4K HDR three-CMOS-sensor camcorder that the company is calling its “flagship” in this market. Offering 4K HDR and high frame rates, the camera also features a global shutter — which is essential for dealing with strobing from lights — so it can capture fast action without the infamous rolling-shutter skew. The camera allows 4K output over 12G-SDI, enabling 4K and HDR monitoring, and as these outputs become the norm, the HDC-5500 will surely be a hit with users, especially with the addition of the global shutter.

Sony is very much a company that likes to focus on the longevity of their previous releases… cameras especially. Sony’s FS7 is a camera that has excelled in its field since its introduction in 2014, and to this day is an extremely popular choice for short form, narrative and broadcast media. Like other Sony camera bodies, the FS7 allows for modular builds and add-ons, and this is where the new CBK-FS7BK ENG Build-Up Kit comes in. Sporting a shoulder mount and ENG viewfinder, the kit includes an extension in the back that allows for two wireless audio inputs, RAW output, streaming and file transfer via Wireless LAN or 4G/LTE connection, as well as QoS streaming (only through XDCAM Air) and timecode input. This CBK-FS7BK ENG Build-Up Kit turns the FS7 into an even more well-rounded workhorse.

The Venice is Sony’s flagship cinema camera, replacing the F65, which is still a brilliant and popular camera that popped up as recently as last year’s Annihilation. The Venice takes a leap further by entering the full-frame, VistaVision market. Boasting top-of-the-line specs and a smaller, more modular build than the F65, the camera isn’t exactly a new release — it came out in November 2017 — but Sony has secured longevity for its flagship at a time when other camera manufacturers are just releasing their own VistaVision-sensor cameras and smaller alternatives.

Sony recently released a firmware update for the Venice that adds X-OCN XT — Sony’s highest form of compressed 16-bit RAW — plus two new imager modes that let the camera sample 5.7K 16:9 in full frame and 6K 2.39:1 in full width, as well as 4K output over 6G/12G-SDI and wireless remote control with the CBK-WA02. Since the Venice is small enough to go on harder-to-reach mounts, wireless control is quickly becoming a feature many camera assistants need. New anamorphic desqueeze modes for 1.25x, 1.3x, 1.5x and 1.8x have also been added, which is huge, since older lenses are constantly being revisited and new ones created, such as the Technovision 1.5x — made famous by Vittorio Storaro on Apocalypse Now (1979) — and the Cooke Full Frame Anamorphic 1.8x. With VistaVision full frame now an easily accessible way of filming, new forms of lensing are becoming common, so anamorphic is no longer limited to 1.3x and 2x squeezes. It’s reassuring to see Sony look out for storytellers who may want to employ less common anamorphic desqueeze ratios.
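
The desqueeze arithmetic itself is simple: the delivered aspect ratio is the captured aspect ratio multiplied by the squeeze factor. A quick illustrative snippet (generic math, not Sony specs):

    # Delivered aspect = captured aspect x anamorphic squeeze factor.
    # Example: a 4:3 capture area with a classic 2x anamorphic yields ~2.67:1.
    def delivered_aspect(capture_w, capture_h, squeeze):
        return (capture_w / capture_h) * squeeze

    for squeeze in (1.25, 1.3, 1.5, 1.8, 2.0):
        print(f"{squeeze}x on 4:3 capture -> {delivered_aspect(4, 3, squeeze):.2f}:1")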

As larger resolutions and higher frame rates become the norm, Sony has introduced the new SxS Pro X cards. A follow-up to the hugely successful SxS Pro+ cards, the new cards boast an incredible transfer speed of 10Gb/s (1,250MB/s) in 120GB and 240GB capacities. This is a huge step up from the previous SxS Pro+ cards, which offered a read speed of 3.5Gb/s and a write speed of 2.8Gb/s. Probably the most exciting part of the new cards is the corresponding SBAC-T40 card reader, which can offload a full 240GB card in about 3.5 minutes.
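
That offload figure roughly checks out against the quoted transfer speed, assuming the full 10Gb/s is sustained. A back-of-envelope check:

    card_gb = 240                       # card capacity in gigabytes
    link_gbps = 10                      # quoted transfer speed in gigabits/s
    seconds = card_gb * 8 / link_gbps   # 8 bits per byte
    print(seconds / 60)                 # ~3.2 minutes, close to the quoted 3.5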

Sony’s newest addition to the Venice camera is the Rialto extension system. Using the Venice’s modular build, the Rialto is a hardware extension that allows you to remove the main body’s sensor and install it into a smaller body unit which is then tethered either nine or 18 feet by cable back to the main body. Very reminiscent of the design of ARRI’s Alexa M unit, the Rialto goes further by being an extension of its main system rather than a singular system, which may bring its own issues. The Rialto allows users to reach spots where it may otherwise prove difficult using the actual Venice body. Its lightweight design allows users to mount it nearly anywhere. Where other camera bodies that are designed to be smaller end up heavy when outfitted with accessories such as batteries and wireless transmitters, the Rialto can easily be rigged to aerials, handhelds, and Steadicams. Though some may question why you wouldn’t just get a smaller body from another camera company, the big thing to consider is that the Rialto isn’t a solution to the size of the Venice body — which is already very small, especially compared to the previous F65 — but simply another tool to get the most out of the Venice system, especially considering you’re not sacrificing anything as far as features or frame rates. The Rialto is currently being used on James Cameron’s Avatar sequels, as its smaller body allows him to employ two simultaneously for true 3D recording whilst giving all the options of the Venice system.

With innovations in broadcast and motion picture production, there is a constant drive to push boundaries and make capture and distribution instant. Building a huge network for distribution, streaming, capture and storage not only secures Sony’s position as the powerhouse it already is, but also ensures its presence in the ever-changing future.


Daniel Rodriguez is a New York based director and cinematographer. Having spent years working for such companies as Light Iron, Panavision and ARRI Rental, he currently works as a freelance cinematographer, filming narrative and commercial work throughout the five boroughs. 

 

Atomos’ new Shogun 7: HDR monitor, recorder, switcher

The new Atomos Shogun 7 is a seven-inch HDR monitor, recorder and switcher that offers an all-new 1500-nit, daylight-viewable, 1920×1200 panel with a 1,000,000:1 contrast ratio and 15+ stops of dynamic range displayed. It also offers ProRes RAW recording and realtime Dolby Vision output. Shogun 7 will be available in June 2019, priced at $1,499.

The Atomos screen uses a combination of advanced LED and LCD technologies that together offer deeper, better blacks the company says rival OLED screens, “but with the much higher brightness and vivid color performance of top-end LCDs.”

A new 360-zone backlight is combined with this new screen technology and controlled by the Dynamic AtomHDR engine to show millions of shades of brightness and color. It allows Shogun 7 to display 15+ stops of real dynamic range on-screen. The panel, says Atomos, is also incredibly accurate, with ultra-wide color and 105% of DCI-P3 covered, allowing for the same on-screen dynamic range, palette of colors and shades that your camera sensor sees.

Atomos and Dolby have teamed up to create Dolby Vision HDR “live” — a tool that allows you to see HDR live on set and carry your creative intent from the camera through into HDR post. Dolby has optimized its target-display HDR processing algorithm, which Atomos now runs inside the Shogun 7. It brings realtime, automatic, frame-by-frame analysis of the Log or RAW video and processes it for optimal HDR viewing on a Dolby Vision-capable TV or monitor over HDMI. Connect Shogun 7 to a Dolby Vision TV and AtomOS 10 automatically analyzes the image, queries the TV and applies the right color and brightness profiles for the maximum HDR experience on the display.

Shogun 7 records images up to 5.7kp30, 4kp120 or 2kp240 slow motion from compatible cameras, in RAW/Log or HLG/PQ over SDI/HDMI. Footage is stored directly to AtomX SSDmini or approved off-the-shelf SATA SSD drives. There are recording options for Apple ProRes RAW and ProRes, Avid DNx and Adobe CinemaDNG RAW codecs. Shogun 7 has four SDI inputs plus an HDMI 2.0 input, with both 12G-SDI and HDMI 2.0 outputs. It can record ProRes RAW in up to 5.7kp30, 4kp120 DCI/UHD and 2kp240 DCI/HD, depending on the camera’s capabilities. Also, 10-bit 4:2:2 ProRes or DNxHR recording is available up to 4Kp60 or 2Kp240. The four SDI inputs enable the connection of most quad-link, dual-link or single-link SDI cinema cameras. Pixels are preserved with data rates of up to 1.8Gb/s.

In terms of audio, Shogun 7 eliminates the need for a separate audio recorder. Users can add 48V stereo mics via an optional balanced XLR breakout cable, or select mic or line input levels, plus record up to 12 channels of 24/96 digital audio from HDMI or SDI. Monitoring selected stereo tracks is via the 3.5mm headphone jack. There are dedicated audio meters, gain controls and adjustments for frame delay.

Shogun 7 features the latest version of the AtomOS 10 touchscreen interface, first seen on the Ninja V.  The new body of Shogun 7 has a Ninja V-like exterior with ARRI anti-rotation mounting points on the top and bottom of the unit to ensure secure mounting.

AtomOS 10 on Shogun 7 has the full range of monitoring tools, including Waveform, Vectorscope, False Color, Zebras, RGB parade, Focus peaking, Pixel-to-pixel magnification, Audio level meters and Blue only for noise analysis.

Shogun 7 can also be used as a portable touchscreen-controlled multi-camera switcher with asynchronous quad-ISO recording. Users can switch up to four 1080p60 SDI streams, record each plus the program output as a separate ISO, then deliver ready-for-edit recordings with marked cut-points in XML metadata straight to your NLE. The current Sumo19 HDR production monitor-recorder will also gain the same functionality in a free firmware update.

There is asynchronous switching, plus genlock in and out for connecting to existing AV infrastructure. Once the recording is over, users can import the XML file into an NLE and the timeline populates with all the edits in place. XLR audio from a separate mixer or audio board is recorded within each ISO, alongside two embedded channels of digital audio from the original source. The program stream always records the analog audio feed, as well as a second track that switches between the digital audio inputs to match the switched feed.

DP Chat: The Village cinematographer William Rexer

By Randi Altman

William Rexer is a cinematographer who has worked on documentaries, music videos, commercials and narratives — both comedies and dramas. He’s frequently collaborated with writer/director Ed Burns (Friends With Kids, Newlyweds, Summertime). Recently, he’s directed photography on several series including The Get Down, The Tick, Sneaky Pete and the new NBC drama The Village.

He sat down with us to answer some questions about his love of cinematography, his process and The Village, which follows a diverse group of people living in the same apartment building in Brooklyn.

The set of The Village. Photo: Peter Kramer

How did you become interested in cinematography?
When I was a kid, my mother had a theater company and my father was an agent/producer. I grew up sleeping backstage. When I was a teen, I was running a followspot (light) for Cab Calloway. I guess there was no escaping some job in this crazy business!

My father would check out 16mm movies from the New York City public library — Chaplin, Keaton — and that would be our weekend night entertainment. When I was in 8th grade, an art cinema started in my hometown; it is now called the Cinema Arts Center in Huntington, New York. It showed cinema from all over the world, including Bergman, Fellini, Jasny. I began to see the world through films and fell in love.

What inspires you artistically?
I love going to the movies, the theater and art galleries. Films like Roma and Cold War make me have faith in the world. What mostly inspires me is checking out what my peers are up to. Tim Ives, ASC, and Tod Campbell are two friends that I love to watch. Very impressive guys. David Mullen, ASC, and Eric Moynier are doing great work on Mrs. Maisel. I guess I would say watching my peers and their work inspires me.

NBC’s The Village

How do you stay on top of advancing technology tools for achieving your vision on set or in post?
The cameras and post workflow change every few months. I check in with the rental houses to stay on top of gear. Panavision, Arri Rental, TCS, Keslow and Abel are great resources. I also stay in touch with post houses. My friends at Harbor and Technicolor are always willing to help create LUTs, evaluate cameras and lenses.

Has any recent or new technology changed the way you work?
The introduction of the Red One MX and the ARRI D-20 changed a lot of things. They made shooting high-quality images affordable and cleaner for the environment. It put 35mm size sensors out there and gave a lot of young people a chance to create.

The introduction of large-format cameras, the Red Monstro 8K VV, the ARRI LF and 65, and the Sony Venice have made my life more interesting. All these sensors are fantastic, and the new color spaces we get to work with like Red’s IPP2 are truly astounding. I like having control of depth of field and controlling where the audience looks.

What are some of your best practices or rules you try to follow on each job?
I try my best to shoot tests, create a LUT in the test phase and take the footage through the entire process to see how it holds up. I make sure that all my monitors are calibrated at the post house to match; that gets us all on the same page. Then I’ll adjust the LUT after a few days of shooting in the field, using the LUT like a film stock and lighting to it. I watch dailies, give notes and try to get in with the colorist/timer and work with them.

Will Rexer (center) with showrunner Mike Daniels and director Minkie Spiro. Photo: Jennifer Rhoades

Tell us about The Village. How would you describe the general look of the show?
The look of The Village is somewhere between romantic realism and magical realism. It is a world that could be. Our approach was to walk the line between the real and the possible — warm, inviting and full of potential.

Can you talk about your collaboration with the showrunner when setting the look of a project?
Mike Daniels, Minkie Spiro, Jessica Rhoades and I looked at a ton of photographs and films to find our look. The pilot designer Ola Maslik and the series designer Neil Patel created warm environments for me.

How early did you get involved in the production?
I had three weeks of prep for the pilot, and I worked with Minkie and Ola finding locations and refining the look.

How did you go about choosing the right camera and lenses to achieve the look?
The show required a decent amount of small gimbal work, so we chose the Red Monstro 8K VV using Red’s IPP2 color space. I love the camera: great look, great functionality. And my team has customized the accessories to make our work on set effortless.

We used the Sigma Cine PL Primes with 180mm Leica R, Nikon 200 T2, Nikkor Zero Optik 58mm T1.2, Angenieux HR 25-250mm and some other special optics. I looked at other full-frame lenses but really liked the Sigma lenses and their character. These lenses are a nice mix of roundness and warmth and consistency.

What was your involvement with post? Who supported your vision from dailies through final grade? Have you worked with this facility and/or colorists on past projects?
Dailies were through Harbor Picture Company. I love these guys. I have worked with Harbor since they started, and they are total pros. They have helped me create LUTs for many projects, including Public Morals.

The final post for The Village was done in LA at NBC/Universal. Craig Budrick has done a great job coloring the show. I do wish that I could be in the room, but that’s not always possible.

What’s most satisfying to you about this show?
I am very proud of the show and its message. It’s a romantic vision of the world. TV and cinema often go to the dark side. I like going there, but I do think we need to be reminded of our better selves and our potential.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Review: Mzed.com’s Directing Color With Ollie Kenchington

By Brady Betzel

I am constantly looking to educate myself, no matter the source — or subject. Whether it’s an eSports editor on YouTube showing how to make a transition in Adobe After Effects or Warren Eagles teaching color correction in Blackmagic’s DaVinci Resolve on FXPHD.com, I’m always beefing up my skills. I even learn from bad tutorials — they teach you what not to do!

But when you come across a truly remarkable learning experience, it is only fair to share it with the rest of the world. Last year I saw an ad for an MZed.com course called “Directing Color With Ollie Kenchington” and was immediately interested. These days you can find pretty much any technical tutorial you can dream of on YouTube, but truly professional, theory-based series on the level of higher education are very hard to come by. Even the ones you pay for aren’t always worth the price of admission, which is a huge letdown.

Ollie sharing his wisdom.

Once I gained access to MZed.com I wanted to watch every educational series they had. From lighting techniques with ASC member Shane Hurlbut to the ARRI Amira Camera Primer, there are over 150 hours of education available from industry leaders. However, I found my way to Directing Color…

I am often asked if I think people should go to college or a film school. My answer? If you have the money and time, you should go to college followed by film school (or do both together, if the college offers it). Not only will you learn a craft, but you will most likely spend hundreds of hours studying and visualizing the theory behind it. For example, when someone asks me about the science behind camera lenses, I can confidently answer them thanks to my physics class based on lenses and optics from California Lutheran University (yes, a shameless plug).

In my opinion, a two-, four- or even 10-year education allows me to live in the grey. I am comfortable arguing for both sides of a debate, as well as the options in between — the grey. I feel like my post-high school education really allowed me to recognize and thrive in the nuances of debate. It leaves me playing devil’s advocate maybe a little too much, but also having civil and productive discussions with others without being demeaning or nasty — something we are actively missing these days. So if living in the grey is for you, I really think a college education supplemented by online or film school education is valuable (assuming you decide, as I did, that the debt is worth it).

However, I know that is not an option for everyone since it can be very expensive — trust me, I know. I am almost done paying off my undergraduate fees while still paying off my graduate ones, which I am still two or three classes away from finishing. That being said, Directing Color With Ollie Kenchington is the only online education series I have seen so far that is on the same level as some of my higher education classes. Not only is the content beautifully shot and color corrected, but Ollie gives confident and accessible lessons on how color can be used to draw the viewer’s attention to multiple parts of the screen.

Ollie Kenchington is a UK-based filmmaker who runs Korro Films. From the trailer of his Directing Color series, you can immediately see the beauty of Ollie’s work and know that you will be in safe hands. (You can read more about his background here.)

The course raises the online education bar and will elevate the audience’s idea of professional insight. The first module, “Creating a Palette,” covers the thinking behind creating a color palette for a small catering company. You may even want to start with the last bonus module, “Ox & Origin,” to get a look at what Ollie will be creating throughout the seven modules and roughly an hour and a half of content.

While Ollie goes over “looks,” the beauty of this course is that he walks through his internal thought process, including deciding on palettes based on color theory. He didn’t just choose teal and orange because they look good; he chose his color palette based on complementary colors.

Throughout the course Ollie covers some technical knowledge, including calibrating monitors and cameras, white balancing and shooting color charts to avoid wrong color balance in post. This is so important because if you skip these simple steps, your color correction session will be much harder. And wasting time fixing incorrect color balance takes time away from the fun of color grading. All of this is done through easily digestible modules that range from two to 20 minutes.
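
To make the gray-card point concrete, here is a tiny sketch (mine, not from the course) of the correction a colorist can derive from a sampled gray patch: per-channel gains that neutralize a color cast.

    patch = (182, 170, 140)     # hypothetical RGB sampled off a gray card
    target = sum(patch) / 3     # aim each channel at the patch's average
    gains = tuple(target / c for c in patch)
    print(gains)                # multiply every pixel's channels by these
    # A full color chart lets you correct more than neutral balance, but
    # a bad white balance alone already costs this kind of fixing time.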

The modules include Creating a Palette; Perceiving Color; Calibrating Color; Color Management; Deconstructing Color 1 – 3 and the Bonus Module Ox & Origin.

Without giving away the entire content of Ollie’s catalog, my favorite modules in this course are the on-set ones. Maybe it’s because I am not on set that often, but I found the “thinking out loud” about colors helpful. Knowing why reds represent blood, and why they raise your heart rate a little, is fascinating. He even goes through practical examples of color use in films such as Whiplash.

In the final “Deconstructing Color” modules, Ollie goes into a color bay (complete with practical candle backlighting) and dives into Blackmagic’s DaVinci Resolve. He takes the course full circle: a scene he had to rush through on set can now be fixed in Resolve by adding light to different sides of someone’s face, and because he took the time to set up proper lighting on set, he can focus on other parts of his commercial in the grade.

Summing Up
I want to watch every tutorial MZed.com has to offer, from “Philip Bloom’s Cinematic Masterclass” to Ollie’s other course, “Mastering Color.” Unfortunately, as of this review, you would have to pay an additional fee to watch the “Mastering Color” series. It seems an unfortunate trend in online education to charge a subscription and then charge more when an extra-special class comes along, but this class will supposedly be released to standard subscribers in due time.

MZed.com has two subscription models: MZed Pro, which is $299 for one year of streaming the standard courses, and MZed Pro Premium, which is $399 and includes the standard courses for one year plus the ability to choose one “Premium” course.

“Philip Bloom’s Cinematic Masterclass” was the Premium course I was signed up for initially, but you can decide between it and the “Mastering Color” course. You will not be disappointed regardless of which one you choose. Even their first course, “How to Photograph Everyone,” is chock-full of lighting and positioning instruction that can be applied to many aspects of videography.

I was really impressed with Directing Color With Ollie Kenchington, and if the other courses are this good, MZed.com will definitely become a permanent bookmark for me.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Atomos offering Shinobi SDI camera-top monitor

On the heels of its successful Shinobi launch in March, Atomos has introduced the Atomos Shinobi SDI, a super-lightweight, five-inch HD-SDI and 4K HDMI camera-top monitor. Its color-accurate calibrated display makes it a suitable compact HDR and SDR reference monitor. It targets the professional video creator who uses or owns a variety of cameras and camcorders and needs the flexibility of SDI or HDMI and accurate high-brightness and HDR monitoring, but does not require external recording capability.

Shinobi SDI features a compact, durable body combined with an ultra-clear, ultra-bright, daylight-viewable 1000-nit display. The anti-reflection, anti-fingerprint screen has a pixel density of 427PPI (pixels per inch) and is factory calibrated for color accuracy, with the option for in-field calibration providing ongoing accuracy. Thanks to the HD-SDI input and output, plus a 4K HDMI input, it can be used in most productions.

This makes Shinobi SDI a useful companion for high-end cinema and production cameras, ENG cameras, handheld camcorders and any other HD-SDI-equipped source.

“Our most requested product in recent times has been a stand-alone SDI monitor. We are thrilled to be bringing the Atomos Shinobi SDI to market for professional video and film creators,” says Jeromy Young, CEO of Atomos.

ARRI’s new Alexa Mini LF offers large-format sensor in small footprint

Offering a large-format sensor in a small form factor, ARRI has introduced its new Alexa Mini LF camera, which combines the compact size and low weight of the Alexa Mini with the large-format Alexa LF sensor. According to the company, it “provides the best overall image quality for large-format shooting” and features three internal motorized FSND filters, 12V power input, extra power outputs, a new Codex Compact Drive and a new MVF-2 high-contrast HD viewfinder.

The new Alexa Mini LF cameras are scheduled to start shipping in mid-2019.

ARRI’s large-format camera system, launched in 2018, is based around a 4.5K version of the Alexa sensor, which is twice the size and offers twice the resolution of the Alexa cameras in 35 format. This allows for large-format looks while retaining the Alexa sensor’s natural colorimetry, pleasing skin tones and low noise, and it is well suited to HDR and wide color gamut workflows.

Alexa Mini LF now joins the existing system elements: the high-speed capable Alexa LF camera; ARRI Signature Prime lenses; LPL lens mount and PL-to-LPL adapter; and Lens Data System LDS-2. The combined feature sets and form factors of ARRI’s two large-format cameras encompass all on-set requirements.

The Alexa Mini LF is built for use in challenging professional conditions. It features a hard-wearing carbon body and a wide temperature range of -4°F to +113°F, and each Alexa Mini LF is put through a rigorous stress test before leaving the ARRI factory and is then supported by ARRI’s global service centers.

While Alexa Mini LF is compatible with almost all Alexa Mini accessories, the company says it brings significant enhancements to the Mini camera design. Among them are extra connectors, including regulated 12V and 24V accessory power; a new 6-pin audio connector; built-in microphones; and improved WiFi.

Six user buttons are now in place on the camera’s operating side, and the camera and viewfinder each have their own lock button, while user access to the recording media, and VF and TC connectors, has been made easier.

Alexa Mini LF allows internal recording of MXF/ARRIRAW or MXF/Apple ProRes in a variety of formats and aspect ratios, and features the new Compact Drive recording media from Codex, an ARRI technology partner. This small and lightweight drive offers 1TB of recording. It comes with a USB-C Compact Drive reader that can be used without any extra software or licenses on Mac or Windows computers. In addition, a Compact Drive adapter can be used in any dock that accepts SXR Capture Drives, potentially more than doubling download speeds.

Another development from Codex is Codex High Density Encoding (HDE), which uses sophisticated, lossless encoding to reduce ARRIRAW file sizes by around 40% during downloading or later in the workflow. This lowers storage costs, shortens transfer times and speeds up workflows.

HDE is free for use with Codex Capture or Compact Drives, openly shared and fast: ARRIRAW Open Gate 4.5K can be encoded at 24fps on a modern MacBook Pro.

ARRI’s new MVF-2 viewfinder for the Alexa Mini LF uses the same high-contrast HD OLED display, color science and ARRICAM eyepiece as the Alexa LF’s EVF-2 viewfinder, allowing optimal judgment of focus, dynamic range and color on set.

In addition, the MVF-2 features a large, four-inch flip-out monitor that can display the image or the camera control menu. The MVF-2 can be used on either side of the camera and connects via a new CoaXPress VF cable that has a reach of up to 10m for remote camera operations. It features a refined user interface, a built-in eyepiece lens heater for de-fogging and a built-in headphones connector.

Sony’s RX0 II ultra-compact, rugged camera with 4K, flip-up screen

Sony has added to its camera lineup with the launch of the light and compact RX0 II — what some might call a “GoPro on steroids,” with a higher price tag of approximately $700. It will ship in April. Building upon the waterproof, dustproof, shockproof, crushproof and ultra-compact qualities of the original RX0, the new model offers internal 4K recording, an adjustable LCD screen that tilts upward 180 degrees and downward 90 degrees and the ability to work underwater, as well as new image stabilization solutions for video recording.

At the heart of the RX0 II sits a 1.0-type stacked 15.3-megapixel Exmor RS CMOS image sensor and an advanced Bionz X image processing engine that offer enhanced color reproduction. It has been optimized for both stills and movie shooting across a wide sensitivity range of ISO 80-12800. The Zeiss Tessar T 24mm F4 fixed wide-angle lens has a shortened minimum focusing distance of 20cm.

Measuring 59mm x 40.5mm x 35mm and weighing 132g, the RX0 II fits easily into a pocket. It is waterproof down to 10 meters, dustproof, shockproof from up to two meters and crushproof under up to 200kg of force.

The RX0 II offers 4K 30p internal movie recording with full pixel readout and no pixel binning, which allows it to collect approximately 1.7 times the amount of data required for 4K video. By oversampling this data, the appearance of moiré and jaggies is reduced to deliver smooth, high-quality 4K footage with detail and depth. Using the recently introduced Sony Imaging Edge mobile application, this footage can be transferred to a smartphone, edited and shared easily across social networks.
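
A back-of-envelope check of that claim (my arithmetic, not Sony's published math): UHD needs roughly 8.3 megapixels per frame, so an effective movie readout of about 14 megapixels works out to about 1.7x.

    uhd_pixels = 3840 * 2160            # ~8.29 million pixels per UHD frame
    sensor_readout = 14_100_000         # assumed effective movie-area readout
    print(sensor_readout / uhd_pixels)  # ~1.7x oversampling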

The RX0 II introduces in-body electronic stabilization for steady footage, even when shot handheld. This can be enhanced even further when footage is exported to a smartphone or tablet running the Movie Edit add-on, where the additional information captured during filming can be processed to produce a video with gimbal-like smoothness.

An additional new feature that can also be accessed via the Sony Movie Edit add-on is Intelligent Framing, where the selected subject is kept in the center of the frame and image distortion is corrected in a final edit. Depending on where the video will be shared, a variety of aspect ratios can then be selected.

Additional movie features of the RX0 II include super-slow-motion recording at up to 1,000fps, uncompressed 4K HDMI output and simultaneous proxy movie recording. Users can use Picture Profile, S-Log 2 and Timecode/User Bit functions to make sure their final result exactly matches their creative vision.

The RX0 II can also be used as a multi-camera option. Up to five RX0 II cameras can be controlled wirelessly using Sony Imaging Edge Mobile application and between five and 50 cameras can be controlled via an access point (scheduled for summer 2019). The RX0 II is also compatible with the Camera Control Box CCB-WD1, which enables up to 100 cameras to be connected and controlled in a wired multi-camera setup.

DP Tom Curran on Netflix’s Tidying Up With Marie Kondo

By Iain Blair

Forget all the trendy shows about updating your home décor or renovating your house. What you really need to do is declutter. And the guru of decluttering is Marie Kondo, the Japanese star of the hot Netflix show Tidying Up With Marie Kondo.

The organizational expert became a global star when her first book, 2014’s “The Life-Changing Magic of Tidying Up: The Japanese Art of Decluttering and Organizing,” was translated into English, becoming a New York Times bestseller. Her follow-up was 2016’s “Spark Joy: An Illustrated Master Class on the Art of Organizing and Tidying Up.”

Tom Curran

Clearly, people everywhere need to declutter, and Kondo’s KonMari Method is the answer for those who have too much stuff. As she herself puts it, “My mission is to organize the world and spark joy in people’s lives. Through this partnership with Netflix, I am excited to spread the KonMari Method to as many people as possible.”

I recently spoke with Tom Curran, the cinematographer of the Kondo show. His extensive credits include Ugly Delicious for Netflix, Fish My City for National Geographic and 9 Months for Facebook, which is hosted by Courteney Cox. Curran has an Emmy on his mantle for ABC Sports’ Iditarod Sled Dog Race.

Let’s start with the really important stuff. Do you have too much clutter? Has Marie’s philosophy helped you?
(Laughs). It has! I think we all have too much stuff. To be honest, I was a little skeptical at first about all this. But as I spent time with her and educated myself, I began to realize just how much there is to it. I think that it particularly applies to the US, where we all have so much and move so quickly.

In her world, you come to a pause and evaluate all of that, and it’s really quite powerful. And if you follow all of her steps, you can’t do it quickly. It forces you to slow down and take stock. My wife is an editor, and we’re both always so busy, but now we take little pockets of time to attack different parts of the house and the clutter we have. It’s been really powerful and helpful to us.

Why do you think her method and this show have resonated so much with people everywhere?
Americans tend to get so busy and locked into routines, and Japan’s culture is very different. I’ve worked there quite a bit, and she brings this whole other quality to the show. She’s very thoughtful and kind. I think the show does a good job of showing that, and you really feel it. An awful lot of current TV can be a little sharp and mean, and there’s something old-fashioned about this, and audiences really respond. She doesn’t pass judgment on people’s messy houses — she just wants to help.

You’re well-known for shooting in extreme conditions and locations all over the world. How did this compare?
It was radically different in some ways. Instead of vast and bleak landscapes, like Antarctica, you’re shooting the interiors of people’s homes in LA. Working with EP Hend Baghdady and showrunner Bianca Barnes-Williams, we set out to redefine how to showcase these homes. We used some of the same principles, like how to incorporate these characters into their environment and weave the house into the storyline. That was our main goal.

What were the challenges of shooting this show?
A big one was keeping ourselves out of the shot, which isn’t so easy in a small space. Also, keeping Marie central to all the storytelling. I’ve done several series before, shooting in people’s homes, like Little People, Big World, where we stayed in one family’s home for many years. With this show, the crew was walking into homes for a far shorter time, and none of the subjects were actors. They were baring their souls.

Cleaning up all their clutter before we arrived would have been contrary to what the show’s all about, so you’re seeing all the ugly. My background’s in cinéma vérité, and a lot of this was stripping back the way these types of unscripted shows are usually done — with multiple cameras. We did use multiple cameras, but often it was just one, as you’re in a tiny room where there’s no space for another, and we’re shooting wide, since the main character in most stories was the home.

As well as being a DP you’re also the owner of Curran Camera, Inc. Did you supply all the camera gear for this through your company?
Sometimes I supply equipment for a series, sometimes not. It all depends on what the project needs. On this, when Hend, Bianca and I began discussing different camera options, I felt it wasn’t a series we could shoot on prime lenses, but we wanted the look that primes would bring. We ended up working with Fujinon Cabrio Cine Zooms and Canon cameras, which gave us a really filmic look, and we got most of our gear from T-stop Camera Rentals in LA. In fact, the Fujinon Cabrio 14-35mm became the centerpiece of the storytelling in the homes because of its wide lens capture — which was crucial for scenes with closets and small rooms and so on.

I assume all the lighting was a big challenge?
You’re right. It was a massive undertaking because we wanted to follow all the progress in each home. And we didn’t want it to be a dingy, rough-looking show, especially since Marie represented this bright light that’d come into people’s homes and then it would get brighter and brighter. We ended up bringing in all the lighting from the east coast, which was the only place I could source what I needed.

For Marie’s Zen house we had a different lighting package, with dozens of small fresnels, because the space was so calm and still. For the homes and all the movement, we used about 80 Flex lights — paper-thin LED lights that are easily dimmable and quick to install and take down. Even though we had a pretty small crew, we were able to achieve a pretty consistent look.

How did the workflow operate? How did you deal with dailies?
Our post supervisor Joe Eckardt was pretty terrific, and I’d spend a lot of time going through all the dailies and then give a big download to the crew once a week. We had six to eight camera operators and three crews with two cameras and additional people some days. We had so much footage, and what ended up on screen is just a fraction of what we shot. We had a lot of cards at the end of every day, and they’d be loaded into the post system, and then a team of 16 editors would start going through it all.  Since this was the first season, we were kind of doing it on the fly and trying different techniques to see what worked best.

Color correction and the mix was handled by Margarita Mix. How involved were you in post and the look of the show?
I was very involved, especially early on. Even in the first month or so we started to work on the grade a bit to get some patterns in place; that helped carry us through. We set out to capture a really naturalistic look, and a lot of the homes were very cramped, so we had to keep the wrong lighting look looking wrong, so to speak. I’m pretty happy with what we were able to do. (Margarita Mix’s Troy Smith was the colorist.)

How important is post to you as a DP?
It’s hard to overstate. I’d say it’s not just a big piece of the process, it is the process. When we’re shooting, I only really think about three things: One, what is the story we’re trying to tell? Two, how can we best capture that, particularly with non-actors? How do you create an environment of complete trust where they basically just forget about you? How do we capture Marie doing her thing and not break the flow, since she’s this standup performer? Three, how do we give post what they need? If we’re not giving editorial the right coverage, we’re not doing our job. That last one is the most important to me — since I’m married to an editor, I’m always so aware of post.

The first eight shows aired in January. When is the next season?
We’ve had some light talks about it, and I assume since it’s so popular we’ll do more, but nothing’s finalized yet. I hope we do more.  I love this show.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Red Ranger all-in-one camera system now available

Red Digital Cinema has made its new Red Ranger all-in-one camera system available to select Red authorized rental houses. Ranger includes Red’s cinematic full-frame 8K sensor Monstro in an all-in-one camera system, featuring three SDI outputs (two mirrored and one independent) allowing two different looks to be output simultaneously; wide-input voltage (11.5V to 32V); 24V and 12V power outs (two of each); one 12V P-Tap port; integrated 5-pin XLR stereo audio input (Line/Mic/+48V Selectable); as well as genlock, timecode, USB and control.

Ranger is capable of handling heavy-duty power sources and boasts a larger fan for quieter and more efficient temperature management. The system is currently shipping in a gold mount configuration, with a v-lock option available next month.

Ranger captures 8K RedCode RAW up to 60fps full-format, as well as Apple ProRes or Avid DNxHR formats at 4K up to 30fps and 2K up to 120fps. It can simultaneously record RedCode RAW plus Apple ProRes or Avid DNxHD or DNxHR at up to 300MB/s write speeds.
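
For a sense of what those write speeds mean for media capacity, a rough calculation with illustrative numbers (not Red's published figures):

    rate_mb_s = 300          # maximum combined write rate in megabytes/s
    capacity_gb = 960        # hypothetical magazine size
    minutes = capacity_gb * 1000 / rate_mb_s / 60
    print(f"{minutes:.0f} minutes to fill")   # ~53 minutes at full rate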

To enable an end-to-end color management and post workflow, Red’s enhanced image processing pipeline (IPP2) is also included in the system.

Ranger ships complete, including:
• Production top handle
• PL mount with supporting shims
• Two 15mm LWS rod brackets
• Red Pro Touch 7.0-inch LCD with 9-inch arm and LCD/EVF cable
• LCD/EVF adaptor A and LCD/EVF adaptor D
• 24V AC power adaptor with 3-pin 24V XLR power cable
• Compatible Hex and Torx tools

Shooting, posting New Republic’s Indie film, Sister Aimee

After a successful premiere at the Sundance Film Festival, New Republic Studios’ Sister Aimee screened at this month’s SXSW. The movie tells the story of an infamous American evangelist of the 1920s, Sister Aimee Semple McPherson, who gets caught up in her lover’s dreams of Mexico and finds herself on a road trip toward the border.

Sister Aimee shot at the newly renovated New Republic Studios near Austin, Texas, over two and a half weeks. “Their crew used our 2,400-square-foot Little Bear soundstage, our 3,000-square-foot Lone Wolf soundstage, our bullpen office space and numerous exterior locations in our backlot,” reports New Republic Studios president Mindy Raymond, adding that the Sister Aimee production also had access to two screening rooms with 5.1 surround sound, HDMI hookups to 4K monitors and theater-style leather chairs to watch dailies. The film also hit the road, shooting in the New Mexico desert.

L-R: Directors Samantha Buck, Marie Schlingmann at SXSW. Credit: Harrison Funk

Co-written and co-directed by Samantha Buck and Marie Schlingmann, the movie takes some creative license with the story of Aimee. “We don’t look for factual truth in Aimee’s journey,” they explain. “Instead we look for a more timeless truth that says something about female ambition, the female quest for immortality and, most of all, the struggle for women to control their own narratives. It becomes a story about storytelling itself.”

The film, shot by cinematographer Carlos Valdes-Lora at 3.2K ProRes 4444 XQ on an Arri Alexa Mini, was posted at Dallas and Austin-based Charlieuniformtango.

We reached out to the DP and the post team to find out more.

Carlos, why did you choose the package of the Alexa and Cooke Mini S4 Primes?
Carlos Valdes-Lora: In early conversations with the directors, we all agreed that we didn’t want Sister Aimee to feel like a traditional period movie. We didn’t want to use softening filters or vintage lenses. We aimed instead for clear images, deep focus and a rich color palette that remains grounded in the real world. We felt that this would lend the story a greater sense of immediacy and draw the viewer closer to the characters. Following that same thinking, we worked very extensively with the 25mm and 32mm, especially in closeups and medium closeups, emphasizing accessibility.

The Cooke Mini S4s are a beautiful and affordable set (relative to our other options). We like the way they give deep dimensionality and warmth to faces, and how they create a slightly lower-contrast image compared to the other modern lenses we looked at. They quickly became the right choice for us, striking the right balance between quality, size and value.

The Cookes paired with the Alexa Mini gave us a lightweight camera system with a very contained footprint, and we needed to stay fast and lean due to our compressed shooting schedule and often tight shooting quarters. The Chapman Cobra dolly was a big help in that regard as well.

What was the workflow to post like?
Charlieuniformtango producers Bettina Barrow, Katherine Harper, David Hartstein: Post took place primarily between Charlieuniformtango’s Dallas and Austin offices. Post strategizing started months before the shoot, and active post truly began when production began in July 2018.

Tango’s Evan Linton handled dailies brought in from the shoot, working alongside editor Katie Ennis out of Tango’s Austin studio, to begin assembling a rough cut as shooting continued. Ennis continued to cut at the studio through August with directors Schlingmann and Buck.

Editorial then moved back to the directors’ home state of New York to finish the cut for Sundance. (Editor Ennis, who four-walled out of Tango Austin for the first part of post, went to New York with the directors, working out of a rented space.)

VFX and audio work started early at Tango, with continuously updated timelines coming from editorial, working to have certain locked shots also finished for the Sundance submission, while saving much of the cleanup and other CG heavy shots for the final picture lock.

Tango audio engineer Nick Patronella also tackled dialogue edit, sound design and mix for the submission out of the Dallas studio.

Can you talk about the VFX?
Barrow, Harper, Hartstein: The cut was locked in late November, and the heavy lifting really began. With delivery looming, Tango’s Flame artists Allen Robbins, Joey Waldrip, David Hannah, David Laird, Artie Peña and Zack Smith divided effects shots, which ranged from environmental cleanup, period-specific cleanup, beauty work such as de-aging, crowd simulation, CG sign creation and more.

(L-R) Tango’s Artie Peña, Connor Adams, Allen Robbins in one of the studio’s Flame suites.

3D artist Connor Adams used Houdini, Mixamo and Maya to create CG elements and crowds, with final comps being done in Nuke and sent to Flame for final color. Over 120 VFX shots were handled in total, and Flame was the go-to for effects. Color and much of the effects happened simultaneously. It was a nice workflow, as the project didn’t have major VFX needs that would have impacted color.

What about the color grade?
Barrow, Harper, Hartstein: Directors Buck and Schlingmann and DP Valdes-Lora worked with Tango colorist Allen Robbins to craft the final look of the film — with the color grade also done in Flame. The trio had prepped shooting for a Kodachrome-style look, especially for the exteriors, but really overall. They found important reference in selections of Robert Capa photographs.

Buck, Schlingmann and Valdes-Lora responded mostly to Kodachrome’s treatment of blues, browns, tans, greens and reds (while staying true to skin tone), but also to their gamma values, not being afraid of deep shadows and contrast wherever appropriate. Valdes-Lora wanted to avoid lighting/exposing to a custom LUT on set that would reflect this kind of Kodachrome look, in case they wanted to change course during the process. With the help of Tango, however, they discovered that by dialing back the Capa look it grounded the film a little more and made the characters “feel” more accessible. The roots of the inspiration remained in the image but a little more naturalism, a little more softness, served the story better.

Because of that, they monitored on set with Alexa 709, which he felt still left him enough room when exposing. Production designer Jonathan Rudak (another regular collaborator with the directors) was on the same page during prep in terms of reflecting this Capa color style, and the practical team did what they could to make sure the set elements complemented the approach.

What about the audio post?
Barrow, Harper, Hartstein: With the effects and color almost complete, the team headed to Skywalker Ranch for a week of final dialogue edit, mix, sound design and Foley, led by Skywalker’s Danielle Dupre, Kim Foscato and E. Larry Oatfield. The team also was able to simultaneously approve color sections in Skywalker’s Stag Theater allowing for an ultra-efficient schedule. With final mix in hand, the film was mastered just after Christmas so that DCP production could begin.

Since a portion of the film was musical, how complex was the audio mix?
Skywalker sound mixer Dupre: The musical number was definitely one of the most challenging but rewarding scenes to design and mix. It was such a strong creative idea that played so deeply into the main character. The challenge was in striking a balance between tying it into the realism of the film while also leaning into the grandiosity of the musical to really sell the idea.

It was really fun to play with a combination of production dialogue and studio recordings to see how we could make it work. It was also really rewarding to create a soundscape that starts off minimally and simply and transitions to Broadway scale almost undetectably — one of the many exciting parts to working with creative and talented filmmakers.

What was the biggest challenge in post?
Barrow, Harper, Hartstein: Finishing a film in five to six weeks during the holidays was no easy feat. Luckily, we were able to have our directors hands-on for all final color, VFX and mix. Collaborating in the same room is always the best when you have no time to spare. We had a schedule where each day was accounted for — and we stuck to it almost down to the hour.

 

DP Chat: Madam Secretary’s Learan Kahanov

By Randi Altman

Cinematographer Learan Kahanov’s love of photography started at an early age, when he would stage sequences and scenes with his Polaroid camera, lining up the photos to create a story.

He took that love of photography and turned it into a thriving career, working in television, features and commercials. He currently works on the CBS drama Madam Secretary, where he was initially hired as the A-camera operator and additional DP. He shot 12 episodes and tandem units, then took over the show fully in Season 3. The New York-shot, Washington, DC-set show stars Téa Leoni as the US Secretary of State, following her struggle to balance her work and personal life.

We recently reached out to Kahanov to find out more about his path, as well as his workflow, on Madam Secretary.

Learan Kahanov on set with director Rob Greenlea.

Can you talk about your path to cinematography?
My mother is a sculptor and printmaker, and when I was in middle school, she went back to get a degree in fine arts with a minor in photography. This essentially meant I was in tow, on many a weeknight, to the darkroom so she could do her printing and, in turn, I learned as well.

I shot mostly black and white all through middle school and high school. I would often use my mother’s art studio to shoot the models who posed for the drawing class she taught. Around the same time, I developed a growing fascination with animal behavior and strove to become a wildlife photographer, until I realized I didn’t have the patience to sit in a tree for days to get the perfect shot.

I soon turned my attention to videography while working at a children’s museum, teaching kids how to use cameras and make short movies. I decided to pursue cinematography officially in high school and eventually found myself at NYU film school, based on my photography portfolio. As soon as I got to New York City, I started working on indie films as an electrician and gaffer, shooting every student film and indie project I could.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
I could list artists or filmmakers whose work I gravitate to, but the main thing I learned from my mother about art is that it’s about a feeling. Whether it’s being affected by a beautifully photographed image of a woman in a commercial or getting sucked into the visuals of a wildlife documentary, if you can evoke a feeling or create an emotion, you have made art.

Madam Secretary

I am always looking at the things around me, always aware of how light falls on the world, and of how the shape of everyday objects and places changes depending on the time, the weather or just my mood at the moment.

My vision of a project is always born out of the story, so the key for me is to always use technology (new or old) to support that story. Sometimes the latest in LED technology is the right tool for the job; sometimes it’s a bare light bulb attached to the underside of a white, five-gallon paint bucket (a trick gaffer Jack Coffin and I use quite often). I think the balance between vision and technology is a two-way street — the key is to recognize when the technology serves your vision or the other way around.

What new technology has changed the way you work?
In the area of lighting, I have found that no matter what new tools come onto the scene, I still hold true to the go-to lighting techniques I have preferred for years.

A perfect example would be my love for book lights — a book light is a bounced light that then goes through another layer of diffusion, which is perfect for lighting faces. Whether I am using an old Mole Richardson 5K tungsten unit or the newer ARRI S60 SkyPanels, the concept and end result are basically the same.

That being said, for location work the ARRI LED SkyPanels have become one of the go-to units on my current show, Madam Secretary. The lights’ high output, low power consumption, ease of matching existing location color sources and quick effects make them an easy choice for dealing with the faster-paced TV production schedule.

On-set setup

There is one other piece of gear I have found myself calling for on a daily basis since my key grip Ted Lehane introduced me to it: a diffusion material called Magic Cloth, produced by The Rag Place. This material can work as a bounce as well as a diffusion, and you can light directly through it. It produces a very soft light, as it’s fairly thick, but it does not change the color temperature of the source light. This new material, in conjunction with new LED technology, has created some interesting opportunities for my team.

Many DPs talk about the latest digital sensor, camera support (drones/gimbals, etc.) or LED lighting, but sometimes it’s something very simple, like finding a new diffusion material, that can really change the look and the way I work. In fact, I think gripology in general often gets overlooked in the current state of filmmaking, where everything seems to need to be “state of the art.”

What are some of your best practices or rules that you try to follow on each job?
I have one hard and fast rule in any project I shoot: support the story! I like to think of myself as a filmmaker first, using cinematography as a way to contribute to the filmmaking process. That being said, we can create lots of “rules” and have all the “go-to practices” to create beautiful images, but if what you are doing doesn’t advance the story, or at the very least create the right mood for the scene, then you are just taking a picture.

There are definite things I do because I simply prefer how they look, but if something doesn’t make sense for the scene or the movie (based on the director’s vision and mine), I will adjust what I do to make sure I am always supporting the story. There are definitely times when a balance is needed. We don’t create in a bubble, as there are all the other factors to consider, like budget, time, shooting conditions, etc. It’s this need/ability to be both technician and artisan that excites me the most about my job.

Can you explain your ideal collaboration with the director when setting the look of a project?
When working in episodic TV, there is a different director for every episode — essentially every eight days. Even when I have a repeat director, I have to adapt quickly to each director’s style. This goes beyond just being a chameleon from a creative standpoint — I need to quickly establish trust and a shorthand to help the director put their stamp on their episode, all while staying within the already established look of the show.

Madam Secretary

I have always considered myself not an “idea man” but rather a “make-the-idea-better” man. I say this because being able to collaborate with a director and not just see their vision, but also enhance it and take it a step further (and see their excitement in the process), is completely fulfilling.

Tell us about Madam Secretary. How would you describe the overarching look of the show? How early did you get involved in the production?
I have been a part of Madam Secretary since the beginning, minus the pilot. I was hired as the A camera operator and as an additional DP. Jonathan Brown, ASC, shot the pilot and was the DP for the first two seasons. He was also one of our directors for the first three seasons. In addition to shooting tandem/second-unit days and filling in on scout days, I was the DP whenever Jonathan directed. So while I didn’t create the initial look of the show, I worked closely with Jonathan as the seasons went on, until I officially took over in the third season.

Since I took over (and during my episodes), I felt an obligation to hold true to the original look and intent of the show, while also adding my personal touch and allowing the show’s look to evolve with the series. The show does give us opportunities every week to create something new. While the recurring sets/locations have a relatively set look, every episode takes us to new parts of the world and to new events.

It gives the director, production team and me an opportunity to create different looks and aesthetics to differentiate those places from Madam Secretary’s life in DC. While it’s a quick schedule to prep, research and create new looks for convincing foreign locations every episode (we shoot 99% of the show in New York), it is a challenge that brings a creativity and excitement to the job that I really enjoy.

Learan Kahanov on set with Hillary Clinton for the episode E Pluribus Unum.

Can you talk about what you shoot on and what lenses you use, etc.?
The show is currently shooting on Alexa SXTs with Leica Summicron prime lenses and Fujinon Cabrio zooms. One of the main things I did when I officially took over the show was to switch to the Leica primes. We did some testing with Téa Leoni and Tim Daly on our sets to see how the lenses treated skin tones.

Additionally, we wanted to see how they reacted to the heavy backlight and to the blown-out windows we have on many of our sets. We all agreed that the lenses were sharp, but we also realized that they created a softer feel on our actors’ faces, had a nice focus fall-off and handled the highlights really well. They are flexible enough to help me create different looks while still retaining a consistency for the show. The lenses have an interesting flare characteristic that sometimes makes them difficult to control, but it all adds to the current look of the show and has yet to be limiting.

You used a Blackmagic Pocket Cinema camera for some specialized shots. Can you describe those?
The show has many scenes that entail some specialized shots that need a small but high-res camera that has an inherently different feel from the Alexa. These shots include webcam and security camera footage. There are also many times when we need to create body/helmet cam footage to emulate images recorded from military/police missions that then were played back in the president’s situation room. That lightweight, high-quality camera allows for a lot of flexibility. We also employ other small cameras like GoPro and DJI Osmo, as well as the Sony A7RII with PL mount.

Madam Secretary

Any challenging scenes that you are particularly proud of?
I don’t think there is an episode that goes by without some type of challenge, but one in particular that I was really happy with took place on a refugee boat in the middle of the Mediterranean Sea.

The scene was set at night, with refugees making a harrowing trip from the north coast of Libya to France. Since we couldn’t shoot on the ocean at night, we brought the boat and a storm into the studio.

Our production designer and art department cut a real boat in half and brought it onto the stage. Drew Jiritano and his special effects team then placed the boat on a gimbal and waterproofed the stage floor so we could place rain towers and air cannons to simulate a storm in the middle of the sea.

Using a technocrane, handheld cameras and interactive lighting, we created a great scene that immersed the audience in a realistic depiction of the dramatic journey that happens more often than most Americans realize.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Blackmagic offers next-gen Ursa Mini Pro camera, other product news

Blackmagic has introduced the Ursa Mini Pro 4.6K G2, a second-generation Ursa Mini Pro camera featuring fully redesigned electronics and a new Super 35mm 4.6K image sensor with 15 stops of dynamic range that combine to support high-frame-rate shooting at up to 300 frames per second.

In addition, the Ursa Mini Pro 4.6K G2 supports Blackmagic RAW and features a new USB-C expansion port for direct recording to external disks. Ursa Mini Pro 4.6K G2 is available now for $5,995 from Blackmagic resellers worldwide.

The new user interface

Key Features:
• Digital film camera with 15 stops of dynamic range
• Super 35mm 4.6K sensor with Blackmagic Design Generation 4 Color Science
• Supports project frame rates up to 60fps and off-speed slow motion recording up to 120fps in 4.6K, 150fps in 4K DCI and 300fps in HD Blackmagic RAW
• Interchangeable lens mount with EF mount included as standard. Optional PL, B4 and F lens mounts available separately
• High-quality 2-, 4- and 6-stop neutral density (ND) filters with IR compensation designed to specifically match the colorimetry and color science of Blackmagic URSA Mini Pro 4.6K G2
• Fully redundant controls including external controls that allow direct access to the most important camera settings such as external power switch, ND filter wheel, ISO, shutter, white balance, record button, audio gain controls, lens and transport control, high frame rate button and more
• Built-in dual CFast 2.0 recorders and dual SD/UHS-II card recorders allow unlimited duration recording in high quality
• High-speed USB-C expansion port for recording directly to an external SSD or flash disk
• Lightweight and durable magnesium alloy body
• LCD status display for quickly checking timecode, shutter and lens settings, battery, recording status and audio levels
• Support for Blackmagic RAW files in constant bitrate 3:1, 5:1, 8:1 and 12:1 or constant quality Q0 and Q5 as well as ProRes 4444 XQ, ProRes 4444, ProRes 422 HQ, ProRes 422, ProRes 422 LT, ProRes 422 Proxy recording at 4.6K, 4K, Ultra HD and HD resolutions
• Features all standard connections, including dual XLR mic/line audio inputs with phantom power, 12G-SDI output for monitoring with camera status graphic overlay and separate XLR 4-pin power output for viewfinder power, headphone jack, LANC remote control and standard 4-pin 12V DC power connection
• Built-in high-quality stereo microphones for recording sound
• Offers a four-inch foldout touchscreen for on-set monitoring and menu settings
• Includes full copy of DaVinci Resolve color grading and editing software

Additional Blackmagic news:
– Blackmagic adds Blackmagic RAW to Blackmagic Pocket Cinema Camera 4K
– Blackmagic intros DeckLink Quad HDMI recorder
– Blackmagic updates DeckLink 8K Pro
– Blackmagic announces long-form recording on Blackmagic Duplicator 4K

Colorist Christopher M. Ray talks workflow for Alexa 65-shot Alpha

By Randi Altman

Christopher M. Ray is a veteran colorist with a varied resume that includes many television and feature projects, including Tomorrowland, Warcraft, The Great Wall, The Crossing, Orange Is the New Black, Quantico, Code Black and Alpha. These projects have taken Ray all over the world, including remote places throughout North America, Europe, Asia and Africa.

We recently spoke with Ray, who is on staff at Burbank’s Picture Shop, to learn more about his workflow on the feature film Alpha, which focuses on a young man trying to survive alone in the wilderness after he’s left for dead during his first hunt with his Cro-Magnon tribe.

Ray was dailies colorist on the project, working with supervising DI colorist Maxine Gervais. Gervais of Technicolor won an HPA Award for her work on Alpha in the Outstanding Color Grading — Feature Film category.

Let’s find out more….

Chris Ray and Maxine Gervais at the HPA Awards.

How early did you get involved in Alpha?
I was approached about working on Alpha right before the start of principal photography. From the beginning, I knew that it was going to be a groundbreaking workflow. I was told that we would be working with the ARRI Alexa 65 camera, grading mainly in an on-set color grading trailer, and that we would be using FilmLight’s Daylight software.

Once I was on board, our main focus was to design a comprehensive workflow that could accommodate on-set grading and the Daylight software while adapting to the ever-changing challenges that the industry brings. Being involved from the start was actually a huge perk for me. It gave us the time we needed to design and really fine-tune the extensive workflow.

Can you talk about working with the final colorist Maxine Gervais and how everyone communicated?
It was a pleasure working with Maxine. She’s really dialed in to the demands of our industry. She was able to fly to Vancouver for a few days while we were shooting the hair/makeup tests, which let us communicate in person. We were able to sit down and discuss creative approaches to the feature right away, which I appreciated, as I’m the type of person who likes to dive right in.

At the film’s conception, we set in motion a plan to incorporate a Baselight Linked Grade (BLG) color workflow from FilmLight. This would allow my color grades in Daylight to transition smoothly into Maxine’s Baselight software. We knew from the get-go that there would be several complicated “day for night” scenes that Maxine and I would want to bring to fruition right away. Using the BLG workflow, I was able to send her single ARRIRAW frames that gave that “day for night” look we were searching for. She was then able to send them back to me via a BLG file. Even in remote locations, it was easy for me to access the BLG grade files via the Internet.

[Maxine Gervais weighs in on working with Ray: “Christopher was great to work with. As the workflow on the feature was created from scratch, he implemented great ideas. He was very keen on the whole project and was able to adapt to the ever-changing challenges of the show. It is always important to have on-set color dialed in correctly, as it can be problematic if it is not accurately established in production.”]

How did you work with the DP? What direction were you given?
Being on set, it was very easy for DP Martin Gschlacht to come over to the trailer and view the current grade I was working on. Like Maxine, Martin already had a very clear vision for the project, which made it easy to work with him. Oftentimes, he would call me over on set and explain his intent for the scene. We would brainstorm ways I could assist him in making his vision come to life. Audiences rarely see raw camera files, or realize how much color can influence the story being told.

It also helps that Martin is a master of aesthetics. The content being captured was extremely striking; he has this natural intuition about what look is needed for each environment he shoots. We shot in lush rain forests in British Columbia and arid badlands in Alberta, which each inspired very different aesthetics.

Whenever I had a bit of down time, I would walk over to set and just watch them shoot, like a fly on the wall quietly observing and seeing how the story was unfolding. As a colorist, it’s so special to be able to observe the locations on set. Seeing the natural desaturated hues of dead grass in the badlands or the vivid lush greens in the rain forest with your own eyes is an amazing opportunity many of us don’t get.

You were on set throughout? Is that common for you?
We were on set throughout the entire project as a lot of our filming locations were in remote areas of British Columbia and Alberta, Canada. One of our most demanding shooting locations included the Dinosaur Provincial Park in Brooks, Alberta. The park is a UNESCO World Heritage site that no one had been allowed to film at prior to this project. I needed to have easy access to the site in order to easily communicate with the film’s executive team and production crew. They were able to screen footage in their trailer and we had this seamless back-and-forth workflow. This also allowed them to view high-quality files in a comfortable and controlled environment. Also, the ability to flag any potential issues and address them immediately on set was incredibly valuable with a film of such size and complexity.

Alpha was actually the first time I worked in an on-set grading trailer. In the past I usually worked out of the production office. I have heard of other films working with an on-set trailer, but I don’t think I would say that it is overly common. Sometimes, I wish I could be stationed on set more often.

The film was shot mostly with the Alexa 65, but included footage from other formats. Can you talk about that workflow?
The film was mostly shot on the Alexa 65, but several other formats were used as well. For most of the shoot there was a second unit shooting with Alexa XT and Red Weapon cameras, with a splinter unit shooting B-roll footage on Canon 1D, 5D and Sony A7S cameras. In addition, there were units in Iceland and South Africa shooting VFX plates on a Red Dragon.

By the end of the shoot, there were several different camera formats and over 10 different resolutions. We used the 6.5K Alexa 65 resolution as the master resolution and mapped all the others into it.
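
Mapping that many rasters into one master is, at its core, simple arithmetic. The short Python sketch below illustrates the idea only; the 6560x3100 master dimensions and the sample source rasters are assumptions for illustration, not figures from the production.

```python
# Fit assorted source resolutions into an assumed 6560x3100 6.5K master.
# All dimensions here are illustrative, not production values.

MASTER_W, MASTER_H = 6560, 3100  # assumed Alexa 65 open-gate raster

def fit_to_master(src_w: int, src_h: int):
    """Uniform scale that fills the master width, preserving aspect ratio."""
    scale = MASTER_W / src_w
    return scale, MASTER_W, round(src_h * scale)

for name, (w, h) in {
    "Alexa XT (3.4K)": (3424, 2202),
    "Red Weapon (6K)": (6144, 3160),
    "Sony A7S (UHD)": (3840, 2160),
}.items():
    scale, out_w, out_h = fit_to_master(w, h)
    print(f"{name}: scale x{scale:.2f} -> {out_w}x{out_h}")
```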

The Alexa 65 camera cards were backed up to 8TB “sled” transfer drives using a Codex Vault S system. The 8TB transfer drives were then sent to the trailer where I had two Codex Vault XL systems — one was used for ingesting all of the footage into my SAN and the second was used to prepare footage for LTO archival. All of the other unit footage was sent to the trailer via shuttle drives or Internet transfer.

After the footage was successfully ingested to the SAN with a checksum verification, it was ready to be colored, processed and then archived. We had eight LTO6 decks running 24/7, as the main focus was to archive the enormous amount of high-res camera footage we were receiving. The Alexa 65 alone generated about 2.8TB per hour for each camera.
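
For readers curious what checksum verification means in practice, the sketch below shows the basic idea in Python: hash every file on the transfer drive, hash its copy on the SAN and flag any mismatch. It is a minimal illustration with hypothetical paths, not the actual Codex Vault tooling, which does this work with dedicated software and manifest formats at far higher throughput.

```python
# Minimal sketch of checksum-verified ingest: compare source files on a
# transfer drive against their copies on the SAN. Paths are hypothetical.
import hashlib
from pathlib import Path

def file_hash(path: Path, algo: str = "md5", chunk: int = 8 * 1024 * 1024) -> str:
    h = hashlib.new(algo)
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def find_bad_copies(src_root: Path, dst_root: Path) -> list:
    """Return source files whose SAN copies are missing or corrupt."""
    bad = []
    for src in src_root.rglob("*"):
        if src.is_file():
            dst = dst_root / src.relative_to(src_root)
            if not dst.exists() or file_hash(src) != file_hash(dst):
                bad.append(src)
    return bad

# e.g. find_bad_copies(Path("/mnt/transfer_drive"), Path("/mnt/san/day_042"))
```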

Had you worked with Alexa 65 footage previously?
Many times. A few years ago, I was in China for seven months working on The Great Wall, which was one of the first films to shoot with the Alexa 65. I had a month of in-depth pre-production with the camera, testing, shooting and honing the camera’s technology. Working very closely with Arri and Codex technicians during this time, I was able to design the most efficient workflow possible. Even as the shoot progressed, I continued to communicate closely with both companies. As new challenges arose, we developed and implemented solutions that kept production running smoothly.

The workflow we designed for The Great Wall was very close to the workflow we ended up using on Alpha, so it was a great advantage that I had previous experience working in-depth with the camera.

What were some of the challenges you faced on this film?
To be honest, I love a challenge. As colorists, we are thrown into tricky situations every day. I am thankful for these challenges; they improve my craft and enable me to become more efficient at problem solving. The biggest challenge I faced on this particular project was the sheer number of units shooting, the size of the footage and the dozens of format types involved.

We had to be accessible around the clock, most of us working 24 hours a day. Needless to say, I made great friends with the transportation driving team and the generator operators. I think they would agree that my grading trailer was one of their largest challenges on the film since I constantly needed to be on set and my work was being imported/exported in such high resolutions.

In the end, as I was watching this absolutely gorgeous film in the theater, it all made sense. Working those crazy hours was absolutely worth it — I am thankful to have worked with such a cohesive team, and the experience is one I will never forget.

DP Petr Hlinomaz talks about the look of Marvel’s The Punisher

By Karen Moltenbrey

For antiheroes like Frank Castle, the lead character in the Netflix series Marvel’s The Punisher, morality comes in many shades of gray. A vigilante hell-bent on revenge, the Marine veteran used whatever lethal means necessary — kidnapping, murder, extortion — against those responsible for the deaths of his family. However, Castle soon found that the criminal conspiracy that set him on this destructive path ran far deeper than initially imagined, and he had to decide whether to embrace his role as the Punisher and help save other victims, or retreat to a more solitary existence.

Alas, in the end, the decision to end the Punisher’s crusade was made not by Frank Castle nor by the criminal element he sought to exact justice upon. Rather, it was made by Netflix, which recently announced it was cancelling all its live-action Marvel shows. The news came a mere month after Season 2 was released, while many fans were still watching the season’s action play out.

Petr Hlinomaz

The Punisher character is dark and intense, as is the show itself. The overall aesthetic is dim and gritty to match the action, yet rich and beautiful at the same time. This is the world initially envisioned by Marvel and then brought to life on screen late in Season 1 by director of photography Petr Hlinomaz under the direction of showrunner Steve Lightfoot.

The Punisher is based on the Marvel Comics character by the same name, and the story is set in the Marvel Cinematic Universe, meaning it shares DNA with the films and other TV shows in the franchise. There is a small family resemblance, but The Punisher is not considered a spin-off of Marvel’s Daredevil, despite the introduction of the Punisher (played by Jon Bernthal) on Season 2 of that series, for which Hlinomaz served as a camera operator and tandem DP. Therefore, there was no intent to match the shows’ cinematic styles.

“The Punisher does not have any special powers like other Marvel characters possess; therefore, I felt that the photographic style should be more realistic, with strong compositions and lighting resembling Marvel’s style,” Hlinomaz says. “It’s its own show. In the Marvel universe, it is not uncommon for characters to go from one show to another and then another after that.”

Establishing the Look
It seems that Hlinomaz followed somewhat in the characters’ footsteps himself, later joining The Punisher crew and taking on the role of DP after the first 10 episodes. He sussed out Lightfoot to find out what he liked as far as framing, look, camera movement and lighting were concerned, and built upon the look of those initial 10 episodes to finish out the last three episodes of Season 1. Then Hlinomaz enhanced that aesthetic on Season 2.

Hlinomaz was assisted by Francis Spieldenner, a Marvel veteran familiar with the property, who in Season 1 and again in Season 2 functioned as A camera/steadicam operator and who shot tandems in addition to serving as DP on two episodes (209 and 211) for Season 2.

“Steve and I had some discussions regarding the color of lighting for certain scenes in Season 2, but he pretty much gave me the freedom to devise the look and camera movement for the show on my own,” recalls Hlinomaz. “I call this look ‘Marvel Noir,’ which is low light and colorful. I never use the normal in-camera color temperature settings (for instance, 3,200K for night and 5,600K for day). I always look for different settings that fit the location and feel of the scene, and build the lighting from there. My approach is very source-oriented, and I do not like cheating in lighting when shooting scenes.”

According to Hlinomaz, the look they were striving for was a mix of Taxi Driver and The Godfather, but darker and more raw. “We primarily used wide-angle lenses to place our characters into our sets and scenery and to see geographically where they are. At times we strived to be inside the actors’ head.” They also used Jason Bourne films as a guideline, “making Jon (the Punisher) and all our characters feel small in the large NYC surroundings,” he adds. “The stunt sequences move fast, continuously and are brutally real.”

In terms of color, Hlinomaz uses very low light with a dark, colorful palette. This complements New York City, which is colorful, while the city’s multitude of lights and colors “provide a spectacular base for the filming.” The show highlights various locations throughout the city. “We felt the look is very fitting for this show, the Punisher being an earnest human being in the beginning of his life, but after joining the force is troubled by his past, PTSD and his family being brutally slaughtered, and in turn, he is brutal and ruthless to ‘bad people,’” explains Hlinomaz.

For instance, in a big fight scene in Season 1, Episode 11 at Micro’s hideout, Hlinomaz showed the top portion of the space to its fullest extent. “It looks dark, mysterious. We used a mixture of top, side and uplighting to make the space look interesting, with lots of color temperature mixes,” he says. “There was a plethora of leftover machinery and huge transformers and generators that were no longer in use, and stairwells that provided a superb backdrop for this sequence.”

The Workflow
For the most part, Hlinomaz has just one day to prep for an episode with the director, and that is often during the technical scout day. “Aside from reading the script and exchanging a few emails, that is the only prep we get,” he says.

During the technical scout, a discussion takes place with the director concerning how the scenes should look and feel. “We discuss lighting and grip, set dressing, blocking, shooting direction, time of day, where we light from, where the sun should be and so on, along with any questions concerning the locations for the next episodes,” he says.

During the scout and rehearsal, Hlinomaz looks for visually stimulating backgrounds, camera angles and shots that will enhance and propel the story line.

When they start shooting the episode, the group rehearses the scene and discusses the most efficient or suitable blocking and which lenses to use. During the shoot, Hlinomaz takes stills that will be used by the colorists as reference for the lighting, density, color and mood. When the episode is cut and roughly colored, he will then view the episode at the lab (Company 3 in New York) and make notations. Those notes are then provided to the post producer and colorist Tony D’Amore (from Encore) for the final color pass and Lightfoot’s approval.

The group employs HDR, “which, in a way, is hard because you always have to protect for overexposure on sources within the frame,” adds Hlinomaz. In fact, D’Amore has credited Hlinomaz, the directors and Lightfoot with devising unique lighting scenarios that highlighted the HDR aspect of the show in Season 2.

Tools of the Trade
The Punisher’s main unit uses two cameras – “we have crew to cover two at all times,” Hlinomaz says. That number increases to three or more as needed for certain sequences, though there are times when just one camera is used for certain scenes and shots.

According to Hlinomaz, Netflix and Marvel only shoot with Red 4K cameras and up. For the duration of The Punisher shoot, the crew only carried four “Panavised” Red cameras. “We shot 4K but frequently used the 5K and 6K settings to go a bit wider with the [Panavision] Primo lenses, or for a tilt and swing lens special look,” he says, adding that he has used Red cameras for the past four years and is still impressed with the color rendering of the Red sensors. Prior to shooting the series, he tested Zeiss Ultra Prime lenses, Leica Summilux lenses, along with Panavision Primos; Hlinomaz chose the Primos for their 3D rendering of the subjects.

The lens set ranged from 10mm to 150mm; there was also an 11:1 zoom lens that was used sparingly. It all depended on the shot. In Episode 13, when Frank finally shoots and kills hitman Billy Russo (aka Jigsaw), Hlinomaz used an older 12mm lens with softer edges to simulate Billy’s state as he is losing a lot of blood. “It looked great, somewhat out of focus along the edges as Frank approaches; then, when Frank steps closer for the kill, he comes into clear focus,” Hlinomaz explains.

In fact, The Punisher was shot using the same type of camera and lenses as the second season of the now-cancelled Marvel/Netflix series Luke Cage (Hlinomaz served as a DP on Luke Cage Season 2 and a camera operator for four episodes of Season 1). In addition to wide-angle lenses, the show also used more naturalistic lighting, similar to The Punisher.

Hlinomaz details another sequence pertaining to his choice of cameras and lenses on The Punisher, whereby he used 10mm and 14mm lenses for a fight scene inside an elevator. Spieldenner, the A cam operator, was inside the elevator with the performers. “We didn’t pull any walls for that, only the ceilings were pulled for one overhead shot when Frank flips a guy over his shoulder,” explains Hlinomaz. “I did not want to pull any walls; when you do, it feels like the camera is on the outside, especially if it’s a small space like that elevator.”

On-Set Challenges
A good portion of the show is filmed outdoors — approximately two-thirds of the series — which always poses an additional challenge due to constantly changing weather conditions, particularly in New York. “When shooting exteriors, you are in the elements. Night exteriors are better than day exteriors because you have more control, unless the day provides constant lighting — full sun or overcast, with no changes. Sometimes it’s impractical or prohibitive to use overhead cover to block out the sun; then you just have to be quick and make smart decisions on how to shoot a scene with backlight on one side and front fill that feels like sunlight on the other, and make it cut and look good together,” explains Hlinomaz.

As he noted earlier, Hlinomaz is a naturalist when it comes to lighting, meaning he uses existing source-driven lighting. “I like simplicity. I use practicals, sun and existing light to drive our light direction,” he adds. “We use every possible light, from big HMIs all the way down to the smallest battery-driven LED lights. It all depends on a given shot, location, sources and where the natural or existing light is coming from. On the other hand, sometimes it is just a bounce card for a little fill, or nothing extra, to make the shot look great.”

All The Punisher sets, meanwhile, have hard ceilings. “That means with our use of lower camera angles and wide lenses, we are seeing everything, including the ceilings, and are not pulling bits of ceilings and hanging any lights up from the grid. All lighting is crafted from the floor, driven by sources, practicals, floor bounces, windows and so on,” says Hlinomaz. “My feeling is that this way, the finished product looks better and more natural.”

Most of Season 1’s crew returned for Season 2, so they were familiar with the dark and gritty style, which made things easier on Hlinomaz. The season begins with the Punisher somewhere in the Midwest before agent Madani brings Frank back to New York, although all the filming took place throughout New York.

One of the more challenging sequences this season, according to Hlinomaz, was an ambulance chase that was filmed in Albany, New York. For the shoot, they used a 30-foot Louma crane and Edge arm from Action Camera cars, and three to four Red cameras. For the actual ambulance drop, they placed four additional cameras. “We had to shoot many different passes with stunts as well as the actors, in addition to the Edge arm pass. It was quite a bit of work,” he says. Of course, it didn’t help that when they arrived in Albany to start filming, they encountered a rain delay, but “we used the time to set up the car and ambulance rigs and plan to the last detail how to approach our remaining days there.” For the ambulance interior, they shot on a greenscreen stage with two ambulances — one on a shaky drive simulation rig and the other mounted 20 feet or so high on a teeter rig that simulated the drop of the highway as it tilted forward until it was pointing straight to the ground.

“If I remember correctly, we spent six days total on that sequence,” says Hlinomaz.

The second season of The Punisher was hard work, but a fun and rewarding experience, Hlinomaz contends. “It was great to be surrounded from top to bottom with people working on this show who wanted to be there 100 percent, and that dedication and our hard work is evident, I believe, in the finished season,” he adds.

As Hlinomaz waited for word on Season 3 of The Punisher, he lent his talents to Jessica Jones, also set in the Marvel Cinematic Universe — and sadly also receiving the same ultimate fate — as Hlinomaz stepped in to help shoot Episode 305, with the new Red DSMC2 Gemini 5K S35 camera. “I had a great experience there and loved the new camera. I am looking forward to using it on my next project,” he adds.


Karen Moltenbrey is a veteran VFX and post writer.

Color plays big role in the indie thriller Rust Creek

In the edge-of-your-seat thriller Rust Creek, confident college student Sawyer (Hermione Corfield) loses her way while driving through very rural Appalachia and quickly finds herself in a life-or-death struggle with some very dangerous men. The modestly budgeted feature from Lunacy Productions — a company that encourages female filmmakers in top roles — packs a lot of power with virtually no pyrotechnics, using well-thought-out filmmaking techniques, including a carefully planned and executed approach to the use of color throughout the film.

Director Jen McGowan and DP Michelle Lawler

Director Jen McGowan, cinematographer Michelle Lawler and colorist Jill Bogdanowicz of Company 3 collaborated to help express Sawyer’s character arc through the use of color. For McGowan, successful filmmaking requires thorough prep. “That’s where we work out, ‘What are we trying to say and how do we illustrate that visually?’” she explains. “Film is such a visual medium,” she adds, “but it’s very different from something like painting because of the element of time. Change over time is how we communicate story, emotion and theme as filmmakers.”

McGowan and Lawler developed the idea that Sawyer is lost, confused and overwhelmed as her dire situation becomes clear. Lawler shot most of Rust Creek handholding an ARRI Alexa Mini (with Cooke S4s), following Sawyer as she makes her way through the late-autumn forest. “We wanted her to become part of the environment,” Lawler says. “We shot in winter and everything is dead, so there was a lot of brown and orange everywhere with zero color separation.”

Production designer Candi Guterres pushed that look further, rather than fighting it, with choices about costumes and some of the interiors.

“They had given a great deal of thought to how color affects the story,” recalls colorist Bogdanowicz, who sat with both women during the grading sessions (using Blackmagic’s DaVinci Resolve) at Company 3 in Santa Monica. “I loved the way color was so much a part of the process, even subtly, of the story arc. We did a lot in the color sessions to develop this concept where Sawyer almost blends into the environment at first and then, as the plot develops and she finds inner strength, we used tonality and color to help make her stand out more in the frame.”

Lawler explains that the majority of the film was shot on private property deep in the Kentucky woods, without the use of any artificial light. “I prefer natural light where possible,” she says. “I’d add some contrast to faces with some negative fill and maybe use little reflectors to grab a rake of sunlight on a rock, but that was it. We had to hike to the locations and we couldn’t carry big lights and generators anyway. And I think any light I might have run off batteries would have felt fake. We only had sun about three days of the 22-day shoot, so generally I made use of the big ‘silk’ in the sky and we positioned actors in ways that made the best use of the natural light.”

In fact, the weather was beyond bad; it was punishing. “It would go from rain to snow to tornado conditions,” McGowan recalls. “It dropped to seven degrees and the camera batteries stopped working.”

“The weather issues can’t be overstated,” Lawler adds, describing conditions on the property they used for much of the exterior location. “Our base camp was in a giant field. The ground would be frozen in the morning and by afternoon there would be four feet of mud. We dug trenches to keep craft services from flooding.”

The budget obviously didn’t provide for waiting around for the elements to change, David Lean-style. “Michelle and I were always mindful when shooting that we would need to be flexible when we got to the color grading in order to tie the look together,” McGowan explains. “I hate the term ‘fix it in post.’ It wasn’t about fixing something; it was about using post to execute what was intended.”

Jill Bogdanowicz

“We were able to work with my color grading toolset to fine tune everything shot by shot,” says Bogdanowicz. “It was lovely working with the two of them. They were very collaborative but were very clear on what they wanted.”

Bogdanowicz also adapted a film emulation LUT based on the characteristics of a Fujifilm print stock and added a subtle hint of digital grain, via a Boris FX Sapphire plug-in, to help add a unifying look and filmic feel to the imagery. At the very start of the process, the colorist recalls, “I showed Jen and Michelle a number of ‘recipes’ for looks and they fell in love with this one. It’s somewhat subtle and elegant, and it made ‘electric’ colors not feel so electric, but it has a film-style curve with strong contrast in the mids and shadows you can still see into.”

McGowan says she was quite pleased with the work that came out of the color theater. “Color is not one of the things audiences usually pick up on, but a lot of people do when they see Rust Creek. It’s not highly stylized, and it certainly isn’t a distracting element, but I’ve found a lot of people have picked up on what we were doing with color and I think it definitely helped make the story that much stronger.”

Rust Creek is currently streaming on Amazon Prime and Google.

Helicopter Film Services intros Titan ultra-heavy lifting drone

Helicopter Film Services (HFS) has launched an ultra-heavy lift drone known as the Titan, which pairs a large, capable airframe with the ARRI SRH-3 stabilized head. The head enables easy integration of existing ARRI lens motors and other functionality directly with the ARRI Alexa 65 and LF cameras.

HFS developed the large drone in response to requests from some legendary DPs and VFX supervisors to enable filmmakers to fly large-format digital or 35mm film packages.

“We have trialed other heavy-lift machines, but all of them have been marginal in terms of performance when carrying the larger cameras and lenses that we’re asked to fly,” says Alan Perrin, chief UAV pilot at HFS. “What we needed, and what we’ve designed, is a system that will capably and safely operate with the large-format cameras and lenses that top productions demand.”

The Titan combines triple redundancy on flight controls and double redundancy on power supply and ballistic recovery into an aircraft that can deploy and operate easily on any production requiring substantial flight duration. The drone can easily fly a 35mm film camera such as an ARRI 435 with a 400-foot magazine.

Here are some specs:
• Optimized for large-format digital and 35mm film cameras
• Max payload up to 30 kilograms
• Max take-off mass — 80 kilograms
• Redundant flight control systems
• Ballistic recovery system (parachute)
• Class-leading stability
• Flight duration up to 15 minutes (subject to payload weight and configuration)
• HD video downlink
• Gimbal: ARRI SRH-3 or Movi XL

Final payload-proving flights are taking place now, and the company is in the process of planning first use on major productions. HFS is also exploring the ability to fly a new 65mm film camera on the Titan.

SciTech Medallion Recipient: A conversation with Curtis Clark, ASC

By Barry Goch

The Academy of Motion Picture Arts and Sciences has awarded Curtis Clark, ASC, the John A. Bonner Medallion “in appreciation for outstanding service and dedication in upholding the high standards of the Academy.” The presentation took place in early February, and just prior to the event, I spoke to Clark and asked him to reflect on the transition from film to digital cinema and his contributions to the industry.

Clark’s career as a cinematographer includes features, TV and commercials. He is also the chair of the ASC Motion Imaging Technology Council, which developed the ASC CDL.

Can you reflect on the changes you’ve seen over your career and how you see things moving ahead in the future?
Once upon a time, life was an awful lot simpler. I look back on it nostalgically: when it was all film-based, the cinematographer’s purview included following up on the look of dailies and following through with any photographic testing that helped home in on the desired look. It had its photochemical limitations; its analog image structure was not as malleable or tonally expansive as the digital canvas we have now.

Do you agree that Kodak’s Cineon helped bring us to this digital revolution — the hybrid film/digital imaging system where you would shoot on film, scan it and then digitally manipulate it before going back out to film via a film recorder?
That’s where the term digital intermediate came into being, and it was an eye-opener. I think at the time not everyone fully understood the ramifications of the impact it was making. Kodak created something very potent and led the way in terms of methodologies, or how to integrate digital into what was then called a hybrid imaging system, combining digital and film.

The DCI (Digital Cinema Initiatives) was created to establish digital projection standards. Without a standard, we’d potentially be creating chaos in terms of how to move forward. For the studios, distributors and exhibitors, it would be a nightmare. Can you talk about that?
In 2002, I had been asked to form a technology committee at the ASC to explore these issues: how the new emerging digital technologies were impacting the creative art form of cinematography and of filmmaking, and also to help influence the development of these technologies so they best serve the creative intent of the filmmaker.

DCI proposed that for digital projection to be considered ready for primetime, its image quality needed to be at least as good as, if not better than, a print from the original negative. I thought this was a great commitment that the studios were making. For them to say digital projection was going to be judged against a film print projection from the original camera negative of the exact same content was a fantastic decision. Here was a major promise of a solution that would give digital cinema image projection an advantage since most people saw release prints from a dupe negative.

Digital cinema had just reached the threshold of being able to do 2K digital cinema projection. At that time, 4K digital projection was emerging, but it was a bit premature in terms of settling on that as a standard. So you had digital cinema projection and the emergence of a sophisticated digital intermediate process that could create the image quality you wanted from the original negative, but projected on a digital projection.

In 2004, the Michael Mann film Collateral was shot with the Grass Valley Viper FilmStream and the Sony F900 and F950 cameras, the latest generation of digital motion picture cameras: basically video cameras that were becoming increasingly sophisticated, with better dynamic range and tonal contrast, using 24fps and other multiple frame rates. But 24p was the key.

These cameras were used in the most innovative and interesting manner, because Mann combined film with digital, using digital for the low-light-level night scenes and film for the higher-light-level day exteriors and day interiors, where there was no problem with exposure.

Because of the challenge of shooting the night scenes, they wanted to shoot at such low light levels that film would potentially be a bit degraded in terms of grain and fog levels. If you had to overrate the negative, you needed to underexpose and overdevelop it, which was not desirable, whereas the digital cameras thrived in lower light levels. Also, you could shoot at a stop that gave you better depth of field. At the time, it was a very bold decision. But looking back on it historically, I think it was the inflection point that brought the digital motion picture camera into the limelight as a possible alternative to shooting on film.

That’s when they decided to do the Camera Assessment Series tests, which evaluated all the different digital cinema cameras available at the time?
Yeah, with the idea being that we’d never compare two digital cameras together, we’d always compare the digital camera against a film reference. We did that first Camera Assessment Series, which was the first step in the direction of validating the digital motion picture camera as viable for shooting motion pictures compared with shooting on film. And we got part way there. A couple of the cameras were very impressive: the Sony F35, the Panavision Genesis, the Arri D21 and the Grass Valley Viper were pretty reasonable, but this was all still mainly within a 2K (1920×1080) realm. We had not yet broached that 4K area.

A couple of years later, we decided to do this again. It was called the Image Control Assessment Series, or ICAS, and it was shot at Warner Bros. We shot two scenes in a café: a daylight interior and a nighttime exterior. Both scenes had a dramatically large range of contrast and different colors in the image. It was the big milestone. The new Arri Alexa was used, along with the Sony F65 and the then-latest versions of the Red cameras.

So we had 4K projection and 4K cameras, and we introduced the use of ACES (Academy Color Encoding System) color management. We were really at the point where all the key components we needed were beginning to come together. This was the first instance where these digital workflow components were all used in a single significant project test, with film as our common benchmark reference: how do these cameras compare with film? That was the key thing. In other words, could we consider them to be ready for prime time? The answer was yes. We did that project in conjunction with the PGA and a company called Revelations Entertainment, which is Morgan Freeman’s company. Lori McCreary, his partner, was one of the producers who worked with us on this.

So filmmakers started using digital motion picture cameras instead of film. And with digital cinema having replaced film print as a distribution medium, these new-generation digital cameras started to replace film as an image capture medium. The question then was whether we would have an end-to-end digital system that could become a viable alternative to shooting on film.

L to R: Josh Pines, Steve MacMillan, Curtis Clark and Dhanendra Patel.

Part of the reason you are getting this acknowledgement from the Academy is your dedication to the highest image quality and your respect for the artistry, from capture through delivery. Can you talk about your role in look management from on-set through delivery?
I think we all need to be on the same page; it’s one production team whose objective is maintaining the original creative intent of the filmmakers. That includes the director and cinematographer working with an editor and a production designer. Making a film is a collective team effort, but the overall vision is typically established by the director in collaboration with the cinematographer and a production designer. The cinematographer is tasked with capturing that with lighting, camera composition, movement, lens choices — all those elements that are part of the process of creative filmmaking. Once you start shooting with these extremely sophisticated cameras, like the Sony F65 or Venice, the Panavision Millennium DXL, an Arri or the latest versions of the Red camera (all of which can reproduce high dynamic range, wide color gamut and high resolution), all that raw image data is inherently there, and the creative canvas has certainly been expanded.

So if you’re using these creative tools to tell your story, to advance your narrative, then you’re doing it with imagery defined by the potential of what these technologies are able to do. In the modern era, people aren’t seeing dailies at the same time, and they’re not seeing them together under controlled circumstances. The viewing process has become fragmented. When everyone had to come together to view projected dailies, there was a certain camaraderie, and constructive contributions made the filmmaking process more effective. If something wasn’t what it should be, everyone could see exactly what it was and make a correction if needed.

But now we have a more dispersed production team at every stage of the production process, from the initial image capture through to dailies, editorial, visual effects and final color grading. We have so many different people in disparate locations working on the production who don’t seem to be as unified, sometimes, as we were when it was all film-based analog shooting. On the other hand, it’s now far easier and simpler to integrate visual effects into your workflow. As Cineon indicated when it first emerged, you could do digital effects as opposed to optical effects, and that was a big deal.

So, coming back to the current situation, particularly now with the most advanced forms of imaging (high dynamic range and a wider color gamut, wider than even P3, out to Rec 2020), it’s essential to have a color management system like ACES, one that actually has enough color gamut to contain any color space you capture and want to manipulate.

Can you talk about the challenges you overcame, and how that fits into the history of cinema as it relates to the Academy recognition you received?
As a cinematographer working on feature films or commercials, I kept thinking: if I’m fortunate enough to be able to manage the dailies, and certainly the final color grading, there are these tools called lift, gain and gamma, which are common to all the different color correctors. But they’re all implemented differently. They’re not cross-platform-compatible, so the numbers from a lift-gain-gamma grade — which is the primary RGB grading — on one color corrector will not translate automatically to another color corrector. So I thought we should have a cross-platform version of that, because it is usually seen as the first step in grading.

That’s about as basic as you can get, and it was designed as a cross-platform implementation, so that everybody who installs and applies the ASC CDL in a compatible color grading system, whether on a DaVinci, Baselight, Lustre or whatever they are using, gets the same, transferable results.
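
The math underneath the ASC CDL is published and deliberately small: a per-channel slope, offset and power (the SOP values), followed by a single saturation adjustment computed with Rec. 709 luma weights. A minimal Python rendering of those transfer functions, with arbitrary example values:

```python
# ASC CDL: per-channel Slope/Offset/Power, then one overall Saturation.
# The grade values below are arbitrary, for illustration only.

def apply_cdl(rgb, slope, offset, power, sat):
    # SOP stage: out = clamp(in * slope + offset) ^ power, per channel
    sop = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = max(v * s + o, 0.0)  # clamp negatives before the power function
        sop.append(v ** p)
    # Saturation stage: blend each channel toward Rec. 709 luma
    luma = 0.2126 * sop[0] + 0.7152 * sop[1] + 0.0722 * sop[2]
    return [luma + sat * (c - luma) for c in sop]

graded = apply_cdl([0.18, 0.18, 0.18],
                   slope=[1.1, 1.0, 0.95],
                   offset=[0.01, 0.0, -0.01],
                   power=[0.9, 1.0, 1.0],
                   sat=1.2)
print(graded)
```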

You could transport those numbers from an on-set setup using a dailies creation tool, like ColorFront for example. You could then use the ASC CDL to establish your dailies look during the shoot (not while you’re actually shooting, but with the DIT), settling on a chosen look that could then be applied to dailies and used for VFX.
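
Those numbers travel between systems as small XML documents. The sketch below parses a representative ColorCorrection snippet back into the slope, offset, power and saturation values a grading system would apply; the ID and grade values are invented, and real interchange files may also carry an ASC namespace and wrap several corrections in a ColorDecisionList.

```python
# Parse a representative ASC CDL ColorCorrection document (values invented).
import xml.etree.ElementTree as ET

CC_XML = """\
<ColorCorrection id="scene042_take3">
  <SOPNode>
    <Slope>1.1 1.0 0.95</Slope>
    <Offset>0.01 0.0 -0.01</Offset>
    <Power>0.9 1.0 1.0</Power>
  </SOPNode>
  <SatNode>
    <Saturation>1.2</Saturation>
  </SatNode>
</ColorCorrection>
"""

def parse_cc(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)
    triple = lambda tag: [float(x) for x in root.findtext(f".//{tag}").split()]
    return {
        "slope": triple("Slope"),
        "offset": triple("Offset"),
        "power": triple("Power"),
        "sat": float(root.findtext(".//Saturation")),
    }

print(parse_cc(CC_XML))
```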

Then, when you make your way into the final color grading session with the final cut — or whenever you start doing master color grading going back to the original camera source — you would have these initial grading corrections as starting-point references. This gives you the possibility of continuing that color grading process using all the sophistication of a full color corrector, whether it’s power windows or secondary color correction. Whatever you felt you needed to finalize the look.

I was advocating this in the ASC Technology Committee, as it was called then (it has since been renamed the Motion Imaging Technology Council, or MITC). We needed a solution like this, and a group of us got together and decided that we would do it. There were plenty of people who were skeptical: “Why would you do something like that when we already have lift, gain and gamma? Why would any of the manufacturers of the different color grading systems integrate this into their systems? Would it not impinge upon their competitive advantage? If their own lift gain gamma worked perfectly well for them, why would they want to use the ASC CDL?”

We live in a much more fragmented post world, and I saw that becoming even more so with the advances of digital. The ASC CDL would be a “look unifier” that would establish initial look parameters. You would be able to have control over the look at every step of the way.

I’m assuming that the cinematographer would work with the director and editor, and they would assess certain changes that probably should be made because we’re now looking at cut sequences and what we had thought would be most appropriate when we were shooting is now in the context of an edit and there may need to be some changes and adjustments.

Were you involved in ACES? Was it a similar impetus for ACES coming about? Or was it just spawned because visual effects movies became so big and important with the advent of digital filmmaking?
It was a bit of both, including productions without VFX. So I would say that initially it was driven by the fact that there really should be a standardized color management system. Let me give you an example of what I’m talking about. When we were all photochemical and basically shooting with Kodak stock, we were working with film-based Kodak color science.

It’s a color science that everybody knew and understood, even if they didn’t understand it from an engineering photochemical point of view, they understood the effects of it. It’s what helps enable the look and the images that we wanted to create.

That was a color management system that was built into film. That color science system could have been adapted into the digital world, but Kodak resisted that because of the threat to negatives. If you apply that film color science to digital cameras, then you’re making digital cameras look more like film and that could pose a threat to the sale of color film negative.

So that’s really where the birth of ACES came about — to create a universal, unified color management system that would be appropriate anywhere you shot and would have the widest possible color gamut. And it supports any camera or display technology, because it would always have a more expanded (future-proofing) capability within which digital camera and display technologies would work effectively, efficiently, accurately, reliably and predictably.

Very early on, my ASC Technology Committee (now called the Motion Imaging Technology Council) got involved with ACES development and became very excited about it. It was the missing ingredient needed to make the end-to-end digital workflow the success we thought it could become. Because we could no longer rely on film-based color science, we had to replicate or emulate it with a color management system that could accommodate everything we wanted to do creatively. So ACES became that color management system.

So, in addition to becoming the first cross-platform primary color grading tool, the ASC CDL became the first official ACES look modification transform. ACES is not a color grading tool; it’s a color management system, and you still need color grading tools to go with it. So you have the color management with ACES and the color grading with the ASC CDL, and the combination of the two is the look management system; it takes both to make that work. It’s not that the ASC CDL is the only tool you use for color grading, but it has the portable, cross-platform ability to carry the color grade from dailies through visual effects up to the final color grade, when you’re working with a sophisticated color corrector.
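
That cross-platform portability comes from how the CDL values travel: as a small piece of XML (a .cc or .ccc file, or embedded in ALEs and EDLs) that any grading system can read. Here is a minimal sketch of parsing a ColorCorrection file with Python’s standard library; real files often carry an XML namespace, which this toy parser ignores.

```python
# A minimal sketch of reading CDL values from a ColorCorrection (.cc)
# XML snippet. Tag names follow the published CDL layout; real files
# may include an XML namespace, omitted here for brevity.
import xml.etree.ElementTree as ET

CC_XML = """<ColorCorrection id="shot_042">
  <SOPNode>
    <Slope>1.1 1.0 0.95</Slope>
    <Offset>0.01 0.0 -0.01</Offset>
    <Power>1.0 1.0 1.05</Power>
  </SOPNode>
  <SatNode>
    <Saturation>0.9</Saturation>
  </SatNode>
</ColorCorrection>"""

def parse_cc(xml_text):
    root = ET.fromstring(xml_text)
    sop = root.find("SOPNode")
    triple = lambda tag: tuple(float(x) for x in sop.find(tag).text.split())
    return {"id": root.get("id"),
            "slope": triple("Slope"),
            "offset": triple("Offset"),
            "power": triple("Power"),
            "saturation": float(root.find("SatNode/Saturation").text)}

print(parse_cc(CC_XML))
```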

What do you see for the future of cinematography and the merging of the worlds of post and on-set work? And what do you see as the future challenges of integrating metadata while maintaining the creative intent?
We’re very involved in metadata at the moment. Metadata is a crucial part of making all this work, as you well know. In fact, we worked with the Academy on the common 3D LUT format, something that would again have cross-platform consistency and predictability. Its functionality and its scope of use would be better understood if everyone were using it. It’s a work in progress. Metadata is critical.
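
Whatever the container format, applying a 3D LUT comes down to the same operation: find the eight lattice points surrounding the input color and blend them. Here is a minimal sketch of trilinear interpolation; the identity LUT is a stand-in for real grading data.

```python
# A minimal sketch of applying a 3D LUT via trilinear interpolation.
# The container (e.g., an XML or .cube wrapper) varies; evaluation
# looks like this regardless. Identity LUT used as placeholder data.

def identity_lut(n):
    step = 1.0 / (n - 1)
    return [[[(r * step, g * step, b * step) for b in range(n)]
             for g in range(n)]
            for r in range(n)]

def apply_lut3d(lut, rgb):
    n = len(lut)
    coords, fracs = [], []
    for v in rgb:                      # locate the cell and the
        x = min(max(v, 0.0), 1.0) * (n - 1)   # fraction inside it
        i = min(int(x), n - 2)
        coords.append(i)
        fracs.append(x - i)
    ri, gi, bi = coords
    rf, gf, bf = fracs
    out = []
    for c in range(3):                 # blend the 8 surrounding points
        acc = 0.0
        for dr in (0, 1):
            for dg in (0, 1):
                for db in (0, 1):
                    w = ((rf if dr else 1 - rf) *
                         (gf if dg else 1 - gf) *
                         (bf if db else 1 - bf))
                    acc += w * lut[ri + dr][gi + dg][bi + db][c]
        out.append(acc)
    return tuple(out)

lut = identity_lut(33)                      # 33-point cube, a common size
print(apply_lut3d(lut, (0.25, 0.5, 0.75)))  # identity: ~(0.25, 0.5, 0.75)
```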

I think as we expand the canvas and the palette of image making, you have to understand what these technologies are capable of doing so that you can incorporate them into your vision. If your creative vision includes doing certain things, then you have to understand the potential of what these technologies can do to support that vision. A very good example in the current climate is HDR.

That’s very controversial in a lot of ways, because the TV manufacturers really would love to have everything just jump off the screen to make it vibrant and exciting. From a storytelling point of view, however, it may not be appropriate to push HDR imagery to where it distracts from the story.
Well, it depends on how it’s done and how you are able to use that extended dynamic range when you have your bright highlights. You can use foreground/background relationships with bigger depth of field to tremendous effect. Those images have a visceral presence, a dimensionality, when, for example, you see the bright view outside a window.

When you have an extended dynamic range of scene tones adding dimensional depth to the image, you can choreograph and stage the blocking of your narrative storytelling with the kind of images that take advantage of those possibilities.

So HDR needs to be thought of as something that’s integral to your storytelling, not just something that’s there because you can do it; that’s when it can become a distraction. When you’re on set, you need a reference monitor that is able to show and convey all the different tonal and color elements you’re working with to create your look, from HDR to wider color gamut, whatever that may be, so that you feel comfortable that you’ve made the correct creative decision.
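
For a sense of what such a reference monitor is actually mapping, the PQ curve (SMPTE ST 2084) used by HDR10 and Dolby Vision converts absolute luminance in nits into a 0–1 signal value. Below is a sketch using the constants from the published standard; production pipelines would lean on established color-science libraries rather than hand-rolled math.

```python
# SMPTE ST 2084 (PQ) encoding: absolute display luminance in nits ->
# nonlinear 0-1 signal. Constants are from the published standard.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    y = min(max(nits, 0.0), 10000.0) / 10000.0  # PQ tops out at 10,000 nits
    yp = y ** m1
    return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

# SDR diffuse white vs. an HDR highlight: much of the code range is
# reserved for the extended highlights described above.
print(round(pq_encode(100), 3))    # ~0.508
print(round(pq_encode(1000), 3))   # ~0.752
```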

With virtual production techniques, you can incorporate some of that into your live-action shooting on set with that kind of compositing, just like James Cameron started with Avatar. If you want to do that with HDR, you can. The sky is the limit in terms of what you can do with today’s technology.

So these things are there, but you need to pull them all together into your production workflow to make sure you can comfortably integrate them in the appropriate way at the appropriate time, and that it all conforms to what the creative vision for the final result needs to be. Then remarkable things can happen. The aesthetic poetry of the image can visually drive the narrative, and you can say things with these images without having to be expositional in your dialogue. You can make it more of an experientially immersive involvement with the story. I think that’s something we’re headed toward, and it’s going to make narrative storytelling very interesting and much more dynamic.

Certainly, especially with the advancements in consumer technology: better panels, high dynamic range developments, and Dolby Vision and Atmos audio coming into the home. It’s really an amazing time to be involved in the industry; it’s so fun and challenging.

It’s a very interesting time, and a learning curve needs to happen. That’s what’s driven me from the very beginning, and why I think our ASC Motion Imaging Technology Council has been so successful in its 16 years of continuous operation, influencing the development of some of these technologies in very meaningful ways. But always with the intent that these new imaging technologies are there to better serve the creative intent of the filmmaker. The technology serves the art. It’s not about the technology per se; it’s about the technology as the enabling component of the art. It enables the art to happen and expands its scope and possibility to broader canvases, with wider color gamuts, in ways that have never been experienced or possible before.


Barry Goch is a Finishing Artist at The Foundation and a Post Production Instructor at UCLA Extension. You can follow him on Twitter at @gochya.

Red intros LCD touch monitor for DSMC2 cameras

Red Digital Cinema has introduced the DSMC2 Touch 7-inch Ultra-Brite LCD monitor to its line of camera accessories. It offers an optically bonded touchscreen with Gorilla Glass that allows for what the company calls “intuitive ways to navigate menus, adjust camera parameters and review .R3D clips directly out of the camera.”

The monitor offers a brighter high-definition viewing experience for recording and viewing footage on DSMC2 camera systems, even in direct sunlight. The 1920×1200 display panel provides 2,200 nits of brightness to overcome viewing difficulties in bright outdoor environments, along with high pixel density (323ppi) and a 1200:1 contrast ratio.
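
As a quick sanity check, that pixel-density figure follows directly from the resolution and the 7-inch diagonal:

```python
# Deriving the quoted pixel density from the panel specs
import math

width_px, height_px, diagonal_in = 1920, 1200, 7.0
ppi = math.hypot(width_px, height_px) / diagonal_in  # diagonal pixels / inches
print(round(ppi))  # ~323
```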

The Ultra-Brite display mounts to Red’s DSMC2 Brain or other 1/4-20 mounting surfaces, and provides a LEMO connection to the camera, making it an ideal monitoring option for gimbals, cranes and cabled remote viewing. Shooters can use a DSMC2 LEMO Adaptor A in conjunction with the Ultra-Brite display for convenient mounting options away from the DSMC2 camera Brain.

Check out a demo of the new monitor, priced at $3,750, here.

Lucid and Eys3D partner on VR180 depth camera module

EYS3D Microelectronics Technology, the company behind embedded camera modules in some top-tier AR/VR headsets, has partnered with the AI startup Lucid, which will power EYS3D’s next-generation depth-sensing camera module, Axis. This means that a single, small, handheld device can capture accurate 3D depth maps with up to a 180-degree field of view at high resolution, allowing content creators to scan, reconstruct and output precise 3D point clouds.

This new camera module, which was demoed for the first time at CES, will give developers, animators and game designers a way to transform the physical world into a virtual one, ramping up content for 3D, VR and AR, all with superior performance in resolution and field of view at a lower cost than some technologies currently available.

A device that captures the environment exactly as you perceive it, but enhanced with precise depth, distance and understanding, could help eliminate the boundaries between what you see in the real world and what you can create in the VR and AR worlds. This is what the Lucid-powered EYS3D Axis camera module aims to bring to content creators, as they gain the “super power” of transforming anything in their vision into a 3D object or scene that others can experience, interact with and walk in.

What was previously possible only with eight to 16 high-end DSLR cameras and expensive software or depth sensors is now combined into one tiny camera module with stereo lenses paired with IR sensors. Axis will cover up to a 180-degree field of view while providing millimeter-accurate 3D in point cloud or depth map format. The device offers a simple plug-and-play experience through USB 3.1 Gen1/2 and supported Windows and Linux software suites, allowing users to develop their own depth applications, such as 3D-reconstructing an entire scene, scanning faces into 3D models or simply determining how far away an object is.
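
Depth from a stereo module like this comes down to triangulation: depth = focal length × baseline / disparity. Here is a hedged sketch using OpenCV’s block matcher on a rectified image pair; the file paths, focal length and baseline are illustrative placeholders, not published Axis specifications.

```python
# A sketch of stereo depth: triangulate from the disparity between two
# lenses. Inputs must be a rectified 8-bit grayscale pair; paths and
# calibration values below are hypothetical placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

FOCAL_PX = 700.0    # focal length in pixels (hypothetical)
BASELINE_M = 0.06   # lens separation in meters (hypothetical)

# depth = f * B / d, guarding against invalid (non-positive) disparity
depth_m = np.where(disparity > 0, FOCAL_PX * BASELINE_M / disparity, 0)
print("center-pixel depth (m):",
      depth_m[depth_m.shape[0] // 2, depth_m.shape[1] // 2])
```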

Lucid’s AI-enhanced 3D/depth solution, known as 3D Fusion Technology, is currently deployed in many devices, such as 3D cameras, robots and mobile phones, including the Red Hydrogen One, which just launched through AT&T and Verizon nationwide.

EYS3D’s new depth camera module powered by Lucid will be available in Q3 2019.

Review: iOgrapher Multi Case for mobile filmmaking

By Brady Betzel

Thanks to the amazing iPhone X, Google Pixel and Samsung Galaxy, almost everyone has a high-end video camera on their person at all times, and this is helping spur on mobile filmmaking and vlogging.

From YouTube to Instagram to movies like Unsane (Steven Soderbergh) or Tangerine (Sean Baker) — and regardless of whether you think a $35,000 camera setup tells a story better than a $1,000 cell phone (looking at you, Apple iPhone XS Max) — mobile filmmaking is here to stay and will only get better.

iOgrapher’s latest release is the iOgrapher Multi Case, a compact mobile filmmaking mounting solution that works with today’s most popular phones. iOgrapher has typically created solutions that were tied to the mobile device being used for filmmaking, such as an iPhone, the latest Samsung Galaxy phones, iPads or even action cameras like a GoPro Hero 7 Black.

With the new iOgrapher Multi Case, you can fit any mobile device that measures more than 5 ½” x 2 ¼” and less than 6 ½” x 3 ⅜”. Unfortunately, you won’t be fitting an iPad or a GoPro in the iOgrapher Multi Case, but don’t fret! iOgrapher makes rigs for those as well. On top of the Multi Case are two cold shoe mounts for lights, microphones or any other device, like a GoPro. To mount gear with ¼”-20 screw mounts in the cold shoes, you will need a cold shoe-to-¼”-20 adapter, which is available on iOgrapher’s accessory page. You can also find these at Monoprice or Amazon for cheap.

And if you are looking to mount even more gear, you may want to pick up some extra cold shoe adapters, which can be attached to the handles of the iOgrapher Multi Case via the additional ¼”-20 screw mounts. The mounts on the handles are great for adding extra lighting or microphones. I’ve even found that if you are doing some behind-the-scenes filming or need another angle, a small camera like a GoPro can be easily mounted and aimed. With all this mounting, you should assume you’ll be using the iOgrapher on a sturdy tripod. Just for fun, I mounted the iOgrapher Multi Case onto a GoPro 3-Way Grip, which can also be used as a light tripod. It wasn’t exactly stable, but it worked. I wouldn’t suggest using it for more than an emergency shooting situation, though.

On the flip side (all pun intended), the iOgrapher can be solidly mounted vertically via the ¼”-20 screw mounts on the handles. With Instagram making headway with vertical video in its Instagram Stories, iOgrapher took that idea and built it into the Multi Case, further cementing grumbling from the old folks who just don’t get vertical video.

Testing
I tried out both a Samsung Galaxy S8+ and an iPhone 7+, with their cases on, inside the iOgrapher Multi Case. Both fit. The iPhone 7+ was stretching the boundaries of the Multi Case, but it did fit and worked well. Phones are inserted into the Multi Case via a spring-loaded bottom piece: from the left side (or the top, if you are shooting vertically), you push the bottom of the mobile device into the covered corner slots until the opposite side can be secured under the edge of the Multi Case. It’s really easy.

I was initially concerned with the spring loading of the case; I wasn’t sure the springs would be resilient enough to handle the constant pulling in and out of phones, but they are high quality and held up beautifully. I tried inserting my phones tons of times and didn’t notice any issues with the springs or my phones.

Take care when inserting your phone into the Multi Case if you have a protective shield on the screen of your device. If you aren’t extra careful, it can pull or snag on the cover, especially with the tight fit of the case. Just pay attention and there will be nothing to worry about. The simple beauty of the iOgrapher is that the wider grip gives you a larger area to distribute any shaking coming from your hands, essentially helping stabilize your filmmaking without the need for a full-fledged gimbal.

If you accidentally drop your iOgrapher, you may get a scratch, but for the most part they are built sturdy and can withstand punishment, whether it’s from your four-year-old or from the weather. If you want to get a little fancy, you can buy affordable lights like the Litra Torch (check out my review) to attach to the cold shoe mounts, or even a Rode microphone (don’t forget the TRS-to-TRRS adapter if you are plugging into an iPhone), and you are off and running.

Summing Up
I have been really intrigued with iOgrapher’s products since day one. They are an affordable and sturdy way to jump into filmmaking using cameras everyone carries with them every day: their phones.

Whether you are a high school student looking to get steady and professional mobile video, or a journalist looking for a quick way to make the most of your shots with just a phone, light, mic and tripod mount, the iOgrapher Multi Case will unlock your mobile filmmaking potential.

The iOgrapher Multi Case is a very durable protective case for your mobile filmmaking devices, and a steal at $79.99. If you are a parent looking for an inexpensive way to tease out your child’s interest in video, take a look at www.iographer.com and grab a few accessories, like a Manfrotto light and a Rode VideoMicro, to add some subtle lighting and pick up the best quality audio.

Make sure to check out iOgrapher creator Dave Basulto’s demo of the iOgrapher Multi Case, including trying out the fit of different phones.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.