

Review: GoPro Hero 7 Black action camera

By Brady Betzel

Every year GoPro offers a new iteration of its camera. One of the biggest past upgrades was from the Hero 4 to the Hero 5, with an updated body style, waterproofing without needing external housing and minimal stabilization. That was one of the biggest… until now.

The Hero 7 Black is by far the best upgrade GoPro users have seen, especially if you are sitting on a Hero 5 or earlier. I’ll tell you up front that the built-in stabilization (called Hypersmooth) alone is worth the Hero 7 Black’s $399 price tag, but there are a ton of other features that have been upgraded and improved.

There are three versions of the Hero 7: Black for $399, Silver for $299 and White for $199. The White is the lowest priced Hero 7 and includes features like 1080p @ 60fps video recording, a built-in battery, waterproofing to 33 feet deep without extra housing, standard video stabilization, 2x slow-mo (1440p/1080p @ 60fps), video recording up to 40Mb/s (1440p), two-mic audio recording, 10MP photos and 15/1 burst photos. After reading that you can surmise that the Hero 7 White is as basic as it gets; GoPro even skipped 24fps video recording, ProTune and a front LCD display. But that doesn’t mean the Hero 7 White is a throwaway; what I love about the latest update to the Hero line is the simplicity of operating the menus. In previous generations, the GoPro Hero menus were difficult to use and would often cause me to fumble shots. The Hero 7 menu has been streamlined for a much simpler mode selection process, making the Hero 7 White a basic and relatively affordable waterproof GoPro.

The Hero 7 Silver can be purchased for $299 and has everything the Hero 7 White has, plus some extras, including 4K video recording at 30fps up to 60Mb/s, 10MP photos with wide dynamic range to bring out details in the highlights and shadows, and GPS location tagging to show you where your videos and photos were taken.

The Hero 7 Black
The Hero 7 Black is the big gun in the GoPro Hero 7 lineup. For anyone who wants to shoot multiple frame rates; harness a flat picture profile using ProTune to have extended range when color correcting; record ultra-smooth video without an external gimbal or any post processing; or shoot RAW photos, the Hero 7 Black is for you.

The Hero 7 Black has all of the features of the White and Silver plus a bunch more, including the front-facing LCD display. One of the biggest still-photo upgrades is the ability to shoot 12MP photos with SuperPhoto. SuperPhoto is essentially a “make my image look like the GoPro photos on Instagram” setting. It’s an auto-image processor that will turn good photos into awesome photos. Essentially, it’s an HDR mode that pulls as much latitude as possible out of the shadows and highlights while also applying noise reduction.
Beyond SuperPhoto, the Hero 7 Black has burst rates from 3/1 up to 30/1; a timelapse photo function with intervals ranging from 0.5 seconds to 60 seconds; the ability to shoot RAW photos in GPR format alongside JPG; the ability to shoot video in 4K at 60fps, 30fps and 24fps in wide mode, as well as 30fps and 24fps in SuperView mode (essentially ultra-wide angle); 2.7K wide video up to 120fps and down to 24fps in linear view (no wide-angle warping); all the way down to 720p in wide at 240fps.

The Hero 7 records in both MP4 H.264/AVC and H.265/HEVC formats at up to 78Mb/s (4K). The Hero 7 Black has a bunch of additional modes, including Night Photo; Looping; Timelapse Photo; Timelapse Video; Night Lapse Photo; 8x Slow Mo and Hypersmooth stabilization. It has Wake on Voice commands, as well as live streaming to Facebook Live, Twitch, Vimeo and YouTube. It also features TimeWarp video (which I will talk more about later); a GP1 processor created by GoPro; advanced metadata that the GoPro app uses to create videos of just the good parts (like smiling photos); ProTune; Karma compatibility; dive-housing compatibility; three-mic stereo audio; RAW audio captured in WAV format; the ability to plug in an external mic with the optional 3.5mm audio mic-in cable; and HDMI video output with a micro HDMI cable.
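If you are trying to work out how much card space those bitrates translate into, the conversion is simple arithmetic. Here is a rough sketch (plain Python, ignoring audio tracks and container overhead, so the numbers are approximate) rather than anything from GoPro’s spec sheet.

```python
# Rough card-space estimate from a video bitrate.
# Assumes the quoted bitrate covers the video stream only.

def megabytes_per_minute(bitrate_mbps: float) -> float:
    """Convert a bitrate in megabits per second to megabytes per minute."""
    return bitrate_mbps / 8 * 60

print(megabytes_per_minute(78))  # Hero 7 Black 4K at 78Mb/s -> ~585 MB per minute
print(megabytes_per_minute(40))  # Hero 7 White at 40Mb/s    -> ~300 MB per minute
```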

I really love the GoPro Hero 7 and consider it a must-buy if you are on the edge about upgrading an older GoPro camera.

Out of the Box
When I opened the GoPro Hero 7 Black, I was immediately relieved that it was the same dimensions as the Hero 5 and 6, since I have access to the GoPro Karma drone, Karma gimbal and various accessories. (As a side note, the Hero 7 White and Silver are not compatible with the Karma drone or gimbal.) I quickly plugged in the Hero 7 Black to charge it, which only took half an hour; when fully drained, the Hero 7 takes a little under two hours to charge.

I was excited to try the new built-in stabilization feature Hypersmooth, as well as the new stabilized in-camera timelapse creator, TimeWarp. I received the Hero 7 Black around Halloween so I took it to an event called “Nights of the Jack” at King Gillette Ranch in Calabasas, California, near Malibu. It took place after dark and featured lit-up jack-o-lanterns, so I figured I could test out the TimeWarp, Hypersmooth and low-light capabilities in one fell swoop.

The results were really incredible. I used a clamp mount to hold the camera onto the kids’ wagon and just hit record. When I stopped recording, the GoPro finished processing the TimeWarp video and I was ready to view it or share it. Overall, the quality of the video and the low-light recording were pretty good — not great, but good. You can check out the video on YouTube.

The stabilization was mind-blowing, especially considering it is electronic image stabilization (EIS), which is software-based, rather than optical stabilization, which is hardware-based. Hardware-based stabilization is typically preferred to software-based stabilization, but GoPro’s EIS is incredible. For most shooting scenarios, the built-in stabilization will be amazing — everyone who watches your clips will think that you are using a hardware gimbal. It’s that good.

The Hero 7 Black has a few options for TimeWarp mode to keep the video length down — you can choose different speeds: 2x, 5x, 10x, 15x and 30x. For example, 2x will take one minute of footage and turn it into 30 seconds, and 30x will take five minutes of footage and turn it into 10 seconds. Think of TimeWarp as a stabilized timelapse. In terms of resolution, you can choose a 16:9 or 4:3 aspect ratio at 4K, 1440p or 1080p. I always default to 1080p if posting on Instagram or Twitter, since you can’t really see the 4K difference there, and it saves all my data bits and bytes for better image fidelity.
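If you want to plan your clip lengths before you start rolling, the speed factors above map directly to output duration. Here is a quick back-of-the-envelope sketch (plain Python, nothing GoPro-specific) of that relationship.

```python
# How a TimeWarp speed factor turns recording time into finished clip length.

def timewarp_output_seconds(recording_seconds: float, speed: float) -> float:
    """Return the approximate length of the finished TimeWarp clip."""
    return recording_seconds / speed

print(timewarp_output_seconds(60, 2))       # one minute at 2x   -> ~30 seconds
print(timewarp_output_seconds(5 * 60, 30))  # five minutes at 30x -> ~10 seconds
```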

If you’re wondering why you would use TimeWarp over Timelapse, there are a couple of differences. TimeWarp will create a smooth video when walking, riding a bike or generally moving around because of the Hypersmooth stabilization. Timelapse will act more like a camera taking pictures at a certain interval to show a passage of time (say from day to night) and will play back a little more choppily. Check out a sample day-to-night timelapse I filmed using the Hero 7 Black set to Timelapse on YouTube.

So beyond TimeWarp, what else is different? Well, just plain shooting 4K at 60fps — you now have the ability to enable EIS stabilization where you couldn’t on the GoPro Hero 6 Black. It’s a giant benefit for anyone shooting 4K in the palm of their hands who wants to slow their 4K down by 50% and retain smooth motion with the stabilization already done in-camera. This is a huge perk in my mind. The image processing is very close to what the Hero 6 produces and quite a bit better than what the Hero 5 produces.

When taking still images, the low-light ability is pretty incredible. With the new SuperPhoto setting you can get that signature high saturation and contrast with noise reduction. It’s a great setting, although I noticed the subject in focus cannot be moving too fast or you will get some purple fringing. When used under the correct circumstances, SuperPhoto is the next iteration of HDR.

I was surprised how much I used the GoPro Hero 7 Black’s auto-rotating menu feature when the camera was held vertically. The Hero 6 could shoot vertically, but with the addition of the auto-rotating menu, the Hero 7 Black encourages more vertical photos and videos. I found myself taking more vertical photos, especially outdoors — getting a lot more sky in the shots, which adds an interesting perspective.

Summing Up
In the end, the GoPro Hero 7 Black is a must-buy if you are looking for the latest and greatest action-cam or are on the fence about upgrading from the Hero 5 or 6. The Hypersmooth video stabilization is incredible. If you want to take it a step further, combining it with a Karma gimbal will give you a silky smooth shot.

I really fell in love with the TimeWarp function. Whether you are a prosumer filming your family at Disneyland or shooting a show in the forest, a quick TimeWarp is a great way to film some dynamic b-roll without any post processing.

Don’t forget the Hero 7 Black has voice control for hands-free operation. On the outside, the Hero 7 Black is actually black in color, unlike the Hero 6 (which is gray), and also has the number “7” labeled on it so it’s easy to find in your case.

I would really love for GoPro to make these cameras charge wirelessly on a mat like my Galaxy phone. It seems like the GoPro action cameras would be great to just throw on a wireless charger and also use the charger as a file-transfer station. It gets cumbersome to remove a bunch of tiny memory cards or use a bunch of cables to connect your cameras, so why not make it wireless?! I’m sure they are thinking of things like that; in the meantime, focusing on stabilization was the right move in my opinion.

If GoPro can continue to make focused and powerful updates to their cameras, they will be here for a long time — and the Hero 7 is the right way to start.

Check out GoPro’s website for more info, including accessories like the Travel Kit, which features a little mini tripod/handle (called “Shorty”), a rubberized cover with a lanyard and a case for $59.99.

If you need the ultimate protection for your GoPro Hero 7 Black, look into GoPro Plus, which, for $4.99 a month, gives you VIP support; automatic cloud backup; access for editing on your phone from anywhere; and no-questions-asked camera replacement for up to two cameras per year of the same model when something goes wrong. Compare all the new GoPro Hero 7 models on GoPro’s website.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Director Peter Farrelly gets serious with Green Book

By Iain Blair

Director, producer and writer Peter Farrelly is best known for the classic comedies he made with his brother Bob: Dumb and Dumber; There’s Something About Mary; Shallow Hal; Me, Myself & Irene; The Three Stooges; and Fever Pitch. But for all their over-the-top, raunchy and boundary-pushing comedy, those movies were always sweet-natured at heart.

Peter Farrelly

Now Farrelly has taken his gift for heartfelt comedy and put his stamp on a very different kind of film, Green Book, a racially charged feel-good drama inspired by a true friendship that transcended race, class and the 1962 Mason-Dixon line.

Starring Oscar-nominee Viggo Mortensen and Oscar-winner Mahershala Ali, it tells the fact-based story of the ultimate odd couple: Tony Lip, a bouncer from The Bronx, and Dr. Don Shirley, a world-class black pianist. Lip is hired to drive and protect the worldly and sophisticated Shirley during a 1962 concert tour from Manhattan to the Deep South, where they must rely on the titular “Green Book” — a travel guide to safe lodging, dining and business options for African-Americans during the segregation era.

Set against the backdrop of a country grappling with the valor and volatility of the civil rights movement, the two men are confronted with racism and danger as they challenge long-held assumptions, push past their seemingly insurmountable differences and embrace their shared humanity.

The film also features Linda Cardellini as Tony Vallelonga’s wife, Dolores, along with Dimiter D. Marinov and Mike Hatton as two-thirds of The Don Shirley Trio. The film was co-written by Farrelly, Nick Vallelonga and Brian Currie, and reunites Farrelly with editor Patrick J. Don Vito, with whom he worked on the Movie 43 segment “The Pitch.” Farrelly also collaborated for the first time with cinematographer Sean Porter (read our interview with him), production designer Tim Galvin and composer Kris Bowers.

I spoke with Farrelly about making the film, his workflow and the upcoming awards season. After its Toronto People’s Choice win and Golden Globe nominations (Best Director, Best Musical or Comedy Motion Picture, Best Screenplay, Best Actor for Mortensen, Best Supporting Actor for Ali), Green Book looks like a very strong Oscar contender.

You told me years ago that you’d love to do a more dramatic film at some point. Was this a big stretch for you?
Not so much, to be honest. People have said to me, “It must have been hard,” but the hardest film I ever made was The Three Stooges… for a bunch of reasons. True, this was a bit of a departure for me in terms of tone, and I definitely didn’t want it to get too jokey — I tend to get jokey so it could easily have gone like that.  But right from the start we were very clear that the comedy would come naturally from the characters and how they interacted and spoke and moved, and so on, not from jokes.

So a lot of the comedy is quite nuanced, and in the scene where Tony starts talking about “the orphans” and Don explains that it’s actually about the opera Orpheus, Viggo has this great reaction and look that wasn’t in the script, and it’s much funnier than any joke we could have made there.

What sort of film did you set out to make?
A drama about race and race relations set in a time when it was very fraught, with light moments and a hopeful, uplifting ending.

It has some very timely themes. Was that part of the appeal?
Absolutely. I knew that it would resonate today, although I wish it didn’t. What really hooked me was their common ground. They really are this odd couple who couldn’t be more different — an uneducated, somewhat racist Italian bouncer, and this refined, highly educated, highly cultured doctor and classically trained pianist. They end up spending all this time together in a car on tour, and teach each other so much along the way. And at the end, you know they’ll be friends for life.

Obviously, casting the right lead actors was crucial. What did Viggo and Mahershala bring to the roles?
Well, for a start they’re two of the greatest actors in the world, and when we were shooting this I felt like an observer. Usually, I can see a lot of the actor in the role, but they both disappeared totally into these characters — but not in some method-y way where they were staying in character all the time, on and off the set. They just became these people, and Viggo couldn’t be less like Tony Lip in real life, and the same with Mahershala and Don. They both worked so hard behind the scenes, and I got a call from Steven Spielberg when he first saw it, and he told me, “This is the best buddy movie since Butch Cassidy and the Sundance Kid,” and he’s right.

It’s a road picture, but didn’t you end up shooting it all in and around New Orleans?
Yes, we did everything there apart from one day in northern New Jersey to get the fall foliage, and a day of exteriors in New York City with Viggo for all the street scenes. Louisiana has everything, from rolling hills to flats. We also found all the venues and clubs they play in, along with mansions and different looks that could double for places like Pennsylvania, Ohio, Indiana, Iowa, Missouri, Kentucky and Tennessee, as well as the Carolinas and the Deep South.

We shot for just 35 days, and Louisiana has great and very experienced crews, so we were able to work pretty fast. Then for scenes like Carnegie Hall, we used CGI in post, done by Pixel Magic, and we were also amazingly lucky when it came to the snow scenes set in Maryland at the end. We were all ready to use fake snow when it actually started snowing and sticking. We got a good three, four inches, which they told us hadn’t happened in a decade or two down there.

Where did you post?
We did most of the editing at my home in Ojai, and the sound at FotoKem, where we also did the DI with colorist Walter Volpatto.

Do you like the post process?
I love it. My favorite part of filmmaking is the editing. Writing is the hardest part, pulling the script together. And I always have fun on the shoot, but you’re always having to make sure you don’t screw up the script. So when you get to the edit and post, all the hard work is done in that sense, and you have the joy of watching the movie find its shape as you cut and add in the sound and music.

What were the big editing challenges, given there’s such a mix of comedy and drama?
Finding that balance was the key, but this film actually came together so easily in the edit compared with some of the movies I’ve done. I’ll never forget seeing the first assembly of There’s Something About Mary, which I thought was so bad it made me want to vomit! But this just flowed, and Patrick did a beautiful job.

Can you talk about the importance of music and sound in the film?
It was a huge part of the film and we had a really amazing pianist and composer in Kris Bowers, who worked a lot with Mahershala to make his performance as a musician as authentic as possible. And it wasn’t just the piano playing — Mahershala told me right at the start, “I want to know just how a pianist sits at the piano, how he moves.” So he was totally committed to all the details of the role. Then there’s all the radio music, and I didn’t want to use all the obvious, usual stuff for the period, so we searched out other great, but lesser-known songs. We had great music supervisors, Tom Wolfe and Manish Raval, and a great sound team.

We’re already heading into the awards season. How important are awards to you and this film?
Very important. I love the buzz about it because that gets people out to see it. When we first tested it, we got 100%, and the studio didn’t quite believe it. So we tested again, with “a tougher” audience, and got 98%. But it’s a small film. Everyone took pay cuts to make it, as the budget was so low, but I’m very proud of the way it turned out.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Red upgrades line with DSMC2 Dragon-X 5K S35 camera

Red Digital Cinema has further simplified its product line with the DSMC2 Dragon-X 5K S35 camera. Red also announced the DSMC2 Production Module and DSMC2 Production Kit, which are coming in early 2019. More on that in a bit.

The DSMC2 Dragon-X camera uses the Dragon sensor technology found in many of Red’s legacy cameras with an evolved sensor board to enable Red’s enhanced image processing pipeline (IPP2) in camera.

In addition to IPP2, the Dragon-X provides 16.5 stops of dynamic range, as well as 5K resolution up to 96fps in full format and 120fps at 5K 2.4:1. Consistent with the rest of Red’s DSMC2 line-up, Dragon-X offers 300MB/s data transfer speeds and simultaneous recording of Redcode RAW and Apple ProRes or Avid DNxHD/HR.

The new DSMC2 Dragon-X is priced at $14,950 and is also available as a fully configured kit priced at $19,950. The kit includes: 480GB Red Mini-Mag; Canon lens mount; Red DSMC2 Touch LCD 4.7-inch monitor; Red DSMC2 outrigger handle; Red V-Lock I/O expander; two IDX DUO-C98 batteries with VL-2X charger; G-Technology ev Series Red Mini-Mag reader; Sigma 18-35mm F1.8 DC HSM Art lens; and a Nanuk heavy-duty camera case.

Both the camera and kit are available now at red.com or through Red’s authorized dealers.

Red also announced the new DSMC2 Production Module. Designed for pro shooting configurations, this accessory mounts directly to the DSMC2 camera body and incorporates an industry standard V-Lock mount with integrated battery mount and P-Tap for 12V accessories. The module delivers a comprehensive array of video, XLR audio, power and communication connections, including support for 3-pin 24V accessories. It has a smaller form factor and is more lightweight than Red’s RedVolt Expander with a battery module.

The DSMC2 Production Module is available to order for $4,750 and is expected to ship in early 2019. It will also be available as a DSMC2 Production Kit that will include the DSMC2 Production Module and DSMC2 production top plate. The DSMC2 Production Kit is also available for order for $6,500 and is expected to ship in early 2019.

Scarlet-W owners can upgrade to DSMC2 Dragon-X for $4,950 through Red authorized dealers or directly from Red.


DP Chat: Green Book’s Sean Porter

Sean Porter has worked as a cinematographer on features, documentaries, short films and commercials. He was nominated for a Film Independent Spirit Award for Best Cinematography for his work on It Felt Like Love, and his credits include 20th Century Women, Green Room, Rough Night and Kumiko, the Treasure Hunter.

His most recent collaboration was with director Peter Farrelly on Green Book, which is currently in theaters. Set in 1962, the film follows Italian-American bouncer/bodyguard Tony Lip (Academy Award-nominee Viggo Mortensen) and world-class black pianist Dr. Don Shirley (Academy Award-winner Mahershala Ali) on a concert tour from Manhattan to the Deep South. They must rely on “The Green Book” to guide them to the few establishments that were then safe for African-Americans. Confronted with racism and danger — as well as unexpected humanity and humor — they are forced to set aside differences to survive and thrive on the journey of a lifetime.

Green Book director Peter Farrelly (blue windbreaker) with DP Sean Porter (right, brown jacket).

Porter chose the Alexa Mini mounted with Leica Summilux-C lenses to devise the look for Green Book. End-to-end post services were provided by FotoKem, from dailies at their New Orleans site to final color and deliverables in Burbank.

We spoke to him recently about his rise to director of photography and his work on Green Book:

How did you become interested in cinematography?
My relationship with cinematography, and really filmmaking, developed over many years during my childhood. I didn’t study fine art or photography in school, but discovered it later as many others do. I went in through the front door when I was probably 12 or so, and it’s been a long road.

I’m the oldest of four — two brothers and a sister. We grew up in a small town about an hour outside of Seattle, where we had a modest yard that butted up to the “back woods.” It was an event when the neighborhood kids got on bikes and rode a half mile or so to the only small convenience store around. There wasn’t much to do there, so we naturally had to be pretty inventive in our play. We’d come home from school, put on the TV, and at the time Movie Magic was airing on The Discovery Channel. I think that show honestly was a huge inspiration, not only to me but to my brothers as well, who are also visual artists. It was right before Jurassic Park changed the SFX landscape — it was a time when everything was still done photographically, by hand. There were episodes showing how these films achieved all sorts of amazing images using rather practical tools and old school artistry.

My dad was always keen on technology and he had various camcorders throughout the years, beginning with the VHS back when the recorder had to be carried separately. As the cameras became more compact and easier to use, my brothers and I would make all kinds of films, trying to emulate what we had seen on the show. We were experimenting with high-level concepts at a very young age, like forced perspective, matte paintings, miniatures (with our “giant” cat as the monster) and stop motion.

I picked up the technology bug and by the time I was in middle school I was using our family’s first PC to render chromakeys — well before I had access to NLEs. I was conning my teachers into letting me produce “video” essays instead of writing them. Later we moved closer to Seattle and I was able to take vocational programs in media production and went on to do film theory and experimental video at the University of Washington, where I think I started distilling my focus as a cinematographer.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
As I mentioned earlier, I didn’t discover film via fine art or photography, so I didn’t have that foundation of image making and color theory. I learned it all just by doing and paying attention to what I responded to. I didn’t have famous artists to lean on. You could say it was much more grassroots. My family loved popular films, especially American comedies and action adventure. We watched things like Spies Like Us, Star Wars, Indiana Jones and The Princess Bride. It was all pure entertainment, of course. I wasn’t introduced to Bergman or Fellini until much, much later. As we got older, my film language expanded and I started watching films by Lynch and Fincher. I will say that those popular ’90s films had a great combination of efficient storytelling and technical craft that still resonates with me to this day. It’s very much a sort of “blue-collar” film language.

Staying on top of the technology oscillates between an uncontrollable obsession and an unbearable chore. I’ve noticed over the years that I’m becoming less and less invigorated by the tech — many of the new tools are invaluable, but I love relying on my team to filter out the good from the hype so I can focus on how best to tell the story. Some developments you simply can’t ignore; I remember the day I saw footage in class from a Panasonic DVX100. It changed everything!

What new technology has changed the way you work (looking back over the past few years)?
I feel like the digital cameras, while continuing to get better, have slowed down a bit. There was such a huge jump between the early 2000s and the late 2000s. There’s no question digital acquisition has changed the way we make images — and it’s up for debate if it’s been a wholly positive shift. But generally, it’s been very empowering for filmmakers, especially on smaller budgets. It’s given me and my peers the chance to create cinema-quality images on projects that couldn’t afford to shoot on 16mm or 35mm. And over the last five years, the gap between digital and film has diminished, even vanished for many of us.

But if I had to single out one development it’s probably been LEDs over the last two or three years. Literally, five years ago it was all HMI and Kino Flos, and now I don’t remember the last time I touched a Kino. Sometimes we go entire jobs without firing up an HMI. The LEDs have gotten much better recently, and the control we have on set is unprecedented. It makes you wonder how we did it before!

What are some of your best practices or rules you try to follow on each job?
Every time I start a new project, I say to myself, “This time I’m going to get my shit together.” I think I’m going to get organized, develop systems, databases, Filemaker apps, whatever, and streamline the process so I can be more efficient. I’ll have a method for combining scouting photos with storyboards and my script notes so everything is in one place and I can disseminate information to relevant departments. Then I show up at prep and realize the same thing I realize every movie: They are all so, so different.

It’s an effort in futility to think you can adopt a “one-size-fits-all” mentality to preproduction. It just doesn’t work. Some directors storyboard every shot. Some don’t even make shot lists. Some want to previs every scene during the scouting process using stand-ins, others won’t even consider blocking until the actors are there, on the day. So I’ve learned that the efficiency is found in adaptation. My job is to figure out how to get inside my director’s head, see things the way they are seeing them and help them get those ideas into actions and decisions. There’s no app for that, unfortunately! I suppose I try to really listen, and not just to the words my director uses to describe things, but to the subtext and what is between the lines. I try to understand what’s really important to them so I can protect those things and fight for them when the pressure to compromise starts mounting.

Linda Cardellini as Dolores Vallelonga and Viggo Mortensen as Tony Vallelonga in “Green Book,” directed by Peter Farrelly.

On a more practical note, I read many years ago about a DP who would stand on the actor’s mark and look back toward the camera — just to be aware of what sort of environment they were putting the talent in. Addressing a stray glare or a distracting stand might make a big difference to the actor’s experience. I try to do that as often as I can.

Explain your ideal collaboration with the director when setting the look of a project.
It’s hard to reduce such an array of possible experiences down to an “ideal,” as an ideal situation for one film might not be ideal for another depending on the experience the director wants to create on set. I’ve had many different, even conflicting, “processes” with my directors because it suited that specific collaboration. Again, it’s about adapting, being a chameleon to their process. It’s not about coming in and saying, “This is the best way to do this.”

I remember with one director we basically locked ourselves in her apartment for three days and just watched films. We’d pause them now and then and discuss a shot or a scene, but a lot of the time it was just about being together, experiencing this curated body of work and creating a visual foundation for us to work from. With another director, we didn’t really watch any films at all, but we did lots and lots of testing. Camera tests, lens tests, lighting tests, filter tests, makeup and SFX tests. And we’d go into a DI suite and look at everything and talk about what was working and what wasn’t. He was also a DP, so I think that technical, hands-on approach made sense to him. I think I tested every commercially available fluorescent tube that was on the market to find the right color for that film. I’ll admit, as convenient as it would be to have a core strategy to work from, I think I would tire of it. I love walking onto a film and saying, “OK, how are we going to do this?”

Tell us about Green Book. How would you describe the overarching look of the film that you and Peter Farrelly wanted to achieve?
I think, maybe more than I want to admit, that the look of my films is a culmination of the restraints that are imparted by either myself or by production. You’re only going to have a certain amount of time and money for each scene, so calculations and compromises must be made there. You have to work with the given location, the time of day and how it’s going to be decorated by the art department, so that adds a critical layer. Peter wanted to work a certain way with his actors and have lots of flexibility, so you adapt your process to make that work. Then you give yourself certain creative constraints, and somewhere in between all those things pushing on each other, the look of the film emerges.

That sounds a little arbitrary, and Pete and I had some discussions about how it should look, but they were broad conversations. Honesty and authenticity were very important to Pete. He didn’t want things to ever look or feel disingenuous. My very first conversation with him after I was hired was about the car work. He was getting pressure to shoot it all on stage with LED screens. I was honest with him. I told him he’d probably get more time with his actors, and more predictable results, on stage, but he’d get more realism in the look and in the performances by dragging the entire company out onto the open road and battling the elements.

So we shot all the car work practically, save for a few specific night scenes. I took his words to heart and tried to shape the look out of what was authentic to the time. My gaffer and I researched what lighting fixtures were used then — it wasn’t like it is now with hundreds of different light sources. Back then it was basically tungsten, fluorescent, neon, mercury and sodium. We limited our palette to those colors and tuned all our fixtures accordingly. I also avoided stylistic choices that would have made the film feel dated or “affected” — the production design, wardrobe and MCU departments did all of that. Pete and I wanted the story to feel just as relevant now as it did then, so I kept the images clean and largely unadulterated.

How early did you get involved in the production?
I came on about five weeks before shooting. I prepped for one week and then we were all sent home! Some negotiations had stalled production and for several weeks I didn’t know if we would start up again. I’m very grateful everyone made it work so we could make the film.

How did you go about choosing the right camera and lenses for Green Book?
While 35mm would have been a great choice aesthetically for the film, there were some real production advantages to shooting digitally. As we were shooting all the car work practically, it was my prerogative to get as much of the coverage inside the car accomplished at a go. Changing lighting conditions, road conditions and tight schedules prohibited me from shooting an angle, then pulling over and re-rigging the camera. We had up to three Alexa Mini cameras inside the car at once, and many times that was all the coverage planned for the scene, save for a couple cutaways. This allowed us to get multi-page scenes done very efficiently while maintaining light continuity, keeping the realism of the landscapes and capturing those happy (and sometimes sad) accidents.

I chose some very clean, very fast, and very portable lenses: the Leica Summilux-Cs. I used to shoot stills with various Leica film cameras and developed an affinity for the way the lenses rendered. They are always sharp, but there’s some character to the fall off and the micro-contrast that always make faces look great. I had shot many of my previous films with vintage lenses with lots of character and could have easily gone that route, but as I mentioned, I was more interested in removing abstractions — finding something more modern yet still classic and utilitarian.

Any challenging scenes that you are particularly proud of?
Not so much a particular scene, but a spanning visual idea. Many times, when you start a film, you’ll have some cool visual arc you want to try to employ, and along the way various time, location or schedule constraints eventually break it all down. Then you’re left with a few disparate elements that don’t connect the way you wanted them to. Knowing I would face those same challenges but having a bit more resources than on some of my other films, I aimed low but held my ground: I wanted the color of the streetlights to work on a spectrum, shifting between safety and danger depending on the scene or where things were heading in the story.

I broke the film down by location and worked with my gaffer to decide where the environment would be majority sodium (safe/familiar/hopeful) and where it would be mercury (danger/fear/despair). It sounds very rudimentary but when you try to actually pull it off with so many different locations, it can get out of hand pretty quickly. And, of course, many scenes had varying ratios of those colors. I was pleased that I was able to hold onto the idea and not have it totally disintegrate during the shoot.

What’s your go-to gear (camera, lens, mount/accessories) — things you can’t live without?
Go-to tools change from job to job, but the one I rely on more than any is my crew. Their ideas, support and positive energy keep me going in the darkest of hours! As for the nuts and bolts — lately I rarely do a job without SkyPanels and LiteMats. For my process on set, I’ve managed to get rid of just about everything except my light meter and my digital still camera. The still camera is a very fast way to line up shots, and I can send images to my iPad and immediately communicate framing ideas to all departments. It saves a lot of time and guess work!

Main Image: Sean Porter (checkered shirt) on set of Green Book, pictured with director Peter Farrelly.


Steve McQueen on directing Widows

By Iain Blair

British director/writer/producer Steve McQueen burst onto the international scene in 2013 when his harrowing 12 Years a Slave dominated awards season, winning an Academy Award, a Golden Globe, a BAFTA and a host of others. His directing was also recognized with many nominations and awards.

Now McQueen, who also helmed the 2011 feature Shame (Michael Fassbender, Carey Mulligan), is back with the film Widows.

A taut thriller, 20th Century Fox’s Widows is set in contemporary Chicago in a time of political and societal turmoil. When four armed robbers are killed in a botched heist, their widows — with nothing in common except a debt left behind by their dead husbands’ criminal activities — take fate into their own hands to forge a future on their own terms.

With a screenplay by Gillian Flynn and McQueen himself — and based on the old UK television miniseries of the same name — the film stars, among others, Viola Davis, Michelle Rodriguez, Colin Farrell, Brian Tyree Henry, Daniel Kaluuya, Carrie Coon, Jon Bernthal, Robert Duvall and Liam Neeson.

The production team includes Academy Award-nominated editor Joe Walker (12 Years a Slave), Academy Award-winning production designer Adam Stockhausen (The Grand Budapest Hotel) and director of photography Sean Bobbitt (12 Years a Slave).

I spoke with McQueen, whose credits also include 2008’s Hunger, about making the film and his love of post.

This isn’t just a simple heist movie, is it?
No, it isn’t. I wanted to make an all-encompassing movie, an epic in a way, about how we live our daily lives and how they’re affected by politics, race, gender, religion and corruption, and do it through this story. I remember watching the TV series as a kid and how it affected me — how strong all these women were — and I decided to change the location from London to Chicago, which is really an under-used city in movies, and make it a more contemporary view of all these issues.

You assembled a great cast, led by Oscar-winner Viola Davis. What did she bring to the table?
So much weight and gravitas. She’s like an iceberg. There’s so much hidden depth in everything she does, and there’s this well of meaning and emotion she brings to the role, and then everyone has to step up to that.

What were the main technical challenges in pulling it all together?
The big one was logistics and dealing with all the Chicago locations. We had over 60 locations, all over the city, and 81 speaking parts. So there was a lot of planning, and if one thing got stuck it threw off the whole schedule. It would have been almost impossible to reschedule some of the scenes.

How tough was the shoot?
Pretty tough. They’re always grueling, and when you’re writing a script you don’t always think about how many night shoots you’re going to face, and you forget about this big machine you have to bring with you to all the locations. Trying to make any quick change or adjustment is like trying to turn the Titanic. It takes a while.

How early on did you start integrating post and all the VFX?
From day one. You have to when you have a big production with a set release date, so we began cutting and assembling while I shot.

Where did you post?
In Amsterdam, where I live, and then we finished it off in London.

Do you like the post process?
I love it. It’s my favorite part as you have civilized hours — 9 till 5 or whatever — and you’re in total control. You’re not having to deal with 40 or 50 people. It’s just you and the editor in a dark room, actually making the film.

Joe Walker has cut all of your films, including Hunger and Shame, as well as Blade Runner 2049, Arrival and Sicario. Can you talk about working with him?
He wasn’t on set, and we had someone else assembling stuff as Joe was still finishing up Blade Runner. He came in when I got back to Amsterdam. Joe and I go way back to 2007, when we did Hunger, and we always work very closely together. I sit right next to him, and I’m there for every single cut, dissolve, whatever. I’m very present. I’m not one of those directors who comes in, gives some notes and then disappears. I don’t know how you do that. I love editing and finding the pace and rhythm. What makes Joe such a great editor is that he started off in music, so he has a great sense of how to work with sound.

What were the big editing challenges?
There are all these intertwined stories and characters, so it’s about finding the right balance and tone and rhythm. The whole opening sequence is all about pulling the audience in and then grabbing them with a caress and then a slap — and another caress and slap — as we set up the story and the main characters. Then there are so many parts to the story that it’s like this big Swiss watch: all these moving parts and different functions. But you always go back to the widows. A script isn’t a film, it’s a guide, so you’re feeling your way in the edit, and seeing what works and what doesn’t. The whole thing has to be cohesive, one thing. That’s your goal.

What about the visual effects?
They were all done by One Of Us and Outpost VFX (both in the UK), but the VFX were all about enhancing stuff, not dazzling the audience. The aim was always for realism, not fantasy.

Talk about the importance of sound and music.
They’re huge for me, and it’s interesting as a lot of the movie has no sound or music. At the beginning, there’s just this one chord on a violin when we get to the title card, and that’s it. There’s no sound for two-thirds of the movie, and then we only have some ambient music and Procol Harum’s “A Whiter Shade of Pale” and a Van Morrison song. That’s why all the sound design is so important. When the women lose their husbands, I didn’t want it to be hammy and tug at your heartstrings. I wanted you to feel that pain and that grief and that journey. When they start to act and take control of their lives, that’s when the music and sound kick in, almost like this muscular drive. Our supervising sound editor James Harrison did a great job with all that. We did all the mixing in Atmos at De Lane Lea in London.

Where did you do the DI and how important is it to you?
We did it at Company 3 London with colorist Tom Poole, and it’s very important. We shot on film, and our DP Sean and I spent a lot of time just talking about the palette and the look. When you’re shooting in over 60 locations, it’s not so much about putting your own stamp and look on them, but about embracing what they offer you visually and then tweaking it.

For the warehouse scenes, there was a certain mood and it had crappy tungsten lighting, so we changed it a bit to feel more tactile, and it was the same with most of the locations. We’d play with the palette and the visual mood, which the DI allows you to do so well.

Did the film turn out the way you hoped?
(Laughs) I always hope it turns out better than I hoped or imagined, as your imagination can only take you so far. What’s great is when you go beyond that and come up with something cooler than you could have imagined. That’s what I always want.

What’s next?
I’ve got a few things cooking on the stove, and I should finish writing something in the next few months and then start it next year.

All Images Courtesy of 20th Century Fox/Merrick Morton


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


DP Chat: Polly Morgan, ASC, BSC

Cinematographer Polly Morgan, who became an active member of the ASC in July, had always been fascinated with films, but she got the bug for filmmaking as a teenager growing up in Great Britain, when a film crew shot at her family’s farmhouse.

“I was fixated by the camera and cranes that were being used, and my journey toward becoming a cinematographer began.”

We reached out to Morgan recently to talk about her process and about working on the FX show Legion.

What inspires you artistically? And how do you simultaneously stay on top of advancing technology that serves your vision?
I am inspired by the world around me. As a cinematographer you learn to look at life in a unique way, noticing elements that you might not have been aware of before. Reflections, bouncing light, colors, atmosphere and so many more. When I have time off, I love to travel and experience different cultures and environments.

I spend my free time reading various periodicals to stay on top of the latest developments in technology. Various publications, such as the ASC’s magazine, help to not only highlight new tools but also people’s experiences with them. The filmmaking community is united by this exploration, and there are many events where we are able to get together and share our thoughts on a new piece of equipment. I also try to visit different vendors to see demos of new advances in technology.

Has any recent or new technology changed the way you work?
Live on-set grading has given me more control over the final image when I am not available for the final DI. Over the last two years, I have worked more on episodic television, and I am often unable to go and sit with the colorist to do the final grade, as I am working on another project. Live grading enables me to get specific with adjustments on the set, and I feel confident that with good communication, these adjustments will be part of the final look of the project.

How do you go about choosing the right camera and lenses to achieve the right look for a story?
I like to vary my choice of camera and lenses depending on what story I am telling.
When it comes to cameras, resolution is an important factor depending on how the project is going to be broadcast and if there are specific requirements to be met from the distributor, or if we are planning to do any unique framing that might require a crop into the sensor.

Also, ergonomics play a part. Am I doing a handheld show, or mainly one in studio mode? Or are there any specifications that make the camera unique that will be useful for that particular project? For example, I used the Panasonic VariCam when I needed an extremely sensitive sensor for night driving around downtown Los Angeles. Lenses are chosen for contrast and resolution and speed. Also, sometimes size and weight play a part, especially if we are working in tight locations or doing lots of handheld.

What are some best practices, or rules, you try to follow on each job?
Every job is different, but I always try to root my work in naturalism to keep it grounded. I feel like a relatable story can have the most impact on its viewer, so I want to make images that the audience can connect with and be drawn into emotionally. As cinematographers, we want our work to be invisible, yet always support and enhance the narrative.

On set, I always ensure a calm and pleasant working environment. We work long and bizarre hours, and the work is demanding, so I always strive to make it an enjoyable and safe experience for everyone.

Explain your ideal collaboration with the director when setting the look of a project.
It is always my aim to get a clear idea of what the director is imagining when they describe a certain approach. As we are all so different, it is really about establishing a language that can be a shorthand on set and help me to deliver exactly what they want. It is invaluable to look at references together, whether that is art, movies, photography or whatever.

As well as the “look,” I feel it is important to talk about pace and rhythm and how we will choose to represent that visually. The ebb and flow of the narrative needs to be photographed, and sometimes directors want to do that in the edit, or sometimes we express it through camera movement and length of shots. Ideally, I will always aim to have a strong collaboration with a director during prep and build a solid relationship before production begins.

How do you typically work with a colorist?
This really varies from project to project, depending if I am available to sit in during the final DI. Ideally, I would work with the colorist from pre-production to establish and build the look of the show. I would take my camera tests to the post house and work on building a LUT together that would be the base look that we work off while shooting.

I like to have an open dialogue with them during the production stage so they are aware and involved in the evolution of the images.

During post, this dialogue continues as VFX work starts to come in and we start to bounce the work between the colorist and the VFX house. Then in the final grade, I would ideally be in the room with both the colorist and the director so we can implement and adjust the look we have established from the start of the show.

Tell us about FX’s Legion. How would you describe the general look of the show?
Legion is a love letter to art. It is inspired by anything from modernist pop art to old Renaissance masters. The material is very cerebral, and there are many mental planes or periods of time to express visually, so it is a very imaginative show. It is a true exploration of color and light and is a very exciting show to be a part of.

How early did you get involved in the production?
I got involved with Legion starting in Season 2. I work alongside Dana Gonzales, ASC, who established the look of the show in Season 1 with creator Noah Hawley. My work began during the production stage, when I worked with various directors, both prepping and shooting their individual episodes.

Any challenging scenes that you are particularly proud of?
Most of the scenes in Legion take a lot of thought to figure out… contextually as well as practically. In Season 2, Episode 2, a lot of the action takes place out in the desert. After a full day, we still had a night shoot to complete with very little time. Instead of taking time to try to light the whole desert, I used one big soft overhead and then lit the scene with flashlights on the characters’ guns and the headlights of the trucks. I added blue streak filters to create multiple horizontal blue flares from each on-camera source (headlights and flashlights) that provided a very striking lighting approach.

FX’s Legion, Season 2, Episode 2

With the limited hours available, we didn’t have enough time to complete all the coverage we had planned so, instead, we created one very dynamic camera move that started overhead looking down at the trucks and then swooped down as the characters ran out to approach the mysterious object in the scene. We followed the characters in the one move, ending in a wide group shot. With this one master, we only ended up needing a quick reverse POV to complete the scene. The finished product was an inventive and exciting scene that was a product of limitations.

What’s your go-to gear (camera, lens, mount/accessories you can’t live without)?
I don’t really have any go-to gear except a light meter. I vary the equipment I use depending on what story I am telling. LED lights are becoming more and more useful, especially when they are color- and intensity-controllable and battery-operated. When you need just a little more light, these lights are quick to throw in and often save the day!


iOgrapher now offering Multi Case for Androids and iOS phones

iOgrapher has debuted the iOgrapher Multi Case rig for mobile filmmaking. It’s the company’s first non-iOS offering. An early pioneer of mobile media filmmaking cases for iOS devices, iOgrapher is now targeting mobile filmmakers with a flexible design to support recent model iOS and Android mobile phones of all sizes.

The iOgrapher Multi Case features:

• Slide in function for a strong and secure fit
• The ability to attach lighting and mics for higher quality mobile video production
• Flexible mount options for any standard tripod in landscape or portrait mode
• ¼-20 screw mounts on the handles to attach accessories
• Standard protective cases for your phone can be used — filmmakers no longer need to remove protective cases to use the iOgrapher Multi Case
• It works with Moment Lenses. Users do not need to remove Moment Lens cases or lenses to use the iOgrapher Multi Case
• The Multi Case is designed to work with iPhone 6 and later models, and has been tested to work with popular Samsung, Google Pixel, LG and Motorola phones.

With the launch of the Multi Case, iOgrapher is introducing a new design. The capabilities and mounting options have evolved as a result of customer reviews and feedback, as well as real-world use cases from professional broadcasters, filmmakers, pro-sport coaches and training facilities.

The iOgrapher Multi Case is available for pre-order and is priced at $79. It will ship at the end of November.


Timecode Systems’ timecode-over-bluetooth solution

Timecode Systems has introduced UltraSync Blue, which uses the company’s new patented timecode sync and control protocol. UltraSync Blue transmits timecode to a recording device over Bluetooth with sub-frame accuracy. This enables timecode to be transmitted wirelessly from UltraSync Blue directly into the media file of a connected device.

“The beauty of this solution is that the timecode is embedded directly into a timecode track, so there is no need for any additional conversion software; the metadata is in the right format to be automatically recognized by professional NLEs,” reports Paul Scurrell, CEO of Timecode Systems. “This launches a whole new era for multicamera video production in which content from prosumer and consumer audio and video has the potential to be combined, aligned and edited together with ease and efficiency, and with the same high level of accuracy as footage from top-end, professional recording devices.”
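To make the “frame-accurate” part concrete, here is a generic sketch (plain Python, simplified non-drop-frame math, and not Timecode Systems’ actual protocol) of what an NLE ultimately does with an embedded timecode stamp: it resolves the HH:MM:SS:FF value to an absolute frame count, so clips stamped from the same master clock line up on the same frame.

```python
# Generic timecode arithmetic at a fixed frame rate (no drop-frame handling).

def timecode_to_frames(tc: str, fps: int = 25) -> int:
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_timecode(frames: int, fps: int = 25) -> str:
    """Convert an absolute frame count back to HH:MM:SS:FF."""
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(timecode_to_frames("01:00:10:12"))  # 90262 frames at 25fps
print(frames_to_timecode(90262))          # "01:00:10:12"
```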

The device itself measures just 55mmx43mmx17mm, weighs only 36g, and costs $179 US, making it small enough to fit neatly into a pocket during filming and affordable enough to be used on any type of production, from documentaries, news gathering, and reality shows to wedding videos and independent films.

By removing the restrictions of a wired connection, crews not only benefit from the convenience of being cable-free, but also from even more versatility in how they can sync content. One feature of UltraSync Blue is the ability to use a single unit to sync up to four recording devices shooting in close range over Bluetooth — a great option for small shoots and interviews, and also for content captured for vlogs and social media.

However, as filming is not always this simple, especially in the professional world, UltraSync Blue is also designed to work seamlessly with the rest of the Timecode Systems product range. For more complicated shoots, sprawling filming locations and recording using a variety of professional equipment, UltraSync Blue can be connected to devices over Bluetooth and then synced over robust, long-range RF to other camera and audio recorders using Timecode Systems units. This also includes any equipment containing a Timecode Systems OEM sync module, such as the AtomX Sync module that was recently launched by Atomos for the new Ninja V.

“With more and more prosumer and consumer cameras and sound recorders coming with built-in Bluetooth technology, we saw an opportunity to use this wireless connectivity to exchange timecode metadata,” Scurrell adds. “By integrating a robust Bluetooth Low Energy chip into UltraSync Blue, we’ve been able to create a simple, low-cost timecode sync product that has the potential to work with any camera or sound recording device with Bluetooth connectivity.”

Timecode Systems is now working with manufacturers and app developers to adopt its new super-accurate timing protocol into their Bluetooth-enabled products. At launch, both the MAVIS professional camera app and Apogee MetaRecorder app (both for iPhone) are already fully compatible, allowing — for the first time — sound and video recorded on iPhone devices to be synchronized over the Timecode Systems network.

“It’s been an exciting time for sync technology. In the past couple of years, we’ve seen some massive advancements not only in terms of reducing the size and cost of timecode solutions, but also with solutions becoming more widely compatible with more consumer-level devices such as GoPro and DSLR cameras,” Scurrell explains. “But there was still no way to embed frame-accurate timecode into sound and video recordings captured on an iPhone; this was the biggest thing missing from the market. UltraSync Blue, in combination with the MAVIS and MetaRecorder apps, fills this gap.”

Zoom Corporation is working on new releases of its H3-VR Handy Recorder and F8n MultiTrack Field Recorder. When released later this year, both of these Zoom sound recorders will be able to receive timecode over Bluetooth from the UltraSync Blue.

Timecode Systems is now taking orders for UltraSync Blue and will be shipping in October 2018.


New CFast 2.0 card for ARRI Alexa Mini and Amira cameras

ARRI has introduced the ARRI Edition AV Pro AR 256 CFast 2.0 card by Angelbird, which has been designed and certified for use in the ARRI Alexa Mini and Amira camera systems and can be used for ProRes and MXF/ARRIRAW recording. (Support for new CFast 2.0 cards is currently not planned for the Alexa XT, SXT(W) and LF cameras.)

ARRI has worked closely with Angelbird Technologies, based in Vorarlberg, Austria. Angelbird is no stranger to film production, and some of their gear can be found at ARRI Rental European locations.

For the ARRI Edition CFast card, the Angelbird team developed an ARRI-specific card that uses a combination of thermally conductive material and so-called underfill to provide superior heat dissipation from the chips and to secure the electronic components against mechanical damage.

The result, according to ARRI, is a rock-solid 256 GB CFast 2.0 card with stable recording performance all the way across the storage space. The ARRI Edition AV Pro AR 256 memory card is available from ARRI and other sales channels offering ARRI products.

GoPro introduces new Hero7 camera lineup

GoPro’s new Hero7 lineup includes the company’s flagship Hero7 Black, which comes with a timelapse video mode, live streaming and improved video stabilization. The new video stabilization, HyperSmooth, allows users to capture professional-looking, gimbal-like stabilized video without a motorized gimbal. HyperSmooth also works underwater and in high-shock and wind situations where gimbals fail.

With Hero7 Black, GoPro is also introducing a new form of video called TimeWarp. TimeWarp Video applies a high-speed, “magic-carpet-ride” effect, transforming longer experiences into short, flowing videos. Hero7 Black is the first GoPro to live stream, enabling users to automatically share in realtime to Facebook, Twitch, YouTube, Vimeo and other platforms internationally.

Other Hero7 Black features:

  • SuperPhoto – Intelligent scene analysis for professional-looking photos via automatically applied HDR, Local Tone Mapping and Multi-Frame Noise Reduction
  • Portrait Mode – Native vertical-capture for easy sharing to Instagram Stories, Snapchat and others
  • Enhanced Audio – Re-engineered audio captures increased dynamic range, new microphone membrane reduces unwanted vibrations during mounted situations
  • Intuitive Touch Interface – 2-inch touch display with simplified user interface enables native vertical (portrait) use of camera
  • Face, Smile + Scene Detection – Hero7 Black recognizes faces, expressions and scene-types to enhance automatic QuikStory edits on the GoPro app
  • Short Clips – Restricts video recording to 15- or 30-second clips for faster transfer to phone, editing and sharing
  • High Image Quality – 4K/60 video and 12MP photos
  • Ultra Slo-Mo – 8x slow motion in 1080p240
  • Waterproof – Waterproof without a housing to 33ft (10m)
  • Voice Control – Hands-free verbal commands in 14 languages
  • Auto Transfer to Phone – Photos and videos move automatically from camera to phone when connected to the GoPro app for on-the-go sharing
  • GPS Performance Stickers – Users can track speed, distance and elevation, then highlight them by adding stickers to videos in the GoPro app

The Hero7 Black is available now for pre-order for $399.

Panavision, Sim, Saban Capital agree to merge

Saban Capital Acquisition Corp., a publicly traded special purpose acquisition company, Panavision and Sim Video International have agreed to combine their businesses to create a premier global provider of end-to-end production and post production services to the entertainment industry. Under the terms of the business combination agreement, Panavision and Sim will become wholly owned subsidiaries of Saban Capital Acquisition Corp. Upon completion, Saban Capital Acquisition Corp. will change its name to Panavision Holdings Inc. and is expected to continue to trade on the Nasdaq stock exchange. Kim Snyder, president and chief executive officer of Panavision, will serve as chairman and chief executive officer. Bill Roberts, chief financial officer of Panavision, will serve in that role for the combined company.

Panavision designs, manufactures and provides high-precision optics and camera technology for the entertainment industry and is a leading global provider of production equipment and services. Sim is a leading provider of production and post production solutions with facilities in Los Angeles, Vancouver, Atlanta, New York and Toronto.

“This acquisition will leverage the best of Panavision’s and Sim’s resources by providing comprehensive products and services to best address the ever-adapting needs of content creators globally,” says Snyder.

“We’re combining the talent and integrated services of Sim with two of the biggest names in the business, Panavision and Saban,” adds James Haggarty, president and CEO of Sim. “The resulting scale of the new combined enterprise will better serve our clients and help shape the content-creation landscape.”

The respective boards of directors of Saban Capital Acquisition Corp., Panavision and Sim have unanimously approved the merger with completion subject to Saban Capital Acquisition Corp. stockholder approval, certain regulatory approvals and other customary closing conditions. The parties expect that the process will be completed in the first quarter of 2019.

Our Virtual Production Roundtable

By Randi Altman

Evolve or die. That old adage, while very dramatic, fits well with the state of our current production workflows. While most productions are now shot digitally, the warmth of film is still in the back of pros’ minds. Camera makers and directors of photography often look for ways to retain that warmth in digital, whether it’s through lighting, vintage lenses, color grading, newer technology or all of the above.

There is also the question of setting looks on-set and how 8K and HDR are affecting the picture and workflows. And let’s not forget shooting for OTT series. There is a lot to cover!

In an effort to get a variety of perspectives, we reached out to a few cinematographers and some camera manufacturers to talk trends and technology. Enjoy!

Claudio Miranda, ASC

Claudio Miranda is a Chilean cinematographer who won an Oscar for his work on Life of Pi. He also worked on The Curious Case of Benjamin Button, the first movie nominated for a cinematography Oscar that was shot entirely on digital. Other films include Oblivion, Tomorrowland and the upcoming Top Gun: Maverick.

Can you talk about some camera trends you’ve been seeing? Such as large format? The use of old/vintage lenses?
Seems like everyone is shooting large format. Chris Nolan and Quentin Tarantino shot 65mm film for their last projects. New digital cameras such as the Alexa LF and Sony Venice cater to this demand. People seem to like the shallow depth of field of these larger format lenses.

How is HDR affecting the way things are being shot these days? Are productions shooting/monitoring HDR on-set?
For me, too much grain in HDR can be distracting. This must be moderated in the camera acquisition format choice and DI. Panning in a high-contrast environment can cause painful strobing. This can be helped in the DI and set design. HDR done well is more important than 8K or even 3D.

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? Or are latitude and framerate more important currently?
8K can be important for VFX plates. For me, creatively it is not important, 4K is enough. The positive of 8K is just more K. The downside is that I would rather the camera companies focus on dynamic range, color latitude, sensitivity and the look and feel of the captured image instead of trying to hit a high K number. Also, there are storage and processing issues.

Can you talk about how shooting streaming content, for OTTs like Netflix/Amazon, has changed production practices, and workflows, if at all?
I have not shot for a streaming service. I do think we need to pay attention to all deliverables and make adjustments accordingly. In the DI, I am there for the standard cinema pass, HDR pass, IMAX pass, home video pass and other formats that arise.

Is the availability of all those camera resolutions a help or a hindrance?
I choose the camera that will fit the job. It is my job in prep to test and pick the camera that best serves the movie.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
On set, I am able to view HDR or 709. I test the pipeline and make sure the LUT is correct and make modifications if needed. I do not play with many LUTs on set, I normally just have one. I treat the camera like a film stock. I know I will be there in the DI to finalize the look. On set is not the place for futzing with LUTs on the camera. My plate is full enough as it is.

If not already covered, how has production changed in the last two years?
I am not sure production has changed, but there are many new tools to use to help make work more efficient and economical. I feel that I have always had to be mindful of the budget, no matter how large the show is. I am always looking for new solutions.

Daryn Okada, ASC
Daryn Okada is known for his work on films such as Mean Girls, Anna Karenina and Just Like Heaven. He has also worked on many TV series, such as Scandal, Grey’s Anatomy and Castle. He served as president of the ASC from 2006 to 2009.

Can you talk about some camera trends you’ve been seeing? Such as large format? The use of old/vintage lenses? 

Modern digital cinema cameras, with the proper workflows and techniques, can achieve a level of quality that lets a story’s visual identity evolve in ways that parallel explorations shooting on film. Larger image sensors, state-of-the-art lenses and mining historic optics enable cinematographers to use their experience and knowledge of the past to paint rich visual experiences for today’s audience.

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look? Are productions shooting/monitoring HDR on-set?
HDR is a creative and technical medium just as shooting and projecting 65mm film would be. It’s up to the director and the cinematographer to decide how to orchestrate the use of HDR for their particular story.

Can you address 8K? What are the positives, and the negatives? Do we just have too many Ks these days? Or are latitude and framerate more important currently?
8K is working its way into production like 65mm and 35mm VistaVision did, by providing more technical resolution for use in VFX or special-venue exhibition. The enormous amount of data, and the cost to handle it, must be justified by its financial return and by whether it benefits a particular story. Latitude and color depth are paramount to creating a motion picture’s palette and texture. Trying to use a format just because it’s technically possible may be distracting to an audience’s acceptance of a story or creative concept.

Can you talk about how shooting streaming content, for OTTs like Netflix/Amazon, has changed production practices, and workflows, if at all?

I think the delivery specifications of OTT have generally raised the bar, making 4K and wide color gamut the norm. For cinematographers who have spent years photographing features, we are accustomed to creating images with detail for a big screen and a wide color palette. It’s a natural creative process to shoot for 4K and HDR in that respect.

Is the availability of all those camera resolutions a help or a hindrance?
Having the best imaging available is always welcomed. Even if a camera is not fully exploited technically, subtler images are possible thanks to the smoother transitions and blending of color, contrast and detail that come from originating at higher resolutions and with a wider color range.

Can you talk about color management from the sensor/film to the screen? How do you ensure correct color management from the set into dailies and post, the DI and final delivery?
As cinematographers, we are still involved in workflows for dailies and post production to ensure that everyone’s creative efforts on the final production are maintained for the immediate viewer and preserved for audiences in the future.

How has production changed over the last two years?
There are more opportunities to produce content with creative, high-quality cinematography thanks to advancements in cameras and cost-effective computing speed, combined with the demands of high-quality displays and projection.

Vanja Černjul, ASC
This New York-based DP recently worked on the huge hit Crazy Rich Asians. In addition to feature film work, Černjul has shot TV shows (the Season 1 finale of The Deuce and two seasons of Marco Polo), as well as commercials for Panasonic and others.

Can you talk about some camera trends you’ve been seeing? Such as large format? The use of old/vintage lenses?
One interesting trend I noticed is the comeback of image texture. In the past, cinematographers used to expose film stock differently according to the grain texture they desired. Different exposure zones within the same frame had different grain character, which produced additional depth of the image. We lost that once we switched to digital. Crude simulations of film grain, such as overall filters, couldn’t produce the dimensionality we had with film.

Today, I am noticing new ways of bringing the texture back as a means of creative expression. The first one comes in the form of new, sophisticated post production tools designed to replicate the three-dimensional texturing that occurs naturally when shooting film, such as the realtime texturing tool LiveGrain. Monitoring the image on the set with a LiveGrain texture applied can impact lighting, filtration or lens choices. There are also new ways to manipulate texture in-camera. With the rise of super-sensitive, dual-native ISO sensors we can now shoot at very low-light levels and incorporate so-called photon shot noise into the image. Shot noise has organic character, very much like film grain.

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look? Are productions shooting/monitoring HDR on-set?

The creative potential of HDR technology is far greater than that of added resolution. Unfortunately, it is hard for cinematographers to take full advantage of HDR because it is still far from being the standard way the audience sees our images. We can’t have two completely different looks for a single project, and we have to make sure the images are working on SDR screens. In addition, it is still impractical to monitor in HDR on the set, which makes it difficult to adjust lighting and lens choices to expanded dynamic range. Once HDR screens become a standard, we will be able to really start creatively exploring this new territory.

Crazy Rich Asians

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? Or are latitude and framerate more important currently?
Additional resolution adds more available choices regarding relationship of optical systems and aspect ratios. I am now able to choose lenses for their artifacts and character regardless of the desired aspect ratio. I can decide to shoot one part of the film in spherical and the other part in anamorphic and crop the image to the project’s predetermined aspect ratio without fear of throwing away too much information. I love that freedom.

Can you talk about how shooting streaming content, for OTTs like Netflix/Amazon, has changed production practices and workflows, if at all?
For me, the only practical difference between shooting high-quality content for cable or streaming is the fact that Netflix demands its projects be captured in true 4K RAW. I like the commitment to higher technical standards, even though this may be an unwelcome restriction for some projects.

Is the availability of all those camera resolutions a help or a hindrance?
I like choices. As large format lenses become more available, shooting across formats and resolutions will become easier and simpler.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
The key to correct color management from the set to final color grading is in preproduction. It is important to take the time to do proper tests and establish communication between the DIT, the colorist and all other people involved as early as possible. This ensures that original ideas aren’t lost in the process.

Adjusting and fine-tuning the LUT to the lenses, lighting gels and set design and then testing it with the colorist is very important. Once I have a bulletproof LUT, I light and expose all the material for it specifically. If this part of the process is done correctly, the time in final color grading can be spent on creative work rather than on fixing inconsistencies.

I am very grateful for the ACES workflow, which offers long-overdue standardization. It is definitely a move in the right direction.

How has production changed over the last two years?
With all the amazing post tools that are becoming more available and affordable, I am seeing negative trends of further cutting of preproduction time, and lack of creative discipline on the set. I sincerely hope this is just a temporary confusion due to recalibration of the process.

Kate Reid, DP
Kate Reid is a UK-based DP working in TV and film. Her recent work includes the TV series Hanna (Amazon) and Marcella 2 (Netflix), and additional photography on the final season of Game of Thrones for HBO. She is currently working on Press for the BBC.

Can you talk about some camera trends you’ve been seeing? Such as Large Format? The use of old/vintage lenses?
Large format cameras are being used increasingly on drama productions to satisfy certain distribution platforms’ requirement for additional resolution. And, of course, the choice to use large format in drama brings with it another aesthetic tool for DPs: deciding whether increased depth-of-field fall-off, clarity in the image and so on enhance the particular story they wish to portray on screen.

Like many other DPs, I have always enjoyed using older lenses to help make the digital image softer, more organic and less predictable, but much of this older glass, designed for 35mm-size sensors, may not cover the increased sensor size of the larger format cameras. Newer lenses designed for large format may become popular by necessity, alongside older large format glass that is enjoying a renaissance.

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look? Are productions shooting/monitoring HDR on-set?
I have yet to shoot a show that requires HDR delivery. It hasn’t yet become the default in drama production in the UK.

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? Or are latitude and frame rate more important currently?
I don’t inherently find an ultra sharp image attractive. Through older glass and diffusion filters on the lens, I am usually looking to soften and break down my image, so I personally am not all about the extra Ks. How the camera’s sensor reproduces color and handles highlights and shadows is of more interest to me, and I believe has more impact on the picture.

Of primary importance is how practical a camera is to work with — size and how comfortable the camera is to handle would supersede excessive resolution — as the first requirement of any camera has got to be whether it allows you to achieve the shots you have in mind, because a story isn’t told through its resolution.

Can you talk about how shooting streaming content for OTTs, like Netflix/Amazon, has changed production practices, and workflows, if at all?
The major change is the requirement by Netflix for true 4K resolution, which determines which cameras cinematographers are allowed to shoot on. For many cinematographers the ARRI Alexa was their digital camera of choice, which was excluded by this rule, and therefore we have had to look to other cameras for such productions. Learning a new camera, its sensor, how it handles highlights, produces color, etc., and ensuring the workflow through to the post facility is something that requires time and testing, which has certainly added to a DP’s workload.

From a creative perspective, however, I found shooting for OTTs (I shot two episodes of the TV series Hanna made by Working Title TV and NBC Universal for Amazon) has been more liberating than making a series for broadcast television as there is a different idea and expectation around what the audience wants to watch and enjoy in terms of storytelling. This allowed for a more creative way of filming.

Is the availability of all those camera resolutions a help or a hindrance?
Where work is seen now can vary from a mobile phone screen to a digital billboard in Times Square, so it is good for DPs to have a choice of cameras and their respective resolutions so we can use the best tool for each job. It only becomes a hindrance if you let the technology lead your creative process rather than assist it.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
Ideally, I will have had the time and opportunity to shoot tests during prep and then spend half a day with the show’s colorist to create a basic LUT I can work with on set. In practice, I have always found that I tweak this LUT during the first days of production with the DIT, and this is what serves me throughout the rest of the show.

I usually work with just one LUT that will be some version of a modified Rec. 709 (unless the look of the show drastically requires something else). It should then be straightforward in that the DIT can attach the LUT to the dailies, and this is the same LUT applied by editorial, so that exactly what you see on set is what is being viewed in the edit.

However, where this fails is that the dailies uploaded to FTP sites — for viewing by the execs, producers and other people who have access to the work — are usually very compressed with low resolution, so it bears little resemblance to how the work looked on set or looks in the edit. This is really unsatisfying as for months, key members of production are not seeing an accurate reflection of the picture. Of course, when you get into the grade this can be restored, but it’s dangerous if those viewing the dailies in this way have grown accustomed to something that is a pale comparison of what was shot on set.

How has production changed over the last two years?
There is less differentiation between film and television in how productions are being made and, critically, where they are being seen by audiences, especially with online platforms now making award-winning feature films. The high production values we’ve seen on Netflix and Amazon’s biggest shows have pushed UK television dramas to up their game, which does put pressure on productions, shooting schedules and HODs, as the budgets to help achieve this aren’t there yet.

So, from a ground-level perspective, for DPs working in drama this looks like more pressure to produce work of the highest standard in less time. However, it’s also a more exciting place to be working, as the ideas about how you film something for television versus cinema no longer need apply. The perceived ideas of what an audience is interested in, or expects, are being blown out of the water by the success of new original online content, which flies in the face of more traditional storytelling. Broadcasters are noticing this and, hopefully, this will lead to more exciting and cinematic mainstream television in the future.

Blackmagic’s Bob Caniglia
In addition to its post and broadcast tools, Blackmagic offers many different cameras, including the Pocket Cinema Camera, Pocket Cinema Camera 4K, Micro Studio Camera 4K, Micro Cinema Camera, Studio Camera, Studio Camera 4K, Ursa Mini Pro, Ursa Mini 4.6K and Ursa Broadcast.

Can you talk about some camera trends you’ve been seeing? Such as large format? The use of old/vintage lenses?
Lens freedom is on everyone’s mind right now… having the freedom to shoot in any style. This is bringing about things like seeing projects shot on 50-year-old glass because the DP liked the feel of a commercial back in the ‘60s.

We recently had a customer test out actual lenses that were used on The Godfather, The Shining and Casablanca, and it was amazing to see the mixing of those with a new digital cinema camera. And so many people are asking for a camera that works with anamorphic lenses. The trend is really that people expect their camera to be able to handle whatever look they want.

For large format use, I would say that both Hollywood and indie filmmakers are using them more often. Or, at least, they are trying to get the general large format look by using anamorphic lenses to get a shallow depth of field.

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look? Are productions shooting/monitoring HDR on-set?
Right now, HDR is definitely more of a concern for DPs in Hollywood, but also with indie filmmakers and streaming service content creators. Netflix and Hulu have some amazing HDR shows right now. And there is plenty of choice when it comes to the different HDR formats and shooting and monitoring on set. All of that is happening every day, while 8K still needs the industry to catch up with the various production tools.

As for impacting shooting, HDR is about more immersive colors, and a DP needs to plan for it. It gives viewers a whole new level of image detail, so DPs have to be much more aware of every surface or lighting impact so that the viewer doesn’t get distracted. Attention to detail gets even higher in HDR, and DPs and colorists will need to keep a close eye on every shot, including when an image in a sideview mirror’s reflection is just a little too sharp and needs a tweak.

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? Or are latitude and framerate more important currently?
You can never have enough Ks! Seriously. It is not just about getting a beautiful 8K TV, it is about giving the production and post pros on a project as much data as possible. More data means more room to be creative, and is great for things like keying.

Latitude and framerate are important as well, and I don’t think any one is more important than another. For viewers, the beauty will be in large displays. You’re already seeing 8K displays in Times Square, and though you may not need 8K on your phone, 8K on the side of a building or along a highway will be very impactful.

I do think one of the ways 8K is changing production practices is that people are going to be much more storage conscious. Camera manufacturers will need to continue to improve workflows as the images get larger in an effort to maximize storage efficiencies.

Can you talk about how shooting streaming content, for OTTs like Netflix/Amazon, has changed production practices, and workflows, if at all?
For streaming content providers, shoots have definitely been impacted and are forcing productions to plan for shooting in a wider number of formats. Luckily, companies like Netflix have been very good about specifying up front the cameras they approve and which formats are needed.

Is the availability of all those camera resolutions a help or a hindrance?
While it can be a bit overwhelming, it does give creatives some options, especially if they have a smaller delivery size than the acquisition format. For instance, if you’re shooting in 4K but delivering in HD, you can do dynamic zooms from the 4K image that look like an optical zoom, or you can get a tight shot and wide shot from the same camera. That’s a real help on a limited budget of time and/or money.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
Have the production and post people plan together from the start and create the look everyone should be working toward right up front.

Set the LUTs you want before a single shot is done and manage the workflow from camera to final post. Also, choose post software that can bring color correction on-set, near-set and off-set. That lets you collaborate remotely. Definitely choose a camera that works directly with any post software, and avoid transcoding.

How has production changed in the last two years?
Beyond the rise of HDR, one of the other big changes is that more productions are thinking live and streaming more than ever before. CNN’s Anderson Cooper now does a daily Facebook Live show. AMC has the live Talking Dead-type formats for many of their shows. That trend is going to keep happening, so cinematographers and camera people need to be thinking about being able to jump from scripted to live shooting.

Red Digital Cinema’s Graeme Nattress
Red Digital Cinema manufactures professional digital cameras and accessories. Red’s DSMC2 camera offers three sensor options — Gemini 5K S35, Helium 8K S35 and Monstro 8K VV.

Can you talk about some camera trends you’ve been seeing?
Industry camera trends continue to push image quality in all directions. Sensors are getting bigger, with higher resolutions and more dynamic range. Filmmakers continue to innovate, making new and amazing images all the time, which drives our fascination for advancing technology in service to the creative.

How is HDR affecting the way things are being shot these days?
One of the benefits of a primary workflow based on RAW recording is that HDR is not an added extra, but a core part of the system. Filmmakers do consider HDR important, but there’s some concern that HDR doesn’t always look appealing, and that it’s not always an image quality improvement. Cinematography has always been about light and shade and how they are controlled to shape the image’s emotional or storytelling intent. HDR can be a very important tool in that it greatly expands the display canvas to work on, but a larger canvas doesn’t mean a better picture. The increased display contrast of HDR can make details more visible, and it can also make motion judder more apparent. Thus, more isn’t always better; it’s about how you use what you have.

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? What’s more important, resolution or dynamic range?
Without resolution, we don’t have an image. Resolution is always going to be an important image parameter. What we must keep in mind is that camera resolution is based on input resolution to the system, and that can — and often will — be different to the output resolution on the display. Traditionally, in video the input and output resolutions were one and the same, but when film was used — which had a much higher resolution than a TV could display — we were taking a high-resolution input and downsampling it to the display, the TV screen.

As with any sampled system, in a digital cinema camera there are some properties we seek to protect and others to diminish. We want a high level of detail, but we don’t want sharpening artifacts and we don’t want aliasing. The only way to achieve that is through a high-resolution sensor, properly filtered (optical low-pass), that can see a large amount of real, un-enhanced detail. So yes, 8K can give you lots of fine detail should you want it, but the imaging benefits extend beyond downsampling to 4K or 2K. 8K makes for an incredibly robust image, and noise is reduced; what noise remains takes on more of a texture, which is much more aesthetically pleasing.

One challenge of 8K is an increase in the amount of sensor data to be recorded, but that can be addressed through quality compression systems like RedCode.

Addressing dynamic range is very important because dynamic range and resolution work together to produce the image. It’s easy to think that high resolutions have a negative impact upon dynamic range, but improved pixel design means you can have dynamic range and resolution.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
Color management is vitally important and so much more than just keeping color control from on-set through to delivery. Now with the move to HDR and an increasing amount of mobile viewing, we have a wide variety of displays, all with their own characteristics and color gamuts. Color management allows content creators to display their work at maximum quality without compromise. Red cameras help in multiple ways. On camera, one can monitor in both SDR and HDR simultaneously with the new IPP2 image processing pipeline’s output independence, which also allows you to apply color via a CDL and a creative 3D LUT in such a way as to have those decisions represented correctly on different monitor types.

In post and grading, the benefits of output independence continue, but now it’s critical that scene colors, which can so easily go out of gamut, are dealt with tastefully. Through the metadata support in the RedCode format, all the creative decisions taken on set follow through to dailies and post, but never get in the way of producing the correct image output, be it for VFX, editorial or grading.

Panavision’s Michael Cioni 
Panavision designs and manufactures high-precision camera systems, including both film and digital cameras, as well as lenses and accessories for the motion picture and television industries.

Can you talk about some camera trends you’ve been seeing?
With the evolution of digital capture, one of the most interesting things I’ve noticed in the market are new trends emerging from the optics side of cinematography. At a glance, it can appear as if there is a desire for older or vintage lenses based on the increasing resolution of large format digital cameras. While resolution is certainly a factor, I’ve noticed the larger contributor to vintage glass is driven by the quality of sensors, not the resolution itself. As sensors increase in resolution, they simultaneously show improvements in clarity, low-light capability, color science and signal-to-noise ratio.

The compounding effect of all these elements is improving images far beyond what was possible with analog film technology, which explains why the same lens behaves differently on film, S35 digital capture and large-format digital capture. As these looks continue to become popular, Panavision is responding through our investments both in restoration of classic lenses and in designing new lenses with classic characteristics and textures that are optimized for large format photography on super sensors.

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look?
Creating images is not always about what component is better, but rather how they elevate images by working in concert. HDR images are a tool that increases creative control alongside high resolution and 16-bit color. These components work really well together because a compelling image can make use of more dynamic range, more color and more clarity. Its importance is only amplified by the amalgamation of high-fidelity characteristics working together to increase overall image flexibility.

Today, the studios are still settling into an HDR world because only a few groups, led by OTT, are able to distribute in HDR to wide audiences. On-set tools capable of HDR, 4K and 16-bit color are still in their infancy and currently cost-prohibitive. 4K/HDR on the set is going to become a standard practice by 2021. 4K wireless transmitters are the first step — they are going to start coming online in 2019. Smaller OLED displays capable of 750 nits+ will follow in 2020, creating an excellent way to monitor higher quality images right on set. In 2021, editorial will start to explore HDR and 4K during the offline process. By 2024, all productions will be HDR from set to editorial to post to mobile devices. Early adopters that work out the details today will find themselves ahead of the competition and having more control as these trends evolve. I recommend cinematographers embrace the fundamentals of HDR, because understanding the tools and trends will help prevent images from appearing artificial or overdone.

Can you address 8K? What are the positives and the negatives? Do we just have too many Ks these days? What’s more important, resolution or dynamic range?
One of the reasons we partnered with Red is because the Monstro 8K VV sensor makes no sacrifice in dynamic range while still maintaining ultra high smoothness at 16 bits. The beauty of technology like this is that we can finally start to have the best from all worlds — dynamic range, resolution, bit depth, magnification, speed and workflow — without having to make quality sacrifices. When cinematographers have all these elements together, they can create images previously never seen before, and 8K is as much part of that story as any other element.

One important way to view 8K is not solely as a thermometer for high-resolution sharpness. A sensor with 35 million pixels is necessary in order to increase the image size, similar to trends in professional photography. 8K large format creates a larger, more magnified image with a wider field of view and less distortion, like the difference in images captured by 70mm film. The biggest positive I’ve noticed is that DXL2’s 8K large-format Red Monstro sensor is so good in terms of quality that it isn’t impacting images themselves. Lower quality sensors can add a “fingerprint” to the image, which can distort the original intention or texture of a particular lens.

With sensors like Monstro capable of such high precision, the lenses behave exactly as the lens maker intended. The same Panavision lenses that were used on lower grade sensors, or even 35mm film, are now exhibiting characteristics that we weren’t able to see before. This is literally breathing new life into lenses that didn’t perform this way until Monstro and large format.

Is the availability of so many camera formats a help or a hindrance?
You don’t have to look far to identify individuals who are easily fatigued by having too many choices. Some of these individuals cope with choices by finding ways to regulate them, and they feel fewer choices means more stability and perhaps more control (creative and economic). As an entrepreneur, I find the opposite to be true: I believe regulating our world, especially with regards to the arts and sciences, is a recipe for protecting the status quo. I fully admit there are situations in which people are fatigued by too many complex choices.

I find that the failure is not of the technology itself; rather, it’s the fault of manufacturers who have not provided the options in easy-to-consume ways. Having options is exactly what creatives need in order to explore something new and improved. But it’s also up to manufacturers to deliver the message in ways everyone can understand. We’re still learning how to do that, and with each generation the process changes a bit. And while I am not always certain which are the best ways to help people understand all the options, I am certain that the pursuit of new art will motivate us to go out of our comfort zones and try something previously thought not possible.

Have you encountered any examples of productions that have shot streaming content (i.e. for Netflix/Amazon) and had to change production practices and workflows for this format/deliverable?
Netflix and Amazon are exceptional examples of calculated risk takers. While most headlines discuss their investment in the quantity of content, I find the most interesting investment they make is in relationships. Netflix and Amazon are heavily invested in standards groups, committees, outreach, panels and constant communication. The model of the past and present (incumbent studios) is content creators with technology divisions. The model of the future (Netflix, Amazon, Hulu, Apple, Google and YouTube) is technology companies with the ability to create content. And technology companies approach problems from a completely different angle: they not only embrace the technology, they help invent it. In this new technological age, those who lead and those who follow will likely be determined by the tools and techniques used to deliver. What I call “The Netflix Effect” is the impact Netflix has on traditional groups and how they have all had to strategically pivot in response.

How do you ensure correct color management from the set into dailies and post production, the DI and final delivery?
The DXL2 has an advanced color workflow. In collaboration with LiveGrade by Pomfort, DXL2 can capture looks wirelessly from DITs in the form of CDLs and LUTs, which are not only saved into the metadata of the camera, but also baked into in-camera proxy files in the form of Apple ProRes or Avid DNx. These files now contain visual references of the exact looks viewed on monitors and can be delivered directly to post houses, or even editors. This improves creative control because it eliminates the guesswork in the application of external color decisions and streamlines it back to the camera, where the core database is kept with all the other camera information. This metadata can be traced throughout the post pipeline, which also streamlines the process for all entities that come in contact with camera footage.

How has production changed over the last two years?
Sheesh. A lot!

ARRI’s Stephan Ukas-Bradley
The ARRI Group manufactures and distributes motion picture cameras, digital intermediate systems and lighting equipment. Their camera offerings include the Alexa LF, Alexa Mini, Alexa 65, Alexa SXT W and the Amira.

Can you talk about some camera trends you’ve been seeing? Such as large format? The use of old/vintage lenses?
Large format opens some new creative possibilities, using a shallow depth of field to guide the audience’s view and provide a wonderful bokeh. It also conveys a perspective truer to the human eye, resulting in a seemingly increased dimensional depth. The additional resolution combined with our specially designed large format Signature Primes result in beautiful and emotional images.

Old and vintage lenses can enhance a story. For instance, Gabriel Beristain, ASC, used Bausch & Lomb Super Baltars on the Starz show Magic City, and Bradford Young used detuned DNA lenses in conjunction with the Alexa 65 on Solo: A Star Wars Story. Certain characteristics like flares, reflections, distortions and focus fall-off are very difficult to recreate organically in post, so vintage lenses provide an easy way to create a unique look for a specific story and a way for the director of photography to maintain creative control.

How is HDR affecting the way things are being shot these days? Do you find HDR more important than 8K at the moment in terms of look? Are productions shooting/monitoring HDR on-set?
Currently, things are not done much differently on set when shooting HDR versus SDR. While it would be very helpful to monitor in both modes on set, HDR reference monitors are still very expensive and very few productions have the luxury to do that. One has to be aware of certain challenges when shooting for an HDR finish. High contrast edges can result in a more pronounced stutter/strobing effect when panning the camera, and windows that are blown out in SDR might retain detail in the HDR pass, so all of a sudden a ladder or grip stand is visible.

In my opinion, HDR is more important than higher resolution. HDR is resolution-independent in regard to viewing devices like phone/tablets and gives the viewer a perceived increased sharpness, and it is more immersive than increased resolution. Also, let’s not forget that we are working in the motion picture industry and that we are either capturing moving objects or moving the camera, and with that introducing motion blur. Higher resolution only makes sense to me in combination with higher frame rates, and that in return will start a discussion about aesthetics, as it may look hyper-real compared to the traditional 24fps capture. Resolution is one aspect of the overall image quality, but in my opinion extended dynamic range, signal/noise performance, sensitivity, color separation and color reproduction are more important.

Can you talk about how shooting streaming content for OTTs, like Netflix/Amazon, has changed production practices and workflows, if at all?
Shooting streaming content has really not changed production practices or workflows. At ARRI, we offer very flexible and efficient workflows and we are very transparent documenting our ARRIRAW file formats in SMPTE RDD 30 (format) and 31 (processing) and working with many industry partners to provide native file support in their products.

Is the availability of all those camera resolutions a help or a hindrance?
I would look at all those different camera types and resolutions as different film stocks and recommend that creatives shoot their own tests and select the camera system based on what suits their project best.

We offer the ARRI Look Library for Amira, Alexa Mini and Alexa SXT (SUP 3.0), which is a collection of 87 looks, each of them available in three different intensities provided in Rec. 709 color space. Those looks can either be recorded or only used for monitoring. These looks travel with the picture, embedded in the metadata of the ARRIRAW file, QuickTime Atom or HD-SDI stream in the form of the actual LUT and ASC CDL. One can also create a look dynamically on set, feeding the look back to the camera and having the ASC CDL values embedded in the same way.

More commonly, one would record in either ARRIRAW or ProRes LogC, while applying a standard Rec. 709 look for monitoring. The “C” in LogC stands for Cineon, which is a film-like response very much like that of a scanned film image. Colorists and post pros are very familiar with film, and color grading LogC images is easy and quick.

How has production changed over the last two years?
I don’t have the feeling that production has changed a lot in the past two years, but with the growing demand from OTTs and increased production volume, it is even more important to have a reliable and proven system with flexible workflow options.

Main Image: DP Kate Reid.

DITs: Maintaining Order on Set

By Karen Moltenbrey

The DIT, or digital imaging technician, can best be described as that important link between on-set photography and post production. Part of the camera crew, the DIT works with the cinematographer and post production on the workflow, camera settings, signal integrity and image acquisition. Much more than a data wrangler, a DIT ensures the technical quality control, devises creative solutions involving photography technology and sees that the original camera data and metadata are backed up regularly.

Years ago, the DIT’s job was to solve issues as the industry transitioned from film to digital. But today, with digital being so complex and involving many different formats, this job is more vital than ever, sweating the technical stuff so that the DP and others can focus on their work for a successful production. In fact, one DIT interviewed for this piece notes that the job today focuses less on fine-tuning the live look than it did in the past. One reason for that is the many available tools that enable the files to be shaped more carefully in post.

The DITs interviewed here note that the workflow usually changes from production to production. “If you ask 10 different DITs what they do, they would probably give you 10 different answers,” says one. Still, the focus remains the same: to assist the DP and others, ensuring that everyone and everything is working in concert.

And while some may question whether a production needs the added expense of a DIT, perhaps a better question would be whether they can afford not to have one.

Here, two DITs discuss their ever-changing workflows for this important job.

Michele deLorimier 
Veteran DIT Michele deLorimier describes the role of a digital imaging technician as a problem solver. “It’s like doing puzzles — multiple, different-size puzzles that have to be sorted out,” she says. “It always involves problem solving, from trying to fix the director’s iPhone to the tech parameter settings in the cameras to the whole computer having to be torn apart and put back together. All the while, shooting has not stopped and footage is accumulating.”

There are often multiple cameras, and the footage needs to be downloaded and QC’d, and cards erased and sent back into rotation in order to continue shooting. “So, I guess the greatest tool on the cart is the complete computer workstation, and if it is having a problem, it requires high-gear, intense problem solving,” she adds.

And through it all, deLorimier and her fellow DITs must keep their cool and come up with a solution — and fast.

deLorimier has been working as a DIT for many years now. She honed her problem-solving skills working at live concerts, where she had to be fast on her feet while working with live control of multiple cameras through remote control units and paint boxes. “I’d sit at a switcher, with a stack of monitors and one big monitor, and keep the look consistent — black levels, paint controls — on all cameras, live.”

Later, this segued into setting up and controlling on- and off-board videotape and data-recorder digital cinema cameras on set for commercial film production.

“I just kind of fell into [DIT work] because of what I had done, and then it just continued to evolve,” says deLorimier. With the introduction of digital cinema cameras, DITs with a film and video background were needed during the transition period — spawning the term “digital imaging technician.”

“It went from being tape-based, where you’re creating and baking in a look while you’re shooting, to tape-based where you’re shooting sort of a flat pass and creating a timeline of looks you’re delivering alongside the videotape. And then to data recording, delivering files and additionally honing the look after the footage is ingested,” she says.

Among the equipment deLorimier uses is a reference-grade monitor “that must be calibrated properly,” she says, a way to objectively assess exposure, such as a waveform monitor, and some method of objectively assessing color, such as a vectorscope. That is the base-level equipment. For commercials, efficient hardware and software are needed for downloading, manipulating and QC’ing the footage, color correcting it and creating deliverables for post.

deLorimier prefers Flanders Scientific monitors — she has six for various tasks: a pair of 25-inch, a 24-inch, a pair of 21-inch and a 17-inch — as well as a Leader waveform monitor/vectorscope.

“We’re using wireless video a lot these days so we can move around freely and the cables aren’t all over the ground to trip on,” she says. “That part of the chain can have the incorrect setting, so it’s important to ensure that everything is [set at] baseline and that what you are adding to it — usually some form of a LUT to the livestream — is baseline too.” This starts with settings in the camera and then anything the video signal chain might touch.

Then there are the various software tools, drivers, readers, cables and power management, which change and get updated regularly. Thus, deLorimier stresses that any software change should be tested and updated during prep to ensure compatibility. “There are unexpected things that you can’t prep for. There are times when you show up at a shoot and will be told, ‘We shot some drone footage yesterday,’ and it’s with a camera whose settings you had no control over,” she says. “So, the more you can prep for, the higher the rate of success you will have.”

Over the years, deLorimier has worked on a variety of productions, from features to TV commercials, with each type of project requiring a different setup. Preparing for a commercial usually entails physically prepping equipment and putting pieces together, as well as checking its end-to-end signal chain, from camera settings, through distribution of the video signal, to the final destination for monitoring and data delivery.

A day before this interview, deLorimier finished a Subaru commercial, shooting in Sequoia National Forest for the first few days, then Griffith Park and some areas around LA. Before that was a multi-unit job for a Nike spot that was filmed in numerous cities over the course of five days. For that project, each of the DITs for the A, B and C units had to coordinate with one another for consistency, ensuring that the cameras would be set up the same way, that they had the same specs and were delivering a similar look. “We were shooting with big projectors onto buildings and screens, and the cameras needed to sync to the projectors in some instances,” deLorimier explains.

According to deLorimier, it is unusual for the work of a DIT not to be physical. “We’re on the move a lot,” she says, crediting her past concert experience for her ability to adjust to adverse and unexpected conditions. “And we are not working in a controlled environment, but we do our best under the constraints we have and always try to keep post in mind.”

She recalls one physically demanding job that required three consecutive nights of shooting in the rain near Santa Barbara, to film a train coming down the tracks. Part of the crew was on one side of the tracks, and part on the other. And deLorimier was in a cornfield with her carts, computer system and monitors, inside a tent to keep dry. “They kept calling me to come to B camera. But I was also remotely setting up live looks inside my tent.

“I had a headlamp on because I had to deal with cables and stuff in my tent, and at one point illuminated by my headlamp, I could see that there were at least 45 snails crawling up the inside of my tent and cart. I was getting mud on my glasses and in my eyes. Then my whole cart, which was pretty heavy, started tipping and tilting, and I was bracing myself and my feet were starting to get sucked into the mud in the mole holes that were filling with rainwater. I couldn’t even call for help because it took both of my hands to hold up the cart, and the snails were everywhere! And, through it all, they kept calling on the walkie-talkie, ‘Michele, B camera needs you. The train’s coming.’”

Insofar as acquisition formats are concerned, deLorimier says that it’s higher resolution and almost always raw files for commercials these days. “A minimum of 4K is almost mandatory across the board,” she notes. And if the project is shooting with Red Digital Cinema cameras, it is between 6K and 8K, as the team she works with mostly use Red Monstros or ARRIRAW. She also works with Phantom Cine raw files.

“The higher data rates have definitely given me more gray hairs,” says deLorimier with a smile. “There’s no downtime. There’s always six or seven balls in the air, and there’s very little room for error or any fixing on set. This is also why the prep day is vital; so much can be worked out and pre-run during the prep, and this pays off for production during the shoot.”

Francesco Luigi Giardiello
Francesco Luigi Giardiello defines his role as that of an on-set workflow supervisor, as opposed to a traditional DIT. “Over the last five to 10 years, I have been designing a workflow that basically extends from set to post production, focusing on the whole pipeline so we don’t have to throw away what has been done on set,” he says.

Giardiello has been designing a pipeline based on a white balance match, which he says is quite unusual in the business because everything gets done through a simplified and more standardized color grading. “We designed something that goes a bit deeper into the color science and works with the Academy’s ACES workflow, trying to establish a common working colorspace, common color pipeline and a common method to control and manipulate colors. This — across any possible camera or source media used in production — is to provide balanced and consistent footage to the DI and visual effects teams. This allows the CG to be applied without having to spend time on balancing and tweaking the color of the shots.”

The Thor team (L-R): Francesco Giardiello, Kramer Morgenthau ASC (DP), Fabio Ferrantini (data manager).

This is important, especially today, when productions shoot with different digital systems, or even a mix of film and digital cameras, plus different lenses, so the shots can look very different even under the same lighting conditions. To this end, Giardiello’s role as DIT is to grade or match everything so it all looks the same.

“Normally this gets done by using color tools, some of which are more sophisticated than others. When the tools are too sophisticated, they are intractable in the workflow and, therefore, become useless after leaving the set. When they are too ‘simple,’ like CDLs, often they are insufficient in correctly balancing the shots. And, because they are applied during a stage of the pipeline where the cinematographer’s look is introduced, they end up lost or often convolute the pipeline,” he notes. “We designed a system where the color balance occurs before any other color grading, and then the color grading is applied just as a look.”
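For readers unfamiliar with the term, an ASC CDL is a deliberately small, standardized correction: per-channel slope, offset and power values plus a single saturation value. The sketch below is only an illustration of that math, with placeholder numbers rather than values from any of Giardiello’s productions.

```python
# Minimal ASC CDL sketch: per-channel slope/offset/power, then global saturation.
# All numbers here are illustrative placeholders.

def apply_cdl(rgb, slope, offset, power, saturation):
    # Per channel: out = clamp(in * slope + offset, 0) ** power
    graded = []
    for value, s, o, p in zip(rgb, slope, offset, power):
        v = max(value * s + o, 0.0)
        graded.append(v ** p)

    # Global saturation around Rec.709 luma weights
    luma = 0.2126 * graded[0] + 0.7152 * graded[1] + 0.0722 * graded[2]
    return [luma + saturation * (c - luma) for c in graded]

# Example: warm the image slightly, lift the blacks a touch, desaturate a little
print(apply_cdl(
    rgb=[0.18, 0.18, 0.18],            # an 18% gray pixel
    slope=[1.05, 1.00, 0.95],
    offset=[0.01, 0.01, 0.01],
    power=[1.0, 1.0, 1.0],
    saturation=0.9,
))
```

Because the format is this compact, a CDL travels easily from set to post, which is exactly why it can feel too blunt for the kind of balancing Giardiello describes.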

Giardiello is currently in production on Marvel Studios’ Spider-Man: Far from Home, scheduled for release July 5, 2019. This is not his first trip into the Marvel universe; he has worked on Thor: The Dark World, in addition to a number of episodic TV series and other big VFX productions, including Jurassic World and Aladdin. “You are the ambassador of the post production and VFX work,” he explains. “You have to foresee any technical issues and establish a workflow that will facilitate them. So, doing my job without being on set would be a complete waste of time. Sure, I can work in the studios and post production facilities to design workflows that will work without a DIT, but the problem is that things happen on set because that’s where decisions get made.”

As Giardiello points out, the other departments, such as camera and VFX, even the cinematographers, have different priorities and different jobs to fulfill. So, they’re not necessarily spending the time to ensure that every camera, every lens and every setting is in line with a consistent workflow to match the others. “They tend to shoot with whatever camera or medium they think is best and then expect that VFX or post will be able to fit that into an existing workflow.”

On average, Giardiello spends a few weeks of prep to design a project’s workflow, probably longer than producers and production companies would like. But, he believes that the more you plan, the less you have to deal with on set and in post production. When a shoot is finished, he will spend a week or two with the post facility, more to facilitate the handoff than to fix major issues.

Jurassic World was shot with 6K Arri Alexa 65s and the 8K Red Digital Cinema Helium camera, but the issue with high-resolution cameras is the amount of data they generate. “When you start shooting 4, 5, 6 or 8 terabytes a day, you have to make sure you are on set as a data point and that post production is capable of handling all this incoming data,” Giardiello advises. To this end, he had been working with Pinewood Digital to streamline a workflow for moving the data from set to post, whereby rather than sending the original mags to post, his group packaged up the data into very fast, very secure Codex Digital SLEDs.
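To put those data volumes in perspective, here is a rough back-of-the-envelope offload calculation; the 6TB day and the 1GB/s sustained transfer speed are my own illustrative assumptions, not figures from Giardiello’s shows.

```python
# Rough offload-time math for a heavy shooting day.
# The 6 TB/day figure and 1 GB/s sustained throughput are assumptions
# for illustration, not measurements from any production.

day_terabytes = 6
transfer_gb_per_sec = 1.0  # roughly a fast RAID or SSD mag reader

seconds = (day_terabytes * 1000) / transfer_gb_per_sec
print(f"One copy pass: {seconds / 3600:.1f} hours")  # ~1.7 hours

# Every verified backup multiplies that time, which is why cloning,
# checksumming and near-set QC have to run in parallel with the shoot.
```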

The most important challenge on a VFX-oriented film, Giardiello says, is the color pipeline, as large studios, like Marvel, Disney, Warner Bros. and Universal, are focused on making sure that the so-called “digital negatives,” or raw footage, that arrives in post and VFX is well balanced and doesn’t require a lot of fixing before those departments can begin their work. “So, having balanced footage has been, and still is, one of the biggest concerns for any major studio when it comes to managing color from set to post production,” he notes.

So, for the last few years, this issue has been handled through the in-camera white balance with a system developed by Giardiello. “We changed the white balance on every single camera, using that to match every single shot before it gets to post production. So when it arrives in front of a VFX compositor and the DI suite, the average color and density of every single shot is consistent,” he adds.

Francesco Giardiello’s rig on Jurassic World.

Giardiello’s workflow is one that he has designed and developed over a five-year period and shines particularly when it comes to facilitating VFX interaction with action footage. “If you have to spend weeks fixing things in VFX on a big job like Jurassic World, Aladdin or Spider-Man, we’re talking about losing thousands of dollars every day,” he points out.

The work entails using a range of tools, some of which are designed for each new job. One tool that has been used on Giardiello’s last few films modifies the metadata for Red cameras to match them with that of the Alexa camera. Meanwhile, on set he uses Filmlight’s Prelight for light grading or to design CDLs. Probably the most important tool for dealing with RAW footage, he maintains, is Codex Digital’s Codex Production Suite. “It allows us to streamline the cloning and backup processes, to perform a visual QC near set and to access the metadata of raw footage and change it (when it is not changed in-camera).

“When those files get to post production in [Filmlight’s] Daylight, which is mostly used these days to process rushes, Daylight doesn’t recognize that change as an actual change, but as something that the DIT does on set in-camera,” Giardiello says.

In addition, he uses the new SSD SLED designed by Codex, which offers encryption — an important feature for studios like Marvel or Sony. Then, on set, he uses BoxIO LUT boxes from Flanders Scientific, as well as Flanders monitors, either DM240s (LCDs) or DM250s (OLEDs), depending on the type of project.

Over the years, Giardiello has often worked with the same DPs, but in the past three years, his major clients instead have been studios: Universal, Marvel and Warner Bros. “But my boss is still the DP,” he adds.

During the past 12 years, Giardiello has witnessed an evolution in the role of DIT and expects this to continue, particularly as media continues to converge and merge — from cinema or television to mobile devices. “So yeah, I would say our job has changed and is going to change, but I think it’s more important now than it was 10 years ago, and obviously it’s going to be even more important in the next 10 years.”


Karen Moltenbrey is a longtime writer and editor in the CG and post industries.

Behind the Camera: Television DPs

By Karen Moltenbrey

Directors of photography on television series have their work cut out for them. Most collaborate early on with the director on a signature “look.” Then they have to make sure that aesthetic is maintained with each episode and through each season, should they continue on the series past the pilot. Like film cinematographers, their job entails a wide range of responsibilities aside from the camera work. Once shooting is done, they are often found collaborating with the colorists to ensure that the chosen look is maintained throughout the post process.

Here we focus on two DPs working on two popular television series — one drama, one sitcom — both facing unique challenges inherent in their current projects as they detail their workflows and equipment choices.

Ben Kutchins: Ozark
Lighting is a vital aspect of the look of the Netflix family crime drama Ozark. Or, perhaps more accurately, the lack of lighting.

Ben Kutchins (left) on set with actor/director Jason Bateman.

“I’m going for a really naturalistic feel,” says DP Ben Kutchins. “My hope is that it never feels like there’s a light or any kind of artificial lighting on the actors or lighting the space. Rather, it’s something that feels more organic, like sunlight or a lamp that’s on in the room, but still offers a level of being stylized and really leans into the darkness… mining the shadows for the terror that goes along with Ozark.”

Ozark, which just kicked off its second season, focuses on financial planner Marty Byrde, who relocates his family from the Chicago suburbs to a summer resort area in the Missouri Ozarks. After a money laundering scheme goes awry, he must pay off a debt to a Mexican drug lord by moving millions of the cartel’s money from this seemingly quiet place, or die. But, trouble is waiting for them in the Ozarks, as Marty is not the only criminal operating there, and he soon finds himself in much deeper than he ever imagined.

“It’s a story about a family up against impossible odds, who constantly fear for their safety. There is always this feeling of imminent threat. We’re trying to invoke a heightened sense of terror and fear in the audience, similar to what the characters might be feeling,” explains Kutchins. “That’s why a look that creates a vibe of fear and danger is so important. We want it to feel like there is danger lurking around every corner — in the shadows, in the trees behind the characters, in the dark corners of the room.”

In summary, the look of the show is dark — literally and figuratively.

“It is pretty extreme by typical television standards,” Kutchins concedes. “We’ve embraced an aesthetic and are having fun pushing its boundaries, and we’re thrilled that it stands out from a pretty crowded market.”

According to Kutchins, there are numerous examples where the actor disappears into the shadows and then reappears moments later in a pool of light, falling in and out of shadow. For instance, a character may turn off a light and plunge the room into complete darkness, and you do not see that character again until they reappear, lit by moonlight coming through a window or silhouetted against it.

“We’re not spending a lot of time trying to fill in the shadows. In fact, we spend most of our time creating more shadows than exist naturally,” he points out.

Jason Bateman, who plays Marty, is also an executive producer and directed the first two and last two episodes of Season 1. Early on, he, along with Kutchins and Pepe Avila del Pino, who shot the pilot, hashed out the desired look for the show, leaning into a very cyan and dark color palette — and leaning in pretty strongly. “Most people think of [this area as] the South, where it’s warm and bright, sweaty and hot. We just wanted to lean into something more nuanced, like a storm was constantly brewing,” Kutchins explains. “Jason really pushed that aesthetic hard across every department.”

Alas, that was made even more difficult since the show was mainly shot outdoors in the Atlanta area, and a good deal of work went into reacting to Mother Nature and transforming the locations to reflect the show’s Ozark mountain setting. “I spent an immense amount of time and effort killing direct sunlight, using a lot of negative fill and huge overheads, and trying to get rid of that direct, harsh sun,” says Kutchins. “Also, there are so many windows inside the Byrde house that it’s essentially like shooting an exterior location; there’s not a lot of controlled light, so you again are reacting and adapting.”

Kutchins shoots the series on a Panasonic VariCam, which he typically underexposes by a stop or two, mining the darker part of the sensor, “the toe of the exposure curve.” And by doing so, he is able to bring out the dirtier, more naturalistic, grimy parts of the image, rather than something that looks clean and polished. “Something that has a little bit of texture to it, some grit and grain, something that’s evocative of a memory, rather than something that looks like an advertisement,” he says.
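As a point of reference, a “stop” is simply a doubling or halving of exposure, so shooting one or two stops under means recording half or a quarter of the metered light. A tiny illustration of that arithmetic (my own, not Kutchins’ numbers):

```python
# A "stop" is a factor of two in exposure. Underexposing by n stops
# records 1 / 2**n of the metered light, pushing the image toward the
# darker (toe) end of the sensor's response. Illustrative only.

for stops_under in (1, 2):
    fraction = 1 / 2 ** stops_under
    print(f"{stops_under} stop(s) under: {fraction:.0%} of metered exposure")
# 1 stop(s) under: 50% of metered exposure
# 2 stop(s) under: 25% of metered exposure
```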

To further achieve the look, Kutchins uses an in-camera LUT that mimics old Fuji film stock. “Then we take that into post,” he says, giving kudos to his colorist, Company 3’s Tim Stipan, who he says has been invaluable in helping to develop the “vibe” of the show. “As we moved along through Season 1 and into Season 2, he’s been instrumental in enhancing the footage.”

A lot of Kutchins’ work occurs in post, as the raw images captured on set are so different from the finals. Insofar as the digital intermediate is concerned, significant time is spent darkening parts of the frame, brightening small sections and working to draw the viewer in. “I want people to be leaning on the edge of their seat, kind of wanting to look inside of the screen and poke their head in for a look around,” Kutchins says. “So I do a lot of vignetting and darkening of the edges, and darkening specific things that I think are distracting.”

Nevertheless, there is a delicate balance he must maintain. “I talk about the darkness of Ozark, but I am trying to ride that fine line of how dark it can be but still be something that’s pleasant to watch. You know, where you’re not straining to see the actor’s face, where there’s just enough information there and the frame is just balanced enough so your eyes feel comfortable looking at it,” he explains. “I spend a lot of time creating a focal point in the frame for your eyes to settle on — highlighting certain areas and letting some areas go black, leaving room for mystery in every frame.”

When filming, Kutchins and his crew use Steadicams, cranes, dollies and handheld. He also uses Cooke Optics’ S4 lenses, which he tends to shoot wide open, “to let the flaws and character of the lenses shine through.”

Before selecting the Panasonic VariCam, Kutchins and his group tested other cameras. Because of Netflix’s requirement for 4K, that immediately ruled out the ARRI Alexa, which is Kutchins’ preferred camera. “But the Panasonic ended up shining,” he adds.

In Ozark, the urban family is pitted against nature, and thus, the natural elements around them need to feel dangerous, Kutchins points out. “There’s a line in the first season about how people drown in the lake all the time. The audience should always feel that; when we are at the water’s edge, that someone could just slip in and disappear forever,” he says. “So, the natural elements play a huge role in the inspiration for the lighting and the feel of the show.”

Jason Blount: The Goldbergs
A polar opposite to Ozark in almost every way, The Goldbergs is a single-camera comedy sitcom set in the ’80s about a caring but grumpy dad, an overbearing mother and three teens — the oldest, a popular girl; the middle one, who fancies himself a gifted athlete and strives to be popular; and the youngest, a geek who is obsessed with filmmaking, as he chronicles his life and that of his family on film. The series is created and executive-produced by Adam F. Goldberg and is based on his own life and childhood, which he indeed captured on film while growing up.

The series is filmed mostly on stage, with the action taking place within the family home or at the kids’ schools. For the most part, The Goldbergs is an up-lit, broad comedy. The colors are rich, with a definite nod to the vibrant palette of the ’80s. “Our colorist, Scott Ostrowsky [from Level 3], has been grading the show from day one. He knows the look of the show so well that by the time I sit with him, there are very few changes that have to be made,” says Blount.

The Goldbergs began airing in 2013 and is now entering its sixth season. And the series’ current cinematographer, Jason Blount, has been involved since the start, first serving as the A camera/Steadicam operator before assuming the role of DP for the Season 1 finale — for a total of 92 episodes now and counting.

As this was a Sony show for ABC, the plan was to shoot with a Sony PMW-F55 CineAlta 4K digital camera, but at the time, it did not record at a fast enough frame rate for some of the high-speed work the production wanted. So, they ended up using the ARRI Alexa for Season 1. Blount took over as DP full time from Season 2 onward, and the decision was made to switch to the F55 for Season 2, as the frame rate issue had been resolved.

“The look of the show had already been established, and I wanted to make sure that the transition between cameras was seamless,” says Blount. “Our show is all about faces and seeing the comedy. From the onset, I was very happy with the Sony F55. The way the camera renders skin tone, the lack of noise in the deep shadows and the overall user-friendly nature of the camera impressed me from the beginning.”

Blount points to one particular episode where the F55 really shined. “The main character was filming a black-and-white noir-style home movie. The F55 handled the contrast beautifully. The blacks were rich and the highlights held onto detail very well,” he says. “We had a lot of smoke, hard light directly into the lens, and really pushed the limits of the sensor. I couldn’t have been happier with the results.”

In fact, the camera has proved its mettle winter, spring, summer and fall. “We’ve used it in the dead of winter, at night in the rain and during day exterior [shots] at the height of summer when it’s been over 100 degrees. It’s never skipped a beat.”

Blount also commends Keslow Camera in Los Angeles, which services The Goldbergs’ cameras. In addition, the rental house has accessorized the F55 camera body with extra bracketry and integrated power ports for more ease of use.

Due to the fast pace at which the show is filmed — often covering 10-plus pages of script a day — Blount uses Angenieux Optimo zoom lenses. “The A camera has a full set of lightweight zooms covering 15mm to 120mm, and the B camera always has the [Optimo] 24-290,” he says. “The Optimo lenses and F55 are a great combination, making it easy to move fast and capture beautiful images.”

Blount points out that he also does all the Steadicam work on the show, and with the F55 being so lightweight, compact and versatile, it makes for a “very comfortable camera in Steadicam mode. It’s perfect to use in all shooting modes.”

The Goldbergs’ DP always shoots with two cameras, sometimes three depending on the scene or action. And, there is never an issue of the cameras not matching, according to Blount. “I’m not a big fan of the GoPro image in the narrative world, and I own a Sony a7S. It’s become my go-to camera for mounts or tight space work on the show, and works perfectly with the F55.”

And, there is something to say for consistency, too. “Having used the same camera and lens package for the past five seasons has made it easy to keep the look consistent for The Goldbergs,” says Blount. “At the beginning of this season, I looked at shooting with the new Sony Venice. It’s a fantastic-looking camera, and I love the options, like the variable ND filters, more color temperature options and the dual ISO, but the limit of 60fps at this stage was a deal-breaker for me; we do a fair amount of 72fps and 120fps.”

“If only the F55 had image stabilization to take out the camera shake when the camera operators are laughing so hard at the actors’ performances during some scenes. Then it would be the perfect camera!” he says with a laugh himself.


Karen Moltenbrey is a longtime writer and editor in the CG and post industries.

Q&A: Camera Operators

By Randi Altman

Camera operators might not always get the glory, but they certainly do get the job done. Working hand in hand with DPs and directors, these artists make sure the camera is in the right place for the right shot, and so much more. As one of the ops we spoke to says, the camera operator is the “protector of the frame.”

We reached out to three different camera operators, all of whom are members of the Society of Camera Operators (SOC), to find out more about their craft and how their job differs from some of the others on set.

Lisa Stacilauskas

Lisa Stacilauskas, SOC
What is the role of the camera operator? What is the camera operator accountable for on set?
The role of the camera operator varies quite a bit depending on the format. I work primarily in scripted television on “single camera” comedies. Don’t let the name “single camera” fool you. It’s meant to differentiate the shooting format from multicam, but these days most single camera shows shoot with two or three cameras. The show I work on, American Housewife, uses three cameras. I am the C camera operator.

In the most basic sense, the camera operator is responsible for the movement of the camera and the inclusion or exclusion of what is in frame. It takes a team of craftspeople to accomplish this. My immediate team includes a 1st and 2nd camera assistant and a dolly grip. Together we get the camera where it needs to be to get the shots and to tell the story as efficiently as possible.

In a larger sense, the camera operator is a storyteller. It is my responsibility to know the story we are trying to tell and assist the director in attaining their vision of that story. As C camera operator, I think about how the scene will come together in editing so I know which pieces of coverage to get.

Another big part of my job is keeping lighting equipment and abandoned water bottles out of my shot. The camera operator is the “protector of the frame”!

How do you typically work with the DP?
The DP is the head of the camera department. Each DP has nuances in the way they work with their operators. Some DPs tell you exactly where to put your camera and what focal length your lenses should be. Others give you an approximate position and an indication of the size (wide, medium, close-up) and let you work it out with the actors or stand-ins.

American Housewife

How do the roles of the camera operator and the DP differ?
The DP is in charge of camera and lighting. Officially, I have no responsibility for lighting. However, it’s very important for a camera operator to think like a DP; to know and pay attention to the lighting. Additionally, especially when shooting digitally, once the blocking is determined, camera operators stay on set, working with all the other departments to prepare a shot while the DP is at the monitors evaluating the lighting and/or discussing setups with the director.

What is the relationship between the operator and the director?
The relationship between the operator and director can vary depending on the director and the DP. Some directors funnel all instructions through the DP and only come to you with minor requests once the shots have already been determined.

If the director comes to me directly without going through the DP, it is my responsibility to let the DP know of the requested shot, especially if she/he hasn’t lit for it! Sometimes you are a mediator between the two and hopefully steer them closer to being on the same page. It can be a tough spot to be in if the DP and director have different visions.

Can you talk about recent projects you’ve worked on?
I’m currently working on Season 3 of American Housewife. The C camera position was a day-playing position at the beginning of Season 1, but DP Andrew Rawson loves to work with three cameras, and really knows how to use all three efficiently. Once production saw how much time we saved them, they brought us on full time. Shooting quickly and efficiently is especially important on American Housewife because three of our five principal actors are minors, whose hours on set are restricted by law.

During a recent hiatus, I operated B camera on a commercial with the DP operating A camera. It seemed like the DP appreciated the “extra set of eyes.”

Prior to American Housewife, I worked on several comedies operating the B camera, including Crazy Ex-Girlfriend (Season 1), Teachers (Season 1) and Playing House (Season 2).

Stephen Campanelli, SOC
What is the role of the camera operator? What is the camera operator accountable for on set?
The role of the camera operator on the set is to physically move the camera around to tell the story. The camera operator is accountable for what the director and the director of photography interpret to be the story that needs to be told visually by the camera.

Stephen Campanelli (center) on the set of American Sniper.

As a camera operator, you listen to their input, and sometimes have your own input and opinion to make the shot better or to convey the story point from a different view. It is the best job on set in my opinion, as you get to physically move the camera to tell great stories and work with amazing actors who give you their heart and soul right in front of you!

How do you typically work with the DP?
I have been very fortunate in my career to have worked with some very collaborative DPs. After 24 years of working with Clint Eastwood, I have absorbed so much of his directing style and visual nature that, working closely with the DP, we have created the Eastwood style of filmmaking. When I am doing other films with other DPs, we always talk about conveying the story in the truest, most visual way without letting the camera get in the way of a good story. That is one of the most important things to remember: A camera operator is not to bring attention to the camera, but bring attention to the story!

How do the roles of the camera operator and the DP differ?
A DP usually is in charge of the lighting and the look of the entire motion picture. Some DPs also operate the camera, but that is a lot of work on both sides. A camera operator is very essential, as he or she can rehearse with the actors or stand-ins while the director of photography can concentrate solely on the lighting.

What is the relationship between the operator and the director?
As I mentioned earlier, my relationship with Clint Eastwood has been a very close one, as he works closely with the camera operator rather than the director of photography. We have an incredible bond where very few words are spoken, but we each know how to tell the story once we read the script. On some films, the director and the DP are the ones that work together closely to cohesively set the tone for the movie and to tell the story, and the camera operator interprets that and physically moves the camera with the collaboration of both the director and director of photography.

Can you talk about recent projects you’ve worked on?
I recently wrapped my 22nd movie with Clint Eastwood as his camera operator; it is called The Mule. We filmed in Atlanta, New Mexico and Colorado. It is a very good script, and Clint is back in front of the camera again, acting in it. It also stars Bradley Cooper, Laurence Fishburne and Michael Pena. [Editor’s note: He recently worked on A Star is Born, also with Bradley Cooper.]

Recently, I moved up to directing. In 2015, I directed a movie called Momentum, and this year I directed an award-winning film called Indian Horse that was a big hit in Canada and will soon be released in the United States.

Jamie Hitchcock

Jamie Hitchcock, SOC
What is the role of the camera operator? What is the camera operator accountable for on set?
The role of a camera operator is to compose an assigned shot and physically move the camera if necessary to perform that shot or series of shots as many times as needed to achieve the final take. The camera operator is responsible for maintaining the composition of the shot while also scanning the frame for anything that shouldn’t be there. The camera operator is responsible for communicating to all departments about elements that should or should not be in the frame.

How do you typically work with the DP?
The way a camera operator works with a director of photography varies depending on the type of project they are working on. On a feature film, episodic or commercial, the director of photography is very involved. The DP will set each shot and the operator then repeats it as many times as necessary. On a variety show, live show or soap opera, the DP is usually a lighting designer/director, and the director works with the camera operators to set the shots. On multi-camera sitcoms, the shots are usually set by the director… with the camera operator. When the production requires a complicated scene or location, the DP will become more actively involved with the selection of the shots.

How do the roles of the camera operator and the DP differ?
The roles of the DP and operator are quite different, yet the goal is the same. The DP is involved with the pre-production process, lighting, running the set, managing the crew and the post process. The operator is involved on set, working shot by shot. Ultimately, both the DP and operator are responsible for the final image the viewing audience will see.

What is the relationship between the operator and the director?
The relationship between the operator and the director, like that of the operator and DP, varies depending on the type of project. On a feature-type project, the director may be only using one camera. On a sports or variety program the director might be looking at 15 or more cameras. In all cases, the director is counting on the operators to perform their assigned shots each time. Ultimately, when a shot or take is complete, the director is the person who decides to move on or do it again, and they trust the operator to tell them if the shot was good or not.

CBS’s Mom

Can you talk about recent projects you’ve worked on?
I am currently working on The Big Bang Theory and Mom for CBS. Both are produced by Chuck Lorre Productions and Warner Bros. Television. Steven V. Silver, ASC, is the DP for both shows. Both shows use four cameras and are taped in front of a studio audience. We work in what I like to call the “Desilu-type” format because all four cameras are on a J.L. Fisher dolly with a 1st assistant working the lens and a dolly grip physically moving the camera. This format was perfected by Desi Arnaz for I Love Lucy and still works well today.

The working relationship with the DP and director on our shows falls somewhere in the middle. Mark Cendroski and Jamie Widdoes direct almost all of our shows, and they work directly with the four operators to set the required shots. They have a lot of trust in us to know what elements are needed in the frame, and sometimes the only direction we receive is the type of shot they want. I work on a center camera, which is commonly referred to as a “master” camera; however, it’s not uncommon to have a master, a close-up and two or three other shots all in the same scene. Each scene is shot beginning to end with all the coverage set, and a final edit is done in post. We do have someone cutting a live edit that feeds to the audience so they can follow along.

Our process is very fast, and our DP usually only sees the lighting when we start blocking shots with stand-ins. Steve spends a lot of time at the monitor and constantly switches between all four cameras — he’s looking at composition and lighting, and setting the final look with our video controller. Generally, when the actors are on set we roll the cameras. It’s a pretty high-pressure way to work for a product that will potentially be seen by millions of people around the world, but I love it and can’t imagine doing anything else.

Main Image Caption: Stephen Campanelli


The SOC, which celebrates 40 years in 2019, is an international organization that aims to bring together camera operators and crew. The Society also hosts an annual Lifetime Achievement Awards show, publishes the magazine Camera Operator and has a charitable commitment to The Vision Center at Children’s Hospital Los Angeles.

The ASC: Mentoring and nurturing diversity

Cynthia Pusheck, ASC, co-chairs the ASC Vision Committee, along with John Simmons, ASC. Working together they focus on encouraging and supporting the advancement of underrepresented cinematographers, their crews and other filmmakers. They hope their efforts inspire others in the industry to help positive change through hiring talent that better reflects society.

In addition to her role on the ASC Vision Committee, Pusheck is a VP of the ASC board. She became a member in 2013. Her credits include Sacred Lies, Good Girls Revolt, Revenge and Brothers & Sisters. She is currently shooting Limetown for Facebook Watch.

To find out more about their work, we reached out to Pusheck.

Can you talk about what the ASC Vision Committee has done since its inception? What it hopes to accomplish?
The ASC Vision Committee was formed in January 2016 as a way for the ASC to actively support those who face unique hurdles as they build their cinematography careers. We’ve held three full-day diversity events, and some individual panel discussions.

We’ve also awarded a number of scholarships to the ASC Master Class and will continue awarding a handful each year. Our mentorship program is getting off the ground now with many ASC members offering to give time to young DPs from underrepresented groups. There’s a lot more that John Simmons (my co-chair) and our committee members want to accomplish, and with the support of the ASC staff, board members and president, we will continue to push things forward.

(L-R) Diversity Day panel: Rebecca Rhine, Dr. Stacy Smith, Alan Caso, Natasha Foster-Owens, Xiomara Comrie, Tema Staig, Sarah Caplan.

The word “progress” has always been part of the ASC mission statement. So, with the goal of progress in mind, we redesigned an ASC red lapel pin and handed it out at the ASC Awards earlier this year (#ASCVision). We wanted to use it to call attention to the work of our committee and to encourage our own community of cinematographers and camera people to do their part. If directors of photography and their department heads (camera, grip and set lighting) hire with inclusivity in mind, then we can change the face of the industry.

What do you think is contributing to more females becoming interested in camera crew careers? What are you seeing in terms of tangible developments?
Gender inequality in this industry has certainly gotten a lot of attention over the last few years, which is fantastic, but despite all that attention, the actual facts and figures don’t show as much change as you’d think.

The percentage of women or people of color shooting movies and TV shows hasn’t really changed much. There certainly is a lot more “content” getting produced for TV, and that has been great for many of us, and it’s a very exciting time. But, we have a long way to go still.

What’s very hopeful, though, is that more producers and studios are really pushing for inclusivity. That means hiring more women and people of color in positions of leadership, and encouraging their crews to bring more underrepresented crew members onto the production.

Currently we’re also seeing more young female DPs getting some really good shooting opportunities very early in their careers. That didn’t happen so much in the past, and I think that continues to motivate more young women to consider the camera department, or cinematography, as a viable career path.

We also have to remember that it’s not just about getting more women on set, it’s about having our sets look like society at large. The ultimate goal should be that everyone has a fair chance to succeed in this industry.

How can women looking to get into this part of the industry find mentors?
The union (Local 600) and now also the ASC have mentorship programs. The union’s program is great for those coming up the ranks looking for help or advice as they build their careers.

For example, an assistant can find another assistant, or an operator, to help them navigate the next phase of their career and give them advice. The ASC mentorship program is aimed more for young cinematographers or operators from underrepresented groups who may benefit from the support of an experienced DP.

Another way to find a mentor is by contacting someone whom you admire directly. Many women would be surprised to find that if they reach out and request a coffee or phone call, often that person will try and find time for them.

My advice would be to do your homework about the person you’re contacting and be specific in your questions and your goals. Asking broad questions like “How do I get a job” or “Will you hire me?” won’t get you very far.

What do you think will create the most change? What are the hurdles that still must be overcome?
Bias and discrimination, whether conscious or unconscious, are still a problem on our sets. They may have lessened in the last 25 years, but we all continue to hear stories about crew members (at all levels) who behave badly, make inappropriate comments or just have trouble working for women or people of color. These are all unnecessary stresses for those trying to get hired and build their careers.

Behind the Camera: Feature Film DPs

By Karen Moltenbrey

The responsibilities of a director of photography (DP) span far more than cinematography. Perhaps they are best known for their work behind the camera capturing the action on set, but that is just one part of their multi-faceted job. Well before they step onto the set, they meet with the director, at times working hand-in-hand to determine the overall look of the project. They also make a host of technical selections, such as the type of camera and lenses they will use as well as the film stock if applicable – crucial decisions that will support the director’s vision and make it a reality.

Here we focus on two DPs for a pair of recent films with specialized demands and varying aesthetics, as they discuss their workflows on these projects as well as the technical choices they made concerning equipment and the challenges each project presented.

Hagen Bogdanski: Papillon
The 2018 film Papillon, directed by Michael Noer, is a remake of the 1973 classic. Set in the 1930s, it follows two inmates who must serve, and survive, time in a French Guiana penal colony. The safecracker nicknamed Papillon (Charlie Hunnam) is serving a life sentence and offers protection to wealthy inmate Louis Dega (Rami Malek) in exchange for financing Papillon’s escape.

“We wanted to modernize the script, the whole story. It is a great story but it feels aged. To bring it to a new, younger audience, it had to be modernized in a more radical way, even though it is a classic,” says Hagen Bogdanski, the film’s DP, whose credits include the film The Beaver and the TV series Berlin Station, among others. To that end, he notes, “we were not interested in mimicking the original.”

This was done in a number of ways. First, through the camera work, using a semi-documentary style. The director has a history of shooting documentaries and, therefore, the crew shot with two cameras at all times. “We also shot the rehearsals,” notes Bogdanski, who was brought onto the project and given nearly five weeks of prep before shooting began. Although this presented a lot of potential risk for Bogdanski, the film “came out great in the end. I think it’s one of the reasons the look feels so modern, so spontaneous.”

In the film, the main characters face off against the harsh environment of their prison island. But to film such a landscape required the cinematographer and crew to also contend with these trying conditions. They shot on location outdoors for the majority of the feature, using just one physical structure: the prison. Also helping to define the film’s aesthetic was the lighting, which, as is typical with Bogdanski’s films, is as natural as possible without large artificial sources.

Most of the movie was shot in Montenegro, near sun-drenched Greece and Albania. Bogdanski does not mince words: “The locations were difficult.”

Weather seemed to impact Bogdanski the most. “It was very remote, and if it’s raining, it’s really raining. If it’s getting dark, it’s dark, and if it’s foggy, there is fog. You have to deal with a lot of circumstances you cannot control, and that’s always a bit of a nightmare for any cinematographer,” he says. “But, what is good about it is that you get the real thing, and you get texture, layers, and sometimes it’s better when it rains than when the sun is shining. Most of the time we were lucky with the weather and circumstances. The reality of location shooting adds quite heavily to the look and to the whole texture of the movie.”

The location shooting also affected this DP’s choice of cameras. “The footprint [I used] was as small as possible because we basically visited abandoned locales. Therefore, I chose as small a kit — lenses, cameras and lights — as possible,” Bogdanski points out. “Because [the camera] was handheld, every pound counted.” In this regard, he used ARRI Alexa Mini cameras and one Alexa SXT, and only shot with Zeiss Ultra Prime lenses – “no big zooms, no big filters, nothing,” he adds.

The prison build was on a remote mountain. On the upside, Bogdanski could shoot 360 degrees there without requiring the addition of CGI later. On the downside, the crew had to get up the mountain. A road was constructed to transport the gear and for the set construction, but even so, the trek was not easy. “It took two hours or longer each day from our hotel. It was quite an adventure,” he says.

As for the lighting, Bogdanski tried to shoot when the light was good, taking advantage of the location’s natural light as much as possible — within his documentary style. When this was not enough, LEDs were used. “Again, small footprint, smaller lens, smaller electrical power, smaller generators….” The night scenes were especially challenging because the nights were very short, no longer than five to six hours. When artificial rain had to be used, shooting was “a little painful” due to the size of the set, requiring the use of more traditional lighting sources, such as large Tungsten light units.

According to Bogdanski, filming Papillon followed what he calls an “eclectic” workflow, akin to the European method of filming whereby rehearsal occurred in the morning and was quite long, as the director rehearsed with the actors. Then, scenes were shot in script order, on the first take without technical rehearsals. “From there, we tried to cover the scene in handheld mode with two cameras in a kind of mash-up. We did pick up the close-ups and all that, but always in a very spontaneous and quick way,” says Bogdanski.

Looking back, Bogdanski describes Papillon as a “modern-period film”: a period look, without looking “period.” “It sounds a bit Catch-22, which it is, in my opinion, but that’s what we aimed for, a film that plays basically in the ’40s and ’50s, and later in the ’60s,” he says.

During the time since the original film was made in 1973, the industry has witnessed quite a technical revolution in terms of film equipment, providing the director and DP on the remake with even more tools and techniques at their disposal to leave their own mark on this classic for a new generation.

Nancy Schreiber: Mapplethorpe
Award-winning cinematographer Nancy Schreiber, ASC, has a resume spanning episodic television (The Comeback), documentaries (Eva Hesse) and features (The Nines). Her latest film, Mapplethorpe, paints an unflinching portrait of controversial-yet-revered photographer Robert Mapplethorpe, who died at the age of 42 from AIDS-related complications in 1989. Mapplethorpe, whose daring work influenced popular culture, rose to fame in the 1970s with his black-and-white photography.

In the early stages of planning the film, Schreiber worked with director Ondi Timoner and production designer Jonah Markowitz while they were still in California prior to the shoot in New York, where Mapplethorpe (played by The Crown’s Matt Smith) lived and worked at the height of his popularity.

“We looked at a lot of reference materials — books and photographs — as Ondi and I exchanged look books. Then we honed in on the palette, the color of my lights, the set dressing and wardrobe, and we were off to the races,” says Schreiber. Shooting began mid-July 2017.

Mapplethorpe is a period piece that spans three decades, all of which have a slightly different feel. “We kept the ’60s and into the ’70s quite warm in tone,” as this is the period when he first meets Patti Smith, his girlfriend at the time, and picks up a camera, explains Schreiber. “It becomes desaturated but still warm tonally when he and Patti visit his parents back home in Queens while the two are living at the Chelsea Hotel. The look progresses until it’s very much on the cool blue/gray side, almost black and white, in the later ’70s and ’80s.” During that time period, Mapplethorpe is successful, with an enormous studio, photographically exploring male body parts like no other person has ever done, while continuing to shoot portraits of the rich and famous.

Schreiber opted to use film, Super 16, rather than digital to capture the life of this famed photographer. “He shot in film, and we felt that format was true to his photography,” she notes. Despite Mapplethorpe’s penchant for mostly shooting in black and white, neither Timoner nor Schreiber considered using that format for the feature, mostly because the ’60s through ’80s in New York had very distinctive color palettes. They felt, however, that film in and of itself was very “textural and beautiful,” whereas you have to work a little harder with digital to make it look like film — even though new ways of adding grain to digital have become quite sophisticated. “Yet, the grain of Super 16 is so distinctive,” she says.

In addition, Kodak had just opened a lab in New York in the spring of 2017, facilitating their ability to shoot film by having it processed quickly nearby.

Schreiber used an ARRI Arriflex 416 camera for the project; when possible, she used two. She also had a set of Zeiss 35mm Super Speed lenses, along with two zoom lenses she used only occasionally for outdoor shots. “The Super Speeds were terrific. They’re vintage and were organic to the look of this period.”

She also used a light meter faithfully. Although Schreiber occasionally uses light meters when shooting digital, it was not optional for shooting film. “I had to use it for every shot, although after a couple of days, I was pretty good at guessing [by eyeing it],” Schreiber points out, “as I used to do when we only shot film.”

Soon after ARRI had introduced the Arriflex 416 – which is small and lightweight – the industry started moving to digital, prompting ARRI to roll out the now-popular Alexa. “But the [Arriflex 416] camera really caught on for those still shooting Super 16, as they do for the series The Walking Dead,” Schreiber says, adding that she was able to get her pair from the TCS Technological Cinevideo Services rental house in New York.

“I had owned an Aaton, a French camera that was very popular in the 1980s and ’90s. But today, the 416 is very much in demand, resembling the shape of my Aaton, both of which are ergonomic, fitting nicely on your shoulder. There were numerous scenes in the car, and I could just jump in the car with this very small camera, much smaller than the digital cameras we use on movies; it was so flexible and easy to work with,” recalls Schreiber.

As for the lenses, “again, I chose the Super Speed Primes not only because they were vintage, but because I needed the speed of the 1.3 lens since film requires more light.” She tested other lenses at TCS, but those were her favorites.

While Schreiber has used film on some commercials and music videos, it had been some time since she had used it for an entire movie. “I had forgotten how freeing it is, how you can really move. There are no cables to worry about. Although, we did transmit to a tiny video village,” she says. “We didn’t always have two cameras [due to cost], so I needed to move fast and get all the coverage the editor needed. We had 19 days, and we were limited in how long we could shoot each day; our budget was small and we couldn’t afford overtime.” At times, though, she was able to hire a Steadicam or B operator who really helped move them along, keeping the camera fluid and getting extra coverage. Timoner also shot a bit of Super 8 along the way.

There was just one disadvantage to using film: The stocks are slow. As Schreiber explains, she used a 500 ASA stock. Therefore, she needed very fast lenses and a fair amount of light in order to compensate. “That worked OK for me on Mapplethorpe because there was a different sense of lighting in the 1970s, and films seemed more ‘lit.’ For example, I might use backlight or hair light, which I never would do for [a film set in] present day,” she says. “I rated that stock at 400 to get rich blacks; that also slightly minimized the grain. The day-interior stock was 250, which I rated at 200. We are so used to shooting at 800 or 1280 ISO these days. It was an adjustment.”
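Rating a stock below its box speed is, in effect, a slight deliberate overexposure, and the difference can be expressed in stops. A quick worked example using the figures Schreiber mentions:

```python
import math

# Rating a stock slower than its box speed overexposes it slightly.
# Difference in stops = log2(box_speed / rated_speed).

for box, rated in ((500, 400), (250, 200)):
    stops = math.log2(box / rated)
    print(f"{box} ASA rated at {rated}: about {stops:.2f} stop over")

# Both combinations work out to roughly a third of a stop of extra
# exposure, which is what thickens the blacks and tames the grain a little.
```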

Schreiber on set with “Mapplethorpe” director Ondi Timoner.

Shooting with film was also more efficient for Schreiber. “We had monitors for the video village, but we were standard def, old-school, which is not an exact representation. So, I could move quickly to get enough coverage, and I never looked at a monitor except when we had Steadicam. What you see is not what you get with an SD tap. I was trusted to create the imagery as I saw fit. I think many people today are used to seeing the digital image on the monitor as what the final film will look like and may be nervous about waiting for the processing and transfer, not trusting the mystery or mystique of how celluloid will look.”

To top things off, Schreiber was backed by an all-female A camera team. “I know how hard it is for women to get work,” she adds. “There are so many competent women working behind the camera these days, and I was happy to hire them. I remember how challenging it was when I was a gaffer or started to shoot.”

As for costs, digital camera equipment is more expensive than Super 16 film equipment, yet there were processing and transfer costs associated with getting the film into the edit suite. So, when all was said and done, film was indeed more expensive to use, but not by much.

“I am really proud that we were able to do the movie in 19 days with a very limited budget, in New York, covering many periods,” concludes Schreiber. “We had a great time, and I am happy I was able to hire so many women in my departments. Women are still really under-represented, and we must demonstrate that there is not a scarcity of talent, just a lack of exposure and opportunity.”

Mapplethorpe is expected in theaters this October.


Karen Moltenbrey is a longtime writer and editor in the CG and post industries.

DP Rick Ray: Traveling the world capturing stock images

By Randi Altman

It takes a special kind of human to travel the world, putting himself in harm’s way to collect hard-to-find stock imagery, but Rick Ray thrives on this way of life. This Adobe Stock contributor has a long history as a documentary filmmaker and a resume that includes 10 Questions for the Dalai Lama (2006), Letters Home from the South China Seas: Adventures in Singapore & Borneo (1989) and Letters Home from Iceland (1990).

Let’s find out more about what makes Ray tick.

As a DP, are you just collecting footage to sell or are you working on films, docs and series as well?
I used to be a documentary filmmaker and have about 24 published titles in travel and biography, including the 10 Questions For The Dalai Lama and the TV series Raising The Bamboo Curtain With Martin Sheen. However, I found that unless you are Ken Burns or Michael Moore, making a living in the world of documentary films can be very difficult. It wasn’t until I came to realize that individual shots taken from my films and used in other productions were earning me more income than the whole film itself that I understood how potentially lucrative and valuable your footage can be when it is repurposed as stock.

That said, I still hire myself out as a DP on many Hollywood and independent films whenever possible. I also try to retain the stock rights for these assignments whenever possible.

A Bedouin man in Jordan.

How often are you on the road, and how do you pick your next place to shoot?
I travel for about three to four months each year now. Lately, I travel to places that interest me from a beauty or cultural perspective, whether or not they may be of maximal commercial potential. The stock footage world is inundated with great shots of Paris, London or Tokyo. It’s very hard for your footage to be noticed in such a crowded field of content. For that reason, lesser known locations of the world are attractive to me because there is less good footage of those places.

I also enjoy the challenges of traveling and filming in less comfortable places in the world, something I suppose I inherited from my days as a 25-year-old backpacking and hitchhiking around the world.

Are you typically given topics to capture — filling a need — or just shooting what interests you?
Mostly what interests me, but also I see a need for many topics of political relevance, and this also informs my shooting itinerary.

For example, immigration is in the news intensively these days, so I have recently driven the border wall from Tijuana to the New Mexico border capturing imagery of that. It’s not a place I’d normally go for a shoot, but it proved to be very interesting and it’s licensing all the time.

Rick Ray

Do you shoot alone?
Yes, normally. Sometimes I go with one other person, but that’s it. To be an efficient and effective stock shooter, you are not a “film crew” per se. You are not hauling huge amounts of gear around. There are no “grips,” and no “craft services.” In stock shooting around the world, as I define it, I am a low-key casual observer making beautiful images with low-key gear and minimal disruption to life in the countries I visit. If you are a crew of three or more, you become a group unto yourself, and it’s much more difficult to interact and experience the places you are visiting.

What do you typically capture with camera-wise? What format? Do you convert footage or let Adobe Stock do that?
I travel with two small (but excellent) Sony 4K Handycams (FDR-AX100), two drones, a DJI Osmo handheld steady-grip, an Edelkrone slider kit and two lightweight tripods. Believe it or not, these can all fit into one standard large suitcase. I shoot in XDCAM 4K and then convert it to Apple ProRes in post. Adobe Stock does not convert my clips for me; I deliver them ready to be ordered.
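A conversion pass like the one Ray describes is commonly scripted with ffmpeg. The sketch below is a generic example of that kind of batch transcode to ProRes, not Ray’s actual pipeline; the folder names, file extension and profile choice are assumptions.

```python
import subprocess
from pathlib import Path

# Generic batch transcode to Apple ProRes 422 HQ using ffmpeg.
# Paths, the source extension and the profile choice are illustrative assumptions.

SOURCE_DIR = Path("camera_clips")
OUT_DIR = Path("prores_clips")
OUT_DIR.mkdir(exist_ok=True)

for clip in sorted(SOURCE_DIR.glob("*.mp4")):
    out_file = OUT_DIR / (clip.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "3",   # 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",                      # uncompressed audio
        str(out_file),
    ], check=True)
```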

You edit on Adobe Premiere. Why is that the right system for you, and do you edit your footage before submitting? How does that Adobe Stock process work?
I used to work in Final Cut Pro 7 and Final Cut Pro X, but I switched to Adobe Premiere Pro after struggling with FCPX. As for “editing,” it doesn’t really play a part in stock footage submission. There is no editing as we are almost always dealing with single clips. I do grade, color correct, stabilize and de-noise many clips before I export them. I believe in having the clips look great before they are submitted. They have to compete with thousands of other clips on the site, and mine need to jump out at you and make you want to use them. Adobe allows users to submit content directly from Premiere to Adobe Stock, but since I deal in large volumes of clips in submitting, I don’t generally use this approach. I send a drive in with a spreadsheet of data when a batch of clips are done.

A firefighter looks back as a building collapses during the Thomas Fire in Ventura, California.

What are the challenges of this type of shooting?
Well, you are 100% responsible for the success or failure of the mission. There is no one to blame but yourself. Since you are mostly traveling low-key and without a lot of protection, it’s very important to have a “fixer” or driver in difficult countries. You might get arrested or have all of your equipment stolen by corrupt customs authorities in a country like Macedonia, as happened to me. It happens! You have to roll with the good and the bad, ask forgiveness rather than permission and be happy for the amazing footage you do manage to get.

You left a pretty traditional job to travel the world. What spurred that decision, and do you ever see yourself back at a more 9-to-5 type of existence?
Never! I have figured out the perfect retirement plan for myself. Every day I can check my sales from anywhere in the world, and on most days the revenue more than justifies the cost of the travel! And it’s all a tax write-off. Who has benefits like that?

A word of warning, though — this is not for everyone. You have to be ok with the idea of spending money to build a portfolio before you see significant revenue in return. It can take time and you may not be as lucky as I have been. But for those who are self-motivated and have a knack for cinematography and travel, this is a perfect career.

Can you name some projects that feature your work?
Very often this takes me by surprise since I often don’t know exactly how my footage is used. More often than not, I’m watching CNN, a TV show or a movie and I see my footage. It’s always a surprise and makes me laugh. I’ve seen my work on the Daily Show, Colbert, CNN, in commercials for everything from pharmaceuticals to Viking Cruises, in political campaign ads for people I agree and disagree with, and in music videos for Neil Young, Bruce Springsteen, Coldplay and Roger Waters.

Fire burns along the road near a village in the Palestinian territories.

Shooting on the road must be interesting. Can you share a story with us?
There have been quite a few. I have had my gear stolen in Israel (twice) and in Thailand, and my gear was confiscated by corrupt customs authorities in Macedonia, as I mentioned earlier. I have been jailed by Ethiopian police for not having a valid filming permit, a permit that, it turned out, wasn't actually required. Once a proper bribe was arranged, they changed out of their police uniforms into native costume and performed as tour guides and cultural emissaries for me.

In India, I was on a train to the Kumbh Mela, which was stopped by a riot and burned. I escaped with minor injuries. I was also accosted by communist revolutionaries in Bihar, India. Rather than be a victim, I got out of the car and filmed it, and the leader and his generals then reviewed the footage and decided to do it over. After five takes of them running down the road and past the camera, the leader finally approved the take and I was left unharmed.

I’ve been in Syria and Lebanon and felt truly threatened by violence. I’ve been chased by Somali bandits at night in a van in Northern Kenya. Buy me a beer sometime, I’ll tell you more.

Behind the Title: WIG director/DP Daniel Hall

NAME: Daniel Hall

COMPANY: LA-based Where It’s Greater (@whereitsgreater)

Dan on set for Flyknit Hyperdunk project.

CAN YOU DESCRIBE YOUR COMPANY?
Where It's Greater is a nimble creative studio. We sit somewhere between the traditional production company and the age-old advertising agency, meaning we are a small team of creatives who are able to work with brands and other agencies alike. It doesn't matter where they are in the spectrum of their campaign; we help bring their projects to life from concept to camera to final delivery. We like getting our hands dirty. We have a physical studio space with various production capabilities and in-house equipment that affords us some unique opportunities from both an efficiency and a creative standpoint.

WHAT’S YOUR JOB TITLE?
Along with being the founder, I am director and lead cinematographer.

WHAT DOES THAT ENTAIL?
That entails pretty much everything and then some. Where It’s Greater is my baby, so everything from physically lighting and capturing the photos on shoots to making sure we’re headed in the right direction as a company to securing new clients and jobs on a consistent basis. I take out the trash sometimes, too.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I think what may surprise people the most is that we work mostly client-direct. A lot of agencies or cinematographers have agents or reps who go out and get them work, but I've been fortunate enough to personally establish long-lasting, fruitful relationships with clients like Nike, Beats By Dre and MeUndies.

WHAT’S YOUR FAVORITE PART OF THE JOB?
By far, my favorite part is creating beautiful advertising work for great brands. It’s really special when you get to connect with clients who not only share the same values as you, but also align and speak the same language in terms of taste and preferences. Those projects always come out memorable.

WHAT’S YOUR LEAST FAVORITE?
All the other mundane tasks I take on day to day, solely so that I can create some truly great work every now and then. But it's part of the process; you can't have one without the other.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
Anytime a client calls me with an exciting new opportunity (smiles).

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I could see myself doing a few different things, but they are all in the creative/production field. So I would most likely be doing what I’m doing, but just not for myself.

HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
I always remember having a creative eye from a young age. I get that naturally from my dad, who was a camera operator, but it wasn't until my cousin put a camera in my hand around 18 or 19 that I really fell in love with photography. But even then I didn't exactly know what to do with it. I just followed the flow of life. I took advantage of the opportunities in front of me and worked my ass off to maximize them and, in turn, set myself up for the next opportunity.

After 10 years, I have a 4,000-square-foot studio space in Los Angeles with a bunch of toys and equipment that I love to use on projects with some of the top brands in the world. I’m very grateful and fortunate in that way. I’m excited to look up again in the next 10 years.

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
We most recently worked with Beats By Dre on their global 'Made Defiant' campaign. We were commissioned to direct and produce a series of product films and still-life imagery showcasing their line of headphones and earbuds in new colors that resemble the original headphone, in order to pay homage to and celebrate the brand's 10-year anniversary. We took advantage of this opportunity to use our six-axis robotic arm, which we own and operate in-house. The arm gave us the ability to capture a series of beauty shots in motion that wouldn't be possible with any other tech on the market. I think that is what made this job special.

WHAT IS THE PROJECT THAT YOU ARE MOST PROUD OF?
I really loved what we did last summer for Nike Basketball and Dick's Sporting Goods. We directed and produced a 30-second live-action spot centered around one of the most popular basketball shoes of the summer, the Flyknit Hyperdunk. Again, we were able to produce this completely in-house, building out a stylized basketball court in our studio space and harnessing our six-axis robot yet again to make a simple yet compelling advert for the sportswear giant.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
Chemex — I’m by no means a coffee snob, but I definitely have to have a cup to start my day. There is something therapeutic about it.

Color meter — I can live without my light meter. I rarely, if ever, shoot film for commercial jobs, at least at this phase in my career, but I love my Sekonic C-700R color meter. It allows me to balance all my images and films to taste.

Hyperice foam roller — In the last year I’ve been a lot more active and more into health and fitness. It’s really changed my life in a lot of ways for the better. This vibrating foam roller is a major key to keeping my muscles loose and stretched so I can recover a lot faster.

DO YOU LISTEN TO MUSIC WHILE YOU WORK?
Of course. I got my start growing up in Atlanta directing music videos for some pretty noteworthy artists, so there is frequently some form of southern hip-hop playing throughout the studio. From the iconic duo of Outkast to the newer generation of artists like Future and 2 Chainz, who I’ve had the pleasure of working with, I always have something playing in the background.

WHAT DO YOU DO TO DE-STRESS FROM IT ALL?
I do some of the typical things on a regular basis: exercise, massage therapy, vacation time. Nothing special really as of yet, but if I crack the code and find a new technique I’ll be sure to share!

Atomos Ninja V records 4K 10-bit from new Nikon mirrorless cameras  

The new Nikon Z6 and Z7 mirrorless cameras output a full-frame 10-bit 4K N-Log signal, which the new Atomos Ninja V 4K HDR monitor/recorder can record and display in HDR.

The Nikon Z6 and Z7 have sensors that output 4K images over HDMI, ready for conversion to HDR by Atomos. The Atomos Ninja V records the output to production-ready 10-bit Apple ProRes or Avid DNx formats.

Atomos supports Nikon Log, Apple ProRes recording and HDR monitoring from the Z series cameras. The tiny 5-inch Ninja V is a natural companion for these full-frame mirrorless cameras, and the combination is well suited to corporate, news, documentary and nature work, or b-roll for Hollywood productions.

The Z6 and Z7 offer the Nikon N-Log gamma, a brand-new Log gamma designed by Nikon to get the most out of the cameras’ sensors and wide dynamic range. Atomos helped resolve N-Log to HDR on their devices and their engineers have developed specific presets for it. Setup is automatic — plug the Ninja V into the cameras. The Ninja V can show 10+ stops of dynamic range on-screen to allow users to make accurate exposure and color decisions. The recorder can receive timecode and be triggered directly from the cameras.

The Ninja V costs $695, excluding SSD and batteries.

“It’s fantastic to push technology barriers with our friends at Nikon,” says Atomos CEO Jeromy Young. “Combining the new Nikon and our Ninja V HDR monitor/recorder gives filmmakers exactly what they have been asking for — a compact full-frame 4K 10-bit recording system at [this] price point.”

Review: OConnor camera assistant bag

By Brady Betzel

After years and years of gear acquisition, I often forget to secure proper bags and protection for my equipment. From Pelican cases to the cheapest camera bags, I've learned that a truly high-quality bag will extend the life of your equipment.

In this review I am going to go over a super-heavy-duty assistant camera bag by OConnor, which is part of the Vitec Group. While the Vitec Group provides many different products — from LED lighting to robotic camera systems — OConnor is typically known for its professional fluid heads and tripods. This camera bag is made to fit not only their products, but also other gear, such as pan bars and ARRI plates. The OConnor AC bag is a no-nonsense camera and accessory bag with velcro-reinforced, repositionable inserts that will accommodate most cameras and accessories you have.

As soon as I opened the box and touched the AC bag I could tell it was high quality. The bag exterior is waterproof and easily wipeable. More importantly, there is an internal water- and dustproof liner that lets the lid hinge open, keeping equipment close at hand, while the liner stays fully zipped. This internal waterproofing is rated to a 1.2m/4ft column of water. Once I got past the quality of materials, my second inspection focused on the zippers. If I have a camera bag with bad zippers or snaps, it usually gets given away or tossed, but the AC bag has strong, smooth-gliding zippers.

On the lid and inside the front pockets are extremely tough, see-through mesh pockets for everything from batteries to memory cards. On the front is a business card/label holder. Around the outside are multiple pockets with fixing points for carabiner hooks. In addition, there are D-rings for the included leather strap if you want to carry this bag over your shoulder instead of using the handles. The bag comes with five dividers that velcro to the inside, including two right-angle dividers. The dividers are made to securely tie down all OConnor heads and accessories. Finally, the AC bag comes with a separate pouch for quick access on set.

Summing Up
In the end, the OConnor AC bag is a well-made and roomy bag that will protect your camera gear and accessories from dust as well as water for $375. The inside measures 18 x 12 x 10.5 inches, the outside measures 22 x 14.5 x 10.5 inches, and the bag has been designed to fit inside a Pelican 1620 case. You can check out the OConnor AC bag on the company's website and find a dealer in your area.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on Life Below Zero and Cutthroat Kitchen. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Franz Kraus to advisory role at ARRI, Michael Neuhaeuser takes tech lead

The ARRI Group has named Dr. Michael Neuhaeuser as the new executive board member responsible for technology. He succeeds Professor Franz Kraus, who after more than 30 years at ARRI, joins the Supervisory Board and will continue to be closely associated with the company. Neuhaeuser starts September 1.

Kraus, who has been leading tech development at ARRI for the last few decades, played an essential role in the development of the Alexa digital camera system and in building the company's early competence in multi-channel LED technology for ARRI lighting. During Kraus' tenure at ARRI, and while he was responsible for research and development, the company was presented with nine Scientific and Technical Awards by the Academy of Motion Picture Arts and Sciences for its outstanding technical achievements.

In 2011, along with two colleagues, Kraus was honored with an Academy Award of Merit, an Oscar statuette, for the design and development of the ARRILASER digital film recorder.

Neuhaeuser, who is now responsible for technology at the ARRI Group, previously served as VP of automotive microcontroller development at Infineon Technologies in Munich. He studied electrical engineering at the Ruhr-University Bochum, Germany, and subsequently completed his doctorate in semiconductor devices. He brings with him 30 years of experience in the electronics industry.

Neuhaeuser started his industrial career at Siemens Semiconductor in Villach, Austria, and also took over development leadership at Micram Microelectronic in Bochum. He joined Infineon Technologies in 1998, where he performed various management functions in Germany and abroad. Some of his notable accomplishments include leading the digital cordless business from 2005 and, together with his team, developing the world's first fully integrated DECT chip. In 2009, he was appointed VP/GM at Infineon Technologies Romania in Bucharest where, as country manager, he built up various local activities with more than 300 engineers. In 2012, he was asked to head up the automotive microcontroller development division, for which he and his team developed the highly successful Aurix product family, which is used in every second car worldwide.

Main Image: L-R: Franz Kraus and Michael Neuhaeuser.

Roundtable: Director Autumn McAlpin and her Miss Arizona post team

By Randi Altman

The independent feature film Miss Arizona is a sort of fish-out-of-water tale that focuses on Rose Raynes, a former beauty queen and current bored wife and mother who accepts an invitation to teach a life skills class at a women's shelter. As you might imagine, the four women she meets there don't feel they have much in common with her. While Rose is "teaching," the women are told that one of their abusers is on his way to the shelter. The women escape and set out on an all-night adventure through LA and, ultimately, to a club where the women enter Rose into a drag queen beauty pageant — and, of course, along the way they form a bond that changes them all.

L-R: Camera operator Eitan Almagor, DP Jordan McKittrick and Autumn McAlpin.

Autumn McAlpin wrote and directed the film, which has been making its way through the film festival circuit. She hired a crew made up of 70 percent women to help tell this tale of female empowerment. We reached out to her, her colorist Mars Williamson and her visual effects/finishing artist John Davidson to find out more.

Why did you choose the Alexa Mini? And why did you shoot mostly handheld?
Autumn McAlpin: The Alexa Mini was the first choice of our DP Jordan McKittrick, with whom I frequently collaborate. We were lucky enough to be able to score two Alexa Mini cameras on this shoot, which really helped us achieve the coverage needed for an ensemble piece in which five-plus key actors were in almost every shot. We love the image quality and dynamic range of the Alexas, and the compact and lightweight nature of the Mini helped us achieve an aggressive shooting schedule in just 14 days.

We felt handheld would achieve the intimate yet at times erratic look we were going for following an ensemble of five women from very different backgrounds who were learning to get along while trying to survive. We wanted the audience to feel as if they were going on the journey along with the women, and thus felt handheld would be a wise approach to accomplish this goal.

How early did post — edit, color — get involved?
McAlpin: We met with our editor Carmen Morrow before the shoot, and she and her assistant editor Dustin Fleischmann were integral in delivering a completed rough cut just five weeks after we wrapped. We needed to make key festival deadlines. Each day Dustin would drive footage from set over to Carmen’s bay, where she could assemble while we were shooting so we could make sure we weren’t missing anything crucial. This was amazing, as we’d often be able to see a rough assembly of a scene we had shot in the morning by the end of day. They cut on Avid Media Composer.

My DP Jordan and I agreed on the overall look of the film and how we wanted the color to feel rich and saturated. We didn't meet our colorist, Mars Williamson, until after we had wrapped production, but we were really excited about what we saw in her reel. Mars had moved from LA to Melbourne, so we knew we wouldn't be able to work in close quarters, but we were confident we'd be able to accomplish the desired workflow in the time needed. Mars was extremely flexible to work with.

Can you talk more about the look of the film?
McAlpin: Due to the nature of our film, we sought to create a rich, saturated look color wise. Our film follows a former pageant queen on an all-night adventure through LA with four unlikely friends she meets at a women’s shelter. In a way, we tried to channel an Oz-like world as our ensemble embarks into the unknown. We deliberately used color to represent the various realities the women inhabit. In the film’s open, our production design (by Gabriel Gonzales) and wardrobe (by Cat Velosa) helped achieve a stark, cold world — filled with blues and whites — to represent our protagonist Rose’s loneliness.

As Rose moves into the shelter, we went with warmer tones and a more eclectic production design. A good portion of Act II takes place in a drag club, which we asked Gabe to design to be rich and vibrant, using reds and purples. Toward the end of the film as Rose finds resolution, we went with more naturalistic lighting, primarily outdoor shots and golden hues. Before production, Jordan and I pulled stills from films such as Nick & Norah’s Infinite Playlist, Black Swan and Short Term 12, which provided strong templates for the looks we were trying to achieve.

Is there a particular scene or look that stands out for you?
McAlpin: There is a scene when our lead Rose (Johanna Braddy) performs a ventriloquist act onstage with a puppet and they sing Shania Twain’s “Man, I Feel Like a Woman.”  Both Rose and the puppet wore matching cowgirl wardrobe and braids, and this scene was lit to be particularly vibrant with hot pinks and purples. I remember watching the monitors on set and feeling like we had really nailed the rich, saturated look we were going for in this offbeat pageant world we had created.

L-R: Dana Wheeler-Nicholson, Shoniqua Shandai, producer DeAnna Cooper, Johanna Braddy, Autumn McAlpin, Otmara Marrero and Robyn Lively.

Can you talk about the workflow from set to post?
McAlpin: As a low-budget indie, many of our team work from home offices, which made collaboration friendly and flexible. For the four months following production, I floated between the workspaces of our talented and efficient editor Carmen Morrow, brilliant composer Nami Melumad, dedicated sound designer Yu-Ting Su, VFX and online extraordinaire John Davidson, and we used Frame.io to work with our amazing colorist Mars Williamson. Everyone worked so hard to help achieve our vision in our timeframe. Using Frame.io and Box helped immensely with file delivery, and I remember many careful drives around LA, toting our two RAID drives between departments. Postmates food delivery service helped us power through! Everyone worked hard together to deliver the final product, and for that I’m so grateful.

Can you talk about the type of film you were trying to make, and did it turn out as you hoped?
McAlpin: I volunteered in a women's shelter for several years teaching a life skills class, and this was an experience that introduced me to strong, vibrant women whose stories I longed to tell. I wrote this script very quickly, in just three weeks, though really, the story seemed to write itself. It was the fall of 2016, at a time when I was agitated by the way women were being portrayed in the media. This was shortly before the #metoo movement, and during the election and women's march. The time felt right to tell a story about women and other marginalized groups coming together to help each other find their voices and a safe community in a rapidly divisive world.

I’m not going to lie, with our budget, all facets of production and post were quite challenging, but I was so overwhelmed by the fastidious efforts of everyone on our team to create something powerful. I feel we were all aligned in vision, which kept everyone fueled to create a finished product I am very proud of. The crowning moment of the experience was after our world premiere at Geena Davis’ Bentonville Film Fest, when a few women from the audience approached and confided that they, too, had lived in shelters and felt our film spoke to the truths they had experienced. This certainly made the whole process worthwhile.

Autumn, you wrote as well as directed. Did the story change or evolve once you started shooting or did you stick to the original script?
McAlpin: As a director who is very open to improv and creative play on set, I was quite surprised by how little we deviated from the script. Conceptually, we stuck to the story as written. We did have a few actors who definitely punched up scenes by making certain lines more their own (and much more humorous, i.e. the drag queens). And there were moments when location challenges forced last-minute rewrites, but hey, I guess that’s one advantage to having the writer in the director’s chair! This story seemed to flow from the moment it first arrived in my head, telling me what it wanted to be, so we kind of just trusted that, and I think we achieved our narrative goals.

You used a 70 percent female crew. Can you talk about why that was important to you?
McAlpin: For this film, our producer DeAnna Cooper and I wanted to flip the traditional gender ratios found on sets, as ours was indeed a story rooted in female empowerment. We wanted our set to feel like a compatible, safe environment for characters seeking safety and trusted female friendships. So many of the cast and crew who joined our team expressed delight in joining a largely female team, and I think/hope we created a safe space for all to create!

Also, as women, we tend to get each other — and there were times when those on our production team (all mothers) were able to support each other’s familial needs when emergencies at home arose. We also want to give a shout-out to the numerous woman-supporting men we had on our team, who were equally wonderful to work with!

What was everyone’s favorite scene and why?
McAlpin: There’s a moment when Rose has a candid conversation with a drag queen performer named Luscious (played by Johnathan Wallace) in a green room during which each opens up about who they are and how they got there. Ours is a fish out of water story as Rose tries to achieve her goal in a world quite new to her, but in this scene, two very different people bond in a sincere and heartfelt way. The performances in this scene were just dynamite, thanks to the talents of Johanna and Johnathan. We are frequently told this scene really affects viewers and changes perspectives.

I also have a personal favorite moment toward the end of the film in which a circle of women from very different backgrounds come together to help out a character named Leslie, played by the dynamic Robyn Lively, who is searching for her kids. One of the women helping Leslie says, “I’m a mama, too,” and I love the strength felt in this group hug moment as the village comes together to defend each other.

If you all had to do it again, what would you do differently?
McAlpin: This was one fast-moving train, and I know, as is the case in every film, there are little shots or scenes we’d all love to tweak just a little if given the chance to start over from scratch. But at this point, we are focusing on the positives and what lies in store for Miss Arizona. Since our Bentonville premiere and LA premiere at Dances With Films, we have been thrilled to receive numerous distribution offers, and it’s looking like a fall worldwide release may be in store. We look forward to connecting with audiences everywhere as we share the message of this film.

Mars Williamson

Mars, can you talk about your process and how you worked with the team? 
Williamson: Autumn put us in touch, and John and I touched base a little bit before I was going to start color. We all had a pretty good idea of where we were taking it from the offline and discussed little tweaks here and there, so it was fairly straightforward. There were a couple of things, like changing a wall color and the last scene needing more sunset than was shot. Autumn and John are super easy and great to work with. We found out pretty early that we'd be able to collaborate pretty easily since John has DaVinci Resolve on his end in the states as well. I moved to Melbourne permanently right before I really got into the grade.

Unbeknownst to me, Melbourne was/is in the process of upgrading its Internet, which is currently painfully slow. We did a couple of reviews via Frame.io and eventually moved to me just emailing John my project. He could relink to the media on his end and all of my color grading would come across for sessions in LA with Autumn. It was the best solution to contend with the snail's-pace uploads of large files. From there it was just going through it reel by reel and getting notes from the stateside team. I couldn't have worked on this with a better group of people.

What types of projects do you work on most often?
Williamson: My bread and butter has always been TV commercials, but I've worked hard to make sure I work on all sorts of formats across different genres. I like to make sure I've got a wide range of stuff under my belt. The pool is smaller here in Australia than it is in LA (where I moved from), so TV commercials are still the bill payers, but I'm also still dipping into the indie scene here and trying to diversify what I work on. I'm still working on a lot of indie projects and music videos from the states as well, so thank you, stateside clients! Thankfully the time difference hasn't hindered most of them (smiles). It has led to an all-nighter here and there for me, but I'm happy to lose sleep for the right projects.

How did you work with the DP and director on the look of the film? What look did you want and how did you work to achieve that look or looks?
John Davidson: Magic Feather is a production company and creative agency that I started back in 2004. We provide theatrical marketing and creative services for a wide variety of productions. From the 3D atomic transitions in The Big Bang Theory to the recent Jurassic World: Fallen Kingdom week-long event on Discovery, we have a pretty great body of work. I came on board Miss Arizona very much by accident. Last year, after working with Weta in New Zealand, we moved to Laguna Niguel and connected with Autumn and her husband Michael via some mutual friends. I was intrigued that they had just finished shooting this movie on their own and offered to replace a few license plates and a billboard. Somehow I turned that into coordinating the post-Avid workflow across the planet and creating 100-plus visual effects shots. It was a fantastic opportunity to use every tool in our arsenal to help a film with a nice message and a family we have come to adore.

John Davidson

Working with Jordan and Autumn for VFX and final mastering was educational for all of us, but definitely so with me. As I mentioned to Jordan after the showing in Hollywood, if I did my job right you would never know. There were quite a few late nights, but I think that they are both very happy with the results.

John, I understand there were some challenges in the edit? Relinking the camera source footage? Can you talk about that and how you worked around it?
Davidson: The original Avid cut was edited off of the dailies at 1080p with embedded audio. The masters were 3.2K Arri Alexa Mini Log with no sync sound. There were timecode issues the first few days on set, and because Mars was using DaVinci Resolve to color, we knew we had to get the footage from Avid to Resolve somehow. Once we got the footage into DaVinci via AAF, I realized it was going to be a challenge relinking sources from the dailies. Resolve was quite the utility knife, and after a bit of tweaking we were able to get the silent master video clips linked up. Because 12TB drives are expensive, we thought it best to trim media to 48-frame handles and ship a smaller drive to Australia for Mars to work with. With Mars's direction we were able to get that handled and shipped.

While Mars was coloring in Australia, I went back into the sources and began the process of relinking the original separate audio to the video sources because I needed to be able to adjust/re-edit a few scenes that had technical issues we couldn’t fix with VFX. Resolve was fantastic here again. Any clip that couldn’t be automatically linked via timecode was connected with clap marks using the waveform. For safety, I batch-exported all of the footage out with embedded audio and then relinked the film to that. This was important for archival purposes as well as any potential fixes we might have to do before the film delivered.
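The timecode matching Davidson describes comes down to simple frame arithmetic. Below is a minimal sketch of that logic, assuming non-drop timecode counted at a nominal 24fps; it illustrates the idea rather than reproducing what Resolve does internally.

def tc_to_frames(tc: str, fps: int = 24) -> int:
    """Convert 'HH:MM:SS:FF' non-drop timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def audio_offset(video_start: str, audio_start: str, fps: int = 24) -> int:
    """How many frames to slip the audio so it lines up with the video."""
    return tc_to_frames(video_start, fps) - tc_to_frames(audio_start, fps)

# Example: a clip starting at 01:02:10:05 and its separate WAV at 01:02:09:17
print(audio_offset("01:02:10:05", "01:02:09:17"))   # 12-frame slip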

At this point Mars was sharing her cuts on Frame.io with Jordan and Autumn. I felt like a little green shift was being introduced over H.264, so we would occasionally meet here to review a relinked XML that Mars would send for a full-quality inspection. For VFX we used Adobe After Effects and worked in flat color. We then would upload shots to box.com for Mars to incorporate into her edit. Two re-cut scenes were also done this way, which was a challenge because any changes had to be shared with the audio teams, who were actively scoring and mixing.

Once Mars was done we put the final movie together here, and I spent about two weeks working on it. At this point I took the film from Resolve to FCP X. Because we were mastering at 1080p, we had the full 3.2K frame for flexibility. Using a 1080p timeline in FCP X, the first order of business was making final on-site color adjustments with Autumn.

Can you talk about the visual effects provided?
Davidson: For VFX, we focused on things like the license plates and billboards, but also took a bit of initiative and reviewed the whole movie for areas where we could help. Like everyone else, I loved the look of the stage and club scenes, but wanted to add just a little flare to the backlights so the LED grids would be less visible. This was done in Final Cut Pro X using the MotionVFX plugin mFlare2. It made very quick work of using its internal Mocha engine to track the light sources and obscure them as needed when a light went behind a person's head, for example. It would have been agony tracking so many lights in all those shots using anything else. We had struggled for a while getting replacement license plates to track using After Effects and Mocha. However, the six shots that gave us the most headaches were done as a test in FCP X in less than a day using CoreMelt's TrackX. We also used Final Cut Pro X's stabilization to smooth out any jagged camera shakes, and added some shake using FCP X's handheld effect on a few shots that needed it for consistency.

Another area where we had to get creative was the night driving shots, which were just too bright even after color. By layering a few different Rampant Design overlays set to multiply, we were able to simulate lights in motion around the car at night, with areas randomly increasing and decreasing in brightness. That had a big impact on smoothing out those scenes, and I think everyone was pretty happy with the result. For fun, Autumn also let me add in a few mostly digital shots, like the private jet. This was done in After Effects using Trapcode Particular for the contrails, and a combination of Maxon Cinema 4D and Element 3D for the jet.

Resolve’s face refinement and eye brightening were used in many scenes to give a little extra eye light. We also used Resolve for sky replacement on the final shot of the film. Resolve’s tracker is also pretty incredible, and was used to hide little things that needed to be masked or de-emphasized.

What about finishing?
Davidson: We finalized everything in FCP X and exported a full, clean ProRes cut of the film. We then re-imported that and added grain, unsharp masks and a light vignette for a touch of cinematic texture. The credits were an evolving process, so we created an Apple Numbers document that was shared with my internal Magic Feather team, as well as Autumn and the producers. As the final document was adjusted and tweaked we would edit an Affinity Photo file that my editor AJ Paschall and I shared. We would then export a huge PNG file of the credits into FCP X and set position keyframes to animate the scroll. Any time a change was made we would just relink to the new PNG export and FCP X would automatically update the credits. Luckily, that was easy because we did that probably 50 times.
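The scroll itself boils down to two position keyframes, and the values are easy to compute from the PNG's height. Here is a small sketch of that arithmetic; the image height, reading speed and frame size are made-up numbers for illustration, not the values used on the film.

def credit_roll(png_height: int, frame_height: int = 1080,
                fps: float = 23.976, px_per_second: float = 90.0):
    """Return (start_y, end_y, duration_in_frames) for a bottom-to-top credit roll."""
    start_y = frame_height        # image parked just below the visible frame
    end_y = -png_height           # image parked just above the visible frame
    travel = start_y - end_y      # total pixels the image must move
    duration = round(travel / px_per_second * fps)
    return start_y, end_y, duration

# Example: a 14,000px-tall credits PNG at a comfortable 90 px/sec reading speed
print(credit_roll(14000))   # (1080, -14000, 4017): roughly a 2m47s roll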

Lastly, our final delivery to the DCP company was an HEVC 10-bit 2K encode. I am a huge fan of HEVC. It's a fantastic codec, but it does have one big caveat: it takes forever to encode. Using Apple Compressor and a 10-core iMac Pro, it took approximately 13 hours. That said, it was worth it because the colors were accurately represented and it gave us a file that was 5.52GB versus 18GB or 20GB. That's a hefty savings on size while also being an improvement in quality over H.264.
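They delivered out of Apple Compressor, but for anyone who wants to experiment with a comparable encode, a 10-bit HEVC pass can be sketched with FFmpeg's libx265 as below. The CRF value and file names are assumptions for illustration, not the settings used on Miss Arizona.

import subprocess

# 10-bit HEVC, constant quality; the hvc1 tag keeps QuickTime-based players happy.
subprocess.run([
    "ffmpeg", "-i", "feature_master.mov",      # hypothetical ProRes master
    "-c:v", "libx265", "-preset", "slow", "-crf", "18",
    "-pix_fmt", "yuv420p10le",
    "-tag:v", "hvc1",
    "-c:a", "copy",
    "feature_hevc_10bit.mov",
], check=True)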

Photo Credit: Rich Marchewka

 

DP Patrick Stewart’s path and workflow on Netflix’s Arrested Development

With its handheld doc-style camerawork, voiceover narration and quirky humor, Arrested Development helped revolutionize the look of TV sitcoms. Created by Mitchell Hurwitz, with Ron Howard serving as one of its executive producers, the half-hour comedy series follows the once-rich Bluth family, which continues to live beyond its means in Southern California. At the center of the family is the mostly sane Michael Bluth (Jason Bateman), who does his best to keep his dysfunctional family intact.

Patrick Stewart

The series first aired for three seasons on the Fox TV network (2003-2006) but was canceled due to low ratings. Because the series was so beloved, in 2013, Netflix brought it back to life with its original cast in place. In May 2018, the fifth season began streaming, shot by cinematographer Patrick Stewart (Curb Your Enthusiasm, The League, Flight of the Conchords). He called on VariCam LT cinema cameras.

Stewart’s path to becoming a cinematographer wasn’t traditional. Growing up in Los Angeles and graduating with a degree in finance from the University of Santa Clara, he got his start in the industry when a friend called him up and asked if he’d work on a commercial as a dolly grip. “I did it well enough where they called me for more and more jobs,” explains Stewart. “I started as a dolly grip but then I did sound, worked as a tape op and then started in the camera department. I also worked with the best gaffers in San Francisco, who showed me how to look at the light, understand it and either augment it or recreate it. It was the best practical film school I could have ever attended.”

Not wanting to stay "in a small pond with big fish," Stewart decided to move back to LA and started working for MTV, which brought him into the low-budget handheld world. It also introduced him to "interview lighting," where he lit celebrities like Barbra Streisand, Mick Jagger and Paul McCartney. "At that point I got to light every single amazing musician, actor, famous person you could imagine," he says. "This practice afforded me the opportunity to understand how to light people who were getting older, and how to make them look their best on camera."

In 1999, Stewart received an offer to shoot Mike Figgis’ film Time Code (2000), which was one of the landmark films of the DV/film revolution. “It was groundbreaking not only in the digital realm but the fact that Time Code was shot with four cameras from beginning to end, 93 minutes, without stopping, shown in a quad split with no edits — all handheld,” explains Stewart. “It was an amazingly difficult project, because having no edits meant you couldn’t make mistakes. I was very fortunate to work with a brilliant renegade director like Mike Figgis.”

Triple Coverage
When hired for Arrested Development, the first request Stewart approached Hurwitz with was to add a third camera. Shooting with three cameras with multiple characters can be a logistical challenge, but Stewart felt he could get through scenes more quickly and effectively, in order to get the actors out on time. “I call the C camera the center camera and the A and the B are screen left and screen right,” Stewart explains. “C covers the center POV, while A and B cover the scene from their left and right side POV, which usually starts with overs. As we continue to shoot the scene, each camera will get tighter and tighter. If there are three or more actors in the scene, C will get tighter on whoever is in the center. After that, C camera might cover the scene following the dialogue with ‘swinging’ singles. If no swinging singles are appropriate, then the center camera can move over and help out coverage on the right or left side.

“I’m on a walkie — either adjusting the shots during a scene for either of their framing or exposure, or I’m planning ahead,” he continues. “You give me three cameras and I’ll shoot a show really well for you and get it done efficiently, and with cinematic style.”

Because it is primarily a handheld show, Stewart needed lenses that would not weigh down his operators during long takes. He employed Fujinon Cabrio zooms (15-35mm, 19-90mm, and 85-300mm), which are all f/2.8 lenses.

For camera settings, Stewart captures 10-bit 422 UHD (3840×2160) AVC-Intra files at 23.98fps. He also captures in V-Log but uses the V-709 LUT. "To me, you can create all the LUTs you want," he says, "but more than likely you get to color correction and end up changing things. I think the basic 709 LUT is really nice and gentle on all the colors."
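For editors who receive V-Log material and want quick Rec 709 viewing copies, the same idea can be approximated in post by baking a 3D LUT into dailies with FFmpeg. This is just an illustration, not part of Stewart's on-set workflow, and the .cube file name is a placeholder for whatever V-Log-to-V-709 LUT you have on hand.

import subprocess

# Bake a V-Log-to-Rec 709 LUT into an H.264 viewing copy of a camera clip.
subprocess.run([
    "ffmpeg", "-i", "A001C003.mov",        # hypothetical camera original
    "-vf", "lut3d=vlog_to_v709.cube",      # placeholder LUT file
    "-c:v", "libx264", "-crf", "18",
    "-c:a", "copy",
    "A001C003_dailies.mp4",
], check=True)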

Light from Above
Much of Arrested Development is shot on a stage, so lighting can get complicated, especially when there are multiple characters in a scene. To make things less complicated, Stewart provided a gentle soft light from softboxes covering the top of each stage set, using 4-by-8 wooden frames with tungsten-balanced Quasar tubes dimmed down to 50%. His motivation for the light is that the unseen source could basically be a skylight. If characters are close to windows, he uses HMIs to create "natural sunlight" punching through to light the scene. "The nice thing about the VariCam is that you don't need as many photons, and I did pretty extensive tests during pre-production on how to do it."

On stage, Stewart sets his ISO to 5000 base and dials down to 2500 and generally shoots at an f/2.8 and ½. He even uses one level of ND on top of that. “You can imagine 27-foot candles at one level of ND at a 2.8 and 1/2 — that’s a pretty sensitive camera, and I noticed very little noise. My biggest concern was mid-tones, so I did a lot of testing — shooting at 5000, shooting at 2500, 800, 800 pushed up to 1600 and 2500.

“Sometimes with certain cameras, you can develop this mid-tone noise that you don't really notice until you're in post. I felt like shooting at 5000 knocked down to 2500 was giving me the benefit of lighting the stage at these beautifully low-lit levels where we would never be hot. I could also easily put 5Ks outside the windows to have enough sunlight to make it look a bit overexposed. I felt that with the 5000 base knocked down to 2500, the noise level was negligible. At native 5000 ISO, there was a little more mid-tone noise, though it was still acceptable. For daytime exteriors, we usually shot at ISO 800, dialing down to 500 or below.”
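The ISO and aperture choices Stewart describes are easy to reason about in stops. The snippet below is a back-of-the-envelope calculator for the relationships he mentions (the 5000-to-2500 dial-down and a one-stop aperture change), not a tool from his actual workflow.

import math

def iso_stops(iso_from: float, iso_to: float) -> float:
    """Positive result = that many stops less sensitive (e.g. 5000 -> 2500)."""
    return math.log2(iso_from / iso_to)

def aperture_stops(f_from: float, f_to: float) -> float:
    """Stops of light lost when stopping down from f_from to f_to."""
    return 2 * math.log2(f_to / f_from)

print(iso_stops(5000, 2500))      # 1.0: the dial-down described above
print(aperture_stops(2.8, 4.0))   # ~1.0: f/2.8 to f/4 is one stop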

Stewart and Arrested Development director Troy Miller have known each other for many years, since working together on HBO's Flight of the Conchords. "There was a shorthand between director and DP that really came in handy," says Stewart. "Troy knows that I know what I'm doing, and I know on his end that he's trying to figure out this really complicated script and have us shoot it. Hand in hand, we were really able to support Mitch."

Zoe Iltsopoulos Borys joins Panavision Atlanta as VP/GM

Panavision has hired Zoe Iltsopoulos Borys to lead the company’s Atlanta office as vice president and general manager. Borys will oversee day-to-day operations in the region.

Borys' 25 years of experience in the motion picture industry includes business development for Production Resource Group (PRG) and serving as GM for Fletcher Camera and Lenses (now VER). This is her second turn at Panavision, having served in a marketing role at the company from 1998-2006. She is also an associate member of the American Society of Cinematographers.

Panavision's Atlanta facilities, located in West Midtown and at Pinewood Studios, supply camera rental equipment in the southern US, with a full staff of prep technicians and camera service experts. The Atlanta team has provided equipment and services to productions including Avengers: Infinity War, Black Panther, Guardians of the Galaxy Vol. 2, The Immortal Life of Henrietta Lacks, Baby Driver and Pitch Perfect 3.

Kees van Oostrum weighs in on return as ASC president

The American Society of Cinematographers (ASC) has re-elected Kees van Oostrum as president. He will serve his third consecutive term at the organization.

The ASC board also re-upped its roster of officers for 2018-2019, including Bill Bennett, John Simmons and Cynthia Pusheck as vice presidents; Levie Isaacks as treasurer; David Darby as secretary; and Isidore Mankofsky as sergeant-at-arms.

Van Oostrum initiated and chairs the ASC Master Class program, which has expanded to locations worldwide under his presidency. The Master Classes take place several times a year and are taught by ASC members. The classes are designed for cinematographers with an intermediate-to-advanced skill set and incorporate practical, hands-on demonstrations of lighting and camera techniques with essential instruction in current workflow practices.

The ASC Vision Committee, founded during van Oostrum’s first term, continues to organize successful symposiums that encourage diversity and inclusion on camera crews, and also offers networking opportunities. The most recent was a standing-room-only event that explored practical and progressive ideas for changing the face of the industry. The ASC will continue to host more of these activities during the coming years.

Van Oostrum has earned two Primetime Emmy nominations for his work on the telefilms Miss Rose White and Return to Lonesome Dove. His peers chose the latter for a 1994 ASC Outstanding Achievement Award. Additional ASC Award nominations for his television credits came for The Burden of Proof, Medusa’s Child and Spartacus. He also shot the Emmy-winning documentary The Last Chance.

A native of Amsterdam, van Oostrum studied at the Dutch Film Academy with an emphasis on both cinematography and directing. He went on to earn a scholarship sponsored by the Dutch government, which enabled him to enroll in the American Film Institute (AFI). Van Oostrum broke into the industry shooting television documentaries for several years. He has subsequently compiled a wide range of some 80-plus credits, including movies for television and the cinema, such as Gettysburg, Gods and Generals and occasional documentaries. He recently wrapped the final season of TV series The Fosters.

The 2018-2019 board who voted in this election includes John Bailey, Paul Cameron, Russell Carpenter, Curtis Clark, Dean Cundey, George Spiro Dibie, Stephen Lighthill, Lowell Peterson, Roberto Schaefer, John Toll and Amelia Vincent. Alternate Board members are Karl-Walter Lindenlaub, Stephen Burum, David Darby, Charlie Lieberman and Eric Steelberg.

The ASC has over 20 committees driving the organization’s initiatives, such as the award-winning Motion Imaging Technology Council (MITC), and the Educational and Outreach committee.

We reached out to Van Oostrum to find out more:

How fulfilling has being ASC President been —either personally or professionally (or both)?
My presidency has been a tremendously fulfilling experience. The ASC grew its educational programs. The master class expanded from domestic to international locations, and currently eight to 10 classes a year are being held based on demand (up from four to five in the inaugural year of the master class). Our public outreach activities have brought in over 7,000 students in the last two years, giving them a chance to meet ASC members and ask questions about cinematography and filmmaking.

Our digital presence has also grown, and the ASC and American Cinematographer websites are some of the most visited sites in our industry. Interest from the vendor community has expanded as well, introducing a broader range of companies who are involved in the image pipeline to our members. Then, our efforts to support ASC’s heritage, research and museum acquisitions have taken huge steps forward. I believe the ASC has grown into a relevant organization for people to watch.

What do you hope to accomplish in the coming year?
We will complete our Educational Center, a new building behind the historic ASC clubhouse in Hollywood; produce several online master classes about cinematography; produce two major documentaries about cinematography; and continue to strengthen our role as a technology partner through the efforts of our Motion Imaging Technology Council (formerly the ASC Technology Committee).

What are your proudest achievements from previous years?
I’m most proud of the success of the Master Classes, as well as the support and growth in the number of activities by the Vision Committee. I’m also pleased with the Chinese language edition of our magazine, and having cinematography stories shared in a global way. We’ve also beefed up our overall internal communications so members feel more connected.

Testing large format camera workflows

By Mike McCarthy

In the last few months, we have seen the release of the Red Monstro, Sony Venice, Arri Alexa LF and Canon C700 FF, all of which have larger or full-frame sensors. Full frame refers to the DSLR terminology, with full frame being equivalent to the entire 35mm film area — the way it was used horizontally in still cameras. All SLRs used to be full frame with 35mm film, so there was no need for the term until manufacturers started saving money on digital image sensors by making them smaller than 35mm film exposures. Super35mm motion picture cameras, on the other hand, ran the film vertically, resulting in a smaller exposure area per frame, but this was still much larger than most video imagers until the last decade, with 2/3-inch chips being considered premium imagers. The options have grown a lot since then.

L-R: 1st AC Ben Brady, DP Michael Svitak and Mike McCarthy on the monitor.

Most of the top-end cinema cameras released over the last few years have advertised their Super35mm sensors as a huge selling point, as that allows use of any existing S35 lens on the camera. These S35 cameras include the Epic, Helium and Gemini from Red, Sony's F5 and F55, Panasonic's Varicam LT, Arri's Alexa and Canon's C100-500. On the top end, 65mm cameras like the Alexa65 have sensors twice as wide as Super35 cameras, but very limited lens options to cover a sensor that large. Full frame falls somewhere in between and allows, among other things, use of any 35mm still film lenses. In the world of film, this was referred to as VistaVision, but the first widely used full-frame digital video camera was Canon's 5D MkII, the first serious HDSLR. That format has suddenly surged in popularity, and thanks to this I recently had the opportunity to be involved in a test shoot with a number of these new cameras.

Keslow Camera was generous enough to give DP Michael Svitak and myself access to pretty much all their full-frame cameras and lenses for the day in order to test the cameras, workflows and lens options for this new format. We also had the assistance of first AC Ben Brady to help us put all that gear to use, and Mike’s daughter Florendia as our model.

First off was the Red Monstro, which, while technically not the full 24mm height of true full frame, uses the same size lenses due to the width of its 17×9 sensor. It offers the highest resolution of the group at 8K. It records compressed RAW to R3D files, as well as options for ProRes and DNxHR up to 4K, all saved to Red mags. As with the rest of the group, smaller portions of the sensor can be used at lower resolutions to pair with smaller lenses. The Red Helium sensor has the same resolution but in a much smaller Super35 size, allowing a wider selection of lenses to be used. But larger pixels allow more light sensitivity, with individual pixels up to 5 microns wide on the Monstro and Dragon, compared to Helium's 3.65-micron pixels.

Next up was Sony's new Venice camera with a 6K full-frame sensor, allowing 4K S35 recording as well. It records XAVC to SxS cards or compressed RAW in the X-OCN format with the optional AXS-R7 external recorder, which we used. It is worth noting that both full-frame recording and integrated anamorphic support require additional special licenses from Sony, but Keslow provided us with a camera that had all of that functionality enabled. With a 36x24mm 6K sensor, the pixels are 5.9 microns, and footage shot at 4K in the S35 mode should be similar to shooting with the F55.

We unexpectedly had the opportunity to shoot on Arri's new Alexa LF (Large Format) camera. At 4.5K, this had the lowest resolution, but that also means the largest sensor pixels at 8.25 microns, which can increase sensitivity. It records ArriRaw or ProRes to Codex XR capture drives with its integrated recorder.

Another new option is the Canon C700 FF with a 5.9K full-frame sensor recording RAW, ProRes or XAVC to CFast cards or Codex drives. That gives it 6-micron pixels, similar to the Sony Venice. But we did not have the opportunity to test that camera this time around; maybe in the future.
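The pixel sizes quoted above follow directly from sensor width divided by horizontal photosite count. Here is a quick sanity check of that arithmetic; the sensor widths are the manufacturers' published figures as best I know them, so treat them as approximate.

SENSORS = {
    # name: (active width in mm, horizontal photosites), approximate published specs
    "Red Monstro 8K VV":  (40.96, 8192),
    "Red Helium 8K S35":  (29.90, 8192),
    "Sony Venice 6K FF":  (36.00, 6048),
    "Arri Alexa LF 4.5K": (36.70, 4448),
}

for name, (width_mm, pixels) in SENSORS.items():
    pitch_um = width_mm * 1000 / pixels
    print(f"{name}: {pitch_um:.2f} micron pixels")
# Roughly 5.0, 3.65, 5.95 and 8.25 microns, matching the figures in the text.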

One more factor in all of this is the rising popularity of anamorphic lenses. All of these cameras support modes that use the part of the sensor covered by anamorphic lenses and can desqueeze the image for live monitoring and preview. In the digital world, anamorphic essentially cuts your overall resolution in half, until the unlikely event that we start seeing anamorphic projectors or cameras with rectangular sensor pixels. But the prevailing attitude appears to be, “We have lots of extra resolution available so it doesn’t really matter if we lose some to anamorphic conversion.”
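To make that trade-off concrete, here is a tiny sketch of the desqueeze arithmetic: a sensor crop of a given photosite count, shot through a lens with a given squeeze factor, desqueezes to a wider frame whose horizontal detail is still limited by the original photosite count. The crop sizes are illustrative numbers, not settings from this shoot.

def desqueezed(width_px: int, height_px: int, squeeze: float):
    """Aspect ratio and nominal desqueezed width for an anamorphic capture."""
    aspect = (width_px * squeeze) / height_px
    return round(aspect, 2), int(width_px * squeeze)

# Classic 2x anamorphic on a 6:5 crop: 4800x4000 photosites -> 2.4:1,
# displayed 9600 wide but carrying only 4800 photosites of real horizontal detail.
print(desqueezed(4800, 4000, 2.0))

# A milder 1.6x squeeze over a 16:9 crop lands around 2.84:1.
print(desqueezed(7680, 4320, 1.6))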

Post Production
So what does this mean for post? In theory, sensor size has no direct effect on the recorded files (besides their content), but resolution does. We also have a number of new formats to deal with, and then there are anamorphic images to handle during finishing.

Ever since I got my hands on one of Dell’s new UP3218K monitors with an 8K screen, I have been collecting 8K assets to display on there. When I first started discussing this shoot with DP Michael Svitak, I was primarily interested in getting some more 8K footage to use to test out new 8K monitors, editing systems and software as it got released. I was anticipating getting Red footage, which I knew I could playback and process using my existing software and hardware.

The other cameras and lens options were added as the plan expanded, and by the time we got to Keslow Camera, they had filled a room with lenses and gear for us to test with. I also had a Dell 8K display connected to my ingest system, and the new 4K DreamColor monitor as well. This allowed me to view the recorded footage in the highest resolution possible.

Most editing programs, including Premiere Pro and Resolve, can handle anamorphic footage without issue, but new camera formats can be a bigger challenge. Any RAW file requires info about the sensor pattern in order to debayer it properly, and new compression formats are even more work. Sony’s new compressed RAW format for Venice, called X-OCN, is supported in the newest 12.1 release of Premiere Pro, so I didn’t expect that to be a problem. Its other recording option is XAVC, which should work as well. The Alexa on the other hand uses ArriRaw files, which have been supported in Premiere for years, but each new camera shoots a slightly different “flavor” of the file based on the unique properties of that sensor. Shooting ProRes instead would virtually guarantee compatibility but at the expense of the RAW properties. (Maybe someday ProResRAW will offer the best of both worlds.) The Alexa also has the challenge of recording to Codex drives that can only be offloaded in OS X or Linux.

Once I had all of the files on my system, after using a MacBook Pro to offload the media cards, I tried to bring them into Premiere. The Red files came in just fine but didn't play back smoothly over 1/4 resolution. They played smoothly in RedCine-X with my Red Rocket-X enabled, and they export respectably fast in AME (a five-minute 8K anamorphic sequence to UHD H.265 in 10 minutes), but for some reason Premiere Pro isn't able to get smooth playback when using the Red Rocket-X. Next I tried the X-OCN files from the Venice camera, which imported without issue. They played smoothly on my machine but looked like they were locked to half or quarter res, regardless of what settings I used, even in the exports. I am currently working with Adobe to get to the bottom of that because they are able to play back my files at full quality, while all my systems have the same issue. Lastly, I tried to import the Arri files from the Alexa LF, but Adobe doesn't support that new variation of ArriRaw yet. I would anticipate that will happen soon, since it shouldn't be too difficult to add that new version to the existing support.

I ended up converting the files I needed to DNxHR in DaVinci Resolve so I could edit them in Premiere, and I put together a short video showing off the various lenses we tested with. Eventually, I need to learn how to use Resolve more efficiently, but the type of work I usually do lends itself to the way Premiere is designed — inter-cutting and nesting sequences with many different resolutions and aspect ratios. Here is a short clip demonstrating some of the lenses we tested with:

This is a web video, so even at UHD it is not meant to be an analysis of the RAW image quality, but instead a demonstration of the field of view and overall feel with various lenses and camera settings. The combination of the larger sensors and the anamorphic lenses leads to an extremely wide field of view. The table was only about 10 feet from the camera, and we could usually see all the way around it. We also discovered that when recording anamorphic on the Alexa LF, we were recording a wider image than was displaying on the monitor output. You can see in the frame grab below that the live display visible on the right side of the image isn't showing the full content that got recorded, which is why we didn't notice that we were recording with the wrong settings, with so much vignetting from the lens.

We only discovered this after the fact, from this shot, so we didn’t get the opportunity to track down the issue to see if it was the result of a setting in the camera or in the monitor. This is why we test things before a shoot, but we didn’t “test” before our camera test, so these things happen.

We learned a lot from the process, and hopefully some of those lessons are conveyed here. A big thanks to Brad Wilson and the rest of the guys at Keslow Camera for their gear and support of this adventure. Hopefully, it will help people better prepare to shoot and post with this new generation of cameras.

Main Image: DP Michael Svitak


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Panavision Millennium DXL2’s ecosystem grows with color science, lenses, more

Panavision’s Millennium DXL2 8K camera was on display at Cine Gear last week featuring  a new post-centric firmware upgrade, along with four new large-format lens sets, a DXL-inspired accessories kit for Red DSMC2 cameras and a preview of custom advancements in filter technology.

DXL2 incorporates technology advancements based on input from cinematographers, camera assistants and post production groups. The camera offers 16 stops of dynamic range with improved shadow detail, a native ISO setting of 1600 and 12-bit ProRes XQ up to 120fps. New to the DXL2 is version 1.0 of a directly editable (D2E) workflow. D2E gives DITs wireless LUT and CDL look control and records all color metadata into camera-generated proxy files for instant and render-free dailies.
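For context, the CDL looks that D2E passes along are standard ASC CDL values (slope, offset and power per channel, plus saturation), and any dailies tool applies them with the same published math. Here is a minimal sketch of that generic transform in Python; it illustrates the ASC CDL formula itself, not Panavision's implementation.

```python
# Sketch of the standard ASC CDL math carried as look metadata: per-channel
# slope/offset/power, followed by a saturation adjustment. Generic published
# transform shown for illustration only.
import numpy as np

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """Apply ASC CDL slope/offset/power per channel, then saturation. rgb is (..., 3) in 0-1."""
    slope, offset, power = (np.asarray(v, dtype=float) for v in (slope, offset, power))
    out = np.asarray(rgb, dtype=float) * slope + offset
    out = np.clip(out, 0.0, None) ** power              # clamp negatives before the power step
    luma = out @ np.array([0.2126, 0.7152, 0.0722])     # Rec. 709 luma weights per the CDL spec
    return luma[..., None] + saturation * (out - luma[..., None])

# Example: a mildly warm look on an 18% gray pixel
pixel = np.array([[0.18, 0.18, 0.18]])
print(apply_cdl(pixel, slope=[1.1, 1.0, 0.95], offset=[0.01, 0.0, -0.01], power=[0.95, 1.0, 1.0]))
```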

DXL2, which is available to rent worldwide, also incorporates an updated color profile: Light Iron Color 2 (LiColor2). This latest color science provides cinematographers and DITs with a film-inspired tonal look that makes the DXL2 feel more cinematic and less digital.

Panavision also showcased their large-format spherical and anamorphic lenses. Four new large-format lens sets were on display:
• Primo X is a cinema lens designed for use on drones and gimbals. It’s fully sealed, weatherproof and counterbalanced to be aerodynamic while easily maintaining a proper center of gravity. Primo X lenses come in two primes – 14mm (T3.1) and 24mm (T1.6) – and one 24-70mm zoom (T2.8), and will be available in 2019.
• H Series is a traditionally designed spherical lens set with a rounded, soft roll-off, giving what the company calls a “pleasing tonal quality to the skin.” Created with vintage glass and coating, these lenses offer slightly elevated blacks for softer contrast. High speeds separate subject and background with a smooth edge transition, allowing the subject to appear naturally placed within the depth of the image. These lenses are available now.
• Ultra Vista is a series of large-format anamorphic optics. Using a custom 1.6x squeeze, Ultra Vista covers the full height of the 8K sensor in the DXL and presents an ultra-widescreen 2.76:1 aspect ratio along with a classic elliptical bokeh and Panavision horizontal flare. Ultra Vista lenses will be available in 2019.
• PanaSpeed is a large-format update of the classic Primo look. At T1.4, PanaSpeed is a fast large-format lens. It will be available in Q3 of 2018.

Panavision also showed an adjustable liquid crystal neutral density (LCND) filter. LCND adjusts up to six individual stops with a single click or ramp — a departure from traditional approaches to front-of-lens filters, which require carrying a set and manually swapping individual NDs based on changing light. LCND starts at 0.3 and steps through 0.6, 0.9, 1.2 and 1.5 up to 1.8. It will be available in 2019.
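Those density values map onto stops of light loss in the usual way: every 0.3 of neutral density is roughly one stop, so the 0.3-to-1.8 range lines up with the six stops quoted above. A quick sketch of the conversion:

```python
# Quick sketch of how ND optical density relates to stops of light loss:
# transmission = 10**(-density), and stops = log2(1 / transmission).
# Each 0.3 of density is roughly one stop, so 0.3-1.8 spans about 1-6 stops.
import math

for density in [0.3, 0.6, 0.9, 1.2, 1.5, 1.8]:
    transmission = 10 ** (-density)
    stops = math.log2(1 / transmission)
    print(f"ND {density:.1f}: {transmission * 100:5.2f}% transmission ≈ {stops:.1f} stops")
```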

Following up on the DXL1 and DXL2, Panavision launched the latest in its cinema line-up with the newly created DXL-M accessory kit. Designed to work with Red DSMC2 cameras, DXL-M marries the quality and performance of DXL with the smaller size and weight of the DSMC2. DXL-M brings popular features of DXL to Red Monstro, Gemini and Helium sensors, such as the DXL menu system (via an app for the iPhone), LiColor2, motorized lenses, wireless timecode (ACN) and the Primo HDR viewfinder. It will be available in Q4 of 2018.

Sony updates Venice to V2 firmware, will add HFR support

At CineGear, Sony introduced new updates and developments for its Venice CineAlta camera system including Version 2 firmware, which will now be available in early July.

Sony also showed the new Venice Extension System, which features expanded flexibility and enhanced ergonomics. Also announced was Sony’s plan for high frame rate support for the Venice system.

Version 2 adds new features and capabilities specifically requested by production pros to deliver more recording capability, customizable looks, exposure tools and greater lens freedom. Highlights include:

• With 15+ stops of exposure latitude, Venice will support a high base ISO of 2500 in addition to the existing ISO of 500, taking full advantage of Sony’s sensor for superb low-light performance with dynamic range from +6 stops to -9 stops as measured at 18% middle gray. This increases exposure indexes at higher ISOs for night exteriors, dark interiors, working with slower lenses or where content needs to be graded in high dynamic range while maintaining maximum shadow detail.
• Select FPS (off-speed) in individual frame increments, from 1 to 60.
• Several new imager modes, including 25p in 6K full-frame, 25p in 4K 4:3 anamorphic, 6K 17:9, 1.85:1 and 4K 6:5 anamorphic.
• User-uploadable 3D LUTs allow users to customize their own looks and save them directly into the camera.
• Wired LAN remote control allows users to remotely control and change key functions, including camera settings, fps, shutter, EI, iris (Sony E-mount lens), record start/stop and built-in optical ND filters.
• An E-mount option allows users to remove the PL mount and use a wide assortment of native E-mount lenses.

The Venice Extension System is a full-frame tethered extension system that allows the camera body to detach from the image sensor block, with no degradation in image quality, at distances of up to 20 feet. It is the result of Sony’s long-standing collaboration with James Cameron’s Lightstorm Entertainment.

“This new tethering system is a perfect example of listening to our customers, gathering strong and consistent feedback, and then building that input into our product development,” said Peter Crithary, marketing manager for motion picture cameras, Sony. “The Avatar sequels will be among the first feature films to use the new Venice Extension System, but it also has tremendous potential for wider use with handheld stabilizers, drones, gimbals and remote mounting in confined places.”

Also at CineGear, Sony shared the details of a planned optional upgrade to support high frame rate — targeting speeds up to 60fps in 6K, up to 90fps in 4K and up to 120fps in 2K. It will be released in North America in the spring of 2019.

Red simplifies camera lineup with one DSMC2 brain

Red Digital Cinema modified its camera lineup to include one DSMC2 camera Brain with three sensor options — Monstro 8K VV, Helium 8K S35 and Gemini 5K S35. The single DSMC2 camera Brain includes high-end frame rates and data rates regardless of the sensor chosen. In addition, this streamlined approach will result in a price reduction compared to Red’s previous camera line-up.

“We have been working to become more efficient, as well as align with strategic manufacturing partners to optimize our supply chain,” says Jarred Land, president of Red Digital Cinema. “As a result, I am happy to announce a simplification of our lineup with a single DSMC2 brain with multiple sensor options, as well as an overall reduction on our pricing.”

Red’s DSMC2 camera Brain is a modular system that allows users to configure a fully operational camera setup to meet their individual needs. Red offers a range of accessories, including display and control functionality, input/output modules, mounting equipment, and methods of powering the camera. The camera Brain is capable of up to 60fps at 8K, offers 300MB/s data transfer speeds and simultaneous recording of RedCode RAW and Apple ProRes or Avid DNxHD/HR.
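As a rough, back-of-the-envelope illustration of what that 300MB/s ceiling means for media planning, here is a small sketch; actual RedCode data rates depend on resolution, frame rate and compression setting, and the card sizes are just examples.

```python
# Back-of-the-envelope sketch: how long a media card lasts at a given record data rate.
# The 300MB/s figure above is the Brain's maximum transfer speed; real RedCode rates
# vary with resolution, frame rate and compression, so treat these numbers as rough.
def minutes_on_card(card_gb: float, rate_mb_per_s: float) -> float:
    return (card_gb * 1000) / rate_mb_per_s / 60  # using 1 GB = 1000 MB

for card in (480, 960):  # hypothetical card sizes in GB
    print(f"{card} GB card at 300 MB/s ≈ {minutes_on_card(card, 300):.0f} minutes")
```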

The Red DSMC2 camera Brain and sensor options:
– DSMC2 with Monstro 8K VV offers cinematic full-frame lens coverage, produces ultra-detailed 35.4-megapixel stills and delivers 17+ stops of dynamic range. It is priced at $54,500.
– DSMC2 with Helium 8K S35 offers 16.5+ stops of dynamic range in a Super 35 frame, and is available now for $24,500.
– DSMC2 with Gemini 5K S35 uses dual sensitivity modes to provide creators with greater flexibility, using standard mode for well-lit conditions or low-light mode for darker environments. It is priced at $19,500.

Red will begin to phase out new sales of its Epic-W and Weapon camera Brains starting immediately. In addition to the changes to the camera line-up, Red will also begin offering new upgrade paths for customers looking to move from older Red camera systems or from one sensor to another. The full range of upgrade options can be found here.


The Duffer Brothers: Showrunners on Netflix’s Stranger Things

By Iain Blair

Kids in jeopardy! The Demogorgon! The Hawkins Lab! The Upside Down! Thrills and chills! Since they first pitched their idea for Stranger Things, a love letter to 1980s genre films set in 1983 Indiana, twin brothers Matt and Ross Duffer have quickly established themselves as masters of suspense in the science-fiction and horror genres.

The series was picked up by Netflix, premiered in the summer of 2016, and went on to become a global phenomenon, with the brothers at the helm as writers, directors and executive producers.

The Duffer Brothers

The atmospheric drama, about a group of nerdy misfits and strange events in an outwardly average small town, nailed its early ’80s vibe and overt homages to that decade’s master pop storytellers: Steven Spielberg and Stephen King. It quickly made stars out of its young ensemble cast — Millie Bobby Brown, Natalia Dyer, Charlie Heaton, Joe Keery, Gaten Matarazzo, Caleb McLaughlin, Noah Schnapp, Sadie Sink and Finn Wolfhard.

It also quickly attracted a huge, dedicated fan base and critical plaudits, and it has won a ton of awards, including Emmys, a SAG Award for Best Ensemble in a Drama Series and two Critics Choice Awards for Best Drama Series and Best Supporting Actor in a Drama Series. The show has also been nominated for a number of Golden Globes.

I recently talked with the Duffers, who are already hard at work on the highly anticipated third season (which will premiere on Netflix in 2019), about making the ambitious hit series, their love of post and editing, and VFX.

How’s the new season going?
Matt Duffer: We’re two weeks into shooting, and it’s going great. We’re very excited about it as there are some new tones and it’s good to be back on the ground with everyone. We know all the actors better and better, the kids are getting older and are becoming these amazing performers — and they were great before. So we’re having a lot of fun.

Are you shooting in Atlanta again?
Ross Duffer: We are, and we love it there. It’s really our home base now, and we love all these pockets of neighborhoods that have not changed at all since the ‘80s, and there is an incredible variety of locations. We’re also spreading out a lot more this season and not spending so much time on stages. We have more locations to play with.

Will all the episodes be released together next year, like last time? That would make binge-watchers very happy.
Matt: Yes, but we like to think of it more as like a big movie release. To release one episode per week feels so antiquated now.

The show has a very cinematic look and feel, so how do you balance that with the demands of TV?
Ross: It’s interesting, because we started out wanting to make movies and we love genre, but with a horror film they want big scares every few minutes. That leaves less room for character development. But with TV, it’s always more about character, as you just can’t sustain hours and hours of a show if you don’t care about the people. So ‘Stranger Things’ was a world where we could tell a genre story, complete with the monster, but also explore character in far more depth than we could in a movie.

Matt: Movies and TV are almost opposites in that way. In movies, it’s all plot and no character, and in TV it’s about character and you have to fight for plot. We wanted this to have pace and feel more like a movie, but still have all the character arcs. So it’s a constant balancing act, and we always try and favor character.

Where do you post the show?
Matt: All in Hollywood, and the editors start working while we’re shooting. After we shoot in Atlanta, we come back to our offices and do all the post and VFX work right there. We do all the sound mix and all the color timing at Technicolor down the road. We love post. You never have enough time on the set, and there’s all this pressure if you want to redo a shot or scene, but in post if a scene isn’t working we can take time to figure it out.

Tell us about the editing. I assume you’re very involved?
Ross: Very. We have two editors this season. We brought back one of our original editors, Dean Zimmerman, from season one. We are also using Nat Fuller, who was on season two. He was Dean’s assistant originally and then moved up, so they’ve been with us since the start. Editing’s our favorite part of the whole process, and we’re right there with them because we love editing. We’re very hands on and don’t just give notes and walk away. We’re there the whole time.

Aren’t you self-taught in terms of editing?
Matt: (Laughs) I suppose. We were taught the fundamentals of Avid at film school, but you’re right. We basically taught ourselves to edit as kids, and we started off just editing in-camera, stopping and starting, and playing the music from a tape recorder. They weren’t very good, but we got better.

When iMovie came out we learned how to put scenes together, so in college the transition to Avid wasn’t that hard. We fell in love with editing and just how much you can elevate your material in post. It’s magical what you can do with the pace, performances, music and sound design, and then you add all the visual effects and see it all come together in post. We love seeing the power of post as you work to make your story better and better.

How early on do you integrate post and VFX with the production?
Ross: On day one now. The biggest change from season one to two was that we integrated post far earlier in the second season — even in the writing stage. We had concept artists and the VFX guys with us the whole time on set, and they were all super-involved. So now it all kind of happens together.

All the VFX are a much bigger deal. For last season we had a lot more VFX than the first year — about 1,400 shots, which is a huge amount, like a big movie. The first season it wasn’t a big deal. It was a very old-school approach, with mainly practical effects, and then in the middle we realized we were being a bit naïve, so we brought in Paul Graff as our VFX supervisor on season two, and he’s very experienced. He’s worked on big movies like The Wolf of Wall Street as well as Game of Thrones and Boardwalk Empire, and he’s doing this season too. He’s in Atlanta with us on the shoot.

We have two main VFX houses on the show — Atomic Fiction and Rodeo — they’re both incredible, and I think all the VFX are really cinematic now.

But isn’t it a big challenge in terms of a TV show’s schedule?
Ross: You’re right, and it’s always a big time crunch. Last year we had to meet that Halloween worldwide release date and we were cutting it so close trying to finish all the shots in time.

Matt: Everyone expects movie-quality VFX — just in a quarter of the time, or less. So it’s all accelerated.

The show has a very distinct, eerie, synth-heavy score by Kyle Dixon and Michael Stein, the Grammy nominated duo. How important is the music and sound, which won several Emmys last year?
Ross: It’s huge. We use it so much for transitions, and we have great sound designers — including Brad North and Craig Henighan — and great mixers, and we pay a lot of attention to all of it. I think TV has always put less emphasis on great sound compared to film, and again, you’re always up against the scheduling, so it’s always this balancing act.

You can’t mix it for a movie theater as very few people have that set up at home, so you have to design it for most people who’re watching on iPhones, iPads and so on, and optimize it for that, so we mostly mix in stereo. We want the big movie sound, but it’s a compromise.

The DI must be vital?
Matt: Yes, and we work very closely with colorist Skip Kimball (who recently joined Efilm), who’s been with us since the start. He was very influential in terms of how the show ended up looking. We’d discussed the kind of aesthetic we wanted, and things we wanted to reference and then he played around with the look and palette. We’ve developed a look we’re all really happy with. We have three different LUTs on set designed by Skip and the DP Tim Ives will choose the best one for each location.

Everyone’s calling this the golden age of TV. Do you like being showrunners?
Ross: We do, and I feel we’re very lucky to have the chance to do this show — it feels like a big family. Yes, we originally wanted to be movie directors, but we didn’t come into this industry at the right time, and Netflix has been so great and given us so much creative freedom. I think we’ll do a few more seasons of this, and then maybe wrap it up. We don’t want to repeat ourselves.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

postPerspective names NAB Impact Award MVPs and winners

NAB is a bear. Anyone who has attended this show can attest to that. But through all the clutter, postPerspective sought out the best of the best for our Impact Awards. So we turned to a panel of esteemed industry pros (to whom we are very grateful!) to cast their votes on what they thought would be most impactful to their day-to-day workflows, and those of their colleagues.

In addition to our Impact Award winners, this year we are also celebrating two pieces of technology that not only caused a big buzz around the show, but are also bringing things a step further in terms of technology and workflow: Blackmagic’s DaVinci Resolve 15 and Apple’s ProRes RAW.

With ProRes RAW, Apple has introduced a new, high-quality video recording codec that has already been adopted by three competing camera vendors — Sony, Canon and Panasonic. According to Mike McCarthy, one of our NAB bloggers and regular contributors, “ProRes RAW has the potential to dramatically change future workflows if it becomes even more widely supported. The applications of RAW imaging in producing HDR content make the timing of this release optimal to encourage vendors to support it, as they know their customers are struggling to figure out simpler solutions to HDR production issues.”

Fairlight’s audio tools are now embedded in the new Resolve 15.

With Resolve 15, Blackmagic has launched the product further into a wide range of post workflows, and they haven’t raised the price. This standalone app — which comes in a free version — provides color grading, editing, compositing and even audio post, thanks to the DAW Fairlight, which is now built into the product.

These two technologies are Impact Award winners, but our judges felt they stood out enough to be called postPerspective Impact Award MVPs.

Our other Impact Award winners are:

• Adobe for Creative Cloud

• Arri for the Alexa LF

• Codex for Codex One Workflow and ColorSynth

• FilmLight for Baselight 5

• Flanders Scientific for the XM650U monitor

• Frame.io for the All New Frame.io

• Shift for their new Shift Platform

• Sony for their 8K CLED display

In a sea of awards surrounding NAB, the postPerspective Impact Awards stand out, and are worth waiting for, because they are voted on by working post professionals.

Flanders Scientific’s XM650U monitor.

“All of these technologies from NAB are very worthy recipients of our postPerspective Impact Awards,” says Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that push the boundaries of technology to produce tools that actually have an impact on workflows as well as the ability to make users’ working lives easier and their projects better. This year we have honored 10 different products that span the production and post pipeline.

“We’re very proud of the fact that companies don’t ‘submit’ for our awards,” continues Altman. “We’ve tapped real-world users to vote for the Impact Awards, and they have determined what could be most impactful to their day-to-day work. We feel it makes our awards quite special.”

With our Impact Awards, postPerspective is also hoping to help those who weren’t at the show, or who were unable to see it all, with a starting point for their research into new gear that might be right for their workflows.

postPerspective Impact Awards are next scheduled to celebrate innovative product and technology launches at SIGGRAPH 2018.

Atomos at NAB offering ProRes RAW recorders

Atomos is at this year’s NAB showing support for ProRes RAW, a new format from Apple that combines the performance of ProRes with the flexibility of RAW video. The ProRes RAW update will be available free for the Atomos Shogun Inferno and Sumo 19 devices.

Atomos devices are currently the only monitor recorders to offer ProRes RAW, with realtime recording from the sensor output of Panasonic, Sony and Canon cameras.

The new upgrade brings ProRes RAW and ProRes RAW HQ recording, monitoring, playback and tag editing to all owners of an Atomos Shogun Inferno or Sumo19 device. Once installed, it will allow the capture of RAW images in up to 12-bit RGB — direct from many of our industry’s most advanced cameras onto affordable SSD media. ProRes RAW files can be imported directly into Final Cut Pro 10.4.1 for high-performance editing, color grading, and finishing on Mac laptop and desktop systems.

Eight popular cine cameras with a RAW output — including the Panasonic AU-EVA1, Varicam LT, Sony FS5/FS7 and Canon C300mkII/C500 — will be supported, with more to follow.

With this ProRes RAW support, filmmakers can work easily with RAW – whether they are shooting episodic TV, commercials, documentaries, indie films or social events.

Shooting ProRes RAW preserves maximum dynamic range, with a 12-bit depth and wide color gamut — essential for HDR finishing. The new format, which is available in two compression levels — ProRes RAW and ProRes RAW HQ — preserves image quality with low data rates and file sizes much smaller than uncompressed RAW.

Atomos recorders through ProRes RAW allow for increased flexibility in captured frame rates and resolutions. Atomos can record ProRes RAW up to 2K at 240 frames a second, or 4K at up to 120 frames per second. Higher resolutions such as 5.7K from the Panasonic AU-EVA1 are also supported.

Atomos’ OS, AtomOS 9, gives users filming tools to allow them to work efficiently and creatively with ProRes RAW in portable devices. Fast connections in and out and advanced HDR screen processing mean every pixel is accurately and instantly available for on-set creative playback and review. Pull the SSD out and dock it to your Mac over Thunderbolt 3 or USB-C 3.1 for immediate, super-fast post production.

Download the AtomOS 9 update for Shogun Inferno and Sumo 19 at www.atomos.com/firmware.

B&H expands its NAB footprint to target multiple workflows

By Randi Altman

In a short time, many in our industry will be making the pilgrimage to Las Vegas for NAB. They will come (if they are smart) with their comfy shoes, Chapstick and the NAB Show app and plot a course for the most efficient way to see all they need to see.

NAB is a big show that spans a large footprint, and typically companies showing their wares need to pick a hall — Central, South Lower, South Upper or North. This year, however, The Studio-B&H made some pros’ lives a bit easier by adding a booth in South Lower in addition to their usual presence in Central Hall.

B&H’s business and services have grown, so it made perfect sense to Michel Suissa, managing director at The Studio-B&H, to grow their NAB presence to include many of the digital workflows the company has been servicing.

We reached out to Suissa to find out more.

This year B&H and its Studio division are in the South Lower. Why was it important for you guys to have a presence in both the Central and South Halls this year?
The Central Hall has been our home for a long time and it remains our home with our largest footprint, but we felt we needed to have a presence in South Hall as well.

Production and post workflows merge and converge constantly, and we need to be knowledgeable in both. The simple fact is that we serve all segments of our industry, not just image acquisition and camera equipment. Our presence in image- and data-centric workflows has grown by leaps and bounds.

This world is a familiar one for you personally.
That’s true. The post and VFX worlds are very dear to me. I was an editor, Flame artist and colorist for 25 years. This background certainly plays a role in expanding our reach and services to these communities. The Studio-B&H team is part of a company-wide effort to grow our presence in these markets. From a business standpoint, the South Hall attendees are also our customers, and we needed to show we are here to assist and support them.

What kind of workflows should people expect to see at both your NAB locations?
At the South Hall, we will present a whole range of solutions to show the breadth and diversity of what we have to offer. That includes VR post workflow, color grading, animation and VFX, editing and high-performance Flash storage.

In addition to the new booth in South Hall, we have two in Central. One is for B&H’s main product offerings, including our camera shootout, which is a pillar of our NAB presence.

This Studio-B&H booth features a digital cinema and broadcast acquisition technology showcase, including hybrid SDI/IP switching, 4K studio cameras, a gyro-stabilized camera car, the most recent full-frame cinema cameras, and our lightweight cable cam, the DynamiCam.

Our other Central Hall location is where our corporate team can discuss all business opportunities with new and existing B2B customers.

How has The Studio-B&H changed along with the industry over the past year or two?
We have changed quite a bit. With our services and tools, we have re-invented our image from equipment providers to solution providers.

Our services now range from system design to installation and deployment. One of the more notable recent examples is our recent collaboration with HBO Sports on World Championship Boxing. The Studio-B&H team was instrumental in deploying our DynamiCam system to cover several live fights in different venues and integrating with NEP’s mobile production team. This is part of an entirely new type of service —  something the company had never offered its customers before. It is a true game-changer for our presence in the media and entertainment industry.

What do you expect the “big thing” to be at NAB this year?
That’s hard to say. Markets are in transition with a number of new technology advancements: machine learning and AI, cloud-based environments, momentum for the IP transition, AR/VR, etc.

On the acquisition side, full frame/large sensor cameras have captured a lot of attention. And, of course, HDR will be everywhere. It’s almost not a novelty anymore. If you’re not taking advantage of HDR, you are living in the past.

Red’s new Gemini 5K S35 sensor offers low-light and standard mode

Red Digital Cinema’s new Gemini 5K S35 sensor for its Red Epic-W camera leverages dual-sensitivity modes, allowing shooters to use standard mode for well-lit conditions or low-light mode for darker environments.

In low-light conditions, the Gemini 5K S35 sensor allows for cleaner imagery with less noise and better shadow detail. Camera operators can easily switch between modes through the camera’s on-screen menu with no down time.

The Gemini 5K S35 sensor offers an increased field of view at 2K and 4K resolutions compared to the higher-resolution Red Helium sensor. In addition, the sensor’s 30.72mm x 18mm dimensions allow for greater anamorphic lens coverage than with Helium or Red Dragon sensors.

“While the Gemini sensor was developed for low-light conditions in outer space, we quickly saw there was so much more to this sensor,” explains Jarred Land, president of Red Digital Cinema. “In fact, we loved the potential of this sensor so much, we wanted to evolve it for broader appeal. As a result, the Epic-W Gemini now sports dual-sensitivity modes. It still has the low-light performance mode, but also has a default, standard mode that allows you to shoot in brighter conditions.”

Built on the compact DSMC2 form factor, this new camera and sensor combination captures 5K full-format motion at up to 96fps along with data speeds of up to 275MB per second. Additionally, it supports Red’s IPP2 enhanced image processing pipeline in-camera. Like all of Red’s DSMC2 cameras, the Epic-W is able to shoot simultaneous Redcode RAW and Apple ProRes or Avid DNxHD/HR recording and adheres to Red’s “Obsolescence Obsolete” program, which allows current Red owners to upgrade their technology as innovations are unveiled. It also lets them move between camera systems without having to purchase all new gear.

Starting at $24,500, the new Red Epic-W with Gemini 5K S35 sensor is available for purchase now. Alternatively, Weapon Carbon Fiber and Red Epic-W 8K customers will have the option to upgrade to the Gemini sensor at a later date.

Sony to ship Venice camera this month, adds capabilities

Sony’s next-gen CineAlta motion picture camera Venice, which won a postPerspective Impact Award for IBC2017, will start shipping this month. As previously announced, V.1.0 features support for full-frame 24x36mm recording. In addition, and as a result of customer feedback, Sony has added several new capabilities, including a Dual Base ISO mode. With 15+ stops of exposure latitude, Venice will support an additional High Base ISO of 2500 using the sensor’s physical attributes. This takes advantage of Sony’s sensor for low-light performance with high dynamic range — from 6 stops over to 9 stops under 18% middle gray.

This new capability increases exposure indexes at higher ISOs for night exteriors, dark interiors, working with slower lenses or where content needs to be graded in HDR, while maintaining the maximum shadow details. An added benefit within Venice is its built-in 8-step optical ND filter servo mechanism. This can emulate different ISO operating points when in High Base ISO 2500 while maintaining the extremely low noise characteristics of the Venice sensor.
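The arithmetic behind that emulation is straightforward: with the sensor held at its high base ISO, each stop of behind-the-lens ND halves the effective exposure index. A quick sketch, assuming one stop per ND step (check the camera's actual ND values):

```python
# Rough sketch of how behind-the-lens ND can emulate lower ISO operating points while
# the sensor stays at its High Base ISO of 2500: each stop of ND halves the effective
# exposure index. The one-stop-per-step assumption here is illustrative only.
BASE_ISO = 2500

for nd_stops in range(0, 9):
    effective_ei = BASE_ISO / (2 ** nd_stops)
    print(f"{nd_stops} stop(s) of ND -> effective EI ≈ {effective_ei:.0f}")
```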

Venice also features new color science designed to offer a soft tonal film look, with shadows and mid-tones having a natural response and the highlights preserving the dynamic range.

Sony has also developed the Venice camera menu simulator. This tool is designed to give camera operators an opportunity to familiarize themselves with the camera’s operational workflow before using Venice in production.

Features and capabilities planned to be available later this year as free firmware upgrades in Version 2 include:
• 25p in 6K full-frame mode will be added in Version 2
• False Color (moved from Version 3 to Version 2)

Venice has an established workflow with support from Sony’s RAW Viewer 3 and third-party vendors including FilmLight Baselight 5, DaVinci Resolve 14.3 and Assimilate Scratch 8.6, among others. Sony continues to work closely with all relevant third parties on workflows including editing, grading, color management and dailies.

Another often requested feature is support for high frame rates, which Sony is working to implement and make available at a later date.

Venice features include:
• True 36x24mm full frame imaging based on the photography standard that goes back 100 years
• Built-in 8-step optical ND filter servo mechanism
• Dual Base ISO mode, with High Base ISO 2500
• New color science for appealing skin tones and graceful highlights – out of the box
• Aspect ratio freedom: Full frame 3:2 (1.5:1), 4K 4:3 full height anamorphic, spherical 17:9, 16:9.
• Lens mount with 18mm flange depth opens up tremendous lens options (PL lens mount included)
• 15+ stops of exposure latitude
• User-interchangeable sensor that requires removal of just six screws
• 6K resolution (6048 x 4032) in full frame mode

Seasoned pros and young talent team on short films

By James Hughes

In Los Angeles on a Saturday morning, a crew of 10 students from Hollywood High School — helmed by 17-year-old director Celine Gimpirea — was transforming a corner of the Calvary Cemetery into a movie set. In their film The Box, a boy slips inside a cardboard box and finds himself transported to other realms. On this well-manicured lawn, among rows of flat, black granite grave markers, are rows of flat, black camera cases holding Red cameras, DIT stations, iPads and MacBook Pros.

Gimpirea’s is one of three teams of filmmakers involved in a month-long filmmaking workshop connecting creative pros with emerging talent. The teams worked with tools from Apple, including the MacBook Pro, iMac and Final Cut Pro X, as well as the Red Raven camera for shooting. LA-based independent filmmaking collective We Make Movies provided post supervision. They used a workflow very similar to that of the feature film Whiskey Tango Foxtrot, which was shot on Red and edited in FCP X.

In the documentary La Buena Muerte produced by instructors from the Mobile Film Classroom, a non-profit that provides digital media workshops to youth in under-resourced communities, the filmmakers examine mortality and family bonds surrounding the Day of the Dead, the Mexican holiday honoring lost loved ones. And in The Dancer, director Krista Amigone channels her background in theater to tell a personal story about a dancer confronting the afterlife.

Krista Amigone

During a two-week post period, teams received feedback from a rotating cast of surprise guests and mentors from across the industry, each a professional working in the field of film and television production.

Among the first mentors to view The Dancer was Sean Baker, director of 2017’s critically acclaimed The Florida Project and the 2015 feature Tangerine, shot entirely on iPhone 5S. Baker, who edits his own films, surveyed clips from Amigone’s shoot. Each take had been marked with the Movie Slate app on an iPad, which automatically stores and logs the timecode data. Together, they discussed Amigone’s backstory as well. A stay-at-home mother of a three-year-old daughter, she is no stranger to maximizing time and resources. She not only served as writer and director, but also star and choreographer.

Meanwhile, the La Buena Muerte crew, headed by executive producer Manon Banta, were editing their piece. Reviewing the volume of interviews and B-roll, all captured by cinematographer Elle Schneider on the 4.5K Red Raven camera, initially felt like a daunting task. Fortunately, their metadata was automatically organized after being imported straight into Final Cut Pro X from Shot Notes X and Lumberjack, along with the secondary source audio via Sync-N-Link X, which spared days of hand syncing.

Perhaps the most constructive feedback about story structure came from TJ Martin, director of LA92 and Undefeated, the Oscar-winner for Best Documentary Feature in 2012, which director Jean Balest has used as teaching material in the Mobile Film Classroom. Midway through the cut, Martin was struck by a plot point he felt required precision placement up front: A daughter is introduced while presiding over a conceptual art altar alongside her mother, who reveals she’s coping with her own pending death after a stage four cancer diagnosis.

Reshoots were vital to The Box. The dream world Gimpirea created — she cites Christopher Nolan’s Inception as an influence — required some clarification. During a visit from Valerie Faris, the Oscar-nominated co-director of Little Miss Sunshine and Battle of the Sexes, Gimpirea listened intently as she offered advice for pickup shots. Faris urged Gimpirea to keep the story focused on the point of view of her young lead during his travels. “There’s a lot told in his body and seeing him from behind,” Faris said. “In some ways, I’m more with him when I’m traveling behind him and seeing what he’s seeing.”

Celine Gimpirea

Gimpirea’s collaborative nature was evident throughout post. She was helped out by Antonio Manriquez, a video production teacher at Hollywood High, as well as her crew. Kais Karram was the film’s assistant director, and twin brother Zane was cinematographer. The brothers’ athleticism was an asset on-set, particularly during a day-long shoot in Griffith Park where they executed numerous tracking shots behind the film’s fleet-footed star as he navigated a walkway they had cleared of park visitors.

The selection of music was crucial, particularly for Amigone. For her main theme, she wanted a sound reminiscent of John Coltrane’s “After The Rain” and Claude Debussy’s “Clair De Lune.” She chose an original nocturne by John Mickevich, a composer and fellow member of the collective We Make Movies, whose founder/CEO Sam Mestman is also the CEO of LumaForge, developer of the Jellyfish Mobile — a “portable cloud,” as he put it — which, along with two MacBook Pros, was storing and syncing Amigone’s footage on location. Mestman believes “post should live on set.” As proof, a half-day of work for the editing team was done before the dance studio shoot had even wrapped.

During his mentor visit, Aaron Kaufman, director and longtime producing partner of filmmaker Robert Rodriguez, encouraged the teams to not be precious about losing shots in service of story. The documentary team certainly heeded this advice, as did Gimpirea, who cut a whole scene from Calvary Cemetery from her film.

As the project was winding down, Gimpirea reflected on her experience. “Knowing all the possibilities that I have in post now, it allows me to look completely differently at production and pre-production, and to pick out, more precisely, what I want,” she said.

Main Image: Shooting with the Red Raven at the Calvary Cemetery.


James Hughes is a writer and editor based in Chicago.

Panavision Hollywood names Dan Hammond VP/GM

Panavision has named Dan Hammond, a longtime industry creative solutions technologist, as vice president and general manager of Panavision Hollywood. He will be responsible for overseeing daily operations at the facility and working with the Hollywood team on camera systems, optics, service and support.

Hammond is a Panavision veteran, who worked at the company between 1989 and 2008 in various departments, including training, technical marketing and sales. Most recently he was at Production Resource Group (PRG), expanding his technical services skills. He is active with industry organizations, and is an associate member of the American Society of Cinematographers (ASC), as well as a member of the Academy of Television Arts and Sciences (ATAS) and Association of Independent Commercial Producers (AICP).

Review: GoPro Fusion 360 camera

By Mike McCarthy

I finally got the opportunity to try out the GoPro Fusion camera I have had my eye on since the company first revealed it in April. The $700 camera uses two offset fish-eye lenses to shoot 360 video and stills, while recording ambisonic audio from four microphones in the waterproof unit. It can shoot a 5K video sphere at 30fps, or a 3K sphere at 60fps for higher motion content at reduced resolution. It records dual 190-degree fish-eye perspectives encoded in H.264 to separate MicroSD cards, with four tracks of audio. The rest of the magic comes in the form of GoPro’s newest application Fusion Studio.

Internally, the unit is recording dual 45Mb/s H.264 files to two separate MicroSD cards, with accompanying audio and metadata assets. This would be a logistical challenge to deal with manually, copying the cards into folders, sorting and syncing them, stitching them together and dealing with the audio. But with GoPro’s new Fusion Studio app, most of this is taken care of for you. Simply plug in the camera and it will automatically access the footage, and let you preview and select what parts of which clips you want processed into stitched 360 footage or flattened video files.
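For anyone curious what that manual sorting would involve, here is a minimal sketch that pairs the front- and back-lens clips from the two cards by clip number. The GPFR/GPBK file-name prefixes and mount points are assumptions for illustration; Fusion Studio does all of this automatically when you point it at the camera.

```python
# Minimal sketch of "sorting and syncing" the two Fusion cards by hand: pair the front
# and back lens recordings by clip number so a stitcher can consume them together.
# The GPFR/GPBK naming and the mount points are assumed for illustration only.
from pathlib import Path

FRONT_CARD = Path("/Volumes/FUSION_FRONT/DCIM/100GFRNT")  # hypothetical mount point
BACK_CARD = Path("/Volumes/FUSION_BACK/DCIM/100GBACK")    # hypothetical mount point

def index_clips(card: Path, prefix: str) -> dict:
    """Map clip number -> file for recordings starting with the given prefix."""
    return {f.stem[len(prefix):]: f for f in card.glob(f"{prefix}*.MP4")}

front = index_clips(FRONT_CARD, "GPFR")
back = index_clips(BACK_CARD, "GPBK")

for clip_no in sorted(front.keys() & back.keys()):
    print(f"clip {clip_no}: {front[clip_no].name} + {back[clip_no].name}")

for clip_no in sorted(front.keys() ^ back.keys()):
    print(f"warning: clip {clip_no} is missing from one card")
```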

It also processes the multi-channel audio into ambisonic B-Format tracks, or standard stereo if desired. The app is a bit limited in user-control functionality, but what it does do it does very well. My main complaint is that I can’t find a way to manually set the output filename, but I can rename the exports in Windows once they have been rendered. Trying to process the same source file into multiple outputs is challenging for the same reason.

• 5Kp30: 2704×2624 recorded per lens, processed to 4992×2496 equirectangular
• 3Kp60: 1568×1504 recorded per lens, processed to 2880×1440 equirectangular
• Stills: 3104×3000 recorded per lens, processed to 5760×2880 equirectangular

With the Samsung Gear 360, I researched five different ways to stitch the footage, because I wasn’t satisfied with the included app. Most of those will also work with Fusion footage, and you can read about those options here, but they aren’t really necessary when you have Fusion Studio.

You can choose between H.264, Cineform or ProRes, your equirectangular output resolution, and ambisonic or stereo audio. That gives you pretty much every option you should need to process your footage. There is also a “Beta” option to stabilize your footage, which, once I got used to it, I really liked. It should be thought of more as a “remove rotation” option, since it’s not for stabilizing out sharp motions — which still leave motion blur — but for maintaining the viewer’s perspective even if the camera rotates in unexpected ways. Processing was about 6x run-time on my Lenovo ThinkPad P71 laptop, so a 10-minute clip would take an hour to stitch to 360.

The footage itself looks good, higher quality than my Gear 360, and the 60p stuff is much smoother, which is to be expected. While good VR experiences require 90fps to be rendered to the display to avoid motion sickness, that does not necessarily mean that 30fps content is a problem. When rendering the viewer’s perspective, the same frame can be sampled three times, shifting the image as they move their head, even from a single source frame. That said, 60p source content does give smoother results than the 30p footage I am used to watching in VR, but 60p did give me more issues during editorial. I had to disable CUDA acceleration in Adobe Premiere Pro to get Transmit to work with the WMR headset.

Once you have your footage processed in Fusion Studio, it can be edited in Premiere Pro — like any other 360 footage — but the audio can be handled a bit differently. Exporting as stereo will follow the usual workflow, but selecting ambisonic will give you a special spatially aware audio file. Premiere can use this in a 4-track multi-channel sequence to line up the spatial audio with the direction you are looking in VR, and if exported correctly, YouTube can do the same thing for your viewers.

In the Trees
Most GoPro products are intended for use capturing action moments and unusual situations in extreme environments (which is why they are waterproof and fairly resilient), so I wanted to study the camera in its “native habitat.” The most extreme thing I do these days is work on ropes courses, high up in trees or telephone poles. So I took the camera out to a ropes course that I help out with, curious to see how the recording at height would translate into the 360 video experience.

Ropes courses are usually challenging to photograph because of the scale involved. When you are zoomed out far enough to see the entire element, you can’t see any detail, and if you are zoomed in close enough to see faces, you have no good concept of how high up they are. Shooting in 360 is helpful in that it is designed to be panned through when viewed flat. This allows you to give the viewer a better sense of the scale, and they can still see the details of the individual elements or people climbing. And in VR, you should have a better feel for the height involved.

I had the Fusion camera and Fusion Grip extendable tripod handle, as well as my Hero6 kit, which included an adhesive helmet mount. Since I was going to be working at heights and didn’t want to drop the camera, the first thing I did was rig up a tether system. A short piece of 2mm cord fit through a slot in the bottom of the center post and a triple fisherman knot made a secure loop. The cord fit out the bottom of the tripod when it was closed, allowing me to connect it to a shock-absorbing lanyard, which was clipped to my harness. This also allowed me to dangle the camera from a cord for a free-floating perspective. I also stuck the quick release base to my climbing helmet, and was ready to go.

I shot segments in both 30p and 60p, depending on how I had the camera mounted, using higher frame rates for the more dynamic shots. I was worried that the helmet mount would be too close, since GoPro recommends keeping the Fusion at least 20cm away from what it is filming, but the helmet wasn’t too bad. Another inch or two would shrink it significantly from the camera’s perspective, similar to my tripod issue with the Gear 360.

I always climbed up with the camera mounted on my helmet and then switched it to the Fusion Grip to record the guy climbing up behind me and my rappel. Hanging the camera from a cord, even 30-feet below me, worked much better than I expected. It put GoPro’s stabilization feature to the test, but it worked fantastically. With the camera rotating freely, the perspective is static, although you can see the seam lines constantly rotating around you. When I am holding the Fusion Grip, the extended pole is completely invisible to the camera, giving you what GoPro has dubbed “Angel View.” It is as if the viewer is floating freely next to the subject, especially when viewed in VR.

Because I have ways to view 360 video in VR, and because I don’t mind panning around on a flat screen view, I am personally less excited about GoPro’s OverCapture functionality, but I recognize it is a useful feature that will greatly extend the use cases for this 360 camera. It is designed for people using the Fusion as a more flexible camera to produce flat content, instead of to produce VR content. I edited together a couple of OverCapture shots intercut with footage from my regular Hero6 to demonstrate how that would work.

Ambisonic Audio
The other new option that Fusion brings to the table is ambisonic audio. Editing ambisonics works in Premiere Pro using a 4-track multi-channel sequence. The main workflow kink here is that you have to manually override the audio settings every time you import a new clip with ambisonic audio, in order to set the audio channels to Adaptive with a single timeline clip. Turn on Monitor Ambisonics by right-clicking in the monitor panel, match the Pan, Tilt and Roll in the Panner-Ambisonics effect to the values in your VR Rotate Sphere effect (note that they are listed in a different order), and your audio should match the video perspective.

When exporting an MP4 in the audio panel, set Channels to 4.0 and check the Audio is Ambisonics box. From what I can see, the Fusion Studio conversion process compensates for changes in perspective, including “stabilization” when processing the raw recorded audio for Ambisonic exports, so you only have to match changes you make in your Premiere sequence.
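To see why the audio orientation has to be matched to the picture, it helps to look at what a yaw (pan) change does to a first-order B-format signal: only the X and Y components mix, while W and Z pass through untouched. Here is a generic sketch of that rotation; channel ordering (ambiX W, Y, Z, X versus FuMa W, X, Y, Z) and rotation signs vary between tools, so treat the conventions below as illustrative rather than as Premiere's internals.

```python
# Generic sketch (not Premiere's internals): a yaw rotation of a first-order ambisonic
# (B-format) signal only mixes the X and Y components; W (omni) and Z (height) are
# unchanged. Sign and channel-order conventions vary between tools.
import numpy as np

def rotate_bformat_yaw(w, x, y, z, yaw_degrees):
    """Rotate a first-order ambisonic signal about the vertical axis."""
    theta = np.radians(yaw_degrees)
    x_rot = np.cos(theta) * x - np.sin(theta) * y
    y_rot = np.sin(theta) * x + np.cos(theta) * y
    return w, x_rot, y_rot, z   # W and Z pass through unchanged

# Example: a source directly in front (all energy in X) panned 90 degrees
w, x, y, z = (np.ones(4), np.ones(4), np.zeros(4), np.zeros(4))
print(rotate_bformat_yaw(w, x, y, z, 90))  # energy moves from X into Y
```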

While I could have intercut the footage at both settings together into a 5Kp60 timeline, I ended up creating two separate 360 videos. This also makes it clear to the viewer which shots were 5K/p30 and which were recorded at 3K/p60. They are both available on YouTube, and I recommend watching them in VR for the full effect. But be warned that they are recorded at heights up to 80 feet up, so it may be uncomfortable for some people to watch.

Summing Up
GoPro’s Fusion camera is not the first 360 camera on the market, but it brings more pixels and higher frame rates than most of its direct competitors, and more importantly it has the software package to assist users in the transition to processing 360 video footage. It also supports ambisonic audio and offers the OverCapture functionality for generating more traditional flat GoPro content.

I found it to be easier to mount and shoot with than my earlier 360 camera experiences, and it is far easier to get the footage ready to edit and view using GoPro’s Fusion Studio program. The Stabilize feature totally changes how I shoot 360 videos, giving me much more flexibility in rotating the camera during movements. And most importantly, I am much happier with the resulting footage that I get when shooting with it.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Mercy Christmas director offers advice for indie filmmakers

By Ryan Nelson

After graduating from film school at The University of North Carolina School of the Arts, I was punched in the gut. I had driven into Los Angeles mere hours after the last day of school ready to set Hollywood on fire with my thesis film. But Hollywood didn’t seem to know I’d arrived. A few months later, Hollywood still wasn’t knocking on my door. Desperate to work on film sets and learn the tools of the trade, I took a job as a grip. In hindsight, it was a lucky accident. I spent the next few years watching some of the industry’s most successful filmmakers from just a few feet away.

Like a sponge, I soaked in every aspect of filmmaking that I could from my time on the sets of Avengers, Real Steel, Spider Man 3, Bad Boys 2, Seven Psychopaths, Smokin’ Aces and a slew of Adam Sandler comedies. I spent hours working, watching, learning and judging. How are they blocking the actors in this scene? What sort of cameras are they using? Why did they use that light? When do you move the camera? When is it static? When I saw the finished films in theaters, I ultimately asked myself, did it all work?

During that same time, I wrote and directed a slew of my own short films. I tried many of the same techniques I’d seen on set. Some of those attempts succeeded and some failed.

Recently, the stars finally aligned and I directed my first feature-length film, Mercy Christmas, from a script I co-wrote with my wife Beth Levy Nelson. After five years of writing, fundraising, production and post production, the movie is finished. We made the movie outside the Hollywood system, using crowd funding, generous friends and loving family members to compile enough cash to make the ultra-low-budget version of the Mercy Christmas screenplay.

I say low budget because, financially, it was. But thanks to my time on set, years of practice and much trial and error, the finished film looks and feels like much more than it cost.

Mercy Christmas, by the way, follows Michael Briskett, who meets the perfect woman and sees his ideal Christmas dream come true when she invites him to her family’s holiday celebration. Michael’s dream shatters, however, when he realizes that he will be the Christmas dinner. The film is currently on iTunes.

My experience working professionally in the film business while I struggled to get my shot at directing taught me many things. I learned over those years that a mastery of the techniques and equipment used to tell stories for film was imperative.

The stories I gravitate towards tend to have higher concept set pieces. I really enjoy combining action and character. At this point in my career, the budgets are more limited. However, I can’t allow financial restrictions to hold me back from the stories I want to tell. I must always find a way to use the tools available in their best way.

Ryan Nelson with camera on set.

Two Cameras
I remember an early meeting with a possible producer for Mercy Christmas. I told him I was planning to shoot two cameras. The producer chided me, saying it would be a waste of money. Right then, I knew I didn’t want to work with that producer, and I didn’t.

Every project I do now and in the future will be two cameras. And the reason is simple: It would be a waste of money not to use two cameras. On a limited budget, two cameras offer twice the coverage. Yes, understanding how to shoot two cameras is key, but it’s also simple to master. Cross coverage is not conducive to lower-budget lighting, so stacking the cameras on a single piece of coverage gives you a medium shot and a close shot at the same time. Or, for instance, when shooting the wide master shot, you can also get a medium master shot to give the editor another option to break away to while building a scene.

In Mercy Christmas, we have a fight scene that consists of seven minutes of screen time. It’s a raucous fight that covers three individual fights happening simultaneously. We scheduled three days to shoot the fight. Without two cameras it would have taken more days to shoot, and we definitely didn’t have more days in the budget.

Of course, two camera rentals and camera crews are budget concerns, so the key is to find a lower-budget but high-quality camera. For Mercy Christmas, we chose the Canon C300 Mark II. We found the image to be fantastic. I was very happy with the final result. You can also save money by only renting one lens package to use for both cameras.

Editing
Good camera coverage doesn’t mean much without an excellent editor. Our editor for Mercy Christmas, Matt Evans, is a very good friend and also very experienced in post. Like me, Matt started at the bottom and worked his way up. Along the way, he worked on many studio films as apprentice editor, first assistant editor and finally editor. Matt’s preferred tool is Avid Media Composer. He’s incredibly fast and understands every aspect of the system.

Matt’s technical grasp is superb, but his story sense is the real key. Matt’s technique is a fun thing to witness. He approaches a scene by letting the footage tell him what to do on a first pass. Soaking in the performances with each take, Matt finds the story that the images want to tell. It’s almost as if he’s reading a new script based on the images. I am delighted each time I can watch Matt’s first pass on a scene. I always expect to see something I hadn’t anticipated. And it’s a thrill.

Color Grading
Another aspect that should be budgeted into an independent film is professional color grading. No, your editor doing color does not count. A professional post house with a professional color grader is what you need. I know this seems exorbitant for a small-budget indie film, but I’d highly recommend planning for it from the beginning. We budgeted color grading for Mercy Christmas because we knew it would take the look to professional levels.

Color grading is not only a tool for the cinematographer, it’s a godsend for the director as well. First and foremost, it can save a shot, making a preferred take that has an inferior look actually become a usable take. Second, I believe strongly that color is another tool for storytelling. An audience can be as moved by color as by music. Every detail coming to the audience is information they’ll process to understand the story. I learned very early in my career how shots I saw created on set were accentuated in post by color grading. We used the Framework post house in Los Angeles on Mercy Christmas. The colorist was David Sims, who did the color and conform in DaVinci Resolve 12.

In the end, my struggle over the years did gain me one of my best tools: experience. I’ve taken the time to absorb all the filmmaking I’ve been surrounded by. Watching movies. Working on sets. Making my own.

After all that time chasing my dream, I kept learning, refining my skills and honing my technique. For me, filmmaking is a passion, a dream and a job. All of those elements made me the storyteller I am today and I wouldn’t change a thing.

On Hold: Making an indie web series

By John Parenteau

On Hold is an eight-episode web series, created and co-written by myself and Craig Kuehne, about a couple of guys working at a satellite company for an India-based technology firm. They have little going for themselves except each other, and that’s not saying much. Season 1 is available now, and we are in prepro on Season 2.

While I personally identify as a filmmaker, I’ve worn a wide range of hats in the entertainment industry since graduating from the USC School of Cinematic Arts in the late ‘80s. As a visual effects supervisor, I’ve been involved in projects as diverse as Star Trek: Voyager and The Hunger Games. I have also filled management roles at companies such as Amblin Entertainment, Ascent Media, Pixomondo and Shade VFX.

That’s me in the chair, conferring on setup.

It was with my filmmaker hat on that I recently partnered with Craig, a long-time veteran of visual effects, whose credits include Westworld and Game of Thrones. We thought it might be interesting to share our experiences as we ventured into live-action production.

It’s not unique that Craig and I want to be filmmakers. I think most industry professionals, who are not already working as directors or producers, strive to eventually reach that goal. It’s usually the reason people like us get into the business in the first place, and what many of us continue to pursue. Often we’ve become successful in another aspect of entertainment and found it difficult to break out of those “golden handcuffs.” I know Craig and I have both felt that way for years, despite having led fairly successful lives as visual effects pros.

But regardless of our successes in other roles, we still identify ourselves as filmmakers, and at some point, you just have to make the big push or let the dream go. I decided to live by my own mantra that “filmmakers make film.” Thus, On Hold was born.

Why the web series format, you might ask? With so many streaming and online platforms focused on episodic material, doing a series would show we are comfortable with the format, even if ours was a micro-version of a full series. We had, for years, talked about doing a feature film, but that type of project takes so many resources and so much coordination. It just seemed daunting in a no-budget scenario. The web series concept allows us to produce something that resembles a marketable project, essentially on little or no budget. In addition, the format is easily recreated for an equally low budget, so we knew we could do a second season of the show once we had done the first.

This is Craig, pondering a shot.

The Story
We have been friends for years, and the idea for the series came from both our friendship and our own lives. Who hasn’t felt, as they were getting older, that maybe some of the life choices they made might not have been the best? That can be a serious topic, but we took a comedic angle, looking for the extremes. Our main characters, Jeff (Jimmy Blakeney) and Larry (Paul Vaillancourt), are subtle reflections of us (Craig is Jeff, the somewhat over-thinking, obsessive nerd, and I’m Larry, a bit of a curmudgeon who can take himself way too seriously), but they quickly took on a life of their own, as did the rest of the cast. We added in Katy (Brittney Bertier), their over-energetic intern; Connie (Kelly Keaton), Jeff’s bigger-than-life sister; and Brandon (Scott Rognlien), the creepy and not-very-bright boss. The chemistry just clicked. They say casting is key, and we certainly discovered that on this project. We were very lucky to find the actors we did, and they played off of each other perfectly.

So what does it take to do a web series? First off, writing was key. We spent a few months working out the overall storyline of the first season and then homed in on the basic outlines of each episode. We actually worked out a rough overall arc of the show itself, deciding on a four-season project, which gave us a target to aim for. It was just some basic imagery for an ultimate ending of the show, but it helped keep us focused and helped drive the structure of the early episodes. We split up writing duties, each working on alternate episodes and then sharing scripts with each other. We tried to be brutally honest; it was important that the show reflect both of our views. We spent many nights arguing over certain moments in each episode, both very passionate about the storyline.

In the end we could see we had something good; we just needed to add our talented actors to make it great.

On Hold

The Production
We shot on a Blackmagic Cinema Camera, which was fairly new at that point. I wanted the flexibility of different lenses but a high-resolution, high-quality picture. I had never been thrilled with standard DSLR cameras, so I thought the Blackmagic camera would be a good option. To top it off, I could get one for free — always a deciding factor at our budget level. We ended up shooting with a single Canon zoom lens that Craig had, and for the most part it worked fine. I can’t tell you how important the “glass” you shoot with can be. If we had the budget, I would have rented some nice Zeiss lenses or something equally professional, and the quality of the image reflects that lack of budget. But the beauty of the Blackmagic Cinema Camera is that it shoots such a nice image already, and at such a high resolution, that we knew we would have some flexibility in post. We recorded in Apple ProRes.

As a DP, I have shot everything from PBS documentaries to music videos, commercials and EPKs (a.k.a. behind-the-scenes projects), and have had the luxury of working sometimes with a load of gear, sometimes with a single light. At USC Film School, my alma mater, you learn to work with what you have, so I learned early to adapt my style to the gear on hand. I ended up using a single lighting kit (a Lowel DP three-head kit), which worked fine. Shooting comedy is always more about static angles and higher-key lighting, and my limited kit made that easily achievable. I would usually lift the ambience in the room by bouncing a light off a wall or ceiling area off camera, then use bounce cards on C-stands to give some source light from the top/side, complementing but not competing with the existing fluorescents in the office. The bigger challenges were when we shot toward the windows. The bright sunlight outside, even with the blinds closed, was a challenge, but we creatively scheduled those shots for early or late in the day.

Low-budget projects are always an exercise in inventiveness and flexibility, mostly by the crew. We had a few people helping off and on, but ultimately it came down to the two of us wearing most of the hats and our associate producer, Maggie Jones, filling in the gaps. She handled the SAG paperwork, some AD tasks, ordered lunch and even operated the boom microphone. That left me shooting all but one episode, while we alternated directing episodes. We shot an episode a day, using a friend’s office on the weekends for free. We made sure we created shot lists ahead of time, so I could see what he had in mind when I shot Craig’s episodes, but also so he could act as a backup check on my list when I was directing.

The Blackmagic camera at work.

One thing about SAG — we decided to go with the guild’s new media contract for our actors. Most of them were already SAG, and while they most likely would have been fine shooting such a small project non-union, we wanted them to be comfortable with the work. We also wanted to respect the guild. Many people complain that working under SAG, especially at this level, is a hassle, but we found it to be exactly the opposite. The key is keeping up with the paperwork each day you shoot. Unless you are working incredibly long hours, or plan to abuse your talent (not a good idea regardless), it’s fairly easy to remain compliant. Maggie managed the daily paperwork and ensured we broke for lunch as per the requirements. Other than that, it was a non-issue.

The Post
Much like our writing and directing, Craig and I split editorial tasks. We both cut on Apple Final Cut Pro X (he with pleasure, me begrudgingly), and shared edits with each other. It was interesting to note differences in style. I tended to cut long, letting scenes breathe. Craig, a much better editor than I, had snappier cuts that moved quicker. This isn’t to say my way didn’t work at times, but it was a nice balance as we made comments on each other’s work. You can tell my episodes are a bit longer than his, but I learned from the experience and managed to shorten my episodes significantly.

I did learn another lesson, one called “killing your darlings.” In one episode, we had a scene where Jeff enjoyed a box of donuts, fishing through them to find the fruit-filled one he craved. The process of him licking each one and putting them back, or biting into a few and spitting out pieces, was hilarious on set, but in editorial I soon learned that too much of a good thing can be bad. Craig persuaded me to trim the scene, and I realized quickly that having one strong beat is just as good as several.

We had a variety of issues with other areas of post, but with no budget we could do little about them. Our “mix” consisted of adjusting levels in our timeline. Our DI amounted to a little color correction. While we were happy with the end result, we realized quickly that we want to make season two even better.

On Hold

The Lessons
A few things pop out as areas needing improvement. First of all, shooting a comedy series with a great group of improv comedians mandates at least two cameras. Both Craig and I, as directors, would do improv takes with the actors after getting the “scripted version,” but some of it was not usable since cutting between different improv takes from a single camera shoot is nearly impossible. We also realized the importance of a real sound mixer on set. Our single mic, mono tracks, run by our unprofessional hands, definitely needed some serious fixing in post. Simply having more experienced hands would have made our day more efficient as well.

For post, I certainly wanted to use newer tools, and we called in some favors for finishing. A confident color correction really makes the image cohesive, and even a rudimentary audio mix can remove many sound issues.

All in all, we are very proud of our first season of On Hold. Despite the technical issues and challenges, what really came together was the performances, and, ultimately, that is what people are watching. We’ve already started development on Season 2, which we will start shooting in January 2018, and we couldn’t be more excited.

The ultimate lesson we’ve learned is that producing a project like On Hold is not as hard as you might think. Sure it has its challenges, but what part of entertainment isn’t a challenge? As Tom Hanks says in A League of Their Own, “It’s supposed to be hard. If it wasn’t hard everyone would do it.” Well, this time, the hard work was worth it, and has inspired us to continue on. Ultimately, isn’t that the point of it all? Whether making films for millions of dollars, or no-budget web series, the point is making stuff. That’s what makes us filmmakers.


Timecode Systems intros SyncBac Pro for GoPro Hero6

Not long after GoPro introduced its latest offering, Timecode Systems released a customized SyncBac Pro for GoPro Hero6 Black cameras, a timecode-sync solution for the newest generation of action cameras.

By allowing the Hero6 to generate its own frame-accurate timecode, the SyncBac Pro makes it possible to timecode-sync multiple GoPro cameras wirelessly over long-range RF. If GoPro cameras are being used as part of a wider multicamera shoot, SyncBac Pro also allows them to timecode-sync with professional cameras and audio devices. At the end of a shoot, the edit team receives SD cards with frame-accurate timecode embedded into the MP4 files. According to Timecode Systems, using SyncBac Pro for timecode saves around 85 percent of the time spent syncing in post.

“With the Hero6, GoPro has added features that advance camera performance and image quality, which increases the appeal of using GoPro cameras for professional filming for television and film,” says Ashok Savdharia, CTO at Timecode Systems. “SyncBac Pro further enhances the camera’s compatibility with professional production methods by adding the ability to integrate footage into a multicamera film and broadcast workflow in the same way as larger-scale professional cameras.”

The new SyncBac Pro for GoPro Hero6 Black will start shipping this winter, and it is now available for preorder.

Color plays big role in director Sean Baker’s The Florida Project

Director Sean Baker is drawing wide praise for his realistic portrait of life on the fringe in America in his new film The Florida Project. Baker applies a light touch to the story of a precocious six-year-old girl living in the shadow of Disney World, giving it the feel of a slice-of-life documentary. That quality is carried through in the film’s natural look. Where Baker shot his previous film, Tangerine, entirely with an iPhone, The Florida Project was recorded almost wholly on anamorphic 35mm film by cinematographer Alexis Zabe.

Sam Daley

Post finishing for the film was completed at Technicolor PostWorks New York, which called on a traditional digital intermediate workflow to accommodate Baker’s vision. The work began with scanning the 35mm negative to 2K digital files for dailies and editorial. It ended months later with rescanning at 4K and 6K resolution, editorial conforming and color grading in the facility’s 4K DI theater. Senior colorist Sam Daley applied the final grade in Blackmagic DaVinci Resolve 12.5.

Shooting on film was a perfect choice, according to Daley, as it allowed Baker and Zabe to capture the stark contrasts of life in Central Florida. “I lived in Florida for six years, so I’m familiar with the intensity of light and how it affects color,” says Daley. “Pastels are prominent in the Florida color palette because of the way the sun bleaches paint.”

He adds that Zabe used Kodak Vision3 50D and 250D stock for daylight scenes shot in the hot Florida sun, noting, “The slower stock provided a rich color canvas, so much so that at times we de-emphasized the greenery so it didn’t feel hyperreal.”

The film’s principal location is a rundown motel, ironically named the Magic Castle. It does not share the sun-bleached look of other businesses and housing complexes in the area as it has been freshly painted a garish shade of purple.

Baker asked Daley to highlight such contrasts in the grade, but to do so subtly. “There are many colorful locations in the movie,” Daley says. “The tourist traps you see along the highway in Kissimmee are brightly colored. Blue skies and beautiful sunsets appear throughout the film. But it was imperative not to allow the bright colors in the background to distract from the characters in the foreground. The very first instruction that I got from Sean was to make it look real, then dial it up a notch.”

Mixing Film and Digital for Night Shots
To make use of available light, nighttime scenes were not shot on film, but rather were captured digitally on an Arri Alexa. Working in concert with color scientists from Technicolor PostWorks New York and Technicolor Hollywood, Daley helmed a novel workflow to make the digital material blend with scenes that were film-original. He first “pre-graded” the digital shots and then sent them to Technicolor Hollywood where they were recorded out to film. After processing at FotoKem, the film outs were returned to Technicolor Hollywood and scanned to 4K digital files. Those files were rushed back to New York via Technicolor’s Production Network where Daley then dropped them into his timeline for final color grading. The result of the complex process was to give the digitally acquired material a natural film color and grain structure.

“It would have been simpler to fly the digitally captured scenes into my timeline and put on a film LUT and grain FX,” explains Daley, “but Sean wanted everything to have a film element. So, we had to rethink the workflow and come up with a different way to make digital material integrate with beautifully shot film. The process involved several steps, but it allowed us to meet Sean’s desire for a complete film DI.”

Calling on iPhone for One Scene
A scene near the end of the film was, for narrative reasons, captured with an iPhone. Daley explains that, although intended to stand out from the rest of the film, the sequence couldn’t appear so different that it shocked the audience. “The switch from 4K scanned film material to iPhone footage happens via a hard cut,” he explains. “But it needed to feel like it was part of the same movie. That was a challenge because the characteristics of Kodak motion picture stock are quite different from an iPhone.”

The iPhone material was put through the same process as the Alexa footage; it was pre-graded, recorded out to film and scanned back to digital. “The grain helps tie it to the rest of the movie,” reports Daley. “And the grain that you see is real; it’s from the negative that the scene was recorded out to. There are no artificial looks and nothing gimmicky about any of the looks in this film.”

The apparent lack of artifice is, in fact, one of the film’s great strengths. Daley notes that even a rainbow that appears in a key moment was captured naturally. “It’s a beautiful movie,” says Daley. “It’s wonderfully directed, photographed and edited. I was very fortunate to be able to add my touch to the imagery that Sean and Alexis captured so beautifully.”

A Closer Look: VR solutions for production and post

By Alexandre Regeffe

Back in September, I traveled to Amsterdam to check out new tools relating to VR and 360 production and post. As a producer based in Paris, France, I have been working in the virtual reality part of the business for over two years. While IBC took place in September, the information I have to share is still quite relevant.

KanDao

I saw some very cool technology at the show regarding VR and 360 video, especially within the cinematic VR niche. And niche is the perfect word — I see the market slightly narrowing after the wave of hype that happened a couple of years ago. Personally, I don’t think the public has been reached yet, but pardon my French pessimism. Let’s take a look…

Cameras
One new range of products I found amazing was the Obsidian line from manufacturer KanDao. This Chinese brand has a smart product line with its 3D/360 cameras. Starting with the Obsidian Go, they reach pro cinematic levels with the Obsidian R (for Resolution, which is 8K per eye) and the Obsidian S (for Speed, which captures at 120fps). The cameras offer a small radial form factor with only six eyes to produce very smooth stereoscopy, and a very high resolution per eye, which is one of the keys to reaching a good feeling of immersion in an HMD.

KanDao’s features are promising, including handling 6DoF with depth map generation. To me, this is the future of cinematic VR production — you will be able to have more freedom as the viewer, slightly translating your point of view to see behind objects with natural parallax distortion in realtime! Let me call it “extended” stereoscopic 360.

I can’t speak about professional 360 cameras without also mentioning the Ozo from Nokia. Considered by users to be the first pro VR camera, the Ozo+ version launched this year with a new ISP and offers astonishing new features, especially when you bring your shots into the Ozo Creator tool, now in version 2.1.

Nokia Ozo+

Powerful tools like highlight and shadow recovery, haze removal, auto stabilization and better denoising are there to improve the overall image quality. Another big thing at the Nokia booth was version 2.0 of the Ozo Live system. Yes, you can now webcast your live event in stereoscopic 360 with a 4K-per-eye resolution! And you can do it with just a (boosted) laptop! All the VR tools from Nokia are part of what they call Ozo Reality, an integrated ecosystem where you can create, deliver and experience cinematic VR.

VR Post
When you talk about VR post you have to talk about stitching — assembling all the sources to obtain a single 360 image. As a French-educated man, you know I have to complain somehow: I hate stitching. And I often yell at the guys who shoot from the wrong camera positions. Spending hours (and money) dealing with seam lines is not my tasse de thé.

A few months before IBC, I found my saving grace: Mistika VR from SGO. Well known for its color grading tool Mistika Ultima (one of the finest for stereoscopic work), SGO launched a stitching tool for 360 video. Fantastic results. Fantastic development team.

In this very intuitive tool, you can stitch sources from almost all existing cameras and rigs available on the market now, from the Samsung Gear 360 to Jaunt. With amazing optical flow algorithms, seam line fine adjustments, color matching and many other features, it is to me by far the best tool for outputting a clean, seamless equirectangular image. And the upcoming Mistika VR 3D, for stitching stereoscopic sources, is very promising. You know what? Thanks to Mistika VR, the stitching process could be fun. Even for me.

In general, optical flow is a huge improvement for stitching, and we can find this parameter in the Kandao Studio stitching tool (designed only for Obsidian cameras), for instance. When you’re happy with your stitch, you can then edit, color grade and maybe add VFX and interactivity in order to bring a really good experience to viewers.

Immersive video within Adobe Premiere.

Today, Adobe CC leads the editing scene with its 360-specific tools, such as its contextual viewer. But the big hit was when Adobe acquired the Skybox plugin suite from Mettle, which will be integrated natively into the next Adobe CC version (for Premiere and After Effects).

With this set of tools you can easily manipulate your equirectangular sources, do tripod removal, sky replacements and all the invisible effects that were tricky to do without Skybox. You can then add contextual 360 effects like text, blur, transitions, greenscreen and much more, in monoscopic and even stereoscopic mode. All this while viewing your timeline directly in your Oculus Rift, in realtime! And, incredibly, it works — I use these tools all day long.

So let’s talk about the Mettle team. Founded by two artists back in 1992, they joined the VR movement three years ago with the Skybox suite. They understood they had to bring tech to creative people. As a result they made smart tools with very well-designed GUIs. For instance, look at Mettle’s new Mantra creative toolset for After Effects and Premiere. It is incredible to work with because you get the power to create very artistic designs in 360 within Adobe CC. And if you’re a solid VFX tech, wait for their Volumatrix depth-related VR FX software tools. Working in collaboration with Facebook, Mettle will launch the next big tool for doing VFX in 3D/360 environments using camera-generated depth maps. It will open awesome new possibilities for content creators.

You know, the current main issue in cinematic 360 is image quality. Of course, we could talk about resolution or pixels per eye, but I think we should focus on color grading. This task is very creative — bringing emotions to the viewers. For me, the best 360 color grading tool to achieve these goals with uncompromised quality is Scratch VR from Assimilate. Beautiful. Formidable. Scratch is a very powerful color grading system, always on top in terms of technology. Now that they’ve added VR capabilities, you can color grade your stereoscopic equirectangular sources as easily as normal sources. My favorite is the mask repeater function, which lets you naturally handle masks even across the back seam, something that is almost impossible in other color grading tools. And you can also view your results directly in your HMD.

Scratch VR and ZCam collaboration.

At NAB 2017, they introduced Scratch VR Z, an integrated workflow developed in collaboration with ZCam, the manufacturer of the S1 and S1 Pro. In this workflow you can, for instance, stitch sources directly in Scratch and do super-high-quality color grading with realtime live streaming, along with logo insertion, greenscreen capabilities, layouts and more. Crazy. For finishing, the Scratch VR output module is also very useful, enabling you to render your result in ProRes even on Windows, in 10-bit H.264 and in many other formats.

Finishing and Distribution
So your cinematic VR experience is finished (you’ll notice I’ve skipped the sound part of the process, but since it’s not the part I work on I will not speak about this essential stage). But maybe you want to add some interactivity for a better user experience?

I visited IBC’s Future Zone to talk with the Liquid Cinema team. What is it? Simply put, it’s a set of tools enabling you to enhance your cinematic VR experience. One important word is storytelling — with Liquid Cinema you can add an interactive layer to your story. The first tool needed is the authoring application, where you drop in your sources, which can be movies, stills, 360 and 2D material. Then create and enjoy.

For example, you can add graphic layers and enable the viewer’s gaze function, create multibranching scenarios based on intelligent timelines, and play with forced-perspective features so your viewer never misses an important thing… you must try it.

The second part of the suite is about VR distribution. As a content creator you want your experience to be on all existing platforms, HMDs, channels … not an easy feat, but with Liquid Cinema it’s possible. Their player is compatible with Samsung Gear VR, Oculus Rift, HTC Vive, iOS, Android, Daydream and more. It’s coming to Apple TV soon.

IglooVision

The third part of the suite is the management of your content. Liquid Cinema has a CMS tool, which is very simple, allows changes like geoblocking to be made easily and provides useful analytics tools like heat maps. And you can use your Vimeo Pro account as a CDN if needed. Perfect.

Also in the Future Zone was the igloo from IglooVision. This is one of the best “social” ways to experience cinematic VR that I have ever seen. Enter this room with your friends and you can watch 360 all around and finish your drink (try doing that with an HMD). Comfortable, isn’t it? You can also use it as a “shared VR production suite” by connecting Adobe Premiere or your favorite tool directly to the system. Boom. You now have an immersive 360-degree monitor around you and your post production team.

So that was my journey into the VR stuff of IBC 2017. Of course, this is a non-exhaustive list of tools, with nothing about sound (which is very important in VR), but it’s my personal choice. Period.

One last thing: VR people. I have met a lot of enthusiastic, smart, interesting and happy women and men, helping content producers like me to push their creative limits. So thanks to all of them and see ya.


Paris-based Alexandre Regeffe is a 25-year veteran of TV and film. He is currently VR post production manager at Neotopy, a VR studio, as well as a VR effects specialist working on After Effects and the entire Adobe suite. His specialty is cinematic VR post workflows.

Winners: IBC2017 Impact Awards

postPerspective has announced the winners of our postPerspective Impact Awards from IBC2017. All winning products reflect the latest version of the product, as shown at IBC.

The postPerspective Impact Award winners from IBC2017 are:

• Adobe for Creative Cloud
• Avid for Avid Nexis Pro
• Colorfront for Transkoder 2017
• Sony Electronics for Venice CineAlta camera

Seeking to recognize debut products and key upgrades with real-world applications, the postPerspective Impact Awards are determined by an anonymous judging body made up of industry pros. The awards honor innovative products and technologies for the post production and production industries that will influence the way people work.

“All four of these technologies are very worthy recipients of our first postPerspective Impact Awards from IBC,” said Randi Altman, postPerspective’s founder and editor-in-chief. “These awards celebrate companies that push the boundaries of technology to produce tools that actually make users’ working lives easier and projects better, and our winners certainly fall into that category. You’ll notice that our awards from IBC span the entire pro pipeline, from acquisition to on-set dailies to editing/compositing to storage.

“As IBC falls later in the year, we are able to see where companies are driving refinements to really elevate workflow and enhance production. So we’ve tapped real-world users to vote for the Impact Awards, and they have determined what could be most impactful to their day-to-day work. We’re very proud of that fact, and it makes our awards quite special.”

IBC2017 took place September 15-19 in Amsterdam. postPerspective Impact Awards are next scheduled to celebrate innovative product and technology launches at the 2018 NAB Show.

Red intros Monstro 8K VV, a full-frame sensor

Red Digital Cinema has a new cinematic full-frame sensor for its Weapon cameras called the Monstro 8K VV. Monstro evolves beyond the Dragon 8K VV sensor with improvements in image quality including dynamic range and shadow detail.

This newest camera and sensor combination, Weapon 8K VV, offers full-frame lens coverage, captures 8K full-format motion at up to 60fps, produces ultra-detailed 35.4 megapixel stills and delivers incredibly fast data speeds — up to 300MB/s. And like all of Red’s DSMC2 cameras, Weapon shoots simultaneous RedCode RAW and Apple ProRes or Avid DNxHD/HR recording. It also adheres to the company’s Obsolescence Obsolete — its operating principle that allows current Red owners to upgrade their technology as innovations are unveiled and move between camera systems without having to purchase all new gear.

The new Weapon is priced at $79,500 (for the camera brain) with upgrades for carbon fiber Weapon customers available for $29,500. Monstro 8K VV will replace the Dragon 8K VV in Red’s line-up, and customers that had previously placed an order for a Dragon 8K VV sensor will be offered this new sensor beginning now. New orders will start being fulfilled in early 2018.

Red has also introduced a service offering for all carbon fiber Weapon owners called Red Armor-W. Red Armor-W offers enhanced and extended protection beyond Red Armor, and also includes one sensor swap each year.

According to Red president Jarred Land, “We put ourselves in the shoes of our customers and see how we can improve how we can support them. Red Armor-W builds upon the foundation of our original extended warranty program and includes giving customers the ability to move between sensors based upon their shooting needs.”

Additionally, Red has made its enhanced image processing pipeline (IPP2) available in-camera with the company’s latest firmware release (V.7.0) for all cameras with Helium and Monstro sensors. IPP2 offers a completely overhauled workflow experience, featuring enhancements such as smoother highlight roll-off, better management of challenging colors, an improved demosaicing algorithm and more.

GoPro intros Hero6 and its first integrated 360 solution, Fusion

By Mike McCarthy

Last week, I traveled to San Francisco to attend GoPro’s launch event for its new Hero6 and Fusion cameras. The Hero6 is the next logical step in the company’s iteration of action cameras, increasing the supported frame rates to 4Kp60 and 1080p240, as well as adding integrated image stabilization. The Fusion on the other hand is a totally new product for them, an action-cam for 360-degree video. GoPro has developed a variety of other 360-degree video capture solutions in the past, based on rigs using many of their existing Hero cameras, but Fusion is their first integrated 360-video solution.

While the Hero6 is available immediately for $499, the Fusion is expected to ship in November for $699. While we got to see the Fusion and its footage, most of the hands-on aspects of the launch event revolved around the Hero6. Each of the attendees was provided a Hero6 kit to record the rest of the day’s events. My group was given a ride on the RocketBoat through San Francisco Bay. This adventure took advantage of a number of features of the camera, including the waterproofing, the slow motion and the image stabilization.

The Hero6

The big change within the Hero6 is the inclusion of GoPro’s new custom-designed GP1 image processing chip. This allows them to process and encode higher frame rates, and allows for image stabilization at many frame-rate settings. The camera itself is physically similar to the previous generations, so all of your existing mounts and rigs will still work with it. It is an easy swap out to upgrade the Karma drone with the new camera, which also got a few software improvements. It can now automatically track the controller with the camera to keep the user in the frame while the drone is following or stationary. It can also fly a circuit of 10 waypoints for repeatable shots, and overcoming a limitation I didn’t know existed, it can now look “up.”

There were fewer precise details about the Fusion. It is stated to be able to record a 5.2K video sphere at 30fps and a 3K sphere at 60fps. This is presumably the circumference of the sphere in pixels, and therefore the width of an equirectangular output. That would lead us to conclude that each individual fisheye recording is about 2,600 pixels wide, plus a little overlap for the stitch. (In this article, GoPro’s David Newman details how the company arrives at 5.2K.)
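
To make that arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. The assumption that the “K” number is the width of the equirectangular output, and the five percent seam overlap, are mine for illustration only, not GoPro specs.

    # Rough per-lens width from a stated sphere size.
    # Assumptions (mine, not GoPro's): the K number is the equirectangular
    # width, and each fisheye carries roughly 5 percent extra for the stitch.
    def per_lens_width(sphere_width_px, lenses=2, overlap=0.05):
        return int(sphere_width_px / lenses * (1 + overlap))

    for label, width in [("5.2K", 5200), ("3K", 3000)]:
        print(label, "sphere ->", per_lens_width(width), "px per lens, roughly")
    # 5.2K sphere -> ~2730 px per lens; 3K sphere -> ~1575 px per lens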

GoPro Fusion for 360

The sensors are slightly laterally offset from one another, allowing the camera to be thinner and decreasing the parallax shift at the side seams, but adding a slight offset at the top and bottom seams. If the camera is oriented upright, those seams are the least important areas in most shots. They also appear to have a good solution for hiding the camera support pole within the stitch, based on the demo footage they were showing. It will be interesting to see what effect the Fusion camera has on the “culture” of 360 video. It is not the first affordable 360-degree camera, but it will definitely bring 360 capture to new places.

A big part of the equation for 360 video is the supporting software and the need to get the footage from the camera to the viewer in a usable way. GoPro acquired Kolor, maker of Autopano Video Pro, a few years ago to support image stitching for its larger 360 video camera rigs, so certain pieces of the underlying software ecosystem to support a 360-video workflow are already in place. The desktop solution for processing the 360 footage will be called Fusion Studio, and it is listed as coming soon on their website.

They have a pretty slick demonstration of flat image extraction from the video sphere, which they are marketing as “OverCapture.” This allows a cellphone to pan around the 360 sphere, which is pretty standard these days, but by recording that viewing in realtime they can output standard flat videos from the 360 sphere. This is a much simpler and more intuitive approach to virtual cinematography than trying to control the view with angles and keyframes in a desktop app.

This workflow should result in a flat video with a strong fish-eye look, similar to more traditional GoPro shots, due to the similar lens characteristics. There are a variety of possible approaches to handling the fish-eye look. GoPro’s David Newman explained to me some of the solutions he has been working on to re-project GoPro footage onto a sphere, to reframe or alter the field of view in a virtual environment. Based on their demo reel, it looks like they also have some interesting tools coming for using the unique functionality that 360 makes available to content creators, using various 360 projections for creative purposes within a flat video.
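
To show the idea behind that kind of reframing, here is a minimal, generic rectilinear-extraction sketch in Python/NumPy. It is not GoPro’s OverCapture code, just the standard projection math: choose a yaw, pitch and field of view, cast pinhole-camera rays and sample the equirectangular frame wherever those rays land.

    import numpy as np

    def extract_flat_view(equi, yaw_deg, pitch_deg, fov_deg, out_w, out_h):
        """Nearest-neighbor sample of an equirectangular image (H x W x 3
        array) into a flat rectilinear view. Purely illustrative."""
        h, w = equi.shape[:2]
        yaw, pitch, fov = np.radians([yaw_deg, pitch_deg, fov_deg])

        # Pinhole rays through an out_w x out_h image plane at focal length f.
        f = (out_w / 2) / np.tan(fov / 2)
        xs = np.arange(out_w) - out_w / 2
        ys = np.arange(out_h) - out_h / 2
        xv, yv = np.meshgrid(xs, ys)
        rays = np.stack([xv, yv, np.full_like(xv, f, dtype=float)], axis=-1)
        rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

        # Aim the virtual camera: pitch around x, then yaw around y.
        cp, sp, cy, sy = np.cos(pitch), np.sin(pitch), np.cos(yaw), np.sin(yaw)
        rot_x = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
        rot_y = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        rays = rays @ (rot_y @ rot_x).T

        # Ray direction -> longitude/latitude -> source pixel.
        lon = np.arctan2(rays[..., 0], rays[..., 2])
        lat = np.arcsin(np.clip(rays[..., 1], -1, 1))
        src_x = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
        src_y = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
        return equi[src_y, src_x]

Real tools layer proper filtering, lens correction and keyframed camera moves on top of this, but the core of the reframing trick is little more than that resampling step.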

GoPro Software
On the software front, GoPro has also been developing tools to help its camera users process and share their footage. One of the inherent issues of action-camera footage is that there is basically no trigger discipline. You hit record long before anything happens, and then get back to the camera after the event in question is over. I used to get one-hour roll-outs that had 10 seconds of usable footage within them. The same is true when recording many attempts to do something before one of them succeeds.

Remote control of the recording process has helped with this a bit, but regardless you end up with tons of extra footage that you don’t need. GoPro is working on software tools that use AI and machine learning to sort through your footage and find the best parts automatically. The next logical step is to start cutting together the best shots, which is what Quikstories in their mobile app is beginning to do. As someone who edits video for a living, and is fairly particular and precise, I have a bit of trouble with the idea of using something like that for my videos, but for someone to whom the idea of “video editing” is intimidating, this could be a good place to start. And once the tools get to a point where their output can be trusted, automatically sorting footage could make even very serious editing a bit easier when there is a lot of potential material to get through. In the meantime though, I find their desktop tool Quik to be too limiting for my needs and will continue to use Premiere to edit my GoPro footage, which is the response I believe they expect of any professional user.

There are also a variety of new camera mount options available, including small extendable tripod handles in two lengths, as well as a unique “Bite Mount” (pictured, left) for POV shots. It includes a colorful padded float in case it pops out of your mouth while shooting in the water. The tripods are extra important for the forthcoming Fusion, to support the camera with minimal obstruction of the shot. And I wouldn’t recommend using the Fusion on the Bite Mount, unless you want a lot of head in the shot.

Ease of Use
Ironically, as someone who has processed and edited hundreds of hours of GoPro footage, and even worked for GoPro for a week on paper (as an NAB demo artist for Cineform during their acquisition), I don’t think I had ever actually used a GoPro camera. The fact that at this event we were all handed new cameras with zero instructions and expected to go out and shoot is a testament to how confident GoPro is that their products are easy to use. I didn’t have any difficulty with it, but the engineer within me wanted to know the details of the settings I was adjusting. Bouncing around with water hitting you in the face is not the best environment for learning how to do new things, but I was able to use pretty much every feature the camera had to offer during that ride with no prior experience. (Obviously I have extensive experience with video, just not with GoPro usage.) And I was pretty happy with the results. Now I want to take it sailing, skiing and other such places, just like a “normal” GoPro user.

I have pieced together a quick highlight video of the various features of the Hero6:


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Making the jump to 360 Video (Part 1)

By Mike McCarthy

VR headsets have been available for over a year now, and more content is constantly being developed for them. We should expect that rate to increase as new headset models are being released from established technology companies, prompted in part by the new VR features expected in Microsoft’s next update to Windows 10. As the potential customer base increases, the software continues to mature, and the content offerings broaden. And with the advances in graphics processing technology, we are finally getting to a point where it is feasible to edit videos in VR, on a laptop.

While a full VR experience requires true 3D content in order to render a custom perspective based on the position of the viewer’s head, there is a “video” version of VR, which is called 360 video. The difference between “full VR” and “360 video” is that while both allow you to look around in every direction, 360 video is pre-recorded from a particular point, and you are limited to the view from that spot. You can’t move your head to see around behind something, like you can in true VR. But 360 video can still offer a very immersive experience and arguably better visuals, since they aren’t being rendered on the fly. 360 video can be recorded in stereoscopic or flat, depending on the capabilities of the cameras used.

Stereoscopic is obviously more immersive, less of a video dome and inherently supported by the nature of VR HMDs (Head Mounted Displays). I expect that stereoscopic content will be much more popular in 360 Video than it ever was for flat screen content. Basically the viewer is already wearing the 3D glasses, so there is no downside, besides needing twice as much source imagery to work with, similar to flat screen stereoscopic.

There are a variety of options for recording 360 video, from a single ultra-wide fisheye lens on the Fly360, to dual 180-degree lens options like the Gear 360, Nikon KeyMission and Garmin Virb. GoPro is releasing the Fusion, which will fall into this category as well. The next step up is more lenses, with cameras like the Orah 4i or the Insta360 Pro. Beyond that, you are stepping into the much more expensive rigs with lots of lenses and lots of stitching, but usually much higher final image quality, like the GoPro Omni or the Nokia Ozo. There are also countless rigs that use an array of standard cameras to capture 360 degrees, but these solutions are much less integrated than the all-in-one products that are now entering the market. Regardless of the camera you use, you are going to be recording one or more files in a pixel format fairly unique to that camera that will need to be processed before it can be used in the later stages of the post workflow.

Affordable cameras

The simplest and cheapest 360 camera option I have found is the Samsung Gear 360. There are two totally different models with the same name, usually differentiated by the year of their release. I am using the older 2016 model, which has a higher resolution sensor, but records UHD instead of the slightly larger full 4K video of the newer 2017 model.

The Gear 360 records two fisheye views that are each just over 180 degrees, from cameras situated back to back in a 2.5-inch sphere. Both captured image circles are recorded onto a single frame, side by side, resulting in 2:1 aspect ratio files. These are encoded into JPEG (7776×3888 stills) or HEVC (3840×1920 video at 30Mb/s) and saved onto a MicroSD card. The camera is remarkably simple to use, with only three buttons and a tiny UI screen to select recording mode and resolution. If you have a Samsung Galaxy phone, there are a variety of other functions available, like remote control and streaming the output to the phone as a viewfinder. Even without a Galaxy phone, the camera did everything I needed to generate 360 footage to stitch and edit with, but it was cool to have a remote viewfinder for the driving shots.
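
Since both image circles land side by side in a single 2:1 frame, the first pre-stitch step in most third-party workflows is simply splitting that frame down the middle. A minimal sketch in Python (the file names are hypothetical, and imageio is just one convenient way to read and write the frames):

    import imageio.v3 as iio

    frame = iio.imread("gear360_still.jpg")               # dual fisheye, e.g. 7776 x 3888
    h, w = frame.shape[:2]
    front, back = frame[:, : w // 2], frame[:, w // 2 :]  # two ~180-degree halves

    iio.imwrite("front_fisheye.jpg", front)
    iio.imwrite("back_fisheye.jpg", back)
    print(front.shape, back.shape)                         # each half is roughly square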

Pricier cameras

One of the big challenges of shooting with any 360 camera is how to avoid getting gear and rigging in the shot, since the camera records everything around it. Even the tiny integrated tripod on the Gear 360 is visible in the shots, and putting it on the plate of my regular DSLR tripod fills the bottom of the footage. My solution was to use the thinnest support I could to keep the rest of the rigging as far from the camera as possible, and therefore smaller from its perspective. I created a couple of options to shoot with, which are pictured below, and they are much less intrusive in the recorded images. Obviously, besides the camera support, there is the issue of everything else in the shot, including the operator. Since most 360 videos are locked off, an operator may not be needed, but there is no “behind the camera” for hiding gear or anything else. Your set needs to be considered in every direction, since it will all be visible to your viewer. If you can see the camera, it can see you.

There are many different approaches to storing 360 images, which are inherently spherical, as a video file, which is inherently flat. This is the same issue that cartographers have faced for hundreds of years — creating flat paper maps of a planet that is inherently curved. While there are sphere map, cube map and pyramid projection options (among others) based on the way VR headsets work, the equirectangular format has emerged as the standard for editing and distribution encoding, while other projections are occasionally used for certain effects processing or other playback options.
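
For the curious, the equirectangular mapping itself is nothing exotic: longitude stretches linearly across the frame width and latitude across the height. A tiny sketch in Python (the 3840×1920 frame size is just an example):

    FRAME_W, FRAME_H = 3840, 1920        # a 2:1 equirectangular frame

    def sphere_to_pixel(lon_deg, lat_deg):
        """Map a direction (longitude -180..180, latitude -90..90, +90 = up)
        to a pixel position in the equirectangular frame."""
        x = (lon_deg + 180.0) / 360.0 * (FRAME_W - 1)
        y = (90.0 - lat_deg) / 180.0 * (FRAME_H - 1)
        return round(x), round(y)

    print(sphere_to_pixel(0, 0))       # straight ahead -> center of the frame
    print(sphere_to_pixel(0, 90))      # straight up -> top row
    print(sphere_to_pixel(180, 0))     # directly behind -> right edge, mid height

The heavy stretching this mapping causes near the poles is why the top and bottom of equirectangular frames look so distorted, and why headset players re-project the image back onto a sphere before display.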

Usually the objective of the stitching process is to get the images from all of your lenses combined into a single frame with the least amount of distortion and the fewest visible seams. There are a number of software solutions that do this, from After Effects plugins, to dedicated stitching applications like Kolor AVP and Orah VideoStitch-Studio to unique utilities for certain cameras. Once you have your 360 video footage in the equirectangular format, most of the other steps of the workflow are similar to their flat counterparts, besides VFX. You can cut, fade, title and mix your footage in an NLE and then encode it in the standard H.264 or H.265 formats with a few changes to the metadata.

Technically, the only thing you need to add to an existing 4K editing workflow in order to make the jump to 360 video is a 360 camera. Everything else could be done in software, but the other thing you will want is a VR headset or HMD. It is possible to edit 360 video without an HMD, but it is a lot like grading a film using scopes but no monitor. The data and tools you need are all right there, but without being able to see the results, you can’t be confident of what the final product will be like. You can scroll around the 360 video in the view window, or see the whole projected image all distorted, but it won’t have the same feel as experiencing it in a VR headset.

360 Video is not as processing intensive as true 3D VR, but it still requires a substantial amount of power to provide a good editing experience. I am using a Thinkpad P71 with an Nvidia Quadro P5000 GPU to get smooth performance during all these tests.

Stay tuned for Part 2 where we focus on editing 360 Video.


Mike McCarthy is an online editor/workflow consultant with 10 years of experience on feature films and commercials. He has been working on new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

Blackmagic’s new Ultimatte 12 keyer with one-touch keying

Building on the 40-year heritage of its Ultimatte keyer, Blackmagic Design has introduced the Ultimatte 12 realtime hardware compositing processor for broadcast-quality keying, adding augmented reality elements into shots, working with virtual sets and more. The Ultimatte 12 features new algorithms and color science, enhanced edge handling, greater color separation and color fidelity and better spill suppression.

The 12G-SDI design gives Ultimatte 12 users the flexibility to work in HD and switch to Ultra HD when they are ready. Sub-pixel processing is said to boost image quality and textures in both HD and Ultra HD. The Ultimatte 12 is also compatible with most SD, HD and Ultra HD equipment, so it can be used with existing cameras.

With Ultimatte 12, users can create lifelike composites and place talent into any scene, working with both fixed cameras and static backgrounds or automated virtual set systems. It also enables on-set previs in television and film production, letting actors and directors see the virtual sets they’re interacting with while shooting against a green screen.

Here are a few more Ultimatte 12 features:

  • For augmented reality, on-air talent typically interacts with glass-like computer-generated charts, graphs, displays and other objects with colored translucency. Adding tinted, translucent objects is very difficult with a traditional keyer, and the results don’t look realistic. Ultimatte 12 addresses this with a new “realistic” layer compositing mode that can add tinted objects on top of the foreground image and key them correctly.
  • One-touch keying technology analyzes a scene and automatically sets more than 100 parameters, simplifying keying as long as the scene is well-lit and the cameras are properly white-balanced. With one-touch keying, operators can pull a key accurately and with minimum effort, freeing them to focus on the program with fewer distractions.
  • Ultimatte 12’s new image processing algorithms, large internal color space and automatic internal matte generation let users work on different parts of the image separately with a single keyer.
  • For color handling, Ultimatte 12 has new flare, edge and transition processing to remove backgrounds without affecting other colors. The improved flare algorithms can remove green tinting and spill from any object — even dark shadow areas or through transparent objects.
  • Ultimatte 12 is controlled via Ultimatte Smart Remote 4, a touch-screen remote device that connects via Ethernet. Up to eight Ultimatte 12 units can be daisy-chained together and connected to the same Smart Remote, with physical buttons for switching and controlling any attached Ultimatte 12.

Ultimatte 12 is now available from Blackmagic Design resellers.

Sony adds 36×24 full-frame camera to CineAlta line

Sony has introduced Venice, the company’s first full-frame digital motion picture camera system and the newest addition to its CineAlta lineup. It is designed to expand the filmmaker’s creative freedom through immersive, large-format, full-frame capture, producing filmic imagery with natural skin tones, elegant highlight handling and wide dynamic range.

Venice was officially unveiled on September 6 to American Society of Cinematographers (ASC) members and a range of other industry pros. Sony also screened the first footage shot with Venice: The Dig, a short film produced in anamorphic, written and directed by Joseph Kosinski and shot by Academy Award-winning cinematographer Claudio Miranda, ASC.

The new sensor.

“We really went back to the drawing board for this one,” says Peter Crithary, marketing manager, Sony Electronics. “It is our next-generation camera system, a ground-up development initiative encompassing a completely new image sensor. We carefully considered key aspects such as form factor, ergonomics, build quality, ease of use, a refined picture and painterly look — with a simple, established workflow. We worked in close collaboration with film industry professionals. We also considered the longer-term strategy by designing a user-interchangeable sensor that is as quick and simple to swap as removing four screws, and can accommodate different shooting scenarios as the need arises.”

Venice features a newly developed 36x24mm full-frame sensor to meet the demands of feature filmmaking. Full frame offers the advantages of compatibility with a wide range of lenses, including anamorphic, Super 35mm, spherical and full-frame PL mount lenses for a greater range of expressive freedom with shallow depth of field. The lens mount can also be changed to support E-mount lenses for shooting situations that require smaller, lighter and wider lenses. User-selectable areas of the image sensor allow shooting in Super 35mm 4-perf. Future firmware upgrades are planned to allow the camera to handle 36mm-wide 6K resolution. Fast image scan technology minimizes “Jello” effects.

A new color management system with an ultra-wide color gamut gives users more control and greater flexibility in working with images during grading and post production. Venice also has more than 15 stops of latitude to handle challenging lighting situations from low light to harsh sunlight with a gentle roll-off handling of highlights.

Venice uses Sony’s 16-bit RAW/X-OCN via the AXS-R7 recorder, and 10-bit XAVC workflows. The new camera is also compatible with current and upcoming CineAlta camera hardware accessories, including the DVF-EL200 full-HD OLED viewfinder, AXS-R7 recorder, AXS-CR1 and high-speed Thunderbolt-enabled AXS-AR1 card reader, using established AXS and SxS memory card formats.

Venice has a fully modular and intuitive design, with functionality refined to support simple and efficient on-location operation. It is the film industry’s first camera with a built-in 8-stage glass ND filter system, making the shooting process efficient and streamlining camera setup. The camera is designed for easy operation, with an intuitive control panel placed on the assistant and operator sides of the camera. A 24V power supply input/output and LEMO connector allow use of many standard camera accessories designed for harsh environments.

Users can customize Venice by enabling the features needed, matched to their individual production requirements. Optional licenses will be available in permanent, monthly and weekly durations to expand the camera’s capabilities, with new features including 4K anamorphic and full frame sold separately.

The Venice CineAlta digital motion picture camera system is scheduled to be available in February 2018.